comparison src/Coinductive.v @ 346:5d85de065540
Pass through Coinductive up to PL example
author | Adam Chlipala <adam@chlipala.net> |
date | Sun, 23 Oct 2011 13:33:46 -0400 |
parents | d5787b70cf48 |
children | 36a08cad9245 |
345:518c8994a715 | 346:5d85de065540 |
---|---|
1 (* Copyright (c) 2008-2010, Adam Chlipala | 1 (* Copyright (c) 2008-2011, Adam Chlipala |
2 * | 2 * |
3 * This work is licensed under a | 3 * This work is licensed under a |
4 * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 | 4 * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 |
5 * Unported License. | 5 * Unported License. |
6 * The license text is available at: | 6 * The license text is available at: |
16 (* end hide *) | 16 (* end hide *) |
17 | 17 |
18 | 18 |
19 (** %\chapter{Infinite Data and Proofs}% *) | 19 (** %\chapter{Infinite Data and Proofs}% *) |
20 | 20 |
21 (** In lazy functional programming languages like Haskell, infinite data structures are everywhere. Infinite lists and more exotic datatypes provide convenient abstractions for communication between parts of a program. Achieving similar convenience without infinite lazy structures would, in many cases, require acrobatic inversions of control flow. | 21 (** In lazy functional programming languages like %\index{Haskell}%Haskell, infinite data structures are everywhere. Infinite lists and more exotic datatypes provide convenient abstractions for communication between parts of a program. Achieving similar convenience without infinite lazy structures would, in many cases, require acrobatic inversions of control flow. |
22 | 22 |
23 Laziness is easy to implement in Haskell, where all the definitions in a program may be thought of as mutually recursive. In such an unconstrained setting, it is easy to implement an infinite loop when you really meant to build an infinite list, where any finite prefix of the list should be forceable in finite time. Haskell programmers learn how to avoid such slip-ups. In Coq, such a laissez-faire policy is not good enough. | 23 %\index{laziness}%Laziness is easy to implement in Haskell, where all the definitions in a program may be thought of as mutually recursive. In such an unconstrained setting, it is easy to implement an infinite loop when you really meant to build an infinite list, where any finite prefix of the list should be forceable in finite time. Haskell programmers learn how to avoid such slip-ups. In Coq, such a laissez-faire policy is not good enough. |
24 | 24 |
25 We spent some time in the last chapter discussing the Curry-Howard isomorphism, where proofs are identified with functional programs. In such a setting, infinite loops, intended or otherwise, are disastrous. If Coq allowed the full breadth of definitions that Haskell did, we could code up an infinite loop and use it to prove any proposition vacuously. That is, the addition of general recursion would make CIC %\textit{%#<i>#inconsistent#</i>#%}%. For an arbitrary proposition [P], we could write: | 25 We spent some time in the last chapter discussing the %\index{Curry-Howard correspondence}%Curry-Howard isomorphism, where proofs are identified with functional programs. In such a setting, infinite loops, intended or otherwise, are disastrous. If Coq allowed the full breadth of definitions that Haskell did, we could code up an infinite loop and use it to prove any proposition vacuously. That is, the addition of general recursion would make CIC %\textit{%#<i>#inconsistent#</i>#%}%. For an arbitrary proposition [P], we could write: |
26 | |
27 [[ | 26 [[ |
28 Fixpoint bad (u : unit) : P := bad u. | 27 Fixpoint bad (u : unit) : P := bad u. |
29 | |
30 ]] | 28 ]] |
31 | 29 |
32 This would leave us with [bad tt] as a proof of [P]. | 30 This would leave us with [bad tt] as a proof of [P]. |
33 | 31 |
34 There are also algorithmic considerations that make universal termination very desirable. We have seen how tactics like [reflexivity] compare terms up to equivalence under computational rules. Calls to recursive, pattern-matching functions are simplified automatically, with no need for explicit proof steps. It would be very hard to hold onto that kind of benefit if it became possible to write non-terminating programs; we would be running smack into the halting problem. | 32 There are also algorithmic considerations that make universal termination very desirable. We have seen how tactics like [reflexivity] compare terms up to equivalence under computational rules. Calls to recursive, pattern-matching functions are simplified automatically, with no need for explicit proof steps. It would be very hard to hold onto that kind of benefit if it became possible to write non-terminating programs; we would be running smack into the halting problem. |
35 | 33 |
36 One solution is to use types to contain the possibility of non-termination. For instance, we can create a %``%#"#non-termination monad,#"#%''% inside which we must write all of our general-recursive programs. This is a heavyweight solution, and so we would like to avoid it whenever possible. | 34 One solution is to use types to contain the possibility of non-termination. For instance, we can create a %``%#"#non-termination monad,#"#%''% inside which we must write all of our general-recursive programs. This is a heavyweight solution, and so we would like to avoid it whenever possible. |
37 | 35 |
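For concreteness, one well-known shape such a monad can take is the co-inductive delay (partiality) monad, sketched below purely as an illustration. It is not part of this file; the names [delay], [Now], [Later], and [diverge] are invented here, the sketch uses the [CoInductive] mechanism introduced just below, and it assumes the implicit-argument setting that the rest of the file appears to rely on.

CoInductive delay (A : Type) : Type :=
| Now : A -> delay A
| Later : delay A -> delay A.

(* A looping computation becomes an ordinary value, an infinite tower of
   [Later] constructors, rather than a bogus proof of an arbitrary [P]. *)
CoFixpoint diverge (A : Type) : delay A := Later (diverge A).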
38 Luckily, Coq has special support for a class of lazy data structures that happens to contain most examples found in Haskell. That mechanism, %\textit{%#<i>#co-inductive types#</i>#%}%, is the subject of this chapter. *) | 36 Luckily, Coq has special support for a class of lazy data structures that happens to contain most examples found in Haskell. That mechanism, %\index{co-inductive types}\textit{%#<i>#co-inductive types#</i>#%}%, is the subject of this chapter. *) |
39 | 37 |
40 | 38 |
41 (** * Computing with Infinite Data *) | 39 (** * Computing with Infinite Data *) |
42 | 40 |
43 (** Let us begin with the most basic type of infinite data, %\textit{%#<i>#streams#</i>#%}%, or lazy lists. *) | 41 (** Let us begin with the most basic type of infinite data, %\textit{%#<i>#streams#</i>#%}%, or lazy lists.%\index{Vernacular commands!CoInductive}% *) |
44 | 42 |
45 Section stream. | 43 Section stream. |
46 Variable A : Set. | 44 Variable A : Set. |
47 | 45 |
48 CoInductive stream : Set := | 46 CoInductive stream : Set := |
49 | Cons : A -> stream -> stream. | 47 | Cons : A -> stream -> stream. |
50 End stream. | 48 End stream. |
51 | 49 |
52 (** The definition is surprisingly simple. Starting from the definition of [list], we just need to change the keyword [Inductive] to [CoInductive]. We could have left a [Nil] constructor in our definition, but we will leave it out to force all of our streams to be infinite. | 50 (** The definition is surprisingly simple. Starting from the definition of [list], we just need to change the keyword [Inductive] to [CoInductive]. We could have left a [Nil] constructor in our definition, but we will leave it out to force all of our streams to be infinite. |
53 | 51 |
54 How do we write down a stream constant? Obviously simple application of constructors is not good enough, since we could only denote finite objects that way. Rather, whereas recursive definitions were necessary to %\textit{%#<i>#use#</i>#%}% values of recursive inductive types effectively, here we find that we need %\textit{%#<i>#co-recursive definitions#</i>#%}% to %\textit{%#<i>#build#</i>#%}% values of co-inductive types effectively. | 52 How do we write down a stream constant? Obviously simple application of constructors is not good enough, since we could only denote finite objects that way. Rather, whereas recursive definitions were necessary to %\textit{%#<i>#use#</i>#%}% values of recursive inductive types effectively, here we find that we need %\index{co-recursive definitions}\textit{%#<i>#co-recursive definitions#</i>#%}% to %\textit{%#<i>#build#</i>#%}% values of co-inductive types effectively. |
55 | 53 |
56 We can define a stream consisting only of zeroes. *) | 54 We can define a stream consisting only of zeroes.%\index{Vernacular commands!CoFixpoint}% *) |
57 | 55 |
58 CoFixpoint zeroes : stream nat := Cons 0 zeroes. | 56 CoFixpoint zeroes : stream nat := Cons 0 zeroes. |
59 | 57 |
58 (* EX: Define a stream that alternates between [true] and [false]. *) | |
59 (* begin thide *) | |
60 | |
60 (** We can also define a stream that alternates between [true] and [false]. *) | 61 (** We can also define a stream that alternates between [true] and [false]. *) |
61 | 62 |
62 CoFixpoint trues : stream bool := Cons true falses | 63 CoFixpoint trues_falses : stream bool := Cons true falses_trues |
63 with falses : stream bool := Cons false trues. | 64 with falses_trues : stream bool := Cons false trues_falses. |
65 (* end thide *) | |
64 | 66 |
65 (** Co-inductive values are fair game as arguments to recursive functions, and we can use that fact to write a function to take a finite approximation of a stream. *) | 67 (** Co-inductive values are fair game as arguments to recursive functions, and we can use that fact to write a function to take a finite approximation of a stream. *) |
68 | |
69 (* EX: Define a function to calculate a finite approximation of a stream, to a particular length. *) | |
70 (* begin thide *) | |
66 | 71 |
67 Fixpoint approx A (s : stream A) (n : nat) : list A := | 72 Fixpoint approx A (s : stream A) (n : nat) : list A := |
68 match n with | 73 match n with |
69 | O => nil | 74 | O => nil |
70 | S n' => | 75 | S n' => |
78 = 0 :: 0 :: 0 :: 0 :: 0 :: 0 :: 0 :: 0 :: 0 :: 0 :: nil | 83 = 0 :: 0 :: 0 :: 0 :: 0 :: 0 :: 0 :: 0 :: 0 :: 0 :: nil |
79 : list nat | 84 : list nat |
80 ]] | 85 ]] |
81 *) | 86 *) |
82 | 87 |
83 Eval simpl in approx trues 10. | 88 Eval simpl in approx trues_falses 10. |
84 (** %\vspace{-.15in}% [[ | 89 (** %\vspace{-.15in}% [[ |
85 = true | 90 = true |
86 :: false | 91 :: false |
87 :: true | 92 :: true |
88 :: false | 93 :: false |
89 :: true :: false :: true :: false :: true :: false :: nil | 94 :: true :: false :: true :: false :: true :: false :: nil |
90 : list bool | 95 : list bool |
91 | 96 ]] |
92 ]] | 97 *) |
93 | 98 |
94 So far, it looks like co-inductive types might be a magic bullet, allowing us to import all of the Haskeller's usual tricks. However, there are important restrictions that are dual to the restrictions on the use of inductive types. Fixpoints %\textit{%#<i>#consume#</i>#%}% values of inductive types, with restrictions on which %\textit{%#<i>#arguments#</i>#%}% may be passed in recursive calls. Dually, co-fixpoints %\textit{%#<i>#produce#</i>#%}% values of co-inductive types, with restrictions on what may be done with the %\textit{%#<i>#results#</i>#%}% of co-recursive calls. | 99 (* end thide *) |
95 | 100 |
96 The restriction for co-inductive types shows up as the %\textit{%#<i>#guardedness condition#</i>#%}%, and it can be broken into two parts. First, consider this stream definition, which would be legal in Haskell. | 101 (** So far, it looks like co-inductive types might be a magic bullet, allowing us to import all of the Haskeller's usual tricks. However, there are important restrictions that are dual to the restrictions on the use of inductive types. Fixpoints %\textit{%#<i>#consume#</i>#%}% values of inductive types, with restrictions on which %\textit{%#<i>#arguments#</i>#%}% may be passed in recursive calls. Dually, co-fixpoints %\textit{%#<i>#produce#</i>#%}% values of co-inductive types, with restrictions on what may be done with the %\textit{%#<i>#results#</i>#%}% of co-recursive calls. |
97 | 102 |
103 The restriction for co-inductive types shows up as the %\index{guardedness condition}\textit{%#<i>#guardedness condition#</i>#%}%, and it can be broken into two parts. First, consider this stream definition, which would be legal in Haskell. | |
98 [[ | 104 [[ |
99 CoFixpoint looper : stream nat := looper. | 105 CoFixpoint looper : stream nat := looper. |
100 | 106 ]] |
107 | |
108 << | |
101 Error: | 109 Error: |
102 Recursive definition of looper is ill-formed. | 110 Recursive definition of looper is ill-formed. |
103 In environment | 111 In environment |
104 looper : stream nat | 112 looper : stream nat |
105 | 113 |
106 unguarded recursive call in "looper" | 114 unguarded recursive call in "looper" |
107 | 115 >> |
108 ]] | |
109 | 116 |
110 The rule we have run afoul of here is that %\textit{%#<i>#every co-recursive call must be guarded by a constructor#</i>#%}%; that is, every co-recursive call must be a direct argument to a constructor of the co-inductive type we are generating. It is a good thing that this rule is enforced. If the definition of [looper] were accepted, our [approx] function would run forever when passed [looper], and we would have fallen into inconsistency. | 117 The rule we have run afoul of here is that %\textit{%#<i>#every co-recursive call must be guarded by a constructor#</i>#%}%; that is, every co-recursive call must be a direct argument to a constructor of the co-inductive type we are generating. It is a good thing that this rule is enforced. If the definition of [looper] were accepted, our [approx] function would run forever when passed [looper], and we would have fallen into inconsistency. |
111 | 118 |
112 The second rule of guardedness is easiest to see by first introducing a more complicated, but legal, co-fixpoint. *) | 119 Some familiar functions are easy to write in co-recursive fashion. *) |
113 | 120 |
114 Section map. | 121 Section map. |
115 Variables A B : Set. | 122 Variables A B : Set. |
116 Variable f : A -> B. | 123 Variable f : A -> B. |
117 | 124 |
119 match s with | 126 match s with |
120 | Cons h t => Cons (f h) (map t) | 127 | Cons h t => Cons (f h) (map t) |
121 end. | 128 end. |
122 End map. | 129 End map. |
123 | 130 |
124 (** This code is a literal copy of that for the list [map] function, with the [Nil] case removed and [Fixpoint] changed to [CoFixpoint]. Many other standard functions on lazy data structures can be implemented just as easily. Some, like [filter], cannot be implemented. Since the predicate passed to [filter] may reject every element of the stream, we cannot satisfy even the first guardedness condition. | 131 (** This code is a literal copy of that for the list [map] function, with the [Nil] case removed and [Fixpoint] changed to [CoFixpoint]. Many other standard functions on lazy data structures can be implemented just as easily. Some, like [filter], cannot be implemented. Since the predicate passed to [filter] may reject every element of the stream, we cannot satisfy the guardedness condition. |
125 | 132 |
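Since [filter] is the canonical non-example, here is a hypothetical attempt at it, kept inside a comment so that it changes nothing; the names are invented for this sketch, and it assumes a section binding [A : Set] as in the surrounding code. Coq would reject the definition: in the [false] branch the co-recursive call is not an argument to a constructor, and no syntactic repair can help, because the predicate might reject every element.

(* CoFixpoint filter (p : A -> bool) (s : stream A) : stream A :=
     match s with
       | Cons h t => if p h then Cons h (filter p t) else filter p t
     end. *)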
126 The second condition is subtler. To illustrate it, we start off with another co-recursive function definition that %\textit{%#<i>#is#</i>#%}% legal. The function [interleave] takes two streams and produces a new stream that alternates between their elements. *) | 133 The implications of the condition can be subtle. To illustrate how, we start off with another co-recursive function definition that %\textit{%#<i>#is#</i>#%}% legal. The function [interleave] takes two streams and produces a new stream that alternates between their elements. *) |
127 | 134 |
128 Section interleave. | 135 Section interleave. |
129 Variable A : Set. | 136 Variable A : Set. |
130 | 137 |
131 CoFixpoint interleave (s1 s2 : stream A) : stream A := | 138 CoFixpoint interleave (s1 s2 : stream A) : stream A := |
141 Variable f : A -> B. | 148 Variable f : A -> B. |
142 (* begin thide *) | 149 (* begin thide *) |
143 (** [[ | 150 (** [[ |
144 CoFixpoint map' (s : stream A) : stream B := | 151 CoFixpoint map' (s : stream A) : stream B := |
145 match s with | 152 match s with |
146 | Cons h t => interleave (Cons (f h) (map' t) (Cons (f h) (map' t)) | 153 | Cons h t => interleave (Cons (f h) (map' t)) (Cons (f h) (map' t)) |
147 end. | 154 end. |
148 | |
149 ]] | 155 ]] |
150 | 156 |
151 We get another error message about an unguarded recursive call. This is because we are violating the second guardedness condition, which says that, not only must co-recursive calls be arguments to constructors, there must also %\textit{%#<i>#not be anything but [match]es and calls to constructors of the same co-inductive type#</i>#%}% wrapped around these immediate uses of co-recursive calls. The actual implemented rule for guardedness is a little more lenient than what we have just stated, but you can count on the illegality of any exception that would enhance the expressive power of co-recursion. | 157 We get another error message about an unguarded recursive call. *) |
152 | 158 |
153 Why enforce a rule like this? Imagine that, instead of [interleave], we had called some other, less well-behaved function on streams. Perhaps this other function might be defined mutually with [map']. It might deconstruct its first argument, retrieving [map' t] from within [Cons (f h) (map' t)]. Next it might try a [match] on this retrieved value, which amounts to deconstructing [map' t]. To figure out how this [match] turns out, we need to know the top-level structure of [map' t], but this is exactly what we started out trying to determine! We run into a loop in the evaluation process, and we have reached a witness of inconsistency if we are evaluating [approx (map' s) 1] for any [s]. *) | 159 End map'. |
160 | |
161 (** What is going wrong here? Imagine that, instead of [interleave], we had called some other, less well-behaved function on streams. Here is one simpler example demonstrating the essential pitfall. We start by defining a standard function for taking the tail of a stream. Since streams are infinite, this operation is total. *) | |
162 | |
163 Definition tl A (s : stream A) : stream A := | |
164 match s with | |
165 | Cons _ s' => s' | |
166 end. | |
167 | |
168 (** Coq rejects the following definition that uses [tl]. | |
169 [[ | |
170 CoFixpoint bad : stream nat := tl (Cons 0 bad). | |
171 ]] | |
172 | |
173 Imagine that Coq had accepted our definition, and consider how we might evaluate [approx bad 1]. We would be trying to calculate the first element in the stream [bad]. However, it is not hard to see that the definition of [bad] %``%#"#begs the question#"#%''%: unfolding the definition of [tl], we see that we essentially say %``%#"#define [bad] to equal itself#"#%''%! Of course such an equation admits no single well-defined solution, which does not fit well with the determinism of Gallina reduction. | |
174 | |
175 Since Coq can be considered to check definitions after inlining and simplification of previously defined identifiers, the basic guardedness condition rules out our definition of [bad]. Such an inlining reduces [bad] to: | |
176 [[ | |
177 CoFixpoint bad : stream nat := bad. | |
178 ]] | |
179 This is the same looping definition we rejected earlier. A similar inlining process reveals the way that Coq saw our failed definition of [map']: | |
180 [[ | |
181 CoFixpoint map' (s : stream A) : stream B := | |
182 match s with | |
183 | Cons h t => Cons (f h) (Cons (f h) (interleave (map' t) (map' t))) | |
184 end. | |
185 ]] | |
186 Clearly in this case the [map'] calls are not immediate arguments to constructors, so we violate the guardedness condition. *) | |
154 (* end thide *) | 187 (* end thide *) |
155 | 188 |
156 End map'. | 189 (** A more interesting question is why that condition is the right one. We can make an intuitive argument that the original [map'] definition is perfectly reasonable and denotes a well-understood transformation on streams, such that every output would behave properly with [approx]. The guardedness condition is an example of a syntactic check for %\index{productivity}\emph{%#<i>#productivity#</i>#%}% of co-recursive definitions. A productive definition can be thought of as one whose outputs can be forced in finite time to any finite approximation level, as with [approx]. If we replaced the guardedness condition with more involved checks, we might be able to detect and allow a broader range of productive definitions. However, mistakes in these checks could cause inconsistency, and programmers would need to understand the new, more complex checks. Coq's design strikes a balance between consistency and simplicity with its choice of guard condition, though we can imagine other worthwhile balances being struck, too. *) |
157 | 190 |
158 | 191 |
159 (** * Infinite Proofs *) | 192 (** * Infinite Proofs *) |
160 | 193 |
161 (** Let us say we want to give two different definitions of a stream of all ones, and then we want to prove that they are equivalent. *) | 194 (** Let us say we want to give two different definitions of a stream of all ones, and then we want to prove that they are equivalent. *) |
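The two definitions themselves fall in a hunk that this comparison does not display. Judging from the goal output further below, which mentions [ones] and a [cofix map] applied to [zeroes], they are presumably along these lines, reproduced here only as a reading aid rather than as the file's literal text:

CoFixpoint ones : stream nat := Cons 1 ones.
Definition ones' := map S zeroes.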
172 | 205 |
173 Abort. | 206 Abort. |
174 | 207 |
175 (** Co-inductive datatypes make sense by analogy from Haskell. What we need now is a %\textit{%#<i>#co-inductive proposition#</i>#%}%. That is, we want to define a proposition whose proofs may be infinite, subject to the guardedness condition. The idea of infinite proofs does not show up in usual mathematics, but it can be very useful (unsurprisingly) for reasoning about infinite data structures. Besides examples from Haskell, infinite data and proofs will also turn out to be useful for modelling inherently infinite mathematical objects, like program executions. | 208 (** Co-inductive datatypes make sense by analogy from Haskell. What we need now is a %\textit{%#<i>#co-inductive proposition#</i>#%}%. That is, we want to define a proposition whose proofs may be infinite, subject to the guardedness condition. The idea of infinite proofs does not show up in usual mathematics, but it can be very useful (unsurprisingly) for reasoning about infinite data structures. Besides examples from Haskell, infinite data and proofs will also turn out to be useful for modelling inherently infinite mathematical objects, like program executions. |
176 | 209 |
177 We are ready for our first co-inductive predicate. *) | 210 We are ready for our first %\index{co-inductive predicates}%co-inductive predicate. *) |
178 | 211 |
179 Section stream_eq. | 212 Section stream_eq. |
180 Variable A : Set. | 213 Variable A : Set. |
181 | 214 |
182 CoInductive stream_eq : stream A -> stream A -> Prop := | 215 CoInductive stream_eq : stream A -> stream A -> Prop := |
203 It looks like this proof might be easier than we expected! *) | 236 It looks like this proof might be easier than we expected! *) |
204 | 237 |
205 assumption. | 238 assumption. |
206 (** [[ | 239 (** [[ |
207 Proof completed. | 240 Proof completed. |
208 | |
209 ]] | 241 ]] |
210 | 242 |
211 Unfortunately, we are due for some disappointment in our victory lap. | 243 Unfortunately, we are due for some disappointment in our victory lap. |
212 | |
213 [[ | 244 [[ |
214 Qed. | 245 Qed. |
215 | 246 ]] |
247 | |
248 << | |
216 Error: | 249 Error: |
217 Recursive definition of ones_eq is ill-formed. | 250 Recursive definition of ones_eq is ill-formed. |
218 | 251 |
219 In environment | 252 In environment |
220 ones_eq : stream_eq ones ones' | 253 ones_eq : stream_eq ones ones' |
221 | 254 |
222 unguarded recursive call in "ones_eq" | 255 unguarded recursive call in "ones_eq" |
223 | 256 >> |
224 ]] | |
225 | 257 |
226 Via the Curry-Howard correspondence, the same guardedness condition applies to our co-inductive proofs as to our co-inductive data structures. We should be grateful that this proof is rejected, because, if it were not, the same proof structure could be used to prove any co-inductive theorem vacuously, by direct appeal to itself! | 258 Via the Curry-Howard correspondence, the same guardedness condition applies to our co-inductive proofs as to our co-inductive data structures. We should be grateful that this proof is rejected, because, if it were not, the same proof structure could be used to prove any co-inductive theorem vacuously, by direct appeal to itself! |
227 | 259 |
228 Thinking about how Coq would generate a proof term from the proof script above, we see that the problem is that we are violating the first part of the guardedness condition. During our proofs, Coq can help us check whether we have yet gone wrong in this way. We can run the command [Guarded] in any context to see if it is possible to finish the proof in a way that will yield a properly guarded proof term. | 260 Thinking about how Coq would generate a proof term from the proof script above, we see that the problem is that we are violating the guardedness condition. During our proofs, Coq can help us check whether we have yet gone wrong in this way. We can run the command [Guarded] in any context to see if it is possible to finish the proof in a way that will yield a properly guarded proof term.%\index{Vernacular commands!Guarded}% |
229 | |
230 [[ | 261 [[ |
231 Guarded. | 262 Guarded. |
232 | |
233 ]] | 263 ]] |
234 | 264 |
235 Running [Guarded] here gives us the same error message that we got when we tried to run [Qed]. In larger proofs, [Guarded] can be helpful in detecting problems %\textit{%#<i>#before#</i>#%}% we think we are ready to run [Qed]. | 265 Running [Guarded] here gives us the same error message that we got when we tried to run [Qed]. In larger proofs, [Guarded] can be helpful in detecting problems %\textit{%#<i>#before#</i>#%}% we think we are ready to run [Qed]. |
236 | 266 |
237 We need to start the co-induction by applying [stream_eq]'s constructor. To do that, we need to know that both arguments to the predicate are [Cons]es. Informally, this is trivial, but [simpl] is not able to help us. *) | 267 We need to start the co-induction by applying [stream_eq]'s constructor. To do that, we need to know that both arguments to the predicate are [Cons]es. Informally, this is trivial, but [simpl] is not able to help us. *) |
291 | Cons h t => Cons (S h) (map t) | 321 | Cons h t => Cons (S h) (map t) |
292 end) zeroes)) | 322 end) zeroes)) |
293 | 323 |
294 ]] | 324 ]] |
295 | 325 |
296 Since we have exposed the [Cons] structure of each stream, we can apply the constructor of [stream_eq]. *) | 326 Note the [cofix] notation for anonymous co-recursion, which is analogous to the [fix] notation we have already seen for recursion. Since we have exposed the [Cons] structure of each stream, we can apply the constructor of [stream_eq]. *) |
297 | 327 |
298 constructor. | 328 constructor. |
299 (** [[ | 329 (** [[ |
300 ones_eq : stream_eq ones ones' | 330 ones_eq : stream_eq ones ones' |
301 ============================ | 331 ============================ |
322 | 352 |
323 Theorem ones_eq' : stream_eq ones ones'. | 353 Theorem ones_eq' : stream_eq ones ones'. |
324 cofix; crush. | 354 cofix; crush. |
325 (** [[ | 355 (** [[ |
326 Guarded. | 356 Guarded. |
327 | |
328 ]] | 357 ]] |
329 *) | 358 *) |
330 Abort. | 359 Abort. |
360 | |
361 (** The standard [auto] machinery sees that our goal matches an assumption and so applies that assumption, even though this violates guardedness. One usually starts a proof like this by [destruct]ing some parameter and running a custom tactic to figure out the first proof rule to apply for each case. Alternatively, there are tricks that can be played with %``%#"#hiding#"#%''% the co-inductive hypothesis. | |
362 | |
363 %\medskip% | |
364 | |
365 Must we always be cautious with automation in proofs by co-induction? Induction seems to have dual versions of the same pitfalls inherent in it, and yet we avoid those pitfalls by encapsulating safe Curry-Howard recursion schemes inside named induction principles. It turns out that we can usually do the same with %\index{co-induction principles}\emph{%#<i>#co-induction principles#</i>#%}%. Let us take that tack here, so that we can arrive at an [induction x; crush]-style proof for [ones_eq']. | |
366 | |
367 An induction principle is parameterized over a predicate characterizing what we mean to prove, %\emph{%#<i>#as a function of the inductive fact that we already know#</i>#%}%. Dually, a co-induction principle ought to be parameterized over a predicate characterizing what we mean to prove, %\emph{%#<i>#as a function of the arguments to the co-inductive predicate that we are trying to prove#</i>#%}%. | |
368 | |
369 To state a useful principle for [stream_eq], it will be useful first to define the stream head function. *) | |
370 | |
371 Definition hd A (s : stream A) : A := | |
372 match s with | |
373 | Cons x _ => x | |
374 end. | |
375 | |
376 (** Now we enter a section for the co-induction principle, based on %\index{Park's principle}%Park's principle as introduced in a tutorial by Gim%\'%enez%~\cite{IT}%. *) | |
377 | |
378 Section stream_eq_coind. | |
379 Variable A : Set. | |
380 Variable R : stream A -> stream A -> Prop. | |
381 (** This relation generalizes the theorem we want to prove, characterizing exactly which two arguments to [stream_eq] we want to consider. *) | |
382 | |
383 Hypothesis Cons_case_hd : forall s1 s2, R s1 s2 -> hd s1 = hd s2. | |
384 Hypothesis Cons_case_tl : forall s1 s2, R s1 s2 -> R (tl s1) (tl s2). | |
385 (** Two hypotheses characterize what makes a good choice of [R]: it enforces equality of stream heads, and it is %``%#"#hereditary#"#%''% in the sense that an [R] stream pair passes on %``%#"#[R]-ness#"#%''% to its tails. An established technical term for such a relation is %\index{bisimulation}\emph{%#<i>#bisimulation#</i>#%}%. *) | |
386 | |
387 (** Now it is straightforward to prove the principle, which says that any stream pair in [R] is equal. The reader may wish to step through the proof script to see what is going on. *) | |
388 Theorem stream_eq_coind : forall s1 s2, R s1 s2 -> stream_eq s1 s2. | |
389 cofix; destruct s1; destruct s2; intro. | |
390 generalize (Cons_case_hd H); intro Heq; simpl in Heq; rewrite Heq. | |
391 constructor. | |
392 apply stream_eq_coind. | |
393 apply (Cons_case_tl H). | |
394 Qed. | |
395 End stream_eq_coind. | |
396 | |
397 (** To see why this proof is guarded, we can print it and verify that the one co-recursive call is an immediate argument to a constructor. *) | |
398 Print stream_eq_coind. | |
399 | |
400 (** We omit the output and proceed to proving [ones_eq''] again. The only bit of ingenuity is in choosing [R], and in this case the most restrictive predicate works. *) | |
401 | |
402 Theorem ones_eq'' : stream_eq ones ones'. | |
403 apply (stream_eq_coind (fun s1 s2 => s1 = ones /\ s2 = ones')); crush. | |
404 Qed. | |
405 | |
406 (** Note that this proof achieves the proper reduction behavior via [hd] and [tl], rather than [frob] as in the last proof. All three functions pattern match on their arguments, catalyzing computation steps. | |
407 | |
408 Compared to the inductive proofs that we are used to, it still seems unsatisfactory that we had to write out a choice of [R] in the last proof. An alternative is to capture a common pattern of co-recursion in a more specialized co-induction principle. For the current example, that pattern is: prove [stream_eq s1 s2] where [s1] and [s2] are defined as their own tails. *) | |
409 | |
410 Section stream_eq_loop. | |
411 Variable A : Set. | |
412 Variables s1 s2 : stream A. | |
413 | |
414 Hypothesis Cons_case_hd : hd s1 = hd s2. | |
415 Hypothesis loop1 : tl s1 = s1. | |
416 Hypothesis loop2 : tl s2 = s2. | |
417 | |
418 (** The proof of the principle includes a choice of [R], so that we no longer need to make such choices thereafter. *) | |
419 | |
420 Theorem stream_eq_loop : stream_eq s1 s2. | |
421 apply (stream_eq_coind (fun s1' s2' => s1' = s1 /\ s2' = s2)); crush. | |
422 Qed. | |
423 End stream_eq_loop. | |
424 | |
425 Theorem ones_eq''' : stream_eq ones ones'. | |
426 apply stream_eq_loop; crush. | |
427 Qed. | |
331 (* end thide *) | 428 (* end thide *) |
332 | 429 |
(** The standard [auto] machinery sees that our goal matches an assumption and so applies that assumption, even though this violates guardedness. One usually starts a proof like this by [destruct]ing some parameter and running a custom tactic to figure out the first proof rule to apply for each case. Alternatively, there are tricks that can be played with %``%#"#hiding#"#%''% the co-inductive hypothesis. *) | 430 (** Let us put [stream_eq_coind] through its paces a bit more, considering two different ways to compute infinite streams of all factorial values. First, we import the [fact] factorial function from the standard library. *) |
431 | |
432 Require Import Arith. | |
433 Print fact. | |
434 (** %\vspace{-.15in}%[[ | |
435 fact = | |
436 fix fact (n : nat) : nat := | |
437 match n with | |
438 | 0 => 1 | |
439 | S n0 => S n0 * fact n0 | |
440 end | |
441 : nat -> nat | |
442 ]] | |
443 *) | |
444 | |
445 (** The simplest way to compute the factorial stream involves calling [fact] afresh at each position. *) | |
446 | |
447 CoFixpoint fact_slow' (n : nat) := Cons (fact n) (fact_slow' (S n)). | |
448 Definition fact_slow := fact_slow' 1. | |
449 | |
450 (** A more clever, optimized method maintains an accumulator of the previous factorial, so that each new entry can be computed with a single multiplication. *) | |
451 | |
452 CoFixpoint fact_iter' (cur acc : nat) := Cons acc (fact_iter' (S cur) (acc * cur)). | |
453 Definition fact_iter := fact_iter' 2 1. | |
454 | |
455 (** We can verify that the streams are equal up to particular finite bounds. *) | |
456 | |
457 Eval simpl in approx fact_iter 5. | |
458 (** %\vspace{-.15in}%[[ | |
459 = 1 :: 2 :: 6 :: 24 :: 120 :: nil | |
460 : list nat | |
461 ]] | |
462 *) | |
463 Eval simpl in approx fact_slow 5. | |
464 (** %\vspace{-.15in}%[[ | |
465 = 1 :: 2 :: 6 :: 24 :: 120 :: nil | |
466 : list nat | |
467 ]] | |
468 | |
469 Now, to prove that the two versions are equivalent, it is helpful to prove (and add as a proof hint) a quick lemma about the computational behavior of [fact]. *) | |
470 | |
471 (* begin thide *) | |
472 Lemma fact_def : forall x n, | |
473 fact_iter' x (fact n * S n) = fact_iter' x (fact (S n)). | |
474 simpl; intros; f_equal; ring. | |
475 Qed. | |
476 | |
477 Hint Resolve fact_def. | |
478 | |
479 (** With the hint added, it is easy to prove an auxiliary lemma relating [fact_iter'] and [fact_slow']. The key bit of ingenuity is introduction of an existential quantifier for the shared parameter [n]. *) | |
480 | |
481 Lemma fact_eq' : forall n, stream_eq (fact_iter' (S n) (fact n)) (fact_slow' n). | |
482 intro; apply (stream_eq_coind (fun s1 s2 => exists n, s1 = fact_iter' (S n) (fact n) | |
483 /\ s2 = fact_slow' n)); crush; eauto. | |
484 Qed. | |
485 | |
486 (** The final theorem is a direct corollary of [fact_eq']. *) | |
487 | |
488 Theorem fact_eq : stream_eq fact_iter fact_slow. | |
489 apply fact_eq'. | |
490 Qed. | |
491 | |
492 (** As in the case of [ones_eq'], we may be unsatisfied that we needed to write down a choice of [R] that seems to duplicate information already present in a lemma statement. We can facilitate a simpler proof by defining a co-induction principle specialized to goals that begin with single universal quantifiers, and the strategy can be extended in a straightforward way to principles for other counts of quantifiers. (Our [stream_eq_loop] principle is effectively the instantiation of this technique to zero quantifiers.) *) | |
493 | |
494 Section stream_eq_onequant. | |
495 Variables A B : Set. | |
496 (** We have the types [A], the domain of the one quantifier; and [B], the type of data found in the streams. *) | |
497 | |
498 Variables f g : A -> stream B. | |
499 (** The two streams we compare must be of the forms [f x] and [g x], for some shared [x]. Note that this falls out naturally when [x] is a shared universally quantified variable in a lemma statement. *) | |
500 | |
501 Hypothesis Cons_case_hd : forall x, hd (f x) = hd (g x). | |
502 Hypothesis Cons_case_tl : forall x, exists y, tl (f x) = f y /\ tl (g x) = g y. | |
503 (** These conditions are inspired by the bisimulation requirements, with a more general version of the [R] choice we made for [fact_eq'] inlined into the hypotheses of [stream_eq_coind]. *) | |
504 | |
505 Theorem stream_eq_onequant : forall x, stream_eq (f x) (g x). | |
506 intro; apply (stream_eq_coind (fun s1 s2 => exists x, s1 = f x /\ s2 = g x)); crush; eauto. | |
507 Qed. | |
508 End stream_eq_onequant. | |
509 | |
510 Lemma fact_eq'' : forall n, stream_eq (fact_iter' (S n) (fact n)) (fact_slow' n). | |
511 apply stream_eq_onequant; crush; eauto. | |
512 Qed. | |
513 | |
514 (** We have arrived at one of our customary automated proofs, thanks to the new principle. *) | |
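To make concrete the earlier remark that this strategy extends to other quantifier counts, here is a hedged sketch of a two-quantifier analogue. It is not part of the original file; the section and hypothesis names are invented, and the closing [crush; eauto] is assumed to behave as it does for [stream_eq_onequant].

Section stream_eq_twoquant.
  Variables A1 A2 B : Set.
  Variables f g : A1 -> A2 -> stream B.

  Hypothesis Cons_case_hd : forall x y, hd (f x y) = hd (g x y).
  Hypothesis Cons_case_tl : forall x y, exists x', exists y',
    tl (f x y) = f x' y' /\ tl (g x y) = g x' y'.

  (* The same Park-style principle, with both quantified variables folded
     into the choice of [R]. *)
  Theorem stream_eq_twoquant : forall x y, stream_eq (f x y) (g x y).
    intros; apply (stream_eq_coind (fun s1 s2 => exists x, exists y,
      s1 = f x y /\ s2 = g x y)); crush; eauto.
  Qed.
End stream_eq_twoquant.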
515 (* end thide *) | |
334 | 516 |
335 | 517 |
336 (** * Simple Modeling of Non-Terminating Programs *) | 518 (** * Simple Modeling of Non-Terminating Programs *) |
337 | 519 |
338 (** We close the chapter with a quick motivating example for more complex uses of co-inductive types. We will define a co-inductive semantics for a simple assembly language and use that semantics to prove that assembly programs always run forever. This basic technique can be combined with typing judgments for more advanced languages, where some ill-typed programs can go wrong, but no well-typed programs go wrong. | 520 (** We close the chapter with a quick motivating example for more complex uses of co-inductive types. We will define a co-inductive semantics for a simple assembly language and use that semantics to prove that assembly programs always run forever. This basic technique can be combined with typing judgments for more advanced languages, where some ill-typed programs can go wrong, but no well-typed programs go wrong. |
376 match goal with | 558 match goal with |
377 | [ |- context[Jnz ?r _] ] => case_eq (rs r) | 559 | [ |- context[Jnz ?r _] ] => case_eq (rs r) |
378 end; eauto. | 560 end; eauto. |
379 Qed. | 561 Qed. |
380 | 562 |
381 (** We are ready to define a co-inductive judgment capturing the idea that a program runs forever. We define the judgment in terms of a program [prog], represented as a function mapping each label to the instruction found there. That is, to simplify the proof, we consider only infinitely-long programs. *) | 563 (** We are ready to define a co-inductive judgment capturing the idea that a program runs forever. We define the judgment in terms of a program [prog], represented as a function mapping each label to the instruction found there. That is, to simplify the proof, we consider only infinitely long programs. *) |
382 | 564 |
383 Section safe. | 565 Section safe. |
384 Variable prog : label -> instr. | 566 Variable prog : label -> instr. |
385 | 567 |
386 CoInductive safe : label -> regs -> Prop := | 568 CoInductive safe : label -> regs -> Prop := |
387 | Step : forall pc r pc' r', | 569 | Step : forall pc r pc' r', |
388 exec pc r (prog pc) pc' r' | 570 exec pc r (prog pc) pc' r' |
389 -> safe pc' r' | 571 -> safe pc' r' |
390 -> safe pc r. | 572 -> safe pc r. |
391 | 573 |
392 (** Now we can prove that any starting address and register bank lead to safe infinite execution. Recall that proofs of existentially-quantified formulas are all built with a single constructor of the inductive type [ex]. This means that we can use [destruct] to %``%#"#open up#"#%''% such proofs. In the proof below, we want to perform this opening up on an appropriate use of the [exec_total] lemma. This lemma's conclusion begins with two existential quantifiers, so we want to tell [destruct] that it should not stop at the first quantifier. We accomplish our goal by using an %\textit{%#<i>#intro pattern#</i>#%}% with [destruct]. Consult the Coq manual for the details of intro patterns; the specific pattern [[? [? ?]]] that we use here accomplishes our goal of destructing both quantifiers at once. *) | 574 (** Now we can prove that any starting address and register bank lead to safe infinite execution. Recall that proofs of existentially quantified formulas are all built with a single constructor of the inductive type [ex]. This means that we can use [destruct] to %``%#"#open up#"#%''% such proofs. In the proof below, we want to perform this opening up on an appropriate use of the [exec_total] lemma. This lemma's conclusion begins with two existential quantifiers, so we want to tell [destruct] that it should not stop at the first quantifier. We accomplish our goal by using an %\textit{%#<i>#intro pattern#</i>#%}% with [destruct]. Consult the Coq manual for the details of intro patterns; the specific pattern [[? [? ?]]] that we use here accomplishes our goal of destructing both quantifiers at once. *) |
393 | 575 |
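Because the statement of [exec_total] sits outside the hunks shown here, a tiny self-contained example may help make the intro pattern concrete; the example name is invented and it is not part of the original development.

(* The pattern [? [? ?]] peels off both existential quantifiers at once:
   the outer [?] names the first witness, and the inner [? ?] names the
   second witness and the remaining proof. *)
Example destruct_both_quantifiers :
  (exists n : nat, exists m : nat, n = m) -> True.
  intro H; destruct H as [? [? ?]]; constructor.
Qed.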
394 Theorem always_safe : forall pc rs, | 576 Theorem always_safe : forall pc rs, |
395 safe pc rs. | 577 safe pc rs. |
396 cofix; intros; | 578 cofix; intros; |
397 destruct (exec_total pc rs (prog pc)) as [? [? ?]]; | 579 destruct (exec_total pc rs (prog pc)) as [? [? ?]]; |