comparison src/Coinductive.v @ 398:05efde66559d

Get it working in Coq 8.4beta1; use nice coqdoc notation for italics
author Adam Chlipala <adam@chlipala.net>
date Wed, 06 Jun 2012 11:25:13 -0400
parents 3c941750c347
children df7cd24383ae
1 (* Copyright (c) 2008-2012, Adam Chlipala
2  *
3  * This work is licensed under a
4  * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
5  * Unported License.
6  * The license text is available at:
20
21 (** In lazy functional programming languages like %\index{Haskell}%Haskell, infinite data structures are everywhere%~\cite{whyfp}%. Infinite lists and more exotic datatypes provide convenient abstractions for communication between parts of a program. Achieving similar convenience without infinite lazy structures would, in many cases, require acrobatic inversions of control flow.
22
23 %\index{laziness}%Laziness is easy to implement in Haskell, where all the definitions in a program may be thought of as mutually recursive. In such an unconstrained setting, it is easy to implement an infinite loop when you really meant to build an infinite list, where any finite prefix of the list should be forceable in finite time. Haskell programmers learn how to avoid such slip-ups. In Coq, such a laissez-faire policy is not good enough.
24
25 We spent some time in the last chapter discussing the %\index{Curry-Howard correspondence}%Curry-Howard isomorphism, where proofs are identified with functional programs. In such a setting, infinite loops, intended or otherwise, are disastrous. If Coq allowed the full breadth of definitions that Haskell did, we could code up an infinite loop and use it to prove any proposition vacuously. That is, the addition of general recursion would make CIC _inconsistent_. For an arbitrary proposition [P], we could write:
26 [[
27 Fixpoint bad (u : unit) : P := bad u.
28 ]]
29
30 This would leave us with [bad tt] as a proof of [P].
31
32 There are also algorithmic considerations that make universal termination very desirable. We have seen how tactics like [reflexivity] compare terms up to equivalence under computational rules. Calls to recursive, pattern-matching functions are simplified automatically, with no need for explicit proof steps. It would be very hard to hold onto that kind of benefit if it became possible to write non-terminating programs; we would be running smack into the halting problem.
33
34 One solution is to use types to contain the possibility of non-termination. For instance, we can create a %``%#"#non-termination monad,#"#%''% inside which we must write all of our general-recursive programs; several such approaches are surveyed in Chapter 7. This is a heavyweight solution, and so we would like to avoid it whenever possible.
35
36 Luckily, Coq has special support for a class of lazy data structures that happens to contain most examples found in Haskell. That mechanism, %\index{co-inductive types}%_co-inductive types_, is the subject of this chapter. *)
37
38
39 (** * Computing with Infinite Data *)
40
41 (** Let us begin with the most basic type of infinite data, _streams_, or lazy lists.%\index{Vernacular commands!CoInductive}% *)
42
43 Section stream.
44   Variable A : Type.
45
46   CoInductive stream : Type :=
47   | Cons : A -> stream -> stream.
48 End stream.
49
50 (** The definition is surprisingly simple. Starting from the definition of [list], we just need to change the keyword [Inductive] to [CoInductive]. We could have left a [Nil] constructor in our definition, but we will leave it out to force all of our streams to be infinite.
51
52 How do we write down a stream constant? Obviously simple application of constructors is not good enough, since we could only denote finite objects that way. Rather, whereas recursive definitions were necessary to _use_ values of recursive inductive types effectively, here we find that we need %\index{co-recursive definitions}%_co-recursive definitions_ to _build_ values of co-inductive types effectively.
53
54 We can define a stream consisting only of zeroes.%\index{Vernacular commands!CoFixpoint}% *)
55
56 CoFixpoint zeroes : stream nat := Cons 0 zeroes.
57
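As a small additional illustration (a sketch, not part of the original source; it assumes the same implicit-argument convention that lets [zeroes] write [Cons 0 zeroes] without a type argument), a co-fixpoint may also take arguments that evolve as the stream unfolds, as in this hypothetical stream of naturals counting up from [n]:

(* Hypothetical example: the stream n, S n, S (S n), ... *)
CoFixpoint nats_from (n : nat) : stream nat :=
  Cons n (nats_from (S n)).

The co-recursive call is an immediate argument to [Cons], which is exactly the shape that the guardedness condition discussed below requires.
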
96 ]]
97 *)
98
99 (* end thide *)
100
101 (** So far, it looks like co-inductive types might be a magic bullet, allowing us to import all of the Haskeller's usual tricks. However, there are important restrictions that are dual to the restrictions on the use of inductive types. Fixpoints _consume_ values of inductive types, with restrictions on which _arguments_ may be passed in recursive calls. Dually, co-fixpoints _produce_ values of co-inductive types, with restrictions on what may be done with the _results_ of co-recursive calls.
102
103 The restriction for co-inductive types shows up as the %\index{guardedness condition}%_guardedness condition_, and it can be broken into two parts. First, consider this stream definition, which would be legal in Haskell.
104 [[
105 CoFixpoint looper : stream nat := looper.
106 ]]
107
108 <<
112 looper : stream nat
113
114 unguarded recursive call in "looper"
115 >>
116
117 The rule we have run afoul of here is that _every co-recursive call must be guarded by a constructor_; that is, every co-recursive call must be a direct argument to a constructor of the co-inductive type we are generating. It is a good thing that this rule is enforced. If the definition of [looper] were accepted, our [approx] function would run forever when passed [looper], and we would have fallen into inconsistency.
118
119 Some familiar functions are easy to write in co-recursive fashion. *)
120
121 Section map.
122   Variables A B : Type.
128     end.
129 End map.
130
131 (** This code is a literal copy of that for the list [map] function, with the [Nil] case removed and [Fixpoint] changed to [CoFixpoint]. Many other standard functions on lazy data structures can be implemented just as easily. Some, like [filter], cannot be implemented. Since the predicate passed to [filter] may reject every element of the stream, we cannot satisfy the guardedness condition.
132
133 The implications of the condition can be subtle. To illustrate how, we start off with another co-recursive function definition that _is_ legal. The function [interleave] takes two streams and produces a new stream that alternates between their elements. *)
134
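To see concretely why [filter] is out of reach, here is the natural attempt, given as an illustrative sketch that is not part of the original source. Coq rejects it with an unguarded-recursive-call error like the one reported for [looper] above, because the co-recursive call in the rejecting branch is not an argument to a constructor:

(* Illustrative sketch; Coq rejects this definition. *)
Section filter.
  Variable A : Type.
  Variable f : A -> bool.

  CoFixpoint filter (s : stream A) : stream A :=
    match s with
      | Cons h t => if f h then Cons h (filter t) else filter t
    end.
End filter.

When [f] rejects [h], the result is just [filter t], with no constructor in between, so no finite prefix of the output is guaranteed to be computable.
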
135 Section interleave.
136   Variable A : Type.
137
138   CoFixpoint interleave (s1 s2 : stream A) : stream A :=
184   end.
185 ]]
186 Clearly in this case the [map'] calls are not immediate arguments to constructors, so we violate the guardedness condition. *)
187 (* end thide *)
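
One way to cope with such rejections (a sketch of my own, not taken from the original text) is to fuse the two functions into a single co-fixpoint, so that every co-recursive call once again sits directly under constructors:

Section map_interleave.
  Variables A B : Type.
  Variable f : A -> B.

  (* Maps [f] over the interleaving of [s1] and [s2] in one guarded definition. *)
  CoFixpoint map_interleave (s1 s2 : stream A) : stream B :=
    match s1, s2 with
      | Cons h1 t1, Cons h2 t2 => Cons (f h1) (Cons (f h2) (map_interleave t1 t2))
    end.
End map_interleave.

Here the co-recursive call is an argument to a [Cons] that is itself an argument to another [Cons], and nesting constructors this way is permitted, so the guardedness check succeeds; the price is that the mapping and interleaving logic can no longer be reused separately.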
188
189 (** A more interesting question is why that condition is the right one. We can make an intuitive argument that the original [map'] definition is perfectly reasonable and denotes a well-understood transformation on streams, such that every output would behave properly with [approx]. The guardedness condition is an example of a syntactic check for %\index{productivity}%_productivity_ of co-recursive definitions. A productive definition can be thought of as one whose outputs can be forced in finite time to any finite approximation level, as with [approx]. If we replaced the guardedness condition with more involved checks, we might be able to detect and allow a broader range of productive definitions. However, mistakes in these checks could cause inconsistency, and programmers would need to understand the new, more complex checks. Coq's design strikes a balance between consistency and simplicity with its choice of guard condition, though we can imagine other worthwhile balances being struck, too. *)
190
191
192 (** * Infinite Proofs *)
193
194 (** Let us say we want to give two different definitions of a stream of all ones, and then we want to prove that they are equivalent. *)
203 (** However, faced with the initial subgoal, it is not at all clear how this theorem can be proved. In fact, it is unprovable. The [eq] predicate that we use is fundamentally limited to equalities that can be demonstrated by finite, syntactic arguments. To prove this equivalence, we will need to introduce a new relation. *)
204 (* begin thide *)
205
206 Abort.
207
208 (** Co-inductive datatypes make sense by analogy from Haskell. What we need now is a _co-inductive proposition_. That is, we want to define a proposition whose proofs may be infinite, subject to the guardedness condition. The idea of infinite proofs does not show up in usual mathematics, but it can be very useful (unsurprisingly) for reasoning about infinite data structures. Besides examples from Haskell, infinite data and proofs will also turn out to be useful for modelling inherently infinite mathematical objects, like program executions.
209
210 We are ready for our first %\index{co-inductive predicates}%co-inductive predicate. *)
211
212 Section stream_eq.
213   Variable A : Type.
260 Thinking about how Coq would generate a proof term from the proof script above, we see that the problem is that we are violating the guardedness condition. During our proofs, Coq can help us check whether we have yet gone wrong in this way. We can run the command [Guarded] in any context to see if it is possible to finish the proof in a way that will yield a properly guarded proof term.%\index{Vernacular commands!Guarded}%
261 [[
262 Guarded.
263 ]]
264
265 Running [Guarded] here gives us the same error message that we got when we tried to run [Qed]. In larger proofs, [Guarded] can be helpful in detecting problems _before_ we think we are ready to run [Qed].
266
267 We need to start the co-induction by applying [stream_eq]'s constructor. To do that, we need to know that both arguments to the predicate are [Cons]es. Informally, this is trivial, but [simpl] is not able to help us. *)
268
269   Undo.
270   simpl.
342   assumption.
343 Qed.
344
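The proof just completed (most of it is elided from this comparison view) hinges on rewriting with a seemingly pointless identity function on streams. As a hedged reconstruction of the kind of lemma involved, with names chosen here for illustration rather than taken from the original:

(* Hypothetical reconstruction: an identity function that exposes one [Cons]. *)
Definition unroll A (s : stream A) : stream A :=
  match s with
    | Cons h t => Cons h t
  end.

Theorem unroll_eq : forall A (s : stream A), s = unroll s.
  destruct s; reflexivity.
Qed.

Rewriting a stream [s] into [unroll s] wraps it in a [match], and, as the next paragraphs explain, being the discriminee of a [match] is precisely what allows a [cofix] to reduce.
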
345 (** Why did this silly-looking trick help? The answer has to do with the constraints placed on Coq's evaluation rules by the need for termination. The [cofix]-related restriction that foiled our first attempt at using [simpl] is dual to a restriction for [fix]. In particular, an application of an anonymous [fix] only reduces when the top-level structure of the recursive argument is known. Otherwise, we would be unfolding the recursive definition ad infinitum.
346
347 Fixpoints only reduce when enough is known about the _definitions_ of their arguments. Dually, co-fixpoints only reduce when enough is known about _how their results will be used_. In particular, a [cofix] is only expanded when it is the discriminee of a [match]. Rewriting with our superficially silly lemma wrapped new [match]es around the two [cofix]es, triggering reduction.
348
349 If [cofix]es reduced haphazardly, it would be easy to run into infinite loops in evaluation, since we are, after all, building infinite objects.
350
351 One common source of difficulty with co-inductive proofs is bad interaction with standard Coq automation machinery. If we try to prove [ones_eq'] with automation, like we have in previous inductive proofs, we get an invalid proof. *)
352
360
361 (** The standard [auto] machinery sees that our goal matches an assumption and so applies that assumption, even though this violates guardedness. One usually starts a proof like this by [destruct]ing some parameter and running a custom tactic to figure out the first proof rule to apply for each case. Alternatively, there are tricks that can be played with %``%#"#hiding#"#%''% the co-inductive hypothesis.
362
363 %\medskip%
364
365 Must we always be cautious with automation in proofs by co-induction? Induction seems to have dual versions of the same pitfalls inherent in it, and yet we avoid those pitfalls by encapsulating safe Curry-Howard recursion schemes inside named induction principles. It turns out that we can usually do the same with %\index{co-induction principles}%_co-induction principles_. Let us take that tack here, so that we can arrive at an [induction x; crush]-style proof for [ones_eq'].
366
367 An induction principle is parameterized over a predicate characterizing what we mean to prove, _as a function of the inductive fact that we already know_. Dually, a co-induction principle ought to be parameterized over a predicate characterizing what we mean to prove, _as a function of the arguments to the co-inductive predicate that we are trying to prove_.
368
369 To state a useful principle for [stream_eq], it will be useful first to define the stream head function. *)
370
371 Definition hd A (s : stream A) : A :=
372   match s with
380   Variable R : stream A -> stream A -> Prop.
381   (** This relation generalizes the theorem we want to prove, characterizing exactly which two arguments to [stream_eq] we want to consider. *)
382
383   Hypothesis Cons_case_hd : forall s1 s2, R s1 s2 -> hd s1 = hd s2.
384   Hypothesis Cons_case_tl : forall s1 s2, R s1 s2 -> R (tl s1) (tl s2).
385   (** Two hypotheses characterize what makes a good choice of [R]: it enforces equality of stream heads, and it is %``%#<i>#hereditary#</i>#%''% in the sense that an [R] stream pair passes on %``%#"#[R]-ness#"#%''% to its tails. An established technical term for such a relation is %\index{bisimulation}%_bisimulation_. *)
386
387   (** Now it is straightforward to prove the principle, which says that any stream pair in [R] is equal. The reader may wish to step through the proof script to see what is going on. *)
388
389   Theorem stream_eq_coind : forall s1 s2, R s1 s2 -> stream_eq s1 s2.
390     cofix; destruct s1; destruct s2; intro.
517 (* end thide *)
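
As a usage sketch of the principle (my own, and relying on the [ones] and [ones'] definitions from the elided portion of this chapter, i.e. a direct co-fixpoint of ones and [map S zeroes]), proving the earlier equivalence reduces to exhibiting a bisimulation relating the two streams:

(* Hypothetical usage sketch; [ones] and [ones'] come from the elided text. *)
Theorem ones_eq_again : stream_eq ones ones'.
  apply (stream_eq_coind (fun s1 s2 => s1 = ones /\ s2 = ones')).
  intros s1 s2 H; destruct H; subst; reflexivity.
  intros s1 s2 H; destruct H; subst; split; reflexivity.
  split; reflexivity.
Qed.

The three subgoals correspond to the head hypothesis, the tail hypothesis, and membership of the pair [(ones, ones')] in the chosen relation; each follows by computation. With the book's [crush] tactic available, the whole proof can presumably be compressed further, which is the [induction x; crush]-style experience the text promises.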
518
519
520 (** * Simple Modeling of Non-Terminating Programs *)
521
522 (** We close the chapter with a quick motivating example for more complex uses of co-inductive types. We will define a co-inductive semantics for a simple imperative programming language and use that semantics to prove the correctness of a trivial optimization that removes spurious additions by 0. We follow the technique of %\index{co-inductive big-step operational semantics}%_co-inductive big-step operational semantics_ %\cite{BigStep}%.
523
524 We define a suggestive synonym for [nat], as we will consider programs with infinitely many variables, represented as [nat]s. *)
525
526 Definition var := nat.
527
551 Inductive cmd : Set :=
552 | Assign : var -> exp -> cmd
553 | Seq : cmd -> cmd -> cmd
554 | While : exp -> cmd -> cmd.
555
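For concreteness, here is a sketch (not in the original) of the archetypal diverging program in this syntax; it assumes that the elided [exp] definition provides a constant constructor, written here as a hypothetical [Const : nat -> exp]:

(* Hypothetical: [Const] stands in for whatever constant-expression constructor
   the elided [exp] definition actually provides. *)
Definition loop_forever : cmd :=
  While (Const 1) (Assign 0 (Const 0)).

Its [While] test never evaluates to zero, so no finite derivation could relate it to a final state; the co-inductive relation defined next handles such programs instead.
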
556 (** We could define an inductive relation to characterize the results of command evaluation. However, such a relation would not capture _nonterminating_ executions. With a co-inductive relation, we can capture both cases. The parameters of the relation are an initial state, a command, and a final state. A program that does not terminate in a particular initial state is related to _any_ final state. *)
557
558 CoInductive evalCmd : vars -> cmd -> vars -> Prop :=
559 | EvalAssign : forall vs v e, evalCmd vs (Assign v e) (set vs v (evalExp vs e))
560 | EvalSeq : forall vs1 vs2 vs3 c1 c2, evalCmd vs1 c1 vs2
561   -> evalCmd vs2 c2 vs3
580
581   Hypothesis WhileCase : forall vs1 vs3 e c, R vs1 (While e c) vs3
582     -> (evalExp vs1 e = 0 /\ vs3 = vs1)
583     \/ exists vs2, evalExp vs1 e <> 0 /\ R vs1 c vs2 /\ R vs2 (While e c) vs3.
584
585   (** The proof is routine. We make use of a form of %\index{tactics!destruct}%[destruct] that takes an %\index{intro pattern}%_intro pattern_ in an [as] clause. These patterns control how deeply we break apart the components of an inductive value, and we refer the reader to the Coq manual for more details. *)
586
587   Theorem evalCmd_coind : forall vs1 c vs2, R vs1 c vs2 -> evalCmd vs1 c vs2.
588     cofix; intros; destruct c.
589     rewrite (AssignCase H); constructor.
590     destruct (SeqCase H) as [? [? ?]]; econstructor; eauto.