Mercurial > cpdt > repo
comparison src/Coinductive.v @ 402:df7cd24383ae
Typesetting pass over Coinductive
author | Adam Chlipala <adam@chlipala.net> |
---|---|
date | Fri, 08 Jun 2012 13:26:28 -0400 |
parents | 05efde66559d |
children | a730378789f5 |
401:c898e72b84a3 | 402:df7cd24383ae |
---|---|
10 (* begin hide *) | 10 (* begin hide *) |
11 Require Import List. | 11 Require Import List. |
12 | 12 |
13 Require Import CpdtTactics. | 13 Require Import CpdtTactics. |
14 | 14 |
15 Definition bad := 0. | |
16 | |
15 Set Implicit Arguments. | 17 Set Implicit Arguments. |
16 (* end hide *) | 18 (* end hide *) |
17 | 19 |
18 | 20 |
19 (** %\chapter{Infinite Data and Proofs}% *) | 21 (** %\chapter{Infinite Data and Proofs}% *) |
31 | 33 |
32 There are also algorithmic considerations that make universal termination very desirable. We have seen how tactics like [reflexivity] compare terms up to equivalence under computational rules. Calls to recursive, pattern-matching functions are simplified automatically, with no need for explicit proof steps. It would be very hard to hold onto that kind of benefit if it became possible to write non-terminating programs; we would be running smack into the halting problem. | 34 There are also algorithmic considerations that make universal termination very desirable. We have seen how tactics like [reflexivity] compare terms up to equivalence under computational rules. Calls to recursive, pattern-matching functions are simplified automatically, with no need for explicit proof steps. It would be very hard to hold onto that kind of benefit if it became possible to write non-terminating programs; we would be running smack into the halting problem. |
33 | 35 |
34 One solution is to use types to contain the possibility of non-termination. For instance, we can create a %``%#"#non-termination monad,#"#%''% inside which we must write all of our general-recursive programs; several such approaches are surveyed in Chapter 7. This is a heavyweight solution, and so we would like to avoid it whenever possible. | 36 One solution is to use types to contain the possibility of non-termination. For instance, we can create a %``%#"#non-termination monad,#"#%''% inside which we must write all of our general-recursive programs; several such approaches are surveyed in Chapter 7. This is a heavyweight solution, and so we would like to avoid it whenever possible. |
35 | 37 |
36 Luckily, Coq has special support for a class of lazy data structures that happens to contain most examples found in Haskell. That mechanism, %\index{co-inductive types}%_co-inductive types_, is the subject of this chapter. *) | 38 Luckily, Coq has special support for a class of lazy data structures that happens to contain most examples found in Haskell. That mechanism,%\index{co-inductive types}% _co-inductive types_, is the subject of this chapter. *) |
37 | 39 |
38 | 40 |
39 (** * Computing with Infinite Data *) | 41 (** * Computing with Infinite Data *) |
40 | 42 |
41 (** Let us begin with the most basic type of infinite data, _streams_, or lazy lists.%\index{Vernacular commands!CoInductive}% *) | 43 (** Let us begin with the most basic type of infinite data, _streams_, or lazy lists.%\index{Vernacular commands!CoInductive}% *) |
47 | Cons : A -> stream -> stream. | 49 | Cons : A -> stream -> stream. |
48 End stream. | 50 End stream. |
49 | 51 |
50 (** The definition is surprisingly simple. Starting from the definition of [list], we just need to change the keyword [Inductive] to [CoInductive]. We could have left a [Nil] constructor in our definition, but we will leave it out to force all of our streams to be infinite. | 52 (** The definition is surprisingly simple. Starting from the definition of [list], we just need to change the keyword [Inductive] to [CoInductive]. We could have left a [Nil] constructor in our definition, but we will leave it out to force all of our streams to be infinite. |
51 | 53 |
52 How do we write down a stream constant? Obviously simple application of constructors is not good enough, since we could only denote finite objects that way. Rather, whereas recursive definitions were necessary to _use_ values of recursive inductive types effectively, here we find that we need %\index{co-recursive definitions}%_co-recursive definitions_ to _build_ values of co-inductive types effectively. | 54 How do we write down a stream constant? Obviously simple application of constructors is not good enough, since we could only denote finite objects that way. Rather, whereas recursive definitions were necessary to _use_ values of recursive inductive types effectively, here we find that we need%\index{co-recursive definitions}% _co-recursive definitions_ to _build_ values of co-inductive types effectively. |
53 | 55 |
54 We can define a stream consisting only of zeroes.%\index{Vernacular commands!CoFixpoint}% *) | 56 We can define a stream consisting only of zeroes.%\index{Vernacular commands!CoFixpoint}% *) |
55 | 57 |
56 CoFixpoint zeroes : stream nat := Cons 0 zeroes. | 58 CoFixpoint zeroes : stream nat := Cons 0 zeroes. |
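The chapter later inspects co-recursive values by forcing finite prefixes with a function called [approx]; its definition is outside this excerpt, so here is a minimal sketch of an observer of that shape, usable to check [zeroes]. The exact definition in the chapter may differ.

```coq
(* Force a finite prefix of a stream into a list.  Recursion is on the
   [nat] fuel argument, so this is an ordinary terminating [Fixpoint],
   even though the stream itself is infinite. *)
Fixpoint approx A (s : stream A) (n : nat) : list A :=
  match n with
    | O => nil
    | S n' =>
      match s with
        | Cons h t => h :: approx t n'
      end
  end.

Eval simpl in approx zeroes 5.
(* = 0 :: 0 :: 0 :: 0 :: 0 :: nil : list nat *)
```

Note that each [S n'] step pattern-matches the stream, which is exactly what triggers one unfolding of the co-fixpoint [zeroes].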
57 | 59 |
96 ]] | 98 ]] |
97 *) | 99 *) |
98 | 100 |
99 (* end thide *) | 101 (* end thide *) |
100 | 102 |
103 (* begin hide *) | |
104 Definition looper := 0. | |
105 (* end hide *) | |
106 | |
101 (** So far, it looks like co-inductive types might be a magic bullet, allowing us to import all of the Haskeller's usual tricks. However, there are important restrictions that are dual to the restrictions on the use of inductive types. Fixpoints _consume_ values of inductive types, with restrictions on which _arguments_ may be passed in recursive calls. Dually, co-fixpoints _produce_ values of co-inductive types, with restrictions on what may be done with the _results_ of co-recursive calls. | 107 (** So far, it looks like co-inductive types might be a magic bullet, allowing us to import all of the Haskeller's usual tricks. However, there are important restrictions that are dual to the restrictions on the use of inductive types. Fixpoints _consume_ values of inductive types, with restrictions on which _arguments_ may be passed in recursive calls. Dually, co-fixpoints _produce_ values of co-inductive types, with restrictions on what may be done with the _results_ of co-recursive calls. |
102 | 108 |
103 The restriction for co-inductive types shows up as the %\index{guardedness condition}%_guardedness condition_, and it can be broken into two parts. First, consider this stream definition, which would be legal in Haskell. | 109 The restriction for co-inductive types shows up as the%\index{guardedness condition}% _guardedness condition_. First, consider this stream definition, which would be legal in Haskell. |
104 [[ | 110 [[ |
105 CoFixpoint looper : stream nat := looper. | 111 CoFixpoint looper : stream nat := looper. |
106 ]] | 112 ]] |
107 | 113 |
108 << | 114 << |
126 match s with | 132 match s with |
127 | Cons h t => Cons (f h) (map t) | 133 | Cons h t => Cons (f h) (map t) |
128 end. | 134 end. |
129 End map. | 135 End map. |
130 | 136 |
131 (** This code is a literal copy of that for the list [map] function, with the [Nil] case removed and [Fixpoint] changed to [CoFixpoint]. Many other standard functions on lazy data structures can be implemented just as easily. Some, like [filter], cannot be implemented. Since the predicate passed to [filter] may reject every element of the stream, we cannot satisfy the guardedness condition. | 137 (* begin hide *) |
138 Definition filter := 0. | |
139 (* end hide *) | |
140 | |
141 (** This code is a literal copy of that for the list [map] function, with the [nil] case removed and [Fixpoint] changed to [CoFixpoint]. Many other standard functions on lazy data structures can be implemented just as easily. Some, like [filter], cannot be implemented. Since the predicate passed to [filter] may reject every element of the stream, we cannot satisfy the guardedness condition. | |
132 | 142 |
133 The implications of the condition can be subtle. To illustrate how, we start off with another co-recursive function definition that _is_ legal. The function [interleave] takes two streams and produces a new stream that alternates between their elements. *) | 143 The implications of the condition can be subtle. To illustrate how, we start off with another co-recursive function definition that _is_ legal. The function [interleave] takes two streams and produces a new stream that alternates between their elements. *) |
134 | 144 |
135 Section interleave. | 145 Section interleave. |
136 Variable A : Type. | 146 Variable A : Type. |
184 end. | 194 end. |
185 ]] | 195 ]] |
186 Clearly in this case the [map'] calls are not immediate arguments to constructors, so we violate the guardedness condition. *) | 196 Clearly in this case the [map'] calls are not immediate arguments to constructors, so we violate the guardedness condition. *) |
187 (* end thide *) | 197 (* end thide *) |
188 | 198 |
189 (** A more interesting question is why that condition is the right one. We can make an intuitive argument that the original [map'] definition is perfectly reasonable and denotes a well-understood transformation on streams, such that every output would behave properly with [approx]. The guardedness condition is an example of a syntactic check for %\index{productivity}%_productivity_ of co-recursive definitions. A productive definition can be thought of as one whose outputs can be forced in finite time to any finite approximation level, as with [approx]. If we replaced the guardedness condition with more involved checks, we might be able to detect and allow a broader range of productive definitions. However, mistakes in these checks could cause inconsistency, and programmers would need to understand the new, more complex checks. Coq's design strikes a balance between consistency and simplicity with its choice of guard condition, though we can imagine other worthwhile balances being struck, too. *) | 199 (** A more interesting question is why that condition is the right one. We can make an intuitive argument that the original [map'] definition is perfectly reasonable and denotes a well-understood transformation on streams, such that every output would behave properly with [approx]. The guardedness condition is an example of a syntactic check for%\index{productivity}% _productivity_ of co-recursive definitions. A productive definition can be thought of as one whose outputs can be forced in finite time to any finite approximation level, as with [approx]. If we replaced the guardedness condition with more involved checks, we might be able to detect and allow a broader range of productive definitions. However, mistakes in these checks could cause inconsistency, and programmers would need to understand the new, more complex checks. Coq's design strikes a balance between consistency and simplicity with its choice of guard condition, though we can imagine other worthwhile balances being struck, too. *) |
190 | 200 |
191 | 201 |
192 (** * Infinite Proofs *) | 202 (** * Infinite Proofs *) |
193 | 203 |
194 (** Let us say we want to give two different definitions of a stream of all ones, and then we want to prove that they are equivalent. *) | 204 (** Let us say we want to give two different definitions of a stream of all ones, and then we want to prove that they are equivalent. *) |
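The two definitions themselves fall in an elided region of this diff; a sketch consistent with the surrounding discussion is below. The names [ones] and [ones'] (referenced later via [ones_eq']) and the use of [map] over [zeroes] are reconstructions, not part of this excerpt.

```coq
(* A direct co-fixpoint producing an infinite stream of ones... *)
CoFixpoint ones : stream nat := Cons 1 ones.

(* ...and an indirect version: mapping successor over the stream of
   zeroes should yield the same stream, but the two are not
   definitionally equal, motivating a co-inductive equality proof. *)
Definition ones' := map S zeroes.
```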
360 | 370 |
361 (** The standard [auto] machinery sees that our goal matches an assumption and so applies that assumption, even though this violates guardedness. One usually starts a proof like this by [destruct]ing some parameter and running a custom tactic to figure out the first proof rule to apply for each case. Alternatively, there are tricks that can be played with %``%#"#hiding#"#%''% the co-inductive hypothesis. | 371 (** The standard [auto] machinery sees that our goal matches an assumption and so applies that assumption, even though this violates guardedness. One usually starts a proof like this by [destruct]ing some parameter and running a custom tactic to figure out the first proof rule to apply for each case. Alternatively, there are tricks that can be played with %``%#"#hiding#"#%''% the co-inductive hypothesis. |
362 | 372 |
363 %\medskip% | 373 %\medskip% |
364 | 374 |
365 Must we always be cautious with automation in proofs by co-induction? Induction seems to have dual versions of the same pitfalls inherent in it, and yet we avoid those pitfalls by encapsulating safe Curry-Howard recursion schemes inside named induction principles. It turns out that we can usually do the same with %\index{co-induction principles}%_co-induction principles_. Let us take that tack here, so that we can arrive at an [induction x; crush]-style proof for [ones_eq']. | 375 Must we always be cautious with automation in proofs by co-induction? Induction seems to have dual versions of the same pitfalls inherent in it, and yet we avoid those pitfalls by encapsulating safe Curry-Howard recursion schemes inside named induction principles. It turns out that we can usually do the same with%\index{co-induction principles}% _co-induction principles_. Let us take that tack here, so that we can arrive at an [induction x; crush]-style proof for [ones_eq']. |
366 | 376 |
367 An induction principle is parameterized over a predicate characterizing what we mean to prove, _as a function of the inductive fact that we already know_. Dually, a co-induction principle ought to be parameterized over a predicate characterizing what we mean to prove, _as a function of the arguments to the co-inductive predicate that we are trying to prove_. | 377 An induction principle is parameterized over a predicate characterizing what we mean to prove, _as a function of the inductive fact that we already know_. Dually, a co-induction principle ought to be parameterized over a predicate characterizing what we mean to prove, _as a function of the arguments to the co-inductive predicate that we are trying to prove_. |
368 | 378 |
369 To state a useful principle for [stream_eq], it will be useful first to define the stream head function. *) | 379 To state a useful principle for [stream_eq], it will be useful first to define the stream head function. *) |
370 | 380 |
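A minimal sketch of the head observer the text announces, along with the analogous tail observer [tl] that the hypotheses below also rely on; since each only peels off one constructor, both are ordinary non-recursive definitions.

```coq
(* Stream destructors: no recursion needed, so plain [Definition]s
   suffice, and they may be applied freely inside co-fixpoints. *)
Definition hd A (s : stream A) : A :=
  match s with
    | Cons x _ => x
  end.

Definition tl A (s : stream A) : stream A :=
  match s with
    | Cons _ s' => s'
  end.
```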
380 Variable R : stream A -> stream A -> Prop. | 390 Variable R : stream A -> stream A -> Prop. |
381 (** This relation generalizes the theorem we want to prove, characterizing exactly which two arguments to [stream_eq] we want to consider. *) | 391 (** This relation generalizes the theorem we want to prove, characterizing exactly which two arguments to [stream_eq] we want to consider. *) |
382 | 392 |
383 Hypothesis Cons_case_hd : forall s1 s2, R s1 s2 -> hd s1 = hd s2. | 393 Hypothesis Cons_case_hd : forall s1 s2, R s1 s2 -> hd s1 = hd s2. |
384 Hypothesis Cons_case_tl : forall s1 s2, R s1 s2 -> R (tl s1) (tl s2). | 394 Hypothesis Cons_case_tl : forall s1 s2, R s1 s2 -> R (tl s1) (tl s2). |
385 (** Two hypotheses characterize what makes a good choice of [R]: it enforces equality of stream heads, and it is %``%#<i>#hereditary#</i>#%''% in the sense that an [R] stream pair passes on %``%#"#[R]-ness#"#%''% to its tails. An established technical term for such a relation is %\index{bisimulation}%_bisimulation_. *) | 395 (** Two hypotheses characterize what makes a good choice of [R]: it enforces equality of stream heads, and it is %``%#<i>#hereditary#</i>#%''% in the sense that an [R] stream pair passes on %``%#"#[R]-ness#"#%''% to its tails. An established technical term for such a relation is%\index{bisimulation}% _bisimulation_. *) |
386 | 396 |
387 (** Now it is straightforward to prove the principle, which says that any stream pair in [R] is equal. The reader may wish to step through the proof script to see what is going on. *) | 397 (** Now it is straightforward to prove the principle, which says that any stream pair in [R] is equal. The reader may wish to step through the proof script to see what is going on. *) |
388 | 398 |
389 Theorem stream_eq_coind : forall s1 s2, R s1 s2 -> stream_eq s1 s2. | 399 Theorem stream_eq_coind : forall s1 s2, R s1 s2 -> stream_eq s1 s2. |
390 cofix; destruct s1; destruct s2; intro. | 400 cofix; destruct s1; destruct s2; intro. |
517 (* end thide *) | 527 (* end thide *) |
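As a usage sketch, the principle can be instantiated with equality itself as the bisimulation, giving reflexivity of [stream_eq] without any manual [cofix] bookkeeping. This assumes [crush] dispatches the three residual subgoals (the two hypotheses on [R], plus [s = s]), which are all immediate.

```coq
(* Instantiate [R] with Leibniz equality: equal streams trivially have
   equal heads and [R]-related tails, so [stream_eq] is reflexive. *)
Theorem stream_eq_refl : forall A (s : stream A), stream_eq s s.
  intros; apply (stream_eq_coind (fun s1 s2 : stream A => s1 = s2)); crush.
Qed.
```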
518 | 528 |
519 | 529 |
520 (** * Simple Modeling of Non-Terminating Programs *) | 530 (** * Simple Modeling of Non-Terminating Programs *) |
521 | 531 |
522 (** We close the chapter with a quick motivating example for more complex uses of co-inductive types. We will define a co-inductive semantics for a simple imperative programming language and use that semantics to prove the correctness of a trivial optimization that removes spurious additions by 0. We follow the technique of %\index{co-inductive big-step operational semantics}%_co-inductive big-step operational semantics_ %\cite{BigStep}%. | 532 (** We close the chapter with a quick motivating example for more complex uses of co-inductive types. We will define a co-inductive semantics for a simple imperative programming language and use that semantics to prove the correctness of a trivial optimization that removes spurious additions by 0. We follow the technique of%\index{co-inductive big-step operational semantics}% _co-inductive big-step operational semantics_ %\cite{BigStep}%. |
523 | 533 |
524 We define a suggestive synonym for [nat], as we will consider programs with infinitely many variables, represented as [nat]s. *) | 534 We define a suggestive synonym for [nat], as we will consider programs with infinitely many variables, represented as [nat]s. *) |
525 | 535 |
526 Definition var := nat. | 536 Definition var := nat. |
527 | 537 |
580 | 590 |
581 Hypothesis WhileCase : forall vs1 vs3 e c, R vs1 (While e c) vs3 | 591 Hypothesis WhileCase : forall vs1 vs3 e c, R vs1 (While e c) vs3 |
582 -> (evalExp vs1 e = 0 /\ vs3 = vs1) | 592 -> (evalExp vs1 e = 0 /\ vs3 = vs1) |
583 \/ exists vs2, evalExp vs1 e <> 0 /\ R vs1 c vs2 /\ R vs2 (While e c) vs3. | 593 \/ exists vs2, evalExp vs1 e <> 0 /\ R vs1 c vs2 /\ R vs2 (While e c) vs3. |
584 | 594 |
585 (** The proof is routine. We make use of a form of %\index{tactics!destruct}%[destruct] that takes an %\index{intro pattern}%_intro pattern_ in an [as] clause. These patterns control how deeply we break apart the components of an inductive value, and we refer the reader to the Coq manual for more details. *) | 595 (** The proof is routine. We make use of a form of %\index{tactics!destruct}%[destruct] that takes an%\index{intro pattern}% _intro pattern_ in an [as] clause. These patterns control how deeply we break apart the components of an inductive value, and we refer the reader to the Coq manual for more details. *) |
586 | 596 |
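As a standalone illustration of an [as] intro pattern, this hypothetical example breaks an existential-and-conjunction hypothesis into all of its components in a single [destruct] step, mirroring the nested patterns used in the proof below.

```coq
(* The pattern [[n [Ha Hb]]] names the existential witness [n] and the
   two conjuncts [Ha : n = a] and [Hb : n = b] in one step. *)
Example destruct_as_pattern : forall a b : nat,
  (exists n, n = a /\ n = b) -> a = b.
  intros a b H; destruct H as [n [Ha Hb]]; congruence.
Qed.
```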
587 Theorem evalCmd_coind : forall vs1 c vs2, R vs1 c vs2 -> evalCmd vs1 c vs2. | 597 Theorem evalCmd_coind : forall vs1 c vs2, R vs1 c vs2 -> evalCmd vs1 c vs2. |
588 cofix; intros; destruct c. | 598 cofix; intros; destruct c. |
589 rewrite (AssignCase H); constructor. | 599 rewrite (AssignCase H); constructor. |
590 destruct (SeqCase H) as [? [? ?]]; econstructor; eauto. | 600 destruct (SeqCase H) as [? [? ?]]; econstructor; eauto. |