comparison src/GeneralRec.v @ 474:d9e1fb184672

Batch of changes based on proofreader feedback
author Adam Chlipala <adam@chlipala.net>
date Mon, 22 Oct 2012 13:48:45 -0400
parents b36876d4611e
children 1fd4109f7b31
comparing 473:39c204d2262c ("-" = deleted) with 474:d9e1fb184672 ("+" = inserted)
104 104   Acc_intro : (forall y : A, R y x -> Acc R y) -> Acc R x
105 105   ]]
106 106
107 107   In prose, an element [x] is accessible for a relation [R] if every element "less than" [x] according to [R] is also accessible. Since [Acc] is defined inductively, we know that any accessibility proof involves a finite chain of invocations, in a certain sense that we can make formal. Building on Chapter 5's examples, let us define a co-inductive relation that is closer to the usual informal notion of "absence of infinite decreasing chains." *)
108 108
109     - CoInductive isChain A (R : A -> A -> Prop) : stream A -> Prop :=
    109 + CoInductive infiniteDecreasingChain A (R : A -> A -> Prop) : stream A -> Prop :=
110     - | ChainCons : forall x y s, isChain R (Cons y s)
    110 + | ChainCons : forall x y s, infiniteDecreasingChain R (Cons y s)
111 111   -> R y x
112     - -> isChain R (Cons x (Cons y s)).
    112 + -> infiniteDecreasingChain R (Cons x (Cons y s)).
113 113
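As an aside for this comparison (not part of GeneralRec.v; the name [zero_acc] is invented here), the inductive character of [Acc] can be seen in a concrete case: for the usual order on [nat], nothing lies below [0], so the premise of [Acc_intro] is vacuous and the accessibility proof is a short, finite one.

Example zero_acc : Acc lt 0.
  (* Apply [Acc_intro]; the resulting hypothesis [y < 0] is contradictory. *)
  constructor; intros y Hy; inversion Hy.
Qed.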
114 114   (** We can now prove that any accessible element cannot be the beginning of any infinite decreasing chain. *)
115 115
116 116   (* begin thide *)
117     - Lemma noChains' : forall A (R : A -> A -> Prop) x, Acc R x
    117 + Lemma noBadChains' : forall A (R : A -> A -> Prop) x, Acc R x
118     - -> forall s, ~isChain R (Cons x s).
    118 + -> forall s, ~infiniteDecreasingChain R (Cons x s).
119 119   induction 1; crush;
120 120   match goal with
121     - | [ H : isChain _ _ |- _ ] => inversion H; eauto
    121 + | [ H : infiniteDecreasingChain _ _ |- _ ] => inversion H; eauto
122 122   end.
123 123   Qed.
124 124
125 125   (** From here, the absence of infinite decreasing chains in well-founded sets is immediate. *)
126 126
127     - Theorem noChains : forall A (R : A -> A -> Prop), well_founded R
    127 + Theorem noBadChains : forall A (R : A -> A -> Prop), well_founded R
128     - -> forall s, ~isChain R s.
    128 + -> forall s, ~infiniteDecreasingChain R s.
129     - destruct s; apply noChains'; auto.
    129 + destruct s; apply noBadChains'; auto.
130 130   Qed.
131 131   (* end thide *)
132 132
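As another aside not present in the file (the import and the displayed output are assumptions based on the standard library), the renamed [noBadChains] can be instantiated with the library's proof that [lt] is well founded on [nat]:

Require Import Coq.Arith.Wf_nat.

(* [lt_wf : well_founded lt] comes from [Wf_nat].  The [Check] should
   display something like
   [forall s : stream nat, ~ infiniteDecreasingChain lt s]. *)
Check (@noBadChains nat lt lt_wf).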
133 133   (** Absence of infinite decreasing chains implies absence of infinitely nested recursive calls, for any recursive definition that respects the well-founded relation. The [Fix] combinator from the standard library formalizes that intuition: *)
134 134
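The comparison elides the unchanged lines that follow. For reference only (this is the standard library's [Fix] from [Coq.Init.Wf], shown here as an aside rather than text from the changeset), its type is approximately:

Check Fix.
(* Fix
     : forall (A : Type) (R : A -> A -> Prop),
       well_founded R ->
       forall P : A -> Type,
       (forall x : A, (forall y : A, R y x -> P y) -> P x) ->
       forall x : A, P x *)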
168 168   red; intro; eapply lengthOrder_wf'; eauto.
169 169   Defined.
170 170
171 171   (** Notice that we end these proofs with %\index{Vernacular commands!Defined}%[Defined], not [Qed]. Recall that [Defined] marks the theorems as %\emph{transparent}%, so that the details of their proofs may be used during program execution. Why could such details possibly matter for computation? It turns out that [Fix] satisfies the primitive recursion restriction by declaring itself as _recursive in the structure of [Acc] proofs_. This is possible because [Acc] proofs follow a predictable inductive structure. We must do work, as in the last theorem's proof, to establish that all elements of a type belong to [Acc], but the automatic unwinding of those proofs during recursion is straightforward. If we ended the proof with [Qed], the proof details would be hidden from computation, in which case the unwinding process would get stuck.
172 172
173     - To justify our two recursive [mergeSort] calls, we will also need to prove that [partition] respects the [lengthOrder] relation. These proofs, too, must be kept transparent, to avoid stuckness of [Fix] evaluation. *)
    173 + To justify our two recursive [mergeSort] calls, we will also need to prove that [partition] respects the [lengthOrder] relation. These proofs, too, must be kept transparent, to avoid stuckness of [Fix] evaluation. We use the syntax [@foo] to reference identifier [foo] with its implicit argument behavior turned off. *)
174 174
175 175   Lemma partition_wf : forall len ls, 2 <= length ls <= len
176 176   -> let (ls1, ls2) := partition ls in
177 177   lengthOrder ls1 ls /\ lengthOrder ls2 ls.
178 178   unfold lengthOrder; induction len; crush; do 2 (destruct ls; crush);
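Returning to the remark above about [Defined] versus [Qed]: a minimal, self-contained sketch (not part of GeneralRec.v; the names are invented for illustration) of why transparency matters for computation:

Definition two_opaque : nat.
  Proof. exact 2. Qed.       (* opaque: the body is hidden from reduction *)
Definition two_transparent : nat.
  Proof. exact 2. Defined.   (* transparent: the body is available to [compute] *)

Eval compute in two_transparent.  (* reduces to [2] *)
Eval compute in two_opaque.       (* stuck at [two_opaque] *)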
299 299   | forall (n : nat) (v : A),
300 300   f n = Some v
301 301   -> forall (n' : nat), n' >= n
302 302   -> f n' = Some v}.
303 303
304     - (** A computation is fundamentally a function [f] from an _approximation level_ [n] to an optional result. Intuitively, higher [n] values enable termination in more cases than lower values. A call to [f] may return [None] to indicate that [n] was not high enough to run the computation to completion; higher [n] values may yield [Some]. Further, the proof obligation within the sigma type asserts that [f] is _monotone_ in an appropriate sense: when some [n] is sufficient to produce termination, so are all higher [n] values, and they all yield the same program result [v].
    304 + (** A computation is fundamentally a function [f] from an _approximation level_ [n] to an optional result. Intuitively, higher [n] values enable termination in more cases than lower values. A call to [f] may return [None] to indicate that [n] was not high enough to run the computation to completion; higher [n] values may yield [Some]. Further, the proof obligation within the subset type asserts that [f] is _monotone_ in an appropriate sense: when some [n] is sufficient to produce termination, so are all higher [n] values, and they all yield the same program result [v].
305 305
306 306   It is easy to define a relation characterizing when a computation runs to a particular result at a particular approximation level. *)
307 307
308 308   Definition runTo (m : computation) (n : nat) (v : A) :=
309 309   proj1_sig m n = Some v.
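To make the subset type concrete, here is a hedged sketch (not taken from the file; the name [diverge] is invented, and it assumes we are still inside the section where [A] is declared) of a trivial inhabitant: the computation that never terminates at any approximation level. Its monotonicity obligation follows because [None = Some v] is contradictory.

Definition diverge : computation.
  (* The approximation function ignores [n] and always gives up. *)
  exists (fun _ : nat => @None A).
  (* Monotonicity: from [None = Some v], anything follows. *)
  intros n v H; simpl in H; discriminate H.
Defined.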
408 408   Theorem run_Return : run Return v.
409 409   run.
410 410   Qed.
411 411   End Return.
412 412
413     - (** The name [Return] was meant to be suggestive of the standard operations of %\index{monad}%monads%~\cite{Monads}%. The other standard operation is [Bind], which lets us run one computation and, if it terminates, pass its result off to another computation. *)
    413 + (** The name [Return] was meant to be suggestive of the standard operations of %\index{monad}%monads%~\cite{Monads}%. The other standard operation is [Bind], which lets us run one computation and, if it terminates, pass its result off to another computation. We implement bind using the notation [let (x, y) := e1 in e2], for pulling apart the value [e1] which may be thought of as a pair. The second component of a [computation] is a proof, which we do not need to mention directly in the definition of [Bind]. *)
414 414
415 415   Section Bind.
416 416   Variables A B : Type.
417 417   Variable m1 : computation A.
418 418   Variable m2 : A -> computation B.
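The definition of [Bind] itself lies beyond this hunk. As a hedged sketch of the idea described in the new comment above (not necessarily the file's exact code; the name [bindF] is invented), the underlying approximation function can pull apart [m1] with the [let (x, y) := e1 in e2] notation, run it at level [n], and on success feed the result to [m2] at the same level:

Definition bindF (n : nat) : option B :=
  let (f1, _) := m1 in        (* first component: the approximation function of [m1] *)
  match f1 n with
  | None => None              (* [m1] needs a higher approximation level *)
  | Some v =>
    let (f2, _) := m2 v in    (* destructure the second computation the same way *)
    f2 n
  end.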