comparison src/GeneralRec.v @ 424:f83664d817ce
Pass through GeneralRec, to incorporate new coqdoc features
author   | Adam Chlipala <adam@chlipala.net>
date     | Wed, 25 Jul 2012 17:16:07 -0400
parents  | f1cdae4af393
children | 5f25705a10ea
423:d3a40c044ab4 | 424:f83664d817ce |
---|---|
18 | 18 |
19 (** %\chapter{General Recursion}% *) | 19 (** %\chapter{General Recursion}% *) |
20 | 20 |
21 (** Termination of all programs is a crucial property of Gallina. Non-terminating programs introduce logical inconsistency, where any theorem can be proved with an infinite loop. Coq uses a small set of conservative, syntactic criteria to check termination of all recursive definitions. These criteria are insufficient to support the natural encodings of a variety of important programming idioms. Further, since Coq makes it so convenient to encode mathematics computationally, with functional programs, we may find ourselves wanting to employ more complicated recursion in mathematical definitions. | 21 (** Termination of all programs is a crucial property of Gallina. Non-terminating programs introduce logical inconsistency, where any theorem can be proved with an infinite loop. Coq uses a small set of conservative, syntactic criteria to check termination of all recursive definitions. These criteria are insufficient to support the natural encodings of a variety of important programming idioms. Further, since Coq makes it so convenient to encode mathematics computationally, with functional programs, we may find ourselves wanting to employ more complicated recursion in mathematical definitions. |
22 | 22 |
23 What exactly are the conservative criteria that we run up against? For _recursive_ definitions, recursive calls are only allowed on _syntactic subterms_ of the original primary argument, a restriction known as%\index{primitive recursion}% _primitive recursion_. In fact, Coq's handling of reflexive inductive types (those defined in terms of functions returning the same type) gives a bit more flexibility than in traditional primitive recursion, but the term is still applied commonly. In Chapter 5, we saw how _co-recursive_ definitions are checked against a syntactic guardness condition that guarantees productivity. | 23 What exactly are the conservative criteria that we run up against? For _recursive_ definitions, recursive calls are only allowed on _syntactic subterms_ of the original primary argument, a restriction known as%\index{primitive recursion}% _primitive recursion_. In fact, Coq's handling of reflexive inductive types (those defined in terms of functions returning the same type) gives a bit more flexibility than in traditional primitive recursion, but the term is still applied commonly. In Chapter 5, we saw how _co-recursive_ definitions are checked against a syntactic guardedness condition that guarantees productivity. |
24 | 24 |
25 Many natural recursion patterns satisfy neither condition. For instance, there is our simple running example in this chapter, merge sort. We will study three different approaches to more flexible recursion, and the latter two of the approaches will even support definitions that may fail to terminate on certain inputs, without any up-front characterization of which inputs those may be. | 25 Many natural recursion patterns satisfy neither condition. For instance, there is our simple running example in this chapter, merge sort. We will study three different approaches to more flexible recursion, and the latter two of the approaches will even support definitions that may fail to terminate on certain inputs, without any up-front characterization of which inputs those may be. |
26 | 26 |
27 Before proceeding, it is important to note that the problem here is not as fundamental as it may appear. The final example of Chapter 5 demonstrated what is called a%\index{deep embedding}% _deep embedding_ of the syntax and semantics of a programming language. That is, we gave a mathematical definition of a language of programs and their meanings. This language clearly admitted non-termination, and we could think of writing all our sophisticated recursive functions with such explicit syntax types. However, in doing so, we forfeit our chance to take advantage of Coq's very good built-in support for reasoning about Gallina programs. We would rather use a%\index{shallow embedding}% _shallow embedding_, where we model informal constructs by encoding them as normal Gallina programs. Each of the three techniques of this chapter follows that style. *) | 27 Before proceeding, it is important to note that the problem here is not as fundamental as it may appear. The final example of Chapter 5 demonstrated what is called a%\index{deep embedding}% _deep embedding_ of the syntax and semantics of a programming language. That is, we gave a mathematical definition of a language of programs and their meanings. This language clearly admitted non-termination, and we could think of writing all our sophisticated recursive functions with such explicit syntax types. However, in doing so, we forfeit our chance to take advantage of Coq's very good built-in support for reasoning about Gallina programs. We would rather use a%\index{shallow embedding}% _shallow embedding_, where we model informal constructs by encoding them as normal Gallina programs. Each of the three techniques of this chapter follows that style. *) |
28 | 28 |
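As a self-contained illustration of the restriction (a toy example of my own, not part of GeneralRec.v), here is a function that terminates on every input yet fails the syntactic check, because the recursive call swaps the two list arguments and so passes neither of them a subterm of itself. The [Fail] vernacular merely asserts that Coq rejects the command.

Require Import List.

(* Rejected: no single argument decreases to a syntactic subterm at the
   recursive call, even though the function always terminates. *)
Fail Fixpoint interleave (A : Type) (ls1 ls2 : list A) : list A :=
  match ls1 with
    | nil => ls2
    | x :: ls1' => x :: interleave A ls2 ls1'
  end.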
32 (** The essence of terminating recursion is that there are no infinite chains of nested recursive calls. This intuition is commonly mapped to the mathematical idea of a%\index{well-founded relation}% _well-founded relation_, and the associated standard technique in Coq is%\index{well-founded recursion}% _well-founded recursion_. The syntactic-subterm relation that Coq applies by default is well-founded, but many cases demand alternate well-founded relations. To demonstrate, let us see where we get stuck on attempting a standard merge sort implementation. *) | 32 (** The essence of terminating recursion is that there are no infinite chains of nested recursive calls. This intuition is commonly mapped to the mathematical idea of a%\index{well-founded relation}% _well-founded relation_, and the associated standard technique in Coq is%\index{well-founded recursion}% _well-founded recursion_. The syntactic-subterm relation that Coq applies by default is well-founded, but many cases demand alternate well-founded relations. To demonstrate, let us see where we get stuck on attempting a standard merge sort implementation. *) |
33 | 33 |
34 Section mergeSort. | 34 Section mergeSort. |
35 Variable A : Type. | 35 Variable A : Type. |
36 Variable le : A -> A -> bool. | 36 Variable le : A -> A -> bool. |
37 (** We have a set equipped with some %``%#"#less-than-or-equal-to#"#%''% test. *) | 37 (** We have a set equipped with some "less-than-or-equal-to" test. *) |
38 | 38 |
39 (** A standard function inserts an element into a sorted list, preserving sortedness. *) | 39 (** A standard function inserts an element into a sorted list, preserving sortedness. *) |
40 | 40 |
41 Fixpoint insert (x : A) (ls : list A) : list A := | 41 Fixpoint insert (x : A) (ls : list A) : list A := |
42 match ls with | 42 match ls with |
64 | h1 :: h2 :: ls' => | 64 | h1 :: h2 :: ls' => |
65 let (ls1, ls2) := partition ls' in | 65 let (ls1, ls2) := partition ls' in |
66 (h1 :: ls1, h2 :: ls2) | 66 (h1 :: ls1, h2 :: ls2) |
67 end. | 67 end. |
68 | 68 |
69 (** Now, let us try to write the final sorting function, using a natural number %``%#"#[<=]#"#%''% test [leb] from the standard library. | 69 (** Now, let us try to write the final sorting function, using a natural number "[<=]" test [leb] from the standard library. |
70 [[ | 70 [[ |
71 Fixpoint mergeSort (ls : list A) : list A := | 71 Fixpoint mergeSort (ls : list A) : list A := |
72 if leb (length ls) 2 | 72 if leb (length ls) 2 |
73 then ls | 73 then ls |
74 else let lss := partition ls in | 74 else let lss := partition ls in |
90 fun (A : Type) (R : A -> A -> Prop) => forall a : A, Acc R a | 90 fun (A : Type) (R : A -> A -> Prop) => forall a : A, Acc R a |
91 ]] | 91 ]] |
92 | 92 |
93 The bulk of the definitional work devolves to the%\index{accessibility relation}\index{Gallina terms!Acc}% _accessibility_ relation [Acc], whose definition we may also examine. *) | 93 The bulk of the definitional work devolves to the%\index{accessibility relation}\index{Gallina terms!Acc}% _accessibility_ relation [Acc], whose definition we may also examine. *) |
94 | 94 |
| 95 (* begin hide *) |
| 96 Definition Acc_intro' := Acc_intro. |
| 97 (* end hide *) |
| 98 |
95 Print Acc. | 99 Print Acc. |
96 (** %\vspace{-.15in}% [[ | 100 (** %\vspace{-.15in}% [[ |
97 Inductive Acc (A : Type) (R : A -> A -> Prop) (x : A) : Prop := | 101 Inductive Acc (A : Type) (R : A -> A -> Prop) (x : A) : Prop := |
98 Acc_intro : (forall y : A, R y x -> Acc R y) -> Acc R x | 102 Acc_intro : (forall y : A, R y x -> Acc R y) -> Acc R x |
99 ]] | 103 ]] |
100 | 104 |
101 In prose, an element [x] is accessible for a relation [R] if every element %``%#"#less than#"#%''% [x] according to [R] is also accessible. Since [Acc] is defined inductively, we know that any accessibility proof involves a finite chain of invocations, in a certain sense that we can make formal. Building on Chapter 5's examples, let us define a co-inductive relation that is closer to the usual informal notion of %``%#"#absence of infinite decreasing chains.#"#%''% *) | 105 In prose, an element [x] is accessible for a relation [R] if every element "less than" [x] according to [R] is also accessible. Since [Acc] is defined inductively, we know that any accessibility proof involves a finite chain of invocations, in a certain sense that we can make formal. Building on Chapter 5's examples, let us define a co-inductive relation that is closer to the usual informal notion of "absence of infinite decreasing chains." *) |
102 | 106 |
103 CoInductive isChain A (R : A -> A -> Prop) : stream A -> Prop := | 107 CoInductive isChain A (R : A -> A -> Prop) : stream A -> Prop := |
104 | ChainCons : forall x y s, isChain R (Cons y s) | 108 | ChainCons : forall x y s, isChain R (Cons y s) |
105 -> R y x | 109 -> R y x |
106 -> isChain R (Cons x (Cons y s)). | 110 -> isChain R (Cons x (Cons y s)). |
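To make the finite-chain intuition concrete, here is a small aside (my own example, not from the file): we can build an accessibility proof by hand in the base case of [lt], and for larger numbers simply appeal to the standard library's proof [lt_wf] that [lt] is well founded.

Require Import Coq.Arith.Wf_nat.

(* No [y] satisfies [y < 0], so the premise of [Acc_intro] holds vacuously. *)
Theorem zero_acc : Acc lt 0.
  constructor; intros y Hy; inversion Hy.
Qed.

(* [lt_wf : well_founded lt] bundles such towers of [Acc_intro] for every number. *)
Theorem two_acc : Acc lt 2.
  apply lt_wf.
Qed.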
139 A call to %\index{Gallina terms!Fix}%[Fix] must present a relation [R] and a proof of its well-foundedness. The next argument, [P], is the possibly dependent range type of the function we build; the domain [A] of [R] is the function's domain. The following argument has this type: | 143 A call to %\index{Gallina terms!Fix}%[Fix] must present a relation [R] and a proof of its well-foundedness. The next argument, [P], is the possibly dependent range type of the function we build; the domain [A] of [R] is the function's domain. The following argument has this type: |
140 [[ | 144 [[ |
141 forall x : A, (forall y : A, R y x -> P y) -> P x | 145 forall x : A, (forall y : A, R y x -> P y) -> P x |
142 ]] | 146 ]] |
143 | 147 |
144 This is an encoding of the function body. The input [x] stands for the function argument, and the next input stands for the function we are defining. Recursive calls are encoded as calls to the second argument, whose type tells us it expects a value [y] and a proof that [y] is %``%#"#less than#"#%''% [x], according to [R]. In this way, we enforce the well-foundedness restriction on recursive calls. | 148 This is an encoding of the function body. The input [x] stands for the function argument, and the next input stands for the function we are defining. Recursive calls are encoded as calls to the second argument, whose type tells us it expects a value [y] and a proof that [y] is "less than" [x], according to [R]. In this way, we enforce the well-foundedness restriction on recursive calls. |
145 | 149 |
146 The rest of [Fix]'s type tells us that it returns a function of exactly the type we expect, so we are now ready to use it to implement [mergeSort]. Careful readers may have noticed that [Fix] has a dependent type of the sort we met in the previous chapter. | 150 The rest of [Fix]'s type tells us that it returns a function of exactly the type we expect, so we are now ready to use it to implement [mergeSort]. Careful readers may have noticed that [Fix] has a dependent type of the sort we met in the previous chapter. |
147 | 151 |
148 Before writing [mergeSort], we need to settle on a well-founded relation. The right one for this example is based on lengths of lists. *) | 152 Before writing [mergeSort], we need to settle on a well-founded relation. The right one for this example is based on lengths of lists. *) |
149 | 153 |
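Before the development resumes, here is a toy, self-contained use of [Fix] on a simpler problem (my own sketch; [copy_nat] is not part of the file), just to show the calling convention described above: the body receives the argument together with a recursor that may only be invoked on provably smaller values.

Require Import Coq.Arith.Wf_nat.

(* The identity function on [nat], written with [Fix] over [lt].  The [match]
   is annotated so that the recursor's type is refined along with [n], and the
   recursive call must supply a proof of [n' < S n']. *)
Definition copy_nat : nat -> nat :=
  Fix lt_wf (fun _ : nat => nat)
    (fun (n : nat) (rec : forall m : nat, m < n -> nat) =>
      match n as n0 return (forall m : nat, m < n0 -> nat) -> nat with
        | O => fun _ => O
        | S n' => fun rec' => S (rec' n' (le_n (S n')))
      end rec).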
222 then let lss := partition ls in | 226 then let lss := partition ls in |
223 merge le (mergeSort le (fst lss)) (mergeSort le (snd lss)) | 227 merge le (mergeSort le (fst lss)) (mergeSort le (snd lss)) |
224 else ls. | 228 else ls. |
225 intros; apply (Fix_eq (@lengthOrder_wf A) (fun _ => list A)); intros. | 229 intros; apply (Fix_eq (@lengthOrder_wf A) (fun _ => list A)); intros. |
226 | 230 |
227 (** The library theorem [Fix_eq] imposes one more strange subgoal upon us. We must prove that the function body is unable to distinguish between %``%#"#self#"#%''% arguments that map equal inputs to equal outputs. One might think this should be true of any Gallina code, but in fact this general%\index{extensionality}% _function extensionality_ property is neither provable nor disprovable within Coq. The type of [Fix_eq] makes clear what we must show manually: *) | 231 (** The library theorem [Fix_eq] imposes one more strange subgoal upon us. We must prove that the function body is unable to distinguish between "self" arguments that map equal inputs to equal outputs. One might think this should be true of any Gallina code, but in fact this general%\index{extensionality}% _function extensionality_ property is neither provable nor disprovable within Coq. The type of [Fix_eq] makes clear what we must show manually: *) |
228 | 232 |
229 Check Fix_eq. | 233 Check Fix_eq. |
230 (** %\vspace{-.15in}%[[ | 234 (** %\vspace{-.15in}%[[ |
231 Fix_eq | 235 Fix_eq |
232 : forall (A : Type) (R : A -> A -> Prop) (Rwf : well_founded R) | 236 : forall (A : Type) (R : A -> A -> Prop) (Rwf : well_founded R) |
276 Some more recent Coq features provide more convenient syntax for defining recursive functions. Interested readers can consult the Coq manual about the commands %\index{Function}%[Function] and %\index{Program Fixpoint}%[Program Fixpoint]. *) | 280 Some more recent Coq features provide more convenient syntax for defining recursive functions. Interested readers can consult the Coq manual about the commands %\index{Function}%[Function] and %\index{Program Fixpoint}%[Program Fixpoint]. *) |
277 | 281 |
278 | 282 |
279 (** * A Non-Termination Monad Inspired by Domain Theory *) | 283 (** * A Non-Termination Monad Inspired by Domain Theory *) |
280 | 284 |
281 (** The key insights of %\index{domain theory}%domain theory%~\cite{WinskelDomains}% inspire the next approach to modeling non-termination. Domain theory is based on _information orders_ that relate values representing computation results, according to how much information these values convey. For instance, a simple domain might include values %``%#"#the program does not terminate#"#%''% and %``%#"#the program terminates with the answer 5.#"#%''% The former is considered to be an _approximation_ of the latter, while the latter is _not_ an approximation of %``%#"#the program terminates with the answer 6.#"#%''% The details of domain theory will not be important in what follows; we merely borrow the notion of an approximation ordering on computation results. | 285 (** The key insights of %\index{domain theory}%domain theory%~\cite{WinskelDomains}% inspire the next approach to modeling non-termination. Domain theory is based on _information orders_ that relate values representing computation results, according to how much information these values convey. For instance, a simple domain might include values "the program does not terminate" and "the program terminates with the answer 5." The former is considered to be an _approximation_ of the latter, while the latter is _not_ an approximation of "the program terminates with the answer 6." The details of domain theory will not be important in what follows; we merely borrow the notion of an approximation ordering on computation results. |
282 | 286 |
283 Consider this definition of a type of computations. *) | 287 Consider this definition of a type of computations. *) |
284 | 288 |
285 Section computation. | 289 Section computation. |
286 Variable A : Type. | 290 Variable A : Type. |
435 (** A simple notation lets us write [Bind] calls the way they appear in Haskell. *) | 439 (** A simple notation lets us write [Bind] calls the way they appear in Haskell. *) |
436 | 440 |
437 Notation "x <- m1 ; m2" := | 441 Notation "x <- m1 ; m2" := |
438 (Bind m1 (fun x => m2)) (right associativity, at level 70). | 442 (Bind m1 (fun x => m2)) (right associativity, at level 70). |
439 | 443 |
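As a quick sanity check of the notation (an invented example; it relies on the [Return] and [Bind] combinators defined in the portion of this section not shown here):

Definition add_two (c : computation nat) : computation nat :=
  n <- c; Return (n + 2).

This is simply [Bind c (fun n => Return (n + 2))], written the Haskell way.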
440 (** We can verify that we have indeed defined a monad, by proving the standard monad laws. Part of the exercise is choosing an appropriate notion of equality between computations. We use %``%#"#equality at all approximation levels.#"#%''% *) | 444 (** We can verify that we have indeed defined a monad, by proving the standard monad laws. Part of the exercise is choosing an appropriate notion of equality between computations. We use "equality at all approximation levels." *) |
441 | 445 |
442 Definition meq A (m1 m2 : computation A) := forall n, proj1_sig m1 n = proj1_sig m2 n. | 446 Definition meq A (m1 m2 : computation A) := forall n, proj1_sig m1 n = proj1_sig m2 n. |
443 | 447 |
444 Theorem left_identity : forall A B (a : A) (f : A -> computation B), | 448 Theorem left_identity : forall A B (a : A) (f : A -> computation B), |
445 meq (Bind (Return a) f) (f a). | 449 meq (Bind (Return a) f) (f a). |
567 ls2 <- mergeSort (snd lss); | 571 ls2 <- mergeSort (snd lss); |
568 Return (merge le ls1 ls2) | 572 Return (merge le ls1 ls2) |
569 else Return ls) _); abstract mergeSort'. | 573 else Return ls) _); abstract mergeSort'. |
570 Defined. | 574 Defined. |
571 | 575 |
572 (** Furthermore, %``%#"#running#"#%''% [mergeSort'] on concrete inputs is as easy as choosing a sufficiently high approximation level and letting Coq's computation rules do the rest. Contrast this with the proof work that goes into deriving an evaluation fact for a deeply embedded language, with one explicit proof rule application per execution step. *) | 576 (** Furthermore, "running" [mergeSort'] on concrete inputs is as easy as choosing a sufficiently high approximation level and letting Coq's computation rules do the rest. Contrast this with the proof work that goes into deriving an evaluation fact for a deeply embedded language, with one explicit proof rule application per execution step. *) |
573 | 577 |
574 Lemma test_mergeSort' : run (mergeSort' leb (1 :: 2 :: 36 :: 8 :: 19 :: nil)) | 578 Lemma test_mergeSort' : run (mergeSort' leb (1 :: 2 :: 36 :: 8 :: 19 :: nil)) |
575 (1 :: 2 :: 8 :: 19 :: 36 :: nil). | 579 (1 :: 2 :: 8 :: 19 :: 36 :: nil). |
576 exists 4; reflexivity. | 580 exists 4; reflexivity. |
577 Qed. | 581 Qed. |
595 exists 1; reflexivity. | 599 exists 1; reflexivity. |
596 Qed. | 600 Qed. |
597 | 601 |
598 (** As before, proving outputs for specific inputs is as easy as demonstrating a high enough approximation level. | 602 (** As before, proving outputs for specific inputs is as easy as demonstrating a high enough approximation level. |
599 | 603 |
600 There are other theorems that are important to prove about combinators like [Return], [Bind], and [Fix]. In general, for a computation [c], we sometimes have a hypothesis proving [run c v] for some [v], and we want to perform inversion to deduce what [v] must be. Each combinator should ideally have a theorem of that kind, for [c] built directly from that combinator. We have omitted such theorems here, but they are not hard to prove. In general, the domain theory-inspired approach avoids the type-theoretic %``%#"#gotchas#"#%''% that tend to show up in approaches that try to mix normal Coq computation with explicit syntax types. The next section of this chapter demonstrates two alternate approaches of that sort. *) | 604 There are other theorems that are important to prove about combinators like [Return], [Bind], and [Fix]. In general, for a computation [c], we sometimes have a hypothesis proving [run c v] for some [v], and we want to perform inversion to deduce what [v] must be. Each combinator should ideally have a theorem of that kind, for [c] built directly from that combinator. We have omitted such theorems here, but they are not hard to prove. In general, the domain theory-inspired approach avoids the type-theoretic "gotchas" that tend to show up in approaches that try to mix normal Coq computation with explicit syntax types. The next section of this chapter demonstrates two alternate approaches of that sort. *) |
601 | 605 |
602 | 606 |
603 (** * Co-Inductive Non-Termination Monads *) | 607 (** * Co-Inductive Non-Termination Monads *) |
604 | 608 |
605 (** There are two key downsides to both of the previous approaches: both require unusual syntax based on explicit calls to fixpoint combinators, and both generate immediate proof obligations about the bodies of recursive definitions. In Chapter 5, we have already seen how co-inductive types support recursive definitions that exhibit certain well-behaved varieties of non-termination. It turns out that we can leverage that co-induction support for encoding of general recursive definitions, by adding layers of co-inductive syntax. In effect, we mix elements of shallow and deep embeddings. | 609 (** There are two key downsides to both of the previous approaches: both require unusual syntax based on explicit calls to fixpoint combinators, and both generate immediate proof obligations about the bodies of recursive definitions. In Chapter 5, we have already seen how co-inductive types support recursive definitions that exhibit certain well-behaved varieties of non-termination. It turns out that we can leverage that co-induction support for encoding of general recursive definitions, by adding layers of co-inductive syntax. In effect, we mix elements of shallow and deep embeddings. |
610 | Answer : A -> thunk A | 614 | Answer : A -> thunk A |
611 | Think : thunk A -> thunk A. | 615 | Think : thunk A -> thunk A. |
612 | 616 |
613 (** A computation is either an immediate [Answer] or another computation wrapped inside [Think]. Since [thunk] is co-inductive, every [thunk] type is inhabited by an infinite nesting of [Think]s, standing for non-termination. Terminating results are [Answer] wrapped inside some finite number of [Think]s. | 617 (** A computation is either an immediate [Answer] or another computation wrapped inside [Think]. Since [thunk] is co-inductive, every [thunk] type is inhabited by an infinite nesting of [Think]s, standing for non-termination. Terminating results are [Answer] wrapped inside some finite number of [Think]s. |
614 | 618 |
615 Why bother to write such a strange definition? The definition of [thunk] is motivated by the ability it gives us to define a %``%#"#bind#"#%''% operation, similar to the one we defined in the previous section. *) | 619 Why bother to write such a strange definition? The definition of [thunk] is motivated by the ability it gives us to define a "bind" operation, similar to the one we defined in the previous section. *) |
616 | 620 |
617 CoFixpoint TBind A B (m1 : thunk A) (m2 : A -> thunk B) : thunk B := | 621 CoFixpoint TBind A B (m1 : thunk A) (m2 : A -> thunk B) : thunk B := |
618 match m1 with | 622 match m1 with |
619 | Answer x => m2 x | 623 | Answer x => m2 x |
620 | Think m1' => Think (TBind m1' m2) | 624 | Think m1' => Think (TBind m1' m2) |
746 | 750 |
747 Notation "x <- m1 ; m2" := | 751 Notation "x <- m1 ; m2" := |
748 (TBind m1 (fun x => m2)) (right associativity, at level 70). | 752 (TBind m1 (fun x => m2)) (right associativity, at level 70). |
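For contrast with the failure exhibited next, here is a [thunk] computation that the guardedness checker does accept (my own example; it assumes the file's usual implicit-argument setting, so that the type argument of [Answer] and [Think] can be omitted). It passes because the corecursive call sits directly beneath the [Think] constructor, with no computation pending afterward.

CoFixpoint delayed_zero (n : nat) : thunk nat :=
  match n with
    | O => Answer 0
    | S n' => Think (delayed_zero n')
  end.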
749 | 753 |
750 (* begin hide *) | 754 (* begin hide *) |
751 Definition fib := 0. | 755 Definition fib := pred. |
752 (* end hide *) | 756 (* end hide *) |
753 | 757 |
754 (** %\vspace{-.15in}%[[ | 758 (** %\vspace{-.15in}%[[ |
755 CoFixpoint fib (n : nat) : thunk nat := | 759 CoFixpoint fib (n : nat) : thunk nat := |
756 match n with | 760 match n with |
764 | 768 |
765 Coq complains that the guardedness condition is violated. The two recursive calls are immediate arguments to [TBind], but [TBind] is not a constructor of [thunk]. Rather, it is a defined function. This example shows a very serious limitation of [thunk] for traditional functional programming: it is not, in general, possible to make recursive calls and then make further recursive calls, depending on the first call's result. The [fact] example succeeded because it was already tail recursive, meaning no further computation is needed after a recursive call. | 769 Coq complains that the guardedness condition is violated. The two recursive calls are immediate arguments to [TBind], but [TBind] is not a constructor of [thunk]. Rather, it is a defined function. This example shows a very serious limitation of [thunk] for traditional functional programming: it is not, in general, possible to make recursive calls and then make further recursive calls, depending on the first call's result. The [fact] example succeeded because it was already tail recursive, meaning no further computation is needed after a recursive call. |
766 | 770 |
767 %\medskip% | 771 %\medskip% |
768 | 772 |
769 I know no easy fix for this problem of [thunk], but we can define an alternate co-inductive monad that avoids the problem, based on a proposal by Megacz%~\cite{Megacz}%. We ran into trouble because [TBind] was not a constructor of [thunk], so let us define a new type family where %``%#"#bind#"#%''% is a constructor. *) | 773 I know no easy fix for this problem of [thunk], but we can define an alternate co-inductive monad that avoids the problem, based on a proposal by Megacz%~\cite{Megacz}%. We ran into trouble because [TBind] was not a constructor of [thunk], so let us define a new type family where "bind" is a constructor. *) |
770 | 774 |
771 CoInductive comp (A : Type) : Type := | 775 CoInductive comp (A : Type) : Type := |
772 | Ret : A -> comp A | 776 | Ret : A -> comp A |
773 | Bnd : forall B, comp B -> (B -> comp A) -> comp A. | 777 | Bnd : forall B, comp B -> (B -> comp A) -> comp A. |
774 | 778 |
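To see why promoting "bind" to a constructor helps, here is a sketch (my own example, not the file's; it uses [Bnd] directly, since at this point the [<-] notation still refers to [TBind], and it infers the implicit arguments of [Ret] and [Bnd] just as the associativity lemma visible below does): the Ackermann function, whose later recursive calls depend on the results of earlier ones, in exactly the style that [thunk] could not express.

CoFixpoint ack (m n : nat) : comp nat :=
  match m, n with
    | O, _ => Ret (S n)
    | S m', O => Bnd (Ret 1) (fun one => ack m' one)
    | S m', S n' => Bnd (ack m n') (fun r => ack m' r)
  end.

Every corecursive call appears either as a direct argument of [Bnd] or inside the function passed to it, so the guardedness checker is satisfied; the middle branch wraps what would otherwise be an unguarded call in a trivial [Bnd] for exactly that reason.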
843 comp_eq (Bnd (Bnd m f) g) (Bnd m (fun x => Bnd (f x) g)). | 847 comp_eq (Bnd (Bnd m f) g) (Bnd m (fun x => Bnd (f x) g)). |
844 red; crush; eauto. | 848 red; crush; eauto. |
845 Qed. | 849 Qed. |
846 (* end hide *) | 850 (* end hide *) |
847 | 851 |
848 (** Not only can we define the Fibonacci function with the new monad, but even our running example of merge sort becomes definable. By shadowing our previous notation for %``%#"#bind,#"#%''% we can write almost exactly the same code as in our previous [mergeSort'] definition, but with less syntactic clutter. *) | 852 (** Not only can we define the Fibonacci function with the new monad, but even our running example of merge sort becomes definable. By shadowing our previous notation for "bind," we can write almost exactly the same code as in our previous [mergeSort'] definition, but with less syntactic clutter. *) |
849 | 853 |
850 Notation "x <- m1 ; m2" := (Bnd m1 (fun x => m2)). | 854 Notation "x <- m1 ; m2" := (Bnd m1 (fun x => m2)). |
851 | 855 |
852 CoFixpoint mergeSort'' A (le : A -> A -> bool) (ls : list A) : comp (list A) := | 856 CoFixpoint mergeSort'' A (le : A -> A -> bool) (ls : list A) : comp (list A) := |
853 if le_lt_dec 2 (length ls) | 857 if le_lt_dec 2 (length ls) |