# HG changeset patch # User Adam Chlipala # Date 1343851777 14400 # Node ID 89c67796754e1c3bc3289d1a145ef948a90be5c2 # Parent cbfd23b4364de7ba3205a08adee5fbfe8fbca284 Undo some overzealous vspace tweaks diff -r cbfd23b4364d -r 89c67796754e src/Coinductive.v --- a/src/Coinductive.v Wed Aug 01 15:52:50 2012 -0400 +++ b/src/Coinductive.v Wed Aug 01 16:09:37 2012 -0400 @@ -29,7 +29,7 @@ Fixpoint bad (u : unit) : P := bad u. ]] -%\smallskip{}%This would leave us with [bad tt] as a proof of [P]. +This would leave us with [bad tt] as a proof of [P]. There are also algorithmic considerations that make universal termination very desirable. We have seen how tactics like [reflexivity] compare terms up to equivalence under computational rules. Calls to recursive, pattern-matching functions are simplified automatically, with no need for explicit proof steps. It would be very hard to hold onto that kind of benefit if it became possible to write non-terminating programs; we would be running smack into the halting problem. diff -r cbfd23b4364d -r 89c67796754e src/InductiveTypes.v --- a/src/InductiveTypes.v Wed Aug 01 15:52:50 2012 -0400 +++ b/src/InductiveTypes.v Wed Aug 01 16:09:37 2012 -0400 @@ -325,7 +325,7 @@ P O -> (forall n : nat, P n -> P (S n)) -> forall n : nat, P n ]] -%\smallskip{}%Each of the two cases of our last proof came from the type of one of the arguments to [nat_ind]. We chose [P] to be [(fun n : nat => plus n O = n)]. The first proof case corresponded to [P O] and the second case to [(forall n : nat, P n -> P (S n))]. The free variable [n] and inductive hypothesis [IHn] came from the argument types given here. +Each of the two cases of our last proof came from the type of one of the arguments to [nat_ind]. We chose [P] to be [(fun n : nat => plus n O = n)]. The first proof case corresponded to [P O] and the second case to [(forall n : nat, P n -> P (S n))]. The free variable [n] and inductive hypothesis [IHn] came from the argument types given here. Since [nat] has a constructor that takes an argument, we may sometimes need to know that that constructor is injective.%\index{tactics!injection}\index{tactics!trivial}% *) @@ -501,7 +501,7 @@ Nil : list T | Cons : T -> list T -> list T ]] -%\smallskip{}%The final definition is the same as what we wrote manually before. The other elements of the section are altered similarly, turning out exactly as they were before, though we managed to write their definitions more succinctly. *) +The final definition is the same as what we wrote manually before. The other elements of the section are altered similarly, turning out exactly as they were before, though we managed to write their definitions more succinctly. *) Check length. (** %\vspace{-.15in}% [[ @@ -509,7 +509,7 @@ : forall T : Set, list T -> nat ]] -%\smallskip{}%The parameter [T] is treated as a new argument to the induction principle, too. *) +The parameter [T] is treated as a new argument to the induction principle, too. *) Check list_ind. (** %\vspace{-.15in}% [[ @@ -520,7 +520,7 @@ forall l : list T, P l ]] -%\smallskip{}%Thus, even though we just saw that [T] is added as an extra argument to the constructor [Cons], there is no quantifier for [T] in the type of the inductive case like there is for each of the other arguments. *) +Thus, even though we just saw that [T] is added as an extra argument to the constructor [Cons], there is no quantifier for [T] in the type of the inductive case like there is for each of the other arguments. 
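As a quick sanity check (an illustrative sketch that is not part of the original development, assuming the [app] function defined in this section of the book, with constructor and function type arguments still explicit), we can exercise this induction principle through the [induction] tactic on a statement that quantifies over the element type:
[[
(* Illustrative only: app, Nil, and Cons are used with explicit type arguments. *)
Theorem app_Nil : forall (T : Set) (ls : list T),
  app T ls (Nil T) = ls.
  intros T ls; induction ls; simpl; congruence.
Qed.
]]
The parameter [T] is introduced once, up front, and the induction itself only ever deals with lists over that fixed element type, mirroring the quantifier structure of [list_ind].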
*) (** * Mutually Inductive Types *) @@ -584,7 +584,7 @@ forall e : even_list, P e ]] -%\smallskip{}%We see that no inductive hypotheses are included anywhere in the type. To get them, we must ask for mutual principles as we need them, using the %\index{Vernacular commands!Scheme}%[Scheme] command. *) +We see that no inductive hypotheses are included anywhere in the type. To get them, we must ask for mutual principles as we need them, using the %\index{Vernacular commands!Scheme}%[Scheme] command. *) Scheme even_list_mut := Induction for even_list Sort Prop with odd_list_mut := Induction for odd_list Sort Prop. @@ -695,7 +695,7 @@ forall f2 : formula, P f2 ]] -%\smallskip{}%Focusing on the [Forall] case, which comes third, we see that we are allowed to assume that the theorem holds _for any application of the argument function [f1]_. That is, Coq induction principles do not follow a simple rule that the textual representations of induction variables must get shorter in appeals to induction hypotheses. Luckily for us, the people behind the metatheory of Coq have verified that this flexibility does not introduce unsoundness. +Focusing on the [Forall] case, which comes third, we see that we are allowed to assume that the theorem holds _for any application of the argument function [f1]_. That is, Coq induction principles do not follow a simple rule that the textual representations of induction variables must get shorter in appeals to induction hypotheses. Luckily for us, the people behind the metatheory of Coq have verified that this flexibility does not introduce unsoundness. %\medskip% @@ -749,7 +749,7 @@ : forall P : unit -> Prop, P tt -> forall u : unit, P u ]] -%\smallskip{}%We see that this induction principle is defined in terms of a more general principle, [unit_rect]. The <<rec>> stands for "recursion principle," and the <<t>> at the end stands for [Type]. *) +We see that this induction principle is defined in terms of a more general principle, [unit_rect]. The <<rec>> stands for "recursion principle," and the <<t>> at the end stands for [Type]. *) Check unit_rect. (** %\vspace{-.15in}% [[ @@ -757,7 +757,7 @@ : forall P : unit -> Type, P tt -> forall u : unit, P u ]] -%\smallskip{}%The principle [unit_rect] gives [P] type [unit -> Type] instead of [unit -> Prop]. [Type] is another universe, like [Set] and [Prop]. In fact, it is a common supertype of both. Later on, we will discuss exactly what the significances of the different universes are. For now, it is just important that we can use [Type] as a sort of meta-universe that may turn out to be either [Set] or [Prop]. We can see the symmetry inherent in the subtyping relationship by printing the definition of another principle that was generated for [unit] automatically: *) +The principle [unit_rect] gives [P] type [unit -> Type] instead of [unit -> Prop]. [Type] is another universe, like [Set] and [Prop]. In fact, it is a common supertype of both. Later on, we will discuss exactly what the significances of the different universes are. For now, it is just important that we can use [Type] as a sort of meta-universe that may turn out to be either [Set] or [Prop]. We can see the symmetry inherent in the subtyping relationship by printing the definition of another principle that was generated for [unit] automatically: *) Print unit_rec. (** %\vspace{-.15in}%[[ @@ -766,7 +766,7 @@ : forall P : unit -> Set, P tt -> forall u : unit, P u ]] -%\smallskip{}%This is identical to the definition for [unit_ind], except that we have substituted [Set] for [Prop]. 
For most inductive types [T], then, we get not just induction principles [T_ind], but also %\index{recursion principles}%recursion principles [T_rec]. We can use [T_rec] to write recursive definitions without explicit [Fixpoint] recursion. For instance, the following two definitions are equivalent: *) +This is identical to the definition for [unit_ind], except that we have substituted [Set] for [Prop]. For most inductive types [T], then, we get not just induction principles [T_ind], but also %\index{recursion principles}%recursion principles [T_rec]. We can use [T_rec] to write recursive definitions without explicit [Fixpoint] recursion. For instance, the following two definitions are equivalent: *) Definition always_O (u : unit) : nat := match u with @@ -788,7 +788,7 @@ : forall P : unit -> Type, P tt -> forall u : unit, P u ]] -%\smallskip{}%The only new wrinkle here is the annotations on the [match] expression. This is a%\index{dependent pattern matching}% _dependently typed_ pattern match, because the _type_ of the expression depends on the _value_ being matched on. Of course, for this example, the dependency is degenerate; the value being matched on has type [unit], so it may only take on a single known value, [tt]. We will meet more involved examples later, especially in Part II of the book. +The only new wrinkle here is the annotations on the [match] expression. This is a%\index{dependent pattern matching}% _dependently typed_ pattern match, because the _type_ of the expression depends on the _value_ being matched on. Of course, for this example, the dependency is degenerate; the value being matched on has type [unit], so it may only take on a single known value, [tt]. We will meet more involved examples later, especially in Part II of the book. %\index{type inference}%Type inference for dependent pattern matching is undecidable, which can be proved by reduction from %\index{higher-order unification}%higher-order unification%~\cite{HOU}%. Thus, we often find ourselves needing to annotate our programs in a way that explains dependencies to the type checker. In the example of [unit_rect], we have an %\index{Gallina terms!as}%[as] clause, which binds a name for the discriminee; and a %\index{Gallina terms!return}%[return] clause, which gives a way to compute the [match] result type as a function of the discriminee. @@ -822,7 +822,7 @@ P O -> (forall n : nat, P n -> P (S n)) -> forall n : nat, P n ]] -%\smallskip{}%Now we have an actual recursive definition. Expressions starting with %\index{Gallina terms!fix}%[fix] are anonymous forms of [Fixpoint], just as [fun] expressions stand for anonymous non-recursive functions. Beyond that, the syntax of [fix] mirrors that of [Fixpoint]. We can understand the definition of [nat_rect] better by reimplementing [nat_ind] using sections. *) +Now we have an actual recursive definition. Expressions starting with %\index{Gallina terms!fix}%[fix] are anonymous forms of [Fixpoint], just as [fun] expressions stand for anonymous non-recursive functions. Beyond that, the syntax of [fix] mirrors that of [Fixpoint]. We can understand the definition of [nat_rect] better by reimplementing [nat_ind] using sections. *) Section nat_ind'. (** First, we have the property of natural numbers that we aim to prove. *) @@ -875,7 +875,7 @@ forall e : even_list, P e ]] -%\smallskip{}%We see a mutually recursive [fix], with the different functions separated by %\index{Gallina terms!with}%[with] in the same way that they would be separated by <<and>> in ML. 
A final %\index{Gallina terms!for}%[for] clause identifies which of the mutually recursive functions should be the final value of the [fix] expression. Using this definition as a template, we can reimplement [even_list_mut] directly. *) +We see a mutually recursive [fix], with the different functions separated by %\index{Gallina terms!with}%[with] in the same way that they would be separated by <<and>> in ML. A final %\index{Gallina terms!for}%[for] clause identifies which of the mutually recursive functions should be the final value of the [fix] expression. Using this definition as a template, we can reimplement [even_list_mut] directly. *) Section even_list_mut'. (** First, we need the properties that we are proving. *) @@ -944,7 +944,7 @@ forall n : nat_tree, P n ]] -%\smallskip{}%There is no command like [Scheme] that will implement an improved principle for us. In general, it takes creativity to figure out how to incorporate nested uses of different type families. This is roughly the same creativity employed in the traditional task of strengthening an induction hypothesis. Now that we know how to implement induction principles manually, we are in a position to apply just such creativity to this problem. +There is no command like [Scheme] that will implement an improved principle for us. In general, it takes creativity to figure out how to incorporate nested uses of different type families. This is roughly the same creativity employed in the traditional task of strengthening an induction hypothesis. Now that we know how to implement induction principles manually, we are in a position to apply just such creativity to this problem. First, we will need an auxiliary definition, characterizing what it means for a property to hold of every element of a list. *) @@ -966,7 +966,7 @@ Inductive True : Prop := I : True ]] -%\smallskip{}%That is, [True] is a proposition with exactly one proof, [I], which we may always supply trivially. +That is, [True] is a proposition with exactly one proof, [I], which we may always supply trivially. Finding the definition of [/\] takes a little more work. Coq supports user registration of arbitrary parsing rules, and it is such a rule that is letting us write [/\] instead of an application of some inductive type family. We can find the underlying inductive type with the %\index{Vernacular commands!Locate}%[Locate] command, whose argument may be a parsing token.%\index{Gallina terms!and}% *) @@ -1020,7 +1020,7 @@ end. ]] - %\smallskip{}%Coq rejects this definition, saying + Coq rejects this definition, saying << Recursive call to nat_tree_ind' has principal argument equal to "tr" instead of rest. @@ -1150,7 +1150,7 @@ true = false -> False ]] -%\smallskip{}%The negation is replaced with an implication of falsehood. We use the tactic %\index{tactics!intro}%[intro H] to change the assumption of the implication into a hypothesis named [H]. *) +The negation is replaced with an implication of falsehood. We use the tactic %\index{tactics!intro}%[intro H] to change the assumption of the implication into a hypothesis named [H]. *) intro H. (** %\vspace{-.15in}%[[ @@ -1159,7 +1159,7 @@ False ]] -%\smallskip{}%This is the point in the proof where we apply some creativity. We define a function whose utility will become clear soon. *) +This is the point in the proof where we apply some creativity. We define a function whose utility will become clear soon. *) Definition toProp (b : bool) := if b then True else False. 
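(** To preview why this function will help (an illustrative aside, not part of the original proof script), note how it computes on each [bool] constructor:
[[
Eval simpl in toProp true.  (* = True : Prop *)
Eval simpl in toProp false. (* = False : Prop *)
]]
An assumed equality between [true] and [false] will therefore let us rewrite the impossible goal [toProp false] into the trivially provable goal [toProp true], which is exactly the step that follows. *)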
@@ -1172,7 +1172,7 @@ toProp false ]] -%\smallskip{}%Now the righthand side of [H]'s equality appears in the conclusion, so we can rewrite, using the notation [<-] to request to replace the righthand side of the equality with the lefthand side.%\index{tactics!rewrite}% *) +Now the righthand side of [H]'s equality appears in the conclusion, so we can rewrite, using the notation [<-] to request to replace the righthand side of the equality with the lefthand side.%\index{tactics!rewrite}% *) rewrite <- H. (** %\vspace{-.15in}%[[ @@ -1181,7 +1181,7 @@ toProp true ]] -%\smallskip{}%We are almost done. Just how close we are to done is revealed by computational simplification. *) +We are almost done. Just how close we are to done is revealed by computational simplification. *) simpl. (** %\vspace{-.15in}%[[ diff -r cbfd23b4364d -r 89c67796754e src/Predicates.v --- a/src/Predicates.v Wed Aug 01 15:52:50 2012 -0400 +++ b/src/Predicates.v Wed Aug 01 16:09:37 2012 -0400 @@ -45,7 +45,7 @@ Inductive True : Prop := I : True ]] -%\smallskip{}%Recall that [unit] is the type with only one value, and [True] is the proposition that always holds. Despite this superficial difference between the two concepts, in both cases we can use the same inductive definition mechanism. The connection goes further than this. We see that we arrive at the definition of [True] by replacing [unit] by [True], [tt] by [I], and [Set] by [Prop]. The first two of these differences are superficial changes of names, while the third difference is the crucial one for separating programs from proofs. A term [T] of type [Set] is a type of programs, and a term of type [T] is a program. A term [T] of type [Prop] is a logical proposition, and its proofs are of type [T]. Chapter 12 goes into more detail about the theoretical differences between [Prop] and [Set]. For now, we will simply follow common intuitions about what a proof is. +Recall that [unit] is the type with only one value, and [True] is the proposition that always holds. Despite this superficial difference between the two concepts, in both cases we can use the same inductive definition mechanism. The connection goes further than this. We see that we arrive at the definition of [True] by replacing [unit] by [True], [tt] by [I], and [Set] by [Prop]. The first two of these differences are superficial changes of names, while the third difference is the crucial one for separating programs from proofs. A term [T] of type [Set] is a type of programs, and a term of type [T] is a program. A term [T] of type [Prop] is a logical proposition, and its proofs are of type [T]. Chapter 12 goes into more detail about the theoretical differences between [Prop] and [Set]. For now, we will simply follow common intuitions about what a proof is. The type [unit] has one value, [tt]. The type [True] has one proof, [I]. Why distinguish between these two types? Many people who have read about Curry-Howard in an abstract context and not put it to use in proof engineering answer that the two types in fact _should not_ be distinguished. There is a certain aesthetic appeal to this point of view, but I want to argue that it is best to treat Curry-Howard very loosely in practical proving. There are Coq-specific reasons for preferring the distinction, involving efficient compilation and avoidance of paradoxes in the presence of classical math, but I will argue that there is a more general principle that should lead us to avoid conflating programming and proving. 
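To see how mechanical the renaming observation above really is (an illustrative sketch, not in the original text, with primed names chosen to avoid clashing with the standard definitions), we can replay both declarations ourselves:
[[
Inductive unit' : Set := tt' : unit'.
Inductive True' : Prop := I' : True'.

Check tt'. (* tt' : unit' *)
Check I'.  (* I' : True' *)
]]
Only the names and the choice of [Set] versus [Prop] differ; the inductive definition mechanism itself is exactly the same in both worlds.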
@@ -87,7 +87,7 @@ Inductive False : Prop := ]] - %\smallskip{}%We can conclude anything from [False], doing case analysis on a proof of [False] in the same way we might do case analysis on, say, a natural number. Since there are no cases to consider, any such case analysis succeeds immediately in proving the goal. *) + We can conclude anything from [False], doing case analysis on a proof of [False] in the same way we might do case analysis on, say, a natural number. Since there are no cases to consider, any such case analysis succeeds immediately in proving the goal. *) Theorem False_imp : False -> 2 + 2 = 5. (* begin thide *) @@ -129,7 +129,7 @@ : Prop -> Prop ]] - %\smallskip{}%We see that [not] is just shorthand for implication of [False]. We can use that fact explicitly in proofs. The syntax [~ P] expands to [not P]. *) + We see that [not] is just shorthand for implication of [False]. We can use that fact explicitly in proofs. The syntax [~ P] expands to [not P]. *) Theorem arith_neq' : ~ (2 + 2 = 5). (* begin thide *) @@ -151,7 +151,7 @@ Inductive and (A : Prop) (B : Prop) : Prop := conj : A -> B -> A /\ B ]] - %\smallskip{}%The interested reader can check that [and] has a Curry-Howard equivalent called %\index{Gallina terms!prod}%[prod], the type of pairs. However, it is generally most convenient to reason about conjunction using tactics. An explicit proof of commutativity of [and] illustrates the usual suspects for such tasks. The operator [/\] is an infix shorthand for [and]. *) + The interested reader can check that [and] has a Curry-Howard equivalent called %\index{Gallina terms!prod}%[prod], the type of pairs. However, it is generally most convenient to reason about conjunction using tactics. An explicit proof of commutativity of [and] illustrates the usual suspects for such tasks. The operator [/\] is an infix shorthand for [and]. *) Theorem and_comm : P /\ Q -> Q /\ P. @@ -199,7 +199,7 @@ or_introl : A -> A \/ B | or_intror : B -> A \/ B ]] -%\smallskip{}%We see that there are two ways to prove a disjunction: prove the first disjunct or prove the second. The Curry-Howard analogue of this is the Coq %\index{Gallina terms!sum}%[sum] type. We can demonstrate the main tactics here with another proof of commutativity. *) +We see that there are two ways to prove a disjunction: prove the first disjunct or prove the second. The Curry-Howard analogue of this is the Coq %\index{Gallina terms!sum}%[sum] type. We can demonstrate the main tactics here with another proof of commutativity. *) Theorem or_comm : P \/ Q -> Q \/ P. @@ -373,7 +373,7 @@ ex_intro : forall x : A, P x -> ex P ]] - %\smallskip{}%The family [ex] is parameterized by the type [A] that we quantify over, and by a predicate [P] over [A]s. We prove an existential by exhibiting some [x] of type [A], along with a proof of [P x]. As usual, there are tactics that save us from worrying about the low-level details most of the time. 
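As an illustrative aside that is not part of the original text (the definition name [exist1'] is made up here, and the use of [ex_intro] assumes the standard library's usual implicit-argument conventions), here is what such an explicit existential proof term can look like:
[[
Definition exist1' : exists x : nat, x + 1 = 2 :=
  ex_intro (fun x : nat => x + 1 = 2) 1 (eq_refl 2).
]]
The witness [1] and the proof that [1 + 1 = 2] are exactly the two ingredients that the tactic-based proof below builds for us.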
We use the equality operator [=], which, depending on the settings in which they learned logic, different people will say either is or is not part of first-order logic. For our purposes, it is. *) Theorem exist1 : exists x : nat, x + 1 = 2. (* begin thide *) @@ -463,7 +463,7 @@ Inductive eq (A : Type) (x : A) : A -> Prop := eq_refl : x = x ]] - %\smallskip{}%Behind the scenes, uses of infix [=] are expanded to instances of [eq]. We see that [eq] has both a parameter [x] that is fixed and an extra unnamed argument of the same type. The type of [eq] allows us to state any equalities, even those that are provably false. However, examining the type of equality's sole constructor [eq_refl], we see that we can only _prove_ equality when its two arguments are syntactically equal. This definition turns out to capture all of the basic properties of equality, and the equality-manipulating tactics that we have seen so far, like [reflexivity] and [rewrite], are implemented treating [eq] as just another inductive type with a well-chosen definition. Another way of stating that definition is: equality is defined as the least reflexive relation. + Behind the scenes, uses of infix [=] are expanded to instances of [eq]. We see that [eq] has both a parameter [x] that is fixed and an extra unnamed argument of the same type. The type of [eq] allows us to state any equalities, even those that are provably false. However, examining the type of equality's sole constructor [eq_refl], we see that we can only _prove_ equality when its two arguments are syntactically equal. This definition turns out to capture all of the basic properties of equality, and the equality-manipulating tactics that we have seen so far, like [reflexivity] and [rewrite], are implemented treating [eq] as just another inductive type with a well-chosen definition. Another way of stating that definition is: equality is defined as the least reflexive relation. Returning to the example of [isZero], we can see how to work with hypotheses that use this predicate. *) @@ -531,7 +531,7 @@ : forall P : nat -> Prop, P 0 -> forall n : nat, isZero n -> P n ]] - %\smallskip{}%In our last proof script, [destruct] chose to instantiate [P] as [fun n => S n + S n = S (S (S (S n)))]. You can verify for yourself that this specialization of the principle applies to the goal and that the hypothesis [P 0] then matches the subgoal we saw generated. If you are doing a proof and encounter a strange transmutation like this, there is a good chance that you should go back and replace a use of [destruct] with [inversion]. *) + In our last proof script, [destruct] chose to instantiate [P] as [fun n => S n + S n = S (S (S (S n)))]. You can verify for yourself that this specialization of the principle applies to the goal and that the hypothesis [P 0] then matches the subgoal we saw generated. If you are doing a proof and encounter a strange transmutation like this, there is a good chance that you should go back and replace a use of [destruct] with [inversion]. *) (* begin hide *) diff -r cbfd23b4364d -r 89c67796754e src/StackMachine.v --- a/src/StackMachine.v Wed Aug 01 15:52:50 2012 -0400 +++ b/src/StackMachine.v Wed Aug 01 16:09:37 2012 -0400 @@ -103,6 +103,9 @@ Eval simpl in expDenote (Binop Times (Binop Plus (Const 2) (Const 2)) (Const 7)). (** [= 28 : nat] *) +(** %\smallskip{}%Nothing too surprising goes on here, so we are ready to move on to the target language of our compiler. 
*) + + (** ** Target Language *) (** We will compile our source programs onto a simple stack machine, whose syntax is: *) @@ -140,6 +143,8 @@ end end. +(** With the two programming languages defined, we can turn to the compiler definition. *) + (** ** Translation *) @@ -163,7 +168,7 @@ Eval simpl in compile (Binop Times (Binop Plus (Const 2) (Const 2)) (Const 7)). (** [= iConst 7 :: iConst 2 :: iConst 2 :: iBinop Plus :: iBinop Times :: nil : prog] *) -(** We can also run our compiled programs and check that they give the right results. *) +(** %\smallskip{}%We can also run our compiled programs and check that they give the right results. *) Eval simpl in progDenote (compile (Const 42)) nil. (** [= Some (42 :: nil) : option stack] *) @@ -615,10 +620,12 @@ (TNConst 7)). (** [= true : typeDenote Bool] *) +(** %\smallskip{}%Now we are ready to define a suitable stack machine target for compilation. *) + (** ** Target Language *) -(** Now we want to define a suitable stack machine target for compilation. In the example of the untyped language, stack machine programs could encounter stack underflows and "get stuck." This was unfortunate, since we had to deal with this complication even though we proved that our compiler never produced underflowing programs. We could have used dependent types to force all stack machine programs to be underflow-free. +(** In the example of the untyped language, stack machine programs could encounter stack underflows and "get stuck." This was unfortunate, since we had to deal with this complication even though we proved that our compiler never produced underflowing programs. We could have used dependent types to force all stack machine programs to be underflow-free. For our new languages, besides underflow, we also have the problem of stack slots with naturals instead of bools or vice versa. This time, we will use indexed typed families to avoid the need to reason about potential failures. @@ -761,6 +768,8 @@ (TNConst 7)) nil) tt. (** [= (true, tt) : vstack (Bool :: nil)] *) +(** %\smallskip{}%The compiler seems to be working, so let us turn to proving that it _always_ works. *) + (** ** Translation Correctness *) diff -r cbfd23b4364d -r 89c67796754e src/Subset.v --- a/src/Subset.v Wed Aug 01 15:52:50 2012 -0400 +++ b/src/Subset.v Wed Aug 01 16:09:37 2012 -0400 @@ -85,7 +85,7 @@ : nat ]] -%\smallskip{}%One aspect in particular of the definition of [pred_strong1] may be surprising. We took advantage of [Definition]'s syntactic sugar for defining function arguments in the case of [n], but we bound the proofs later with explicit [fun] expressions. Let us see what happens if we write this function in the way that at first seems most natural. +One aspect in particular of the definition of [pred_strong1] may be surprising. We took advantage of [Definition]'s syntactic sugar for defining function arguments in the case of [n], but we bound the proofs later with explicit [fun] expressions. Let us see what happens if we write this function in the way that at first seems most natural. %\vspace{-.15in}%[[ Definition pred_strong1' (n : nat) (pf : n > 0) : nat := @@ -153,7 +153,7 @@ exist : forall x : A, P x -> sig P ]] -%\smallskip{}%The family [sig] is a Curry-Howard twin of [ex], except that [sig] is in [Type], while [ex] is in [Prop]. That means that [sig] values can survive extraction, while [ex] proofs will always be erased. The actual details of extraction of [sig]s are more subtle, as we will see shortly. 
+The family [sig] is a Curry-Howard twin of [ex], except that [sig] is in [Type], while [ex] is in [Prop]. That means that [sig] values can survive extraction, while [ex] proofs will always be erased. The actual details of extraction of [sig]s are more subtle, as we will see shortly. We rewrite [pred_strong1], using some syntactic sugar for subset types. *) @@ -301,7 +301,7 @@ : forall n : nat, n > 0 -> {m : nat | n = S m} ]] -%\smallskip{}%We see the code we entered, with some proofs filled in. The first proof obligation, the second argument to [False_rec], is filled in with a nasty-looking proof term that we can be glad we did not enter by hand. The second proof obligation is a simple reflexivity proof. *) +We see the code we entered, with some proofs filled in. The first proof obligation, the second argument to [False_rec], is filled in with a nasty-looking proof term that we can be glad we did not enter by hand. The second proof obligation is a simple reflexivity proof. *) Eval compute in pred_strong4 two_gt0. (** %\vspace{-.15in}% [[ @@ -309,7 +309,7 @@ : {m : nat | 2 = S m} ]] - %\smallskip{}%A tactic modifier called %\index{tactics!abstract}%[abstract] can be helpful for producing shorter terms, by automatically abstracting subgoals into named lemmas. *) + A tactic modifier called %\index{tactics!abstract}%[abstract] can be helpful for producing shorter terms, by automatically abstracting subgoals into named lemmas. *) (* begin thide *) Definition pred_strong4' : forall n : nat, n > 0 -> {m : nat | n = S m}. @@ -358,7 +358,7 @@ : {m : nat | 2 = S m} ]] - %\smallskip{}%One other alternative is worth demonstrating. Recent Coq versions include a facility called %\index{Program}%[Program] that streamlines this style of definition. Here is a complete implementation using [Program].%\index{Vernacular commands!Obligation Tactic}\index{Vernacular commands!Program Definition}% *) + One other alternative is worth demonstrating. Recent Coq versions include a facility called %\index{Program}%[Program] that streamlines this style of definition. Here is a complete implementation using [Program].%\index{Vernacular commands!Obligation Tactic}\index{Vernacular commands!Program Definition}% *) Obligation Tactic := crush. @@ -376,7 +376,7 @@ : {m : nat | 2 = S m} ]] -%\smallskip{}%In this case, we see that the new definition yields the same computational behavior as before. *) +In this case, we see that the new definition yields the same computational behavior as before. *) (** * Decidable Proposition Types *) @@ -421,7 +421,7 @@ : {2 = 3} + {2 <> 3} ]] -%\smallskip{}%Note that the %\coqdocnotation{%##Yes##%}% and %\coqdocnotation{%##No##%}% notations are hiding proofs establishing the correctness of the outputs. +Note that the %\coqdocnotation{%##Yes##%}% and %\coqdocnotation{%##No##%}% notations are hiding proofs establishing the correctness of the outputs. Our definition extracts to reasonable OCaml code. *) @@ -529,7 +529,7 @@ : {In 3 (1 :: 2 :: nil)} + {~ In 3 (1 :: 2 :: nil)} ]] -%\smallskip{}%[In_dec] has a reasonable extraction to OCaml. *) +[In_dec] has a reasonable extraction to OCaml. *) Extraction In_dec. (* end thide *) @@ -596,7 +596,7 @@ : {{m | 0 = S m}} ]] - %\smallskip{}%Because we used [maybe], one valid implementation of the type we gave [pred_strong7] would return [??] in every case. We can strengthen the type to rule out such vacuous implementations, and the type family %\index{Gallina terms!sumor}%[sumor] from the standard library provides the easiest starting point. 
For type [A] and proposition [B], [A + {B}] desugars to [sumor A B], whose values are either values of [A] or proofs of [B]. *) + Because we used [maybe], one valid implementation of the type we gave [pred_strong7] would return [??] in every case. We can strengthen the type to rule out such vacuous implementations, and the type family %\index{Gallina terms!sumor}%[sumor] from the standard library provides the easiest starting point. For type [A] and proposition [B], [A + {B}] desugars to [sumor A B], whose values are either values of [A] or proofs of [B]. *) Print sumor. (** %\vspace{-.15in}% [[ @@ -604,7 +604,7 @@ inleft : A -> A + {B} | inright : B -> A + {B} ]] -%\smallskip{}%We add notations for easy use of the [sumor] constructors. The second notation is specialized to [sumor]s whose [A] parameters are instantiated with regular subset types, since this is how we will use [sumor] below. *) +We add notations for easy use of the [sumor] constructors. The second notation is specialized to [sumor]s whose [A] parameters are instantiated with regular subset types, since this is how we will use [sumor] below. *) Notation "!!" := (inright _ _). Notation "[|| x ||]" := (inleft _ [x]). @@ -776,7 +776,7 @@ : {{t | hasType (Plus (Nat 1) (Bool false)) t}} ]] -%\smallskip{}%The type checker also extracts to some reasonable OCaml code. *) +The type checker also extracts to some reasonable OCaml code. *) Extraction typeCheck. @@ -929,4 +929,4 @@ {(forall t : type, ~ hasType (Plus (Nat 1) (Bool false)) t)} ]] -%\smallskip{}%The results of simplifying calls to [typeCheck'] look deceptively similar to the results for [typeCheck], but now the types of the results provide more information. *) +The results of simplifying calls to [typeCheck'] look deceptively similar to the results for [typeCheck], but now the types of the results provide more information. *)
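(** As a closing illustrative sketch that is not part of the chapter (the wrapper name [typeOf] is hypothetical, and the pattern syntax assumes a Coq 8.4-era treatment of constructor parameters), a caller can consume that extra information simply by pattern-matching on the [sumor] result:
[[
Definition typeOf (e : exp) : option type :=
  match typeCheck' e with
    | inleft (exist t _) => Some t  (* a type, together with a proof of [hasType e t] that we drop *)
    | inright _ => None             (* here we hold a proof that no type exists for [e] *)
  end.
]]
In the [inright] case we do not merely lack a type; we have a proof that no type can exist, which downstream code could use to justify its error handling once and for all. *)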