# HG changeset patch # User Adam Chlipala # Date 1289424664 18000 # Node ID 2c88fc1dbe33dafa4562111c8d6e4f5578245e75 # Parent da9ccc6bf572d182cc9fd9422d556c2c7e6eb30d A pass of double-quotes and LaTeX operator beautification diff -r da9ccc6bf572 -r 2c88fc1dbe33 src/Coinductive.v --- a/src/Coinductive.v Wed Nov 10 15:42:05 2010 -0500 +++ b/src/Coinductive.v Wed Nov 10 16:31:04 2010 -0500 @@ -33,7 +33,7 @@ There are also algorithmic considerations that make universal termination very desirable. We have seen how tactics like [reflexivity] compare terms up to equivalence under computational rules. Calls to recursive, pattern-matching functions are simplified automatically, with no need for explicit proof steps. It would be very hard to hold onto that kind of benefit if it became possible to write non-terminating programs; we would be running smack into the halting problem. -One solution is to use types to contain the possibility of non-termination. For instance, we can create a "non-termination monad," inside which we must write all of our general-recursive programs. This is a heavyweight solution, and so we would like to avoid it whenever possible. +One solution is to use types to contain the possibility of non-termination. For instance, we can create a %``%#"#non-termination monad,#"#%''% inside which we must write all of our general-recursive programs. This is a heavyweight solution, and so we would like to avoid it whenever possible. Luckily, Coq has special support for a class of lazy data structures that happens to contain most examples found in Haskell. That mechanism, %\textit{%##co-inductive types##%}%, is the subject of this chapter. *) @@ -328,7 +328,7 @@ Abort. (* end thide *) -(** The standard [auto] machinery sees that our goal matches an assumption and so applies that assumption, even though this violates guardedness. One usually starts a proof like this by [destruct]ing some parameter and running a custom tactic to figure out the first proof rule to apply for each case. Alternatively, there are tricks that can be played with "hiding" the co-inductive hypothesis. *) +(** The standard [auto] machinery sees that our goal matches an assumption and so applies that assumption, even though this violates guardedness. One usually starts a proof like this by [destruct]ing some parameter and running a custom tactic to figure out the first proof rule to apply for each case. Alternatively, there are tricks that can be played with %``%#"#hiding#"#%''% the co-inductive hypothesis. *) (** * Simple Modeling of Non-Terminating Programs *) @@ -387,7 +387,7 @@ -> safe pc' r' -> safe pc r. - (** Now we can prove that any starting address and register bank lead to safe infinite execution. Recall that proofs of existentially-quantified formulas are all built with a single constructor of the inductive type [ex]. This means that we can use [destruct] to "open up" such proofs. In the proof below, we want to perform this opening up on an appropriate use of the [exec_total] lemma. This lemma's conclusion begins with two existential quantifiers, so we want to tell [destruct] that it should not stop at the first quantifier. We accomplish our goal by using an %\textit{%##intro pattern##%}% with [destruct]. Consult the Coq manual for the details of intro patterns; the specific pattern [[? [? ?]]] that we use here accomplishes our goal of destructing both quantifiers at once. *) + (** Now we can prove that any starting address and register bank lead to safe infinite execution. 
Recall that proofs of existentially-quantified formulas are all built with a single constructor of the inductive type [ex]. This means that we can use [destruct] to %``%#"#open up#"#%''% such proofs. In the proof below, we want to perform this opening up on an appropriate use of the [exec_total] lemma. This lemma's conclusion begins with two existential quantifiers, so we want to tell [destruct] that it should not stop at the first quantifier. We accomplish our goal by using an %\textit{%##intro pattern##%}% with [destruct]. Consult the Coq manual for the details of intro patterns; the specific pattern [[? [? ?]]] that we use here accomplishes our goal of destructing both quantifiers at once. *) Theorem always_safe : forall pc rs, safe pc rs. @@ -412,7 +412,7 @@ %\item%#
<li># Define a function [map] for building an output tree out of two input trees by traversing them in parallel and applying a two-argument function to their corresponding data values.#
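
(One possible shape for this function, offered as a sketch only: it assumes the infinite-tree type requested by the first item of this exercise is the hypothetical [tree] below, and that implicit arguments are enabled in the usual style of the book.)

Set Implicit Arguments.

(* Assumed tree type; a real solution defines its own in the first item. *)
CoInductive tree (A : Type) : Type :=
| Node : tree A -> A -> tree A -> tree A.

Section map.
  Variables A B C : Type.
  Variable f : A -> B -> C.

  (* Legal corecursion: every recursive call sits directly beneath [Node]. *)
  CoFixpoint map (t1 : tree A) (t2 : tree B) : tree C :=
    match t1, t2 with
      | Node l1 v1 r1, Node l2 v2 r2 => Node (map l1 l2) (f v1 v2) (map r1 r2)
    end.
End map.
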
</li># %\item%#
<li># Define a tree [falses] where every node has the value [false].#
</li># %\item%#
<li># Define a tree [true_false] where the root node has value [true], its children have value [false], all nodes at the next level have the value [true], and so on, alternating boolean values from level to level.#
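
(Sketches of these two trees, under the same assumed [tree] type as above; the mutually corecursive helper name [false_true] is an invention for illustration.)

CoFixpoint falses : tree bool := Node falses false falses.

(* The two definitions trade places from one level to the next, which
   produces the desired alternation of boolean values. *)
CoFixpoint true_false : tree bool := Node false_true true false_true
with false_true : tree bool := Node true_false false true_false.
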
</li># - %\item%#
<li># Prove that [true_false] is equal to the result of mapping the boolean "or" function [orb] over [true_false] and [falses]. You can make [orb] available with [Require Import Bool.]. You may find the lemma [orb_false_r] from the same module helpful. Your proof here should not be about the standard equality [=], but rather about some new equality relation that you define.#
</li># + %\item%#
<li># Prove that [true_false] is equal to the result of mapping the boolean %``%#"#or#"#%''% function [orb] over [true_false] and [falses]. You can make [orb] available with [Require Import Bool.]. You may find the lemma [orb_false_r] from the same module helpful. Your proof here should not be about the standard equality [=], but rather about some new equality relation that you define.#
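
(The new equality relation might be introduced co-inductively, in the style of this chapter's [stream_eq]; again a sketch over the assumed [tree] type.)

Section tree_eq.
  Variable A : Type.

  CoInductive tree_eq : tree A -> tree A -> Prop :=
  | Tree_eq : forall v l1 r1 l2 r2,
    tree_eq l1 l2
    -> tree_eq r1 r2
    -> tree_eq (Node l1 v r1) (Node l2 v r2).
End tree_eq.
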
</li># #</ol>#%\end{enumerate}% #</li># #</ol>#%\end{enumerate}% *) diff -r da9ccc6bf572 -r 2c88fc1dbe33 src/Equality.v --- a/src/Equality.v Wed Nov 10 15:42:05 2010 -0500 +++ b/src/Equality.v Wed Nov 10 16:31:04 2010 -0500 @@ -23,7 +23,7 @@ (** * The Definitional Equality *) -(** We have seen many examples so far where proof goals follow "by computation." That is, we apply computational reduction rules to reduce the goal to a normal form, at which point it follows trivially. Exactly when this works and when it does not depends on the details of Coq's %\textit{%##definitional equality##%}%. This is an untyped binary relation appearing in the formal metatheory of CIC. CIC contains a typing rule allowing the conclusion $E : T$ from the premise $E : T'$ and a proof that $T$ and $T'$ are definitionally equal. +(** We have seen many examples so far where proof goals follow %``%#"#by computation.#"#%''% That is, we apply computational reduction rules to reduce the goal to a normal form, at which point it follows trivially. Exactly when this works and when it does not depends on the details of Coq's %\textit{%##definitional equality##%}%. This is an untyped binary relation appearing in the formal metatheory of CIC. CIC contains a typing rule allowing the conclusion $E : T$ from the premise $E : T'$ and a proof that $T$ and $T'$ are definitionally equal. The [cbv] tactic will help us illustrate the rules of Coq's definitional equality. We redefine the natural number predecessor function in a somewhat convoluted way and construct a manual proof that it returns [0] when applied to [1]. *) @@ -95,12 +95,12 @@ (** The standard [eq] relation is critically dependent on the definitional equality. [eq] is often called a %\textit{%##propositional equality##%}%, because it reifies definitional equality as a proposition that may or may not hold. Standard axiomatizations of an equality predicate in first-order logic define equality in terms of properties it has, like reflexivity, symmetry, and transitivity. In contrast, for [eq] in Coq, those properties are implicit in the properties of the definitional equality, which are built into CIC's metatheory and the implementation of Gallina. We could add new rules to the definitional equality, and [eq] would keep its definition and methods of use. - This all may make it sound like the choice of [eq]'s definition is unimportant. To the contrary, in this chapter, we will see examples where alternate definitions may simplify proofs. Before that point, we will introduce effective proof methods for goals that use proofs of the standard propositional equality "as data." *) + This all may make it sound like the choice of [eq]'s definition is unimportant. To the contrary, in this chapter, we will see examples where alternate definitions may simplify proofs. Before that point, we will introduce effective proof methods for goals that use proofs of the standard propositional equality %``%#"#as data.#"#%''% *) (** * Heterogeneous Lists Revisited *) -(** One of our example dependent data structures from the last chapter was heterogeneous lists and their associated %``%#"#cursor#"#%''% type. 
The recursive version poses some special challenges related to equality proofs, since it uses such proofs in its definition of [member] types. *) Section fhlist. Variable A : Type. @@ -193,7 +193,7 @@ In retrospect, the problem is not so hard to see. Reflexivity proofs only show [x = x] for particular values of [x], whereas here we are thinking in terms of a proof of [a = elm], where the two sides of the equality are not equal syntactically. Thus, the essential lemma we need does not even type-check! - Is it time to throw in the towel? Luckily, the answer is "no." In this chapter, we will see several useful patterns for proving obligations like this. + Is it time to throw in the towel? Luckily, the answer is %``%#"#no.#"#%''% In this chapter, we will see several useful patterns for proving obligations like this. For this particular example, the solution is surprisingly straightforward. [destruct] has a simpler sibling [case] which should behave identically for any inductive type with one constructor of no arguments. *) @@ -297,7 +297,7 @@ ]] - The type error comes from our [return] annotation. In that annotation, the [as]-bound variable [pf'] has type [x = x'], refering to the [in]-bound variable [x']. To do a dependent [match], we %\textit{%##must##%}% choose a fresh name for the second argument of [eq]. We are just as constrained to use the "real" value [x] for the first argument. Thus, within the [return] clause, the proof we are matching on %\textit{%##must##%}% equate two non-matching terms, which makes it impossible to equate that proof with reflexivity. + The type error comes from our [return] annotation. In that annotation, the [as]-bound variable [pf'] has type [x = x'], refering to the [in]-bound variable [x']. To do a dependent [match], we %\textit{%##must##%}% choose a fresh name for the second argument of [eq]. We are just as constrained to use the %``%#"#real#"#%''% value [x] for the first argument. Thus, within the [return] clause, the proof we are matching on %\textit{%##must##%}% equate two non-matching terms, which makes it impossible to equate that proof with reflexivity. Nonetheless, it turns out that, with one catch, we %\textit{%##can##%}% prove this lemma. *) @@ -323,9 +323,9 @@ ]] - [eq_rect_eq] states a "fact" that seems like common sense, once the notation is deciphered. [eq_rect] is the automatically-generated recursion principle for [eq]. Calling [eq_rect] is another way of [match]ing on an equality proof. The proof we match on is the argument [h], and [x] is the body of the [match]. [eq_rect_eq] just says that [match]es on proofs of [p = p], for any [p], are superfluous and may be removed. + [eq_rect_eq] states a %``%#"#fact#"#%''% that seems like common sense, once the notation is deciphered. [eq_rect] is the automatically-generated recursion principle for [eq]. Calling [eq_rect] is another way of [match]ing on an equality proof. The proof we match on is the argument [h], and [x] is the body of the [match]. [eq_rect_eq] just says that [match]es on proofs of [p = p], for any [p], are superfluous and may be removed. - Perhaps surprisingly, we cannot prove [eq_rect_eq] from within Coq. This proposition is introduced as an axiom; that is, a proposition asserted as true without proof. We cannot assert just any statement without proof. Adding [False] as an axiom would allow us to prove any proposition, for instance, defeating the point of using a proof assistant. In general, we need to be sure that we never assert %\textit{%##inconsistent##%}% sets of axioms. 
A set of axioms is inconsistent if its conjunction implies [False]. For the case of [eq_rect_eq], consistency has been verified outside of Coq via "informal" metatheory. + Perhaps surprisingly, we cannot prove [eq_rect_eq] from within Coq. This proposition is introduced as an axiom; that is, a proposition asserted as true without proof. We cannot assert just any statement without proof. Adding [False] as an axiom would allow us to prove any proposition, for instance, defeating the point of using a proof assistant. In general, we need to be sure that we never assert %\textit{%##inconsistent##%}% sets of axioms. A set of axioms is inconsistent if its conjunction implies [False]. For the case of [eq_rect_eq], consistency has been verified outside of Coq via %``%#"#informal#"#%''% metatheory. This axiom is equivalent to another that is more commonly known and mentioned in type theory circles. *) @@ -339,7 +339,7 @@ ]] - This is the unfortunately-named "Streicher's axiom K," which says that a predicate on properly-typed equality proofs holds of all such proofs if it holds of reflexivity. *) + This is the unfortunately-named %``%#"#Streicher's axiom K,#"#%''% which says that a predicate on properly-typed equality proofs holds of all such proofs if it holds of reflexivity. *) End fhlist_map. @@ -560,7 +560,7 @@ ]] -[JMeq] stands for "John Major equality," a name coined by Conor McBride as a sort of pun about British politics. [JMeq] starts out looking a lot like [eq]. The crucial difference is that we may use [JMeq] %\textit{%##on arguments of different types##%}%. For instance, a lemma that we failed to establish before is trivial with [JMeq]. It makes for prettier theorem statements to define some syntactic shorthand first. *) +[JMeq] stands for %``%#"#John Major equality,#"#%''% a name coined by Conor McBride as a sort of pun about British politics. [JMeq] starts out looking a lot like [eq]. The crucial difference is that we may use [JMeq] %\textit{%##on arguments of different types##%}%. For instance, a lemma that we failed to establish before is trivial with [JMeq]. It makes for prettier theorem statements to define some syntactic shorthand first. *) Infix "==" := JMeq (at level 70, no associativity). @@ -734,7 +734,7 @@ rewrite (UIP_refl _ _ x0); reflexivity. Qed. -(** We see that, in a very formal sense, we are free to switch back and forth between the two styles of proofs about equality proofs. One style may be more convenient than the other for some proofs, but we can always interconvert between our results. The style that does not use heterogeneous equality may be preferable in cases where many results do not require the tricks of this chapter, since then the use of axioms is avoided altogether for the simple cases, and a wider audience will be able to follow those "simple" proofs. On the other hand, heterogeneous equality often makes for shorter and more readable theorem statements. +(** We see that, in a very formal sense, we are free to switch back and forth between the two styles of proofs about equality proofs. One style may be more convenient than the other for some proofs, but we can always interconvert between our results. The style that does not use heterogeneous equality may be preferable in cases where many results do not require the tricks of this chapter, since then the use of axioms is avoided altogether for the simple cases, and a wider audience will be able to follow those %``%#"#simple#"#%''% proofs. 
On the other hand, heterogeneous equality often makes for shorter and more readable theorem statements. It is worth remarking that it is possible to avoid axioms altogether for equalities on types with decidable equality. The [Eqdep_dec] module of the standard library contains a parametric proof of [UIP_refl] for such cases. *) (* end thide *) @@ -763,7 +763,7 @@ Qed. (* end thide *) -(** The same axiom can help us prove equality of types, where we need to "reason under quantifiers." *) +(** The same axiom can help us prove equality of types, where we need to %``%#"#reason under quantifiers.#"#%''% *) Theorem forall_eq : (forall x : nat, match x with | O => True @@ -811,29 +811,29 @@ %\item%#
<li># Define a datatype [type] of lambda types, including just booleans and function types.#
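
(A minimal sketch; the constructor names [Bool] and [Arrow] are illustrative choices, not mandated by the exercise.)

Inductive type : Type :=
| Bool : type
| Arrow : type -> type -> type.
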
</li># %\item%#
<li># Define a type family [exp : list type -> type -> Type] of lambda expressions, including boolean constants, variables, and function application and abstraction.#
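
(One plausible definition, assuming the hypothetical [type] above and the [member] family of de Bruijn variables from the book's [DepList] module, which is assumed to be on the load path.)

Require Import List DepList.

Inductive exp : list type -> type -> Type :=
| Const : forall ts, bool -> exp ts Bool
| Var : forall ts t, member t ts -> exp ts t
| App : forall ts dom ran, exp ts (Arrow dom ran) -> exp ts dom -> exp ts ran
| Abs : forall ts dom ran, exp (dom :: ts) ran -> exp ts (Arrow dom ran).
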
</li># %\item%#
<li># Implement a definitional interpreter for [exp]s, by way of a recursive function over expressions and substitutions for free variables, like in the related example from the last chapter.#
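
(A sketch modeled on the last chapter's interpreter; [hlist], [hget], and the [:::] notation come from [DepList], and implicit arguments are assumed to be enabled.)

Set Implicit Arguments.

Fixpoint typeDenote (t : type) : Set :=
  match t with
    | Bool => bool
    | Arrow t1 t2 => typeDenote t1 -> typeDenote t2
  end.

Fixpoint expDenote ts t (e : exp ts t) : hlist typeDenote ts -> typeDenote t :=
  match e with
    | Const _ b => fun _ => b
    | Var _ _ m => fun s => hget s m
    | App _ _ _ e1 e2 => fun s => (expDenote e1 s) (expDenote e2 s)
    | Abs _ _ _ e' => fun s => fun x => expDenote e' (x ::: s)
  end.
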
</li># - %\item%#
<li># Implement a function [subst : forall t' ts t, exp (t' :: ts) t -> exp ts t' -> exp ts t]. The type of the first expression indicates that its most recently bound free variable has type [t']. The second expression also has type [t'], and the job of [subst] is to substitute the second expression for every occurrence of the "first" variable of the first expression.#
</li># + %\item%#
<li># Implement a function [subst : forall t' ts t, exp (t' :: ts) t -> exp ts t' -> exp ts t]. The type of the first expression indicates that its most recently bound free variable has type [t']. The second expression also has type [t'], and the job of [subst] is to substitute the second expression for every occurrence of the %``%#"#first#"#%''% variable of the first expression.#
</li># %\item%#
<li># Prove that [subst] preserves program meanings. That is, prove
[[
forall t' ts t (e : exp (t' :: ts) t) (e' : exp ts t') (s : hlist typeDenote ts),
  expDenote (subst e e') s = expDenote e (expDenote e' s ::: s)
]]
- where [:::] is an infix operator for heterogeneous "cons" that is defined in the book's [DepList] module.#
</li># + where [:::] is an infix operator for heterogeneous %``%#"#cons#"#%''% that is defined in the book's [DepList] module.#</li># #</ol>#%\end{enumerate}% The material presented up to this point should be sufficient to enable a good solution of this exercise, with enough ingenuity. If you get stuck, it may be helpful to use the following structure. None of these elements need to appear in your solution, but we can at least guarantee that there is a reasonable solution based on them. %\begin{enumerate}%#
<ol># %\item%#
<li># The [DepList] module will be useful. You can get the standard dependent list definitions there, instead of copying-and-pasting from the last chapter. It is worth reading over the source for that module, since it defines some new helpful functions and notations that we did not use last chapter.#
</li># - %\item%#
<li># Define a recursive function [liftVar : forall ts1 ts2 t t', member t (ts1 ++ ts2) -> member t (ts1 ++ t' :: ts2)]. This function should "lift" a de Bruijn variable so that its type refers to a new variable inserted somewhere in the index list.#
</li># - %\item%#
<li># Define a recursive function [lift' : forall ts t (e : exp ts t) ts1 ts2 t', ts = ts1 ++ ts2 -> exp (ts1 ++ t' :: ts2) t] which performs a similar lifting on an [exp]. The convoluted type is to get around restrictions on [match] annotations. We delay "realizing" that the first index of [e] is built with list concatenation until after a dependent [match], and the new explicit proof argument must be used to cast some terms that come up in the [match] body.#
</li># + %\item%#
<li># Define a recursive function [liftVar : forall ts1 ts2 t t', member t (ts1 ++ ts2) -> member t (ts1 ++ t' :: ts2)]. This function should %``%#"#lift#"#%''% a de Bruijn variable so that its type refers to a new variable inserted somewhere in the index list.#
</li># + %\item%#
<li># Define a recursive function [lift' : forall ts t (e : exp ts t) ts1 ts2 t', ts = ts1 ++ ts2 -> exp (ts1 ++ t' :: ts2) t] which performs a similar lifting on an [exp]. The convoluted type is to get around restrictions on [match] annotations. We delay %``%#"#realizing#"#%''% that the first index of [e] is built with list concatenation until after a dependent [match], and the new explicit proof argument must be used to cast some terms that come up in the [match] body.#
</li># %\item%#
<li># Define a function [lift : forall ts t t', exp ts t -> exp (t' :: ts) t], which handles simpler top-level lifts. This should be an easy one-liner based on [lift'].#
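
(If [lift'] is written with all of its arguments explicit, the wrapper can be a sketch like the following; [refl_equal] typechecks here because [nil ++ ts] reduces to [ts].)

Definition lift ts t t' (e : exp ts t) : exp (t' :: ts) t :=
  lift' ts t e nil ts t' (refl_equal ts).
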
</li># %\item%#
<li># Define a recursive function [substVar : forall ts1 ts2 t t', member t (ts1 ++ t' :: ts2) -> (t' = t) + member t (ts1 ++ ts2)]. This function is the workhorse behind substitution applied to a variable. It returns [inl] to indicate that the variable we pass to it is the variable that we are substituting for, and it returns [inr] to indicate that the variable we are examining is %\textit{%##not##%}% the one we are substituting for. In the first case, we get a proof that the necessary typing relationship holds, and, in the second case, we get the original variable modified to reflect the removal of the substitutee from the typing context.#
</li># %\item%#
<li># Define a recursive function [subst' : forall ts t (e : exp ts t) ts1 t' ts2, ts = ts1 ++ t' :: ts2 -> exp (ts1 ++ ts2) t' -> exp (ts1 ++ ts2) t]. This is the workhorse of substitution in expressions, employing the same proof-passing trick as for [lift']. You will probably want to use [lift] somewhere in the definition of [subst'].#
</li># %\item%#
<li># Now [subst] should be a one-liner, defined in terms of [subst'].#
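
(Analogously to [lift], a sketch assuming fully explicit arguments for [subst'].)

Definition subst t' ts t (e : exp (t' :: ts) t) (e' : exp ts t') : exp ts t :=
  subst' (t' :: ts) t e nil t' ts (refl_equal (t' :: ts)) e'.
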
</li># %\item%#
<li># Prove a correctness theorem for each auxiliary function, leading up to the proof of [subst] correctness.#
</li># - %\item%#
<li># All of the reasoning about equality proofs in these theorems follows a regular pattern. If you have an equality proof that you want to replace with [refl_equal] somehow, run [generalize] on that proof variable. Your goal is to get to the point where you can [rewrite] with the original proof to change the type of the generalized version. To avoid type errors (the infamous "second-order unification" failure messages), it will be helpful to run [generalize] on other pieces of the proof context that mention the equality's lefthand side. You might also want to use [generalize dependent], which generalizes not just one variable but also all variables whose types depend on it. [generalize dependent] has the sometimes-helpful property of removing from the context all variables that it generalizes. Once you do manage the mind-bending trick of using the equality proof to rewrite its own type, you will be able to rewrite with [UIP_refl].#
</li># + %\item%#
<li># All of the reasoning about equality proofs in these theorems follows a regular pattern. If you have an equality proof that you want to replace with [refl_equal] somehow, run [generalize] on that proof variable. Your goal is to get to the point where you can [rewrite] with the original proof to change the type of the generalized version. To avoid type errors (the infamous %``%#"#second-order unification#"#%''% failure messages), it will be helpful to run [generalize] on other pieces of the proof context that mention the equality's lefthand side. You might also want to use [generalize dependent], which generalizes not just one variable but also all variables whose types depend on it. [generalize dependent] has the sometimes-helpful property of removing from the context all variables that it generalizes. Once you do manage the mind-bending trick of using the equality proof to rewrite its own type, you will be able to rewrite with [UIP_refl].#
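
(A self-contained miniature of the final step, showing how rewriting with [UIP_refl] turns an arbitrary proof of [x = x] into [refl_equal x]; the predicate [P] is hypothetical, and the [generalize] bookkeeping described above is omitted.)

Require Import Eqdep.

Section uip_demo.
  Variable A : Type.
  Variable P : forall x : A, x = x -> Prop.
  Hypothesis H : forall x : A, P x (refl_equal x).

  Theorem P_any : forall (x : A) (pf : x = x), P x pf.
    intros; rewrite (UIP_refl _ _ pf); apply H.
  Qed.
End uip_demo.
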
</li># %\item%#
<li># A variant of the [ext_eq] axiom from the end of this chapter is available in the book module [Axioms], and you will probably want to use it in the [lift'] and [subst'] correctness proofs.#
</li># - %\item%#
<li># The [change] tactic should come in handy in the proofs about [lift] and [subst], where you want to introduce "extraneous" list concatenations with [nil] to match the forms of earlier theorems.#
</li># - %\item%#
<li># Be careful about [destruct]ing a term "too early." You can use [generalize] on proof terms to bring into the proof context any important propositions about the term. Then, when you [destruct] the term, it is updated in the extra propositions, too. The [case_eq] tactic is another alternative to this approach, based on saving an equality between the original term and its new form.#
</li># + %\item%#
<li># The [change] tactic should come in handy in the proofs about [lift] and [subst], where you want to introduce %``%#"#extraneous#"#%''% list concatenations with [nil] to match the forms of earlier theorems.#
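
(A throwaway demonstration of the idea on ordinary lists: [change] accepts any convertible replacement, so a goal can be dressed up with, or stripped of, a [nil ++ _] prefix at will.)

Require Import List.

Theorem change_demo : forall (A : Type) (ls : list A), ls = nil ++ ls.
  intros.
  change (nil ++ ls) with ls. (* convertible, since [nil ++ ls] reduces to [ls] *)
  reflexivity.
Qed.
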
</li># + %\item%#
<li># Be careful about [destruct]ing a term %``%#"#too early.#"#%''% You can use [generalize] on proof terms to bring into the proof context any important propositions about the term. Then, when you [destruct] the term, it is updated in the extra propositions, too. The [case_eq] tactic is another alternative to this approach, based on saving an equality between the original term and its new form.#
</li># #</ol>
    #%\end{enumerate}% ## diff -r da9ccc6bf572 -r 2c88fc1dbe33 src/Extensional.v --- a/src/Extensional.v Wed Nov 10 15:42:05 2010 -0500 +++ b/src/Extensional.v Wed Nov 10 16:31:04 2010 -0500 @@ -1,4 +1,4 @@ -(* Copyright (c) 2008-2009, Adam Chlipala +(* Copyright (c) 2008-2010, Adam Chlipala * * This work is licensed under a * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 @@ -37,6 +37,7 @@ (* end hide *) Notation "'Nat'" := TNat : source_scope. + (** printing --> $\longrightarrow$ *) Infix "-->" := Arrow (right associativity, at level 60) : source_scope. Open Scope source_scope. @@ -127,7 +128,7 @@ -> exp_equiv G (#v1) (#v2) | EqConst : forall G n, - exp_equiv G (^n) (^n) + exp_equiv G (^ n) (^ n) | EqPlus : forall G x1 y1 x2 y2, exp_equiv G x1 x2 -> exp_equiv G y1 y2 @@ -158,6 +159,7 @@ | Prod : type -> type -> type. Notation "'Nat'" := TNat : cps_scope. + (** printing ---> $\Longrightarrow$ *) Notation "t --->" := (Cont t) (at level 61) : cps_scope. Infix "**" := Prod (right associativity, at level 60) : cps_scope. @@ -277,11 +279,12 @@ Fixpoint cpsType (t : Source.type) : CPS.type := match t with | Nat => Nat%cps - | t1 --> t2 => (cpsType t1 ** (cpsType t2 --->) --->)%cps + | t1 --> t2 => (cpsType t1 ** (cpsType t2 ---> ) ---> )%cps end%source. (** Now we can define the expression translation. The notation [x <-- e1; e2] stands for translating source-level expression [e1], binding [x] to the CPS-level result of running the translated program, and then evaluating CPS-level expression [e2] in that context. *) +(** printing <-- $\longleftarrow$ *) Reserved Notation "x <-- e1 ; e2" (right associativity, at level 76, e1 at next level). Section cpsExp. @@ -298,7 +301,7 @@ | Var _ v => fun k => k v | Const n => fun k => - x <- ^n; + x <- ^ n; k x | Plus e1 e2 => fun k => x1 <-- e1; @@ -309,7 +312,7 @@ | App _ _ e1 e2 => fun k => f <-- e1; x <-- e2; - kf <- \r, k r; + kf <- \ r, k r; p <- [x, kf]; f @@ p | Abs _ _ e' => fun k => diff -r da9ccc6bf572 -r 2c88fc1dbe33 src/Hoas.v --- a/src/Hoas.v Wed Nov 10 15:42:05 2010 -0500 +++ b/src/Hoas.v Wed Nov 10 16:31:04 2010 -0500 @@ -1,4 +1,4 @@ -(* Copyright (c) 2008-2009, Adam Chlipala +(* Copyright (c) 2008-2010, Adam Chlipala * * This work is licensed under a * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 @@ -25,7 +25,7 @@ (** * Classic HOAS *) -(** The motto of HOAS is simple: represent object language binders using meta language binders. Here, "object language" refers to the language being formalized, while the meta language is the language in which the formalization is done. Our usual meta language, Coq's Gallina, contains the standard binding facilities of functional programming, making it a promising base for higher-order encodings. +(** The motto of HOAS is simple: represent object language binders using meta language binders. Here, %``%#"#object language#"#%''% refers to the language being formalized, while the meta language is the language in which the formalization is done. Our usual meta language, Coq's Gallina, contains the standard binding facilities of functional programming, making it a promising base for higher-order encodings. Recall the concrete encoding of basic untyped lambda calculus expressions. *) @@ -500,6 +500,7 @@ (** Finally, we have the step relation itself, which combines our ingredients in the standard way. In the congruence rule, we introduce the extra variable [E1] and its associated equality to make the rule easier for [eauto] to apply. 
*) +(** printing ==> $\Rightarrow$ *) Reserved Notation "E1 ==> E2" (no associativity, at level 90). Inductive Step : forall t, Exp t -> Exp t -> Prop := @@ -579,6 +580,7 @@ We must start by defining the big-step semantics itself. The definition is completely standard. *) +(** printing ===> $\Longrightarrow$ *) Reserved Notation "E1 ===> E2" (no associativity, at level 90). Inductive BigStep : forall t, Exp t -> Exp t -> Prop := @@ -606,6 +608,7 @@ (** To prove a crucial intermediate lemma, we will want to name the transitive-reflexive closure of the small-step relation. *) (* begin thide *) +(** printing ==>* $\Rightarrow^*$ *) Reserved Notation "E1 ==>* E2" (no associativity, at level 90). Inductive MultiStep : forall t, Exp t -> Exp t -> Prop := diff -r da9ccc6bf572 -r 2c88fc1dbe33 src/InductiveTypes.v --- a/src/InductiveTypes.v Wed Nov 10 15:42:05 2010 -0500 +++ b/src/InductiveTypes.v Wed Nov 10 16:31:04 2010 -0500 @@ -66,7 +66,7 @@ ]] -%\noindent%...which corresponds to "proof by case analysis" in classical math. For non-recursive inductive types, the two tactics will always have identical behavior. Often case analysis is sufficient, even in proofs about recursive types, and it is nice to avoid introducing unneeded induction hypotheses. +%\noindent%...which corresponds to %``%#"#proof by case analysis#"#%''% in classical math. For non-recursive inductive types, the two tactics will always have identical behavior. Often case analysis is sufficient, even in proofs about recursive types, and it is nice to avoid introducing unneeded induction hypotheses. What exactly %\textit{%##is##%}% the induction principle for [unit]? We can ask Coq: *) @@ -329,7 +329,7 @@ %\medskip% -In general, we can implement any "tree" types as inductive types. For example, here are binary trees of naturals. *) +In general, we can implement any %``%#"#tree#"#%''% types as inductive types. For example, here are binary trees of naturals. *) Inductive nat_btree : Set := | NLeaf : nat_btree @@ -579,7 +579,7 @@ | And : formula -> formula -> formula | Forall : (nat -> formula) -> formula. -(** Our kinds of formulas are equalities between naturals, conjunction, and universal quantification over natural numbers. We avoid needing to include a notion of "variables" in our type, by using Coq functions to encode quantification. For instance, here is the encoding of [forall x : nat, x = x]: *) +(** Our kinds of formulas are equalities between naturals, conjunction, and universal quantification over natural numbers. We avoid needing to include a notion of %``%#"#variables#"#%''% in our type, by using Coq functions to encode quantification. For instance, here is the encoding of [forall x : nat, x = x]: *) Example forall_refl : formula := Forall (fun x => Eq x x). @@ -936,7 +936,7 @@ ]] - Coq rejects this definition, saying "Recursive call to nat_tree_ind' has principal argument equal to "tr" instead of rest." There is no deep theoretical reason why this program should be rejected; Coq applies incomplete termination-checking heuristics, and it is necessary to learn a few of the most important rules. The term "nested inductive type" hints at the solution to this particular problem. Just like true mutually-inductive types require mutually-recursive induction principles, nested types require nested recursion. 
*) + Coq rejects this definition, saying %``%#"#Recursive call to nat_tree_ind' has principal argument equal to "tr" instead of rest.#"#%''% There is no deep theoretical reason why this program should be rejected; Coq applies incomplete termination-checking heuristics, and it is necessary to learn a few of the most important rules. The term %``%#"#nested inductive type#"#%''% hints at the solution to this particular problem. Just like true mutually-inductive types require mutually-recursive induction principles, nested types require nested recursion. *) Fixpoint nat_tree_ind' (tr : nat_tree) : P tr := match tr with @@ -1051,7 +1051,7 @@ Theorem true_neq_false : true <> false. (* begin thide *) -(** We begin with the tactic [red], which is short for "one step of reduction," to unfold the definition of logical negation. *) +(** We begin with the tactic [red], which is short for %``%#"#one step of reduction,#"#%''% to unfold the definition of logical negation. *) red. (** [[ @@ -1128,11 +1128,11 @@ (** %\begin{enumerate}%#
<ol># -%\item%#
<li># Define an inductive type [truth] with three constructors, [Yes], [No], and [Maybe]. [Yes] stands for certain truth, [No] for certain falsehood, and [Maybe] for an unknown situation. Define "not," "and," and "or" for this replacement boolean algebra. Prove that your implementation of "and" is commutative and distributes over your implementation of "or."#
</li># +%\item%#
<li># Define an inductive type [truth] with three constructors, [Yes], [No], and [Maybe]. [Yes] stands for certain truth, [No] for certain falsehood, and [Maybe] for an unknown situation. Define %``%#"#not,#"#%''% %``%#"#and,#"#%''% and %``%#"#or#"#%''% for this replacement boolean algebra. Prove that your implementation of %``%#"#and#"#%''% is commutative and distributes over your implementation of %``%#"#or.#"#%''%#
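
(A sketch with hypothetical names; [truth_and] treats [Maybe] as an unknown that a definite [No] always dominates.)

Inductive truth : Set :=
| Yes : truth
| No : truth
| Maybe : truth.

Definition truth_and (a b : truth) : truth :=
  match a, b with
    | No, _ => No
    | _, No => No
    | Yes, Yes => Yes
    | _, _ => Maybe
  end.
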
</li># %\item%#
<li># Modify the first example language of Chapter 2 to include variables, where variables are represented with [nat]. Extend the syntax and semantics of expressions to accommodate the change. Your new [expDenote] function should take as an extra first argument a value of type [var -> nat], where [var] is a synonym for naturals-as-variables, and the function assigns a value to each variable. Define a constant folding function which does a bottom-up pass over an expression, at each stage replacing every binary operation on constants with an equivalent constant. Prove that constant folding preserves the meanings of expressions.#
</li># -%\item%#
<li># Reimplement the second example language of Chapter 2 to use mutually-inductive types instead of dependent types. That is, define two separate (non-dependent) inductive types [nat_exp] and [bool_exp] for expressions of the two different types, rather than a single indexed type. To keep things simple, you may consider only the binary operators that take naturals as operands. Add natural number variables to the language, as in the last exercise, and add an "if" expression form taking as arguments one boolean expression and two natural number expressions. Define semantics and constant-folding functions for this new language. Your constant folding should simplify not just binary operations (returning naturals or booleans) with known arguments, but also "if" expressions with known values for their test expressions but possibly undetermined "then" and "else" cases. Prove that constant-folding a natural number expression preserves its meaning.#
</li># +%\item%#
<li># Reimplement the second example language of Chapter 2 to use mutually-inductive types instead of dependent types. That is, define two separate (non-dependent) inductive types [nat_exp] and [bool_exp] for expressions of the two different types, rather than a single indexed type. To keep things simple, you may consider only the binary operators that take naturals as operands. Add natural number variables to the language, as in the last exercise, and add an %``%#"#if#"#%''% expression form taking as arguments one boolean expression and two natural number expressions. Define semantics and constant-folding functions for this new language. Your constant folding should simplify not just binary operations (returning naturals or booleans) with known arguments, but also %``%#"#if#"#%''% expressions with known values for their test expressions but possibly undetermined %``%#"#then#"#%''% and %``%#"#else#"#%''% cases. Prove that constant-folding a natural number expression preserves its meaning.#
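
(The mutually-inductive skeleton could look like the following, with illustrative constructor names; the semantics and constant folding would then be written by mutual recursion over these two types.)

Definition var := nat.

Inductive nat_exp : Set :=
| NConst : nat -> nat_exp
| NVar : var -> nat_exp
| NPlus : nat_exp -> nat_exp -> nat_exp
| NIf : bool_exp -> nat_exp -> nat_exp -> nat_exp

with bool_exp : Set :=
| BConst : bool -> bool_exp
| BEq : nat_exp -> nat_exp -> bool_exp.
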
</li># %\item%#
<li># Using a reflexive inductive definition, define a type [nat_tree] of infinitary trees, with natural numbers at their leaves and a countable infinity of new trees branching out of each internal node. Define a function [increment] that increments the number in every leaf of a [nat_tree]. Define a function [leapfrog] over a natural [i] and a tree [nt]. [leapfrog] should recurse into the [i]th child of [nt], the [i+1]st child of that node, the [i+2]nd child of the next node, and so on, until reaching a leaf, in which case [leapfrog] should return the number at that leaf. Prove that calling [increment] on a tree increases the result of any call to [leapfrog] by one.#
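
(A sketch of the reflexive type and of [increment]; the recursive call happens through the function space, which the termination checker accepts because [f] is a component of the matched term.)

Inductive nat_tree : Set :=
| NLeaf : nat -> nat_tree
| NNode : (nat -> nat_tree) -> nat_tree.

Fixpoint increment (nt : nat_tree) : nat_tree :=
  match nt with
    | NLeaf n => NLeaf (S n)
    | NNode f => NNode (fun i => increment (f i))
  end.
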
    12. # diff -r da9ccc6bf572 -r 2c88fc1dbe33 src/Interps.v --- a/src/Interps.v Wed Nov 10 15:42:05 2010 -0500 +++ b/src/Interps.v Wed Nov 10 16:31:04 2010 -0500 @@ -1,4 +1,4 @@ -(* Copyright (c) 2008-2009, Adam Chlipala +(* Copyright (c) 2008-2010, Adam Chlipala * * This work is licensed under a * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 @@ -64,7 +64,9 @@ Notation "# v" := (Var v) (at level 70). + (** printing ^ $\dag$ *) Notation "^ n" := (Const n) (at level 70). + (** printing +^ $\hat{+}$ *) Infix "+^" := Plus (left associativity, at level 79). Infix "@" := App (left associativity, at level 77). diff -r da9ccc6bf572 -r 2c88fc1dbe33 src/Intro.v --- a/src/Intro.v Wed Nov 10 15:42:05 2010 -0500 +++ b/src/Intro.v Wed Nov 10 16:31:04 2010 -0500 @@ -49,7 +49,7 @@ We would all like to have programs check that our programs are correct. Due in no small part to some bold but unfulfilled promises in the history of computer science, today most people who write software, practitioners and academics alike, assume that the costs of formal program verification outweigh the benefits. The purpose of this book is to convince you that the technology of program verification is mature enough today that it makes sense to use it in a support role in many kinds of research projects in computer science. Beyond the convincing, I also want to provide a handbook on practical engineering of certified programs with the Coq proof assistant. -There are a good number of (though definitely not "many") tools that are in wide use today for building machine-checked mathematical proofs and machine-certified programs. This is my attempt at an exhaustive list of interactive "proof assistants" satisfying a few criteria. First, the authors of each tool must intend for it to be put to use for software-related applications. Second, there must have been enough engineering effort put into the tool that someone not doing research on the tool itself would feel his time was well spent using it. A third criterion is more of an empirical validation of the second: the tool must have a significant user community outside of its own development team. +There are a good number of (though definitely not %``%#"#many#"#%''%) tools that are in wide use today for building machine-checked mathematical proofs and machine-certified programs. This is my attempt at an exhaustive list of interactive %``%#"#proof assistants#"#%''% satisfying a few criteria. First, the authors of each tool must intend for it to be put to use for software-related applications. Second, there must have been enough engineering effort put into the tool that someone not doing research on the tool itself would feel his time was well spent using it. A third criterion is more of an empirical validation of the second: the tool must have a significant user community outside of its own development team. % \medskip @@ -74,7 +74,7 @@ # -Isabelle/HOL, implemented with the "proof assistant development framework" Isabelle, is the most popular proof assistant for the HOL logic. The other implementations of HOL can be considered equivalent for purposes of the discussion here. +Isabelle/HOL, implemented with the %``%#"#proof assistant development framework#"#%''% Isabelle, is the most popular proof assistant for the HOL logic. The other implementations of HOL can be considered equivalent for purposes of the discussion here. 
*) @@ -111,9 +111,9 @@ (** ** An Easy-to-Check Kernel Proof Language *) (** -Scores of automated decision procedures are useful in practical theorem proving, but it is unfortunate to have to trust in the correct implementation of each procedure. Proof assistants satisfying the "de Bruijn criterion" may use complicated and extensible procedures to seek out proofs, but in the end they produce %\textit{%##proof terms##%}% in kernel languages. These core languages have feature complexity on par with what you find in proposals for formal foundations for mathematics. To believe a proof, we can ignore the possibility of bugs during %\textit{%##search##%}% and just rely on a (relatively small) proof-checking kernel that we apply to the %\textit{%##result##%}% of the search. +Scores of automated decision procedures are useful in practical theorem proving, but it is unfortunate to have to trust in the correct implementation of each procedure. Proof assistants satisfying the %``%#"#de Bruijn criterion#"#%''% may use complicated and extensible procedures to seek out proofs, but in the end they produce %\textit{%##proof terms##%}% in kernel languages. These core languages have feature complexity on par with what you find in proposals for formal foundations for mathematics. To believe a proof, we can ignore the possibility of bugs during %\textit{%##search##%}% and just rely on a (relatively small) proof-checking kernel that we apply to the %\textit{%##result##%}% of the search. -ACL2 and PVS do not meet the de Bruijn criterion, employing fancy decision procedures that produce no "evidence trails" justifying their results. +ACL2 and PVS do not meet the de Bruijn criterion, employing fancy decision procedures that produce no %``%#"#evidence trails#"#%''% justifying their results. *) (** ** Convenient Programmable Proof Automation *) @@ -125,13 +125,13 @@ Of the remaining tools, all can support user extension with new decision procedures by hacking directly in the tool's implementation language (such as OCaml for Coq). Since ACL2 and PVS do not satisfy the de Bruijn criterion, overall correctness is at the mercy of the authors of new procedures. -Isabelle/HOL and Coq both support coding new proof manipulations in ML in ways that cannot lead to the acceptance of invalid proofs. Additionally, Coq includes a domain-specific language for coding decision procedures in normal Coq source code, with no need to break out into ML. This language is called Ltac, and I think of it as the unsung hero of the proof assistant world. Not only does Ltac prevent you from making fatal mistakes, it also includes a number of novel programming constructs which combine to make a "proof by decision procedure" style very pleasant. We will meet these features in the chapters to come. +Isabelle/HOL and Coq both support coding new proof manipulations in ML in ways that cannot lead to the acceptance of invalid proofs. Additionally, Coq includes a domain-specific language for coding decision procedures in normal Coq source code, with no need to break out into ML. This language is called Ltac, and I think of it as the unsung hero of the proof assistant world. Not only does Ltac prevent you from making fatal mistakes, it also includes a number of novel programming constructs which combine to make a %``%#"#proof by decision procedure#"#%''% style very pleasant. We will meet these features in the chapters to come. 
*) (** ** Proof by Reflection *) (** -A surprising wealth of benefits follow from choosing a proof language that integrates a rich notion of computation. Coq includes programs and proof terms in the same syntactic class. This makes it easy to write programs that compute proofs. With rich enough dependent types, such programs are %\textit{%##certified decision procedures##%}%. In such cases, these certified procedures can be put to good use %\textit{%##without ever running them##%}%! Their types guarantee that, if we did bother to run them, we would receive proper "ground" proofs. +A surprising wealth of benefits follow from choosing a proof language that integrates a rich notion of computation. Coq includes programs and proof terms in the same syntactic class. This makes it easy to write programs that compute proofs. With rich enough dependent types, such programs are %\textit{%##certified decision procedures##%}%. In such cases, these certified procedures can be put to good use %\textit{%##without ever running them##%}%! Their types guarantee that, if we did bother to run them, we would receive proper %``%#"#ground#"#%''% proofs. The critical ingredient for this technique, many of whose instances are referred to as %\textit{%##proof by reflection##%}%, is a way of inducing non-trivial computation inside of logical propositions during proof checking. Further, most of these instances require dependent types to make it possible to state the appropriate theorems. Of the proof assistants I listed, only Coq really provides this support. *) @@ -144,18 +144,18 @@ I think the answer is simple. None of the competition has well-developed systems for tactic-based theorem proving. Agda and Epigram are designed and marketed more as programming languages than proof assistants. Dependent types are great, because they often help you prove deep theorems without doing anything that feels like proving. Nonetheless, almost any interesting certified programming project will benefit from some activity that deserves to be called proving, and many interesting projects absolutely require semi-automated proving, if the sanity of the programmer is to be safeguarded. Informally, proving is unavoidable when any correctness proof for a program has a structure that does not mirror the structure of the program itself. An example is a compiler correctness proof, which probably proceeds by induction on program execution traces, which have no simple relationship with the structure of the compiler or the structure of the programs it compiles. In building such proofs, a mature system for scripted proof automation is invaluable. -On the other hand, Agda, Epigram, and similar tools have less implementation baggage associated with them, and so they tend to be the default first homes of innovations in practical type theory. Some significant kinds of dependently-typed programs are much easier to write in Agda and Epigram than in Coq. The former tools may very well be superior choices for projects that do not involve any "proving." Anecdotally, I have gotten the impression that manual proving is orders of magnitudes more costly than manual coping with Coq's lack of programming bells and whistles. In this book, I will devote significant time to patterns for programming with dependent types in Coq as it is today. We can hope that the type theory community is tending towards convergence on the right set of features for practical programming with dependent types, and that we will eventually have a single tool embodying those features. 
+On the other hand, Agda, Epigram, and similar tools have less implementation baggage associated with them, and so they tend to be the default first homes of innovations in practical type theory. Some significant kinds of dependently-typed programs are much easier to write in Agda and Epigram than in Coq. The former tools may very well be superior choices for projects that do not involve any %``%#"#proving.#"#%''% Anecdotally, I have gotten the impression that manual proving is orders of magnitude more costly than manually coping with Coq's lack of programming bells and whistles. In this book, I will devote significant time to patterns for programming with dependent types in Coq as it is today. We can hope that the type theory community is tending towards convergence on the right set of features for practical programming with dependent types, and that we will eventually have a single tool embodying those features. *) (** * Engineering with a Proof Assistant *) (** In comparisons with its competitors, Coq is often derided for promoting unreadable proofs. It is very easy to write proof scripts that manipulate proof goals imperatively, with no structure to aid readers. Such developments are nightmares to maintain, and they certainly do not manage to convey %``%#"#why the theorem is true#"#%''% to anyone but the original author. One additional (and not insignificant) purpose of this book is to show why it is unfair and unproductive to dismiss Coq based on the existence of such developments. I will go out on a limb and guess that the reader is a dedicated fan of some functional programming language or another, and that he may even have been involved in teaching that language to undergraduates. I want to propose an analogy between two attitudes: coming to a negative conclusion about Coq after reading common Coq developments in the wild, and coming to a negative conclusion about Your Favorite Language after looking at the programs undergraduates write in it in the first week of class. The pragmatics of mechanized proving and program verification have been under serious study for much less time than the pragmatics of programming have been. The computer theorem proving community is still developing the key insights that correspond to those that functional programming texts and instructors impart to their students, to help those students get over that critical hump where using the language stops being more trouble than it is worth. Most of the insights for Coq are barely even disseminated among the experts, let alone set down in a tutorial form. I hope to use this book to go a long way towards remedying that. -If I do that job well, then this book should be of interest even to people who have participated in classes or tutorials specifically about Coq. The book should even be useful to people who have been using Coq for years but who are mystified when their Coq developments prove impenetrable by colleagues. 
The crucial angle in this book is that there are "design patterns" for reliably avoiding the really grungy parts of theorem proving, and consistent use of these patterns can get you over the hump to the point where it is worth your while to use Coq to prove your theorems and certify your programs, even if formal verification is not your main concern in a project. We will follow this theme by pursuing two main methods for replacing manual proofs with more understandable artifacts: dependently-typed functions and custom Ltac decision procedures. +If I do that job well, then this book should be of interest even to people who have participated in classes or tutorials specifically about Coq. The book should even be useful to people who have been using Coq for years but who are mystified when their Coq developments prove impenetrable by colleagues. The crucial angle in this book is that there are %``%#"#design patterns#"#%''% for reliably avoiding the really grungy parts of theorem proving, and consistent use of these patterns can get you over the hump to the point where it is worth your while to use Coq to prove your theorems and certify your programs, even if formal verification is not your main concern in a project. We will follow this theme by pursuing two main methods for replacing manual proofs with more understandable artifacts: dependently-typed functions and custom Ltac decision procedures. *) diff -r da9ccc6bf572 -r 2c88fc1dbe33 src/MoreDep.v --- a/src/MoreDep.v Wed Nov 10 15:42:05 2010 -0500 +++ b/src/MoreDep.v Wed Nov 10 16:31:04 2010 -0500 @@ -18,7 +18,7 @@ (** %\chapter{More Dependent Types}% *) -(** Subset types and their relatives help us integrate verification with programming. Though they reorganize the certified programmer's workflow, they tend not to have deep effects on proofs. We write largely the same proofs as we would for classical verification, with some of the structure moved into the programs themselves. It turns out that, when we use dependent types to their full potential, we warp the development and proving process even more than that, picking up "free theorems" to the extent that often a certified program is hardly more complex than its uncertified counterpart in Haskell or ML. +(** Subset types and their relatives help us integrate verification with programming. Though they reorganize the certified programmer's workflow, they tend not to have deep effects on proofs. We write largely the same proofs as we would for classical verification, with some of the structure moved into the programs themselves. It turns out that, when we use dependent types to their full potential, we warp the development and proving process even more than that, picking up %``%#"#free theorems#"#%''% to the extent that often a certified program is hardly more complex than its uncertified counterpart in Haskell or ML. In particular, we have only scratched the tip of the iceberg that is Coq's inductive definition mechanism. The inductive types we have seen so far have their counterparts in the other proof assistants that we surveyed in Chapter 1. This chapter explores the strange new world of dependent inductive datatypes (that is, dependent inductive types outside [Prop]), a possibility which sets Coq apart from all of the competition not based on type theory. *) @@ -195,7 +195,7 @@ | Prod t1 t2 => typeDenote t1 * typeDenote t2 end%type. -(** [typeDenote] compiles types of our object language into "native" Coq types. It is deceptively easy to implement. 
The only new thing we see is the [%type] annotation, which tells Coq to parse the [match] expression using the notations associated with types. Without this annotation, the [*] would be interpreted as multiplication on naturals, rather than as the product type constructor. [type] is one example of an identifer bound to a %\textit{%##notation scope##%}%. We will deal more explicitly with notations and notation scopes in later chapters. +(** [typeDenote] compiles types of our object language into %``%#"#native#"#%''% Coq types. It is deceptively easy to implement. The only new thing we see is the [%type] annotation, which tells Coq to parse the [match] expression using the notations associated with types. Without this annotation, the [*] would be interpreted as multiplication on naturals, rather than as the product type constructor. [type] is one example of an identifier bound to a %\textit{%##notation scope##%}%. We will deal more explicitly with notations and notation scopes in later chapters. We can define a function [expDenote] that is typed in terms of [typeDenote]. *) @@ -548,9 +548,9 @@ (** We apply a trick that I call the %\textit{%##convoy pattern##%}%. Recall that [match] annotations only make it possible to describe a dependence of a [match] %\textit{%##result type##%}% on the discriminee. There is no automatic refinement of the types of free variables. However, it is possible to effect such a refinement by finding a way to encode free variable type dependencies in the [match] result type, so that a [return] clause can express the connection. - In particular, we can extend the [match] to return %\textit{%##functions over the free variables whose types we want to refine##%}%. In the case of [balance1], we only find ourselves wanting to refine the type of one tree variable at a time. We match on one subtree of a node, and we want the type of the other subtree to be refined based on what we learn. We indicate this with a [return] clause starting like [rbtree _ n -> ...], where [n] is bound in an [in] pattern. Such a [match] expression is applied immediately to the "old version" of the variable to be refined, and the type checker is happy. + In particular, we can extend the [match] to return %\textit{%##functions over the free variables whose types we want to refine##%}%. In the case of [balance1], we only find ourselves wanting to refine the type of one tree variable at a time. We match on one subtree of a node, and we want the type of the other subtree to be refined based on what we learn. We indicate this with a [return] clause starting like [rbtree _ n -> ...], where [n] is bound in an [in] pattern. Such a [match] expression is applied immediately to the %``%#"#old version#"#%''% of the variable to be refined, and the type checker is happy. - After writing this code, even I do not understand the precise details of how balancing works. I consulted Chris Okasaki's paper "Red-Black Trees in a Functional Setting" and transcribed the code to use dependent types. Luckily, the details are not so important here; types alone will tell us that insertion preserves balanced-ness, and we will prove that insertion produces trees containing the right keys. + After writing this code, even I do not understand the precise details of how balancing works. I consulted Chris Okasaki's paper %``%#"#Red-Black Trees in a Functional Setting#"#%''% and transcribed the code to use dependent types. 
Luckily, the details are not so important here; types alone will tell us that insertion preserves balanced-ness, and we will prove that insertion produces trees containing the right keys. Here is the symmetric function [balance2], for cases where the possibly-invalid tree appears on the right rather than on the left. *) diff -r da9ccc6bf572 -r 2c88fc1dbe33 src/OpSem.v --- a/src/OpSem.v Wed Nov 10 15:42:05 2010 -0500 +++ b/src/OpSem.v Wed Nov 10 16:31:04 2010 -0500 @@ -1,4 +1,4 @@ -(* Copyright (c) 2009, Adam Chlipala +(* Copyright (c) 2009-2010, Adam Chlipala * * This work is licensed under a * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 @@ -20,7 +20,7 @@ (** The last few chapters have shown how PHOAS can make it relatively painless to reason about program transformations. Each of our example languages so far has had a semantics that is easy to implement with an interpreter in Gallina. Since Gallina is designed to rule out non-termination, we cannot hope to give interpreter-based semantics to Turing-complete programming languages. Falling back on standard operational semantics leaves us with the old bureaucratic concerns about capture-avoiding substitution. Can we encode Turing-complete, higher-order languages in Coq without sacrificing the advantages of higher-order encoding? - Any approach that applies to basic untyped lambda calculus is likely to extend to most object languages of interest. We can attempt the "obvious" way of equipping a PHOAS definition for use in an operational semantics, without mentioning substitution explicitly. Specifically, we try to work with expressions with [var] instantiated with a type of values. *) + Any approach that applies to basic untyped lambda calculus is likely to extend to most object languages of interest. We can attempt the %``%#"#obvious#"#%''% way of equipping a PHOAS definition for use in an operational semantics, without mentioning substitution explicitly. Specifically, we try to work with expressions with [var] instantiated with a type of values. *) Section exp. Variable var : Type. @@ -58,7 +58,7 @@ ]] - Of course, this kind of definition is not structurally recursive, so Coq will not allow it. Getting "substitution for free" seems to require some similar kind of self-reference. + Of course, this kind of definition is not structurally recursive, so Coq will not allow it. Getting %``%#"#substitution for free#"#%''% seems to require some similar kind of self-reference. In this chapter, we will consider an alternate take on the problem. We add a level of indirection, introducing more explicit syntax to break the cycle in type definitions. Specifically, we represent function values as numbers that index into a %\emph{%##closure heap##%}% that our operational semantics maintains alongside the expression being evaluated. *) @@ -83,6 +83,7 @@ (** The second of our two definitions expresses when one list extends another. We will write [ls1 ~> ls2] to indicate that [ls1] could evolve into [ls2]; that is, [ls1] is a suffix of [ls2]. *) Definition extends (ls1 ls2 : list A) := exists ls, ls2 = ls ++ ls1. + (** printing ~> $\leadsto$ *) Infix "~>" := extends (no associativity, at level 80). (** We prove and add as hints a few basic theorems about [lookup] and [extends]. *) @@ -292,6 +293,7 @@ (** Finally, we define a CPS translation in the same way as in our previous example for simply-typed lambda calculus. 
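As an aside on the [extends] relation above: the basic facts covered by those hints have one-line proofs that simply exhibit the added prefix. A sketch with a hypothetical lemma name, assuming we are still inside the section that binds the type [A]:
[[
Theorem extends_cons : forall (x : A) (ls : list A),
  extends ls (x :: ls).
  intros; exists (x :: nil); reflexivity.
Qed.
]]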
*) +(** printing <-- $\longleftarrow$ *) Reserved Notation "x <-- e1 ; e2" (right associativity, at level 76, e1 at next level). Section cpsExp. @@ -364,6 +366,8 @@ -> cr (Source.VFun l1) (CPS.VFun l2). End cr. +(** printing |-- $\vdash$ *) +(** printing ~~ $\sim$ *) Notation "s1 & s2 |-- v1 ~~ v2" := (cr s1 s2 v1 v2) (no associativity, at level 70). Hint Constructors cr. diff -r da9ccc6bf572 -r 2c88fc1dbe33 src/Predicates.v --- a/src/Predicates.v Wed Nov 10 15:42:05 2010 -0500 +++ b/src/Predicates.v Wed Nov 10 16:31:04 2010 -0500 @@ -18,7 +18,7 @@ (** %\chapter{Inductive Predicates}% *) -(** The so-called "Curry-Howard Correspondence" states a formal connection between functional programs and mathematical proofs. In the last chapter, we snuck in a first introduction to this subject in Coq. Witness the close similarity between the types [unit] and [True] from the standard library: *) +(** The so-called %``%#"#Curry-Howard Correspondence#"#%''% states a formal connection between functional programs and mathematical proofs. In the last chapter, we snuck in a first introduction to this subject in Coq. Witness the close similarity between the types [unit] and [True] from the standard library: *) Print unit. (** %\vspace{-.15in}% [[ @@ -36,7 +36,7 @@ The essence of the argument is roughly this: to an engineer, not all functions of type [A -> B] are created equal, but all proofs of a proposition [P -> Q] are. This idea is known as %\textit{%##proof irrelevance##%}%, and its formalizations in logics prevent us from distinguishing between alternate proofs of the same proposition. Proof irrelevance is compatible with, but not derivable in, Gallina. Apart from this theoretical concern, I will argue that it is most effective to do engineering with Coq by employing different techniques for programs versus proofs. Most of this book is organized around that distinction, describing how to program, by applying standard functional programming techniques in the presence of dependent types; and how to prove, by writing custom Ltac decision procedures. -With that perspective in mind, this chapter is sort of a mirror image of the last chapter, introducing how to define predicates with inductive definitions. We will point out similarities in places, but much of the effective Coq user's bag of tricks is disjoint for predicates versus "datatypes." This chapter is also a covert introduction to dependent types, which are the foundation on which interesting inductive predicates are built, though we will rely on tactics to build dependently-typed proof terms for us for now. A future chapter introduces more manual application of dependent types. *) +With that perspective in mind, this chapter is sort of a mirror image of the last chapter, introducing how to define predicates with inductive definitions. We will point out similarities in places, but much of the effective Coq user's bag of tricks is disjoint for predicates versus %``%#"#datatypes.#"#%''% This chapter is also a covert introduction to dependent types, which are the foundation on which interesting inductive predicates are built, though we will rely on tactics to build dependently-typed proof terms for us for now. A future chapter introduces more manual application of dependent types. *) (** * Propositional Logic *) @@ -266,7 +266,7 @@ (* end hide *) - (** It would be a shame to have to plod manually through all proofs about propositional logic. Luckily, there is no need. 
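As a taste of the automation described next, a theorem of the flavor proved manually above collapses to a single step. This is a sketch assuming the section variables [P] and [Q] from this chapter, with a hypothetical theorem name:
[[
Theorem and_comm' : P /\ Q -> Q /\ P.
  tauto.
Qed.
]]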
One of the most basic Coq automation tactics is [tauto], which is a complete decision procedure for constructive propositional logic. (More on what "constructive" means in the next section.) We can use [tauto] to dispatch all of the purely propositional theorems we have proved so far. *) + (** It would be a shame to have to plod manually through all proofs about propositional logic. Luckily, there is no need. One of the most basic Coq automation tactics is [tauto], which is a complete decision procedure for constructive propositional logic. (More on what %``%#"#constructive#"#%''% means in the next section.) We can use [tauto] to dispatch all of the purely propositional theorems we have proved so far. *) Theorem or_comm' : P \/ Q -> Q \/ P. (* begin thide *) @@ -330,9 +330,9 @@ (** One potential point of confusion in the presentation so far is the distinction between [bool] and [Prop]. [bool] is a datatype whose two values are [true] and [false], while [Prop] is a more primitive type that includes among its members [True] and [False]. Why not collapse these two concepts into one, and why must there be more than two states of mathematical truth? -The answer comes from the fact that Coq implements %\textit{%##constructive##%}% or %\textit{%##intuitionistic##%}% logic, in contrast to the %\textit{%##classical##%}% logic that you may be more familiar with. In constructive logic, classical tautologies like [~ ~ P -> P] and [P \/ ~ P] do not always hold. In general, we can only prove these tautologies when [P] is %\textit{%##decidable##%}%, in the sense of computability theory. The Curry-Howard encoding that Coq uses for [or] allows us to extract either a proof of [P] or a proof of [~ P] from any proof of [P \/ ~ P]. Since our proofs are just functional programs which we can run, this would give us a decision procedure for the halting problem, where the instantiations of [P] would be formulas like "this particular Turing machine halts." +The answer comes from the fact that Coq implements %\textit{%##constructive##%}% or %\textit{%##intuitionistic##%}% logic, in contrast to the %\textit{%##classical##%}% logic that you may be more familiar with. In constructive logic, classical tautologies like [~ ~ P -> P] and [P \/ ~ P] do not always hold. In general, we can only prove these tautologies when [P] is %\textit{%##decidable##%}%, in the sense of computability theory. The Curry-Howard encoding that Coq uses for [or] allows us to extract either a proof of [P] or a proof of [~ P] from any proof of [P \/ ~ P]. Since our proofs are just functional programs which we can run, this would give us a decision procedure for the halting problem, where the instantiations of [P] would be formulas like %``%#"#this particular Turing machine halts.#"#%''% -Hence the distinction between [bool] and [Prop]. Programs of type [bool] are computational by construction; we can always run them to determine their results. Many [Prop]s are undecidable, and so we can write more expressive formulas with [Prop]s than with [bool]s, but the inevitable consequence is that we cannot simply "run a [Prop] to determine its truth." +Hence the distinction between [bool] and [Prop]. Programs of type [bool] are computational by construction; we can always run them to determine their results. 
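A one-line sketch of the computational half of this contrast, in vanilla Coq:
[[
Eval compute in negb true.
(* = false : bool *)
]]
A [bool] program always runs to an answer; there is no analogous command that decides an arbitrary [Prop].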
Many [Prop]s are undecidable, and so we can write more expressive formulas with [Prop]s than with [bool]s, but the inevitable consequence is that we cannot simply %``%#"#run a [Prop] to determine its truth.#"#%''% Constructive logic lets us define all of the logical connectives in an aesthetically-appealing way, with orthogonal inductive definitions. That is, each connective is defined independently using a simple, shared mechanism. Constructivity also enables a trick called %\textit{%##program extraction##%}%, where we write programs by phrasing them as theorems to be proved. Since our proofs are just functional programs, we can extract executable programs from our final proofs, which we could not do as naturally with classical proofs. @@ -341,7 +341,7 @@ (** * First-Order Logic *) -(** The [forall] connective of first-order logic, which we have seen in many examples so far, is built into Coq. Getting ahead of ourselves a bit, we can see it as the dependent function type constructor. In fact, implication and universal quantification are just different syntactic shorthands for the same Coq mechanism. A formula [P -> Q] is equivalent to [forall x : P, Q], where [x] does not appear in [Q]. That is, the "real" type of the implication says "for every proof of [P], there exists a proof of [Q]." +(** The [forall] connective of first-order logic, which we have seen in many examples so far, is built into Coq. Getting ahead of ourselves a bit, we can see it as the dependent function type constructor. In fact, implication and universal quantification are just different syntactic shorthands for the same Coq mechanism. A formula [P -> Q] is equivalent to [forall x : P, Q], where [x] does not appear in [Q]. That is, the %``%#"#real#"#%''% type of the implication says %``%#"#for every proof of [P], there exists a proof of [Q].#"#%''% Existential quantification is defined in the standard library. *) @@ -357,7 +357,7 @@ Theorem exist1 : exists x : nat, x + 1 = 2. (* begin thide *) (** remove printing exists *) - (** We can start this proof with a tactic [exists], which should not be confused with the formula constructor shorthand of the same name. (In the PDF version of this document, the reverse 'E' appears instead of the text "exists" in formulas.) *) + (** We can start this proof with a tactic [exists], which should not be confused with the formula constructor shorthand of the same name. (In the PDF version of this document, the reverse 'E' appears instead of the text %``%#"#exists#"#%''% in formulas.) *) exists 1. @@ -479,7 +479,7 @@ It seems that case analysis has not helped us much at all! Our sole hypothesis disappears, leaving us, if anything, worse off than we were before. What went wrong? We have met an important restriction in tactics like [destruct] and [induction] when applied to types with arguments. If the arguments are not already free variables, they will be replaced by new free variables internally before doing the case analysis or induction. Since the argument [1] to [isZero] is replaced by a fresh variable, we lose the crucial fact that it is not equal to [0]. - Why does Coq use this restriction? We will discuss the issue in detail in a future chapter, when we see the dependently-typed programming techniques that would allow us to write this proof term manually. For now, we just say that the algorithmic problem of "logically complete case analysis" is undecidable when phrased in Coq's logic. 
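Looping back for a moment to the point above that implication is degenerate universal quantification: the two forms are interchangeable definitionally, as this one-line sanity check shows (hypothetical name, vanilla Coq):
[[
Definition imp_as_forall (P Q : Prop) : (P -> Q) -> forall _ : P, Q :=
  fun pf => pf.
]]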
A few tactics and design patterns that we will present in this chapter suffice in almost all cases. For the current example, what we want is a tactic called [inversion], which corresponds to the concept of inversion that is frequently used with natural deduction proof systems. *) + Why does Coq use this restriction? We will discuss the issue in detail in a future chapter, when we see the dependently-typed programming techniques that would allow us to write this proof term manually. For now, we just say that the algorithmic problem of %``%#"#logically complete case analysis#"#%''% is undecidable when phrased in Coq's logic. A few tactics and design patterns that we will present in this chapter suffice in almost all cases. For the current example, what we want is a tactic called [inversion], which corresponds to the concept of inversion that is frequently used with natural deduction proof systems. *) Undo. inversion 1. @@ -658,7 +658,7 @@ ]] - Unfortunately, the goal mentions [n0] where it would need to mention [n] to match [IHn]. We could keep looking for a way to finish this proof from here, but it turns out that we can make our lives much easier by changing our basic strategy. Instead of inducting on the structure of [n], we should induct %\textit{%##on the structure of one of the [even] proofs##%}%. This technique is commonly called %\textit{%##rule induction##%}% in programming language semantics. In the setting of Coq, we have already seen how predicates are defined using the same inductive type mechanism as datatypes, so the fundamental unity of rule induction with "normal" induction is apparent. *) + Unfortunately, the goal mentions [n0] where it would need to mention [n] to match [IHn]. We could keep looking for a way to finish this proof from here, but it turns out that we can make our lives much easier by changing our basic strategy. Instead of inducting on the structure of [n], we should induct %\textit{%##on the structure of one of the [even] proofs##%}%. This technique is commonly called %\textit{%##rule induction##%}% in programming language semantics. In the setting of Coq, we have already seen how predicates are defined using the same inductive type mechanism as datatypes, so the fundamental unity of rule induction with %``%#"#normal#"#%''% induction is apparent. *) Restart. @@ -775,7 +775,7 @@ (** We write the proof in a way that avoids the use of local variable or hypothesis names, using the [match] tactic form to do pattern-matching on the goal. We use unification variables prefixed by question marks in the pattern, and we take advantage of the possibility to mention a unification variable twice in one pattern, to enforce equality between occurrences. The hint to rewrite with [plus_n_Sm] in a particular direction saves us from having to figure out the right place to apply that theorem, and we also take critical advantage of a new tactic, [eauto]. -[crush] uses the tactic [intuition], which, when it runs out of tricks to try using only propositional logic, by default tries the tactic [auto], which we saw in an earlier example. [auto] attempts Prolog-style logic programming, searching through all proof trees up to a certain depth that are built only out of hints that have been registered with [Hint] commands. Compared to Prolog, [auto] places an important restriction: it never introduces new unification variables during search. That is, every time a rule is applied during proof search, all of its arguments must be deducible by studying the form of the goal. 
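A small sketch of that Prolog flavor, assuming the [even] predicate defined earlier in this chapter and registering its constructors as hints:
[[
Hint Constructors even.

Theorem even_4 : even 4.
  auto.
Qed.
]]
Each [auto] step applies a constructor whose conclusion matches the goal, with no unification variables left over.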
[eauto] relaxes this restriction, at the cost of possibly exponentially greater running time. In this particular case, we know that [eauto] has only a small space of proofs to search, so it makes sense to run it. It is common in effectively-automated Coq proofs to see a bag of standard tactics applied to pick off the "easy" subgoals, finishing with [eauto] to handle the tricky parts that can benefit from ad-hoc exhaustive search. +[crush] uses the tactic [intuition], which, when it runs out of tricks to try using only propositional logic, by default tries the tactic [auto], which we saw in an earlier example. [auto] attempts Prolog-style logic programming, searching through all proof trees up to a certain depth that are built only out of hints that have been registered with [Hint] commands. Compared to Prolog, [auto] places an important restriction: it never introduces new unification variables during search. That is, every time a rule is applied during proof search, all of its arguments must be deducible by studying the form of the goal. [eauto] relaxes this restriction, at the cost of possibly exponentially greater running time. In this particular case, we know that [eauto] has only a small space of proofs to search, so it makes sense to run it. It is common in effectively-automated Coq proofs to see a bag of standard tactics applied to pick off the %``%#"#easy#"#%''% subgoals, finishing with [eauto] to handle the tricky parts that can benefit from ad-hoc exhaustive search. The original theorem now follows trivially from our lemma. *) @@ -815,7 +815,7 @@ (* begin hide *) (* In-class exercises *) -(* EX: Define a type [prop] of simple boolean formulas made up only of truth, falsehood, binary conjunction, and binary disjunction. Define an inductive predicate [holds] that captures when [prop]s are valid, and define a predicate [falseFree] that captures when a [prop] does not contain the "false" formula. Prove that every false-free [prop] is valid. *) +(* EX: Define a type [prop] of simple boolean formulas made up only of truth, falsehood, binary conjunction, and binary disjunction. Define an inductive predicate [holds] that captures when [prop]s are valid, and define a predicate [falseFree] that captures when a [prop] does not contain the %``%#"#false#"#%''% formula. Prove that every false-free [prop] is valid. *) (* begin thide *) Inductive prop : Set := @@ -889,12 +889,12 @@ %\item%#
    13. # [P /\ (Q \/ R) -> (P /\ Q) \/ (P /\ R)]#
    14. # #
    #%\end{enumerate}% - %\item%#
  • # Prove the following tautology of first-order logic, using only the tactics [apply], [assert], [assumption], [destruct], [eapply], [eassumption], and %\textit{%##exists##%}%. You will probably find [assert] useful for stating and proving an intermediate lemma, enabling a kind of "forward reasoning," in contrast to the "backward reasoning" that is the default for Coq tactics. [eassumption] is a version of [assumption] that will do matching of unification variables. Let some variable [T] of type [Set] be the set of individuals. [x] is a constant symbol, [p] is a unary predicate symbol, [q] is a binary predicate symbol, and [f] is a unary function symbol. + %\item%#
  • # Prove the following tautology of first-order logic, using only the tactics [apply], [assert], [assumption], [destruct], [eapply], [eassumption], and %\textit{%##exists##%}%. You will probably find [assert] useful for stating and proving an intermediate lemma, enabling a kind of %``%#"#forward reasoning,#"#%''% in contrast to the %``%#"#backward reasoning#"#%''% that is the default for Coq tactics. [eassumption] is a version of [assumption] that will do matching of unification variables. Let some variable [T] of type [Set] be the set of individuals. [x] is a constant symbol, [p] is a unary predicate symbol, [q] is a binary predicate symbol, and [f] is a unary function symbol. %\begin{enumerate}%#
      # %\item%#
    1. # [p x -> (forall x, p x -> exists y, q x y) -> (forall x y, q x y -> q y (f y)) -> exists z, q z (f z)]#
    2. # #
  • #%\end{enumerate}% -%\item%#
  • # Define an inductive predicate capturing when a natural number is an integer multiple of either 6 or 10. Prove that 13 does not satisfy your predicate, and prove that any number satisfying the predicate is not odd. It is probably easiest to prove the second theorem by indicating "odd-ness" as equality to [2 * n + 1] for some [n].#
  • # +%\item%#
  • # Define an inductive predicate capturing when a natural number is an integer multiple of either 6 or 10. Prove that 13 does not satisfy your predicate, and prove that any number satisfying the predicate is not odd. It is probably easiest to prove the second theorem by indicating %``%#"#odd-ness#"#%''% as equality to [2 * n + 1] for some [n].#
  • # %\item%#
  • # Define a simple programming language, its semantics, and its typing rules, and then prove that well-typed programs cannot go wrong. Specifically: %\begin{enumerate}%#
      # @@ -903,7 +903,7 @@ %\item%#
    1. # Define an inductive type [cmd] of commands, containing expressions and variable assignments. A variable assignment node should contain the variable being assigned, the expression being assigned to it, and the command to run afterward.#
    2. # %\item%#
    3. # Define an inductive type [val] of values, containing natural number constants and pairings of values.#
    4. # %\item%#
    5. # Define a type of variable assignments, which assign a value to each variable.#
    6. # - %\item%#
    7. # Define a big-step evaluation relation [eval], capturing what it means for an expression to evaluate to a value under a particular variable assignment. "Big step" means that the evaluation of every expression should be proved with a single instance of the inductive predicate you will define. For instance, "[1 + 1] evaluates to [2] under assignment [va]" should be derivable for any assignment [va].#
    8. # + %\item%#
    9. # Define a big-step evaluation relation [eval], capturing what it means for an expression to evaluate to a value under a particular variable assignment. %``%#"#Big step#"#%''% means that the evaluation of every expression should be proved with a single instance of the inductive predicate you will define. For instance, %``%#"#[1 + 1] evaluates to [2] under assignment [va]#"#%''% should be derivable for any assignment [va].#
    10. # %\item%#
    11. # Define a big-step evaluation relation [run], capturing what it means for a command to run to a value under a particular variable assignment. The value of a command is the result of evaluating its final expression.#
    12. # %\item%#
    13. # Define a type of variable typings, which are like variable assignments, but map variables to types instead of values. You might use polymorphism to share some code with your variable assignments.#
    14. # %\item%#
    15. # Define typing judgments for expressions, values, and commands. The expression and command cases will be in terms of a typing assignment.#
    16. # @@ -921,7 +921,7 @@ end ]] ## %\item%#
    17. # You probably do not want to use an inductive definition for compatibility of variable assignments and typings.#
    18. # - %\item%#
    19. # The [Tactics] module from this book contains a variant [crush'] of [crush]. [crush'] takes two arguments. The first argument is a list of lemmas and other functions to be tried automatically in "forward reasoning" style, where we add new facts without being sure yet that they link into a proof of the conclusion. The second argument is a list of predicates on which inversion should be attempted automatically. For instance, running [crush' (lemma1, lemma2) pred] will search for chances to apply [lemma1] and [lemma2] to hypotheses that are already available, adding the new concluded fact if suitable hypotheses can be found. Inversion will be attempted on any hypothesis using [pred], but only those inversions that narrow the field of possibilities to one possible rule will be kept. The format of the list arguments to [crush'] is that you can pass an empty list as [tt], a singleton list as the unadorned single element, and a multiple-element list as a tuple of the elements.#
    20. # + %\item%#
    21. # The [Tactics] module from this book contains a variant [crush'] of [crush]. [crush'] takes two arguments. The first argument is a list of lemmas and other functions to be tried automatically in %``%#"#forward reasoning#"#%''% style, where we add new facts without being sure yet that they link into a proof of the conclusion. The second argument is a list of predicates on which inversion should be attempted automatically. For instance, running [crush' (lemma1, lemma2) pred] will search for chances to apply [lemma1] and [lemma2] to hypotheses that are already available, adding the new concluded fact if suitable hypotheses can be found. Inversion will be attempted on any hypothesis using [pred], but only those inversions that narrow the field of possibilities to one possible rule will be kept. The format of the list arguments to [crush'] is that you can pass an empty list as [tt], a singleton list as the unadorned single element, and a multiple-element list as a tuple of the elements.#
    22. # %\item%#
    23. # If you want [crush'] to apply polymorphic lemmas, you may have to do a little extra work, if the type parameter is not a free variable of your proof context (so that [crush'] does not know to try it). For instance, if you define a polymorphic map insert function [assign] of some type [forall T : Set, ...], and you want particular applications of [assign] added automatically with type parameter [U], you would need to include [assign] in the lemma list as [assign U] (if you have implicit arguments off) or [assign (T := U)] or [@assign U] (if you have implicit arguments on).#
    24. # #
  • #%\end{enumerate}% diff -r da9ccc6bf572 -r 2c88fc1dbe33 src/StackMachine.v --- a/src/StackMachine.v Wed Nov 10 15:42:05 2010 -0500 +++ b/src/StackMachine.v Wed Nov 10 16:31:04 2010 -0500 @@ -71,7 +71,7 @@ %\medskip% -Now we are ready to say what these programs mean. We will do this by writing an interpreter that can be thought of as a trivial operational or denotational semantics. (If you are not familiar with these semantic techniques, no need to worry; we will stick to "common sense" constructions.) *) +Now we are ready to say what these programs mean. We will do this by writing an interpreter that can be thought of as a trivial operational or denotational semantics. (If you are not familiar with these semantic techniques, no need to worry; we will stick to %``%#"#common sense#"#%''% constructions.) *) Definition binopDenote (b : binop) : nat -> nat -> nat := match b with @@ -101,9 +101,9 @@ ]] -Languages like Haskell and ML have a convenient %\textit{%##principal typing##%}% property, which gives us strong guarantees about how effective type inference will be. Unfortunately, Coq's type system is so expressive that any kind of "complete" type inference is impossible, and the task even seems to be hard heuristically in practice. Nonetheless, Coq includes some very helpful heuristics, many of them copying the workings of Haskell and ML type-checkers for programs that fall in simple fragments of Coq's language. +Languages like Haskell and ML have a convenient %\textit{%##principal typing##%}% property, which gives us strong guarantees about how effective type inference will be. Unfortunately, Coq's type system is so expressive that any kind of %``%#"#complete#"#%''% type inference is impossible, and the task even seems to be hard heuristically in practice. Nonetheless, Coq includes some very helpful heuristics, many of them copying the workings of Haskell and ML type-checkers for programs that fall in simple fragments of Coq's language. -This is as good a time as any to mention the preponderance of different languages associated with Coq. The theoretical foundation of Coq is a formal system called the %\textit{%##Calculus of Inductive Constructions (CIC)##%}%, which is an extension of the older %\textit{%##Calculus of Constructions (CoC)##%}%. CIC is quite a spartan foundation, which is helpful for proving metatheory but not so helpful for real development. Still, it is nice to know that it has been proved that CIC enjoys properties like %\textit{%##strong normalization##%}%, meaning that every program (and, more importantly, every proof term) terminates; and %\textit{%##relative consistency##%}% with systems like versions of Zermelo Fraenkel set theory, which roughly means that you can believe that Coq proofs mean that the corresponding propositions are "really true," if you believe in set theory. +This is as good a time as any to mention the preponderance of different languages associated with Coq. The theoretical foundation of Coq is a formal system called the %\textit{%##Calculus of Inductive Constructions (CIC)##%}%, which is an extension of the older %\textit{%##Calculus of Constructions (CoC)##%}%. CIC is quite a spartan foundation, which is helpful for proving metatheory but not so helpful for real development. 
Still, it is nice to know that it has been proved that CIC enjoys properties like %\textit{%##strong normalization##%}%, meaning that every program (and, more importantly, every proof term) terminates; and %\textit{%##relative consistency##%}% with systems like versions of Zermelo Fraenkel set theory, which roughly means that you can believe that Coq proofs mean that the corresponding propositions are %``%#"#really true,#"#%''% if you believe in set theory. Coq is actually based on an extension of CIC called %\textit{%##Gallina##%}%. The text after the [:=] and before the period in the last code example is a term of Gallina. Gallina adds many useful features that are not compiled internally to more primitive CIC features. The important metatheorems about CIC have not been extended to the full breadth of these features, but most Coq users do not seem to lose much sleep over this omission. @@ -147,7 +147,7 @@ (** An instruction either pushes a constant onto the stack or pops two arguments, applies a binary operator to them, and pushes the result onto the stack. A program is a list of instructions, and a stack is a list of natural numbers. -We can give instructions meanings as functions from stacks to optional stacks, where running an instruction results in [None] in case of a stack underflow and results in [Some s'] when the result of execution is the new stack [s']. [::] is the "list cons" operator from the Coq standard library. *) +We can give instructions meanings as functions from stacks to optional stacks, where running an instruction results in [None] in case of a stack underflow and results in [Some s'] when the result of execution is the new stack [s']. [::] is the %``%#"#list cons#"#%''% operator from the Coq standard library. *) Definition instrDenote (i : instr) (s : stack) : option stack := match i with @@ -233,7 +233,7 @@ (* end hide *) (* begin thide *) -(** Though a pencil-and-paper proof might clock out at this point, writing "by a routine induction on [e]," it turns out not to make sense to attack this proof directly. We need to use the standard trick of %\textit{%##strengthening the induction hypothesis##%}%. We do that by proving an auxiliary lemma: +(** Though a pencil-and-paper proof might clock out at this point, writing %``%#"#by a routine induction on [e],#"#%''% it turns out not to make sense to attack this proof directly. We need to use the standard trick of %\textit{%##strengthening the induction hypothesis##%}%. We do that by proving an auxiliary lemma: *) Lemma compile_correct' : forall e p s, @@ -585,7 +585,7 @@ First, the indices of the range of each data constructor must be type variables bound at the top level of the datatype definition. There is no way to do what we did here, where we, for instance, say that [TPlus] is a constructor building a [tbinop] whose indices are all fixed at [Nat]. %\textit{%##Generalized algebraic datatypes (GADTs)##%}% are a popular feature in GHC Haskell and other languages that removes this first restriction. -The second restriction is not lifted by GADTs. In ML and Haskell, indices of types must be types and may not be %\textit{%##expressions##%}%. In Coq, types may be indexed by arbitrary Gallina terms. Type indices can live in the same universe as programs, and we can compute with them just like regular programs. 
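To make that concrete: a type can be computed by an ordinary program, and conversion makes the result interchangeable with the type it reduces to. A sketch with a hypothetical name:
[[
Definition cond (b : bool) : Set := if b then nat else bool.

Check (3 : cond true).
(* accepted, since [cond true] reduces to [nat] *)
]]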
Haskell supports a hobbled form of computation in type indices based on multi-parameter type classes, and recent extensions like type functions bring Haskell programming even closer to "real" functional programming with types, but, without dependent typing, there must always be a gap between how one programs with types and how one programs normally. +The second restriction is not lifted by GADTs. In ML and Haskell, indices of types must be types and may not be %\textit{%##expressions##%}%. In Coq, types may be indexed by arbitrary Gallina terms. Type indices can live in the same universe as programs, and we can compute with them just like regular programs. Haskell supports a hobbled form of computation in type indices based on multi-parameter type classes, and recent extensions like type functions bring Haskell programming even closer to %``%#"#real#"#%''% functional programming with types, but, without dependent typing, there must always be a gap between how one programs with types and how one programs normally. *) (** We can define a similar type family for typed expressions. *) @@ -605,7 +605,7 @@ (** It can take a few moments to come to terms with the fact that [Set], the type of types of programs, is itself a first-class type, and that we can write functions that return [Set]s. Past that wrinkle, the definition of [typeDenote] is trivial, relying on the [nat] and [bool] types from the Coq standard library. -We need to define one auxiliary function, implementing a boolean binary "less-than" operator, which only appears in the standard library with a type fancier than what we are prepared to deal with here. The code is entirely standard and ML-like, with the one caveat being that the Coq [nat] type uses a unary representation, where [O] is zero and [S n] is the successor of [n]. +We need to define one auxiliary function, implementing a boolean binary %``%#"#less-than#"#%''% operator, which only appears in the standard library with a type fancier than what we are prepared to deal with here. The code is entirely standard and ML-like, with the one caveat being that the Coq [nat] type uses a unary representation, where [O] is zero and [S n] is the successor of [n]. *) Fixpoint lessThan (n1 n2 : nat) : bool := @@ -678,7 +678,7 @@ (** ** Target Language *) -(** Now we want to define a suitable stack machine target for compilation. In the example of the untyped language, stack machine programs could encounter stack underflows and "get stuck." This was unfortunate, since we had to deal with this complication even though we proved that our compiler never produced underflowing programs. We could have used dependent types to force all stack machine programs to be underflow-free. +(** Now we want to define a suitable stack machine target for compilation. In the example of the untyped language, stack machine programs could encounter stack underflows and %``%#"#get stuck.#"#%''% This was unfortunate, since we had to deal with this complication even though we proved that our compiler never produced underflowing programs. We could have used dependent types to force all stack machine programs to be underflow-free. For our new languages, besides underflow, we also have the problem of stack slots with naturals instead of bools or vice versa. This time, we will use indexed typed families to avoid the need to reason about potential failures. @@ -864,7 +864,7 @@ Lemma tcompile_correct' : forall t (e : texp t) ts (s : vstack ts), tprogDenote (tcompile e ts) s = (texpDenote e, s). 
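(** As a sanity check, the statement specializes to concrete runs like the following, reusing this chapter's [TNConst] constructor; the expected result is the denotation paired onto the unchanged empty stack:
[[
Eval simpl in tprogDenote (tcompile (TNConst 42) nil) tt.
(* = (42, tt) : vstack (Nat :: nil) *)
]]
*)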
-(** While lemma [compile_correct'] quantified over a program that is the "continuation" for the expression we are considering, here we avoid drawing in any extra syntactic elements. In addition to the source expression and its type, we also quantify over an initial stack type and a stack compatible with it. Running the compilation of the program starting from that stack, we should arrive at a stack that differs only in having the program's denotation pushed onto it. +(** While lemma [compile_correct'] quantified over a program that is the %``%#"#continuation#"#%''% for the expression we are considering, here we avoid drawing in any extra syntactic elements. In addition to the source expression and its type, we also quantify over an initial stack type and a stack compatible with it. Running the compilation of the program starting from that stack, we should arrive at a stack that differs only in having the program's denotation pushed onto it. Let us try to prove this theorem in the same way that we settled on in the last section. *) diff -r da9ccc6bf572 -r 2c88fc1dbe33 src/Subset.v --- a/src/Subset.v Wed Nov 10 15:42:05 2010 -0500 +++ b/src/Subset.v Wed Nov 10 16:31:04 2010 -0500 @@ -20,7 +20,7 @@ \chapter{Subset Types and Variations}% *) -(** So far, we have seen many examples of what we might call "classical program verification." We write programs, write their specifications, and then prove that the programs satisfy their specifications. The programs that we have written in Coq have been normal functional programs that we could just as well have written in Haskell or ML. In this chapter, we start investigating uses of %\textit{%##dependent types##%}% to integrate programming, specification, and proving into a single phase. *) +(** So far, we have seen many examples of what we might call %``%#"#classical program verification.#"#%''% We write programs, write their specifications, and then prove that the programs satisfy their specifications. The programs that we have written in Coq have been normal functional programs that we could just as well have written in Haskell or ML. In this chapter, we start investigating uses of %\textit{%##dependent types##%}% to integrate programming, specification, and proving into a single phase. *) (** * Introducing Subset Types *) @@ -234,7 +234,7 @@ end). (* begin thide *) - (** We build [pred_strong4] using tactic-based proving, beginning with a [Definition] command that ends in a period before a definition is given. Such a command enters the interactive proving mode, with the type given for the new identifier as our proof goal. We do most of the work with the [refine] tactic, to which we pass a partial "proof" of the type we are trying to prove. There may be some pieces left to fill in, indicated by underscores. Any underscore that Coq cannot reconstruct with type inference is added as a proof subgoal. In this case, we have two subgoals: + (** We build [pred_strong4] using tactic-based proving, beginning with a [Definition] command that ends in a period before a definition is given. Such a command enters the interactive proving mode, with the type given for the new identifier as our proof goal. We do most of the work with the [refine] tactic, to which we pass a partial %``%#"#proof#"#%''% of the type we are trying to prove. There may be some pieces left to fill in, indicated by underscores. Any underscore that Coq cannot reconstruct with type inference is added as a proof subgoal. 
In this case, we have two subgoals: [[ 2 subgoals @@ -260,7 +260,7 @@ (* end thide *) Defined. -(** We end the "proof" with [Defined] instead of [Qed], so that the definition we constructed remains visible. This contrasts to the case of ending a proof with [Qed], where the details of the proof are hidden afterward. Let us see what our proof script constructed. *) +(** We end the %``%#"#proof#"#%''% with [Defined] instead of [Qed], so that the definition we constructed remains visible. This contrasts to the case of ending a proof with [Qed], where the details of the proof are hidden afterward. Let us see what our proof script constructed. *) Print pred_strong4. (** %\vspace{-.15in}% [[ @@ -448,7 +448,7 @@ (** %\smallskip% -We can build "smart" versions of the usual boolean operators and put them to good use in certified programming. For instance, here is a [sumbool] version of boolean "or." *) +We can build %``%#"#smart#"#%''% versions of the usual boolean operators and put them to good use in certified programming. For instance, here is a [sumbool] version of boolean %``%#"#or.#"#%''% *) (* begin thide *) Notation "x || y" := (if x then Yes else Reduce y). @@ -592,7 +592,7 @@ (** * Monadic Notations *) -(** We can treat [maybe] like a monad, in the same way that the Haskell [Maybe] type is interpreted as a failure monad. Our [maybe] has the wrong type to be a literal monad, but a "bind"-like notation will still be helpful. *) +(** We can treat [maybe] like a monad, in the same way that the Haskell [Maybe] type is interpreted as a failure monad. Our [maybe] has the wrong type to be a literal monad, but a %``%#"#bind#"#%''%-like notation will still be helpful. *) Notation "x <- e1 ; e2" := (match e1 with | Unknown => ?? @@ -611,7 +611,7 @@ [[(m1, m2)]]); tauto. Defined. -(** We can build a [sumor] version of the "bind" notation and use it to write a similarly straightforward version of this function. *) +(** We can build a [sumor] version of the %``%#"#bind#"#%''% notation and use it to write a similarly straightforward version of this function. *) (** printing <-- $\longleftarrow$ *) @@ -668,7 +668,7 @@ decide equality. Defined. -(** Another notation complements the monadic notation for [maybe] that we defined earlier. Sometimes we want to include "assertions" in our procedures. That is, we want to run a decision procedure and fail if it fails; otherwise, we want to continue, with the proof that it produced made available to us. This infix notation captures that idea, for a procedure that returns an arbitrary two-constructor type. *) +(** Another notation complements the monadic notation for [maybe] that we defined earlier. Sometimes we want to include %``%#"#assertions#"#%''% in our procedures. That is, we want to run a decision procedure and fail if it fails; otherwise, we want to continue, with the proof that it produced made available to us. This infix notation captures that idea, for a procedure that returns an arbitrary two-constructor type. *) Notation "e1 ;; e2" := (if e1 then e2 else ??) (right associativity, at level 60). @@ -794,7 +794,7 @@ (** %\smallskip% -We can adapt this implementation to use [sumor], so that we know our type-checker only fails on ill-typed inputs. First, we define an analogue to the "assertion" notation. *) +We can adapt this implementation to use [sumor], so that we know our type-checker only fails on ill-typed inputs. First, we define an analogue to the %``%#"#assertion#"#%''% notation. *) (* begin thide *) Notation "e1 ;;; e2" := (if e1 then e2 else !!) 
@@ -889,7 +889,7 @@ %\item%#
  • # Define a function [negate] that returns a simplified version of the negation of a [prop]. That is, the function should have type [forall p : prop, {p' : prop | forall truth, propDenote truth p <-> ~ propDenote truth p'}]. To simplify a variable, just negate it. Simplify a negation by returning its argument. Simplify conjunctions and disjunctions using De Morgan's laws, negating the arguments recursively and switching the kind of connective. [decide] may be useful in some of the proof obligations, even if you do not use it in the computational part of [negate]'s definition. Lemmas like [decide] allow us to compensate for the lack of a general Law of the Excluded Middle in CIC.#
  • # ##%\end{enumerate}% ## -%\item%#
  • # Implement the DPLL satisfiability decision procedure for boolean formulas in conjunctive normal form, with a dependent type that guarantees its correctness. An example of a reasonable type for this function would be [forall f : formula, {truth : tvals | formulaTrue truth f} + {forall truth, ~ formulaTrue truth f}]. Implement at least "the basic backtracking algorithm" as defined here: +%\item%#
  • # Implement the DPLL satisfiability decision procedure for boolean formulas in conjunctive normal form, with a dependent type that guarantees its correctness. An example of a reasonable type for this function would be [forall f : formula, {truth : tvals | formulaTrue truth f} + {forall truth, ~ formulaTrue truth f}]. Implement at least %``%#"#the basic backtracking algorithm#"#%''% as defined here: %\begin{center}\url{http://en.wikipedia.org/wiki/DPLL_algorithm}\end{center}% #
    http://en.wikipedia.org/wiki/DPLL_algorithm
    # It might also be instructive to implement the unit propagation and pure literal elimination optimizations described there or some other optimizations that have been used in modern SAT solvers.#
  • #