comparison src/Predicates.v @ 398:05efde66559d
Get it working in Coq 8.4beta1; use nice coqdoc notation for italics
author   | Adam Chlipala <adam@chlipala.net>
date     | Wed, 06 Jun 2012 11:25:13 -0400
parents  | cc8d0503619f
children | 5986e9fd40b5
397:d62ed7381a0b | 398:05efde66559d |
---|---|
52 ]] | 52 ]] |
53 *) | 53 *) |
54 | 54 |
55 (** Recall that [unit] is the type with only one value, and [True] is the proposition that always holds. Despite this superficial difference between the two concepts, in both cases we can use the same inductive definition mechanism. The connection goes further than this. We see that we arrive at the definition of [True] by replacing [unit] by [True], [tt] by [I], and [Set] by [Prop]. The first two of these differences are superficial changes of names, while the third difference is the crucial one for separating programs from proofs. A term [T] of type [Set] is a type of programs, and a term of type [T] is a program. A term [T] of type [Prop] is a logical proposition, and its proofs are of type [T]. Chapter 12 goes into more detail about the theoretical differences between [Prop] and [Set]. For now, we will simply follow common intuitions about what a proof is. | 55 (** Recall that [unit] is the type with only one value, and [True] is the proposition that always holds. Despite this superficial difference between the two concepts, in both cases we can use the same inductive definition mechanism. The connection goes further than this. We see that we arrive at the definition of [True] by replacing [unit] by [True], [tt] by [I], and [Set] by [Prop]. The first two of these differences are superficial changes of names, while the third difference is the crucial one for separating programs from proofs. A term [T] of type [Set] is a type of programs, and a term of type [T] is a program. A term [T] of type [Prop] is a logical proposition, and its proofs are of type [T]. Chapter 12 goes into more detail about the theoretical differences between [Prop] and [Set]. For now, we will simply follow common intuitions about what a proof is. |
56 | 56 |
57 The type [unit] has one value, [tt]. The type [True] has one proof, [I]. Why distinguish between these two types? Many people who have read about Curry-Howard in an abstract context and not put it to use in proof engineering answer that the two types in fact %\textit{%#<i>#should not#</i>#%}% be distinguished. There is a certain aesthetic appeal to this point of view, but I want to argue that it is best to treat Curry-Howard very loosely in practical proving. There are Coq-specific reasons for preferring the distinction, involving efficient compilation and avoidance of paradoxes in the presence of classical math, but I will argue that there is a more general principle that should lead us to avoid conflating programming and proving. | 57 The type [unit] has one value, [tt]. The type [True] has one proof, [I]. Why distinguish between these two types? Many people who have read about Curry-Howard in an abstract context and not put it to use in proof engineering answer that the two types in fact _should not_ be distinguished. There is a certain aesthetic appeal to this point of view, but I want to argue that it is best to treat Curry-Howard very loosely in practical proving. There are Coq-specific reasons for preferring the distinction, involving efficient compilation and avoidance of paradoxes in the presence of classical math, but I will argue that there is a more general principle that should lead us to avoid conflating programming and proving. |
58 | 58 |
59 The essence of the argument is roughly this: to an engineer, not all functions of type [A -> B] are created equal, but all proofs of a proposition [P -> Q] are. This idea is known as %\index{proof irrelevance}\textit{%#<i>#proof irrelevance#</i>#%}%, and its formalizations in logics prevent us from distinguishing between alternate proofs of the same proposition. Proof irrelevance is compatible with, but not derivable in, Gallina. Apart from this theoretical concern, I will argue that it is most effective to do engineering with Coq by employing different techniques for programs versus proofs. Most of this book is organized around that distinction, describing how to program, by applying standard functional programming techniques in the presence of dependent types; and how to prove, by writing custom Ltac decision procedures. | 59 The essence of the argument is roughly this: to an engineer, not all functions of type [A -> B] are created equal, but all proofs of a proposition [P -> Q] are. This idea is known as %\index{proof irrelevance}%_proof irrelevance_, and its formalizations in logics prevent us from distinguishing between alternate proofs of the same proposition. Proof irrelevance is compatible with, but not derivable in, Gallina. Apart from this theoretical concern, I will argue that it is most effective to do engineering with Coq by employing different techniques for programs versus proofs. Most of this book is organized around that distinction, describing how to program, by applying standard functional programming techniques in the presence of dependent types; and how to prove, by writing custom Ltac decision procedures. |
60 | 60 |
61 With that perspective in mind, this chapter is sort of a mirror image of the last chapter, introducing how to define predicates with inductive definitions. We will point out similarities in places, but much of the effective Coq user's bag of tricks is disjoint for predicates versus %``%#"#datatypes.#"#%''% This chapter is also a covert introduction to dependent types, which are the foundation on which interesting inductive predicates are built, though we will rely on tactics to build dependently-typed proof terms for us for now. A future chapter introduces more manual application of dependent types. *) | 61 With that perspective in mind, this chapter is sort of a mirror image of the last chapter, introducing how to define predicates with inductive definitions. We will point out similarities in places, but much of the effective Coq user's bag of tricks is disjoint for predicates versus %``%#"#datatypes.#"#%''% This chapter is also a covert introduction to dependent types, which are the foundation on which interesting inductive predicates are built, though we will rely on tactics to build dependently-typed proof terms for us for now. A future chapter introduces more manual application of dependent types. *) |
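As a small runnable illustration of the [Set]/[Prop] reading above (the names [a_program] and [a_proof] are ours, purely for illustration): [tt] is a program inhabiting the type [unit], while [I] is a proof inhabiting the proposition [True].

[[
Definition a_program : unit := tt.

Theorem a_proof : True.
  exact I.
Qed.
]]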
62 | 62 |
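The proof-irrelevance principle mentioned above can be stated in Gallina as an axiom, essentially as the standard library's ProofIrrelevance module phrases it; it is consistent with Coq's logic but not provable without axioms.

[[
Axiom proof_irrelevance : forall (P : Prop) (p1 p2 : P), p1 = p2.
]]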
63 | 63 |
64 (** * Propositional Logic *) | 64 (** * Propositional Logic *) |
379 | 379 |
380 (** * What Does It Mean to Be Constructive? *) | 380 (** * What Does It Mean to Be Constructive? *) |
381 | 381 |
382 (** One potential point of confusion in the presentation so far is the distinction between [bool] and [Prop]. [bool] is a datatype whose two values are [true] and [false], while [Prop] is a more primitive type that includes among its members [True] and [False]. Why not collapse these two concepts into one, and why must there be more than two states of mathematical truth? | 382 (** One potential point of confusion in the presentation so far is the distinction between [bool] and [Prop]. [bool] is a datatype whose two values are [true] and [false], while [Prop] is a more primitive type that includes among its members [True] and [False]. Why not collapse these two concepts into one, and why must there be more than two states of mathematical truth? |
383 | 383 |
384 The answer comes from the fact that Coq implements %\index{constructive logic}\textit{%#<i>#constructive#</i>#%}% or %\index{intuitionistic logic|see{constructive logic}}\textit{%#<i>#intuitionistic#</i>#%}% logic, in contrast to the %\index{classical logic}\textit{%#<i>#classical#</i>#%}% logic that you may be more familiar with. In constructive logic, classical tautologies like [~ ~ P -> P] and [P \/ ~ P] do not always hold. In general, we can only prove these tautologies when [P] is %\index{decidability}\textit{%#<i>#decidable#</i>#%}%, in the sense of %\index{computability|see{decidability}}%computability theory. The Curry-Howard encoding that Coq uses for [or] allows us to extract either a proof of [P] or a proof of [~ P] from any proof of [P \/ ~ P]. Since our proofs are just functional programs which we can run, a general %\index{law of the excluded middle}%law of the excluded middle would give us a decision procedure for the halting problem, where the instantiations of [P] would be formulas like %``%#"#this particular Turing machine halts.#"#%''% | 384 The answer comes from the fact that Coq implements %\index{constructive logic}%_constructive_ or %\index{intuitionistic logic|see{constructive logic}}%_intuitionistic_ logic, in contrast to the %\index{classical logic}%_classical_ logic that you may be more familiar with. In constructive logic, classical tautologies like [~ ~ P -> P] and [P \/ ~ P] do not always hold. In general, we can only prove these tautologies when [P] is %\index{decidability}%_decidable_, in the sense of %\index{computability|see{decidability}}%computability theory. The Curry-Howard encoding that Coq uses for [or] allows us to extract either a proof of [P] or a proof of [~ P] from any proof of [P \/ ~ P]. Since our proofs are just functional programs which we can run, a general %\index{law of the excluded middle}%law of the excluded middle would give us a decision procedure for the halting problem, where the instantiations of [P] would be formulas like %``%#"#this particular Turing machine halts.#"#%''% |
385 | 385 |
386 Hence the distinction between [bool] and [Prop]. Programs of type [bool] are computational by construction; we can always run them to determine their results. Many [Prop]s are undecidable, and so we can write more expressive formulas with [Prop]s than with [bool]s, but the inevitable consequence is that we cannot simply %``%#"#run a [Prop] to determine its truth.#"#%''% | 386 Hence the distinction between [bool] and [Prop]. Programs of type [bool] are computational by construction; we can always run them to determine their results. Many [Prop]s are undecidable, and so we can write more expressive formulas with [Prop]s than with [bool]s, but the inevitable consequence is that we cannot simply %``%#"#run a [Prop] to determine its truth.#"#%''% |
387 | 387 |
388 Constructive logic lets us define all of the logical connectives in an aesthetically-appealing way, with orthogonal inductive definitions. That is, each connective is defined independently using a simple, shared mechanism. Constructivity also enables a trick called %\index{program extraction}\textit{%#<i>#program extraction#</i>#%}%, where we write programs by phrasing them as theorems to be proved. Since our proofs are just functional programs, we can extract executable programs from our final proofs, which we could not do as naturally with classical proofs. | 388 Constructive logic lets us define all of the logical connectives in an aesthetically-appealing way, with orthogonal inductive definitions. That is, each connective is defined independently using a simple, shared mechanism. Constructivity also enables a trick called %\index{program extraction}%_program extraction_, where we write programs by phrasing them as theorems to be proved. Since our proofs are just functional programs, we can extract executable programs from our final proofs, which we could not do as naturally with classical proofs. |
389 | 389 |
390 We will see more about Coq's program extraction facility in a later chapter. However, I think it is worth interjecting another warning at this point, following up on the prior warning about taking the Curry-Howard correspondence too literally. It is possible to write programs by theorem-proving methods in Coq, but hardly anyone does it. It is almost always most useful to maintain the distinction between programs and proofs. If you write a program by proving a theorem, you are likely to run into algorithmic inefficiencies that you introduced in your proof to make it easier to prove. It is a shame to have to worry about such situations while proving tricky theorems, and it is a happy state of affairs that you almost certainly will not need to, with the ideal of extracting programs from proofs being confined mostly to theoretical studies. *) | 390 We will see more about Coq's program extraction facility in a later chapter. However, I think it is worth interjecting another warning at this point, following up on the prior warning about taking the Curry-Howard correspondence too literally. It is possible to write programs by theorem-proving methods in Coq, but hardly anyone does it. It is almost always most useful to maintain the distinction between programs and proofs. If you write a program by proving a theorem, you are likely to run into algorithmic inefficiencies that you introduced in your proof to make it easier to prove. It is a shame to have to worry about such situations while proving tricky theorems, and it is a happy state of affairs that you almost certainly will not need to, with the ideal of extracting programs from proofs being confined mostly to theoretical studies. *) |
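To make the decidability point concrete, here is a hedged sketch (the statement and names are ours): the law of the excluded middle can be written down as an ordinary [Prop], and while it is not provable in general, each instance at a decidable proposition is.

[[
Definition excluded_middle := forall P : Prop, P \/ ~ P.

Require Import Arith.

(* Equality of natural numbers is decidable, so this instance is provable,
   using the decision procedure [eq_nat_dec] from the standard library. *)
Theorem nat_eq_classical : forall n m : nat, n = m \/ n <> m.
  intros n m; destruct (eq_nat_dec n m); [ left | right ]; assumption.
Qed.
]]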
391 | 391 |
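The sense in which [bool] programs are "computational by construction" shows up in a quick sketch of ours: a boolean evenness test simply evaluates, whereas an evenness predicate in [Prop] (like the [even] predicate defined later in this chapter) calls for a proof and cannot be "run."

[[
Fixpoint isEvenb (n : nat) : bool :=
  match n with
    | O => true
    | S O => false
    | S (S n') => isEvenb n'
  end.

(* The boolean version computes outright. *)
Eval compute in isEvenb 256.
]]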
392 | 392 |
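The extraction workflow alluded to above can be sketched as follows (an illustrative example of ours, not the book's): phrase a computation as a specification in [Set], prove it with tactics, and ask Coq to extract the underlying program.

[[
Definition double_spec : forall n : nat, {m : nat | m = n + n}.
  intro n; exists (n + n); reflexivity.
Defined.

(* In the 8.4-era Coq targeted here, [Extraction] is available directly;
   recent versions require a preceding [Require Extraction.]. *)
Extraction double_spec.
]]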
393 (** * First-Order Logic *) | 393 (** * First-Order Logic *) |
490 (** %\coqdockw{%#<tt>#constructor#</tt>#%}%[.]%\vspace{-.075in}% *) | 490 (** %\coqdockw{%#<tt>#constructor#</tt>#%}%[.]%\vspace{-.075in}% *) |
491 | 491 |
492 (* end thide *) | 492 (* end thide *) |
493 Qed. | 493 Qed. |
494 | 494 |
495 (** We can call [isZero] a %\index{judgment}\textit{%#<i>#judgment#</i>#%}%, in the sense often used in the semantics of programming languages. Judgments are typically defined in the style of %\index{natural deduction}\textit{%#<i>#natural deduction#</i>#%}%, where we write a number of %\index{inference rules}\textit{%#<i>#inference rules#</i>#%}% with premises appearing above a solid line and a conclusion appearing below the line. In this example, the sole constructor [IsZero] of [isZero] can be thought of as the single inference rule for deducing [isZero], with nothing above the line and [isZero 0] below it. The proof of [isZero_zero] demonstrates how we can apply an inference rule. | 495 (** We can call [isZero] a %\index{judgment}%_judgment_, in the sense often used in the semantics of programming languages. Judgments are typically defined in the style of %\index{natural deduction}%_natural deduction_, where we write a number of %\index{inference rules}%_inference rules_ with premises appearing above a solid line and a conclusion appearing below the line. In this example, the sole constructor [IsZero] of [isZero] can be thought of as the single inference rule for deducing [isZero], with nothing above the line and [isZero 0] below it. The proof of [isZero_zero] demonstrates how we can apply an inference rule. |
496 | 496 |
497 The definition of [isZero] differs in an important way from all of the other inductive definitions that we have seen in this and the previous chapter. Instead of writing just [Set] or [Prop] after the colon, here we write [nat -> Prop]. We saw examples of parameterized types like [list], but there the parameters appeared with names %\textit{%#<i>#before#</i>#%}% the colon. Every constructor of a parameterized inductive type must have a range type that uses the same parameter, whereas the form we use here enables us to use different arguments to the type for different constructors. | 497 The definition of [isZero] differs in an important way from all of the other inductive definitions that we have seen in this and the previous chapter. Instead of writing just [Set] or [Prop] after the colon, here we write [nat -> Prop]. We saw examples of parameterized types like [list], but there the parameters appeared with names _before_ the colon. Every constructor of a parameterized inductive type must have a range type that uses the same parameter, whereas the form we use here enables us to use different arguments to the type for different constructors. |
498 | 498 |
499 For instance, our definition [isZero] makes the predicate provable only when the argument is [0]. We can see that the concept of equality is somehow implicit in the inductive definition mechanism. The way this is accomplished is similar to the way that logic variables are used in %\index{Prolog}%Prolog, and it is a very powerful mechanism that forms a foundation for formalizing all of mathematics. In fact, though it is natural to think of inductive types as folding in the functionality of equality, in Coq, the true situation is reversed, with equality defined as just another inductive type!%\index{Gallina terms!eq}\index{Gallina terms!refl\_equal}% *) | 499 For instance, our definition [isZero] makes the predicate provable only when the argument is [0]. We can see that the concept of equality is somehow implicit in the inductive definition mechanism. The way this is accomplished is similar to the way that logic variables are used in %\index{Prolog}%Prolog, and it is a very powerful mechanism that forms a foundation for formalizing all of mathematics. In fact, though it is natural to think of inductive types as folding in the functionality of equality, in Coq, the true situation is reversed, with equality defined as just another inductive type!%\index{Gallina terms!eq}\index{Gallina terms!refl\_equal}% *) |
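For reference, the judgment under discussion is the one defined earlier in the chapter, recalled here (lightly paraphrased), together with a parameterized type of our own ([mylist]) to contrast the two forms: the parameter [T] is named before the colon and must appear unchanged in every constructor's range, while [isZero]'s [nat] argument comes after the colon and may be constrained differently by each constructor.

[[
Inductive isZero : nat -> Prop :=
| IsZero : isZero 0.

Inductive mylist (T : Set) : Set :=
| MyNil : mylist T
| MyCons : T -> mylist T -> mylist T.
]]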
500 | 500 |
501 (* begin hide *) | 501 (* begin hide *) |
502 Print eq. | 502 Print eq. |
505 [[ | 505 [[ |
506 Inductive eq (A : Type) (x : A) : A -> Prop := refl_equal : x = x | 506 Inductive eq (A : Type) (x : A) : A -> Prop := refl_equal : x = x |
507 | 507 |
508 ]] | 508 ]] |
509 | 509 |
510 Behind the scenes, uses of infix [=] are expanded to instances of [eq]. We see that [eq] has both a parameter [x] that is fixed and an extra unnamed argument of the same type. The type of [eq] allows us to state any equalities, even those that are provably false. However, examining the type of equality's sole constructor [refl_equal], we see that we can only %\textit{%#<i>#prove#</i>#%}% equality when its two arguments are syntactically equal. This definition turns out to capture all of the basic properties of equality, and the equality-manipulating tactics that we have seen so far, like [reflexivity] and [rewrite], are implemented treating [eq] as just another inductive type with a well-chosen definition. Another way of stating that definition is: equality is defined as the least reflexive relation. | 510 Behind the scenes, uses of infix [=] are expanded to instances of [eq]. We see that [eq] has both a parameter [x] that is fixed and an extra unnamed argument of the same type. The type of [eq] allows us to state any equalities, even those that are provably false. However, examining the type of equality's sole constructor [refl_equal], we see that we can only _prove_ equality when its two arguments are syntactically equal. This definition turns out to capture all of the basic properties of equality, and the equality-manipulating tactics that we have seen so far, like [reflexivity] and [rewrite], are implemented treating [eq] as just another inductive type with a well-chosen definition. Another way of stating that definition is: equality is defined as the least reflexive relation. |
511 | 511 |
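As a small runnable aside (ours), the constructor can be used directly to build an equality proof, and "syntactically equal" here really means equal up to computation, which is exactly what [reflexivity] checks. (Later Coq versions spell the constructor [eq_refl]; [refl_equal] is the older name.)

[[
Theorem four : 2 + 2 = 4.
  reflexivity.
Qed.

(* The same fact as an explicit proof term: the constructor's arguments are
   inferred, and [2 + 2] converts to [4]. *)
Definition four' : 2 + 2 = 4 := eq_refl.
]]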
512 Returning to the example of [isZero], we can see how to work with hypotheses that use this predicate. *) | 512 Returning to the example of [isZero], we can see how to work with hypotheses that use this predicate. *) |
513 | 513 |
514 Theorem isZero_plus : forall n m : nat, isZero m -> n + m = n. | 514 Theorem isZero_plus : forall n m : nat, isZero m -> n + m = n. |
515 (* begin thide *) | 515 (* begin thide *) |
737 | 737 |
738 [[ | 738 [[ |
739 IHn : forall m : nat, even n -> even m -> even (n + m) | 739 IHn : forall m : nat, even n -> even m -> even (n + m) |
740 ]] | 740 ]] |
741 | 741 |
742 Unfortunately, the goal mentions [n0] where it would need to mention [n] to match [IHn]. We could keep looking for a way to finish this proof from here, but it turns out that we can make our lives much easier by changing our basic strategy. Instead of inducting on the structure of [n], we should induct %\textit{%#<i>#on the structure of one of the [even] proofs#</i>#%}%. This technique is commonly called %\index{rule induction}\textit{%#<i>#rule induction#</i>#%}% in programming language semantics. In the setting of Coq, we have already seen how predicates are defined using the same inductive type mechanism as datatypes, so the fundamental unity of rule induction with %``%#"#normal#"#%''% induction is apparent. | 742 Unfortunately, the goal mentions [n0] where it would need to mention [n] to match [IHn]. We could keep looking for a way to finish this proof from here, but it turns out that we can make our lives much easier by changing our basic strategy. Instead of inducting on the structure of [n], we should induct _on the structure of one of the [even] proofs_. This technique is commonly called %\index{rule induction}%_rule induction_ in programming language semantics. In the setting of Coq, we have already seen how predicates are defined using the same inductive type mechanism as datatypes, so the fundamental unity of rule induction with %``%#"#normal#"#%''% induction is apparent. |
743 | 743 |
744 Recall that tactics like [induction] and [destruct] may be passed numbers to refer to unnamed lefthand sides of implications in the conclusion, where the argument [n] refers to the [n]th such hypothesis. *) | 744 Recall that tactics like [induction] and [destruct] may be passed numbers to refer to unnamed lefthand sides of implications in the conclusion, where the argument [n] refers to the [n]th such hypothesis. *) |
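Here is a hedged, self-contained sketch of that strategy (the chapter's own script, which continues below, differs in detail and uses its [crush] tactic): assuming a definition of [even] like the one given earlier in the chapter, [induction 1] performs rule induction on the first unnamed hypothesis of the goal, the proof of [even n].

[[
Inductive even : nat -> Prop :=
| EvenO : even O
| EvenSS : forall n, even n -> even (S (S n)).

Theorem even_plus : forall n m, even n -> even m -> even (n + m).
  induction 1; simpl; intros; auto.
  constructor; auto.
Qed.
]]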
745 | 745 |
746 (* begin hide *) | 746 (* begin hide *) |
747 Restart. | 747 Restart. |
890 | 890 |
891 Theorem even_contra : forall n, even (S (n + n)) -> False. | 891 Theorem even_contra : forall n, even (S (n + n)) -> False. |
892 intros; eapply even_contra'; eauto. | 892 intros; eapply even_contra'; eauto. |
893 Qed. | 893 Qed. |
894 | 894 |
895 (** We use a variant %\index{tactics!apply}%[eapply] of [apply] which has the same relationship to [apply] as [eauto] has to [auto]. An invocation of [apply] only succeeds if all arguments to the rule being used can be determined from the form of the goal, whereas [eapply] will introduce unification variables for undetermined arguments. In this case, [eauto] is able to determine the right values for those unification variables, using (unsurprisingly) a variant of the classic algorithm for %\emph{%#<i>#unification#</i>#%}~\cite{unification}%. | 895 (** We use a variant %\index{tactics!apply}%[eapply] of [apply] which has the same relationship to [apply] as [eauto] has to [auto]. An invocation of [apply] only succeeds if all arguments to the rule being used can be determined from the form of the goal, whereas [eapply] will introduce unification variables for undetermined arguments. In this case, [eauto] is able to determine the right values for those unification variables, using (unsurprisingly) a variant of the classic algorithm for _unification_ %\cite{unification}%. |
896 | 896 |
897 By considering an alternate attempt at proving the lemma, we can see another common pitfall of inductive proofs in Coq. Imagine that we had tried to prove [even_contra'] with all of the [forall] quantifiers moved to the front of the lemma statement. *) | 897 By considering an alternate attempt at proving the lemma, we can see another common pitfall of inductive proofs in Coq. Imagine that we had tried to prove [even_contra'] with all of the [forall] quantifiers moved to the front of the lemma statement. *) |
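Before turning to that alternate attempt, a tiny self-contained example of ours illustrates the difference just described: with an existential goal, [apply ex_intro] fails because the witness cannot be read off from the goal, while [eapply ex_intro] defers it to a unification variable that a later step fills in.

[[
Theorem eapply_demo : exists n : nat, S n = 1.
  (* [apply ex_intro] would fail here: the witness is undetermined. *)
  eapply ex_intro.
  (* The goal is now roughly [S ?n = 1]; [reflexivity] unifies [?n] with [0]. *)
  reflexivity.
Qed.
]]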
898 | 898 |
899 Lemma even_contra'' : forall n' n, even n' -> n' = S (n + n) -> False. | 899 Lemma even_contra'' : forall n' n, even n' -> n' = S (n + n) -> False. |
900 induction 1; crush; | 900 induction 1; crush; |
911 ============================ | 911 ============================ |
912 False | 912 False |
913 | 913 |
914 ]] | 914 ]] |
915 | 915 |
916 We are out of luck here. The inductive hypothesis is trivially true, since its assumption is false. In the version of this proof that succeeded, [IHeven] had an explicit quantification over [n]. This is because the quantification of [n] %\textit{%#<i>#appeared after the thing we are inducting on#</i>#%}% in the theorem statement. In general, quantified variables and hypotheses that appear before the induction object in the theorem statement stay fixed throughout the inductive proof. Variables and hypotheses that are quantified after the induction object may be varied explicitly in uses of inductive hypotheses. *) | 916 We are out of luck here. The inductive hypothesis is trivially true, since its assumption is false. In the version of this proof that succeeded, [IHeven] had an explicit quantification over [n]. This is because the quantification of [n] _appeared after the thing we are inducting on_ in the theorem statement. In general, quantified variables and hypotheses that appear before the induction object in the theorem statement stay fixed throughout the inductive proof. Variables and hypotheses that are quantified after the induction object may be varied explicitly in uses of inductive hypotheses. *) |
917 | 917 |
918 Abort. | 918 Abort. |
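One standard remedy, sketched here under the same assumptions as the [even_plus] sketch above (and not the chapter's own continuation), is to move the offending quantifier back into the goal with [revert] before inducting, so that each inductive hypothesis is quantified over it.

[[
Lemma even_contra_fixed : forall n' n, even n' -> n' = S (n + n) -> False.
  intros n' n H.
  (* Put [n] back into the goal before inducting. *)
  revert n.
  induction H.
  (* Both cases now have conclusions (and, in the second case, an inductive
     hypothesis) of the form [forall n, ... -> False], as in the successful
     [even_contra'] proof. *)
Abort.
]]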
919 | 919 |
920 (** Why should Coq implement [induction] this way? One answer is that it avoids burdening this basic tactic with additional heuristic smarts, but that is not the whole picture. Imagine that [induction] analyzed dependencies among variables and reordered quantifiers to preserve as much freedom as possible in later uses of inductive hypotheses. This could make the inductive hypotheses more complex, which could in turn cause particular automation machinery to fail when it would have succeeded before. In general, we want to avoid quantifiers in our proofs whenever we can, and that goal is furthered by the refactoring that the [induction] tactic forces us to do. *) | 920 (** Why should Coq implement [induction] this way? One answer is that it avoids burdening this basic tactic with additional heuristic smarts, but that is not the whole picture. Imagine that [induction] analyzed dependencies among variables and reordered quantifiers to preserve as much freedom as possible in later uses of inductive hypotheses. This could make the inductive hypotheses more complex, which could in turn cause particular automation machinery to fail when it would have succeeded before. In general, we want to avoid quantifiers in our proofs whenever we can, and that goal is furthered by the refactoring that the [induction] tactic forces us to do. *) |
921 (* end thide *) | 921 (* end thide *) |