comparison src/Predicates.v @ 471:51a8472aca68

Batch of changes based on proofreader feedback
author Adam Chlipala <adam@chlipala.net>
date Mon, 08 Oct 2012 16:04:49 -0400
parents 0df6dde807ab
children 1fd4109f7b31
54 With that perspective in mind, this chapter is sort of a mirror image of the last chapter, introducing how to define predicates with inductive definitions. We will point out similarities in places, but much of the effective Coq user's bag of tricks is disjoint for predicates versus "datatypes." This chapter is also a covert introduction to dependent types, which are the foundation on which interesting inductive predicates are built, though we will rely on tactics to build dependently typed proof terms for us for now. A future chapter introduces more manual application of dependent types. *) 54 With that perspective in mind, this chapter is sort of a mirror image of the last chapter, introducing how to define predicates with inductive definitions. We will point out similarities in places, but much of the effective Coq user's bag of tricks is disjoint for predicates versus "datatypes." This chapter is also a covert introduction to dependent types, which are the foundation on which interesting inductive predicates are built, though we will rely on tactics to build dependently typed proof terms for us for now. A future chapter introduces more manual application of dependent types. *)
55 55
56 56
57 (** * Propositional Logic *) 57 (** * Propositional Logic *)
58 58
59 (** Let us begin with a brief tour through the definitions of the connectives for propositional logic. We will work within a Coq section that provides us with a set of propositional variables. In Coq parlance, these are just terms of type [Prop.] *) 59 (** Let us begin with a brief tour through the definitions of the connectives for propositional logic. We will work within a Coq section that provides us with a set of propositional variables. In Coq parlance, these are just variables of type [Prop]. *)
60 60
61 Section Propositional. 61 Section Propositional.
62 Variables P Q R : Prop. 62 Variables P Q R : Prop.
63 63
64 (** In Coq, the most basic propositional connective is implication, written [->], which we have already used in almost every proof. Rather than being defined inductively, implication is built into Coq as the function type constructor. 64 (** In Coq, the most basic propositional connective is implication, written [->], which we have already used in almost every proof. Rather than being defined inductively, implication is built into Coq as the function type constructor.
99 99
100 Theorem arith_neq : 2 + 2 = 5 -> 9 + 9 = 835. 100 Theorem arith_neq : 2 + 2 = 5 -> 9 + 9 = 835.
101 (* begin thide *) 101 (* begin thide *)
102 intro. 102 intro.
103 103
104 (** At this point, we have an inconsistent hypothesis [2 + 2 = 5], so the specific conclusion is not important. We use the %\index{tactics!elimtype}%[elimtype] tactic to state a proposition, telling Coq that we wish to construct a proof of the new proposition and then prove the original goal by case analysis on the structure of the new auxiliary proof. Since [False] has no constructors, [elimtype False] simply leaves us with the obligation to prove [False]. *) 104 (** At this point, we have an inconsistent hypothesis [2 + 2 = 5], so the specific conclusion is not important. We use the %\index{tactics!elimtype}%[elimtype] tactic. For a full description of it, see the Coq manual. For our purposes, we only need the variant [elimtype False], which lets us replace any conclusion formula with [False], because any fact follows from an inconsistent context. *)
105 105
106 elimtype False. 106 elimtype False.
107 (** [[ 107 (** [[
108 H : 2 + 2 = 5 108 H : 2 + 2 = 5
109 ============================ 109 ============================
292 (* begin thide *) 292 (* begin thide *)
293 tauto. 293 tauto.
294 (* end thide *) 294 (* end thide *)
295 Qed. 295 Qed.
296 296
297 (** Sometimes propositional reasoning forms important plumbing for the proof of a theorem, but we still need to apply some other smarts about, say, arithmetic. The tactic %\index{tactics!intuition}%[intuition] is a generalization of [tauto] that proves everything it can using propositional reasoning. When some goals remain, it uses propositional laws to simplify them as far as possible. Consider this example, which uses the list concatenation operator [++] from the standard library. *) 297 (** Sometimes propositional reasoning forms important plumbing for the proof of a theorem, but we still need to apply some other smarts about, say, arithmetic. The tactic %\index{tactics!intuition}%[intuition] is a generalization of [tauto] that proves everything it can using propositional reasoning. When some further facts must be established to finish the proof, [intuition] uses propositional laws to simplify them as far as possible. Consider this example, which uses the list concatenation operator %\coqdocnotation{%#<tt>#++#</tt>#%}% from the standard library. *)
298 298
299 Theorem arith_comm : forall ls1 ls2 : list nat, 299 Theorem arith_comm : forall ls1 ls2 : list nat,
300 length ls1 = length ls2 \/ length ls1 + length ls2 = 6 300 length ls1 = length ls2 \/ length ls1 + length ls2 = 6
301 -> length (ls1 ++ ls2) = 6 \/ length ls1 = length ls2. 301 -> length (ls1 ++ ls2) = 6 \/ length ls1 = length ls2.
302 (* begin thide *) 302 (* begin thide *)
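(** One way the proof might continue from here, offered as a sketch rather than the original script and assuming the standard-library lemma [app_length : forall (A : Type) (l l' : list A), length (l ++ l') = length l + length l'], is to let [intuition] dispatch the propositional structure and then rewrite in the arithmetic subgoal it leaves behind: *)

  intuition.
  rewrite app_length.
  tauto.
Qed.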
348 (** Ending the section here has the same effect as always. Each of our propositional theorems becomes universally quantified over the propositional variables that we used. *) 348 (** Ending the section here has the same effect as always. Each of our propositional theorems becomes universally quantified over the propositional variables that we used. *)
349 349
350 350
351 (** * What Does It Mean to Be Constructive? *) 351 (** * What Does It Mean to Be Constructive? *)
352 352
353 (** One potential point of confusion in the presentation so far is the distinction between [bool] and [Prop]. The datatype [bool] is built from two values [true] and [false], while [Prop] is a more primitive type that includes among its members [True] and [False]. Why not collapse these two concepts into one, and why must there be more than two states of mathematical truth? 353 (** One potential point of confusion in the presentation so far is the distinction between [bool] and [Prop]. The datatype [bool] is built from two values [true] and [false], while [Prop] is a more primitive type that includes among its members [True] and [False]. Why not collapse these two concepts into one, and why must there be more than two states of mathematical truth, [True] and [False]?
354 354
355 The answer comes from the fact that Coq implements%\index{constructive logic}% _constructive_ or%\index{intuitionistic logic|see{constructive logic}}% _intuitionistic_ logic, in contrast to the%\index{classical logic}% _classical_ logic that you may be more familiar with. In constructive logic, classical tautologies like [~ ~ P -> P] and [P \/ ~ P] do not always hold. In general, we can only prove these tautologies when [P] is%\index{decidability}% _decidable_, in the sense of %\index{computability|see{decidability}}%computability theory. The Curry-Howard encoding that Coq uses for [or] allows us to extract either a proof of [P] or a proof of [~ P] from any proof of [P \/ ~ P]. Since our proofs are just functional programs which we can run, a general %\index{law of the excluded middle}%law of the excluded middle would give us a decision procedure for the halting problem, where the instantiations of [P] would be formulas like "this particular Turing machine halts." 355 The answer comes from the fact that Coq implements%\index{constructive logic}% _constructive_ or%\index{intuitionistic logic|see{constructive logic}}% _intuitionistic_ logic, in contrast to the%\index{classical logic}% _classical_ logic that you may be more familiar with. In constructive logic, classical tautologies like [~ ~ P -> P] and [P \/ ~ P] do not always hold. In general, we can only prove these tautologies when [P] is%\index{decidability}% _decidable_, in the sense of %\index{computability|see{decidability}}%computability theory. The Curry-Howard encoding that Coq uses for [or] allows us to extract either a proof of [P] or a proof of [~ P] from any proof of [P \/ ~ P]. Since our proofs are just functional programs which we can run, a general %\index{law of the excluded middle}%law of the excluded middle would give us a decision procedure for the halting problem, where the instantiations of [P] would be formulas like "this particular Turing machine halts."
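   As an aside not found in the original text, the law of the excluded middle can at least be stated as an ordinary Coq proposition,
[[
  forall P : Prop, P \/ ~ P
]]
   and, while that proposition is not provable, it is consistent to assume it as an axiom, which is exactly what the standard library's classical-logic modules do.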
356
357 A similar paradoxical situation would result if every proposition evaluated to either [True] or [False]. Evaluation in Coq is decidable, so we would be limiting ourselves to decidable propositions only.
356 358
357 Hence the distinction between [bool] and [Prop]. Programs of type [bool] are computational by construction; we can always run them to determine their results. Many [Prop]s are undecidable, and so we can write more expressive formulas with [Prop]s than with [bool]s, but the inevitable consequence is that we cannot simply "run a [Prop] to determine its truth." 359 Hence the distinction between [bool] and [Prop]. Programs of type [bool] are computational by construction; we can always run them to determine their results. Many [Prop]s are undecidable, and so we can write more expressive formulas with [Prop]s than with [bool]s, but the inevitable consequence is that we cannot simply "run a [Prop] to determine its truth."
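   To make the contrast concrete (a small aside, not part of the original text), a [bool]-valued expression built only from prelude definitions can simply be run to a result:
[[
Eval compute in orb true false.
     = true
     : bool
]]
   By contrast, the analogous [Prop] is merely a well-formed statement, which [Check] classifies without deciding:
[[
Check (True \/ False).
True \/ False
     : Prop
]]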
358 360
359 Constructive logic lets us define all of the logical connectives in an aesthetically appealing way, with orthogonal inductive definitions. That is, each connective is defined independently using a simple, shared mechanism. Constructivity also enables a trick called%\index{program extraction}% _program extraction_, where we write programs by phrasing them as theorems to be proved. Since our proofs are just functional programs, we can extract executable programs from our final proofs, which we could not do as naturally with classical proofs. 361 Constructive logic lets us define all of the logical connectives in an aesthetically appealing way, with orthogonal inductive definitions. That is, each connective is defined independently using a simple, shared mechanism. Constructivity also enables a trick called%\index{program extraction}% _program extraction_, where we write programs by phrasing them as theorems to be proved. Since our proofs are just functional programs, we can extract executable programs from our final proofs, which we could not do as naturally with classical proofs.
360 362
373 ex_intro : forall x : A, P x -> ex P 375 ex_intro : forall x : A, P x -> ex P
374 ]] 376 ]]
375 377
376 (Note that here, as always, each [forall] quantifier has the largest possible scope, so that the type of [ex_intro] could also be written [forall x : A, (P x -> ex P)].) 378 (Note that here, as always, each [forall] quantifier has the largest possible scope, so that the type of [ex_intro] could also be written [forall x : A, (P x -> ex P)].)
377 379
378 The family [ex] is parameterized by the type [A] that we quantify over, and by a predicate [P] over [A]s. We prove an existential by exhibiting some [x] of type [A], along with a proof of [P x]. As usual, there are tactics that save us from worrying about the low-level details most of the time. We use the equality operator [=], which, depending on the settings in which they learned logic, different people will say either is or is not part of first-order logic. For our purposes, it is. *) 380 The family [ex] is parameterized by the type [A] that we quantify over, and by a predicate [P] over [A]s. We prove an existential by exhibiting some [x] of type [A], along with a proof of [P x]. As usual, there are tactics that save us from worrying about the low-level details most of the time.
381
382 Here is an example of a theorem statement with existential quantification. We use the equality operator [=], which, depending on the settings in which they learned logic, different people will say either is or is not part of first-order logic. For our purposes, it is. *)
379 383
380 Theorem exist1 : exists x : nat, x + 1 = 2. 384 Theorem exist1 : exists x : nat, x + 1 = 2.
381 (* begin thide *) 385 (* begin thide *)
382 (** remove printing exists *) 386 (** remove printing exists *)
383 (** We can start this proof with a tactic %\index{tactics!exists}%[exists], which should not be confused with the formula constructor shorthand of the same name. %In the version of this document that you are reading, the reverse ``E'' appears instead of the text ``exists'' in formulas.% *) 387 (** We can start this proof with a tactic %\index{tactics!exists}%[exists], which should not be confused with the formula constructor shorthand of the same name. %In the version of this document that you are reading, the reverse ``E'' appears instead of the text ``exists'' in formulas.% *)
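(** One way to finish from here, sketched as an aside rather than quoted from the original script, is to supply the witness [1] and let computation reduce [1 + 1] so that [reflexivity] applies. *)

  exists 1.
  reflexivity.
Qed.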
807 end; crush; eauto. 811 end; crush; eauto.
808 Qed. 812 Qed.
809 813
810 (** We write the proof in a way that avoids the use of local variable or hypothesis names, using the %\index{tactics!match}%[match] tactic form to do pattern-matching on the goal. We use unification variables prefixed by question marks in the pattern, and we take advantage of the possibility to mention a unification variable twice in one pattern, to enforce equality between occurrences. The hint to rewrite with [plus_n_Sm] in a particular direction saves us from having to figure out the right place to apply that theorem, and we also take critical advantage of a new tactic, %\index{tactics!eauto}%[eauto]. 814 (** We write the proof in a way that avoids the use of local variable or hypothesis names, using the %\index{tactics!match}%[match] tactic form to do pattern-matching on the goal. We use unification variables prefixed by question marks in the pattern, and we take advantage of the possibility to mention a unification variable twice in one pattern, to enforce equality between occurrences. The hint to rewrite with [plus_n_Sm] in a particular direction saves us from having to figure out the right place to apply that theorem, and we also take critical advantage of a new tactic, %\index{tactics!eauto}%[eauto].
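   For instance, schematically and with illustrative variable names rather than the literal pattern used above, a clause of the form
[[
  | [ H : S ?N = ?M + ?M |- _ ] => destruct N; destruct M
]]
   matches only hypotheses whose right-hand side is a sum of two syntactically equal terms, precisely because [?M] occurs twice in the pattern.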
811 815
812 The [crush] tactic uses the tactic [intuition], which, when it runs out of tricks to try using only propositional logic, by default tries the tactic [auto], which we saw in an earlier example. The [auto] tactic attempts %\index{Prolog}%Prolog-style logic programming, searching through all proof trees up to a certain depth that are built only out of hints that have been registered with [Hint] commands. (See Chapter 13 for a first-principles introduction to what we mean by "Prolog-style logic programming.") Compared to Prolog, [auto] places an important restriction: it never introduces new unification variables during search. That is, every time a rule is applied during proof search, all of its arguments must be deducible by studying the form of the goal. This restriction is relaxed for [eauto], at the cost of possibly exponentially greater running time. In this particular case, we know that [eauto] has only a small space of proofs to search, so it makes sense to run it. It is common in effectively automated Coq proofs to see a bag of standard tactics applied to pick off the "easy" subgoals, finishing with [eauto] to handle the tricky parts that can benefit from ad-hoc exhaustive search. 816 The [crush] tactic uses the tactic [intuition], which, when it runs out of tricks to try using only propositional logic, by default tries the tactic [auto], which we saw in an earlier example. For now, think of [eauto] as a potentially more expensive version of [auto] that considers more possible proofs; see Chapter 13 for more detail. The quick summary is that [eauto] considers applying a lemma even when the form of the current goal does not uniquely determine the values of all of the lemma's quantified variables.
813 817
814 The original theorem now follows trivially from our lemma. *) 818 The original theorem now follows trivially from our lemma. *)
815 819
816 Theorem even_contra : forall n, even (S (n + n)) -> False. 820 Theorem even_contra : forall n, even (S (n + n)) -> False.
817 intros; eapply even_contra'; eauto. 821 intros; eapply even_contra'; eauto.
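  (** (An aside, not from the original sources: plain [apply even_contra'] would fail here, because the conclusion [False] gives no clue about how to instantiate the lemma's quantified variables. The [eapply] variant defers those choices by introducing unification variables, which the trailing [eauto] call then resolves.) *)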