annotate src/Predicates.v @ 469:b36876d4611e

Batch of changes based on proofreader feedback
author Adam Chlipala <adam@chlipala.net>
date Wed, 26 Sep 2012 16:35:35 -0400
parents 79190c225f1a
children 0df6dde807ab
rev   line source
adam@394 1 (* Copyright (c) 2008-2012, Adam Chlipala
adamc@45 2 *
adamc@45 3 * This work is licensed under a
adamc@45 4 * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
adamc@45 5 * Unported License.
adamc@45 6 * The license text is available at:
adamc@45 7 * http://creativecommons.org/licenses/by-nc-nd/3.0/
adamc@45 8 *)
adamc@45 9
adamc@45 10 (* begin hide *)
adamc@45 11 Require Import List.
adamc@45 12
adam@314 13 Require Import CpdtTactics.
adamc@45 14
adamc@45 15 Set Implicit Arguments.
adam@322 16
adam@322 17 (* Extra definitions to get coqdoc to choose the right fonts. *)
adam@322 18
adam@323 19 (* begin thide *)
adam@322 20 Inductive unit := tt.
adam@322 21 Inductive Empty_set := .
adam@322 22 Inductive bool := true | false.
adam@322 23 Inductive sum := .
adam@322 24 Inductive prod := .
adam@322 25 Inductive and := conj.
adam@322 26 Inductive or := or_introl | or_intror.
adam@322 27 Inductive ex := ex_intro.
adam@426 28 Inductive eq := eq_refl.
adam@322 29 Reset unit.
adam@323 30 (* end thide *)
adamc@45 31 (* end hide *)
adamc@45 32
adamc@45 33 (** %\chapter{Inductive Predicates}% *)
adamc@45 34
adam@322 35 (** The so-called %\index{Curry-Howard correspondence}``%#"#Curry-Howard correspondence#"#%''~\cite{Curry,Howard}% states a formal connection between functional programs and mathematical proofs. In the last chapter, we snuck in a first introduction to this subject in Coq. Witness the close similarity between the types [unit] and [True] from the standard library: *)
adamc@45 36
adamc@45 37 Print unit.
adam@440 38 (** %\vspace{-.15in}%[[
adamc@209 39 Inductive unit : Set := tt : unit
adam@302 40 ]]
adam@302 41 *)
adamc@45 42
adamc@45 43 Print True.
adam@440 44 (** %\vspace{-.15in}%[[
adamc@209 45 Inductive True : Prop := I : True
adam@322 46 ]]
adamc@45 47
adam@442 48 Recall that [unit] is the type with only one value, and [True] is the proposition that always holds. Despite this superficial difference between the two concepts, in both cases we can use the same inductive definition mechanism. The connection goes further than this. We see that we arrive at the definition of [True] by replacing [unit] by [True], [tt] by [I], and [Set] by [Prop]. The first two of these differences are superficial changes of names, while the third difference is the crucial one for separating programs from proofs. A term [T] of type [Set] is a type of programs, and a term of type [T] is a program. A term [T] of type [Prop] is a logical proposition, and its proofs are of type [T]. Chapter 12 goes into more detail about the theoretical differences between [Prop] and [Set]. For now, we will simply follow common intuitions about what a proof is.
adamc@45 49
adam@398 50 The type [unit] has one value, [tt]. The type [True] has one proof, [I]. Why distinguish between these two types? Many people who have read about Curry-Howard in an abstract context and not put it to use in proof engineering answer that the two types in fact _should not_ be distinguished. There is a certain aesthetic appeal to this point of view, but I want to argue that it is best to treat Curry-Howard very loosely in practical proving. There are Coq-specific reasons for preferring the distinction, involving efficient compilation and avoidance of paradoxes in the presence of classical math, but I will argue that there is a more general principle that should lead us to avoid conflating programming and proving.
adamc@45 51
adam@401 52 The essence of the argument is roughly this: to an engineer, not all functions of type [A -> B] are created equal, but all proofs of a proposition [P -> Q] are. This idea is known as%\index{proof irrelevance}% _proof irrelevance_, and its formalizations in logics prevent us from distinguishing between alternate proofs of the same proposition. Proof irrelevance is compatible with, but not derivable in, Gallina. Apart from this theoretical concern, I will argue that it is most effective to do engineering with Coq by employing different techniques for programs versus proofs. Most of this book is organized around that distinction, describing how to program, by applying standard functional programming techniques in the presence of dependent types; and how to prove, by writing custom Ltac decision procedures.
adamc@45 53
adam@421 54 With that perspective in mind, this chapter is sort of a mirror image of the last chapter, introducing how to define predicates with inductive definitions. We will point out similarities in places, but much of the effective Coq user's bag of tricks is disjoint for predicates versus "datatypes." This chapter is also a covert introduction to dependent types, which are the foundation on which interesting inductive predicates are built, though we will rely on tactics to build dependently typed proof terms for us for now. A future chapter introduces more manual application of dependent types. *)
adamc@45 55
adamc@45 56
adamc@48 57 (** * Propositional Logic *)
adamc@45 58
adamc@45 59 (** Let us begin with a brief tour through the definitions of the connectives for propositional logic. We will work within a Coq section that provides us with a set of propositional variables. In Coq parlance, these are just terms of type [Prop]. *)
adamc@45 60
adamc@45 61 Section Propositional.
adamc@46 62 Variables P Q R : Prop.
adamc@45 63
adamc@45 64 (** In Coq, the most basic propositional connective is implication, written [->], which we have already used in almost every proof. Rather than being defined inductively, implication is built into Coq as the function type constructor.
adamc@45 65
adamc@45 66 We have also already seen the definition of [True]. For a demonstration of a lower-level way of establishing proofs of inductive predicates, we turn to this trivial theorem. *)
adamc@45 67
adamc@45 68 Theorem obvious : True.
adamc@55 69 (* begin thide *)
adamc@45 70 apply I.
adamc@55 71 (* end thide *)
adamc@45 72 Qed.
adamc@45 73
adam@401 74 (** We may always use the [apply] tactic to take a proof step based on applying a particular constructor of the inductive predicate that we are trying to establish. Sometimes there is only one constructor that could possibly apply, in which case a shortcut is available:%\index{tactics!constructor}% *)
adamc@45 75
adamc@55 76 (* begin thide *)
adamc@45 77 Theorem obvious' : True.
adamc@45 78 constructor.
adamc@45 79 Qed.
adamc@45 80
adamc@55 81 (* end thide *)
adamc@55 82
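(** Stepping back to implication for a moment: since [->] is literally the function type constructor, a proof of an implication can also be written directly as a function, with no tactics at all. (These two definitions are illustrative additions.) *)

Definition imp_refl : P -> P := fun HP => HP.

Definition imp_trans : (P -> Q) -> (Q -> R) -> (P -> R) :=
  fun HPQ HQR HP => HQR (HPQ HP).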
adamc@45 83 (** There is also a predicate [False], which is the Curry-Howard mirror image of [Empty_set] from the last chapter. *)
adamc@45 84
adamc@45 85 Print False.
adam@440 86 (** %\vspace{-.15in}%[[
adamc@209 87 Inductive False : Prop :=
adamc@209 88 ]]
adamc@45 89
adam@442 90 We can conclude anything from [False], doing case analysis on a proof of [False] in the same way we might do case analysis on, say, a natural number. Since there are no cases to consider, any such case analysis succeeds immediately in proving the goal. *)
adamc@45 91
adamc@45 92 Theorem False_imp : False -> 2 + 2 = 5.
adamc@55 93 (* begin thide *)
adamc@45 94 destruct 1.
adamc@55 95 (* end thide *)
adamc@45 96 Qed.
adamc@45 97
adam@449 98 (** In a consistent context, we can never build a proof of [False]. In inconsistent contexts that appear in the course of proofs, it is usually easiest to proceed by demonstrating the inconsistency with an explicit proof of [False]. *)
adamc@45 99
adamc@45 100 Theorem arith_neq : 2 + 2 = 5 -> 9 + 9 = 835.
adamc@55 101 (* begin thide *)
adamc@45 102 intro.
adamc@45 103
adam@322 104 (** At this point, we have an inconsistent hypothesis [2 + 2 = 5], so the specific conclusion is not important. We use the %\index{tactics!elimtype}%[elimtype] tactic to state a proposition, telling Coq that we wish to construct a proof of the new proposition and then prove the original goal by case analysis on the structure of the new auxiliary proof. Since [False] has no constructors, [elimtype False] simply leaves us with the obligation to prove [False]. *)
adamc@45 105
adamc@45 106 elimtype False.
adamc@45 107 (** [[
adamc@45 108 H : 2 + 2 = 5
adamc@45 109 ============================
adamc@45 110 False
adamc@209 111
adamc@209 112 ]]
adamc@45 113
adamc@209 114 For now, we will leave the details of this proof about arithmetic to [crush]. *)
adamc@45 115
adamc@45 116 crush.
adamc@55 117 (* end thide *)
adamc@45 118 Qed.
adamc@45 119
adamc@45 120 (** A related notion to [False] is logical negation. *)
adamc@45 121
adam@421 122 (* begin hide *)
adam@421 123 Definition foo := not.
adam@421 124 (* end hide *)
adam@421 125
adamc@45 126 Print not.
adamc@209 127 (** %\vspace{-.15in}% [[
adamc@209 128 not = fun A : Prop => A -> False
adamc@209 129 : Prop -> Prop
adamc@209 130 ]]
adamc@45 131
adam@442 132 We see that [not] is just shorthand for implication of [False]. We can use that fact explicitly in proofs. The syntax [~ P] expands to [not P]. *)
adamc@45 133
adamc@45 134 Theorem arith_neq' : ~ (2 + 2 = 5).
adamc@55 135 (* begin thide *)
adamc@45 136 unfold not.
adamc@45 137 (** [[
adamc@45 138 ============================
adamc@45 139 2 + 2 = 5 -> False
adam@302 140 ]]
adam@302 141 *)
adamc@45 142
adamc@45 143 crush.
adamc@55 144 (* end thide *)
adamc@45 145 Qed.
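(** Unfolding [not] also shows how to _use_ a hypothesis of type [~ P]: it is just a function awaiting a proof of [P]. (This small theorem is an illustrative addition.) *)

Theorem contradiction_example : P -> ~ P -> False.
  unfold not; intros HP HnotP; apply HnotP; assumption.
Qed.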
adamc@45 146
adamc@45 147 (** We also have conjunction, which we introduced in the last chapter. *)
adamc@45 148
adamc@45 149 Print and.
adam@440 150 (** %\vspace{-.15in}%[[
adam@322 151 Inductive and (A : Prop) (B : Prop) : Prop := conj : A -> B -> A /\ B
adamc@209 152 ]]
adamc@209 153
adam@442 154 The interested reader can check that [and] has a Curry-Howard equivalent called %\index{Gallina terms!prod}%[prod], the type of pairs. However, it is generally most convenient to reason about conjunction using tactics. An explicit proof of commutativity of [and] illustrates the usual suspects for such tasks. The operator [/\] is an infix shorthand for [and]. *)
adamc@45 155
adamc@45 156 Theorem and_comm : P /\ Q -> Q /\ P.
adamc@209 157
adamc@55 158 (* begin thide *)
adamc@45 159 (** We start by case analysis on the proof of [P /\ Q]. *)
adamc@45 160
adamc@45 161 destruct 1.
adamc@45 162 (** [[
adamc@45 163 H : P
adamc@45 164 H0 : Q
adamc@45 165 ============================
adamc@45 166 Q /\ P
adamc@209 167
adamc@209 168 ]]
adamc@45 169
adam@322 170 Every proof of a conjunction provides proofs for both conjuncts, so we get a single subgoal reflecting that. We can proceed by splitting this subgoal into a case for each conjunct of [Q /\ P].%\index{tactics!split}% *)
adamc@45 171
adamc@45 172 split.
adam@440 173 (** [[
adam@440 174 2 subgoals
adamc@45 175
adamc@45 176 H : P
adamc@45 177 H0 : Q
adamc@45 178 ============================
adamc@45 179 Q
adam@439 180
adam@439 181 subgoal 2 is
adam@439 182
adam@322 183 P
adamc@209 184
adamc@209 185 ]]
adamc@45 186
adam@322 187 In each case, the conclusion is among our hypotheses, so the %\index{tactics!assumption}%[assumption] tactic finishes the process. *)
adamc@45 188
adamc@45 189 assumption.
adamc@45 190 assumption.
adamc@55 191 (* end thide *)
adamc@45 192 Qed.
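(** As the Curry-Howard reading suggests (this explicit proof term is an illustrative addition), the same theorem can also be proved by a function that simply swaps the components of the pairlike [conj]: *)

Definition and_comm_term : P /\ Q -> Q /\ P :=
  fun pf => match pf with
              | conj HP HQ => conj HQ HP
            end.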
adamc@45 193
adam@322 194 (** Coq disjunction is called %\index{Gallina terms!or}%[or] and abbreviated with the infix operator [\/]. *)
adamc@45 195
adamc@45 196 Print or.
adam@440 197 (** %\vspace{-.15in}%[[
adamc@209 198 Inductive or (A : Prop) (B : Prop) : Prop :=
adamc@209 199 or_introl : A -> A \/ B | or_intror : B -> A \/ B
adamc@209 200 ]]
adamc@45 201
adam@449 202 We see that there are two ways to prove a disjunction: prove the first disjunct or prove the second. The Curry-Howard analogue of this is the Coq %\index{Gallina terms!sum}%[sum] type. We can demonstrate the main tactics here with another proof of commutativity. *)
adamc@45 203
adamc@45 204 Theorem or_comm : P \/ Q -> Q \/ P.
adamc@55 205
adamc@55 206 (* begin thide *)
adamc@45 207 (** As in the proof for [and], we begin with case analysis, though this time we are met by two cases instead of one. *)
adamc@209 208
adamc@45 209 destruct 1.
adam@439 210 (** [[
adam@439 211 2 subgoals
adamc@45 212
adamc@45 213 H : P
adamc@45 214 ============================
adamc@45 215 Q \/ P
adam@439 216
adam@439 217 subgoal 2 is
adam@439 218
adamc@45 219 Q \/ P
adamc@209 220
adamc@209 221 ]]
adamc@45 222
adam@401 223 We can see that, in the first subgoal, we want to prove the disjunction by proving its second disjunct. The %\index{tactics!right}%[right] tactic telegraphs this intent. *)
adam@322 224
adamc@45 225 right; assumption.
adamc@45 226
adam@322 227 (** The second subgoal has a symmetric proof.%\index{tactics!left}%
adamc@45 228
adamc@45 229 [[
adamc@45 230 1 subgoal
adamc@45 231
adamc@45 232 H : Q
adamc@45 233 ============================
adamc@45 234 Q \/ P
adam@302 235 ]]
adam@302 236 *)
adamc@45 237
adamc@45 238 left; assumption.
adam@322 239
adamc@55 240 (* end thide *)
adamc@45 241 Qed.
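(** The analogous explicit proof term for disjunction (again an illustrative addition) does a case analysis with one branch per constructor: *)

Definition or_comm_term : P \/ Q -> Q \/ P :=
  fun pf => match pf with
              | or_introl HP => or_intror HP
              | or_intror HQ => or_introl HQ
            end.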
adamc@45 242
adamc@46 243
adamc@46 244 (* begin hide *)
adamc@46 245 (* In-class exercises *)
adamc@46 246
adamc@46 247 Theorem contra : P -> ~P -> R.
adamc@52 248 (* begin thide *)
adamc@52 249 unfold not.
adamc@52 250 intros.
adamc@52 251 elimtype False.
adamc@52 252 apply H0.
adamc@52 253 assumption.
adamc@52 254 (* end thide *)
adamc@46 255 Admitted.
adamc@46 256
adamc@46 257 Theorem and_assoc : (P /\ Q) /\ R -> P /\ (Q /\ R).
adamc@52 258 (* begin thide *)
adamc@52 259 intros.
adamc@52 260 destruct H.
adamc@52 261 destruct H.
adamc@52 262 split.
adamc@52 263 assumption.
adamc@52 264 split.
adamc@52 265 assumption.
adamc@52 266 assumption.
adamc@52 267 (* end thide *)
adamc@46 268 Admitted.
adamc@46 269
adamc@46 270 Theorem or_assoc : (P \/ Q) \/ R -> P \/ (Q \/ R).
adamc@52 271 (* begin thide *)
adamc@52 272 intros.
adamc@52 273 destruct H.
adamc@52 274 destruct H.
adamc@52 275 left.
adamc@52 276 assumption.
adamc@52 277 right.
adamc@52 278 left.
adamc@52 279 assumption.
adamc@52 280 right.
adamc@52 281 right.
adamc@52 282 assumption.
adamc@52 283 (* end thide *)
adamc@46 284 Admitted.
adamc@46 285
adamc@46 286 (* end hide *)
adamc@46 287
adamc@46 288
adam@421 289 (** It would be a shame to have to plod manually through all proofs about propositional logic. Luckily, there is no need. One of the most basic Coq automation tactics is %\index{tactics!tauto}%[tauto], which is a complete decision procedure for constructive propositional logic. (More on what "constructive" means in the next section.) We can use [tauto] to dispatch all of the purely propositional theorems we have proved so far. *)
adamc@46 290
adamc@46 291 Theorem or_comm' : P \/ Q -> Q \/ P.
adamc@55 292 (* begin thide *)
adamc@46 293 tauto.
adamc@55 294 (* end thide *)
adamc@46 295 Qed.
adamc@46 296
adam@401 297 (** Sometimes propositional reasoning forms important plumbing for the proof of a theorem, but we still need to apply some other smarts about, say, arithmetic. The tactic %\index{tactics!intuition}%[intuition] is a generalization of [tauto] that proves everything it can using propositional reasoning. When some goals remain, it uses propositional laws to simplify them as far as possible. Consider this example, which uses the list concatenation operator [++] from the standard library. *)
adamc@46 298
adamc@46 299 Theorem arith_comm : forall ls1 ls2 : list nat,
adamc@46 300 length ls1 = length ls2 \/ length ls1 + length ls2 = 6
adamc@46 301 -> length (ls1 ++ ls2) = 6 \/ length ls1 = length ls2.
adamc@55 302 (* begin thide *)
adamc@46 303 intuition.
adamc@46 304
adamc@46 305 (** A lot of the proof structure has been generated for us by [intuition], but the final proof depends on a fact about lists. The remaining subgoal hints at what cleverness we need to inject. *)
adamc@46 306
adamc@46 307 (** [[
adamc@46 308 ls1 : list nat
adamc@46 309 ls2 : list nat
adamc@46 310 H0 : length ls1 + length ls2 = 6
adamc@46 311 ============================
adamc@46 312 length (ls1 ++ ls2) = 6 \/ length ls1 = length ls2
adamc@209 313
adamc@209 314 ]]
adamc@46 315
adamc@209 316 We can see that we need a theorem about lengths of concatenated lists, which we proved last chapter and is also in the standard library. *)
adamc@46 317
adamc@46 318 rewrite app_length.
adamc@46 319 (** [[
adamc@46 320 ls1 : list nat
adamc@46 321 ls2 : list nat
adamc@46 322 H0 : length ls1 + length ls2 = 6
adamc@46 323 ============================
adamc@46 324 length ls1 + length ls2 = 6 \/ length ls1 = length ls2
adamc@209 325
adamc@209 326 ]]
adamc@46 327
adamc@209 328 Now the subgoal follows by purely propositional reasoning. That is, we could replace [length ls1 + length ls2 = 6] with [P] and [length ls1 = length ls2] with [Q] and arrive at a tautology of propositional logic. *)
adamc@46 329
adamc@46 330 tauto.
adamc@55 331 (* end thide *)
adamc@46 332 Qed.
adamc@46 333
adam@322 334 (** The [intuition] tactic is one of the main bits of glue in the implementation of [crush], so, with a little help, we can get a short automated proof of the theorem. *)
adamc@46 335
adamc@55 336 (* begin thide *)
adamc@46 337 Theorem arith_comm' : forall ls1 ls2 : list nat,
adamc@46 338 length ls1 = length ls2 \/ length ls1 + length ls2 = 6
adamc@46 339 -> length (ls1 ++ ls2) = 6 \/ length ls1 = length ls2.
adam@375 340 Hint Rewrite app_length.
adamc@46 341
adamc@46 342 crush.
adamc@46 343 Qed.
adamc@55 344 (* end thide *)
adamc@46 345
adamc@45 346 End Propositional.
adamc@45 347
adam@322 348 (** Ending the section here has the same effect as always. Each of our propositional theorems becomes universally quantified over the propositional variables that we used. *)
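(** For instance (a small added check), [and_comm] is now quantified over the two propositional variables that it mentions. *)

Check and_comm.
(** %\vspace{-.15in}% [[
and_comm
     : forall P Q : Prop, P /\ Q -> Q /\ P
]]
*)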
adam@322 349
adamc@46 350
adamc@47 351 (** * What Does It Mean to Be Constructive? *)
adamc@46 352
adam@401 353 (** One potential point of confusion in the presentation so far is the distinction between [bool] and [Prop]. The datatype [bool] is built from two values [true] and [false], while [Prop] is a more primitive type that includes among its members [True] and [False]. Why not collapse these two concepts into one, and why must there be more than two states of mathematical truth?
adamc@46 354
adam@421 355 The answer comes from the fact that Coq implements%\index{constructive logic}% _constructive_ or%\index{intuitionistic logic|see{constructive logic}}% _intuitionistic_ logic, in contrast to the%\index{classical logic}% _classical_ logic that you may be more familiar with. In constructive logic, classical tautologies like [~ ~ P -> P] and [P \/ ~ P] do not always hold. In general, we can only prove these tautologies when [P] is%\index{decidability}% _decidable_, in the sense of %\index{computability|see{decidability}}%computability theory. The Curry-Howard encoding that Coq uses for [or] allows us to extract either a proof of [P] or a proof of [~ P] from any proof of [P \/ ~ P]. Since our proofs are just functional programs which we can run, a general %\index{law of the excluded middle}%law of the excluded middle would give us a decision procedure for the halting problem, where the instantiations of [P] would be formulas like "this particular Turing machine halts."
adamc@47 356
adam@421 357 Hence the distinction between [bool] and [Prop]. Programs of type [bool] are computational by construction; we can always run them to determine their results. Many [Prop]s are undecidable, and so we can write more expressive formulas with [Prop]s than with [bool]s, but the inevitable consequence is that we cannot simply "run a [Prop] to determine its truth."
adamc@47 358
adam@401 359 Constructive logic lets us define all of the logical connectives in an aesthetically appealing way, with orthogonal inductive definitions. That is, each connective is defined independently using a simple, shared mechanism. Constructivity also enables a trick called%\index{program extraction}% _program extraction_, where we write programs by phrasing them as theorems to be proved. Since our proofs are just functional programs, we can extract executable programs from our final proofs, which we could not do as naturally with classical proofs.
adamc@47 360
adamc@47 361 We will see more about Coq's program extraction facility in a later chapter. However, I think it is worth interjecting another warning at this point, following up on the prior warning about taking the Curry-Howard correspondence too literally. It is possible to write programs by theorem-proving methods in Coq, but hardly anyone does it. It is almost always most useful to maintain the distinction between programs and proofs. If you write a program by proving a theorem, you are likely to run into algorithmic inefficiencies that you introduced in your proof to make it easier to prove. It is a shame to have to worry about such situations while proving tricky theorems, and it is a happy state of affairs that you almost certainly will not need to, with the ideal of extracting programs from proofs being confined mostly to theoretical studies. *)
adamc@48 362
adamc@48 363
adamc@48 364 (** * First-Order Logic *)
adamc@48 365
adam@421 366 (** The %\index{Gallina terms!forall}%[forall] connective of first-order logic, which we have seen in many examples so far, is built into Coq. Getting ahead of ourselves a bit, we can see it as the dependent function type constructor. In fact, implication and universal quantification are just different syntactic shorthands for the same Coq mechanism. A formula [P -> Q] is equivalent to [forall x : P, Q], where [x] does not appear in [Q]. That is, the "real" type of the implication says "for every proof of [P], there exists a proof of [Q]."
adamc@48 367
adam@322 368 %\index{existential quantification}\index{Gallina terms!exists}\index{Gallina terms!ex}%Existential quantification is defined in the standard library. *)
adamc@48 369
adam@322 370 Print ex.
adam@440 371 (** %\vspace{-.15in}%[[
adamc@209 372 Inductive ex (A : Type) (P : A -> Prop) : Prop :=
adamc@209 373 ex_intro : forall x : A, P x -> ex P
adamc@209 374 ]]
adamc@48 375
adam@469 376 (Note that here, as always, each [forall] quantifier has the largest possible scope, so that the type of [ex_intro] could also be written [forall x : A, (P x -> ex P)].)
adam@469 377
adam@442 378 The family [ex] is parameterized by the type [A] that we quantify over, and by a predicate [P] over [A]s. We prove an existential by exhibiting some [x] of type [A], along with a proof of [P x]. As usual, there are tactics that save us from worrying about the low-level details most of the time. We use the equality operator [=], which, depending on the settings in which they learned logic, different people will say either is or is not part of first-order logic. For our purposes, it is. *)
adamc@48 379
adamc@48 380 Theorem exist1 : exists x : nat, x + 1 = 2.
adamc@55 381 (* begin thide *)
adamc@67 382 (** remove printing exists *)
adam@421 383 (** We can start this proof with a tactic %\index{tactics!exists}%[exists], which should not be confused with the formula constructor shorthand of the same name. (In the PDF version of this document, the reverse %`%#'#E#'#%'% appears instead of the text "exists" in formulas.) *)
adamc@209 384
adamc@48 385 exists 1.
adamc@48 386
adamc@209 387 (** The conclusion is replaced with a version using the existential witness that we announced.
adamc@48 388
adamc@209 389 [[
adamc@48 390 ============================
adamc@48 391 1 + 1 = 2
adam@302 392 ]]
adam@302 393 *)
adamc@48 394
adamc@48 395 reflexivity.
adamc@55 396 (* end thide *)
adamc@48 397 Qed.
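(** The same proof may be built by applying the constructor directly, naming the witness with a [with] clause. (This variant is an illustrative addition.) *)

Theorem exist1' : exists x : nat, x + 1 = 2.
  apply ex_intro with (x := 1); reflexivity.
Qed.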
adamc@48 398
adamc@48 399 (** printing exists $\exists$ *)
adamc@48 400
adamc@48 401 (** We can also use tactics to reason about existential hypotheses. *)
adamc@48 402
adamc@48 403 Theorem exist2 : forall n m : nat, (exists x : nat, n + x = m) -> n <= m.
adamc@55 404 (* begin thide *)
adamc@48 405 (** We start by case analysis on the proof of the existential fact. *)
adamc@209 406
adamc@48 407 destruct 1.
adamc@48 408 (** [[
adamc@48 409 n : nat
adamc@48 410 m : nat
adamc@48 411 x : nat
adamc@48 412 H : n + x = m
adamc@48 413 ============================
adamc@48 414 n <= m
adamc@209 415
adamc@209 416 ]]
adamc@48 417
adamc@209 418 The goal has been replaced by a form where there is a new free variable [x], and where we have a new hypothesis that the body of the existential holds with [x] substituted for the old bound variable. From here, the proof is just about arithmetic and is easy to automate. *)
adamc@48 419
adamc@48 420 crush.
adamc@55 421 (* end thide *)
adamc@48 422 Qed.
adamc@48 423
adamc@48 424
adamc@48 425 (* begin hide *)
adamc@48 426 (* In-class exercises *)
adamc@48 427
adamc@48 428 Theorem forall_exists_commute : forall (A B : Type) (P : A -> B -> Prop),
adamc@48 429 (exists x : A, forall y : B, P x y) -> (forall y : B, exists x : A, P x y).
adamc@52 430 (* begin thide *)
adamc@52 431 intros.
adamc@52 432 destruct H.
adamc@52 433 exists x.
adamc@52 434 apply H.
adamc@52 435 (* end thide *)
adamc@48 436 Admitted.
adamc@48 437
adamc@48 438 (* end hide *)
adamc@48 439
adamc@48 440
adam@322 441 (** The tactic [intuition] has a first-order cousin called %\index{tactics!firstorder}%[firstorder], which proves many formulas when only first-order reasoning is needed, and it tries to perform first-order simplifications in any case. First-order reasoning is much harder than propositional reasoning, so [firstorder] is much more likely than [intuition] to get stuck in a way that makes it run for long enough to be useless. *)
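(** For example (an added demonstration), [firstorder] makes short work of classic quantifier-shuffling facts like this one. *)

Theorem exists_forall_commute : forall (A B : Type) (P : A -> B -> Prop),
  (exists x : A, forall y : B, P x y) -> (forall y : B, exists x : A, P x y).
  firstorder.
Qed.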
adamc@49 442
adamc@49 443
adamc@49 444 (** * Predicates with Implicit Equality *)
adamc@49 445
adamc@49 446 (** We start our exploration of a more complicated class of predicates with a simple example: an alternative way of characterizing when a natural number is zero. *)
adamc@49 447
adamc@49 448 Inductive isZero : nat -> Prop :=
adamc@49 449 | IsZero : isZero 0.
adamc@49 450
adamc@49 451 Theorem isZero_zero : isZero 0.
adamc@55 452 (* begin thide *)
adamc@49 453 constructor.
adamc@55 454 (* end thide *)
adamc@49 455 Qed.
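(** A small added observation: since Coq's typechecking works up to computation, the same constructor also applies when the argument merely reduces to [0]. This is a first hint of the implicit equality reasoning discussed below. *)

Theorem isZero_plus_zero : isZero (0 + 0).
  constructor.
Qed.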
adamc@49 456
adam@449 457 (** We can call [isZero] a%\index{judgment}% _judgment_, in the sense often used in the semantics of programming languages. Judgments are typically defined in the style of%\index{natural deduction}% _natural deduction_, where we write a number of%\index{inference rules}% _inference rules_ with premises appearing above a solid line and a conclusion appearing below the line. In this example, the sole constructor [IsZero] of [isZero] can be thought of as the single inference rule for deducing [isZero], with nothing above the line and [isZero 0] below it. The proof of [isZero_zero] demonstrates how we can apply an inference rule. (Readers not familiar with formal semantics should not worry about not following this paragraph!)
adamc@49 458
adam@398 459 The definition of [isZero] differs in an important way from all of the other inductive definitions that we have seen in this and the previous chapter. Instead of writing just [Set] or [Prop] after the colon, here we write [nat -> Prop]. We saw examples of parameterized types like [list], but there the parameters appeared with names _before_ the colon. Every constructor of a parameterized inductive type must have a range type that uses the same parameter, whereas the form we use here enables us to use different arguments to the type for different constructors.
adamc@49 460
adam@449 461 For instance, our definition [isZero] makes the predicate provable only when the argument is [0]. We can see that the concept of equality is somehow implicit in the inductive definition mechanism. The way this is accomplished is similar to the way that logic variables are used in %\index{Prolog}%Prolog (but worry not if not familiar with Prolog), and it is a very powerful mechanism that forms a foundation for formalizing all of mathematics. In fact, though it is natural to think of inductive types as folding in the functionality of equality, in Coq, the true situation is reversed, with equality defined as just another inductive type!%\index{Gallina terms!eq}\index{Gallina terms!refl\_equal}% *)
adamc@49 462
adamc@49 463 Print eq.
adam@440 464 (** %\vspace{-.15in}%[[
adam@426 465 Inductive eq (A : Type) (x : A) : A -> Prop := eq_refl : x = x
adamc@209 466 ]]
adamc@49 467
adam@442 468 Behind the scenes, uses of infix [=] are expanded to instances of [eq]. We see that [eq] has both a parameter [x] that is fixed and an extra unnamed argument of the same type. The type of [eq] allows us to state any equalities, even those that are provably false. However, examining the type of equality's sole constructor [eq_refl], we see that we can only _prove_ equality when its two arguments are syntactically equal. This definition turns out to capture all of the basic properties of equality, and the equality-manipulating tactics that we have seen so far, like [reflexivity] and [rewrite], are implemented treating [eq] as just another inductive type with a well-chosen definition. Another way of stating that definition is: equality is defined as the least reflexive relation.
adamc@49 469
adam@322 470 Returning to the example of [isZero], we can see how to work with hypotheses that use this predicate. *)
adamc@49 471
adamc@49 472 Theorem isZero_plus : forall n m : nat, isZero m -> n + m = n.
adamc@55 473 (* begin thide *)
adamc@49 474 (** We want to proceed by cases on the proof of the assumption about [isZero]. *)
adamc@209 475
adamc@49 476 destruct 1.
adamc@49 477 (** [[
adamc@49 478 n : nat
adamc@49 479 ============================
adamc@49 480 n + 0 = n
adamc@209 481
adamc@209 482 ]]
adamc@49 483
adamc@209 484 Since [isZero] has only one constructor, we are presented with only one subgoal. The argument [m] to [isZero] is replaced with that type's argument from the single constructor [IsZero]. From this point, the proof is trivial. *)
adamc@49 485
adamc@49 486 crush.
adamc@55 487 (* end thide *)
adamc@49 488 Qed.
adamc@49 489
adamc@49 490 (** Another example seems at first like it should admit an analogous proof, but in fact provides a demonstration of one of the most basic gotchas of Coq proving. *)
adamc@49 491
adamc@49 492 Theorem isZero_contra : isZero 1 -> False.
adamc@55 493 (* begin thide *)
adamc@49 494 (** Let us try a proof by cases on the assumption, as in the last proof. *)
adamc@209 495
adamc@49 496 destruct 1.
adamc@49 497 (** [[
adamc@49 498 ============================
adamc@49 499 False
adamc@209 500
adamc@209 501 ]]
adamc@49 502
adamc@209 503 It seems that case analysis has not helped us much at all! Our sole hypothesis disappears, leaving us, if anything, worse off than we were before. What went wrong? We have met an important restriction in tactics like [destruct] and [induction] when applied to types with arguments. If the arguments are not already free variables, they will be replaced by new free variables internally before doing the case analysis or induction. Since the argument [1] to [isZero] is replaced by a fresh variable, we lose the crucial fact that it is not equal to [0].
adamc@49 504
adam@449 505 Why does Coq use this restriction? We will discuss the issue in detail in a future chapter, when we see the dependently typed programming techniques that would allow us to write this proof term manually. For now, we just say that the algorithmic problem of "logically complete case analysis" is undecidable when phrased in Coq's logic. A few tactics and design patterns that we will present in this chapter suffice in almost all cases. For the current example, what we want is a tactic called %\index{tactics!inversion}%[inversion], which corresponds to the concept of inversion that is frequently used with natural deduction proof systems. (Again, worry not if the semantics-oriented terminology from this last sentence is unfamiliar.) *)
adamc@49 506
adamc@49 507 Undo.
adamc@49 508 inversion 1.
adamc@55 509 (* end thide *)
adamc@49 510 Qed.
adamc@49 511
adamc@49 512 (** What does [inversion] do? Think of it as a version of [destruct] that does its best to take advantage of the structure of arguments to inductive types. In this case, [inversion] completed the proof immediately, because it was able to detect that we were using [isZero] with an impossible argument.
adamc@49 513
adamc@49 514 Sometimes using [destruct] when you should have used [inversion] can lead to confusing results. To illustrate, consider an alternate proof attempt for the last theorem. *)
adamc@49 515
adamc@49 516 Theorem isZero_contra' : isZero 1 -> 2 + 2 = 5.
adamc@49 517 destruct 1.
adamc@49 518 (** [[
adamc@49 519 ============================
adamc@49 520 1 + 1 = 4
adamc@209 521
adamc@209 522 ]]
adamc@49 523
adam@469 524 What on earth happened here? Internally, [destruct] replaced [1] with a fresh variable, and, trying to be helpful, it also replaced the occurrence of [1] within the unary representation of each number in the goal. Then, within the [O] case of the proof, we replace the fresh variable with [O]. This has the net effect of decrementing each of these numbers. *)
adamc@209 525
adamc@49 526 Abort.
adamc@49 527
adam@280 528 (** To see more clearly what is happening, we can consider the type of [isZero]'s induction principle. *)
adam@280 529
adam@280 530 Check isZero_ind.
adam@280 531 (** %\vspace{-.15in}% [[
adam@280 532 isZero_ind
adam@280 533 : forall P : nat -> Prop, P 0 -> forall n : nat, isZero n -> P n
adam@280 534 ]]
adam@280 535
adam@442 536 In our last proof script, [destruct] chose to instantiate [P] as [fun n => S n + S n = S (S (S (S n)))]. You can verify for yourself that this specialization of the principle applies to the goal and that the hypothesis [P 0] then matches the subgoal we saw generated. If you are doing a proof and encounter a strange transmutation like this, there is a good chance that you should go back and replace a use of [destruct] with [inversion]. *)
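(** Sure enough (a quick added confirmation), restating the theorem and using [inversion] instead of [destruct] finishes the proof immediately. *)

Theorem isZero_contra' : isZero 1 -> 2 + 2 = 5.
  inversion 1.
Qed.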
adam@280 537
adamc@49 538
adamc@49 539 (* begin hide *)
adamc@49 540 (* In-class exercises *)
adamc@49 541
adamc@49 542 (* EX: Define an inductive type capturing when a list has exactly two elements. Prove that your predicate does not hold of the empty list, and prove that, whenever it holds of a list, the length of that list is two. *)
adamc@49 543
adamc@52 544 (* begin thide *)
adamc@52 545 Section twoEls.
adamc@52 546 Variable A : Type.
adamc@52 547
adamc@52 548 Inductive twoEls : list A -> Prop :=
adamc@52 549 | TwoEls : forall x y, twoEls (x :: y :: nil).
adamc@52 550
adamc@52 551 Theorem twoEls_nil : twoEls nil -> False.
adamc@52 552 inversion 1.
adamc@52 553 Qed.
adamc@52 554
adamc@52 555 Theorem twoEls_two : forall ls, twoEls ls -> length ls = 2.
adamc@52 556 inversion 1.
adamc@52 557 reflexivity.
adamc@52 558 Qed.
adamc@52 559 End twoEls.
adamc@52 560 (* end thide *)
adamc@52 561
adamc@49 562 (* end hide *)
adamc@49 563
adamc@50 564
adamc@50 565 (** * Recursive Predicates *)
adamc@50 566
adamc@50 567 (** We have already seen all of the ingredients we need to build interesting recursive predicates, like this predicate capturing even-ness. *)
adamc@50 568
adamc@50 569 Inductive even : nat -> Prop :=
adamc@50 570 | EvenO : even O
adamc@50 571 | EvenSS : forall n, even n -> even (S (S n)).
adamc@50 572
adam@401 573 (** Think of [even] as another judgment defined by natural deduction rules. The rule [EvenO] has nothing above the line and [even O] below the line, and [EvenSS] is a rule with [even n] above the line and [even (S (S n))] below.
adamc@50 574
adamc@50 575 The proof techniques of the last section are easily adapted. *)
adamc@50 576
adamc@50 577 Theorem even_0 : even 0.
adamc@55 578 (* begin thide *)
adamc@50 579 constructor.
adamc@55 580 (* end thide *)
adamc@50 581 Qed.
adamc@50 582
adamc@50 583 Theorem even_4 : even 4.
adamc@55 584 (* begin thide *)
adamc@50 585 constructor; constructor; constructor.
adamc@55 586 (* end thide *)
adamc@50 587 Qed.
adamc@50 588
adam@375 589 (** It is not hard to see that sequences of constructor applications like the above can get tedious. We can avoid them using Coq's hint facility, with a new [Hint] variant that asks to consider all constructors of an inductive type during proof search. The tactic %\index{tactics!auto}%[auto] performs exhaustive proof search up to a fixed depth, considering only the proof steps we have registered as hints. *)
adamc@50 590
adamc@55 591 (* begin thide *)
adamc@50 592 Hint Constructors even.
adamc@50 593
adamc@50 594 Theorem even_4' : even 4.
adamc@50 595 auto.
adamc@50 596 Qed.
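(** The search that [auto] performs is depth-bounded, with a default depth of 5, which runs out for larger numbers; an explicit bound may be passed instead. (This example is an illustrative addition.) *)

Theorem even_12 : even 12.
  auto 9.
Qed.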
adamc@50 597
adamc@55 598 (* end thide *)
adamc@55 599
adam@322 600 (** We may also use [inversion] with [even]. *)
adam@322 601
adamc@50 602 Theorem even_1_contra : even 1 -> False.
adamc@55 603 (* begin thide *)
adamc@50 604 inversion 1.
adamc@55 605 (* end thide *)
adamc@50 606 Qed.
adamc@50 607
adamc@50 608 Theorem even_3_contra : even 3 -> False.
adamc@55 609 (* begin thide *)
adamc@50 610 inversion 1.
adamc@50 611 (** [[
adamc@50 612 H : even 3
adamc@50 613 n : nat
adamc@50 614 H1 : even 1
adamc@50 615 H0 : n = 1
adamc@50 616 ============================
adamc@50 617 False
adamc@209 618
adamc@209 619 ]]
adamc@50 620
adam@322 621 The [inversion] tactic can be a little overzealous at times, as we can see here with the introduction of the unused variable [n] and an equality hypothesis about it. For more complicated predicates, though, adding such assumptions is critical to dealing with the undecidability of general inversion. More complex inductive definitions and theorems can cause [inversion] to generate equalities where neither side is a variable. *)
adamc@50 622
adamc@50 623 inversion H1.
adamc@55 624 (* end thide *)
adamc@50 625 Qed.
adamc@50 626
adamc@50 627 (** We can also do inductive proofs about [even]. *)
adamc@50 628
adamc@50 629 Theorem even_plus : forall n m, even n -> even m -> even (n + m).
adamc@55 630 (* begin thide *)
adamc@50 631 (** It seems a reasonable first choice to proceed by induction on [n]. *)
adamc@209 632
adamc@50 633 induction n; crush.
adamc@50 634 (** [[
adamc@50 635 n : nat
adamc@50 636 IHn : forall m : nat, even n -> even m -> even (n + m)
adamc@50 637 m : nat
adamc@50 638 H : even (S n)
adamc@50 639 H0 : even m
adamc@50 640 ============================
adamc@50 641 even (S (n + m))
adamc@209 642
adamc@209 643 ]]
adamc@50 644
adamc@209 645 We will need to use the hypotheses [H] and [H0] somehow. The most natural choice is to invert [H]. *)
adamc@50 646
adamc@50 647 inversion H.
adamc@50 648 (** [[
adamc@50 649 n : nat
adamc@50 650 IHn : forall m : nat, even n -> even m -> even (n + m)
adamc@50 651 m : nat
adamc@50 652 H : even (S n)
adamc@50 653 H0 : even m
adamc@50 654 n0 : nat
adamc@50 655 H2 : even n0
adamc@50 656 H1 : S n0 = n
adamc@50 657 ============================
adamc@50 658 even (S (S n0 + m))
adamc@209 659
adamc@209 660 ]]
adamc@50 661
adamc@209 662 Simplifying the conclusion brings us to a point where we can apply a constructor. *)
adamc@209 663
adamc@50 664 simpl.
adamc@50 665 (** [[
adamc@50 666 ============================
adamc@50 667 even (S (S (n0 + m)))
adam@302 668 ]]
adam@302 669 *)
adamc@50 670
adamc@50 671 constructor.
adam@322 672
adam@401 673 (** [[
adamc@50 674 ============================
adamc@50 675 even (n0 + m)
adamc@209 676
adamc@209 677 ]]
adamc@50 678
adamc@209 679 At this point, we would like to apply the inductive hypothesis, which is:
adamc@209 680
adamc@209 681 [[
adamc@50 682 IHn : forall m : nat, even n -> even m -> even (n + m)
adam@440 683
adamc@209 684 ]]
adamc@50 685
adam@421 686 Unfortunately, the goal mentions [n0] where it would need to mention [n] to match [IHn]. We could keep looking for a way to finish this proof from here, but it turns out that we can make our lives much easier by changing our basic strategy. Instead of inducting on the structure of [n], we should induct _on the structure of one of the [even] proofs_. This technique is commonly called%\index{rule induction}% _rule induction_ in programming language semantics. In the setting of Coq, we have already seen how predicates are defined using the same inductive type mechanism as datatypes, so the fundamental unity of rule induction with "normal" induction is apparent.
adamc@50 687
adam@322 688 Recall that tactics like [induction] and [destruct] may be passed numbers to refer to unnamed lefthand sides of implications in the conclusion, where the argument [n] refers to the [n]th such hypothesis. *)
adam@322 689
adamc@50 690 Restart.
adamc@50 691
adamc@50 692 induction 1.
adamc@50 693 (** [[
adamc@50 694 m : nat
adamc@50 695 ============================
adamc@50 696 even m -> even (0 + m)
adam@322 697 ]]
adamc@50 698
adam@322 699 %\noindent \coqdockw{subgoal} 2 \coqdockw{is}:%#<tt>subgoal 2 is</tt>#
adam@322 700 [[
adamc@50 701 even m -> even (S (S n) + m)
adamc@209 702
adamc@209 703 ]]
adamc@50 704
adamc@209 705 The first case is easily discharged by [crush], based on the hint we added earlier to try the constructors of [even]. *)
adamc@50 706
adamc@50 707 crush.
adamc@50 708
adamc@50 709 (** Now we focus on the second case: *)
adamc@209 710
adamc@50 711 intro.
adamc@50 712 (** [[
adamc@50 713 m : nat
adamc@50 714 n : nat
adamc@50 715 H : even n
adamc@50 716 IHeven : even m -> even (n + m)
adamc@50 717 H0 : even m
adamc@50 718 ============================
adamc@50 719 even (S (S n) + m)
adamc@209 720
adamc@209 721 ]]
adamc@50 722
adamc@209 723 We simplify and apply a constructor, as in our last proof attempt. *)
adamc@50 724
adamc@50 725 simpl; constructor.
adam@322 726
adam@401 727 (** [[
adamc@50 728 ============================
adamc@50 729 even (n + m)
adamc@209 730
adamc@209 731 ]]
adamc@50 732
adamc@209 733 Now we have an exact match with our inductive hypothesis, and the remainder of the proof is trivial. *)
adamc@50 734
adamc@50 735 apply IHeven; assumption.
adamc@50 736
adamc@50 737 (** In fact, [crush] can handle all of the details of the proof once we declare the induction strategy. *)
adamc@50 738
adamc@50 739 Restart.
adam@322 740
adamc@50 741 induction 1; crush.
adamc@55 742 (* end thide *)
adamc@50 743 Qed.
adamc@50 744
adamc@50 745 (** Induction on recursive predicates has similar pitfalls to those we encountered with inversion in the last section. *)
adamc@50 746
adamc@50 747 Theorem even_contra : forall n, even (S (n + n)) -> False.
adamc@55 748 (* begin thide *)
adamc@50 749 induction 1.
adamc@50 750 (** [[
adamc@50 751 n : nat
adamc@50 752 ============================
adamc@50 753 False
adam@322 754 ]]
adamc@50 755
adam@322 756 %\noindent \coqdockw{subgoal} 2 \coqdockw{is}:%#<tt>subgoal 2 is</tt>#
adam@322 757 [[
adamc@50 758 False
adamc@209 759
adamc@209 760 ]]
adamc@50 761
adam@280 762 We are already sunk trying to prove the first subgoal, since the argument to [even] was replaced by a fresh variable internally. This time, we find it easier to prove this theorem by way of a lemma. Instead of trusting [induction] to replace expressions with fresh variables, we do it ourselves, explicitly adding the appropriate equalities as new assumptions. *)
adamc@209 763
adamc@50 764 Abort.
adamc@50 765
adamc@50 766 Lemma even_contra' : forall n', even n' -> forall n, n' = S (n + n) -> False.
adamc@50 767 induction 1; crush.
adamc@50 768
adamc@54 769 (** At this point, it is useful to consider all cases of [n] and [n0] being zero or nonzero. Only one of these cases has any trickiness to it. *)
adamc@209 770
adamc@50 771 destruct n; destruct n0; crush.
adamc@50 772
adamc@50 773 (** [[
adamc@50 774 n : nat
adamc@50 775 H : even (S n)
adamc@50 776 IHeven : forall n0 : nat, S n = S (n0 + n0) -> False
adamc@50 777 n0 : nat
adamc@50 778 H0 : S n = n0 + S n0
adamc@50 779 ============================
adamc@50 780 False
adamc@209 781
adamc@209 782 ]]
adamc@50 783
adam@280 784 At this point it is useful to use a theorem from the standard library, which we also proved with a different name in the last chapter. We can search for a theorem that allows us to rewrite terms of the form [x + S y]. *)
adamc@209 785
adam@280 786 SearchRewrite (_ + S _).
adam@440 787 (** %\vspace{-.15in}%[[
adam@280 788 plus_n_Sm : forall n m : nat, S (n + m) = n + S m
adam@302 789 ]]
adam@302 790 *)
adamc@50 791
adamc@50 792 rewrite <- plus_n_Sm in H0.
adamc@50 793
adam@322 794 (** The induction hypothesis lets us complete the proof, if we use a variant of [apply] that has a %\index{tactics!with}%[with] clause to give instantiations of quantified variables. *)
adamc@209 795
adamc@50 796 apply IHeven with n0; assumption.
adamc@50 797
adam@322 798 (** As usual, we can rewrite the proof to avoid referencing any locally generated names, which makes our proof script more readable and more robust to changes in the theorem statement. We use the notation [<-] to request a hint that does right-to-left rewriting, just like we can with the [rewrite] tactic. *)
adamc@209 799
adamc@209 800 Restart.
adam@322 801
adam@375 802 Hint Rewrite <- plus_n_Sm.
adamc@50 803
adamc@50 804 induction 1; crush;
adamc@50 805 match goal with
adamc@50 806 | [ H : S ?N = ?N0 + ?N0 |- _ ] => destruct N; destruct N0
adamc@50 807 end; crush; eauto.
adamc@50 808 Qed.
adamc@50 809
adam@322 810 (** We write the proof in a way that avoids the use of local variable or hypothesis names, using the %\index{tactics!match}%[match] tactic form to do pattern-matching on the goal. We use unification variables prefixed by question marks in the pattern, and we take advantage of the possibility to mention a unification variable twice in one pattern, to enforce equality between occurrences. The hint to rewrite with [plus_n_Sm] in a particular direction saves us from having to figure out the right place to apply that theorem, and we also take critical advantage of a new tactic, %\index{tactics!eauto}%[eauto].
adamc@50 811
adam@449 812 The [crush] tactic uses the tactic [intuition], which, when it runs out of tricks to try using only propositional logic, by default tries the tactic [auto], which we saw in an earlier example. The [auto] tactic attempts %\index{Prolog}%Prolog-style logic programming, searching through all proof trees up to a certain depth that are built only out of hints that have been registered with [Hint] commands. (See Chapter 13 for a first-principles introduction to what we mean by "Prolog-style logic programming.") Compared to Prolog, [auto] places an important restriction: it never introduces new unification variables during search. That is, every time a rule is applied during proof search, all of its arguments must be deducible by studying the form of the goal. This restriction is relaxed for [eauto], at the cost of possibly exponentially greater running time. In this particular case, we know that [eauto] has only a small space of proofs to search, so it makes sense to run it. It is common in effectively automated Coq proofs to see a bag of standard tactics applied to pick off the "easy" subgoals, finishing with [eauto] to handle the tricky parts that can benefit from ad-hoc exhaustive search.
adamc@50 813
adamc@50 814 The original theorem now follows trivially from our lemma. *)
adamc@50 815
adamc@50 816 Theorem even_contra : forall n, even (S (n + n)) -> False.
adamc@52 817 intros; eapply even_contra'; eauto.
adamc@50 818 Qed.
adamc@52 819
adam@398 820 (** We use a variant %\index{tactics!apply}%[eapply] of [apply] which has the same relationship to [apply] as [eauto] has to [auto]. An invocation of [apply] only succeeds if all arguments to the rule being used can be determined from the form of the goal, whereas [eapply] will introduce unification variables for undetermined arguments. In this case, [eauto] is able to determine the right values for those unification variables, using (unsurprisingly) a variant of the classic algorithm for _unification_ %\cite{unification}%.
adamc@52 821
adamc@52 822 By considering an alternate attempt at proving the lemma, we can see another common pitfall of inductive proofs in Coq. Imagine that we had tried to prove [even_contra'] with all of the [forall] quantifiers moved to the front of the lemma statement. *)
adamc@52 823
adamc@52 824 Lemma even_contra'' : forall n' n, even n' -> n' = S (n + n) -> False.
adamc@52 825 induction 1; crush;
adamc@52 826 match goal with
adamc@52 827 | [ H : S ?N = ?N0 + ?N0 |- _ ] => destruct N; destruct N0
adamc@52 828 end; crush; eauto.
adamc@52 829
adamc@209 830 (** One subgoal remains:
adamc@52 831
adamc@209 832 [[
adamc@52 833 n : nat
adamc@52 834 H : even (S (n + n))
adamc@52 835 IHeven : S (n + n) = S (S (S (n + n))) -> False
adamc@52 836 ============================
adamc@52 837 False
adamc@209 838
adamc@209 839 ]]
adamc@52 840
adam@398 841 We are out of luck here. The inductive hypothesis is trivially true, since its assumption is false. In the version of this proof that succeeded, [IHeven] had an explicit quantification over [n]. This is because the quantification of [n] _appeared after the thing we are inducting on_ in the theorem statement. In general, quantified variables and hypotheses that appear before the induction object in the theorem statement stay fixed throughout the inductive proof. Variables and hypotheses that are quantified after the induction object may be varied explicitly in uses of inductive hypotheses. *)
adamc@52 842
adam@322 843 Abort.
adam@322 844
adam@322 845 (** Why should Coq implement [induction] this way? One answer is that it avoids burdening this basic tactic with additional heuristic smarts, but that is not the whole picture. Imagine that [induction] analyzed dependencies among variables and reordered quantifiers to preserve as much freedom as possible in later uses of inductive hypotheses. This could make the inductive hypotheses more complex, which could in turn cause particular automation machinery to fail when it would have succeeded before. In general, we want to avoid quantifiers in our proofs whenever we can, and that goal is furthered by the refactoring that the [induction] tactic forces us to do. *)
adamc@55 846 (* end thide *)
adamc@209 847
adam@322 848
adamc@51 849
adamc@52 850
adamc@52 851 (* begin hide *)
adamc@52 852 (* In-class exercises *)
adamc@52 853
adam@448 854 (* EX: Define a type [prop] of simple Boolean formulas made up only of truth, falsehood, binary conjunction, and binary disjunction. Define an inductive predicate [holds] that captures when [prop]s are valid, and define a predicate [falseFree] that captures when a [prop] does not contain the "false" formula. Prove that every false-free [prop] is valid. *)
adamc@52 855
adamc@52 856 (* begin thide *)
adamc@52 857 Inductive prop : Set :=
adamc@52 858 | Tru : prop
adamc@52 859 | Fals : prop
adamc@52 860 | And : prop -> prop -> prop
adamc@52 861 | Or : prop -> prop -> prop.
adamc@52 862
adamc@52 863 Inductive holds : prop -> Prop :=
adamc@52 864 | HTru : holds Tru
adamc@52 865 | HAnd : forall p1 p2, holds p1 -> holds p2 -> holds (And p1 p2)
adamc@52 866 | HOr1 : forall p1 p2, holds p1 -> holds (Or p1 p2)
adamc@52 867 | HOr2 : forall p1 p2, holds p2 -> holds (Or p1 p2).
adamc@52 868
adamc@52 869 Inductive falseFree : prop -> Prop :=
adamc@52 870 | FFTru : falseFree Tru
adamc@52 871 | FFAnd : forall p1 p2, falseFree p1 -> falseFree p2 -> falseFree (And p1 p2)
adamc@52 872 | FFNot : forall p1 p2, falseFree p1 -> falseFree p2 -> falseFree (Or p1 p2).
adamc@52 873
adamc@52 874 Hint Constructors holds.
adamc@52 875
adamc@52 876 Theorem falseFree_holds : forall p, falseFree p -> holds p.
adamc@52 877 induction 1; crush.
adamc@52 878 Qed.
adamc@52 879 (* end thide *)
adamc@52 880
adamc@52 881
adamc@52 882 (* EX: Define an inductive type [prop'] that is the same as [prop] but omits the possibility for falsehood. Define a proposition [holds'] for [prop'] that is analogous to [holds]. Define a function [propify] for translating [prop']s to [prop]s. Prove that, for any [prop'] [p], if [propify p] is valid, then so is [p]. *)
adamc@52 883
adamc@52 884 (* begin thide *)
adamc@52 885 Inductive prop' : Set :=
adamc@52 886 | Tru' : prop'
adamc@52 887 | And' : prop' -> prop' -> prop'
adamc@52 888 | Or' : prop' -> prop' -> prop'.
adamc@52 889
adamc@52 890 Inductive holds' : prop' -> Prop :=
adamc@52 891 | HTru' : holds' Tru'
adamc@52 892 | HAnd' : forall p1 p2, holds' p1 -> holds' p2 -> holds' (And' p1 p2)
adamc@52 893 | HOr1' : forall p1 p2, holds' p1 -> holds' (Or' p1 p2)
adamc@52 894 | HOr2' : forall p1 p2, holds' p2 -> holds' (Or' p1 p2).
adamc@52 895
adamc@52 896 Fixpoint propify (p : prop') : prop :=
adamc@52 897 match p with
adamc@52 898 | Tru' => Tru
adamc@52 899 | And' p1 p2 => And (propify p1) (propify p2)
adamc@52 900 | Or' p1 p2 => Or (propify p1) (propify p2)
adamc@52 901 end.
adamc@52 902
adamc@52 903 Hint Constructors holds'.
adamc@52 904
adamc@52 905 Lemma propify_holds' : forall p', holds p' -> forall p, p' = propify p -> holds' p.
adamc@52 906 induction 1; crush; destruct p; crush.
adamc@52 907 Qed.
adamc@52 908
adamc@52 909 Theorem propify_holds : forall p, holds (propify p) -> holds' p.
adamc@52 910 intros; eapply propify_holds'; eauto.
adamc@52 911 Qed.
adamc@52 912 (* end thide *)
adamc@52 913
adamc@52 914 (* end hide *)