annotate src/Predicates.v @ 282:caa69851c78d

Subset suggestions from PC; improvements to build process for coqdoc fontification
author Adam Chlipala <adam@chlipala.net>
date Fri, 05 Nov 2010 10:35:56 -0400
parents 4146889930c5
children 2c88fc1dbe33
rev   line source
adam@281 1 (* Copyright (c) 2008-2010, Adam Chlipala
adamc@45 2 *
adamc@45 3 * This work is licensed under a
adamc@45 4 * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
adamc@45 5 * Unported License.
adamc@45 6 * The license text is available at:
adamc@45 7 * http://creativecommons.org/licenses/by-nc-nd/3.0/
adamc@45 8 *)
adamc@45 9
adamc@45 10 (* begin hide *)
adamc@45 11 Require Import List.
adamc@45 12
adamc@45 13 Require Import Tactics.
adamc@45 14
adamc@45 15 Set Implicit Arguments.
adamc@45 16 (* end hide *)
adamc@45 17
adamc@45 18
adamc@45 19 (** %\chapter{Inductive Predicates}% *)
adamc@45 20
adamc@45 21 (** The so-called "Curry-Howard Correspondence" states a formal connection between functional programs and mathematical proofs. In the last chapter, we snuck in a first introduction to this subject in Coq. Witness the close similarity between the types [unit] and [True] from the standard library: *)
adamc@45 22
adamc@45 23 Print unit.
adamc@209 24 (** %\vspace{-.15in}% [[
adamc@209 25 Inductive unit : Set := tt : unit
adamc@45 26 ]] *)
adamc@45 27
adamc@45 28 Print True.
adamc@209 29 (** %\vspace{-.15in}% [[
adamc@209 30 Inductive True : Prop := I : True
adamc@45 31 ]] *)
adamc@45 32
adamc@45 33 (** Recall that [unit] is the type with only one value, and [True] is the proposition that always holds. Despite this superficial difference between the two concepts, in both cases we can use the same inductive definition mechanism. The connection goes further than this. We see that we arrive at the definition of [True] by replacing [unit] by [True], [tt] by [I], and [Set] by [Prop]. The first two of these differences are superficial changes of names, while the third difference is the crucial one for separating programs from proofs. A term [T] of type [Set] is a type of programs, and a term of type [T] is a program. A term [T] of type [Prop] is a logical proposition, and its proofs are of type [T].
adamc@45 34
adamc@45 35 [unit] has one value, [tt]. [True] has one proof, [I]. Why distinguish between these two types? Many people who have read about Curry-Howard in an abstract context and not put it to use in proof engineering answer that the two types in fact %\textit{%#<i>#should not#</i>#%}% be distinguished. There is a certain aesthetic appeal to this point of view, but I want to argue that it is best to treat Curry-Howard very loosely in practical proving. There are Coq-specific reasons for preferring the distinction, involving efficient compilation and avoidance of paradoxes in the presence of classical math, but I will argue that there is a more general principle that should lead us to avoid conflating programming and proving.
adamc@45 36
adamc@45 37 The essence of the argument is roughly this: to an engineer, not all functions of type [A -> B] are created equal, but all proofs of a proposition [P -> Q] are. This idea is known as %\textit{%#<i>#proof irrelevance#</i>#%}%, and its formalizations in logics prevent us from distinguishing between alternate proofs of the same proposition. Proof irrelevance is compatible with, but not derivable in, Gallina. Apart from this theoretical concern, I will argue that it is most effective to do engineering with Coq by employing different techniques for programs versus proofs. Most of this book is organized around that distinction, describing how to program, by applying standard functional programming techniques in the presence of dependent types; and how to prove, by writing custom Ltac decision procedures.
adamc@45 38
adamc@45 39 With that perspective in mind, this chapter is sort of a mirror image of the last chapter, introducing how to define predicates with inductive definitions. We will point out similarities in places, but much of the effective Coq user's bag of tricks is disjoint for predicates versus "datatypes." This chapter is also a covert introduction to dependent types, which are the foundation on which interesting inductive predicates are built, though we will rely on tactics to build dependently-typed proof terms for us for now. A future chapter introduces more manual application of dependent types. *)
adamc@45 40
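(** As an aside, the usual formal statement of proof irrelevance, which the standard library's [ProofIrrelevance] module asserts as an axiom, is:

[[
Axiom proof_irrelevance : forall (P : Prop) (p1 p2 : P), p1 = p2.

]] *)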
adamc@45 41
adamc@48 42 (** * Propositional Logic *)
adamc@45 43
adamc@45 44 (** Let us begin with a brief tour through the definitions of the connectives for propositional logic. We will work within a Coq section that provides us with a set of propositional variables. In Coq parlance, these are just terms of type [Prop]. *)
adamc@45 45
adamc@45 46 Section Propositional.
adamc@46 47 Variables P Q R : Prop.
adamc@45 48
adamc@45 49 (** In Coq, the most basic propositional connective is implication, written [->], which we have already used in almost every proof. Rather than being defined inductively, implication is built into Coq as the function type constructor.
adamc@45 50
adamc@45 51 We have also already seen the definition of [True]. For a demonstration of a lower-level way of establishing proofs of inductive predicates, we turn to this trivial theorem. *)
adamc@45 52
adamc@45 53 Theorem obvious : True.
adamc@55 54 (* begin thide *)
adamc@45 55 apply I.
adamc@55 56 (* end thide *)
adamc@45 57 Qed.
adamc@45 58
adamc@45 59 (** We may always use the [apply] tactic to take a proof step based on applying a particular constructor of the inductive predicate that we are trying to establish. Sometimes there is only one constructor that could possibly apply, in which case a shortcut is available: *)
adamc@45 60
adamc@55 61 (* begin thide *)
adamc@45 62 Theorem obvious' : True.
adamc@45 63 constructor.
adamc@45 64 Qed.
adamc@45 65
adamc@55 66 (* end thide *)
adamc@55 67
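  (** To make the point about implication concrete: since [->] is literally the function type constructor, a proof of an implication is just a function. For instance, here is a proof of [P -> P], written directly as the identity function rather than built with tactics (the name is arbitrary). *)

  Definition imp_refl : P -> P := fun pf => pf.
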
adamc@45 68 (** There is also a predicate [False], which is the Curry-Howard mirror image of [Empty_set] from the last chapter. *)
adamc@45 69
adamc@45 70 Print False.
adamc@209 71 (** %\vspace{-.15in}% [[
adamc@209 72 Inductive False : Prop :=
adamc@209 73
adamc@209 74 ]]
adamc@45 75
adamc@209 76 We can conclude anything from [False], doing case analysis on a proof of [False] in the same way we might do case analysis on, say, a natural number. Since there are no cases to consider, any such case analysis succeeds immediately in proving the goal. *)
adamc@45 77
adamc@45 78 Theorem False_imp : False -> 2 + 2 = 5.
adamc@55 79 (* begin thide *)
adamc@45 80 destruct 1.
adamc@55 81 (* end thide *)
adamc@45 82 Qed.
adamc@45 83
adamc@45 84 (** In a consistent context, we can never build a proof of [False]. In inconsistent contexts that appear in the course of proofs, it is usually easiest to proceed by demonstrating that inconsistency with an explicit proof of [False]. *)
adamc@45 85
adamc@45 86 Theorem arith_neq : 2 + 2 = 5 -> 9 + 9 = 835.
adamc@55 87 (* begin thide *)
adamc@45 88 intro.
adamc@45 89
adamc@45 90 (** At this point, we have an inconsistent hypothesis [2 + 2 = 5], so the specific conclusion is not important. We use the [elimtype] tactic to state a proposition, telling Coq that we wish to construct a proof of the new proposition and then prove the original goal by case analysis on the structure of the new auxiliary proof. Since [False] has no constructors, [elimtype False] simply leaves us with the obligation to prove [False]. *)
adamc@45 91
adamc@45 92 elimtype False.
adamc@45 93 (** [[
adamc@45 94 H : 2 + 2 = 5
adamc@45 95 ============================
adamc@45 96 False
adamc@209 97
adamc@209 98 ]]
adamc@45 99
adamc@209 100 For now, we will leave the details of this proof about arithmetic to [crush]. *)
adamc@45 101
adamc@45 102 crush.
adamc@55 103 (* end thide *)
adamc@45 104 Qed.
adamc@45 105
adamc@45 106 (** A related notion to [False] is logical negation. *)
adamc@45 107
adamc@45 108 Print not.
adamc@209 109 (** %\vspace{-.15in}% [[
adamc@209 110 not = fun A : Prop => A -> False
adamc@209 111 : Prop -> Prop
adamc@209 112
adamc@209 113 ]]
adamc@45 114
adam@280 115 We see that [not] is just shorthand for implication of [False]. We can use that fact explicitly in proofs. The syntax [~ P] expands to [not P]. *)
adamc@45 116
adamc@45 117 Theorem arith_neq' : ~ (2 + 2 = 5).
adamc@55 118 (* begin thide *)
adamc@45 119 unfold not.
adamc@45 120 (** [[
adamc@45 121 ============================
adamc@45 122 2 + 2 = 5 -> False
adamc@45 123 ]] *)
adamc@45 124
adamc@45 125 crush.
adamc@55 126 (* end thide *)
adamc@45 127 Qed.
adamc@45 128
adamc@45 129 (** We also have conjunction, which we introduced in the last chapter. *)
adamc@45 130
adamc@45 131 Print and.
adamc@209 132 (** %\vspace{-.15in}% [[
adamc@209 133 Inductive and (A : Prop) (B : Prop) : Prop := conj : A -> B -> A /\ B
adamc@209 134
adamc@209 135 ]]
adamc@209 136
adamc@210 137 The interested reader can check that [and] has a Curry-Howard doppelganger called [prod], the type of pairs. However, it is generally most convenient to reason about conjunction using tactics. An explicit proof of commutativity of [and] illustrates the usual suspects for such tasks. [/\] is an infix shorthand for [and]. *)
adamc@45 138
adamc@45 139 Theorem and_comm : P /\ Q -> Q /\ P.
adamc@209 140
adamc@55 141 (* begin thide *)
adamc@45 142 (** We start by case analysis on the proof of [P /\ Q]. *)
adamc@45 143
adamc@45 144 destruct 1.
adamc@45 145 (** [[
adamc@45 146 H : P
adamc@45 147 H0 : Q
adamc@45 148 ============================
adamc@45 149 Q /\ P
adamc@209 150
adamc@209 151 ]]
adamc@45 152
adamc@209 153 Every proof of a conjunction provides proofs for both conjuncts, so we get a single subgoal reflecting that. We can proceed by splitting this subgoal into a case for each conjunct of [Q /\ P]. *)
adamc@45 154
adamc@45 155 split.
adamc@45 156 (** [[
adamc@45 157 2 subgoals
adamc@45 158
adamc@45 159 H : P
adamc@45 160 H0 : Q
adamc@45 161 ============================
adamc@45 162 Q
adamc@45 163
adamc@45 164 subgoal 2 is:
adamc@45 165 P
adamc@209 166
adamc@209 167 ]]
adamc@45 168
adamc@209 169 In each case, the conclusion is among our hypotheses, so the [assumption] tactic finishes the process. *)
adamc@45 170
adamc@45 171 assumption.
adamc@45 172 assumption.
adamc@55 173 (* end thide *)
adamc@45 174 Qed.
adamc@45 175
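  (** Following up on the doppelganger remark above, printing [prod] shows a lone constructor [pair] with the same shape as [conj], living in [Type] instead of [Prop]. *)

  Print prod.
  (** %\vspace{-.15in}% [[
Inductive prod (A : Type) (B : Type) : Type := pair : A -> B -> A * B

]] *)
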
adamc@45 176 (** Coq disjunction is called [or] and abbreviated with the infix operator [\/]. *)
adamc@45 177
adamc@45 178 Print or.
adamc@209 179 (** %\vspace{-.15in}% [[
adamc@209 180 Inductive or (A : Prop) (B : Prop) : Prop :=
adamc@209 181 or_introl : A -> A \/ B | or_intror : B -> A \/ B
adamc@209 182
adamc@209 183 ]]
adamc@45 184
adamc@209 185 We see that there are two ways to prove a disjunction: prove the first disjunct or prove the second. The Curry-Howard analogue of this is the Coq [sum] type. We can demonstrate the main tactics here with another proof of commutativity. *)
adamc@45 186
adamc@45 187 Theorem or_comm : P \/ Q -> Q \/ P.
adamc@55 188
adamc@55 189 (* begin thide *)
adamc@45 190 (** As in the proof for [and], we begin with case analysis, though this time we are met by two cases instead of one. *)
adamc@209 191
adamc@45 192 destruct 1.
adamc@45 193 (** [[
adamc@45 194 2 subgoals
adamc@45 195
adamc@45 196 H : P
adamc@45 197 ============================
adamc@45 198 Q \/ P
adamc@45 199
adamc@45 200 subgoal 2 is:
adamc@45 201 Q \/ P
adamc@209 202
adamc@209 203 ]]
adamc@45 204
adamc@209 205 We can see that, in the first subgoal, we want to prove the disjunction by proving its second disjunct. The [right] tactic telegraphs this intent. *)
adamc@209 206
adamc@45 207 right; assumption.
adamc@45 208
adamc@45 209 (** The second subgoal has a symmetric proof.
adamc@45 210
adamc@45 211 [[
adamc@45 212 1 subgoal
adamc@45 213
adamc@45 214 H : Q
adamc@45 215 ============================
adamc@45 216 Q \/ P
adamc@45 217 ]] *)
adamc@45 218
adamc@45 219 left; assumption.
adamc@55 220 (* end thide *)
adamc@45 221 Qed.
adamc@45 222
adamc@46 223
adamc@46 224 (* begin hide *)
adamc@46 225 (* In-class exercises *)
adamc@46 226
adamc@46 227 Theorem contra : P -> ~P -> R.
adamc@52 228 (* begin thide *)
adamc@52 229 unfold not.
adamc@52 230 intros.
adamc@52 231 elimtype False.
adamc@52 232 apply H0.
adamc@52 233 assumption.
adamc@52 234 (* end thide *)
adamc@46 235 Admitted.
adamc@46 236
adamc@46 237 Theorem and_assoc : (P /\ Q) /\ R -> P /\ (Q /\ R).
adamc@52 238 (* begin thide *)
adamc@52 239 intros.
adamc@52 240 destruct H.
adamc@52 241 destruct H.
adamc@52 242 split.
adamc@52 243 assumption.
adamc@52 244 split.
adamc@52 245 assumption.
adamc@52 246 assumption.
adamc@52 247 (* end thide *)
adamc@46 248 Admitted.
adamc@46 249
adamc@46 250 Theorem or_assoc : (P \/ Q) \/ R -> P \/ (Q \/ R).
adamc@52 251 (* begin thide *)
adamc@52 252 intros.
adamc@52 253 destruct H.
adamc@52 254 destruct H.
adamc@52 255 left.
adamc@52 256 assumption.
adamc@52 257 right.
adamc@52 258 left.
adamc@52 259 assumption.
adamc@52 260 right.
adamc@52 261 right.
adamc@52 262 assumption.
adamc@52 263 (* end thide *)
adamc@46 264 Admitted.
adamc@46 265
adamc@46 266 (* end hide *)
adamc@46 267
adamc@46 268
adamc@46 269 (** It would be a shame to have to plod manually through all proofs about propositional logic. Luckily, there is no need. One of the most basic Coq automation tactics is [tauto], which is a complete decision procedure for constructive propositional logic. (More on what "constructive" means in the next section.) We can use [tauto] to dispatch all of the purely propositional theorems we have proved so far. *)
adamc@46 270
adamc@46 271 Theorem or_comm' : P \/ Q -> Q \/ P.
adamc@55 272 (* begin thide *)
adamc@46 273 tauto.
adamc@55 274 (* end thide *)
adamc@46 275 Qed.
adamc@46 276
adamc@46 277 (** Sometimes propositional reasoning forms important plumbing for the proof of a theorem, but we still need to apply some other smarts about, say, arithmetic. [intuition] is a generalization of [tauto] that proves everything it can using propositional reasoning. When some goals remain, it uses propositional laws to simplify them as far as possible. Consider this example, which uses the list concatenation operator [++] from the standard library. *)
adamc@46 278
adamc@46 279 Theorem arith_comm : forall ls1 ls2 : list nat,
adamc@46 280 length ls1 = length ls2 \/ length ls1 + length ls2 = 6
adamc@46 281 -> length (ls1 ++ ls2) = 6 \/ length ls1 = length ls2.
adamc@55 282 (* begin thide *)
adamc@46 283 intuition.
adamc@46 284
adamc@46 285 (** A lot of the proof structure has been generated for us by [intuition], but the final proof depends on a fact about lists. The remaining subgoal hints at what cleverness we need to inject. *)
adamc@46 286
adamc@46 287 (** [[
adamc@46 288 ls1 : list nat
adamc@46 289 ls2 : list nat
adamc@46 290 H0 : length ls1 + length ls2 = 6
adamc@46 291 ============================
adamc@46 292 length (ls1 ++ ls2) = 6 \/ length ls1 = length ls2
adamc@209 293
adamc@209 294 ]]
adamc@46 295
adamc@209 296 We can see that we need a theorem about lengths of concatenated lists, which we proved in the last chapter and which is also in the standard library. *)
adamc@46 297
adamc@46 298 rewrite app_length.
adamc@46 299 (** [[
adamc@46 300 ls1 : list nat
adamc@46 301 ls2 : list nat
adamc@46 302 H0 : length ls1 + length ls2 = 6
adamc@46 303 ============================
adamc@46 304 length ls1 + length ls2 = 6 \/ length ls1 = length ls2
adamc@209 305
adamc@209 306 ]]
adamc@46 307
adamc@209 308 Now the subgoal follows by purely propositional reasoning. That is, we could replace [length ls1 + length ls2 = 6] with [P] and [length ls1 = length ls2] with [Q] and arrive at a tautology of propositional logic. *)
adamc@46 309
adamc@46 310 tauto.
adamc@55 311 (* end thide *)
adamc@46 312 Qed.
adamc@46 313
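  (** Indeed, abstracting the arithmetic details away leaves a propositional tautology that [tauto] handles directly (the theorem name is arbitrary). *)

  Theorem abstracted_tauto : P -> P \/ Q.
    tauto.
  Qed.
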
adamc@46 314 (** [intuition] is one of the main bits of glue in the implementation of [crush], so, with a little help, we can get a short automated proof of the theorem. *)
adamc@46 315
adamc@55 316 (* begin thide *)
adamc@46 317 Theorem arith_comm' : forall ls1 ls2 : list nat,
adamc@46 318 length ls1 = length ls2 \/ length ls1 + length ls2 = 6
adamc@46 319 -> length (ls1 ++ ls2) = 6 \/ length ls1 = length ls2.
adamc@46 320 Hint Rewrite app_length : cpdt.
adamc@46 321
adamc@46 322 crush.
adamc@46 323 Qed.
adamc@55 324 (* end thide *)
adamc@46 325
adamc@45 326 End Propositional.
adamc@45 327
adamc@46 328
adamc@47 329 (** * What Does It Mean to Be Constructive? *)
adamc@46 330
adamc@47 331 (** One potential point of confusion in the presentation so far is the distinction between [bool] and [Prop]. [bool] is a datatype whose two values are [true] and [false], while [Prop] is a more primitive type that includes among its members [True] and [False]. Why not collapse these two concepts into one, and why must there be more than two states of mathematical truth?
adamc@46 332
adamc@209 333 The answer comes from the fact that Coq implements %\textit{%#<i>#constructive#</i>#%}% or %\textit{%#<i>#intuitionistic#</i>#%}% logic, in contrast to the %\textit{%#<i>#classical#</i>#%}% logic that you may be more familiar with. In constructive logic, classical tautologies like [~ ~ P -> P] and [P \/ ~ P] do not always hold. In general, we can only prove these tautologies when [P] is %\textit{%#<i>#decidable#</i>#%}%, in the sense of computability theory. The Curry-Howard encoding that Coq uses for [or] allows us to extract either a proof of [P] or a proof of [~ P] from any proof of [P \/ ~ P]. Since our proofs are just functional programs which we can run, this would give us a decision procedure for the halting problem, where the instantiations of [P] would be formulas like "this particular Turing machine halts."
adamc@47 334
adamc@47 335 Hence the distinction between [bool] and [Prop]. Programs of type [bool] are computational by construction; we can always run them to determine their results. Many [Prop]s are undecidable, and so we can write more expressive formulas with [Prop]s than with [bool]s, but the inevitable consequence is that we cannot simply "run a [Prop] to determine its truth."
adamc@47 336
adamc@47 337 Constructive logic lets us define all of the logical connectives in an aesthetically-appealing way, with orthogonal inductive definitions. That is, each connective is defined independently using a simple, shared mechanism. Constructivity also enables a trick called %\textit{%#<i>#program extraction#</i>#%}%, where we write programs by phrasing them as theorems to be proved. Since our proofs are just functional programs, we can extract executable programs from our final proofs, which we could not do as naturally with classical proofs.
adamc@47 338
adamc@47 339 We will see more about Coq's program extraction facility in a later chapter. However, I think it is worth interjecting another warning at this point, following up on the prior warning about taking the Curry-Howard correspondence too literally. It is possible to write programs by theorem-proving methods in Coq, but hardly anyone does it. It is almost always most useful to maintain the distinction between programs and proofs. If you write a program by proving a theorem, you are likely to run into algorithmic inefficiencies that you introduced in your proof to make it easier to prove. It is a shame to have to worry about such situations while proving tricky theorems, and it is a happy state of affairs that you almost certainly will not need to, with the ideal of extracting programs from proofs being confined mostly to theoretical studies. *)
adamc@48 340
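(** To make the contrast concrete, consider a [bool]-valued test for zero-ness. It is an ordinary program, so we can simply run it; the corresponding [Prop], such as [pred 1 = 0], is never "run" but instead proved. (The function and its name are just a throwaway example.) *)

Definition zerob (n : nat) : bool :=
  match n with
    | O => true
    | S _ => false
  end.

Eval compute in zerob (pred 1).
(** %\vspace{-.15in}% [[
     = true
     : bool

]] *)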
adamc@48 341
adamc@48 342 (** * First-Order Logic *)
adamc@48 343
adamc@48 344 (** The [forall] connective of first-order logic, which we have seen in many examples so far, is built into Coq. Getting ahead of ourselves a bit, we can see it as the dependent function type constructor. In fact, implication and universal quantification are just different syntactic shorthands for the same Coq mechanism. A formula [P -> Q] is equivalent to [forall x : P, Q], where [x] does not appear in [Q]. That is, the "real" type of the implication says "for every proof of [P], there exists a proof of [Q]." *)
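
(** As a quick illustration, we can ask Coq for the type of a polymorphic identity function; the binder for the proof argument, whose variable is not mentioned in the result, is displayed with [->] rather than [forall]. *)

Check (fun (P : Prop) (pf : P) => pf).
(** %\vspace{-.15in}% [[
fun (P : Prop) (pf : P) => pf
     : forall P : Prop, P -> P

]] *)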
adamc@48 345
adamc@48 346 (** Existential quantification is defined in the standard library. *)
adamc@48 347
adamc@48 348 Print ex.
adamc@209 349 (** %\vspace{-.15in}% [[
adamc@209 350 Inductive ex (A : Type) (P : A -> Prop) : Prop :=
adamc@209 351 ex_intro : forall x : A, P x -> ex P
adamc@209 352
adamc@209 353 ]]
adamc@48 354
adamc@209 355 [ex] is parameterized by the type [A] that we quantify over, and by a predicate [P] over [A]s. We prove an existential by exhibiting some [x] of type [A], along with a proof of [P x]. As usual, there are tactics that save us from worrying about the low-level details most of the time. We use the equality operator [=], which, depending on the settings in which they learned logic, different people will say either is or is not part of first-order logic. For our purposes, it is. *)
adamc@48 356
adamc@48 357 Theorem exist1 : exists x : nat, x + 1 = 2.
adamc@55 358 (* begin thide *)
adamc@67 359 (** remove printing exists *)
adamc@55 360 (** We can start this proof with a tactic [exists], which should not be confused with the formula constructor shorthand of the same name. (In the PDF version of this document, the reverse 'E' appears instead of the text "exists" in formulas.) *)
adamc@209 361
adamc@48 362 exists 1.
adamc@48 363
adamc@209 364 (** The conclusion is replaced with a version using the existential witness that we announced.
adamc@48 365
adamc@209 366 [[
adamc@48 367 ============================
adamc@48 368 1 + 1 = 2
adamc@48 369 ]] *)
adamc@48 370
adamc@48 371 reflexivity.
adamc@55 372 (* end thide *)
adamc@48 373 Qed.
adamc@48 374
adamc@48 375 (** printing exists $\exists$ *)
adamc@48 376
adamc@48 377 (** We can also use tactics to reason about existential hypotheses. *)
adamc@48 378
adamc@48 379 Theorem exist2 : forall n m : nat, (exists x : nat, n + x = m) -> n <= m.
adamc@55 380 (* begin thide *)
adamc@48 381 (** We start by case analysis on the proof of the existential fact. *)
adamc@209 382
adamc@48 383 destruct 1.
adamc@48 384 (** [[
adamc@48 385 n : nat
adamc@48 386 m : nat
adamc@48 387 x : nat
adamc@48 388 H : n + x = m
adamc@48 389 ============================
adamc@48 390 n <= m
adamc@209 391
adamc@209 392 ]]
adamc@48 393
adamc@209 394 The goal has been replaced by a form where there is a new free variable [x], and where we have a new hypothesis that the body of the existential holds with [x] substituted for the old bound variable. From here, the proof is just about arithmetic and is easy to automate. *)
adamc@48 395
adamc@48 396 crush.
adamc@55 397 (* end thide *)
adamc@48 398 Qed.
adamc@48 399
adamc@48 400
adamc@48 401 (* begin hide *)
adamc@48 402 (* In-class exercises *)
adamc@48 403
adamc@48 404 Theorem forall_exists_commute : forall (A B : Type) (P : A -> B -> Prop),
adamc@48 405 (exists x : A, forall y : B, P x y) -> (forall y : B, exists x : A, P x y).
adamc@52 406 (* begin thide *)
adamc@52 407 intros.
adamc@52 408 destruct H.
adamc@52 409 exists x.
adamc@52 410 apply H.
adamc@52 411 (* end thide *)
adamc@48 412 Admitted.
adamc@48 413
adamc@48 414 (* end hide *)
adamc@48 415
adamc@48 416
adamc@48 417 (** The tactic [intuition] has a first-order cousin called [firstorder]. [firstorder] proves many formulas when only first-order reasoning is needed, and it tries to perform first-order simplifications in any case. First-order reasoning is much harder than propositional reasoning, so [firstorder] is much more likely than [intuition] to get stuck in a way that makes it run for long enough to be useless. *)
adamc@49 418
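(** For example, [firstorder] makes short work of a classic quantifier-commuting fact (the theorem name is arbitrary). *)

Theorem swap_quantifiers : forall (A B : Type) (P : A -> B -> Prop),
  (exists x : A, forall y : B, P x y) -> (forall y : B, exists x : A, P x y).
  firstorder.
Qed.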
adamc@49 419
adamc@49 420 (** * Predicates with Implicit Equality *)
adamc@49 421
adamc@49 422 (** We start our exploration of a more complicated class of predicates with a simple example: an alternative way of characterizing when a natural number is zero. *)
adamc@49 423
adamc@49 424 Inductive isZero : nat -> Prop :=
adamc@49 425 | IsZero : isZero 0.
adamc@49 426
adamc@49 427 Theorem isZero_zero : isZero 0.
adamc@55 428 (* begin thide *)
adamc@49 429 constructor.
adamc@55 430 (* end thide *)
adamc@49 431 Qed.
adamc@49 432
adamc@49 433 (** We can call [isZero] a %\textit{%#<i>#judgment#</i>#%}%, in the sense often used in the semantics of programming languages. Judgments are typically defined in the style of %\textit{%#<i>#natural deduction#</i>#%}%, where we write a number of %\textit{%#<i>#inference rules#</i>#%}% with premises appearing above a solid line and a conclusion appearing below the line. In this example, the sole constructor [IsZero] of [isZero] can be thought of as the single inference rule for deducing [isZero], with nothing above the line and [isZero 0] below it. The proof of [isZero_zero] demonstrates how we can apply an inference rule.
adamc@49 434
adamc@49 435 The definition of [isZero] differs in an important way from all of the other inductive definitions that we have seen in this and the previous chapter. Instead of writing just [Set] or [Prop] after the colon, here we write [nat -> Prop]. We saw examples of parameterized types like [list], but there the parameters appeared with names %\textit{%#<i>#before#</i>#%}% the colon. Every constructor of a parameterized inductive type must have a range type that uses the same parameter, whereas the form we use here enables us to use different arguments to the type for different constructors.
adamc@49 436
adamc@49 437 For instance, [isZero] forces its argument to be [0]. We can see that the concept of equality is somehow implicit in the inductive definition mechanism. The way this is accomplished is similar to the way that logic variables are used in Prolog, and it is a very powerful mechanism that forms a foundation for formalizing all of mathematics. In fact, though it is natural to think of inductive types as folding in the functionality of equality, in Coq, the true situation is reversed, with equality defined as just another inductive type! *)
adamc@49 438
adamc@49 439 Print eq.
adamc@209 440 (** %\vspace{-.15in}% [[
adamc@209 441 Inductive eq (A : Type) (x : A) : A -> Prop := refl_equal : x = x
adamc@209 442
adamc@209 443 ]]
adamc@49 444
adamc@209 445 [eq] is the type we get behind the scenes when uses of infix [=] are expanded. We see that [eq] has both a parameter [x] that is fixed and an extra unnamed argument of the same type. The type of [eq] allows us to state any equalities, even those that are provably false. However, examining the type of equality's sole constructor [refl_equal], we see that we can only %\textit{%#<i>#prove#</i>#%}% equality when its two arguments are syntactically equal. This definition turns out to capture all of the basic properties of equality, and the equality-manipulating tactics that we have seen so far, like [reflexivity] and [rewrite], are implemented by treating [eq] as just another inductive type with a well-chosen definition.
adamc@49 446
adamc@49 447 Returning to the example of [isZero], we can see how to make use of hypotheses that use this predicate. *)
adamc@49 448
adamc@49 449 Theorem isZero_plus : forall n m : nat, isZero m -> n + m = n.
adamc@55 450 (* begin thide *)
adamc@49 451 (** We want to proceed by cases on the proof of the assumption about [isZero]. *)
adamc@209 452
adamc@49 453 destruct 1.
adamc@49 454 (** [[
adamc@49 455 n : nat
adamc@49 456 ============================
adamc@49 457 n + 0 = n
adamc@209 458
adamc@209 459 ]]
adamc@49 460
adamc@209 461 Since [isZero] has only one constructor, we are presented with only one subgoal. The argument [m] to [isZero] is replaced with that type's argument from the single constructor [IsZero]. From this point, the proof is trivial. *)
adamc@49 462
adamc@49 463 crush.
adamc@55 464 (* end thide *)
adamc@49 465 Qed.
adamc@49 466
adamc@49 467 (** Another example seems at first like it should admit an analogous proof, but in fact provides a demonstration of one of the most basic gotchas of Coq proving. *)
adamc@49 468
adamc@49 469 Theorem isZero_contra : isZero 1 -> False.
adamc@55 470 (* begin thide *)
adamc@49 471 (** Let us try a proof by cases on the assumption, as in the last proof. *)
adamc@209 472
adamc@49 473 destruct 1.
adamc@49 474 (** [[
adamc@49 475 ============================
adamc@49 476 False
adamc@209 477
adamc@209 478 ]]
adamc@49 479
adamc@209 480 It seems that case analysis has not helped us much at all! Our sole hypothesis disappears, leaving us, if anything, worse off than we were before. What went wrong? We have met an important restriction in tactics like [destruct] and [induction] when applied to types with arguments. If the arguments are not already free variables, they will be replaced by new free variables internally before doing the case analysis or induction. Since the argument [1] to [isZero] is replaced by a fresh variable, we lose the crucial fact that it is not equal to [0].
adamc@49 481
adamc@49 482 Why does Coq use this restriction? We will discuss the issue in detail in a future chapter, when we see the dependently-typed programming techniques that would allow us to write this proof term manually. For now, we just say that the algorithmic problem of "logically complete case analysis" is undecidable when phrased in Coq's logic. A few tactics and design patterns that we will present in this chapter suffice in almost all cases. For the current example, what we want is a tactic called [inversion], which corresponds to the concept of inversion that is frequently used with natural deduction proof systems. *)
adamc@49 483
adamc@49 484 Undo.
adamc@49 485 inversion 1.
adamc@55 486 (* end thide *)
adamc@49 487 Qed.
adamc@49 488
adamc@49 489 (** What does [inversion] do? Think of it as a version of [destruct] that does its best to take advantage of the structure of arguments to inductive types. In this case, [inversion] completed the proof immediately, because it was able to detect that we were using [isZero] with an impossible argument.
adamc@49 490
adamc@49 491 Sometimes using [destruct] when you should have used [inversion] can lead to confusing results. To illustrate, consider an alternate proof attempt for the last theorem. *)
adamc@49 492
adamc@49 493 Theorem isZero_contra' : isZero 1 -> 2 + 2 = 5.
adamc@49 494 destruct 1.
adamc@49 495 (** [[
adamc@49 496 ============================
adamc@49 497 1 + 1 = 4
adamc@209 498
adamc@209 499 ]]
adamc@49 500
adam@280 501 What on earth happened here? Internally, [destruct] replaced [1] with a fresh variable, and, trying to be helpful, it also replaced the occurrence of [1] within the unary representation of each number in the goal. This has the net effect of decrementing each of these numbers. *)
adamc@209 502
adamc@49 503 Abort.
adamc@49 504
adam@280 505 (** To see more clearly what is happening, we can consider the type of [isZero]'s induction principle. *)
adam@280 506
adam@280 507 Check isZero_ind.
adam@280 508 (** %\vspace{-.15in}% [[
adam@280 509 isZero_ind
adam@280 510 : forall P : nat -> Prop, P 0 -> forall n : nat, isZero n -> P n
adam@280 511
adam@280 512 ]]
adam@280 513
adam@280 514 In our last proof script, [destruct] chose to instantiate [P] as [fun n => S n + S n = S (S (S (S n)))]. You can verify for yourself that this specialization of the principle applies to the goal and that the hypothesis [P 0] then matches the subgoal we saw generated. If you are doing a proof and encounter a strange transmutation like this, there is a good chance that you should go back and replace a use of [destruct] with [inversion]. *)
adam@280 515
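(** We can carry out that verification within Coq. After introducing the hypothesis, applying the specialized principle by hand should leave two subgoals: the same curious [1 + 1 = 4] that [destruct] produced, plus the hypothesis [isZero 1] itself, at which point we give up again. (The theorem name is arbitrary.) *)

Theorem isZero_contra'' : isZero 1 -> 2 + 2 = 5.
  intro H.
  apply (isZero_ind (fun n => S n + S n = S (S (S (S n))))).
Abort.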
adamc@49 516
adamc@49 517 (* begin hide *)
adamc@49 518 (* In-class exercises *)
adamc@49 519
adamc@49 520 (* EX: Define an inductive type capturing when a list has exactly two elements. Prove that your predicate does not hold of the empty list, and prove that, whenever it holds of a list, the length of that list is two. *)
adamc@49 521
adamc@52 522 (* begin thide *)
adamc@52 523 Section twoEls.
adamc@52 524 Variable A : Type.
adamc@52 525
adamc@52 526 Inductive twoEls : list A -> Prop :=
adamc@52 527 | TwoEls : forall x y, twoEls (x :: y :: nil).
adamc@52 528
adamc@52 529 Theorem twoEls_nil : twoEls nil -> False.
adamc@52 530 inversion 1.
adamc@52 531 Qed.
adamc@52 532
adamc@52 533 Theorem twoEls_two : forall ls, twoEls ls -> length ls = 2.
adamc@52 534 inversion 1.
adamc@52 535 reflexivity.
adamc@52 536 Qed.
adamc@52 537 End twoEls.
adamc@52 538 (* end thide *)
adamc@52 539
adamc@49 540 (* end hide *)
adamc@49 541
adamc@50 542
adamc@50 543 (** * Recursive Predicates *)
adamc@50 544
adamc@50 545 (** We have already seen all of the ingredients we need to build interesting recursive predicates, like this predicate capturing even-ness. *)
adamc@50 546
adamc@50 547 Inductive even : nat -> Prop :=
adamc@50 548 | EvenO : even O
adamc@50 549 | EvenSS : forall n, even n -> even (S (S n)).
adamc@50 550
adamc@50 551 (** Think of [even] as another judgment defined by natural deduction rules. [EvenO] is a rule with nothing above the line and [even O] below the line, and [EvenSS] is a rule with [even n] above the line and [even (S (S n))] below.
adamc@50 552
adamc@50 553 The proof techniques of the last section are easily adapted. *)
adamc@50 554
adamc@50 555 Theorem even_0 : even 0.
adamc@55 556 (* begin thide *)
adamc@50 557 constructor.
adamc@55 558 (* end thide *)
adamc@50 559 Qed.
adamc@50 560
adamc@50 561 Theorem even_4 : even 4.
adamc@55 562 (* begin thide *)
adamc@50 563 constructor; constructor; constructor.
adamc@55 564 (* end thide *)
adamc@50 565 Qed.
adamc@50 566
adamc@50 567 (** It is not hard to see that sequences of constructor applications like the above can get tedious. We can avoid them using Coq's hint facility. *)
adamc@50 568
adamc@55 569 (* begin thide *)
adamc@50 570 Hint Constructors even.
adamc@50 571
adamc@50 572 Theorem even_4' : even 4.
adamc@50 573 auto.
adamc@50 574 Qed.
adamc@50 575
adamc@55 576 (* end thide *)
adamc@55 577
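(** One wrinkle: [auto]'s search depth is bounded (at 5 by default), so a deeper chain of constructor applications may need an explicit bound. *)

Theorem even_10 : even 10.
  auto 6.
Qed.
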
adamc@50 578 Theorem even_1_contra : even 1 -> False.
adamc@55 579 (* begin thide *)
adamc@50 580 inversion 1.
adamc@55 581 (* end thide *)
adamc@50 582 Qed.
adamc@50 583
adamc@50 584 Theorem even_3_contra : even 3 -> False.
adamc@55 585 (* begin thide *)
adamc@50 586 inversion 1.
adamc@50 587 (** [[
adamc@50 588 H : even 3
adamc@50 589 n : nat
adamc@50 590 H1 : even 1
adamc@50 591 H0 : n = 1
adamc@50 592 ============================
adamc@50 593 False
adamc@209 594
adamc@209 595 ]]
adamc@50 596
adamc@209 597 [inversion] can be a little overzealous at times, as we can see here with the introduction of the unused variable [n] and an equality hypothesis about it. For more complicated predicates, though, adding such assumptions is critical to dealing with the undecidability of general inversion. *)
adamc@50 598
adamc@50 599 inversion H1.
adamc@55 600 (* end thide *)
adamc@50 601 Qed.
adamc@50 602
adamc@50 603 (** We can also do inductive proofs about [even]. *)
adamc@50 604
adamc@50 605 Theorem even_plus : forall n m, even n -> even m -> even (n + m).
adamc@55 606 (* begin thide *)
adamc@50 607 (** It seems a reasonable first choice to proceed by induction on [n]. *)
adamc@209 608
adamc@50 609 induction n; crush.
adamc@50 610 (** [[
adamc@50 611 n : nat
adamc@50 612 IHn : forall m : nat, even n -> even m -> even (n + m)
adamc@50 613 m : nat
adamc@50 614 H : even (S n)
adamc@50 615 H0 : even m
adamc@50 616 ============================
adamc@50 617 even (S (n + m))
adamc@209 618
adamc@209 619 ]]
adamc@50 620
adamc@209 621 We will need to use the hypotheses [H] and [H0] somehow. The most natural choice is to invert [H]. *)
adamc@50 622
adamc@50 623 inversion H.
adamc@50 624 (** [[
adamc@50 625 n : nat
adamc@50 626 IHn : forall m : nat, even n -> even m -> even (n + m)
adamc@50 627 m : nat
adamc@50 628 H : even (S n)
adamc@50 629 H0 : even m
adamc@50 630 n0 : nat
adamc@50 631 H2 : even n0
adamc@50 632 H1 : S n0 = n
adamc@50 633 ============================
adamc@50 634 even (S (S n0 + m))
adamc@209 635
adamc@209 636 ]]
adamc@50 637
adamc@209 638 Simplifying the conclusion brings us to a point where we can apply a constructor. *)
adamc@209 639
adamc@50 640 simpl.
adamc@50 641 (** [[
adamc@50 642 ============================
adamc@50 643 even (S (S (n0 + m)))
adamc@50 644 ]] *)
adamc@50 645
adamc@50 646 constructor.
adamc@50 647 (** [[
adamc@50 648 ============================
adamc@50 649 even (n0 + m)
adamc@209 650
adamc@209 651 ]]
adamc@50 652
adamc@209 653 At this point, we would like to apply the inductive hypothesis, which is:
adamc@209 654
adamc@209 655 [[
adamc@50 656
adamc@50 657 IHn : forall m : nat, even n -> even m -> even (n + m)
adamc@209 658
adamc@209 659 ]]
adamc@50 660
adamc@209 661 Unfortunately, the goal mentions [n0] where it would need to mention [n] to match [IHn]. We could keep looking for a way to finish this proof from here, but it turns out that we can make our lives much easier by changing our basic strategy. Instead of inducting on the structure of [n], we should induct %\textit{%#<i>#on the structure of one of the [even] proofs#</i>#%}%. This technique is commonly called %\textit{%#<i>#rule induction#</i>#%}% in programming language semantics. In the setting of Coq, we have already seen how predicates are defined using the same inductive type mechanism as datatypes, so the fundamental unity of rule induction with "normal" induction is apparent. *)
adamc@50 662
adamc@50 663 Restart.
adamc@50 664
adamc@50 665 induction 1.
adamc@50 666 (** [[
adamc@50 667 m : nat
adamc@50 668 ============================
adamc@50 669 even m -> even (0 + m)
adamc@50 670
adamc@50 671 subgoal 2 is:
adamc@50 672 even m -> even (S (S n) + m)
adamc@209 673
adamc@209 674 ]]
adamc@50 675
adamc@209 676 The first case is easily discharged by [crush], based on the hint we added earlier to try the constructors of [even]. *)
adamc@50 677
adamc@50 678 crush.
adamc@50 679
adamc@50 680 (** Now we focus on the second case: *)
adamc@209 681
adamc@50 682 intro.
adamc@50 683
adamc@50 684 (** [[
adamc@50 685 m : nat
adamc@50 686 n : nat
adamc@50 687 H : even n
adamc@50 688 IHeven : even m -> even (n + m)
adamc@50 689 H0 : even m
adamc@50 690 ============================
adamc@50 691 even (S (S n) + m)
adamc@209 692
adamc@209 693 ]]
adamc@50 694
adamc@209 695 We simplify and apply a constructor, as in our last proof attempt. *)
adamc@50 696
adamc@50 697 simpl; constructor.
adamc@50 698 (** [[
adamc@50 699 ============================
adamc@50 700 even (n + m)
adamc@209 701
adamc@209 702 ]]
adamc@50 703
adamc@209 704 Now we have an exact match with our inductive hypothesis, and the remainder of the proof is trivial. *)
adamc@50 705
adamc@50 706 apply IHeven; assumption.
adamc@50 707
adamc@50 708 (** In fact, [crush] can handle all of the details of the proof once we declare the induction strategy. *)
adamc@50 709
adamc@50 710 Restart.
adamc@50 711 induction 1; crush.
adamc@55 712 (* end thide *)
adamc@50 713 Qed.
adamc@50 714
adamc@50 715 (** Induction on recursive predicates has similar pitfalls to those we encountered with inversion in the last section. *)
adamc@50 716
adamc@50 717 Theorem even_contra : forall n, even (S (n + n)) -> False.
adamc@55 718 (* begin thide *)
adamc@50 719 induction 1.
adamc@50 720 (** [[
adamc@50 721 n : nat
adamc@50 722 ============================
adamc@50 723 False
adamc@50 724
adamc@50 725 subgoal 2 is:
adamc@50 726 False
adamc@209 727
adamc@209 728 ]]
adamc@50 729
adam@280 730 We are already sunk trying to prove the first subgoal, since the argument to [even] was replaced by a fresh variable internally. This time, we find it easier to prove this theorem by way of a lemma. Instead of trusting [induction] to replace expressions with fresh variables, we do it ourselves, explicitly adding the appropriate equalities as new assumptions. *)
adamc@209 731
adamc@50 732 Abort.
adamc@50 733
adamc@50 734 Lemma even_contra' : forall n', even n' -> forall n, n' = S (n + n) -> False.
adamc@50 735 induction 1; crush.
adamc@50 736
adamc@54 737 (** At this point, it is useful to consider all cases of [n] and [n0] being zero or nonzero. Only one of these cases has any trickiness to it. *)
adamc@209 738
adamc@50 739 destruct n; destruct n0; crush.
adamc@50 740
adamc@50 741 (** [[
adamc@50 742 n : nat
adamc@50 743 H : even (S n)
adamc@50 744 IHeven : forall n0 : nat, S n = S (n0 + n0) -> False
adamc@50 745 n0 : nat
adamc@50 746 H0 : S n = n0 + S n0
adamc@50 747 ============================
adamc@50 748 False
adamc@209 749
adamc@209 750 ]]
adamc@50 751
adam@280 752 At this point, it is helpful to use a theorem from the standard library, one that we also proved under a different name in the last chapter. We can search for a theorem that allows us to rewrite terms of the form [x + S y]. *)
adamc@209 753
adam@280 754 SearchRewrite (_ + S _).
adamc@209 755 (** %\vspace{-.15in}% [[
adam@280 756 plus_n_Sm : forall n m : nat, S (n + m) = n + S m
adamc@50 757 ]] *)
adamc@50 758
adamc@50 759 rewrite <- plus_n_Sm in H0.
adamc@50 760
adamc@50 761 (** The induction hypothesis lets us complete the proof. *)
adamc@209 762
adamc@50 763 apply IHeven with n0; assumption.
adamc@50 764
adamc@202 765 (** As usual, we can rewrite the proof to avoid referencing any locally-generated names, which makes our proof script more readable and more robust to changes in the theorem statement. We use the notation [<-] to request a hint that does right-to-left rewriting, just like we can with the [rewrite] tactic. *)
adamc@209 766
adamc@209 767 Restart.
adamc@50 768 Hint Rewrite <- plus_n_Sm : cpdt.
adamc@50 769
adamc@50 770 induction 1; crush;
adamc@50 771 match goal with
adamc@50 772 | [ H : S ?N = ?N0 + ?N0 |- _ ] => destruct N; destruct N0
adamc@50 773 end; crush; eauto.
adamc@50 774 Qed.
adamc@50 775
adamc@50 776 (** We write the proof in a way that avoids the use of local variable or hypothesis names, using the [match] tactic form to do pattern-matching on the goal. We use unification variables prefixed by question marks in the pattern, and we take advantage of the possibility to mention a unification variable twice in one pattern, to enforce equality between occurrences. The hint to rewrite with [plus_n_Sm] in a particular direction saves us from having to figure out the right place to apply that theorem, and we also take critical advantage of a new tactic, [eauto].
adamc@50 777
adamc@55 778 [crush] uses the tactic [intuition], which, when it runs out of tricks to try using only propositional logic, by default tries the tactic [auto], which we saw in an earlier example. [auto] attempts Prolog-style logic programming, searching through all proof trees up to a certain depth that are built only out of hints that have been registered with [Hint] commands. Compared to Prolog, [auto] places an important restriction: it never introduces new unification variables during search. That is, every time a rule is applied during proof search, all of its arguments must be deducible by studying the form of the goal. [eauto] relaxes this restriction, at the cost of possibly exponentially greater running time. In this particular case, we know that [eauto] has only a small space of proofs to search, so it makes sense to run it. It is common in effectively-automated Coq proofs to see a bag of standard tactics applied to pick off the "easy" subgoals, finishing with [eauto] to handle the tricky parts that can benefit from ad-hoc exhaustive search.
adamc@50 779
adamc@50 780 The original theorem now follows trivially from our lemma. *)
adamc@50 781
adamc@50 782 Theorem even_contra : forall n, even (S (n + n)) -> False.
adamc@52 783 intros; eapply even_contra'; eauto.
adamc@50 784 Qed.
adamc@52 785
adamc@52 786 (** We use a variant [eapply] of [apply] which has the same relationship to [apply] as [eauto] has to [auto]. [apply] only succeeds if all arguments to the rule being used can be determined from the form of the goal, whereas [eapply] will introduce unification variables for undetermined arguments. [eauto] is able to determine the right values for those unification variables.
adamc@52 787
adamc@52 788 By considering an alternate attempt at proving the lemma, we can see another common pitfall of inductive proofs in Coq. Imagine that we had tried to prove [even_contra'] with all of the [forall] quantifiers moved to the front of the lemma statement. *)
adamc@52 789
adamc@52 790 Lemma even_contra'' : forall n' n, even n' -> n' = S (n + n) -> False.
adamc@52 791 induction 1; crush;
adamc@52 792 match goal with
adamc@52 793 | [ H : S ?N = ?N0 + ?N0 |- _ ] => destruct N; destruct N0
adamc@52 794 end; crush; eauto.
adamc@52 795
adamc@209 796 (** One subgoal remains:
adamc@52 797
adamc@209 798 [[
adamc@52 799 n : nat
adamc@52 800 H : even (S (n + n))
adamc@52 801 IHeven : S (n + n) = S (S (S (n + n))) -> False
adamc@52 802 ============================
adamc@52 803 False
adamc@209 804
adamc@209 805 ]]
adamc@52 806
adamc@209 807 We are out of luck here. The inductive hypothesis is trivially true, since its assumption is false. In the version of this proof that succeeded, [IHeven] had an explicit quantification over [n]. This is because the quantification of [n] %\textit{%#<i>#appeared after the thing we are inducting on#</i>#%}% in the theorem statement. In general, quantified variables and hypotheses that appear before the induction object in the theorem statement stay fixed throughout the inductive proof. Variables and hypotheses that are quantified after the induction object may be varied explicitly in uses of inductive hypotheses.
adamc@52 808
adamc@52 809 Why should Coq implement [induction] this way? One answer is that it avoids burdening this basic tactic with additional heuristic smarts, but that is not the whole picture. Imagine that [induction] analyzed dependencies among variables and reordered quantifiers to preserve as much freedom as possible in later uses of inductive hypotheses. This could make the inductive hypotheses more complex, which could in turn cause particular automation machinery to fail when it would have succeeded before. In general, we want to avoid quantifiers in our proofs whenever we can, and that goal is furthered by the refactoring that the [induction] tactic forces us to do. *)
adamc@55 810 (* end thide *)
adamc@209 811
adamc@51 812 Abort.
adamc@51 813
adamc@52 814
adamc@52 815 (* begin hide *)
adamc@52 816 (* In-class exercises *)
adamc@52 817
adamc@52 818 (* EX: Define a type [prop] of simple boolean formulas made up only of truth, falsehood, binary conjunction, and binary disjunction. Define an inductive predicate [holds] that captures when [prop]s are valid, and define a predicate [falseFree] that captures when a [prop] does not contain the "false" formula. Prove that every false-free [prop] is valid. *)
adamc@52 819
adamc@52 820 (* begin thide *)
adamc@52 821 Inductive prop : Set :=
adamc@52 822 | Tru : prop
adamc@52 823 | Fals : prop
adamc@52 824 | And : prop -> prop -> prop
adamc@52 825 | Or : prop -> prop -> prop.
adamc@52 826
adamc@52 827 Inductive holds : prop -> Prop :=
adamc@52 828 | HTru : holds Tru
adamc@52 829 | HAnd : forall p1 p2, holds p1 -> holds p2 -> holds (And p1 p2)
adamc@52 830 | HOr1 : forall p1 p2, holds p1 -> holds (Or p1 p2)
adamc@52 831 | HOr2 : forall p1 p2, holds p2 -> holds (Or p1 p2).
adamc@52 832
adamc@52 833 Inductive falseFree : prop -> Prop :=
adamc@52 834 | FFTru : falseFree Tru
adamc@52 835 | FFAnd : forall p1 p2, falseFree p1 -> falseFree p2 -> falseFree (And p1 p2)
adamc@52 836 | FFNot : forall p1 p2, falseFree p1 -> falseFree p2 -> falseFree (Or p1 p2).
adamc@52 837
adamc@52 838 Hint Constructors holds.
adamc@52 839
adamc@52 840 Theorem falseFree_holds : forall p, falseFree p -> holds p.
adamc@52 841 induction 1; crush.
adamc@52 842 Qed.
adamc@52 843 (* end thide *)
adamc@52 844
adamc@52 845
adamc@52 846 (* EX: Define an inductive type [prop'] that is the same as [prop] but omits the possibility for falsehood. Define a proposition [holds'] for [prop'] that is analogous to [holds]. Define a function [propify] for translating [prop']s to [prop]s. Prove that, for any [prop'] [p], if [propify p] is valid, then so is [p]. *)
adamc@52 847
adamc@52 848 (* begin thide *)
adamc@52 849 Inductive prop' : Set :=
adamc@52 850 | Tru' : prop'
adamc@52 851 | And' : prop' -> prop' -> prop'
adamc@52 852 | Or' : prop' -> prop' -> prop'.
adamc@52 853
adamc@52 854 Inductive holds' : prop' -> Prop :=
adamc@52 855 | HTru' : holds' Tru'
adamc@52 856 | HAnd' : forall p1 p2, holds' p1 -> holds' p2 -> holds' (And' p1 p2)
adamc@52 857 | HOr1' : forall p1 p2, holds' p1 -> holds' (Or' p1 p2)
adamc@52 858 | HOr2' : forall p1 p2, holds' p2 -> holds' (Or' p1 p2).
adamc@52 859
adamc@52 860 Fixpoint propify (p : prop') : prop :=
adamc@52 861 match p with
adamc@52 862 | Tru' => Tru
adamc@52 863 | And' p1 p2 => And (propify p1) (propify p2)
adamc@52 864 | Or' p1 p2 => Or (propify p1) (propify p2)
adamc@52 865 end.
adamc@52 866
adamc@52 867 Hint Constructors holds'.
adamc@52 868
adamc@52 869 Lemma propify_holds' : forall p', holds p' -> forall p, p' = propify p -> holds' p.
adamc@52 870 induction 1; crush; destruct p; crush.
adamc@52 871 Qed.
adamc@52 872
adamc@52 873 Theorem propify_holds : forall p, holds (propify p) -> holds' p.
adamc@52 874 intros; eapply propify_holds'; eauto.
adamc@52 875 Qed.
adamc@52 876 (* end thide *)
adamc@52 877
adamc@52 878 (* end hide *)
adamc@58 879
adamc@58 880
adamc@58 881 (** * Exercises *)
adamc@58 882
adamc@58 883 (** %\begin{enumerate}%#<ol>#
adamc@58 884
adamc@58 885 %\item%#<li># Prove these tautologies of propositional logic, using only the tactics [apply], [assumption], [constructor], [destruct], [intro], [intros], [left], [right], [split], and [unfold].
adamc@58 886 %\begin{enumerate}%#<ol>#
adamc@58 887 %\item%#<li># [(True \/ False) /\ (False \/ True)]#</li>#
adamc@209 888 %\item%#<li># [P -> ~ ~ P]#</li>#
adamc@58 889 %\item%#<li># [P /\ (Q \/ R) -> (P /\ Q) \/ (P /\ R)]#</li>#
adamc@61 890 #</ol> </li>#%\end{enumerate}%
adamc@58 891
adamc@61 892 %\item%#<li># Prove the following tautology of first-order logic, using only the tactics [apply], [assert], [assumption], [destruct], [eapply], [eassumption], and %\textit{%#<tt>#exists#</tt>#%}%. You will probably find [assert] useful for stating and proving an intermediate lemma, enabling a kind of "forward reasoning," in contrast to the "backward reasoning" that is the default for Coq tactics. [eassumption] is a version of [assumption] that will do matching of unification variables. Let some variable [T] of type [Set] be the set of individuals. [x] is a constant symbol, [p] is a unary predicate symbol, [q] is a binary predicate symbol, and [f] is a unary function symbol.
adamc@61 893 %\begin{enumerate}%#<ol>#
adamc@58 894 %\item%#<li># [p x -> (forall x, p x -> exists y, q x y) -> (forall x y, q x y -> q y (f y)) -> exists z, q z (f z)]#</li>#
adamc@58 895 #</ol> </li>#%\end{enumerate}%
adamc@58 896
adamc@59 897 %\item%#<li># Define an inductive predicate capturing when a natural number is an integer multiple of either 6 or 10. Prove that 13 does not satisfy your predicate, and prove that any number satisfying the predicate is not odd. It is probably easiest to prove the second theorem by expressing "odd-ness" as equality to [2 * n + 1] for some [n].#</li>#
adamc@59 898
adamc@60 899 %\item%#<li># Define a simple programming language, its semantics, and its typing rules, and then prove that well-typed programs cannot go wrong. Specifically:
adamc@60 900 %\begin{enumerate}%#<ol>#
adamc@60 901 %\item%#<li># Define [var] as a synonym for the natural numbers.#</li>#
adamc@60 902 %\item%#<li># Define an inductive type [exp] of expressions, containing natural number constants, natural number addition, pairing of two other expressions, extraction of the first component of a pair, extraction of the second component of a pair, and variables (based on the [var] type you defined).#</li>#
adamc@60 903 %\item%#<li># Define an inductive type [cmd] of commands, containing expressions and variable assignments. A variable assignment node should contain the variable being assigned, the expression being assigned to it, and the command to run afterward.#</li>#
adamc@60 904 %\item%#<li># Define an inductive type [val] of values, containing natural number constants and pairings of values.#</li>#
adamc@60 905 %\item%#<li># Define a type of variable assignments, which assign a value to each variable.#</li>#
adamc@209 906 %\item%#<li># Define a big-step evaluation relation [eval], capturing what it means for an expression to evaluate to a value under a particular variable assignment. "Big step" means that the evaluation of every expression should be proved with a single instance of the inductive predicate you will define. For instance, "[1 + 1] evaluates to [2] under assignment [va]" should be derivable for any assignment [va].#</li>#
adamc@60 907 %\item%#<li># Define a big-step evaluation relation [run], capturing what it means for a command to run to a value under a particular variable assignment. The value of a command is the result of evaluating its final expression.#</li>#
adamc@60 908 %\item%#<li># Define a type of variable typings, which are like variable assignments, but map variables to types instead of values. You might use polymorphism to share some code with your variable assignments.#</li>#
adamc@60 909 %\item%#<li># Define typing judgments for expressions, values, and commands. The expression and command cases will be in terms of a typing assignment.#</li>#
adamc@60 910 %\item%#<li># Define a predicate [varsType] to express when a variable assignment and a variable typing agree on the types of variables.#</li>#
adamc@60 911 %\item%#<li># Prove that any expression that has type [t] under variable typing [vt] evaluates under variable assignment [va] to some value that also has type [t] in [vt], as long as [va] and [vt] agree.#</li>#
adamc@60 912 %\item%#<li># Prove that any command that has type [t] under variable typing [vt] evaluates under variable assignment [va] to some value that also has type [t] in [vt], as long as [va] and [vt] agree.#</li>#
adamc@60 913 #</ol> </li>#%\end{enumerate}%
adamc@60 914 A few hints that may be helpful:
adamc@60 915 %\begin{enumerate}%#<ol>#
adamc@60 916 %\item%#<li># One easy way of defining variable assignments and typings is to define both as instances of a polymorphic map type. The map type at parameter [T] can be defined to be the type of arbitrary functions from variables to [T]. A helpful function for implementing insertion into such a functional map is [eq_nat_dec], which you can make available with [Require Import Arith.]. [eq_nat_dec] has a dependent type that tells you that it makes accurate decisions on whether two natural numbers are equal, but you can use it as if it returned a boolean, e.g., [if eq_nat_dec n m then E1 else E2].#</li>#
adamc@60 917 %\item%#<li># If you follow the last hint, you may find yourself writing a proof that involves an expression with [eq_nat_dec] that you would like to simplify. Running [destruct] on the particular call to [eq_nat_dec] should do the trick. You can automate this advice with a piece of Ltac: [[
adamc@60 918
adamc@60 919 match goal with
adamc@60 920 | [ |- context[eq_nat_dec ?X ?Y] ] => destruct (eq_nat_dec X Y)
adamc@60 921 end
adamc@60 922 ]] #</li>#
adamc@60 923 %\item%#<li># You probably do not want to use an inductive definition for compatibility of variable assignments and typings.#</li>#
adam@280 924 %\item%#<li># The [Tactics] module from this book contains a variant [crush'] of [crush]. [crush'] takes two arguments. The first argument is a list of lemmas and other functions to be tried automatically in "forward reasoning" style, where we add new facts without being sure yet that they link into a proof of the conclusion. The second argument is a list of predicates on which inversion should be attempted automatically. For instance, running [crush' (lemma1, lemma2) pred] will search for chances to apply [lemma1] and [lemma2] to hypotheses that are already available, adding the new concluded fact if suitable hypotheses can be found. Inversion will be attempted on any hypothesis using [pred], but only those inversions that narrow the field of possibilities to one possible rule will be kept. The format of the list arguments to [crush'] is that you can pass an empty list as [tt], a singleton list as the unadorned single element, and a multiple-element list as a tuple of the elements.#</li>#
adamc@60 925 %\item%#<li># If you want [crush'] to apply polymorphic lemmas, you may have to do a little extra work, if the type parameter is not a free variable of your proof context (so that [crush'] does not know to try it). For instance, if you define a polymorphic map insert function [assign] of some type [forall T : Set, ...], and you want particular applications of [assign] added automatically with type parameter [U], you would need to include [assign] in the lemma list as [assign U] (if you have implicit arguments off) or [assign (T := U)] or [@assign U] (if you have implicit arguments on).#</li>#
adamc@60 926 #</ol> </li>#%\end{enumerate}%
adamc@60 927
adamc@60 928 #</li>#
adamc@60 929
adamc@58 930 #</ol>#%\end{enumerate}% *)