annotate src/InductiveTypes.v @ 546:306539f29eea

Make specially licensed versions of lib files for cpdtlib.tgz. This adds an OCaml tool and calls to it in the Makefile.
author Arthur Peters <amp@singingwizard.org>
date Thu, 12 May 2016 18:09:36 -0500
parents 429e95d23b26
children 81d63d9c1cc5
rev   line source
adam@534 1 (* Copyright (c) 2008-2012, 2015, Adam Chlipala
adamc@26 2 *
adamc@26 3 * This work is licensed under a
adamc@26 4 * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
adamc@26 5 * Unported License.
adamc@26 6 * The license text is available at:
adamc@26 7 * http://creativecommons.org/licenses/by-nc-nd/3.0/
adamc@26 8 *)
adamc@26 9
adamc@26 10 (* begin hide *)
adamc@26 11 Require Import List.
adamc@26 12
adam@534 13 Require Import Cpdt.CpdtTactics.
adamc@26 14
adamc@26 15 Set Implicit Arguments.
adam@534 16 Set Asymmetric Patterns.
adamc@26 17 (* end hide *)
adamc@26 18
adamc@26 19
adamc@74 20 (** %\part{Basic Programming and Proving}
adamc@74 21
adamc@74 22 \chapter{Introducing Inductive Types}% *)
adamc@26 23
adam@475 24 (** The logical foundation of Coq is the Calculus of Inductive Constructions, or CIC. In a sense, CIC is built from just two relatively straightforward features: function types and inductive types. From this modest foundation, we can prove essentially all of the theorems of math and carry out effectively all program verifications, with enough effort expended. This chapter introduces induction and recursion for functional programming in Coq. Most of our examples reproduce functionality from the Coq standard library, and I have tried to copy the standard library's choices of identifiers, where possible, so many of the definitions here are already available in the default Coq environment.
adam@315 25
adam@315 26 The last chapter took a deep dive into some of the more advanced Coq features, to highlight the unusual approach that I advocate in this book. However, from this point on, we will rewind and go back to basics, presenting the relevant features of Coq in a more bottom-up manner. A useful first step is a discussion of the differences and relationships between proofs and programs in Coq. *)
adam@315 27
adam@315 28
adam@315 29 (** * Proof Terms *)
adam@315 30
adam@465 31 (** Mainstream presentations of mathematics treat proofs as objects that exist outside of the universe of mathematical objects. However, for a variety of reasoning tasks, it is convenient to encode proofs, traditional mathematical objects, and programs within a single formal language. Validity checks on mathematical objects are useful in any setting, to catch typos and other uninteresting errors. The benefits of static typing for programs are widely recognized, and Coq brings those benefits to both mathematical objects and programs via a uniform mechanism. In fact, from this point on, we will not bother to distinguish between programs and mathematical objects. Many mathematical formalisms are most easily encoded in terms of programs.
adam@315 32
adam@400 33 Proofs are fundamentally different from programs, because any two proofs of a theorem are considered equivalent, from a formal standpoint if not from an engineering standpoint. However, we can use the same type-checking technology to check proofs as we use to validate our programs. This is the%\index{Curry-Howard correspondence}% _Curry-Howard correspondence_ %\cite{Curry,Howard}%, an approach for relating proofs and programs. We represent mathematical theorems as types, such that a theorem's proofs are exactly those programs that type-check at the corresponding type.
adam@315 34
adam@315 35 The last chapter's example already snuck in an instance of Curry-Howard. We used the token [->] to stand for both function types and logical implications. One reasonable conclusion upon seeing this might be that some fancy overloading of notations is at work. In fact, functions and implications are precisely identical according to Curry-Howard! That is, they are just two ways of describing the same computational phenomenon.
adam@315 36
adam@315 37 A short demonstration should explain how this can be. The identity function over the natural numbers is certainly not a controversial program. *)
adam@315 38
adam@315 39 Check (fun x : nat => x).
adam@315 40 (** [: nat -> nat] *)
adam@315 41
adam@439 42 (** %\smallskip{}%Consider this alternate program, which is almost identical to the last one. *)
adam@315 43
adam@315 44 Check (fun x : True => x).
adam@315 45 (** [: True -> True] *)
adam@315 46
adam@439 47 (** %\smallskip{}%The identity program is interpreted as a proof that %\index{Gallina terms!True}%[True], the always-true proposition, implies itself! What we see is that Curry-Howard interprets implications as functions, where an input is a proposition being assumed and an output is a proposition being deduced. This intuition is not too far from a common one for informal theorem proving, where we might already think of an implication proof as a process for transforming a hypothesis into a conclusion.
adam@315 48
adam@315 49 There are also more primitive proof forms available. For instance, the term %\index{Gallina terms!I}%[I] is the single proof of [True], applicable in any context. *)
adam@315 50
adam@315 51 Check I.
adam@315 52 (** [: True] *)
adam@315 53
adam@439 54 (** %\smallskip{}%With [I], we can prove another simple propositional theorem. *)
adam@315 55
adam@315 56 Check (fun _ : False => I).
adam@315 57 (** [: False -> True] *)
adam@315 58
adam@439 59 (** %\smallskip{}%No proofs of %\index{Gallina terms!False}%[False] exist in the top-level context, but the implication-as-function analogy gives us an easy way to, for example, show that [False] implies itself. *)
adam@315 60
adam@315 61 Check (fun x : False => x).
adam@315 62 (** [: False -> False] *)
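
(** %\smallskip{}%One more example in the same spirit (a supplementary one, beyond the original set): ordinary function application is the Curry-Howard reading of modus ponens, the rule that from [P -> Q] and [P] we may conclude [Q]. *)

Check (fun (f : True -> False) (x : True) => f x).
(** [: (True -> False) -> True -> False] *)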
adam@315 63
adam@439 64 (** %\smallskip{}%Every one of these example programs whose type looks like a logical formula is a%\index{proof term}% _proof term_. We use that name for any Gallina term of a logical type, and we will elaborate shortly on what makes a type logical.
adam@315 65
adam@469 66 In the rest of this chapter, we will introduce different ways of defining types. Every example type can be interpreted alternatively as a type of programs or proofs.
adam@317 67
adam@471 68 One of the first types we introduce will be [bool], with constructors [true] and [false]. Newcomers to Coq often wonder about the distinction between [True] and [true] and the distinction between [False] and [false]. One glib answer is that [True] and [False] are types, but [true] and [false] are not. A more useful answer is that Coq's metatheory guarantees that any term of type [bool] _evaluates_ to either [true] or [false]. This means that we have an _algorithm_ for answering any question phrased as an expression of type [bool]. Conversely, most propositions do not evaluate to [True] or [False]; the language of inductively defined propositions is much richer than that. We ought to be glad that we have no algorithm for deciding our formalized version of mathematical truth, since otherwise it would be clear that we could not formalize undecidable properties, like almost any interesting property of general-purpose programs. *)
adamc@26 69
adamc@26 70
adamc@26 71 (** * Enumerations *)
adamc@26 72
adam@493 73 (** Coq inductive types generalize the %\index{algebraic datatypes}%algebraic datatypes found in %\index{Haskell}%Haskell and %\index{ML}%ML. Confusingly enough, inductive types also generalize %\index{generalized algebraic datatypes}%generalized algebraic datatypes (GADTs), by adding the possibility for type dependency. Even so, it is worth backing up from the examples of the last chapter and going over basic, algebraic-datatype uses of inductive datatypes, because the chance to prove things about the values of these types adds new wrinkles beyond usual practice in Haskell and ML.
adamc@26 74
adam@315 75 The singleton type [unit] is an inductive type:%\index{Gallina terms!unit}\index{Gallina terms!tt}% *)
adamc@26 76
adamc@26 77 Inductive unit : Set :=
adamc@26 78 | tt.
adamc@26 79
adam@471 80 (** This vernacular command defines a new inductive type [unit] whose only value is [tt]. We can verify the types of the two identifiers we introduce: *)
adamc@26 81
adamc@26 82 Check unit.
adamc@208 83 (** [unit : Set] *)
adamc@26 84
adamc@26 85 Check tt.
adamc@208 86 (** [tt : unit] *)
adamc@26 87
adam@439 88 (** %\smallskip{}%We can prove that [unit] is a genuine singleton type. *)
adamc@26 89
adamc@26 90 Theorem unit_singleton : forall x : unit, x = tt.
adamc@208 91
adam@315 92 (** The important thing about an inductive type is, unsurprisingly, that you can do induction over its values, and induction is the key to proving this theorem. We ask to proceed by induction on the variable [x].%\index{tactics!induction}% *)
adamc@208 93
adamc@41 94 (* begin thide *)
adamc@26 95 induction x.
adamc@26 96
adamc@208 97 (** The goal changes to:
adamc@208 98 [[
adamc@26 99 tt = tt
adam@302 100 ]]
adam@302 101 *)
adamc@208 102
adam@448 103 (** %\noindent{}%...which we can discharge trivially. *)
adamc@208 104
adamc@26 105 reflexivity.
adamc@26 106 Qed.
adamc@41 107 (* end thide *)
adamc@26 108
adam@315 109 (** It seems kind of odd to write a proof by induction with no inductive hypotheses. We could have arrived at the same result by beginning the proof with:%\index{tactics!destruct}% [[
adamc@26 110 destruct x.
adamc@205 111 ]]
adamc@205 112
adam@420 113 %\noindent%...which corresponds to "proof by case analysis" in classical math. For non-recursive inductive types, the two tactics will always have identical behavior. Often case analysis is sufficient, even in proofs about recursive types, and it is nice to avoid introducing unneeded induction hypotheses.
adamc@26 114
adam@398 115 What exactly _is_ the %\index{induction principles}%induction principle for [unit]? We can ask Coq: *)
adamc@26 116
adamc@26 117 Check unit_ind.
adamc@208 118 (** [unit_ind : forall P : unit -> Prop, P tt -> forall u : unit, P u] *)
adamc@26 119
adam@439 120 (** %\smallskip{}%Every [Inductive] command defining a type [T] also defines an induction principle named [T_ind]. Recall from the last section that our type, operations over it, and principles for reasoning about it all live in the same language and are described by the same type system. The key to telling what is a program and what is a proof lies in the distinction between the type %\index{Gallina terms!Prop}%[Prop], which appears in our induction principle; and the type %\index{Gallina terms!Set}%[Set], which we have seen a few times already.
adamc@26 121
adam@315 122 The convention goes like this: [Set] is the type of normal types used in programming, and the values of such types are programs. [Prop] is the type of logical propositions, and the values of such types are proofs. Thus, an induction principle has a type that shows us that it is a function for building proofs.
adamc@26 123
adam@400 124 Specifically, [unit_ind] quantifies over a predicate [P] over [unit] values. If we can present a proof that [P] holds of [tt], then we are rewarded with a proof that [P] holds for any value [u] of type [unit]. In our last proof, the predicate was [(fun u : unit => u = tt)].
adam@315 125
adam@315 126 The definition of [unit] places the type in [Set]. By replacing [Set] with [Prop], [unit] with [True], and [tt] with [I], we arrive at precisely the definition of [True] that the Coq standard library employs! The program type [unit] is the Curry-Howard equivalent of the proposition [True]. We might make the tongue-in-cheek claim that, while philosophers have expended much ink on the nature of truth, we have now determined that truth is the [unit] type of functional programming.
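
To double-check that claim (a supplementary definition, not part of the original text, under the hypothetical name [True'] so as not to clash with the standard library's [True]), we can carry out the replacement literally: *)

Inductive True' : Prop :=
| I' : True'.

(** Coq accepts the definition, and [I'] plays for [True'] exactly the role that [I] plays for [True].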
adamc@26 127
adamc@26 128 %\medskip%
adamc@26 129
adam@315 130 We can define an inductive type even simpler than [unit]:%\index{Gallina terms!Empty\_set}% *)
adamc@26 131
adamc@26 132 Inductive Empty_set : Set := .
adamc@26 133
adamc@26 134 (** [Empty_set] has no elements. We can prove fun theorems about it: *)
adamc@26 135
adamc@26 136 Theorem the_sky_is_falling : forall x : Empty_set, 2 + 2 = 5.
adamc@41 137 (* begin thide *)
adamc@26 138 destruct 1.
adamc@26 139 Qed.
adamc@41 140 (* end thide *)
adamc@26 141
adam@317 142 (** Because [Empty_set] has no elements, the fact of having an element of this type implies anything. We use [destruct 1] instead of [destruct x] in the proof because unused quantified variables are relegated to being referred to by number. (There is a good reason for this, related to the unity of quantifiers and implication. At least within Coq's logical foundation of %\index{constructive logic}%constructive logic, which we elaborate on more in the next chapter, an implication is just a quantification over a proof, where the quantified variable is never used. It generally makes more sense to refer to implication hypotheses by number than by name, and Coq treats our quantifier over an unused variable as an implication in determining the proper behavior.)
adamc@26 143
adamc@26 144 We can see the induction principle that made this proof so easy: *)
adamc@26 145
adamc@26 146 Check Empty_set_ind.
adam@400 147 (** [Empty_set_ind : forall (P : Empty_set -> Prop) (e : Empty_set), P e] *)
adamc@26 148
adam@439 149 (** %\smallskip{}%In other words, any predicate over values from the empty set holds vacuously of every such element. In the last proof, we chose the predicate [(fun _ : Empty_set => 2 + 2 = 5)].
adamc@26 150
adamc@26 151 We can also apply this get-out-of-jail-free card programmatically. Here is a lazy way of converting values of [Empty_set] to values of [unit]: *)
adamc@26 152
adamc@26 153 Definition e2u (e : Empty_set) : unit := match e with end.
adamc@26 154
adam@493 155 (** We employ [match] pattern matching as in the last chapter. Since we match on a value whose type has no constructors, there is no need to provide any branches. It turns out that [Empty_set] is the Curry-Howard equivalent of [False]. As for why [Empty_set] starts with a capital letter and not a lowercase letter like [unit] does, we must refer the reader to the authors of the Coq standard library, to which we try to be faithful.
adamc@26 156
adamc@26 157 %\medskip%
adamc@26 158
adam@448 159 Moving up the ladder of complexity, we can define the Booleans:%\index{Gallina terms!bool}\index{Gallina terms!true}\index{Gallina terms!false}% *)
adamc@26 160
adamc@26 161 Inductive bool : Set :=
adamc@26 162 | true
adamc@26 163 | false.
adamc@26 164
adam@448 165 (** We can use less vacuous pattern matching to define Boolean negation.%\index{Gallina terms!negb}% *)
adamc@26 166
adam@279 167 Definition negb (b : bool) : bool :=
adamc@26 168 match b with
adamc@26 169 | true => false
adamc@26 170 | false => true
adamc@26 171 end.
adamc@26 172
adam@317 173 (** An alternative definition desugars to the above, thanks to an %\index{Gallina terms!if}%[if] notation overloaded to work with any inductive type that has exactly two constructors: *)
adamc@27 174
adam@279 175 Definition negb' (b : bool) : bool :=
adamc@27 176 if b then false else true.
adamc@27 177
adam@279 178 (** We might want to prove that [negb] is its own inverse operation. *)
adamc@26 179
adam@279 180 Theorem negb_inverse : forall b : bool, negb (negb b) = b.
adamc@41 181 (* begin thide *)
adamc@26 182 destruct b.
adamc@26 183
adamc@208 184 (** After we case-analyze on [b], we are left with one subgoal for each constructor of [bool].
adam@439 185 [[
adam@439 186 2 subgoals
adamc@26 187
adamc@26 188 ============================
adam@279 189 negb (negb true) = true
adam@439 190
adam@439 191 subgoal 2 is
adam@439 192
adam@279 193 negb (negb false) = false
adamc@26 194 ]]
adamc@26 195
adamc@26 196 The first subgoal follows by Coq's rules of computation, so we can dispatch it easily: *)
adamc@26 197
adamc@26 198 reflexivity.
adamc@26 199
adam@315 200 (** Likewise for the second subgoal, so we can restart the proof and give a very compact justification.%\index{Vernacular commands!Restart}% *)
adamc@26 201
adamc@26 202 Restart.
adam@315 203
adamc@26 204 destruct b; reflexivity.
adamc@26 205 Qed.
adamc@41 206 (* end thide *)
adamc@27 207
adam@448 208 (** Another theorem about Booleans illustrates another useful tactic.%\index{tactics!discriminate}% *)
adamc@27 209
adam@279 210 Theorem negb_ineq : forall b : bool, negb b <> b.
adamc@41 211 (* begin thide *)
adamc@27 212 destruct b; discriminate.
adamc@27 213 Qed.
adamc@41 214 (* end thide *)
adamc@27 215
adam@448 216 (** The [discriminate] tactic is used to prove that two values of an inductive type are not equal, whenever the values are formed with different constructors. In this case, the different constructors are [true] and [false].
adamc@27 217
adamc@27 218 At this point, it is probably not hard to guess what the underlying induction principle for [bool] is. *)
adamc@27 219
adamc@27 220 Check bool_ind.
adamc@208 221 (** [bool_ind : forall P : bool -> Prop, P true -> P false -> forall b : bool, P b] *)
adamc@28 222
adam@439 223 (** %\smallskip{}%That is, to prove that a property describes all [bool]s, prove that it describes both [true] and [false].
adam@315 224
adam@392 225 There is no interesting Curry-Howard analogue of [bool]. Of course, we can define such a type by replacing [Set] by [Prop] above, but the proposition we arrive at is not very useful. It is logically equivalent to [True], but it provides two indistinguishable primitive proofs, [true] and [false]. In the rest of the chapter, we will skip commenting on Curry-Howard versions of inductive definitions where such versions are not interesting. *)
adam@315 226
adamc@28 227
adamc@28 228 (** * Simple Recursive Types *)
adamc@28 229
adam@315 230 (** The natural numbers are the simplest common example of an inductive type that actually deserves the name.%\index{Gallina terms!nat}\index{Gallina terms!O}\index{Gallina terms!S}% *)
adamc@28 231
adamc@28 232 Inductive nat : Set :=
adamc@28 233 | O : nat
adamc@28 234 | S : nat -> nat.
adamc@28 235
adam@439 236 (** The constructor [O] is zero, and [S] is the successor function, so that [0] is syntactic sugar for [O], [1] for [S O], [2] for [S (S O)], and so on.
adamc@28 237
adam@316 238 Pattern matching works as we demonstrated in the last chapter:%\index{Gallina terms!pred}% *)
adamc@28 239
adamc@28 240 Definition isZero (n : nat) : bool :=
adamc@28 241 match n with
adamc@28 242 | O => true
adamc@28 243 | S _ => false
adamc@28 244 end.
adamc@28 245
adamc@28 246 Definition pred (n : nat) : nat :=
adamc@28 247 match n with
adamc@28 248 | O => O
adamc@28 249 | S n' => n'
adamc@28 250 end.
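
(** A quick sanity check of these definitions in action (an example added here, not in the original text): *)

Eval compute in isZero (S O).
(** [= false : bool] *)

Eval compute in pred (S (S O)).
(** [= S O : nat] *)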
adamc@28 251
adam@469 252 (** We can prove theorems by case analysis with [destruct] as for simpler inductive types, but we can also now get into genuine inductive theorems. First, we will need a recursive function, to make things interesting.%\index{Gallina terms!plus}% *)
adamc@28 253
adamc@208 254 Fixpoint plus (n m : nat) : nat :=
adamc@28 255 match n with
adamc@28 256 | O => m
adamc@28 257 | S n' => S (plus n' m)
adamc@28 258 end.
adamc@28 259
adamc@208 260 (** Recall that [Fixpoint] is Coq's mechanism for recursive function definitions. Some theorems about [plus] can be proved without induction. *)
adamc@28 261
adamc@28 262 Theorem O_plus_n : forall n : nat, plus O n = n.
adamc@41 263 (* begin thide *)
adamc@28 264 intro; reflexivity.
adamc@28 265 Qed.
adamc@41 266 (* end thide *)
adamc@28 267
adamc@208 268 (** Coq's computation rules automatically simplify the application of [plus], because unfolding the definition of [plus] gives us a [match] expression where the branch to be taken is obvious from syntax alone. If we just reverse the order of the arguments, though, this no longer works, and we need induction. *)
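
(** To see the first half of that claim concretely (a check added here, not in the original text), we can ask Coq to simplify an application of [plus] whose first argument is [O], leaving the second argument abstract. The reversed form [plus n O] is stuck, since the [match] scrutinizes the variable [n]. *)

Eval simpl in fun n : nat => plus O n.
(** [= fun n : nat => n : nat -> nat] *)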
adamc@28 269
adamc@28 270 Theorem n_plus_O : forall n : nat, plus n O = n.
adamc@41 271 (* begin thide *)
adamc@28 272 induction n.
adamc@28 273
adam@398 274 (** Our first subgoal is [plus O O = O], which _is_ trivial by computation. *)
adamc@28 275
adamc@28 276 reflexivity.
adamc@28 277
adam@469 278 (** Our second subgoal requires more work and also demonstrates our first inductive hypothesis.
adamc@28 279
adamc@28 280 [[
adamc@28 281 n : nat
adamc@28 282 IHn : plus n O = n
adamc@28 283 ============================
adamc@28 284 plus (S n) O = S n
adamc@208 285
adamc@28 286 ]]
adamc@28 287
adam@315 288 We can start out by using computation to simplify the goal as far as we can.%\index{tactics!simpl}% *)
adamc@28 289
adamc@28 290 simpl.
adamc@28 291
adam@400 292 (** Now the conclusion is [S (plus n O) = S n]. Using our inductive hypothesis: *)
adamc@28 293
adamc@28 294 rewrite IHn.
adamc@28 295
adam@448 296 (** %\noindent{}%...we get a trivial conclusion [S n = S n]. *)
adamc@28 297
adamc@28 298 reflexivity.
adamc@28 299
adam@315 300 (** Not much really went on in this proof, so the [crush] tactic from the [CpdtTactics] module can prove this theorem automatically. *)
adamc@28 301
adamc@28 302 Restart.
adam@315 303
adamc@28 304 induction n; crush.
adamc@28 305 Qed.
adamc@41 306 (* end thide *)
adamc@28 307
adamc@28 308 (** We can check out the induction principle at work here: *)
adamc@28 309
adamc@28 310 Check nat_ind.
adamc@208 311 (** %\vspace{-.15in}% [[
adamc@208 312 nat_ind : forall P : nat -> Prop,
adamc@208 313 P O -> (forall n : nat, P n -> P (S n)) -> forall n : nat, P n
adamc@208 314 ]]
adamc@28 315
adam@442 316 Each of the two cases of our last proof came from the type of one of the arguments to [nat_ind]. We chose [P] to be [(fun n : nat => plus n O = n)]. The first proof case corresponded to [P O] and the second case to [(forall n : nat, P n -> P (S n))]. The free variable [n] and inductive hypothesis [IHn] came from the argument types given here.
adamc@28 317
adam@315 318 Since [nat] has a constructor that takes an argument, we may sometimes need to know that that constructor is injective.%\index{tactics!injection}\index{tactics!trivial}% *)
adamc@28 319
adamc@28 320 Theorem S_inj : forall n m : nat, S n = S m -> n = m.
adamc@41 321 (* begin thide *)
adamc@28 322 injection 1; trivial.
adamc@28 323 Qed.
adamc@41 324 (* end thide *)
adamc@28 325
adam@471 326 (** The [injection] tactic refers to a premise by number, adding new equalities between the corresponding arguments of equated terms that are formed with the same constructor. We end up needing to prove [n = m -> n = m], so it is unsurprising that a tactic named [trivial] is able to finish the proof. This tactic attempts a variety of single proof steps, drawn from a user-specified database that we will later see how to extend.
adamc@28 327
adam@417 328 There is also a very useful tactic called %\index{tactics!congruence}%[congruence] that can prove this theorem immediately. The [congruence] tactic generalizes [discriminate] and [injection], and it also adds reasoning about the general properties of equality, such as that a function returns equal results on equal arguments. That is, [congruence] is a%\index{theory of equality and uninterpreted functions}% _complete decision procedure for the theory of equality and uninterpreted functions_, plus some smarts about inductive types.
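
For example (a supplementary proof, not part of the original text), [congruence] finishes the same goal in one step, after the hypotheses are introduced: *)

Theorem S_inj' : forall n m : nat, S n = S m -> n = m.
  intros; congruence.
Qed.

(**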
adamc@29 329
adamc@29 330 %\medskip%
adamc@29 331
adamc@29 332 We can define a type of lists of natural numbers. *)
adamc@29 333
adamc@29 334 Inductive nat_list : Set :=
adamc@29 335 | NNil : nat_list
adamc@29 336 | NCons : nat -> nat_list -> nat_list.
adamc@29 337
adam@470 338 (** Recursive definitions over [nat_list] are straightforward extensions of what we have seen before. *)
adamc@29 339
adamc@29 340 Fixpoint nlength (ls : nat_list) : nat :=
adamc@29 341 match ls with
adamc@29 342 | NNil => O
adamc@29 343 | NCons _ ls' => S (nlength ls')
adamc@29 344 end.
adamc@29 345
adamc@208 346 Fixpoint napp (ls1 ls2 : nat_list) : nat_list :=
adamc@29 347 match ls1 with
adamc@29 348 | NNil => ls2
adamc@29 349 | NCons n ls1' => NCons n (napp ls1' ls2)
adamc@29 350 end.
adamc@29 351
adamc@29 352 (** Inductive theorem proving can again be automated quite effectively. *)
adamc@29 353
adamc@29 354 Theorem nlength_napp : forall ls1 ls2 : nat_list, nlength (napp ls1 ls2)
adamc@29 355 = plus (nlength ls1) (nlength ls2).
adamc@41 356 (* begin thide *)
adamc@29 357 induction ls1; crush.
adamc@29 358 Qed.
adamc@41 359 (* end thide *)
adamc@29 360
adamc@29 361 Check nat_list_ind.
adamc@208 362 (** %\vspace{-.15in}% [[
adamc@208 363 nat_list_ind
adamc@29 364 : forall P : nat_list -> Prop,
adamc@29 365 P NNil ->
adamc@29 366 (forall (n : nat) (n0 : nat_list), P n0 -> P (NCons n n0)) ->
adamc@29 367 forall n : nat_list, P n
adamc@29 368 ]]
adamc@29 369
adamc@29 370 %\medskip%
adamc@29 371
adam@471 372 In general, we can implement any "tree" type as an inductive type. For example, here are binary trees of naturals. *)
adamc@29 373
adamc@29 374 Inductive nat_btree : Set :=
adamc@29 375 | NLeaf : nat_btree
adamc@29 376 | NNode : nat_btree -> nat -> nat_btree -> nat_btree.
adamc@29 377
adam@471 378 (** Here are two functions whose intuitive explanations are not so important. The first one computes the size of a tree, and the second performs some sort of splicing of one tree into the leftmost available leaf node of another. *)
adam@471 379
adamc@29 380 Fixpoint nsize (tr : nat_btree) : nat :=
adamc@29 381 match tr with
adamc@35 382 | NLeaf => S O
adamc@29 383 | NNode tr1 _ tr2 => plus (nsize tr1) (nsize tr2)
adamc@29 384 end.
adamc@29 385
adamc@208 386 Fixpoint nsplice (tr1 tr2 : nat_btree) : nat_btree :=
adamc@29 387 match tr1 with
adamc@35 388 | NLeaf => NNode tr2 O NLeaf
adamc@29 389 | NNode tr1' n tr2' => NNode (nsplice tr1' tr2) n tr2'
adamc@29 390 end.
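
(** To see [nsplice] in action (an example added here, not in the original text), here it is applied to two one-node trees; the second tree ends up grafted in at the leftmost leaf of the first. *)

Eval compute in nsplice (NNode NLeaf O NLeaf) (NNode NLeaf (S O) NLeaf).
(** [= NNode (NNode (NNode NLeaf (S O) NLeaf) O NLeaf) O NLeaf : nat_btree] *)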
adamc@29 391
adamc@29 392 Theorem plus_assoc : forall n1 n2 n3 : nat, plus (plus n1 n2) n3 = plus n1 (plus n2 n3).
adamc@41 393 (* begin thide *)
adamc@29 394 induction n1; crush.
adamc@29 395 Qed.
adamc@41 396 (* end thide *)
adamc@29 397
adam@534 398 Hint Rewrite n_plus_O plus_assoc.
adam@534 399
adamc@29 400 Theorem nsize_nsplice : forall tr1 tr2 : nat_btree, nsize (nsplice tr1 tr2)
adamc@29 401 = plus (nsize tr2) (nsize tr1).
adamc@41 402 (* begin thide *)
adamc@29 403 induction tr1; crush.
adamc@29 404 Qed.
adamc@41 405 (* end thide *)
adamc@29 406
adam@471 407 (** It is convenient that these proofs go through so easily, but it is still useful to look into the details of what happened, by checking the statement of the tree induction principle. *)
adam@315 408
adamc@29 409 Check nat_btree_ind.
adamc@208 410 (** %\vspace{-.15in}% [[
adamc@208 411 nat_btree_ind
adamc@29 412 : forall P : nat_btree -> Prop,
adamc@29 413 P NLeaf ->
adamc@29 414 (forall n : nat_btree,
adamc@29 415 P n -> forall (n0 : nat) (n1 : nat_btree), P n1 -> P (NNode n n0 n1)) ->
adamc@29 416 forall n : nat_btree, P n
adam@302 417 ]]
adam@315 418
adam@315 419 We have the usual two cases, one for each constructor of [nat_btree]. *)
adamc@30 420
adamc@30 421
adamc@30 422 (** * Parameterized Types *)
adamc@30 423
adam@316 424 (** We can also define %\index{polymorphism}%polymorphic inductive types, as with algebraic datatypes in Haskell and ML.%\index{Gallina terms!list}\index{Gallina terms!Nil}\index{Gallina terms!Cons}\index{Gallina terms!length}\index{Gallina terms!app}% *)
adamc@30 425
adamc@30 426 Inductive list (T : Set) : Set :=
adamc@30 427 | Nil : list T
adamc@30 428 | Cons : T -> list T -> list T.
adamc@30 429
adamc@30 430 Fixpoint length T (ls : list T) : nat :=
adamc@30 431 match ls with
adamc@30 432 | Nil => O
adamc@30 433 | Cons _ ls' => S (length ls')
adamc@30 434 end.
adamc@30 435
adamc@208 436 Fixpoint app T (ls1 ls2 : list T) : list T :=
adamc@30 437 match ls1 with
adamc@30 438 | Nil => ls2
adamc@30 439 | Cons x ls1' => Cons x (app ls1' ls2)
adamc@30 440 end.
adamc@30 441
adamc@30 442 Theorem length_app : forall T (ls1 ls2 : list T), length (app ls1 ls2)
adamc@30 443 = plus (length ls1) (length ls2).
adamc@41 444 (* begin thide *)
adamc@30 445 induction ls1; crush.
adamc@30 446 Qed.
adamc@41 447 (* end thide *)
adamc@30 448
adam@420 449 (** There is a useful shorthand for writing many definitions that share the same parameter, based on Coq's%\index{sections}\index{Vernacular commands!Section}\index{Vernacular commands!Variable}% _section_ mechanism. The following block of code is equivalent to the above: *)
adamc@30 450
adamc@30 451 (* begin hide *)
adamc@30 452 Reset list.
adamc@30 453 (* end hide *)
adamc@30 454
adamc@30 455 Section list.
adamc@30 456 Variable T : Set.
adamc@30 457
adamc@30 458 Inductive list : Set :=
adamc@30 459 | Nil : list
adamc@30 460 | Cons : T -> list -> list.
adamc@30 461
adamc@30 462 Fixpoint length (ls : list) : nat :=
adamc@30 463 match ls with
adamc@30 464 | Nil => O
adamc@30 465 | Cons _ ls' => S (length ls')
adamc@30 466 end.
adamc@30 467
adamc@208 468 Fixpoint app (ls1 ls2 : list) : list :=
adamc@30 469 match ls1 with
adamc@30 470 | Nil => ls2
adamc@30 471 | Cons x ls1' => Cons x (app ls1' ls2)
adamc@30 472 end.
adamc@30 473
adamc@30 474 Theorem length_app : forall ls1 ls2 : list, length (app ls1 ls2)
adamc@30 475 = plus (length ls1) (length ls2).
adamc@41 476 (* begin thide *)
adamc@30 477 induction ls1; crush.
adamc@30 478 Qed.
adamc@41 479 (* end thide *)
adamc@30 480 End list.
adamc@30 481
adamc@35 482 Implicit Arguments Nil [T].
adamc@35 483
adam@469 484 (** After we end the section, the [Variable]s we used are added as extra function parameters for each defined identifier, as needed. With an [Implicit Arguments]%~\index{Vernacular commands!Implicit Arguments}% command, we ask that [T] be inferred when we use [Nil]; Coq's heuristics already decided to apply a similar policy to [Cons], because of the [Set Implicit Arguments]%~\index{Vernacular commands!Set Implicit Arguments}% command elided at the beginning of this chapter. We verify that our definitions have been saved properly using the [Print] command, a cousin of [Check] which shows the definition of a symbol, rather than just its type. *)
adamc@30 485
adamc@202 486 Print list.
adamc@208 487 (** %\vspace{-.15in}% [[
adamc@208 488 Inductive list (T : Set) : Set :=
adam@316 489 Nil : list T | Cons : T -> list T -> list T
adamc@202 490 ]]
adamc@30 491
adam@442 492 The final definition is the same as what we wrote manually before. The other elements of the section are altered similarly, turning out exactly as they were before, though we managed to write their definitions more succinctly. *)
adamc@30 493
adamc@30 494 Check length.
adamc@208 495 (** %\vspace{-.15in}% [[
adamc@208 496 length
adamc@30 497 : forall T : Set, list T -> nat
adamc@30 498 ]]
adamc@30 499
adam@442 500 The parameter [T] is treated as a new argument to the induction principle, too. *)
adamc@30 501
adamc@30 502 Check list_ind.
adamc@208 503 (** %\vspace{-.15in}% [[
adamc@208 504 list_ind
adamc@30 505 : forall (T : Set) (P : list T -> Prop),
adamc@30 506 P (Nil T) ->
adamc@30 507 (forall (t : T) (l : list T), P l -> P (Cons t l)) ->
adamc@30 508 forall l : list T, P l
adamc@30 509 ]]
adamc@30 510
adam@488 511 Thus, despite a very real sense in which the type [T] is an argument to the constructor [Cons], the inductive case in the type of [list_ind] (i.e., the third line of the type) includes no quantifier for [T], even though all of the other arguments are quantified explicitly. Parameters in other inductive definitions are treated similarly in stating induction principles. *)
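
(** As a small supplementary example (an addition, with the hypothetical names [lmap] and [length_lmap]), here is a polymorphic map function over our [list] type, along with a proof in the same style as [length_app]. The type parameters thread through the definition, the statement, and the proof with no extra ceremony. *)

Fixpoint lmap T U (f : T -> U) (ls : list T) : list U :=
  match ls with
    | Nil => Nil
    | Cons x ls' => Cons (f x) (lmap f ls')
  end.

Theorem length_lmap : forall T U (f : T -> U) (ls : list T),
  length (lmap f ls) = length ls.
  induction ls; crush.
Qed.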
adamc@31 512
adamc@31 513
adamc@31 514 (** * Mutually Inductive Types *)
adamc@31 515
adamc@31 516 (** We can define inductive types that refer to each other: *)
adamc@31 517
adamc@31 518 Inductive even_list : Set :=
adamc@31 519 | ENil : even_list
adamc@31 520 | ECons : nat -> odd_list -> even_list
adamc@31 521
adamc@31 522 with odd_list : Set :=
adamc@31 523 | OCons : nat -> even_list -> odd_list.
adamc@31 524
adamc@31 525 Fixpoint elength (el : even_list) : nat :=
adamc@31 526 match el with
adamc@31 527 | ENil => O
adamc@31 528 | ECons _ ol => S (olength ol)
adamc@31 529 end
adamc@31 530
adamc@31 531 with olength (ol : odd_list) : nat :=
adamc@31 532 match ol with
adamc@31 533 | OCons _ el => S (elength el)
adamc@31 534 end.
adamc@31 535
adamc@208 536 Fixpoint eapp (el1 el2 : even_list) : even_list :=
adamc@31 537 match el1 with
adamc@31 538 | ENil => el2
adamc@31 539 | ECons n ol => ECons n (oapp ol el2)
adamc@31 540 end
adamc@31 541
adamc@208 542 with oapp (ol : odd_list) (el : even_list) : odd_list :=
adamc@31 543 match ol with
adamc@31 544 | OCons n el' => OCons n (eapp el' el)
adamc@31 545 end.
adamc@31 546
adamc@31 547 (** Everything is going roughly the same as in past examples, until we try to prove a theorem similar to those that came before. *)
adamc@31 548
adamc@31 549 Theorem elength_eapp : forall el1 el2 : even_list,
adamc@31 550 elength (eapp el1 el2) = plus (elength el1) (elength el2).
adamc@41 551 (* begin thide *)
adamc@31 552 induction el1; crush.
adamc@31 553
adamc@31 554 (** One goal remains: [[
adamc@31 555
adamc@31 556 n : nat
adamc@31 557 o : odd_list
adamc@31 558 el2 : even_list
adamc@31 559 ============================
adamc@31 560 S (olength (oapp o el2)) = S (plus (olength o) (elength el2))
adamc@31 561 ]]
adamc@31 562
adamc@31 563 We have no induction hypothesis, so we cannot prove this goal without starting another induction, which would reach a similar point, sending us into a futile infinite chain of inductions. The problem is that Coq's generation of [T_ind] principles is incomplete. We only get non-mutual induction principles generated by default. *)
adamc@31 564
adamc@31 565 Abort.
adamc@31 566 Check even_list_ind.
adamc@208 567 (** %\vspace{-.15in}% [[
adamc@208 568 even_list_ind
adamc@31 569 : forall P : even_list -> Prop,
adamc@31 570 P ENil ->
adamc@31 571 (forall (n : nat) (o : odd_list), P (ECons n o)) ->
adamc@31 572 forall e : even_list, P e
adamc@31 573 ]]
adamc@31 574
adam@442 575 We see that no inductive hypotheses are included anywhere in the type. To get them, we must ask for mutual principles as we need them, using the %\index{Vernacular commands!Scheme}%[Scheme] command. *)
adamc@31 576
adamc@31 577 Scheme even_list_mut := Induction for even_list Sort Prop
adamc@31 578 with odd_list_mut := Induction for odd_list Sort Prop.
adamc@31 579
adam@316 580 (** This invocation of [Scheme] asks for the creation of induction principles [even_list_mut] for the type [even_list] and [odd_list_mut] for the type [odd_list]. The [Induction] keyword says we want standard induction schemes, since [Scheme] supports more exotic choices. Finally, [Sort Prop] establishes that we really want induction schemes, not recursion schemes, which are the same according to Curry-Howard, save for the [Prop]/[Set] distinction. *)
adam@316 581
adamc@31 582 Check even_list_mut.
adamc@208 583 (** %\vspace{-.15in}% [[
adamc@208 584 even_list_mut
adamc@31 585 : forall (P : even_list -> Prop) (P0 : odd_list -> Prop),
adamc@31 586 P ENil ->
adamc@31 587 (forall (n : nat) (o : odd_list), P0 o -> P (ECons n o)) ->
adamc@31 588 (forall (n : nat) (e : even_list), P e -> P0 (OCons n e)) ->
adamc@31 589 forall e : even_list, P e
adamc@31 590 ]]
adamc@31 591
adam@471 592 This is the principle we wanted in the first place.
adam@471 593
adam@471 594 The [Scheme] command is for asking Coq to generate particular induction schemes that are mutual among a set of inductive types (possibly only one such type, in which case we get a normal induction principle). In a sense, it generalizes the induction scheme generation that goes on automatically for each inductive definition. Future Coq versions might make that automatic generation smarter, so that [Scheme] is needed in fewer places. In a few sections, we will see how induction principles are derived theorems in Coq, so that there is not actually any need to build in _any_ automatic scheme generation.
adam@471 595
adam@471 596 There is one more wrinkle left in using the [even_list_mut] induction principle: the [induction] tactic will not apply it for us automatically. It will be helpful to look at how to prove one of our past examples without using [induction], so that we can then generalize the technique to mutual inductive types.%\index{tactics!apply}% *)
adamc@31 597
adamc@31 598 Theorem n_plus_O' : forall n : nat, plus n O = n.
adam@471 599 apply nat_ind.
adam@471 600 (** Here we use [apply], which is one of the most essential basic tactics. When we are trying to prove fact [P], and when [thm] is a theorem whose conclusion can be made to match [P] by proper choice of quantified variable values, the invocation [apply thm] will replace the current goal with one new goal for each premise of [thm].
adam@471 601
adam@471 602 This use of [apply] may seem a bit _too_ magical. To better see what is going on, we use a variant where we partially apply the theorem [nat_ind] to give an explicit value for the predicate that gives our induction hypothesis. *)
adam@471 603
adam@471 604 Undo.
adamc@31 605 apply (nat_ind (fun n => plus n O = n)); crush.
adamc@31 606 Qed.
adamc@31 607
adam@471 608 (** From this example, we can see that [induction] is not magic. It only does some bookkeeping for us to make it easy to apply a theorem, which we can do directly with the [apply] tactic.
adamc@31 609
adamc@31 610 This technique generalizes to our mutual example: *)
adamc@31 611
adamc@31 612 Theorem elength_eapp : forall el1 el2 : even_list,
adamc@31 613 elength (eapp el1 el2) = plus (elength el1) (elength el2).
adamc@41 614
adamc@31 615 apply (even_list_mut
adamc@31 616 (fun el1 : even_list => forall el2 : even_list,
adamc@31 617 elength (eapp el1 el2) = plus (elength el1) (elength el2))
adamc@31 618 (fun ol : odd_list => forall el : even_list,
adamc@31 619 olength (oapp ol el) = plus (olength ol) (elength el))); crush.
adamc@31 620 Qed.
adamc@41 621 (* end thide *)
adamc@31 622
adam@475 623 (** We simply need to specify two predicates, one for each of the mutually inductive types. In general, it is not a good idea to assume that a proof assistant can infer extra predicates, so this way of applying mutual induction is about as straightforward as we may hope for. *)
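
(** As one more exercise of the same principle (an addition, not in the original text), the two-predicate pattern also shows that appending [ENil] on the right is the identity. *)

Theorem eapp_ENil : forall el : even_list, eapp el ENil = el.
  apply (even_list_mut
    (fun el : even_list => eapp el ENil = el)
    (fun ol : odd_list => oapp ol ENil = ol)); crush.
Qed.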
adamc@33 624
adamc@33 625
adamc@33 626 (** * Reflexive Types *)
adamc@33 627
adam@469 628 (** A kind of inductive type called a _reflexive type_ includes at least one constructor that takes as an argument _a function returning the same type we are defining_. One very useful class of examples is in modeling variable binders. Our example will be an encoding of the syntax of first-order logic. Since the idea of syntactic encodings of logic may require a bit of acclimation, let us first consider a simpler formula type for a subset of propositional logic. We are not yet using a reflexive type, but later we will extend the example reflexively. *)
adam@316 629
adam@316 630 Inductive pformula : Set :=
adam@316 631 | Truth : pformula
adam@316 632 | Falsehood : pformula
adam@316 633 | Conjunction : pformula -> pformula -> pformula.
adam@316 634
adam@448 635 (* begin hide *)
adam@448 636 (* begin thide *)
adam@448 637 Definition prod' := prod.
adam@448 638 (* end thide *)
adam@448 639 (* end hide *)
adam@448 640
adam@475 641 (** A key distinction here is between, for instance, the _syntax_ [Truth] and its _semantics_ [True]. We can make the semantics explicit with a recursive function. This function uses the infix operator %\index{Gallina operators!/\textbackslash}%[/\], which desugars to instances of the type family %\index{Gallina terms!and}%[and] from the standard library. The family [and] implements conjunction, the [Prop] Curry-Howard analogue of the usual pair type from functional programming (which is the type family %\index{Gallina terms!prod}%[prod] in Coq's standard library). *)
adam@316 642
adam@316 643 Fixpoint pformulaDenote (f : pformula) : Prop :=
adam@316 644 match f with
adam@316 645 | Truth => True
adam@316 646 | Falsehood => False
adam@316 647 | Conjunction f1 f2 => pformulaDenote f1 /\ pformulaDenote f2
adam@316 648 end.
adam@316 649
adam@392 650 (** This is just a warm-up that does not use reflexive types, the new feature we mean to introduce. When we set our sights on first-order logic instead, it becomes very handy to give constructors recursive arguments that are functions. *)
adamc@33 651
adamc@33 652 Inductive formula : Set :=
adamc@33 653 | Eq : nat -> nat -> formula
adamc@33 654 | And : formula -> formula -> formula
adamc@33 655 | Forall : (nat -> formula) -> formula.
adamc@33 656
adam@470 657 (** Our kinds of formulas are equalities between naturals, conjunction, and universal quantification over natural numbers. We avoid needing to include a notion of "variables" in our type, by using Coq functions to encode the syntax of quantification. For instance, here is the encoding of [forall x : nat, x = x]:%\index{Vernacular commands!Example}% *)
adamc@33 658
adamc@33 659 Example forall_refl : formula := Forall (fun x => Eq x x).
adamc@33 660
adamc@33 661 (** We can write recursive functions over reflexive types quite naturally. Here is one translating our formulas into native Coq propositions. *)
adamc@33 662
adamc@33 663 Fixpoint formulaDenote (f : formula) : Prop :=
adamc@33 664 match f with
adamc@33 665 | Eq n1 n2 => n1 = n2
adamc@33 666 | And f1 f2 => formulaDenote f1 /\ formulaDenote f2
adamc@33 667 | Forall f' => forall n : nat, formulaDenote (f' n)
adamc@33 668 end.
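
(** To see the denotation function at work on the example formula above (a check added here, not in the original text), we can ask Coq to compute: *)

Eval compute in formulaDenote forall_refl.
(** [= forall n : nat, n = n : Prop] *)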
adamc@33 669
adamc@33 670 (** We can also encode a trivial formula transformation that swaps the order of equality and conjunction operands. *)
adamc@33 671
adamc@33 672 Fixpoint swapper (f : formula) : formula :=
adamc@33 673 match f with
adamc@33 674 | Eq n1 n2 => Eq n2 n1
adamc@33 675 | And f1 f2 => And (swapper f2) (swapper f1)
adamc@33 676 | Forall f' => Forall (fun n => swapper (f' n))
adamc@33 677 end.
adamc@33 678
adamc@33 679 (** It is helpful to prove that this transformation does not make true formulas false. *)
adamc@33 680
adamc@33 681 Theorem swapper_preserves_truth : forall f, formulaDenote f -> formulaDenote (swapper f).
adamc@41 682 (* begin thide *)
adamc@33 683 induction f; crush.
adamc@33 684 Qed.
adamc@41 685 (* end thide *)
adamc@33 686
adamc@33 687 (** We can take a look at the induction principle behind this proof. *)
adamc@33 688
adamc@33 689 Check formula_ind.
adamc@208 690 (** %\vspace{-.15in}% [[
adamc@208 691 formula_ind
adamc@33 692 : forall P : formula -> Prop,
adamc@33 693 (forall n n0 : nat, P (Eq n n0)) ->
adamc@33 694 (forall f0 : formula,
adamc@33 695 P f0 -> forall f1 : formula, P f1 -> P (And f0 f1)) ->
adamc@33 696 (forall f1 : nat -> formula,
adamc@33 697 (forall n : nat, P (f1 n)) -> P (Forall f1)) ->
adamc@33 698 forall f2 : formula, P f2
adamc@208 699 ]]
adamc@33 700
adam@442 701 Focusing on the [Forall] case, which comes third, we see that we are allowed to assume that the theorem holds _for any application of the argument function [f1]_. That is, Coq induction principles do not follow a simple rule that the textual representations of induction variables must get shorter in appeals to induction hypotheses. Luckily for us, the people behind the metatheory of Coq have verified that this flexibility does not introduce unsoundness.
adamc@33 702
adamc@33 703 %\medskip%
adamc@33 704
adam@469 705 Up to this point, we have seen how to encode in Coq more and more of what is possible with algebraic datatypes in %\index{Haskell}%Haskell and %\index{ML}%ML. This may have given the inaccurate impression that inductive types are a strict extension of algebraic datatypes. In fact, Coq must rule out some types allowed by Haskell and ML, for reasons of soundness. Reflexive types provide our first good example of such a case; only some of them are legal.
adamc@33 706
adam@400 707 Given our last example of an inductive type, many readers are probably eager to try encoding the syntax of %\index{lambda calculus}%lambda calculus. Indeed, the function-based representation technique that we just used, called%\index{higher-order abstract syntax}\index{HOAS|see{higher-order abstract syntax}}% _higher-order abstract syntax_ (HOAS)%~\cite{HOAS}%, is the representation of choice for lambda calculi in %\index{Twelf}%Twelf and in many applications implemented in Haskell and ML. Let us try to import that choice to Coq: *)
adam@400 708 (* begin hide *)
adam@437 709 (* begin thide *)
adam@400 710 Inductive term : Set := App | Abs.
adam@400 711 Reset term.
adam@420 712 Definition uhoh := O.
adam@437 713 (* end thide *)
adam@400 714 (* end hide *)
adamc@33 715 (** [[
adamc@33 716 Inductive term : Set :=
adamc@33 717 | App : term -> term -> term
adamc@33 718 | Abs : (term -> term) -> term.
adamc@33 719 ]]
adamc@33 720
adam@316 721 <<
adam@316 722 Error: Non strictly positive occurrence of "term" in "(term -> term) -> term"
adam@316 723 >>
adam@316 724
adam@400 725 We have run afoul of the%\index{strict positivity requirement}\index{positivity requirement}% _strict positivity requirement_ for inductive definitions, which says that the type being defined may not occur to the left of an arrow in the type of a constructor argument. It is important that the type of a constructor is viewed in terms of a series of arguments and a result, since obviously we need recursive occurrences to the lefts of the outermost arrows if we are to have recursive occurrences at all. Our candidate definition above violates the positivity requirement because it involves an argument of type [term -> term], where the type [term] that we are defining appears to the left of an arrow. The candidate type of [App] is fine, however, since every occurrence of [term] is either a constructor argument or the final result type.
adamc@33 726
adamc@33 727 Why must Coq enforce this restriction? Imagine that our last definition had been accepted, allowing us to write this function:
adamc@33 728
adam@439 729 %\vspace{-.15in}%[[
adamc@33 730 Definition uhoh (t : term) : term :=
adamc@33 731 match t with
adamc@33 732 | Abs f => f t
adamc@33 733 | _ => t
adamc@33 734 end.
adamc@205 735 ]]
adamc@205 736
adamc@33 737 Using an informal idea of Coq's semantics, it is easy to verify that the application [uhoh (Abs uhoh)] will run forever. This would be a mere curiosity in OCaml and Haskell, where non-termination is commonplace, though the fact that we have a non-terminating program without explicit recursive function definitions is unusual.
adamc@33 738
adam@316 739 %\index{termination checking}%For Coq, however, this would be a disaster. The possibility of writing such a function would destroy all our confidence that proving a theorem means anything. Since Coq combines programs and proofs in one language, we would be able to prove every theorem with an infinite loop.
adamc@33 740
adam@439 741 Nonetheless, the basic insight of HOAS is a very useful one, and there are ways to realize most benefits of HOAS in Coq. We will study a particular technique of this kind in the final chapter, on programming language syntax and semantics. *)
adamc@34 742
adamc@34 743
adam@317 744 (** * An Interlude on Induction Principles *)
adamc@34 745
adam@317 746 (** As we have emphasized a few times already, Coq proofs are actually programs, written in the same language we have been using in our examples all along. We can get a first sense of what this means by taking a look at the definitions of some of the %\index{induction principles}%induction principles we have used. A close look at the details here will help us construct induction principles manually, which we will see is necessary for some more advanced inductive definitions. *)
adamc@34 747
adam@470 748 Print nat_ind.
adam@437 749 (** %\vspace{-.15in}%[[
adam@470 750 nat_ind =
adam@470 751 fun P : nat -> Prop => nat_rect P
adam@470 752 : forall P : nat -> Prop,
adam@470 753 P O -> (forall n : nat, P n -> P (S n)) -> forall n : nat, P n
adamc@34 754 ]]
adamc@34 755
adam@470 756 We see that this induction principle is defined in terms of a more general principle, [nat_rect]. The <<rec>> stands for "recursion principle," and the <<t>> at the end stands for [Type]. *)
adamc@34 757
adam@470 758 Check nat_rect.
adamc@208 759 (** %\vspace{-.15in}% [[
adam@470 760 nat_rect
adam@470 761 : forall P : nat -> Type,
adam@470 762 P O -> (forall n : nat, P n -> P (S n)) -> forall n : nat, P n
adamc@34 763 ]]
adamc@34 764
adam@470 765 The principle [nat_rect] gives [P] type [nat -> Type] instead of [nat -> Prop]. This [Type] is another universe, like [Set] and [Prop]. In fact, it is a common supertype of both. Later on, we will discuss exactly what the significances of the different universes are. For now, it is just important that we can use [Type] as a sort of meta-universe that may turn out to be either [Set] or [Prop]. We can see the symmetry inherent in the subtyping relationship by printing the definition of another principle that was generated for [nat] automatically: *)
adamc@34 766
adam@470 767 Print nat_rec.
adam@437 768 (** %\vspace{-.15in}%[[
adam@470 769 nat_rec =
adam@470 770 fun P : nat -> Set => nat_rect P
adam@470 771 : forall P : nat -> Set,
adam@470 772 P O -> (forall n : nat, P n -> P (S n)) -> forall n : nat, P n
adamc@34 773 ]]
adamc@34 774
adam@470 775 This is identical to the definition for [nat_ind], except that we have substituted [Set] for [Prop]. For most inductive types [T], then, we get not just induction principles [T_ind], but also %\index{recursion principles}%recursion principles [T_rec]. We can use [T_rec] to write recursive definitions without explicit [Fixpoint] recursion. For instance, the following two definitions are equivalent: *)
adamc@34 776
adam@470 777 Fixpoint plus_recursive (n : nat) : nat -> nat :=
adam@470 778 match n with
adam@470 779 | O => fun m => m
adam@470 780 | S n' => fun m => S (plus_recursive n' m)
adamc@34 781 end.
adamc@34 782
adam@470 783 Definition plus_rec : nat -> nat -> nat :=
adam@472 784 nat_rec (fun _ : nat => nat -> nat) (fun m => m) (fun _ r m => S (r m)).
adam@472 785
adam@472 786 Theorem plus_equivalent : plus_recursive = plus_rec.
adam@472 787 reflexivity.
adam@472 788 Qed.
adamc@34 789
adam@470 790 (** Going even further down the rabbit hole, [nat_rect] itself is not even a primitive. It is a functional program that we can write manually. *)
adamc@34 791
adam@470 792 Print nat_rect.
adam@437 793 (** %\vspace{-.15in}%[[
adam@470 794 nat_rect =
adam@470 795 fun (P : nat -> Type) (f : P O) (f0 : forall n : nat, P n -> P (S n)) =>
adam@470 796 fix F (n : nat) : P n :=
adam@470 797 match n as n0 return (P n0) with
adam@470 798 | O => f
adam@470 799 | S n0 => f0 n0 (F n0)
adamc@208 800 end
adam@470 801 : forall P : nat -> Type,
adam@470 802 P O -> (forall n : nat, P n -> P (S n)) -> forall n : nat, P n
adamc@34 803 ]]
adamc@34 804
adam@493 805 The only new wrinkles here are, first, an anonymous recursive function definition, using the %\index{Gallina terms!fix}%[fix] keyword of Gallina (which is like [fun] with recursion supported); and, second, the annotations on the [match] expression. This is a%\index{dependent pattern matching}% _dependently typed_ pattern match, because the _type_ of the expression depends on the _value_ being matched on. We will meet more involved examples later, especially in Part II of the book.
adam@317 806
adam@470 807 %\index{type inference}%Type inference for dependent pattern matching is undecidable, which can be proved by reduction from %\index{higher-order unification}%higher-order unification%~\cite{HOU}%. Thus, we often find ourselves needing to annotate our programs in a way that explains dependencies to the type checker. In the example of [nat_rect], we have an %\index{Gallina terms!as}%[as] clause, which binds a name for the discriminee; and a %\index{Gallina terms!return}%[return] clause, which gives a way to compute the [match] result type as a function of the discriminee.
adamc@34 808
adam@470 809 To prove that [nat_rect] is nothing special, we can reimplement it manually. *)
adamc@34 810
adam@470 811 Fixpoint nat_rect' (P : nat -> Type)
adam@470 812 (HO : P O)
adam@470 813 (HS : forall n, P n -> P (S n)) (n : nat) :=
adam@470 814 match n return P n with
adam@470 815 | O => HO
adam@470 816 | S n' => HS n' (nat_rect' P HO HS n')
adamc@34 817 end.
adamc@34 818
adam@470 819 (** We can understand the definition of [nat_rect] better by reimplementing [nat_ind] using sections. *)
adamc@34 820
adam@317 821 Section nat_ind'.
adamc@208 822 (** First, we have the property of natural numbers that we aim to prove. *)
adamc@34 823
adam@317 824 Variable P : nat -> Prop.
adamc@34 825
adam@317 826 (** Then we require a proof of the [O] case, which we declare with the command %\index{Vernacular commands!Hypothesis}%[Hypothesis], which is a synonym for [Variable] that, by convention, is used for variables whose types are propositions. *)
adamc@34 827
adam@317 828 Hypothesis O_case : P O.
adamc@34 829
adamc@208 830 (** Next is a proof of the [S] case, which may assume an inductive hypothesis. *)
adamc@34 831
adam@317 832 Hypothesis S_case : forall n : nat, P n -> P (S n).
adamc@34 833
adamc@208 834 (** Finally, we define a recursive function to tie the pieces together. *)
adamc@34 835
adam@317 836 Fixpoint nat_ind' (n : nat) : P n :=
adam@317 837 match n with
adam@317 838 | O => O_case
adam@317 839 | S n' => S_case (nat_ind' n')
adam@317 840 end.
adam@317 841 End nat_ind'.
adamc@34 842
adam@400 843 (** Closing the section adds the [Variable]s and [Hypothesis]es as new [fun]-bound arguments to [nat_ind'], and, modulo the use of [Prop] instead of [Type], we end up with the exact same definition that was generated automatically for [nat_rect].
adamc@34 844
adam@317 845 %\medskip%
adamc@34 846
adam@317 847 We can also examine the definition of [even_list_mut], which we generated with [Scheme] for a mutually recursive type. *)
adamc@34 848
adam@317 849 Print even_list_mut.
adam@439 850 (** %\vspace{-.15in}%[[
adam@317 851 even_list_mut =
adam@317 852 fun (P : even_list -> Prop) (P0 : odd_list -> Prop)
adam@317 853 (f : P ENil) (f0 : forall (n : nat) (o : odd_list), P0 o -> P (ECons n o))
adam@317 854 (f1 : forall (n : nat) (e : even_list), P e -> P0 (OCons n e)) =>
adam@317 855 fix F (e : even_list) : P e :=
adam@317 856 match e as e0 return (P e0) with
adam@317 857 | ENil => f
adam@317 858 | ECons n o => f0 n o (F0 o)
adam@317 859 end
adam@317 860 with F0 (o : odd_list) : P0 o :=
adam@317 861 match o as o0 return (P0 o0) with
adam@317 862 | OCons n e => f1 n e (F e)
adam@317 863 end
adam@317 864 for F
adam@317 865 : forall (P : even_list -> Prop) (P0 : odd_list -> Prop),
adam@317 866 P ENil ->
adam@317 867 (forall (n : nat) (o : odd_list), P0 o -> P (ECons n o)) ->
adam@317 868 (forall (n : nat) (e : even_list), P e -> P0 (OCons n e)) ->
adam@317 869 forall e : even_list, P e
adam@317 870 ]]
adamc@34 871
adam@442 872 We see a mutually recursive [fix], with the different functions separated by %\index{Gallina terms!with}%[with] in the same way that they would be separated by <<and>> in ML. A final %\index{Gallina terms!for}%[for] clause identifies which of the mutually recursive functions should be the final value of the [fix] expression. Using this definition as a template, we can reimplement [even_list_mut] directly. *)
adamc@208 873
adam@317 874 Section even_list_mut'.
adam@317 875 (** First, we need the properties that we are proving. *)
adamc@208 876
adam@317 877 Variable Peven : even_list -> Prop.
adam@317 878 Variable Podd : odd_list -> Prop.
adamc@208 879
adam@317 880 (** Next, we need proofs of the three cases. *)
adamc@208 881
adam@317 882 Hypothesis ENil_case : Peven ENil.
adam@317 883 Hypothesis ECons_case : forall (n : nat) (o : odd_list), Podd o -> Peven (ECons n o).
adam@317 884 Hypothesis OCons_case : forall (n : nat) (e : even_list), Peven e -> Podd (OCons n e).
adamc@208 885
adam@317 886 (** Finally, we define the recursive functions. *)
adamc@208 887
adam@317 888 Fixpoint even_list_mut' (e : even_list) : Peven e :=
adam@317 889 match e with
adam@317 890 | ENil => ENil_case
adam@317 891 | ECons n o => ECons_case n (odd_list_mut' o)
adam@317 892 end
adam@317 893 with odd_list_mut' (o : odd_list) : Podd o :=
adam@317 894 match o with
adam@317 895 | OCons n e => OCons_case n (even_list_mut' e)
adam@317 896 end.
adamc@34 897 End even_list_mut'.
adamc@34 898
adamc@34 899 (** Even induction principles for reflexive types are easy to implement directly. For our [formula] type, we can use a recursive definition much like those we wrote above. *)
adamc@34 900
adamc@34 901 Section formula_ind'.
adamc@34 902 Variable P : formula -> Prop.
adamc@38 903 Hypothesis Eq_case : forall n1 n2 : nat, P (Eq n1 n2).
adamc@38 904 Hypothesis And_case : forall f1 f2 : formula,
adamc@34 905 P f1 -> P f2 -> P (And f1 f2).
adamc@38 906 Hypothesis Forall_case : forall f : nat -> formula,
adamc@34 907 (forall n : nat, P (f n)) -> P (Forall f).
adamc@34 908
adamc@34 909 Fixpoint formula_ind' (f : formula) : P f :=
adamc@208 910 match f with
adamc@34 911 | Eq n1 n2 => Eq_case n1 n2
adamc@34 912 | And f1 f2 => And_case (formula_ind' f1) (formula_ind' f2)
adamc@34 913 | Forall f' => Forall_case f' (fun n => formula_ind' (f' n))
adamc@34 914 end.
adamc@34 915 End formula_ind'.
adamc@34 916
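(** To see the new principle in action, here is a sketch of how it might be used; the function [fsize] and the theorem are hypothetical examples introduced only for illustration, not definitions from the standard library or from earlier in this chapter. We give [formula]s a size, sampling the body of a [Forall] at [O], and argue that every formula has positive size.
[[
Fixpoint fsize (f : formula) : nat :=
  match f with
    | Eq _ _ => 1
    | And f1 f2 => plus (fsize f1) (fsize f2)
    | Forall f' => S (fsize (f' O))
  end.

Theorem fsize_positive : forall f : formula, fsize f > 0.
  induction f using formula_ind'; crush.
Qed.
]]
The [using] clause asks [induction] to apply our hand-written principle in place of the automatically generated one; we will rely on the same mechanism for nested inductive types below. *)
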
adam@317 917 (** It is apparent that implementing induction principles by hand involves some tedium but not terribly much creativity. *)
adam@317 918
adamc@35 919
adamc@35 920 (** * Nested Inductive Types *)
adamc@35 921
adamc@35 922 (** Suppose we want to extend our earlier type of binary trees to trees with arbitrary finite branching. We can use lists to give a simple definition. *)
adamc@35 923
adamc@35 924 Inductive nat_tree : Set :=
adamc@35 925 | NNode' : nat -> list nat_tree -> nat_tree.
adamc@35 926
adam@465 927 (** This is an example of a%\index{nested inductive type}% _nested_ inductive type definition, because we use the type we are defining as an argument to a parameterized type family. Coq will not allow all such definitions; it effectively pretends that we are defining [nat_tree] mutually with a version of [list] specialized to [nat_tree], checking that the resulting expanded definition satisfies the usual rules. For instance, if we replaced [list] with a type family that used its parameter as a function argument, then the definition would be rejected as violating the positivity restriction.
adamc@35 928
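For example (a hypothetical definition, introduced only to illustrate the check), routing the recursive occurrence through a type family that places its parameter to the left of an arrow should be rejected with a complaint about a non-strictly-positive occurrence:

[[
Definition toNat (T : Set) : Set := T -> nat.

Inductive bad : Set :=
| Bad : toNat bad -> bad.
]]
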
adam@475 929 As we encountered with mutual inductive types, we find that the automatically generated induction principle for [nat_tree] is too weak. *)
adamc@35 930
adam@471 931 (* begin hide *)
adam@471 932 (* begin thide *)
adam@471 933 Check Forall.
adam@471 934 (* end thide *)
adam@471 935 (* end hide *)
adam@471 936
adamc@35 937 Check nat_tree_ind.
adamc@208 938 (** %\vspace{-.15in}%[[
adamc@208 939 nat_tree_ind
adamc@35 940 : forall P : nat_tree -> Prop,
adamc@35 941 (forall (n : nat) (l : list nat_tree), P (NNode' n l)) ->
adamc@35 942 forall n : nat_tree, P n
adamc@35 943 ]]
adamc@35 944
adam@471 945 There is no command like [Scheme] that will implement an improved principle for us. In general, it takes creativity to figure out _good_ ways to incorporate nested uses of different type families. Now that we know how to implement induction principles manually, we are in a position to apply just such creativity to this problem.
adamc@35 946
adam@541 947 Many induction principles for types with nested uses of [list] could benefit from a unified predicate capturing the idea that some property holds of every element in a list. By defining this generic predicate once, we facilitate reuse of library theorems about it. (Here, we are actually duplicating the standard library's [Forall] predicate, with a different implementation, for didactic purposes.) *)
adamc@35 948
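(** For reference, the standard library's [Forall] is an inductive predicate rather than a recursive definition; [Print Forall] should show roughly the following, modulo the name clashes with this chapter's own [list] type and differences between Coq versions:
[[
Inductive Forall (A : Type) (P : A -> Prop) : list A -> Prop :=
| Forall_nil : Forall P nil
| Forall_cons : forall (x : A) (l : list A),
  P x -> Forall P l -> Forall P (x :: l)
]]
*)
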
adamc@35 949 Section All.
adamc@35 950 Variable T : Set.
adamc@35 951 Variable P : T -> Prop.
adamc@35 952
adamc@35 953 Fixpoint All (ls : list T) : Prop :=
adamc@35 954 match ls with
adamc@35 955 | Nil => True
adamc@35 956 | Cons h t => P h /\ All t
adamc@35 957 end.
adamc@35 958 End All.
adamc@35 959
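(** For arbitrary elements [x1] and [x2], unfolding the [Fixpoint] shows how [All] turns a concrete list into a nested conjunction:
[[
All P (Cons x1 (Cons x2 Nil)) = P x1 /\ (P x2 /\ True)
]]
*)
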
adam@317 960 (** It will be useful to review the definitions of [True] and [/\], since we will want to write manual proofs of them below. *)
adamc@35 961
adamc@35 962 Print True.
adam@439 963 (** %\vspace{-.15in}%[[
adamc@208 964 Inductive True : Prop := I : True
adamc@208 965 ]]
adamc@35 966
adam@442 967 That is, [True] is a proposition with exactly one proof, [I], which we may always supply trivially.
adamc@35 968
adam@400 969 Finding the definition of [/\] takes a little more work. Coq supports user registration of arbitrary parsing rules, and it is such a rule that is letting us write [/\] instead of an application of some inductive type family. We can find the underlying inductive type with the %\index{Vernacular commands!Locate}%[Locate] command, whose argument may be a parsing token.%\index{Gallina terms!and}% *)
adamc@35 970
adamc@35 971 Locate "/\".
adam@439 972 (** %\vspace{-.15in}%[[
adam@317 973 "A /\ B" := and A B : type_scope (default interpretation)
adam@302 974 ]]
adam@302 975 *)
adamc@35 976
adamc@35 977 Print and.
adam@439 978 (** %\vspace{-.15in}%[[
adamc@208 979 Inductive and (A : Prop) (B : Prop) : Prop := conj : A -> B -> A /\ B
adam@317 980 ]]
adam@317 981 %\vspace{-.1in}%
adam@317 982 <<
adamc@208 983 For conj: Arguments A, B are implicit
adam@317 984 >>
adamc@35 985
adam@400 986 In addition to the definition of [and] itself, we get information on %\index{implicit arguments}%implicit arguments (and some other information that we omit here). The implicit argument information tells us that we build a proof of a conjunction by calling the constructor [conj] on proofs of the conjuncts, with no need to include the types of those proofs as explicit arguments.
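
For instance, [conj I I] should be accepted as a proof of [True /\ True], with both [Prop] arguments inferred:

[[
Check (conj I I).
]]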
adamc@35 987
adamc@35 988 %\medskip%
adamc@35 989
adam@448 990 Now we create a section for our induction principle, following the same basic plan as in the previous section of this chapter. *)
adamc@35 991
adamc@35 992 Section nat_tree_ind'.
adamc@35 993 Variable P : nat_tree -> Prop.
adamc@35 994
adamc@38 995 Hypothesis NNode'_case : forall (n : nat) (ls : list nat_tree),
adamc@35 996 All P ls -> P (NNode' n ls).
adamc@35 997
adam@420 998 (* begin hide *)
adam@437 999 (* begin thide *)
adam@420 1000 Definition list_nat_tree_ind := O.
adam@437 1001 (* end thide *)
adam@420 1002 (* end hide *)
adam@420 1003
adamc@35 1004 (** A first attempt at writing the induction principle itself follows the intuition that nested inductive type definitions are expanded into mutual inductive definitions.
adamc@35 1005
adam@439 1006 %\vspace{-.15in}%[[
adamc@35 1007 Fixpoint nat_tree_ind' (tr : nat_tree) : P tr :=
adamc@208 1008 match tr with
adamc@35 1009 | NNode' n ls => NNode'_case n ls (list_nat_tree_ind ls)
adamc@35 1010 end
adamc@35 1011
adamc@35 1012 with list_nat_tree_ind (ls : list nat_tree) : All P ls :=
adamc@208 1013 match ls with
adamc@35 1014 | Nil => I
adamc@35 1015 | Cons tr rest => conj (nat_tree_ind' tr) (list_nat_tree_ind rest)
adamc@35 1016 end.
adamc@205 1017 ]]
adamc@205 1018
adam@442 1019 Coq rejects this definition, saying
adam@317 1020 <<
adam@317 1021 Recursive call to nat_tree_ind' has principal argument equal to "tr"
adam@317 1022 instead of rest.
adam@317 1023 >>
adam@317 1024
adam@475 1025 There is no deep theoretical reason why this program should be rejected; Coq applies incomplete termination-checking heuristics, and it is necessary to learn a few of the most important rules. The term "nested inductive type" hints at the solution to this particular problem. Just as mutually inductive types require mutually recursive induction principles, nested types require nested recursion. *)
adamc@35 1026
adamc@35 1027 Fixpoint nat_tree_ind' (tr : nat_tree) : P tr :=
adamc@208 1028 match tr with
adamc@35 1029 | NNode' n ls => NNode'_case n ls
adamc@35 1030 ((fix list_nat_tree_ind (ls : list nat_tree) : All P ls :=
adamc@208 1031 match ls with
adamc@35 1032 | Nil => I
adam@471 1033 | Cons tr' rest => conj (nat_tree_ind' tr') (list_nat_tree_ind rest)
adamc@35 1034 end) ls)
adamc@35 1035 end.
adamc@35 1036
adam@398 1037 (** We include an anonymous [fix] version of [list_nat_tree_ind], literally _nested_ inside the recursive function for the inductive type that contains the nested use of [list]. *)
adamc@35 1038
adamc@35 1039 End nat_tree_ind'.
adamc@35 1040
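(** After the section closes, checking [nat_tree_ind'] should report a statement where the single [All] hypothesis stands in for the usual per-constructor inductive hypotheses:
[[
nat_tree_ind'
     : forall P : nat_tree -> Prop,
       (forall (n : nat) (ls : list nat_tree), All P ls -> P (NNode' n ls)) ->
       forall tr : nat_tree, P tr
]]
*)
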
adam@475 1041 (** We can try our induction principle out by defining some recursive functions on [nat_tree] and proving a theorem about them. First, we define some helper functions that operate on lists. *)
adamc@35 1042
adamc@35 1043 Section map.
adamc@35 1044 Variables T T' : Set.
adam@317 1045 Variable F : T -> T'.
adamc@35 1046
adamc@35 1047 Fixpoint map (ls : list T) : list T' :=
adamc@35 1048 match ls with
adamc@35 1049 | Nil => Nil
adam@317 1050 | Cons h t => Cons (F h) (map t)
adamc@35 1051 end.
adamc@35 1052 End map.
adamc@35 1053
adamc@35 1054 Fixpoint sum (ls : list nat) : nat :=
adamc@35 1055 match ls with
adamc@35 1056 | Nil => O
adamc@35 1057 | Cons h t => plus h (sum t)
adamc@35 1058 end.
adamc@35 1059
adamc@35 1060 (** Now we can define a size function over our trees. *)
adamc@35 1061
adamc@35 1062 Fixpoint ntsize (tr : nat_tree) : nat :=
adamc@35 1063 match tr with
adamc@35 1064 | NNode' _ trs => S (sum (map ntsize trs))
adamc@35 1065 end.
adamc@35 1066
adamc@35 1067 (** Notice that Coq was smart enough to expand the definition of [map] to verify that we are using proper nested recursion, even through a use of a higher-order function. *)
adamc@35 1068
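(** For instance, evaluating [ntsize] on a small concrete tree exercises both helper functions:
[[
Eval simpl in ntsize (NNode' 0 (Cons (NNode' 1 Nil) (Cons (NNode' 2 Nil) Nil))).
]]
The result should be [3]: one for the root plus one for each child. *)
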
adamc@208 1069 Fixpoint ntsplice (tr1 tr2 : nat_tree) : nat_tree :=
adamc@35 1070 match tr1 with
adamc@35 1071 | NNode' n Nil => NNode' n (Cons tr2 Nil)
adamc@35 1072 | NNode' n (Cons tr trs) => NNode' n (Cons (ntsplice tr tr2) trs)
adamc@35 1073 end.
adamc@35 1074
adamc@35 1075 (** We have defined another arbitrary notion of tree splicing, similar to before, and we can prove an analogous theorem about its relationship with tree size. We start with a useful lemma about addition. *)
adamc@35 1076
adamc@41 1077 (* begin thide *)
adamc@35 1078 Lemma plus_S : forall n1 n2 : nat,
adamc@35 1079 plus n1 (S n2) = S (plus n1 n2).
adamc@35 1080 induction n1; crush.
adamc@35 1081 Qed.
adamc@41 1082 (* end thide *)
adamc@35 1083
adamc@35 1084 (** Now we begin the proof of the theorem, adding the lemma [plus_S] as a hint. *)
adamc@35 1085
adam@534 1086 Hint Rewrite plus_S.
adam@534 1087
adamc@35 1088 Theorem ntsize_ntsplice : forall tr1 tr2 : nat_tree, ntsize (ntsplice tr1 tr2)
adamc@35 1089 = plus (ntsize tr2) (ntsize tr1).
adamc@41 1090 (* begin thide *)
adam@317 1091 (** We know that the standard induction principle is insufficient for the task, so we need to provide a %\index{tactics!using}%[using] clause for the [induction] tactic to specify our alternate principle. *)
adamc@208 1092
adamc@35 1093 induction tr1 using nat_tree_ind'; crush.
adamc@35 1094
adamc@35 1095 (** One subgoal remains: [[
adamc@35 1096 n : nat
adamc@35 1097 ls : list nat_tree
adamc@35 1098 H : All
adamc@35 1099 (fun tr1 : nat_tree =>
adamc@35 1100 forall tr2 : nat_tree,
adamc@35 1101 ntsize (ntsplice tr1 tr2) = plus (ntsize tr2) (ntsize tr1)) ls
adamc@35 1102 tr2 : nat_tree
adamc@35 1103 ============================
adamc@35 1104 ntsize
adamc@35 1105 match ls with
adamc@35 1106 | Nil => NNode' n (Cons tr2 Nil)
adamc@35 1107 | Cons tr trs => NNode' n (Cons (ntsplice tr tr2) trs)
adamc@35 1108 end = S (plus (ntsize tr2) (sum (map ntsize ls)))
adamc@208 1109
adamc@35 1110 ]]
adamc@35 1111
adamc@35 1112 After a few moments of squinting at this goal, it becomes apparent that we need to do a case analysis on the structure of [ls]. The rest is routine. *)
adamc@35 1113
adamc@35 1114 destruct ls; crush.
adamc@35 1115
adam@317 1116 (** We can go further in automating the proof by exploiting the hint mechanism.%\index{Vernacular commands!Hint Extern}% *)
adamc@35 1117
adamc@35 1118 Restart.
adam@536 1119
adamc@35 1120 Hint Extern 1 (ntsize (match ?LS with Nil => _ | Cons _ _ => _ end) = _) =>
adamc@35 1121 destruct LS; crush.
adamc@35 1122 induction tr1 using nat_tree_ind'; crush.
adamc@35 1123 Qed.
adamc@41 1124 (* end thide *)
adamc@35 1125
adamc@35 1126 (** We will go into great detail on hints in a later chapter, but the only important thing to note here is that we register a pattern that describes a conclusion we expect to encounter during the proof. The pattern may contain unification variables, whose names are prefixed with question marks, and we may refer to those bound variables in a tactic that we ask to have run whenever the pattern matches.
adamc@35 1127
adam@317 1128 The advantage of using the hint is not very clear here, because the original proof was so short. However, the hint has fundamentally improved the readability of our proof. Before, the proof referred to the local variable [ls], which has an automatically generated name. To a human reading the proof script without stepping through it interactively, it was not clear where [ls] came from. The hint explains to the reader the process for choosing which variables to case analyze, and the hint can continue working even if the rest of the proof structure changes significantly. *)
adamc@36 1129
adamc@36 1130
adamc@36 1131 (** * Manual Proofs About Constructors *)
adamc@36 1132
adam@317 1133 (** It can be useful to understand how tactics like %\index{tactics!discriminate}%[discriminate] and %\index{tactics!injection}%[injection] work, so it is worth stepping through a manual proof of each kind. We will start with a proof fit for [discriminate]. *)
adamc@36 1134
adamc@36 1135 Theorem true_neq_false : true <> false.
adamc@208 1136
adamc@41 1137 (* begin thide *)
adam@420 1138 (** We begin with the tactic %\index{tactics!red}%[red], which is short for "one step of reduction," to unfold the definition of logical negation. *)
adamc@36 1139
adamc@36 1140 red.
adam@439 1141 (** %\vspace{-.15in}%[[
adamc@36 1142 ============================
adamc@36 1143 true = false -> False
adamc@36 1144 ]]
adamc@36 1145
adam@442 1146 The negation is replaced with an implication of falsehood. We use the tactic %\index{tactics!intro}%[intro H] to change the assumption of the implication into a hypothesis named [H]. *)
adamc@36 1147
adamc@36 1148 intro H.
adam@439 1149 (** %\vspace{-.15in}%[[
adamc@36 1150 H : true = false
adamc@36 1151 ============================
adamc@36 1152 False
adamc@36 1153 ]]
adamc@36 1154
adam@442 1155 This is the point in the proof where we apply some creativity. We define a function whose utility will become clear soon. *)
adamc@36 1156
adam@317 1157 Definition toProp (b : bool) := if b then True else False.
adamc@36 1158
adam@448 1159 (** It is worth recalling the difference between the lowercase and uppercase versions of truth and falsehood: [True] and [False] are logical propositions, while [true] and [false] are Boolean values that we can case-analyze. We have defined [toProp] such that our conclusion of [False] is computationally equivalent to [toProp false]. Thus, the %\index{tactics!change}%[change] tactic will let us change the conclusion to [toProp false]. The general form [change e] replaces the conclusion with [e], whenever Coq's built-in computation rules suffice to establish the equivalence of [e] with the original conclusion. *)
adamc@36 1160
adam@317 1161 change (toProp false).
adam@439 1162 (** %\vspace{-.15in}%[[
adamc@36 1163 H : true = false
adamc@36 1164 ============================
adam@317 1165 toProp false
adamc@36 1166 ]]
adamc@36 1167
adam@448 1168 Now the righthand side of [H]'s equality appears in the conclusion, so we can rewrite, using the notation [<-] to request to replace the righthand side of the equality with the lefthand side.%\index{tactics!rewrite}% *)
adamc@36 1169
adamc@36 1170 rewrite <- H.
adam@439 1171 (** %\vspace{-.15in}%[[
adamc@36 1172 H : true = false
adamc@36 1173 ============================
adam@317 1174 toProp true
adamc@36 1175 ]]
adamc@36 1176
adam@442 1177 We are almost done. Just how close we are to done is revealed by computational simplification. *)
adamc@36 1178
adamc@36 1179 simpl.
adam@439 1180 (** %\vspace{-.15in}%[[
adamc@36 1181 H : true = false
adamc@36 1182 ============================
adamc@36 1183 True
adam@302 1184 ]]
adam@302 1185 *)
adamc@36 1186
adamc@36 1187 trivial.
adamc@36 1188 Qed.
adamc@41 1189 (* end thide *)
adamc@36 1190
adamc@36 1191 (** I have no trivial automated version of this proof to suggest, beyond using [discriminate] or [congruence] in the first place.
adamc@36 1192
adamc@36 1193 %\medskip%
adamc@36 1194
adamc@36 1195 We can perform a similar manual proof of injectivity of the constructor [S]. I leave a walk-through of the details to curious readers who want to run the proof script interactively. *)
adamc@36 1196
adamc@36 1197 Theorem S_inj' : forall n m : nat, S n = S m -> n = m.
adamc@41 1198 (* begin thide *)
adamc@36 1199 intros n m H.
adamc@36 1200 change (pred (S n) = pred (S m)).
adamc@36 1201 rewrite H.
adamc@36 1202 reflexivity.
adamc@36 1203 Qed.
adamc@41 1204 (* end thide *)
adamc@36 1205
adam@400 1206 (** The key piece of creativity in this theorem comes in the use of the natural number predecessor function [pred]. Embodied in the implementation of [injection] is a generic recipe for writing such type-specific functions.
adam@317 1207
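As an exercise in the same recipe (the names [pred_elist] and [ECons_inj] are hypothetical, introduced only for this sketch), one might prove injectivity of [ECons] in its second argument by writing a projection function and replaying the [change]-then-[rewrite] pattern:

[[
Definition pred_elist (e : even_list) : odd_list :=
  match e with
    | ENil => OCons O ENil
    | ECons _ o => o
  end.

Theorem ECons_inj : forall (n : nat) (o1 o2 : odd_list),
  ECons n o1 = ECons n o2 -> o1 = o2.
  intros n o1 o2 H.
  change (pred_elist (ECons n o1) = pred_elist (ECons n o2)).
  rewrite H.
  reflexivity.
Qed.
]]
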
adam@317 1208 The examples in this section illustrate an important aspect of the design philosophy behind Coq. We could certainly design a Gallina replacement that built in rules for constructor discrimination and injectivity, but a simpler alternative is to include a few carefully chosen rules that enable the desired reasoning patterns and many others. A key benefit of this philosophy is that the complexity of proof checking is minimized, which bolsters our confidence that proved theorems are really true. *)