comparison src/Predicates.v @ 209:90af611e2993
Port Predicates
author | Adam Chlipala <adamc@hcoop.net> |
---|---|
date | Mon, 09 Nov 2009 11:48:27 -0500 |
parents | f05514cc6c0d |
children | b149a07b9b5b |
208:b9e9ff52913c | 209:90af611e2993 |
---|---|
19 (** %\chapter{Inductive Predicates}% *) | 19 (** %\chapter{Inductive Predicates}% *) |
20 | 20 |
21 (** The so-called "Curry-Howard Correspondence" states a formal connection between functional programs and mathematical proofs. In the last chapter, we snuck in a first introduction to this subject in Coq. Witness the close similarity between the types [unit] and [True] from the standard library: *) | 21 (** The so-called "Curry-Howard Correspondence" states a formal connection between functional programs and mathematical proofs. In the last chapter, we snuck in a first introduction to this subject in Coq. Witness the close similarity between the types [unit] and [True] from the standard library: *) |
22 | 22 |
23 Print unit. | 23 Print unit. |
24 (** [[ | 24 (** %\vspace{-.15in}% [[ |
25 | 25 Inductive unit : Set := tt : unit |
26 Inductive unit : Set := tt : unit | |
27 ]] *) | 26 ]] *) |
28 | 27 |
29 Print True. | 28 Print True. |
30 (** [[ | 29 (** %\vspace{-.15in}% [[ |
31 | 30 Inductive True : Prop := I : True |
32 Inductive True : Prop := I : True | |
33 ]] *) | 31 ]] *) |
34 | 32 |
35 (** Recall that [unit] is the type with only one value, and [True] is the proposition that always holds. Despite this superficial difference between the two concepts, in both cases we can use the same inductive definition mechanism. The connection goes further than this. We see that we arrive at the definition of [True] by replacing [unit] by [True], [tt] by [I], and [Set] by [Prop]. The first two of these differences are superficial changes of names, while the third difference is the crucial one for separating programs from proofs. A term [T] of type [Set] is a type of programs, and a term of type [T] is a program. A term [T] of type [Prop] is a logical proposition, and its proofs are of type [T]. | 33 (** Recall that [unit] is the type with only one value, and [True] is the proposition that always holds. Despite this superficial difference between the two concepts, in both cases we can use the same inductive definition mechanism. The connection goes further than this. We see that we arrive at the definition of [True] by replacing [unit] by [True], [tt] by [I], and [Set] by [Prop]. The first two of these differences are superficial changes of names, while the third difference is the crucial one for separating programs from proofs. A term [T] of type [Set] is a type of programs, and a term of type [T] is a program. A term [T] of type [Prop] is a logical proposition, and its proofs are of type [T]. |
36 | 34 |
37 [unit] has one value, [tt]. [True] has one proof, [I]. Why distinguish between these two types? Many people who have read about Curry-Howard in an abstract context and not put it to use in proof engineering answer that the two types in fact %\textit{%#<i>#should not#</i>#%}% be distinguished. There is a certain aesthetic appeal to this point of view, but I want to argue that it is best to treat Curry-Howard very loosely in practical proving. There are Coq-specific reasons for preferring the distinction, involving efficient compilation and avoidance of paradoxes in the presence of classical math, but I will argue that there is a more general principle that should lead us to avoid conflating programming and proving. | 35 [unit] has one value, [tt]. [True] has one proof, [I]. Why distinguish between these two types? Many people who have read about Curry-Howard in an abstract context and not put it to use in proof engineering answer that the two types in fact %\textit{%#<i>#should not#</i>#%}% be distinguished. There is a certain aesthetic appeal to this point of view, but I want to argue that it is best to treat Curry-Howard very loosely in practical proving. There are Coq-specific reasons for preferring the distinction, involving efficient compilation and avoidance of paradoxes in the presence of classical math, but I will argue that there is a more general principle that should lead us to avoid conflating programming and proving. |
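As a minimal sketch of this parallel, using only the two definitions just printed (the names [a_program] and [a_proof] are illustrative), the same [Definition] mechanism yields a program in one case and a proof in the other; only the sort, [Set] versus [Prop], differs:

    Definition a_program : unit := tt.
    Definition a_proof : True := I.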
68 (* end thide *) | 66 (* end thide *) |
69 | 67 |
70 (** There is also a predicate [False], which is the Curry-Howard mirror image of [Empty_set] from the last chapter. *) | 68 (** There is also a predicate [False], which is the Curry-Howard mirror image of [Empty_set] from the last chapter. *) |
71 | 69 |
72 Print False. | 70 Print False. |
73 (** [[ | 71 (** %\vspace{-.15in}% [[ |
74 | 72 Inductive False : Prop := |
75 Inductive False : Prop := | 73 |
76 | 74 ]] |
77 ]] *) | 75 |
78 | 76 We can conclude anything from [False], doing case analysis on a proof of [False] in the same way we might do case analysis on, say, a natural number. Since there are no cases to consider, any such case analysis succeeds immediately in proving the goal. *) |
79 (** We can conclude anything from [False], doing case analysis on a proof of [False] in the same way we might do case analysis on, say, a natural number. Since there are no cases to consider, any such case analysis succeeds immediately in proving the goal. *) | |
80 | 77 |
81 Theorem False_imp : False -> 2 + 2 = 5. | 78 Theorem False_imp : False -> 2 + 2 = 5. |
82 (* begin thide *) | 79 (* begin thide *) |
83 destruct 1. | 80 destruct 1. |
84 (* end thide *) | 81 (* end thide *) |
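The same principle can be written as an explicit program, a small sketch using an empty [match], the zero-constructor counterpart of the case analysis performed by [destruct 1] (the name [False_imp'] is illustrative):

    Definition False_imp' : False -> 2 + 2 = 5 :=
      fun H => match H with end.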
92 | 89 |
93 (** At this point, we have an inconsistent hypothesis [2 + 2 = 5], so the specific conclusion is not important. We use the [elimtype] tactic to state a proposition, telling Coq that we wish to construct a proof of the new proposition and then prove the original goal by case analysis on the structure of the new auxiliary proof. Since [False] has no constructors, [elimtype False] simply leaves us with the obligation to prove [False]. *) | 90 (** At this point, we have an inconsistent hypothesis [2 + 2 = 5], so the specific conclusion is not important. We use the [elimtype] tactic to state a proposition, telling Coq that we wish to construct a proof of the new proposition and then prove the original goal by case analysis on the structure of the new auxiliary proof. Since [False] has no constructors, [elimtype False] simply leaves us with the obligation to prove [False]. *) |
94 | 91 |
95 elimtype False. | 92 elimtype False. |
96 (** [[ | 93 (** [[ |
97 | |
98 H : 2 + 2 = 5 | 94 H : 2 + 2 = 5 |
99 ============================ | 95 ============================ |
100 False | 96 False |
101 ]] *) | 97 |
102 | 98 ]] |
103 (** For now, we will leave the details of this proof about arithmetic to [crush]. *) | 99 |
100 For now, we will leave the details of this proof about arithmetic to [crush]. *) | |
104 | 101 |
105 crush. | 102 crush. |
106 (* end thide *) | 103 (* end thide *) |
107 Qed. | 104 Qed. |
108 | 105 |
109 (** A related notion to [False] is logical negation. *) | 106 (** A related notion to [False] is logical negation. *) |
110 | 107 |
111 Print not. | 108 Print not. |
112 (** [[ | 109 (** %\vspace{-.15in}% [[ |
113 | 110 not = fun A : Prop => A -> False |
114 not = fun A : Prop => A -> False | 111 : Prop -> Prop |
115 : Prop -> Prop | 112 |
116 ]] *) | 113 ]] |
117 | 114 |
118 (** We see that [not] is just shorthand for implication of [False]. We can use that fact explicitly in proofs. The syntax [~P] expands to [not P]. *) | 115 We see that [not] is just shorthand for implication of [False]. We can use that fact explicitly in proofs. The syntax [~P] expands to [not P]. *) |
119 | 116 |
120 Theorem arith_neq' : ~ (2 + 2 = 5). | 117 Theorem arith_neq' : ~ (2 + 2 = 5). |
121 (* begin thide *) | 118 (* begin thide *) |
122 unfold not. | 119 unfold not. |
123 | |
124 (** [[ | 120 (** [[ |
125 | |
126 ============================ | 121 ============================ |
127 2 + 2 = 5 -> False | 122 2 + 2 = 5 -> False |
128 ]] *) | 123 ]] *) |
129 | 124 |
130 crush. | 125 crush. |
132 Qed. | 127 Qed. |
133 | 128 |
134 (** We also have conjunction, which we introduced in the last chapter. *) | 129 (** We also have conjunction, which we introduced in the last chapter. *) |
135 | 130 |
136 Print and. | 131 Print and. |
137 (** [[ | 132 (** %\vspace{-.15in}% [[ |
138 | 133 Inductive and (A : Prop) (B : Prop) : Prop := conj : A -> B -> A /\ B |
139 Inductive and (A : Prop) (B : Prop) : Prop := conj : A -> B -> A /\ B | 134 |
140 ]] *) | 135 ]] |
141 | 136 |
142 (** The interested reader can check that [and] has a Curry-Howard doppelganger called [prod], the type of pairs. However, it is generally most convenient to reason about conjunction using tactics. An explicit proof of commutativity of [and] illustrates the usual suspects for such tasks. [/\] is an infix shorthand for [and]. *) | 137 The interested reader can check that [and] has a Curry-Howard doppelganger called [prod], the type of pairs. However, it is generally most convenient to reason about conjunction using tactics. An explicit proof of commutativity of [and] illustrates the usual suspects for such tasks. [/\] is an infix shorthand for [and]. *) |
143 | 138 |
144 Theorem and_comm : P /\ Q -> Q /\ P. | 139 Theorem and_comm : P /\ Q -> Q /\ P. |
140 | |
145 (* begin thide *) | 141 (* begin thide *) |
146 (** We start by case analysis on the proof of [P /\ Q]. *) | 142 (** We start by case analysis on the proof of [P /\ Q]. *) |
147 | 143 |
148 destruct 1. | 144 destruct 1. |
149 (** [[ | 145 (** [[ |
150 | |
151 H : P | 146 H : P |
152 H0 : Q | 147 H0 : Q |
153 ============================ | 148 ============================ |
154 Q /\ P | 149 Q /\ P |
155 ]] *) | 150 |
156 | 151 ]] |
157 (** Every proof of a conjunction provides proofs for both conjuncts, so we get a single subgoal reflecting that. We can proceed by splitting this subgoal into a case for each conjunct of [Q /\ P]. *) | 152 |
153 Every proof of a conjunction provides proofs for both conjuncts, so we get a single subgoal reflecting that. We can proceed by splitting this subgoal into a case for each conjunct of [Q /\ P]. *) | |
158 | 154 |
159 split. | 155 split. |
160 (** [[ | 156 (** [[ |
161 2 subgoals | 157 2 subgoals |
162 | 158 |
165 ============================ | 161 ============================ |
166 Q | 162 Q |
167 | 163 |
168 subgoal 2 is: | 164 subgoal 2 is: |
169 P | 165 P |
170 ]] *) | 166 |
171 | 167 ]] |
172 (** In each case, the conclusion is among our hypotheses, so the [assumption] tactic finishes the process. *) | 168 |
169 In each case, the conclusion is among our hypotheses, so the [assumption] tactic finishes the process. *) | |
173 | 170 |
174 assumption. | 171 assumption. |
175 assumption. | 172 assumption. |
176 (* end thide *) | 173 (* end thide *) |
177 Qed. | 174 Qed. |
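A sketch of the [prod] analogy mentioned above, assuming only the standard library's pair type and its [( , )] notation: swapping the components of a pair is the programming counterpart of [and_comm].

    Definition swap (A B : Set) (p : A * B) : B * A :=
      match p with
        | (a, b) => (b, a)
      end.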
178 | 175 |
179 (** Coq disjunction is called [or] and abbreviated with the infix operator [\/]. *) | 176 (** Coq disjunction is called [or] and abbreviated with the infix operator [\/]. *) |
180 | 177 |
181 Print or. | 178 Print or. |
182 (** [[ | 179 (** %\vspace{-.15in}% [[ |
183 | 180 Inductive or (A : Prop) (B : Prop) : Prop := |
184 Inductive or (A : Prop) (B : Prop) : Prop := | |
185 or_introl : A -> A \/ B | or_intror : B -> A \/ B | 181 or_introl : A -> A \/ B | or_intror : B -> A \/ B |
186 ]] *) | 182 |
187 | 183 ]] |
188 (** We see that there are two ways to prove a disjunction: prove the first disjunct or prove the second. The Curry-Howard analogue of this is the Coq [sum] type. We can demonstrate the main tactics here with another proof of commutativity. *) | 184 |
185 We see that there are two ways to prove a disjunction: prove the first disjunct or prove the second. The Curry-Howard analogue of this is the Coq [sum] type. We can demonstrate the main tactics here with another proof of commutativity. *) | |
189 | 186 |
190 Theorem or_comm : P \/ Q -> Q \/ P. | 187 Theorem or_comm : P \/ Q -> Q \/ P. |
191 | 188 |
192 (* begin thide *) | 189 (* begin thide *) |
193 (** As in the proof for [and], we begin with case analysis, though this time we are met by two cases instead of one. *) | 190 (** As in the proof for [and], we begin with case analysis, though this time we are met by two cases instead of one. *) |
191 | |
194 destruct 1. | 192 destruct 1. |
195 (** [[ | 193 (** [[ |
196 | |
197 2 subgoals | 194 2 subgoals |
198 | 195 |
199 H : P | 196 H : P |
200 ============================ | 197 ============================ |
201 Q \/ P | 198 Q \/ P |
202 | 199 |
203 subgoal 2 is: | 200 subgoal 2 is: |
204 Q \/ P | 201 Q \/ P |
205 ]] *) | 202 |
206 | 203 ]] |
207 (** We can see that, in the first subgoal, we want to prove the disjunction by proving its second disjunct. The [right] tactic telegraphs this intent. *) | 204 |
208 | 205 We can see that, in the first subgoal, we want to prove the disjunction by proving its second disjunct. The [right] tactic telegraphs this intent. *) |
206 | |
209 right; assumption. | 207 right; assumption. |
210 | 208 |
211 (** The second subgoal has a symmetric proof. | 209 (** The second subgoal has a symmetric proof. |
212 | 210 |
213 [[ | 211 [[ |
214 | |
215 1 subgoal | 212 1 subgoal |
216 | 213 |
217 H : Q | 214 H : Q |
218 ============================ | 215 ============================ |
219 Q \/ P | 216 Q \/ P |
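The whole argument also compresses into a single line, a sketch that chains the same tactics with [;] and a per-subgoal bracket:

    Theorem or_comm' : P \/ Q -> Q \/ P.
      destruct 1; [ right | left ]; assumption.
    Qed.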
286 intuition. | 283 intuition. |
287 | 284 |
288 (** A lot of the proof structure has been generated for us by [intuition], but the final proof depends on a fact about lists. The remaining subgoal hints at what cleverness we need to inject. *) | 285 (** A lot of the proof structure has been generated for us by [intuition], but the final proof depends on a fact about lists. The remaining subgoal hints at what cleverness we need to inject. *) |
289 | 286 |
290 (** [[ | 287 (** [[ |
291 | |
292 ls1 : list nat | 288 ls1 : list nat |
293 ls2 : list nat | 289 ls2 : list nat |
294 H0 : length ls1 + length ls2 = 6 | 290 H0 : length ls1 + length ls2 = 6 |
295 ============================ | 291 ============================ |
296 length (ls1 ++ ls2) = 6 \/ length ls1 = length ls2 | 292 length (ls1 ++ ls2) = 6 \/ length ls1 = length ls2 |
297 ]] *) | 293 |
298 | 294 ]] |
299 (** We can see that we need a theorem about lengths of concatenated lists, which we proved last chapter and is also in the standard library. *) | 295 |
296 We can see that we need a theorem about lengths of concatenated lists, which we proved last chapter and is also in the standard library. *) | |
300 | 297 |
301 rewrite app_length. | 298 rewrite app_length. |
302 (** [[ | 299 (** [[ |
303 | |
304 ls1 : list nat | 300 ls1 : list nat |
305 ls2 : list nat | 301 ls2 : list nat |
306 H0 : length ls1 + length ls2 = 6 | 302 H0 : length ls1 + length ls2 = 6 |
307 ============================ | 303 ============================ |
308 length ls1 + length ls2 = 6 \/ length ls1 = length ls2 | 304 length ls1 + length ls2 = 6 \/ length ls1 = length ls2 |
309 ]] *) | 305 |
310 | 306 ]] |
311 (** Now the subgoal follows by purely propositional reasoning. That is, we could replace [length ls1 + length ls2 = 6] with [P] and [length ls1 = length ls2] with [Q] and arrive at a tautology of propositional logic. *) | 307 |
308 Now the subgoal follows by purely propositional reasoning. That is, we could replace [length ls1 + length ls2 = 6] with [P] and [length ls1 = length ls2] with [Q] and arrive at a tautology of propositional logic. *) | |
312 | 309 |
313 tauto. | 310 tauto. |
314 (* end thide *) | 311 (* end thide *) |
315 Qed. | 312 Qed. |
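The propositional skeleton of that last step can be stated and proved on its own, a sketch in which the locally quantified [A] and [B] stand in for the two arithmetic facts:

    Theorem propositional_skeleton : forall A B : Prop, A -> A \/ B.
      intros; tauto.
    Qed.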
316 | 313 |
331 | 328 |
332 (** * What Does It Mean to Be Constructive? *) | 329 (** * What Does It Mean to Be Constructive? *) |
333 | 330 |
334 (** One potential point of confusion in the presentation so far is the distinction between [bool] and [Prop]. [bool] is a datatype whose two values are [true] and [false], while [Prop] is a more primitive type that includes among its members [True] and [False]. Why not collapse these two concepts into one, and why must there be more than two states of mathematical truth? | 331 (** One potential point of confusion in the presentation so far is the distinction between [bool] and [Prop]. [bool] is a datatype whose two values are [true] and [false], while [Prop] is a more primitive type that includes among its members [True] and [False]. Why not collapse these two concepts into one, and why must there be more than two states of mathematical truth? |
335 | 332 |
336 The answer comes from the fact that Coq implements %\textit{%#<i>#constructive#</i>#%}% or %\textit{%#<i>#intuitionistic#</i>#%}% logic, in contrast to the %\textit{%#<i>#classical#</i>#%}% logic that you may be more familiar with. In constructive logic, classical tautologies like [~ ~P -> P] and [P \/ ~P] do not always hold. In general, we can only prove these tautologies when [P] is %\textit{%#<i>#decidable#</i>#%}%, in the sense of computability theory. The Curry-Howard encoding that Coq uses for [or] allows us to extract either a proof of [P] or a proof of [~P] from any proof of [P \/ ~P]. Since our proofs are just functional programs which we can run, this would give us a decision procedure for the halting problem, where the instantiations of [P] would be formulas like "this particular Turing machine halts." | 333 The answer comes from the fact that Coq implements %\textit{%#<i>#constructive#</i>#%}% or %\textit{%#<i>#intuitionistic#</i>#%}% logic, in contrast to the %\textit{%#<i>#classical#</i>#%}% logic that you may be more familiar with. In constructive logic, classical tautologies like [~ ~ P -> P] and [P \/ ~ P] do not always hold. In general, we can only prove these tautologies when [P] is %\textit{%#<i>#decidable#</i>#%}%, in the sense of computability theory. The Curry-Howard encoding that Coq uses for [or] allows us to extract either a proof of [P] or a proof of [~ P] from any proof of [P \/ ~ P]. Since our proofs are just functional programs which we can run, this would give us a decision procedure for the halting problem, where the instantiations of [P] would be formulas like "this particular Turing machine halts." |
337 | 334 |
338 Hence the distinction between [bool] and [Prop]. Programs of type [bool] are computational by construction; we can always run them to determine their results. Many [Prop]s are undecidable, and so we can write more expressive formulas with [Prop]s than with [bool]s, but the inevitable consequence is that we cannot simply "run a [Prop] to determine its truth." | 335 Hence the distinction between [bool] and [Prop]. Programs of type [bool] are computational by construction; we can always run them to determine their results. Many [Prop]s are undecidable, and so we can write more expressive formulas with [Prop]s than with [bool]s, but the inevitable consequence is that we cannot simply "run a [Prop] to determine its truth." |
339 | 336 |
340 Constructive logic lets us define all of the logical connectives in an aesthetically-appealing way, with orthogonal inductive definitions. That is, each connective is defined independently using a simple, shared mechanism. Constructivity also enables a trick called %\textit{%#<i>#program extraction#</i>#%}%, where we write programs by phrasing them as theorems to be proved. Since our proofs are just functional programs, we can extract executable programs from our final proofs, which we could not do as naturally with classical proofs. | 337 Constructive logic lets us define all of the logical connectives in an aesthetically-appealing way, with orthogonal inductive definitions. That is, each connective is defined independently using a simple, shared mechanism. Constructivity also enables a trick called %\textit{%#<i>#program extraction#</i>#%}%, where we write programs by phrasing them as theorems to be proved. Since our proofs are just functional programs, we can extract executable programs from our final proofs, which we could not do as naturally with classical proofs. |
341 | 338 |
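A sketch of the decidability caveat in a concrete case, assuming only the standard [congruence] tactic: equality of natural numbers is decidable, so this instance of excluded middle is provable constructively, with no classical axioms.

    Theorem nat_eq_classical : forall n m : nat, n = m \/ n <> m.
      induction n; destruct m as [ | m ]; try (right; congruence).
      left; reflexivity.
      destruct (IHn m).
      left; congruence.
      right; congruence.
    Qed.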
347 (** The [forall] connective of first-order logic, which we have seen in many examples so far, is built into Coq. Getting ahead of ourselves a bit, we can see it as the dependent function type constructor. In fact, implication and universal quantification are just different syntactic shorthands for the same Coq mechanism. A formula [P -> Q] is equivalent to [forall x : P, Q], where [x] does not appear in [Q]. That is, the "real" type of the implication says "for every proof of [P], there exists a proof of [Q]." | 344 (** The [forall] connective of first-order logic, which we have seen in many examples so far, is built into Coq. Getting ahead of ourselves a bit, we can see it as the dependent function type constructor. In fact, implication and universal quantification are just different syntactic shorthands for the same Coq mechanism. A formula [P -> Q] is equivalent to [forall x : P, Q], where [x] does not appear in [Q]. That is, the "real" type of the implication says "for every proof of [P], there exists a proof of [Q]." |
348 | 345 |
349 Existential quantification is defined in the standard library. *) | 346 Existential quantification is defined in the standard library. *) |
350 | 347 |
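A one-line sketch of the implication-as-[forall] point above (the name [imp_as_forall] is illustrative): the identity function typechecks at both spellings, because they are literally the same type.

    Definition imp_as_forall (X Y : Prop) (f : X -> Y) : forall x : X, Y := f.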
351 Print ex. | 348 Print ex. |
352 (** [[ | 349 (** %\vspace{-.15in}% [[ |
353 | 350 Inductive ex (A : Type) (P : A -> Prop) : Prop := |
354 Inductive ex (A : Type) (P : A -> Prop) : Prop := | |
355 ex_intro : forall x : A, P x -> ex P | 351 ex_intro : forall x : A, P x -> ex P |
356 ]] *) | 352 |
357 | 353 ]] |
358 (** [ex] is parameterized by the type [A] that we quantify over, and by a predicate [P] over [A]s. We prove an existential by exhibiting some [x] of type [A], along with a proof of [P x]. As usual, there are tactics that save us from worrying about the low-level details most of the time. We use the equality operator [=], which, depending on the settings in which they learned logic, different people will say either is or is not part of first-order logic. For our purposes, it is. *) | 354 |
355 [ex] is parameterized by the type [A] that we quantify over, and by a predicate [P] over [A]s. We prove an existential by exhibiting some [x] of type [A], along with a proof of [P x]. As usual, there are tactics that save us from worrying about the low-level details most of the time. We use the equality operator [=], which, depending on the settings in which they learned logic, different people will say either is or is not part of first-order logic. For our purposes, it is. *) | |
359 | 356 |
360 Theorem exist1 : exists x : nat, x + 1 = 2. | 357 Theorem exist1 : exists x : nat, x + 1 = 2. |
361 (* begin thide *) | 358 (* begin thide *) |
362 (** remove printing exists *) | 359 (** remove printing exists *) |
363 (** We can start this proof with a tactic [exists], which should not be confused with the formula constructor shorthand of the same name. (In the PDF version of this document, the reverse 'E' appears instead of the text "exists" in formulas.) *) | 360 (** We can start this proof with a tactic [exists], which should not be confused with the formula constructor shorthand of the same name. (In the PDF version of this document, the reverse 'E' appears instead of the text "exists" in formulas.) *) |
361 | |
364 exists 1. | 362 exists 1. |
365 | 363 |
366 (** The conclusion is replaced with a version using the existential witness that we announced. *) | 364 (** The conclusion is replaced with a version using the existential witness that we announced. |
367 | 365 |
368 (** [[ | 366 [[ |
369 | |
370 ============================ | 367 ============================ |
371 1 + 1 = 2 | 368 1 + 1 = 2 |
372 ]] *) | 369 ]] *) |
373 | 370 |
374 reflexivity. | 371 reflexivity. |
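For comparison, a sketch of the same theorem proved by supplying the [ex_intro] term directly; [refl_equal] is the constructor of the equality type printed later in this chapter, and the [@] prefix just spells out type arguments that are normally left implicit.

    Theorem exist1' : exists x : nat, x + 1 = 2.
      exact (@ex_intro nat (fun x => x + 1 = 2) 1 (@refl_equal nat 2)).
    Qed.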
380 (** We can also use tactics to reason about existential hypotheses. *) | 377 (** We can also use tactics to reason about existential hypotheses. *) |
381 | 378 |
382 Theorem exist2 : forall n m : nat, (exists x : nat, n + x = m) -> n <= m. | 379 Theorem exist2 : forall n m : nat, (exists x : nat, n + x = m) -> n <= m. |
383 (* begin thide *) | 380 (* begin thide *) |
384 (** We start by case analysis on the proof of the existential fact. *) | 381 (** We start by case analysis on the proof of the existential fact. *) |
382 | |
385 destruct 1. | 383 destruct 1. |
386 (** [[ | 384 (** [[ |
387 | |
388 n : nat | 385 n : nat |
389 m : nat | 386 m : nat |
390 x : nat | 387 x : nat |
391 H : n + x = m | 388 H : n + x = m |
392 ============================ | 389 ============================ |
393 n <= m | 390 n <= m |
394 ]] *) | 391 |
395 | 392 ]] |
396 (** The goal has been replaced by a form where there is a new free variable [x], and where we have a new hypothesis that the body of the existential holds with [x] substituted for the old bound variable. From here, the proof is just about arithmetic and is easy to automate. *) | 393 |
394 The goal has been replaced by a form where there is a new free variable [x], and where we have a new hypothesis that the body of the existential holds with [x] substituted for the old bound variable. From here, the proof is just about arithmetic and is easy to automate. *) | |
397 | 395 |
398 crush. | 396 crush. |
399 (* end thide *) | 397 (* end thide *) |
400 Qed. | 398 Qed. |
401 | 399 |
437 The definition of [isZero] differs in an important way from all of the other inductive definitions that we have seen in this and the previous chapter. Instead of writing just [Set] or [Prop] after the colon, here we write [nat -> Prop]. We saw examples of parameterized types like [list], but there the parameters appeared with names %\textit{%#<i>#before#</i>#%}% the colon. Every constructor of a parameterized inductive type must have a range type that uses the same parameter, whereas the form we use here enables us to use different arguments to the type for different constructors. | 435 The definition of [isZero] differs in an important way from all of the other inductive definitions that we have seen in this and the previous chapter. Instead of writing just [Set] or [Prop] after the colon, here we write [nat -> Prop]. We saw examples of parameterized types like [list], but there the parameters appeared with names %\textit{%#<i>#before#</i>#%}% the colon. Every constructor of a parameterized inductive type must have a range type that uses the same parameter, whereas the form we use here enables us to use different arguments to the type for different constructors. |
438 | 436 |
439 For instance, [isZero] forces its argument to be [0]. We can see that the concept of equality is somehow implicit in the inductive definition mechanism. The way this is accomplished is similar to the way that logic variables are used in Prolog, and it is a very powerful mechanism that forms a foundation for formalizing all of mathematics. In fact, though it is natural to think of inductive types as folding in the functionality of equality, in Coq, the true situation is reversed, with equality defined as just another inductive type! *) | 437 For instance, [isZero] forces its argument to be [0]. We can see that the concept of equality is somehow implicit in the inductive definition mechanism. The way this is accomplished is similar to the way that logic variables are used in Prolog, and it is a very powerful mechanism that forms a foundation for formalizing all of mathematics. In fact, though it is natural to think of inductive types as folding in the functionality of equality, in Coq, the true situation is reversed, with equality defined as just another inductive type! *) |
440 | 438 |
441 Print eq. | 439 Print eq. |
442 (** [[ | 440 (** %\vspace{-.15in}% [[ |
443 | 441 Inductive eq (A : Type) (x : A) : A -> Prop := refl_equal : x = x |
444 Inductive eq (A : Type) (x : A) : A -> Prop := refl_equal : x = x | 442 |
445 ]] *) | 443 ]] |
446 | 444 |
447 (** [eq] is the type we get behind the scenes when uses of infix [=] are expanded. We see that [eq] has both a parameter [x] that is fixed and an extra unnamed argument of the same type. The type of [eq] allows us to state any equalities, even those that are provably false. However, examining the type of equality's sole constructor [refl_equal], we see that we can only %\textit{%#<i>#prove#</i>#%}% equality when its two arguments are syntactically equal. This definition turns out to capture all of the basic properties of equality, and the equality-manipulating tactics that we have seen so far, like [reflexivity] and [rewrite], are implemented treating [eq] as just another inductive type with a well-chosen definition. | 445 [eq] is the type we get behind the scenes when uses of infix [=] are expanded. We see that [eq] has both a parameter [x] that is fixed and an extra unnamed argument of the same type. The type of [eq] allows us to state any equalities, even those that are provably false. However, examining the type of equality's sole constructor [refl_equal], we see that we can only %\textit{%#<i>#prove#</i>#%}% equality when its two arguments are syntactically equal. This definition turns out to capture all of the basic properties of equality, and the equality-manipulating tactics that we have seen so far, like [reflexivity] and [rewrite], are implemented treating [eq] as just another inductive type with a well-chosen definition. |
448 | 446 |
449 Returning to the example of [isZero], we can see how to make use of hypotheses that use this predicate. *) | 447 Returning to the example of [isZero], we can see how to make use of hypotheses that use this predicate. *) |
450 | 448 |
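A sketch of that last point about [eq]: the only way to build an equality proof is [refl_equal], and it applies here only because [2 + 2] computes to [4] (again with [@] making the type argument explicit).

    Theorem two_plus_two : 2 + 2 = 4.
      exact (@refl_equal nat 4).
    Qed.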
451 Theorem isZero_plus : forall n m : nat, isZero m -> n + m = n. | 449 Theorem isZero_plus : forall n m : nat, isZero m -> n + m = n. |
452 (* begin thide *) | 450 (* begin thide *) |
453 (** We want to proceed by cases on the proof of the assumption about [isZero]. *) | 451 (** We want to proceed by cases on the proof of the assumption about [isZero]. *) |
452 | |
454 destruct 1. | 453 destruct 1. |
455 (** [[ | 454 (** [[ |
456 | |
457 n : nat | 455 n : nat |
458 ============================ | 456 ============================ |
459 n + 0 = n | 457 n + 0 = n |
460 ]] *) | 458 |
461 | 459 ]] |
462 (** Since [isZero] has only one constructor, we are presented with only one subgoal. The argument [m] to [isZero] is replaced with that type's argument from the single constructor [IsZero]. From this point, the proof is trivial. *) | 460 |
461 Since [isZero] has only one constructor, we are presented with only one subgoal. The argument [m] to [isZero] is replaced with that type's argument from the single constructor [IsZero]. From this point, the proof is trivial. *) | |
463 | 462 |
464 crush. | 463 crush. |
465 (* end thide *) | 464 (* end thide *) |
466 Qed. | 465 Qed. |
467 | 466 |
468 (** Another example seems at first like it should admit an analogous proof, but in fact provides a demonstration of one of the most basic gotchas of Coq proving. *) | 467 (** Another example seems at first like it should admit an analogous proof, but in fact provides a demonstration of one of the most basic gotchas of Coq proving. *) |
469 | 468 |
470 Theorem isZero_contra : isZero 1 -> False. | 469 Theorem isZero_contra : isZero 1 -> False. |
471 (* begin thide *) | 470 (* begin thide *) |
472 (** Let us try a proof by cases on the assumption, as in the last proof. *) | 471 (** Let us try a proof by cases on the assumption, as in the last proof. *) |
472 | |
473 destruct 1. | 473 destruct 1. |
474 (** [[ | 474 (** [[ |
475 | |
476 ============================ | 475 ============================ |
477 False | 476 False |
478 ]] *) | 477 |
479 | 478 ]] |
480 (** It seems that case analysis has not helped us much at all! Our sole hypothesis disappears, leaving us, if anything, worse off than we were before. What went wrong? We have met an important restriction in tactics like [destruct] and [induction] when applied to types with arguments. If the arguments are not already free variables, they will be replaced by new free variables internally before doing the case analysis or induction. Since the argument [1] to [isZero] is replaced by a fresh variable, we lose the crucial fact that it is not equal to [0]. | 479 |
480 It seems that case analysis has not helped us much at all! Our sole hypothesis disappears, leaving us, if anything, worse off than we were before. What went wrong? We have met an important restriction in tactics like [destruct] and [induction] when applied to types with arguments. If the arguments are not already free variables, they will be replaced by new free variables internally before doing the case analysis or induction. Since the argument [1] to [isZero] is replaced by a fresh variable, we lose the crucial fact that it is not equal to [0]. | |
481 | 481 |
482 Why does Coq use this restriction? We will discuss the issue in detail in a future chapter, when we see the dependently-typed programming techniques that would allow us to write this proof term manually. For now, we just say that the algorithmic problem of "logically complete case analysis" is undecidable when phrased in Coq's logic. A few tactics and design patterns that we will present in this chapter suffice in almost all cases. For the current example, what we want is a tactic called [inversion], which corresponds to the concept of inversion that is frequently used with natural deduction proof systems. *) | 482 Why does Coq use this restriction? We will discuss the issue in detail in a future chapter, when we see the dependently-typed programming techniques that would allow us to write this proof term manually. For now, we just say that the algorithmic problem of "logically complete case analysis" is undecidable when phrased in Coq's logic. A few tactics and design patterns that we will present in this chapter suffice in almost all cases. For the current example, what we want is a tactic called [inversion], which corresponds to the concept of inversion that is frequently used with natural deduction proof systems. *) |
483 | 483 |
484 Undo. | 484 Undo. |
485 inversion 1. | 485 inversion 1. |
491 Sometimes using [destruct] when you should have used [inversion] can lead to confusing results. To illustrate, consider an alternate proof attempt for the last theorem. *) | 491 Sometimes using [destruct] when you should have used [inversion] can lead to confusing results. To illustrate, consider an alternate proof attempt for the last theorem. *) |
492 | 492 |
493 Theorem isZero_contra' : isZero 1 -> 2 + 2 = 5. | 493 Theorem isZero_contra' : isZero 1 -> 2 + 2 = 5. |
494 destruct 1. | 494 destruct 1. |
495 (** [[ | 495 (** [[ |
496 | |
497 ============================ | 496 ============================ |
498 1 + 1 = 4 | 497 1 + 1 = 4 |
499 ]] *) | 498 |
500 | 499 ]] |
501 (** What on earth happened here? Internally, [destruct] replaced [1] with a fresh variable, and, trying to be helpful, it also replaced the occurrence of [1] within the unary representation of each number in the goal. This has the net effect of decrementing each of these numbers. If you are doing a proof and encounter a strange transmutation like this, there is a good chance that you should go back and replace a use of [destruct] with [inversion]. *) | 500 |
501 What on earth happened here? Internally, [destruct] replaced [1] with a fresh variable, and, trying to be helpful, it also replaced the occurrence of [1] within the unary representation of each number in the goal. This has the net effect of decrementing each of these numbers. If you are doing a proof and encounter a strange transmutation like this, there is a good chance that you should go back and replace a use of [destruct] with [inversion]. *) | |
502 | |
502 Abort. | 503 Abort. |
503 | 504 |
504 | 505 |
505 (* begin hide *) | 506 (* begin hide *) |
506 (* In-class exercises *) | 507 (* In-class exercises *) |
571 | 572 |
572 Theorem even_3_contra : even 3 -> False. | 573 Theorem even_3_contra : even 3 -> False. |
573 (* begin thide *) | 574 (* begin thide *) |
574 inversion 1. | 575 inversion 1. |
575 (** [[ | 576 (** [[ |
576 | |
577 H : even 3 | 577 H : even 3 |
578 n : nat | 578 n : nat |
579 H1 : even 1 | 579 H1 : even 1 |
580 H0 : n = 1 | 580 H0 : n = 1 |
581 ============================ | 581 ============================ |
582 False | 582 False |
583 ]] *) | 583 |
584 | 584 ]] |
585 (** [inversion] can be a little overzealous at times, as we can see here with the introduction of the unused variable [n] and an equality hypothesis about it. For more complicated predicates, though, adding such assumptions is critical to dealing with the undecidability of general inversion. *) | 585 |
586 [inversion] can be a little overzealous at times, as we can see here with the introduction of the unused variable [n] and an equality hypothesis about it. For more complicated predicates, though, adding such assumptions is critical to dealing with the undecidability of general inversion. *) | |
586 | 587 |
587 inversion H1. | 588 inversion H1. |
588 (* end thide *) | 589 (* end thide *) |
589 Qed. | 590 Qed. |
590 | 591 |
591 (** We can also do inductive proofs about [even]. *) | 592 (** We can also do inductive proofs about [even]. *) |
592 | 593 |
593 Theorem even_plus : forall n m, even n -> even m -> even (n + m). | 594 Theorem even_plus : forall n m, even n -> even m -> even (n + m). |
594 (* begin thide *) | 595 (* begin thide *) |
595 (** It seems a reasonable first choice to proceed by induction on [n]. *) | 596 (** It seems a reasonable first choice to proceed by induction on [n]. *) |
597 | |
596 induction n; crush. | 598 induction n; crush. |
597 (** [[ | 599 (** [[ |
598 | |
599 n : nat | 600 n : nat |
600 IHn : forall m : nat, even n -> even m -> even (n + m) | 601 IHn : forall m : nat, even n -> even m -> even (n + m) |
601 m : nat | 602 m : nat |
602 H : even (S n) | 603 H : even (S n) |
603 H0 : even m | 604 H0 : even m |
604 ============================ | 605 ============================ |
605 even (S (n + m)) | 606 even (S (n + m)) |
606 ]] *) | 607 |
607 | 608 ]] |
608 (** We will need to use the hypotheses [H] and [H0] somehow. The most natural choice is to invert [H]. *) | 609 |
610 We will need to use the hypotheses [H] and [H0] somehow. The most natural choice is to invert [H]. *) | |
609 | 611 |
610 inversion H. | 612 inversion H. |
611 (** [[ | 613 (** [[ |
612 | |
613 n : nat | 614 n : nat |
614 IHn : forall m : nat, even n -> even m -> even (n + m) | 615 IHn : forall m : nat, even n -> even m -> even (n + m) |
615 m : nat | 616 m : nat |
616 H : even (S n) | 617 H : even (S n) |
617 H0 : even m | 618 H0 : even m |
618 n0 : nat | 619 n0 : nat |
619 H2 : even n0 | 620 H2 : even n0 |
620 H1 : S n0 = n | 621 H1 : S n0 = n |
621 ============================ | 622 ============================ |
622 even (S (S n0 + m)) | 623 even (S (S n0 + m)) |
623 ]] *) | 624 |
624 | 625 ]] |
625 (** Simplifying the conclusion brings us to a point where we can apply a constructor. *) | 626 |
627 Simplifying the conclusion brings us to a point where we can apply a constructor. *) | |
628 | |
626 simpl. | 629 simpl. |
627 (** [[ | 630 (** [[ |
628 | |
629 ============================ | 631 ============================ |
630 even (S (S (n0 + m))) | 632 even (S (S (n0 + m))) |
631 ]] *) | 633 ]] *) |
632 | 634 |
633 constructor. | 635 constructor. |
634 (** [[ | 636 (** [[ |
635 | |
636 ============================ | 637 ============================ |
637 even (n0 + m) | 638 even (n0 + m) |
638 ]] *) | 639 |
639 | 640 ]] |
640 (** At this point, we would like to apply the inductive hypothesis, which is: *) | 641 |
641 (** [[ | 642 At this point, we would like to apply the inductive hypothesis, which is: |
643 | |
644 [[ | |
642 | 645 |
643 IHn : forall m : nat, even n -> even m -> even (n + m) | 646 IHn : forall m : nat, even n -> even m -> even (n + m) |
644 ]] *) | 647 |
645 | 648 ]] |
646 (** Unfortunately, the goal mentions [n0] where it would need to mention [n] to match [IHn]. We could keep looking for a way to finish this proof from here, but it turns out that we can make our lives much easier by changing our basic strategy. Instead of inducting on the structure of [n], we should induct %\textit{%#<i>#on the structure of one of the [even] proofs#</i>#%}%. This technique is commonly called %\textit{%#<i>#rule induction#</i>#%}% in programming language semantics. In the setting of Coq, we have already seen how predicates are defined using the same inductive type mechanism as datatypes, so the fundamental unity of rule induction with "normal" induction is apparent. *) | 649 |
650 Unfortunately, the goal mentions [n0] where it would need to mention [n] to match [IHn]. We could keep looking for a way to finish this proof from here, but it turns out that we can make our lives much easier by changing our basic strategy. Instead of inducting on the structure of [n], we should induct %\textit{%#<i>#on the structure of one of the [even] proofs#</i>#%}%. This technique is commonly called %\textit{%#<i>#rule induction#</i>#%}% in programming language semantics. In the setting of Coq, we have already seen how predicates are defined using the same inductive type mechanism as datatypes, so the fundamental unity of rule induction with "normal" induction is apparent. *) | |
647 | 651 |
648 Restart. | 652 Restart. |
649 | 653 |
650 induction 1. | 654 induction 1. |
651 (** [[ | 655 (** [[ |
652 | |
653 m : nat | 656 m : nat |
654 ============================ | 657 ============================ |
655 even m -> even (0 + m) | 658 even m -> even (0 + m) |
656 | 659 |
657 subgoal 2 is: | 660 subgoal 2 is: |
658 even m -> even (S (S n) + m) | 661 even m -> even (S (S n) + m) |
659 ]] *) | 662 |
660 | 663 ]] |
661 (** The first case is easily discharged by [crush], based on the hint we added earlier to try the constructors of [even]. *) | 664 |
665 The first case is easily discharged by [crush], based on the hint we added earlier to try the constructors of [even]. *) | |
662 | 666 |
663 crush. | 667 crush. |
664 | 668 |
665 (** Now we focus on the second case: *) | 669 (** Now we focus on the second case: *) |
670 | |
666 intro. | 671 intro. |
667 | 672 |
668 (** [[ | 673 (** [[ |
669 | |
670 m : nat | 674 m : nat |
671 n : nat | 675 n : nat |
672 H : even n | 676 H : even n |
673 IHeven : even m -> even (n + m) | 677 IHeven : even m -> even (n + m) |
674 H0 : even m | 678 H0 : even m |
675 ============================ | 679 ============================ |
676 even (S (S n) + m) | 680 even (S (S n) + m) |
677 ]] *) | 681 |
678 | 682 ]] |
679 (** We simplify and apply a constructor, as in our last proof attempt. *) | 683 |
684 We simplify and apply a constructor, as in our last proof attempt. *) | |
680 | 685 |
681 simpl; constructor. | 686 simpl; constructor. |
682 (** [[ | 687 (** [[ |
683 | |
684 ============================ | 688 ============================ |
685 even (n + m) | 689 even (n + m) |
686 ]] *) | 690 |
687 | 691 ]] |
688 (** Now we have an exact match with our inductive hypothesis, and the remainder of the proof is trivial. *) | 692 |
693 Now we have an exact match with our inductive hypothesis, and the remainder of the proof is trivial. *) | |
689 | 694 |
690 apply IHeven; assumption. | 695 apply IHeven; assumption. |
691 | 696 |
692 (** In fact, [crush] can handle all of the details of the proof once we declare the induction strategy. *) | 697 (** In fact, [crush] can handle all of the details of the proof once we declare the induction strategy. *) |
693 | 698 |
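The elided script is presumably along these lines, a sketch that relies on the constructor hint mentioned earlier and on [crush] behaving as just described:

    Theorem even_plus' : forall n m, even n -> even m -> even (n + m).
      induction 1; crush.
    Qed.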
700 | 705 |
701 Theorem even_contra : forall n, even (S (n + n)) -> False. | 706 Theorem even_contra : forall n, even (S (n + n)) -> False. |
702 (* begin thide *) | 707 (* begin thide *) |
703 induction 1. | 708 induction 1. |
704 (** [[ | 709 (** [[ |
705 | |
706 n : nat | 710 n : nat |
707 ============================ | 711 ============================ |
708 False | 712 False |
709 | 713 |
710 subgoal 2 is: | 714 subgoal 2 is: |
711 False | 715 False |
712 ]] *) | 716 |
713 | 717 ]] |
714 (** We are already sunk trying to prove the first subgoal, since the argument to [even] was replaced by a fresh variable internally. This time, we find it easiest to prove this theorem by way of a lemma. Instead of trusting [induction] to replace expressions with fresh variables, we do it ourselves, explicitly adding the appropriate equalities as new assumptions. *) | 718 |
719 We are already sunk trying to prove the first subgoal, since the argument to [even] was replaced by a fresh variable internally. This time, we find it easiest to prove this theorem by way of a lemma. Instead of trusting [induction] to replace expressions with fresh variables, we do it ourselves, explicitly adding the appropriate equalities as new assumptions. *) | |
720 | |
715 Abort. | 721 Abort. |
716 | 722 |
717 Lemma even_contra' : forall n', even n' -> forall n, n' = S (n + n) -> False. | 723 Lemma even_contra' : forall n', even n' -> forall n, n' = S (n + n) -> False. |
718 induction 1; crush. | 724 induction 1; crush. |
719 | 725 |
720 (** At this point, it is useful to consider all cases of [n] and [n0] being zero or nonzero. Only one of these cases has any trickiness to it. *) | 726 (** At this point, it is useful to consider all cases of [n] and [n0] being zero or nonzero. Only one of these cases has any trickiness to it. *) |
727 | |
721 destruct n; destruct n0; crush. | 728 destruct n; destruct n0; crush. |
722 | 729 |
723 (** [[ | 730 (** [[ |
724 | |
725 n : nat | 731 n : nat |
726 H : even (S n) | 732 H : even (S n) |
727 IHeven : forall n0 : nat, S n = S (n0 + n0) -> False | 733 IHeven : forall n0 : nat, S n = S (n0 + n0) -> False |
728 n0 : nat | 734 n0 : nat |
729 H0 : S n = n0 + S n0 | 735 H0 : S n = n0 + S n0 |
730 ============================ | 736 ============================ |
731 False | 737 False |
732 ]] *) | 738 |
733 | 739 ]] |
734 (** At this point it is useful to use a theorem from the standard library, which we also proved with a different name in the last chapter. *) | 740 |
741 At this point it is useful to use a theorem from the standard library, which we also proved with a different name in the last chapter. *) | |
742 | |
735 Check plus_n_Sm. | 743 Check plus_n_Sm. |
736 (** [[ | 744 (** %\vspace{-.15in}% [[ |
737 | 745 plus_n_Sm |
738 plus_n_Sm | |
739 : forall n m : nat, S (n + m) = n + S m | 746 : forall n m : nat, S (n + m) = n + S m |
740 ]] *) | 747 ]] *) |
741 | 748 |
742 rewrite <- plus_n_Sm in H0. | 749 rewrite <- plus_n_Sm in H0. |
743 | 750 |
744 (** The induction hypothesis lets us complete the proof. *) | 751 (** The induction hypothesis lets us complete the proof. *) |
752 | |
745 apply IHeven with n0; assumption. | 753 apply IHeven with n0; assumption. |
746 | 754 |
747 (** As usual, we can rewrite the proof to avoid referencing any locally-generated names, which makes our proof script more readable and more robust to changes in the theorem statement. We use the notation [<-] to request a hint that does right-to-left rewriting, just like we can with the [rewrite] tactic. *) | 755 (** As usual, we can rewrite the proof to avoid referencing any locally-generated names, which makes our proof script more readable and more robust to changes in the theorem statement. We use the notation [<-] to request a hint that does right-to-left rewriting, just like we can with the [rewrite] tactic. *) |
748 Restart. | 756 |
757 Restart. | |
749 Hint Rewrite <- plus_n_Sm : cpdt. | 758 Hint Rewrite <- plus_n_Sm : cpdt. |
750 | 759 |
751 induction 1; crush; | 760 induction 1; crush; |
752 match goal with | 761 match goal with |
753 | [ H : S ?N = ?N0 + ?N0 |- _ ] => destruct N; destruct N0 | 762 | [ H : S ?N = ?N0 + ?N0 |- _ ] => destruct N; destruct N0 |
772 induction 1; crush; | 781 induction 1; crush; |
773 match goal with | 782 match goal with |
774 | [ H : S ?N = ?N0 + ?N0 |- _ ] => destruct N; destruct N0 | 783 | [ H : S ?N = ?N0 + ?N0 |- _ ] => destruct N; destruct N0 |
775 end; crush; eauto. | 784 end; crush; eauto. |
776 | 785 |
777 (** One subgoal remains: *) | 786 (** One subgoal remains: |
778 | 787 |
779 (** [[ | 788 [[ |
780 | |
781 n : nat | 789 n : nat |
782 H : even (S (n + n)) | 790 H : even (S (n + n)) |
783 IHeven : S (n + n) = S (S (S (n + n))) -> False | 791 IHeven : S (n + n) = S (S (S (n + n))) -> False |
784 ============================ | 792 ============================ |
785 False | 793 False |
786 ]] *) | 794 |
787 | 795 ]] |
788 (** We are out of luck here. The inductive hypothesis is trivially true, since its assumption is false. In the version of this proof that succeeded, [IHeven] had an explicit quantification over [n]. This is because the quantification of [n] %\textit{%#<i>#appeared after the thing we are inducting on#</i>#%}% in the theorem statement. In general, quantified variables and hypotheses that appear before the induction object in the theorem statement stay fixed throughout the inductive proof. Variables and hypotheses that are quantified after the induction object may be varied explicitly in uses of inductive hypotheses. | 796 |
797 We are out of luck here. The inductive hypothesis is trivially true, since its assumption is false. In the version of this proof that succeeded, [IHeven] had an explicit quantification over [n]. This is because the quantification of [n] %\textit{%#<i>#appeared after the thing we are inducting on#</i>#%}% in the theorem statement. In general, quantified variables and hypotheses that appear before the induction object in the theorem statement stay fixed throughout the inductive proof. Variables and hypotheses that are quantified after the induction object may be varied explicitly in uses of inductive hypotheses. | |
789 | 798 |
790 Why should Coq implement [induction] this way? One answer is that it avoids burdening this basic tactic with additional heuristic smarts, but that is not the whole picture. Imagine that [induction] analyzed dependencies among variables and reordered quantifiers to preserve as much freedom as possible in later uses of inductive hypotheses. This could make the inductive hypotheses more complex, which could in turn cause particular automation machinery to fail when it would have succeeded before. In general, we want to avoid quantifiers in our proofs whenever we can, and that goal is furthered by the refactoring that the [induction] tactic forces us to do. *) | 799 Why should Coq implement [induction] this way? One answer is that it avoids burdening this basic tactic with additional heuristic smarts, but that is not the whole picture. Imagine that [induction] analyzed dependencies among variables and reordered quantifiers to preserve as much freedom as possible in later uses of inductive hypotheses. This could make the inductive hypotheses more complex, which could in turn cause particular automation machinery to fail when it would have succeeded before. In general, we want to avoid quantifiers in our proofs whenever we can, and that goal is furthered by the refactoring that the [induction] tactic forces us to do. *) |
791 (* end thide *) | 800 (* end thide *) |
801 | |
792 Abort. | 802 Abort. |
793 | 803 |
794 | 804 |
795 (* begin hide *) | 805 (* begin hide *) |
796 (* In-class exercises *) | 806 (* In-class exercises *) |
863 (** %\begin{enumerate}%#<ol># | 873 (** %\begin{enumerate}%#<ol># |
864 | 874 |
865 %\item%#<li># Prove these tautologies of propositional logic, using only the tactics [apply], [assumption], [constructor], [destruct], [intro], [intros], [left], [right], [split], and [unfold]. | 875 %\item%#<li># Prove these tautologies of propositional logic, using only the tactics [apply], [assumption], [constructor], [destruct], [intro], [intros], [left], [right], [split], and [unfold]. |
866 %\begin{enumerate}%#<ol># | 876 %\begin{enumerate}%#<ol># |
867 %\item%#<li># [(True \/ False) /\ (False \/ True)]#</li># | 877 %\item%#<li># [(True \/ False) /\ (False \/ True)]#</li># |
868 %\item%#<li># [P -> ~ ~P]#</li># | 878 %\item%#<li># [P -> ~ ~ P]#</li># |
869 %\item%#<li># [P /\ (Q \/ R) -> (P /\ Q) \/ (P /\ R)]#</li># | 879 %\item%#<li># [P /\ (Q \/ R) -> (P /\ Q) \/ (P /\ R)]#</li># |
870 #</ol> </li>#%\end{enumerate}% | 880 #</ol> </li>#%\end{enumerate}% |
871 | 881 |
872 %\item%#<li># Prove the following tautology of first-order logic, using only the tactics [apply], [assert], [assumption], [destruct], [eapply], [eassumption], and %\textit{%#<tt>#exists#</tt>#%}%. You will probably find [assert] useful for stating and proving an intermediate lemma, enabling a kind of "forward reasoning," in contrast to the "backward reasoning" that is the default for Coq tactics. [eassumption] is a version of [assumption] that will do matching of unification variables. Let some variable [T] of type [Set] be the set of individuals. [x] is a constant symbol, [p] is a unary predicate symbol, [q] is a binary predicate symbol, and [f] is a unary function symbol. | 882 %\item%#<li># Prove the following tautology of first-order logic, using only the tactics [apply], [assert], [assumption], [destruct], [eapply], [eassumption], and %\textit{%#<tt>#exists#</tt>#%}%. You will probably find [assert] useful for stating and proving an intermediate lemma, enabling a kind of "forward reasoning," in contrast to the "backward reasoning" that is the default for Coq tactics. [eassumption] is a version of [assumption] that will do matching of unification variables. Let some variable [T] of type [Set] be the set of individuals. [x] is a constant symbol, [p] is a unary predicate symbol, [q] is a binary predicate symbol, and [f] is a unary function symbol. |
873 %\begin{enumerate}%#<ol># | 883 %\begin{enumerate}%#<ol># |
881 %\item%#<li># Define [var] as a synonym for the natural numbers.#</li># | 891 %\item%#<li># Define [var] as a synonym for the natural numbers.#</li># |
882 %\item%#<li># Define an inductive type [exp] of expressions, containing natural number constants, natural number addition, pairing of two other expressions, extraction of the first component of a pair, extraction of the second component of a pair, and variables (based on the [var] type you defined).#</li># | 892 %\item%#<li># Define an inductive type [exp] of expressions, containing natural number constants, natural number addition, pairing of two other expressions, extraction of the first component of a pair, extraction of the second component of a pair, and variables (based on the [var] type you defined).#</li># |
883 %\item%#<li># Define an inductive type [cmd] of commands, containing expressions and variable assignments. A variable assignment node should contain the variable being assigned, the expression being assigned to it, and the command to run afterward.#</li># | 893 %\item%#<li># Define an inductive type [cmd] of commands, containing expressions and variable assignments. A variable assignment node should contain the variable being assigned, the expression being assigned to it, and the command to run afterward.#</li># |
884 %\item%#<li># Define an inductive type [val] of values, containing natural number constants and pairings of values.#</li># | 894 %\item%#<li># Define an inductive type [val] of values, containing natural number constants and pairings of values.#</li># |
885 %\item%#<li># Define a type of variable assignments, which assign a value to each variable.#</li># | 895 %\item%#<li># Define a type of variable assignments, which assign a value to each variable.#</li># |
886 %\item%#<li># Define a big-step evaluation relation [eval], capturing what it means for an expression to evaluate to a value under a particular variable assignment. "Big step" means that the evaluation of every expression should evaluatable with a single instance of the inductive predicate you will define. For instance, "[1 + 1] evaluates to [2] under assignment [va]" should be derivable for any assignment [va].#</li># | 896 %\item%#<li># Define a big-step evaluation relation [eval], capturing what it means for an expression to evaluate to a value under a particular variable assignment. "Big step" means that the evaluation of every expression should be proved with a single instance of the inductive predicate you will define. For instance, "[1 + 1] evaluates to [2] under assignment [va]" should be derivable for any assignment [va].#</li># |
887 %\item%#<li># Define a big-step evaluation relation [run], capturing what it means for a command to run to a value under a particular variable assignment. The value of a command is the result of evaluating its final expression.#</li># | 897 %\item%#<li># Define a big-step evaluation relation [run], capturing what it means for a command to run to a value under a particular variable assignment. The value of a command is the result of evaluating its final expression.#</li># |
888 %\item%#<li># Define a type of variable typings, which are like variable assignments, but map variables to types instead of values. You might use polymorphism to share some code with your variable assignments.#</li># | 898 %\item%#<li># Define a type of variable typings, which are like variable assignments, but map variables to types instead of values. You might use polymorphism to share some code with your variable assignments.#</li># |
889 %\item%#<li># Define typing judgments for expressions, values, and commands. The expression and command cases will be in terms of a typing assignment.#</li># | 899 %\item%#<li># Define typing judgments for expressions, values, and commands. The expression and command cases will be in terms of a typing assignment.#</li># |
890 %\item%#<li># Define a predicate [varsType] to express when a variable assignment and a variable typing agree on the types of variables.#</li># | 900 %\item%#<li># Define a predicate [varsType] to express when a variable assignment and a variable typing agree on the types of variables.#</li># |
891 %\item%#<li># Prove that any expression that has type [t] under variable typing [vt] evaluates under variable assignment [va] to some value that also has type [t] in [vt], as long as [va] and [vt] agree.#</li># | 901 %\item%#<li># Prove that any expression that has type [t] under variable typing [vt] evaluates under variable assignment [va] to some value that also has type [t] in [vt], as long as [va] and [vt] agree.#</li># |