annotate src/Large.v @ 395:3c941750c347

rev   line source
adam@381 1 (* Copyright (c) 2009-2012, Adam Chlipala
adamc@235 2 *
adamc@235 3 * This work is licensed under a
adamc@235 4 * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
adamc@235 5 * Unported License.
adamc@235 6 * The license text is available at:
adamc@235 7 * http://creativecommons.org/licenses/by-nc-nd/3.0/
adamc@235 8 *)
adamc@235 9
adamc@235 10 (* begin hide *)
adamc@236 11 Require Import Arith.
adamc@236 12
adam@314 13 Require Import CpdtTactics.
adamc@235 14
adamc@235 15 Set Implicit Arguments.
adamc@235 16 (* end hide *)
adamc@235 17
adamc@235 18
adam@381 19 (** %\part{The Big Picture}
adam@381 20
adam@381 21 \chapter{Proving in the Large}% *)
adamc@235 22
adam@367 23 (** It is somewhat unfortunate that the term %``%#"#theorem proving#"#%''% looks so much like the word %``%#"#theory.#"#%''% Most researchers and practitioners in software assume that mechanized theorem proving is profoundly impractical. Indeed, until recently, most advances in theorem proving for higher-order logics have been largely theoretical. However, starting around the beginning of the 21st century, there was a surge in the use of proof assistants in serious verification efforts. That line of work is still quite new, but I believe it is not too soon to distill some lessons on how to work effectively with large formal proofs.
adamc@236 24
adamc@236 25 Thus, this chapter gives some tips for structuring and maintaining large Coq developments. *)
adamc@236 26
adamc@236 27
adamc@236 28 (** * Ltac Anti-Patterns *)
adamc@236 29
adam@367 30 (** In this book, I have been following an unusual style, where proofs are not considered finished until they are %\index{fully automated proofs}``%#"#fully automated,#"#%''% in a certain sense. Each such theorem is proved by a single tactic. Since Ltac is a Turing-complete programming language, it is not hard to squeeze arbitrary heuristics into single tactics, using operators like the semicolon to combine steps. In contrast, most Ltac proofs %``%#"#in the wild#"#%''% consist of many steps, performed by individual tactics followed by periods. Is it really worth drawing a distinction between proof steps terminated by semicolons and steps terminated by periods?
adamc@236 31
adamc@237 32 I argue that this is, in fact, a very important distinction, with serious consequences for a majority of important verification domains. The more uninteresting drudge work a proof domain involves, the more important it is to work to prove theorems with single tactics. From an automation standpoint, single-tactic proofs can be extremely effective, and automation becomes more and more critical as proofs are populated by more uninteresting detail. In this section, I will give some examples of the consequences of more common proof styles.
adamc@236 33
adamc@236 34 As a running example, consider a basic language of arithmetic expressions, an interpreter for it, and a transformation that scales up every constant in an expression. *)
adamc@236 35
adamc@236 36 Inductive exp : Set :=
adamc@236 37 | Const : nat -> exp
adamc@236 38 | Plus : exp -> exp -> exp.
adamc@236 39
adamc@236 40 Fixpoint eval (e : exp) : nat :=
adamc@236 41 match e with
adamc@236 42 | Const n => n
adamc@236 43 | Plus e1 e2 => eval e1 + eval e2
adamc@236 44 end.
adamc@236 45
adamc@236 46 Fixpoint times (k : nat) (e : exp) : exp :=
adamc@236 47 match e with
adamc@236 48 | Const n => Const (k * n)
adamc@236 49 | Plus e1 e2 => Plus (times k e1) (times k e2)
adamc@236 50 end.
adamc@236 51
(** We can write a very manual proof that [times] really scales an expression's value by a constant factor. *)
adamc@236 53
adamc@236 54 Theorem eval_times : forall k e,
adamc@236 55 eval (times k e) = k * eval e.
adamc@236 56 induction e.
adamc@236 57
adamc@236 58 trivial.
adamc@236 59
adamc@236 60 simpl.
adamc@236 61 rewrite IHe1.
adamc@236 62 rewrite IHe2.
adamc@236 63 rewrite mult_plus_distr_l.
adamc@236 64 trivial.
adamc@236 65 Qed.
adamc@236 66
adam@368 67 (* begin thide *)
(** We use blank lines to separate the two inductive cases, but note that these blank lines have no real semantic content; Coq does not enforce that our spacing matches the real case structure of a proof. The second case mentions automatically generated hypothesis names explicitly. As a result, innocuous changes to the theorem statement can invalidate the proof. *)
adamc@236 69
adamc@236 70 Reset eval_times.
adamc@236 71
adam@368 72 Theorem eval_times : forall k x,
adamc@236 73 eval (times k x) = k * eval x.
adamc@236 74 induction x.
adamc@236 75
adamc@236 76 trivial.
adamc@236 77
adamc@236 78 simpl.
adam@367 79 (** %\vspace{-.15in}%[[
adamc@236 80 rewrite IHe1.
adam@367 81 ]]
adamc@236 82
adam@367 83 <<
adamc@236 84 Error: The reference IHe1 was not found in the current environment.
adam@367 85 >>
adamc@236 86
adamc@236 87 The inductive hypotheses are named [IHx1] and [IHx2] now, not [IHe1] and [IHe2]. *)
adamc@236 88
adamc@236 89 Abort.
adamc@236 90
adamc@236 91 (** We might decide to use a more explicit invocation of [induction] to give explicit binders for all of the names that we will reference later in the proof. *)
adamc@236 92
adamc@236 93 Theorem eval_times : forall k e,
adamc@236 94 eval (times k e) = k * eval e.
adamc@236 95 induction e as [ | ? IHe1 ? IHe2 ].
adamc@236 96
adamc@236 97 trivial.
adamc@236 98
adamc@236 99 simpl.
adamc@236 100 rewrite IHe1.
adamc@236 101 rewrite IHe2.
adamc@236 102 rewrite mult_plus_distr_l.
adamc@236 103 trivial.
adamc@236 104 Qed.
adamc@236 105
(** We pass %\index{tactics!induction}%[induction] an %\index{intro pattern}\textit{%#<i>#intro pattern#</i>#%}%, using a [|] character to separate out instructions for the different inductive cases. Within a case, we write [?] to ask Coq to generate a name automatically, and we write an explicit name to assign that name to the corresponding new variable. It is apparent that, to use intro patterns to avoid proof brittleness, one needs to keep track of such seemingly unimportant details as the order in which variables are introduced. Thus, the script keeps working if we replace [e] by [x], but it has become more cluttered. Arguably, neither proof is particularly easy to follow.
adamc@236 107
adamc@237 108 That category of complaint has to do with understanding proofs as static artifacts. As with programming in general, with serious projects, it tends to be much more important to be able to support evolution of proofs as specifications change. Unstructured proofs like the above examples can be very hard to update in concert with theorem statements. For instance, consider how the last proof script plays out when we modify [times] to introduce a bug. *)
adamc@236 109
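(** Before making that change, a brief aside: we could go one step further and choose every name in the pattern ourselves. The following variant is included purely for illustration, and nothing later depends on it; it is a bit more verbose, but it no longer relies on any of Coq's automatic naming choices. *)

Theorem eval_times_named : forall k e,
  eval (times k e) = k * eval e.
  induction e as [ n | e1 IHe1 e2 IHe2 ].

  trivial.

  simpl.
  rewrite IHe1.
  rewrite IHe2.
  rewrite mult_plus_distr_l.
  trivial.
Qed.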
adamc@236 110 Reset times.
adamc@236 111
adamc@236 112 Fixpoint times (k : nat) (e : exp) : exp :=
adamc@236 113 match e with
adamc@236 114 | Const n => Const (1 + k * n)
adamc@236 115 | Plus e1 e2 => Plus (times k e1) (times k e2)
adamc@236 116 end.
adamc@236 117
adamc@236 118 Theorem eval_times : forall k e,
adamc@236 119 eval (times k e) = k * eval e.
adamc@236 120 induction e as [ | ? IHe1 ? IHe2 ].
adamc@236 121
adamc@236 122 trivial.
adamc@236 123
adamc@236 124 simpl.
adam@367 125 (** %\vspace{-.15in}%[[
adamc@236 126 rewrite IHe1.
adam@367 127 ]]
adamc@236 128
adam@367 129 <<
adamc@236 130 Error: The reference IHe1 was not found in the current environment.
adam@367 131 >>
adam@302 132 *)
adamc@236 133
adamc@236 134 Abort.
adamc@236 135
adam@367 136 (** Can you spot what went wrong, without stepping through the script step-by-step? The problem is that [trivial] never fails. Originally, [trivial] had been succeeding in proving an equality that follows by reflexivity. Our change to [times] leads to a case where that equality is no longer true. The invocation [trivial] happily leaves the false equality in place, and we continue on to the span of tactics intended for the second inductive case. Unfortunately, those tactics end up being applied to the %\textit{%#<i>#first#</i>#%}% case instead.
adamc@237 137
adam@387 138 The problem with [trivial] could be %``%#"#solved#"#%''% by writing, e.g., [solve [ trivial ]] instead, so that an error is signaled early on if something unexpected happens. However, the root problem is that the syntax of a tactic invocation does not imply how many subgoals it produces. Much more confusing instances of this problem are possible. For example, if a lemma [L] is modified to take an extra hypothesis, then uses of [apply L] will generate more subgoals than before. Old unstructured proof scripts will become hopelessly jumbled, with tactics applied to inappropriate subgoals. Because of the lack of structure, there is usually relatively little to be gleaned from knowledge of the precise point in a proof script where an error is raised. *)
adamc@236 139
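(** To see the [solve] remedy in action, here is a small sketch on a throwaway lemma; the lemma is chosen arbitrarily, and nothing later depends on it. Each [solve [ trivial ]] either closes its goal or fails on the spot, so a case that unexpectedly stops being trivial is reported exactly where it happens, rather than leaking its goal into the tactics meant for the next case. *)

Theorem solve_example : forall n : nat, n + 0 = n.
  induction n.

  solve [ trivial ].

  simpl.
  rewrite IHn.
  solve [ trivial ].
Qed.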
adamc@236 140 Reset times.
adamc@236 141
adamc@236 142 Fixpoint times (k : nat) (e : exp) : exp :=
adamc@236 143 match e with
adamc@236 144 | Const n => Const (k * n)
adamc@236 145 | Plus e1 e2 => Plus (times k e1) (times k e2)
adamc@236 146 end.
adamc@236 147
adam@387 148 (** Many real developments try to make essentially unstructured proofs look structured by applying careful indentation conventions, idempotent case-marker tactics included solely to serve as documentation, and so on. All of these strategies suffer from the same kind of failure of abstraction that was just demonstrated. I like to say that if you find yourself caring about indentation in a proof script, it is a sign that the script is structured poorly.
adamc@236 149
adamc@236 150 We can rewrite the current proof with a single tactic. *)
adamc@236 151
adamc@236 152 Theorem eval_times : forall k e,
adamc@236 153 eval (times k e) = k * eval e.
adamc@236 154 induction e as [ | ? IHe1 ? IHe2 ]; [
adamc@236 155 trivial
adamc@236 156 | simpl; rewrite IHe1; rewrite IHe2; rewrite mult_plus_distr_l; trivial ].
adamc@236 157 Qed.
adamc@236 158
adam@387 159 (** We use the form of the semicolon operator that allows a different tactic to be specified for each generated subgoal. This is an improvement in robustness of the script. We no longer need to worry about tactics from one case being applied to a different case. Still, the proof script is not especially readable. Probably most readers would not find it helpful in explaining why the theorem is true.
adamc@236 160
adamc@236 161 The situation gets worse in considering extensions to the theorem we want to prove. Let us add multiplication nodes to our [exp] type and see how the proof fares. *)
adamc@236 162
adamc@236 163 Reset exp.
adamc@236 164
adamc@236 165 Inductive exp : Set :=
adamc@236 166 | Const : nat -> exp
adamc@236 167 | Plus : exp -> exp -> exp
adamc@236 168 | Mult : exp -> exp -> exp.
adamc@236 169
adamc@236 170 Fixpoint eval (e : exp) : nat :=
adamc@236 171 match e with
adamc@236 172 | Const n => n
adamc@236 173 | Plus e1 e2 => eval e1 + eval e2
adamc@236 174 | Mult e1 e2 => eval e1 * eval e2
adamc@236 175 end.
adamc@236 176
adamc@236 177 Fixpoint times (k : nat) (e : exp) : exp :=
adamc@236 178 match e with
adamc@236 179 | Const n => Const (k * n)
adamc@236 180 | Plus e1 e2 => Plus (times k e1) (times k e2)
adamc@236 181 | Mult e1 e2 => Mult (times k e1) e2
adamc@236 182 end.
adamc@236 183
adamc@236 184 Theorem eval_times : forall k e,
adamc@236 185 eval (times k e) = k * eval e.
adam@367 186 (** %\vspace{-.25in}%[[
adamc@236 187 induction e as [ | ? IHe1 ? IHe2 ]; [
adamc@236 188 trivial
adamc@236 189 | simpl; rewrite IHe1; rewrite IHe2; rewrite mult_plus_distr_l; trivial ].
adam@367 190 ]]
adamc@236 191
adam@367 192 <<
adamc@236 193 Error: Expects a disjunctive pattern with 3 branches.
adam@367 194 >>
adam@302 195 *)
adamc@236 196
adamc@236 197 Abort.
adamc@236 198
adamc@236 199 (** Unsurprisingly, the old proof fails, because it explicitly says that there are two inductive cases. To update the script, we must, at a minimum, remember the order in which the inductive cases are generated, so that we can insert the new case in the appropriate place. Even then, it will be painful to add the case, because we cannot walk through proof steps interactively when they occur inside an explicit set of cases. *)
adamc@236 200
adamc@236 201 Theorem eval_times : forall k e,
adamc@236 202 eval (times k e) = k * eval e.
adamc@236 203 induction e as [ | ? IHe1 ? IHe2 | ? IHe1 ? IHe2 ]; [
adamc@236 204 trivial
adamc@236 205 | simpl; rewrite IHe1; rewrite IHe2; rewrite mult_plus_distr_l; trivial
adamc@236 206 | simpl; rewrite IHe1; rewrite mult_assoc; trivial ].
adamc@236 207 Qed.
adamc@236 208
(** Now we are in a position to see how much nicer the style of proof that we have followed in most of this book turns out to be. *)
adamc@236 210
adamc@236 211 Reset eval_times.
adamc@236 212
adam@375 213 Hint Rewrite mult_plus_distr_l.
adamc@238 214
adamc@236 215 Theorem eval_times : forall k e,
adamc@236 216 eval (times k e) = k * eval e.
adamc@236 217 induction e; crush.
adamc@236 218 Qed.
adam@368 219 (* end thide *)
adamc@236 220
(** This style is motivated by a hard truth: one person's manual proof script is almost always mostly inscrutable to almost everyone else. I claim that step-by-step formal proofs are a poor way of conveying information. Thus, we might as well cut out the steps and automate as much as possible.
adamc@237 222
adamc@237 223 What about the illustrative value of proofs? Most informal proofs are read to convey the big ideas of proofs. How can reading [induction e; crush] convey any big ideas? My position is that any ideas that standard automation can find are not very big after all, and the %\textit{%#<i>#real#</i>#%}% big ideas should be expressed through lemmas that are added as hints.
adamc@237 224
adamc@237 225 An example should help illustrate what I mean. Consider this function, which rewrites an expression using associativity of addition and multiplication. *)
adamc@237 226
adamc@237 227 Fixpoint reassoc (e : exp) : exp :=
adamc@237 228 match e with
adamc@237 229 | Const _ => e
adamc@237 230 | Plus e1 e2 =>
adamc@237 231 let e1' := reassoc e1 in
adamc@237 232 let e2' := reassoc e2 in
adamc@237 233 match e2' with
adamc@237 234 | Plus e21 e22 => Plus (Plus e1' e21) e22
adamc@237 235 | _ => Plus e1' e2'
adamc@237 236 end
adamc@237 237 | Mult e1 e2 =>
adamc@237 238 let e1' := reassoc e1 in
adamc@237 239 let e2' := reassoc e2 in
adamc@237 240 match e2' with
adamc@237 241 | Mult e21 e22 => Mult (Mult e1' e21) e22
adamc@237 242 | _ => Mult e1' e2'
adamc@237 243 end
adamc@237 244 end.
adamc@237 245
adamc@237 246 Theorem reassoc_correct : forall e, eval (reassoc e) = eval e.
adam@368 247 (* begin thide *)
adamc@237 248 induction e; crush;
adamc@237 249 match goal with
adamc@237 250 | [ |- context[match ?E with Const _ => _ | Plus _ _ => _ | Mult _ _ => _ end] ] =>
adamc@237 251 destruct E; crush
adamc@237 252 end.
adamc@237 253
adamc@237 254 (** One subgoal remains:
adamc@237 255 [[
adamc@237 256 IHe2 : eval e3 * eval e4 = eval e2
adamc@237 257 ============================
adamc@237 258 eval e1 * eval e3 * eval e4 = eval e1 * eval e2
adamc@237 259 ]]
adamc@237 260
adamc@237 261 [crush] does not know how to finish this goal. We could finish the proof manually. *)
adamc@237 262
adamc@237 263 rewrite <- IHe2; crush.
adamc@237 264
adamc@237 265 (** However, the proof would be easier to understand and maintain if we separated this insight into a separate lemma. *)
adamc@237 266
adamc@237 267 Abort.
adamc@237 268
adamc@237 269 Lemma rewr : forall a b c d, b * c = d -> a * b * c = a * d.
adamc@237 270 crush.
adamc@237 271 Qed.
adamc@237 272
adamc@237 273 Hint Resolve rewr.
adamc@237 274
adamc@237 275 Theorem reassoc_correct : forall e, eval (reassoc e) = eval e.
adamc@237 276 induction e; crush;
adamc@237 277 match goal with
adamc@237 278 | [ |- context[match ?E with Const _ => _ | Plus _ _ => _ | Mult _ _ => _ end] ] =>
adamc@237 279 destruct E; crush
adamc@237 280 end.
adamc@237 281 Qed.
adam@368 282 (* end thide *)
adamc@237 283
adamc@237 284 (** In the limit, a complicated inductive proof might rely on one hint for each inductive case. The lemma for each hint could restate the associated case. Compared to manual proof scripts, we arrive at more readable results. Scripts no longer need to depend on the order in which cases are generated. The lemmas are easier to digest separately than are fragments of tactic code, since lemma statements include complete proof contexts. Such contexts can only be extracted from monolithic manual proofs by stepping through scripts interactively.
adamc@237 285
adam@387 286 The more common situation is that a large induction has several easy cases that automation makes short work of. In the remaining cases, automation performs some standard simplification. Among these cases, some may require quite involved proofs; such a case may deserve a hint lemma of its own, where the lemma statement may copy the simplified version of the case. Alternatively, the proof script for the main theorem may be extended with some automation code targeted at the specific case. Even such targeted scripting is more desirable than manual proving, because it may be read and understood without knowledge of a proof's hierarchical structure, case ordering, or name binding structure.
adam@387 287
A competing alternative to the common style of Coq tactics is the %\index{declarative proof scripts}\emph{%#<i>#declarative#</i>#%}% style, most frequently associated today with the %\index{Isar}%Isar%~\cite{Isar}% language. A declarative proof script is very explicit about subgoal structure and introduction of local names, aiming for human readability. The coding of proof automation is taken to be outside the scope of the proof language, an assumption related to the idea that it is not worth building new automation for each serious theorem. I have shown in this book many examples of theorem-specific automation, which I believe is crucial for scaling to significant results. Declarative proof scripts make it easier to read a script and modify it after a change to the theorem statement, but the alternate %\index{adaptive proof scripts}\emph{%#<i>#adaptive#</i>#%}% style from this book allows use of the %\emph{%#<i>#same#</i>#%}% scripts for many versions of a theorem.
adam@387 289
adam@387 290 Perhaps I am a pessimist for thinking that fully formal proofs will inevitably consist of details that are uninteresting to people, but it is my preference to focus on conveying proof-specific details through choice of lemmas. Additionally, adaptive Ltac scripts contain bits of automation that can be understood in isolation. For instance, in a big [repeat match] loop, each case can generally be digested separately, which is a big contrast from trying to understand the hierarchical structure of a script in a more common style. Adaptive scripts rely on variable binding, but generally only over very small scopes, whereas understanding a traditional script requires tracking the identities of local variables potentially across pages of code.
adam@387 291
adam@387 292 One might also wonder why it makes sense to prove all theorems automatically (in the sense of adaptive proof scripts) but not construct all programs automatically. My view there is that %\emph{%#<i>#program synthesis#</i>#%}% is a very useful idea that deserves broader application! In practice, there are difficult obstacles in the way of finding a program automatically from its specification. A typical specification is not exhaustive in its description of program properties. For instance, details of performance on particular machine architectures are often omitted. As a result, a synthesized program may be correct in some sense while suffering from deficiencies in other senses. Program synthesis research will continue to come up with ways of dealing with this problem, but the situation for theorem proving is fundamentally similar. Following mathematical practice, the only property of a formal proof that we care about is which theorem it proves, and it is trivial to check this property automatically. In other words, with a simple criterion for what makes a proof acceptable, automatic search is straightforward. Of course, in practice we also care about understandability of proofs to facilitate long-term maintenance, and that is just what the techniques outlined above are meant to support, and the next section gives some related advice. *)
adamc@237 293
adamc@235 294
adamc@238 295 (** * Debugging and Maintaining Automation *)
adamc@238 296
(** Fully automated proofs are desirable because they open up possibilities for automatic adaptation to changes of specification. A well-engineered script within a narrow domain can survive many changes to the formulation of the problem it solves. Still, as we are working with higher-order logic, most theorems of interest do not fall within any obvious decidable theory. It is inevitable that most long-lived automated proofs will need updating.
adamc@238 298
adam@367 299 Before we are ready to update our proofs, we need to write them in the first place. While fully automated scripts are most robust to changes of specification, it is hard to write every new proof directly in that form. Instead, it is useful to begin a theorem with exploratory proving and then gradually refine it into a suitable automated form.
adamc@238 300
adam@387 301 Consider this theorem from Chapter 8, which we begin by proving in a mostly manual way, invoking [crush] after each step to discharge any low-hanging fruit. Our manual effort involves choosing which expressions to case-analyze on. *)
adamc@238 302
adamc@238 303 (* begin hide *)
adamc@238 304 Require Import MoreDep.
adamc@238 305 (* end hide *)
adamc@238 306
adamc@238 307 Theorem cfold_correct : forall t (e : exp t), expDenote e = expDenote (cfold e).
adam@368 308 (* begin thide *)
adamc@238 309 induction e; crush.
adamc@238 310
adamc@238 311 dep_destruct (cfold e1); crush.
adamc@238 312 dep_destruct (cfold e2); crush.
adamc@238 313
adamc@238 314 dep_destruct (cfold e1); crush.
adamc@238 315 dep_destruct (cfold e2); crush.
adamc@238 316
adamc@238 317 dep_destruct (cfold e1); crush.
adamc@238 318 dep_destruct (cfold e2); crush.
adamc@238 319
adamc@238 320 dep_destruct (cfold e1); crush.
adamc@238 321 dep_destruct (expDenote e1); crush.
adamc@238 322
adamc@238 323 dep_destruct (cfold e); crush.
adamc@238 324
adamc@238 325 dep_destruct (cfold e); crush.
adamc@238 326 Qed.
adamc@238 327
adamc@238 328 (** In this complete proof, it is hard to avoid noticing a pattern. We rework the proof, abstracting over the patterns we find. *)
adamc@238 329
adamc@238 330 Reset cfold_correct.
adamc@238 331
adamc@238 332 Theorem cfold_correct : forall t (e : exp t), expDenote e = expDenote (cfold e).
adamc@238 333 induction e; crush.
adamc@238 334
adamc@238 335 (** The expression we want to destruct here turns out to be the discriminee of a [match], and we can easily enough write a tactic that destructs all such expressions. *)
adamc@238 336
adamc@238 337 Ltac t :=
adamc@238 338 repeat (match goal with
adamc@238 339 | [ |- context[match ?E with NConst _ => _ | Plus _ _ => _
adamc@238 340 | Eq _ _ => _ | BConst _ => _ | And _ _ => _
adamc@238 341 | If _ _ _ _ => _ | Pair _ _ _ _ => _
adamc@238 342 | Fst _ _ _ => _ | Snd _ _ _ => _ end] ] =>
adamc@238 343 dep_destruct E
adamc@238 344 end; crush).
adamc@238 345
adamc@238 346 t.
adamc@238 347
adamc@238 348 (** This tactic invocation discharges the whole case. It does the same on the next two cases, but it gets stuck on the fourth case. *)
adamc@238 349
adamc@238 350 t.
adamc@238 351
adamc@238 352 t.
adamc@238 353
adamc@238 354 t.
adamc@238 355
adamc@238 356 (** The subgoal's conclusion is:
adamc@238 357 [[
adamc@238 358 ============================
adamc@238 359 (if expDenote e1 then expDenote (cfold e2) else expDenote (cfold e3)) =
adamc@238 360 expDenote (if expDenote e1 then cfold e2 else cfold e3)
adamc@238 361 ]]
adamc@238 362
adamc@238 363 We need to expand our [t] tactic to handle this case. *)
adamc@238 364
adamc@238 365 Ltac t' :=
adamc@238 366 repeat (match goal with
adamc@238 367 | [ |- context[match ?E with NConst _ => _ | Plus _ _ => _
adamc@238 368 | Eq _ _ => _ | BConst _ => _ | And _ _ => _
adamc@238 369 | If _ _ _ _ => _ | Pair _ _ _ _ => _
adamc@238 370 | Fst _ _ _ => _ | Snd _ _ _ => _ end] ] =>
adamc@238 371 dep_destruct E
adamc@238 372 | [ |- (if ?E then _ else _) = _ ] => destruct E
adamc@238 373 end; crush).
adamc@238 374
adamc@238 375 t'.
adamc@238 376
adamc@238 377 (** Now the goal is discharged, but [t'] has no effect on the next subgoal. *)
adamc@238 378
adamc@238 379 t'.
adamc@238 380
adamc@238 381 (** A final revision of [t] finishes the proof. *)
adamc@238 382
adamc@238 383 Ltac t'' :=
adamc@238 384 repeat (match goal with
adamc@238 385 | [ |- context[match ?E with NConst _ => _ | Plus _ _ => _
adamc@238 386 | Eq _ _ => _ | BConst _ => _ | And _ _ => _
adamc@238 387 | If _ _ _ _ => _ | Pair _ _ _ _ => _
adamc@238 388 | Fst _ _ _ => _ | Snd _ _ _ => _ end] ] =>
adamc@238 389 dep_destruct E
adamc@238 390 | [ |- (if ?E then _ else _) = _ ] => destruct E
adamc@238 391 | [ |- context[match pairOut ?E with Some _ => _
adamc@238 392 | None => _ end] ] =>
adamc@238 393 dep_destruct E
adamc@238 394 end; crush).
adamc@238 395
adamc@238 396 t''.
adamc@238 397
adamc@238 398 t''.
adamc@238 399 Qed.
adamc@238 400
adam@367 401 (** We can take the final tactic and move it into the initial part of the proof script, arriving at a nicely automated proof. *)
adamc@238 402
adamc@238 403 Reset t.
adamc@238 404
adamc@238 405 Theorem cfold_correct : forall t (e : exp t), expDenote e = expDenote (cfold e).
adamc@238 406 induction e; crush;
adamc@238 407 repeat (match goal with
adamc@238 408 | [ |- context[match ?E with NConst _ => _ | Plus _ _ => _
adamc@238 409 | Eq _ _ => _ | BConst _ => _ | And _ _ => _
adamc@238 410 | If _ _ _ _ => _ | Pair _ _ _ _ => _
adamc@238 411 | Fst _ _ _ => _ | Snd _ _ _ => _ end] ] =>
adamc@238 412 dep_destruct E
adamc@238 413 | [ |- (if ?E then _ else _) = _ ] => destruct E
adamc@238 414 | [ |- context[match pairOut ?E with Some _ => _
adamc@238 415 | None => _ end] ] =>
adamc@238 416 dep_destruct E
adamc@238 417 end; crush).
adamc@238 418 Qed.
adam@368 419 (* end thide *)
adamc@238 420
adam@367 421 (** Even after we put together nice automated proofs, we must deal with specification changes that can invalidate them. It is not generally possible to step through single-tactic proofs interactively. There is a command %\index{Vernacular commands!Debug On}%[Debug On] that lets us step through points in tactic execution, but the debugger tends to make counterintuitive choices of which points we would like to stop at, and per-point output is quite verbose, so most Coq users do not find this debugging mode very helpful. How are we to understand what has broken in a script that used to work?
adamc@240 422
adamc@240 423 An example helps demonstrate a useful approach. Consider what would have happened in our proof of [reassoc_correct] if we had first added an unfortunate rewriting hint. *)
adamc@240 424
adamc@240 425 Reset reassoc_correct.
adamc@240 426
adamc@240 427 Theorem confounder : forall e1 e2 e3,
adamc@240 428 eval e1 * eval e2 * eval e3 = eval e1 * (eval e2 + 1 - 1) * eval e3.
adamc@240 429 crush.
adamc@240 430 Qed.
adamc@240 431
adam@375 432 Hint Rewrite confounder.
adamc@240 433
adamc@240 434 Theorem reassoc_correct : forall e, eval (reassoc e) = eval e.
adam@368 435 (* begin thide *)
adamc@240 436 induction e; crush;
adamc@240 437 match goal with
adamc@240 438 | [ |- context[match ?E with Const _ => _ | Plus _ _ => _ | Mult _ _ => _ end] ] =>
adamc@240 439 destruct E; crush
adamc@240 440 end.
adamc@240 441
adamc@240 442 (** One subgoal remains:
adamc@240 443
adamc@240 444 [[
adamc@240 445 ============================
adamc@240 446 eval e1 * (eval e3 + 1 - 1) * eval e4 = eval e1 * eval e2
adamc@240 447 ]]
adamc@240 448
adam@367 449 The poorly chosen rewrite rule fired, changing the goal to a form where another hint no longer applies. Imagine that we are in the middle of a large development with many hints. How would we diagnose the problem? First, we might not be sure which case of the inductive proof has gone wrong. It is useful to separate out our automation procedure and apply it manually. *)
adamc@240 450
adamc@240 451 Restart.
adamc@240 452
adamc@240 453 Ltac t := crush; match goal with
adamc@240 454 | [ |- context[match ?E with Const _ => _ | Plus _ _ => _
adamc@240 455 | Mult _ _ => _ end] ] =>
adamc@240 456 destruct E; crush
adamc@240 457 end.
adamc@240 458
adamc@240 459 induction e.
adamc@240 460
adam@387 461 (** Since we see the subgoals before any simplification occurs, it is clear that this is the case for constants. Our [t] makes short work of it. *)
adamc@240 462
adamc@240 463 t.
adamc@240 464
adamc@240 465 (** The next subgoal, for addition, is also discharged without trouble. *)
adamc@240 466
adamc@240 467 t.
adamc@240 468
adamc@240 469 (** The final subgoal is for multiplication, and it is here that we get stuck in the proof state summarized above. *)
adamc@240 470
adamc@240 471 t.
adamc@240 472
adam@367 473 (** What is [t] doing to get us to this point? The %\index{tactics!info}%[info] command can help us answer this kind of question. *)
adamc@240 474
adamc@240 475 (** remove printing * *)
adamc@240 476 Undo.
adamc@240 477 info t.
adam@367 478 (** %\vspace{-.15in}%[[
adam@375 479 == simpl in *; intuition; subst; autorewrite with core in *;
adam@375 480 simpl in *; intuition; subst; autorewrite with core in *;
adamc@240 481 simpl in *; intuition; subst; destruct (reassoc e2).
adamc@240 482 simpl in *; intuition.
adamc@240 483
adamc@240 484 simpl in *; intuition.
adamc@240 485
adam@375 486 simpl in *; intuition; subst; autorewrite with core in *;
adamc@240 487 refine (eq_ind_r
adamc@240 488 (fun n : nat =>
adamc@240 489 n * (eval e3 + 1 - 1) * eval e4 = eval e1 * eval e2) _ IHe1);
adam@375 490 autorewrite with core in *; simpl in *; intuition;
adam@375 491 subst; autorewrite with core in *; simpl in *;
adamc@240 492 intuition; subst.
adamc@240 493
adamc@240 494 ]]
adamc@240 495
adamc@240 496 A detailed trace of [t]'s execution appears. Since we are using the very general [crush] tactic, many of these steps have no effect and only occur as instances of a more general strategy. We can copy-and-paste the details to see where things go wrong. *)
adamc@240 497
adamc@240 498 Undo.
adamc@240 499
adamc@240 500 (** We arbitrarily split the script into chunks. The first few seem not to do any harm. *)
adamc@240 501
adam@375 502 simpl in *; intuition; subst; autorewrite with core in *.
adam@375 503 simpl in *; intuition; subst; autorewrite with core in *.
adamc@240 504 simpl in *; intuition; subst; destruct (reassoc e2).
adamc@240 505 simpl in *; intuition.
adamc@240 506 simpl in *; intuition.
adamc@240 507
adamc@240 508 (** The next step is revealed as the culprit, bringing us to the final unproved subgoal. *)
adamc@240 509
adam@375 510 simpl in *; intuition; subst; autorewrite with core in *.
adamc@240 511
adamc@240 512 (** We can split the steps further to assign blame. *)
adamc@240 513
adamc@240 514 Undo.
adamc@240 515
adamc@240 516 simpl in *.
adamc@240 517 intuition.
adamc@240 518 subst.
adam@375 519 autorewrite with core in *.
adamc@240 520
adamc@240 521 (** It was the final of these four tactics that made the rewrite. We can find out exactly what happened. The [info] command presents hierarchical views of proof steps, and we can zoom down to a lower level of detail by applying [info] to one of the steps that appeared in the original trace. *)
adamc@240 522
adamc@240 523 Undo.
adamc@240 524
adam@375 525 info autorewrite with core in *.
adam@367 526 (** %\vspace{-.15in}%[[
adamc@240 527 == refine (eq_ind_r (fun n : nat => n = eval e1 * eval e2) _
adamc@240 528 (confounder (reassoc e1) e3 e4)).
adamc@240 529 ]]
adamc@240 530
adamc@240 531 The way a rewrite is displayed is somewhat baroque, but we can see that theorem [confounder] is the final culprit. At this point, we could remove that hint, prove an alternate version of the key lemma [rewr], or come up with some other remedy. Fixing this kind of problem tends to be relatively easy once the problem is revealed. *)
adamc@240 532
adamc@240 533 Abort.
adam@368 534 (* end thide *)
adamc@240 535
adamc@240 536 (** printing * $\times$ *)
adamc@240 537
adamc@241 538 (** Sometimes a change to a development has undesirable performance consequences, even if it does not prevent any old proof scripts from completing. If the performance consequences are severe enough, the proof scripts can be considered broken for practical purposes.
adamc@241 539
adamc@241 540 Here is one example of a performance surprise. *)
adamc@241 541
adamc@239 542 Section slow.
adamc@239 543 Hint Resolve trans_eq.
adamc@239 544
adamc@241 545 (** The central element of the problem is the addition of transitivity as a hint. With transitivity available, it is easy for proof search to wind up exploring exponential search spaces. We also add a few other arbitrary variables and hypotheses, designed to lead to trouble later. *)
adamc@241 546
adamc@239 547 Variable A : Set.
adamc@239 548 Variables P Q R S : A -> A -> Prop.
adamc@239 549 Variable f : A -> A.
adamc@239 550
adamc@239 551 Hypothesis H1 : forall x y, P x y -> Q x y -> R x y -> f x = f y.
adamc@239 552 Hypothesis H2 : forall x y, S x y -> R x y.
adamc@239 553
adam@367 554 (** We prove a simple lemma very quickly, using the %\index{Vernacular commands!Time}%[Time] command to measure exactly how quickly. *)
adamc@241 555
adamc@239 556 Lemma slow : forall x y, P x y -> Q x y -> S x y -> f x = f y.
adamc@241 557 Time eauto 6.
adam@367 558 (** %\vspace{-.2in}%[[
adamc@241 559 Finished transaction in 0. secs (0.068004u,0.s)
adam@302 560 ]]
adam@302 561 *)
adamc@241 562
adamc@239 563 Qed.
adamc@239 564
adamc@241 565 (** Now we add a different hypothesis, which is innocent enough; in fact, it is even provable as a theorem. *)
adamc@241 566
adamc@239 567 Hypothesis H3 : forall x y, x = y -> f x = f y.
adamc@239 568
adamc@239 569 Lemma slow' : forall x y, P x y -> Q x y -> S x y -> f x = f y.
adamc@241 570 Time eauto 6.
adam@367 571 (** %\vspace{-.2in}%[[
adamc@241 572 Finished transaction in 2. secs (1.264079u,0.s)
adamc@241 573 ]]
adamc@241 574
adamc@241 575 Why has the search time gone up so much? The [info] command is not much help, since it only shows the result of search, not all of the paths that turned out to be worthless. *)
adamc@241 576
adam@368 577 (* begin thide *)
adamc@241 578 Restart.
adamc@241 579 info eauto 6.
adam@367 580 (** %\vspace{-.15in}%[[
adamc@241 581 == intro x; intro y; intro H; intro H0; intro H4;
adamc@241 582 simple eapply trans_eq.
adamc@241 583 simple apply refl_equal.
adamc@241 584
adamc@241 585 simple eapply trans_eq.
adamc@241 586 simple apply refl_equal.
adamc@241 587
adamc@241 588 simple eapply trans_eq.
adamc@241 589 simple apply refl_equal.
adamc@241 590
adamc@241 591 simple apply H1.
adamc@241 592 eexact H.
adamc@241 593
adamc@241 594 eexact H0.
adamc@241 595
adamc@241 596 simple apply H2; eexact H4.
adamc@241 597 ]]
adamc@241 598
adam@367 599 This output does not tell us why proof search takes so long, but it does provide a clue that would be useful if we had forgotten that we added transitivity as a hint. The [eauto] tactic is applying depth-first search, and the proof script where the real action is ends up buried inside a chain of pointless invocations of transitivity, where each invocation uses reflexivity to discharge one subgoal. Each increment to the depth argument to [eauto] adds another silly use of transitivity. This wasted proof effort only adds linear time overhead, as long as proof search never makes false steps. No false steps were made before we added the new hypothesis, but somehow the addition made possible a new faulty path. To understand which paths we enabled, we can use the %\index{tactics!debug}%[debug] command. *)
adamc@241 600
adamc@241 601 Restart.
adamc@241 602 debug eauto 6.
adamc@241 603
adamc@241 604 (** The output is a large proof tree. The beginning of the tree is enough to reveal what is happening:
adamc@241 605 [[
adamc@241 606 1 depth=6
adamc@241 607 1.1 depth=6 intro
adamc@241 608 1.1.1 depth=6 intro
adamc@241 609 1.1.1.1 depth=6 intro
adamc@241 610 1.1.1.1.1 depth=6 intro
adamc@241 611 1.1.1.1.1.1 depth=6 intro
adamc@241 612 1.1.1.1.1.1.1 depth=5 apply H3
adamc@241 613 1.1.1.1.1.1.1.1 depth=4 eapply trans_eq
adamc@241 614 1.1.1.1.1.1.1.1.1 depth=4 apply refl_equal
adamc@241 615 1.1.1.1.1.1.1.1.1.1 depth=3 eapply trans_eq
adamc@241 616 1.1.1.1.1.1.1.1.1.1.1 depth=3 apply refl_equal
adamc@241 617 1.1.1.1.1.1.1.1.1.1.1.1 depth=2 eapply trans_eq
adamc@241 618 1.1.1.1.1.1.1.1.1.1.1.1.1 depth=2 apply refl_equal
adamc@241 619 1.1.1.1.1.1.1.1.1.1.1.1.1.1 depth=1 eapply trans_eq
adamc@241 620 1.1.1.1.1.1.1.1.1.1.1.1.1.1.1 depth=1 apply refl_equal
adamc@241 621 1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1 depth=0 eapply trans_eq
adamc@241 622 1.1.1.1.1.1.1.1.1.1.1.1.1.1.2 depth=1 apply sym_eq ; trivial
adamc@241 623 1.1.1.1.1.1.1.1.1.1.1.1.1.1.2.1 depth=0 eapply trans_eq
adamc@241 624 1.1.1.1.1.1.1.1.1.1.1.1.1.1.3 depth=0 eapply trans_eq
adamc@241 625 1.1.1.1.1.1.1.1.1.1.1.1.2 depth=2 apply sym_eq ; trivial
adamc@241 626 1.1.1.1.1.1.1.1.1.1.1.1.2.1 depth=1 eapply trans_eq
adamc@241 627 1.1.1.1.1.1.1.1.1.1.1.1.2.1.1 depth=1 apply refl_equal
adamc@241 628 1.1.1.1.1.1.1.1.1.1.1.1.2.1.1.1 depth=0 eapply trans_eq
adamc@241 629 1.1.1.1.1.1.1.1.1.1.1.1.2.1.2 depth=1 apply sym_eq ; trivial
adamc@241 630 1.1.1.1.1.1.1.1.1.1.1.1.2.1.2.1 depth=0 eapply trans_eq
adamc@241 631 1.1.1.1.1.1.1.1.1.1.1.1.2.1.3 depth=0 eapply trans_eq
adamc@241 632 ]]
adamc@241 633
adam@367 634 The first choice [eauto] makes is to apply [H3], since [H3] has the fewest hypotheses of all of the hypotheses and hints that match. However, it turns out that the single hypothesis generated is unprovable. That does not stop [eauto] from trying to prove it with an exponentially sized tree of applications of transitivity, reflexivity, and symmetry of equality. It is the children of the initial [apply H3] that account for all of the noticeable time in proof execution. In a more realistic development, we might use this output of [debug] to realize that adding transitivity as a hint was a bad idea. *)
adamc@241 635
adamc@239 636 Qed.
adam@368 637 (* end thide *)
adamc@239 638 End slow.
adamc@239 639
adam@387 640 (** As aggravating as the above situation may be, there is greater aggravation to be had from importing library modules with commands like %\index{Vernacular commands!Require Import}%[Require Import]. Such a command imports not just the Gallina terms from a module, but also all the hints for [auto], [eauto], and [autorewrite]. Some very recent versions of Coq include mechanisms for removing hints from databases, but the proper solution is to be very conservative in exporting hints from modules. Consider putting hints in named databases, so that they may be used only when called upon explicitly, as demonstrated in Chapter 13.
adam@387 641
adam@387 642 It is also easy to end up with a proof script that uses too much memory. As tactics run, they avoid generating proof terms, since serious proof search will consider many possible avenues, and we do not want to build proof terms for subproofs that end up unused. Instead, tactic execution maintains %\index{thunks}\textit{%#<i>#thunks#</i>#%}% (suspended computations, represented with closures), such that a tactic's proof-producing thunk is only executed when we run %\index{Vernacular commands!Qed}%[Qed]. These thunks can use up large amounts of space, such that a proof script exhausts available memory, even when we know that we could have used much less memory by forcing some thunks earlier.
adamc@241 643
adam@367 644 The %\index{tactics!abstract}%[abstract] tactical helps us force thunks by proving some subgoals as their own lemmas. For instance, a proof [induction x; crush] can in many cases be made to use significantly less peak memory by changing it to [induction x; abstract crush]. The main limitation of [abstract] is that it can only be applied to subgoals that are proved completely, with no undetermined unification variables remaining. Still, many large automated proofs can realize vast memory savings via [abstract]. *)
adamc@241 645
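(** Returning to the advice about named hint databases: here is a minimal sketch, where the database name and lemma name are arbitrary and the hint is the standard library's [mult_comm]. Since nothing is added to the default databases, proofs elsewhere in a development are unaffected; the hint fires only where we ask for it by name. *)

Create HintDb cpdt_demo.

Hint Resolve mult_comm : cpdt_demo.

Lemma uses_named_db : forall n m : nat, n * m = m * n.
  auto with cpdt_demo.
Qed.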
adamc@238 646
adamc@235 647 (** * Modules *)
adamc@235 648
adam@367 649 (** Last chapter's examples of proof by reflection demonstrate opportunities for implementing abstract proof strategies with stronger formal guarantees than can be had with Ltac scripting. Coq's %\textit{%#<i>#module system#</i>#%}% provides another tool for more rigorous development of generic theorems. This feature is inspired by the module systems found in Standard ML%~\cite{modules}% and Objective Caml, and the discussion that follows assumes familiarity with the basics of one of those systems.
adamc@242 650
adam@367 651 ML modules facilitate the grouping of %\index{abstract type}%abstract types with operations over those types. Moreover, there is support for %\index{functor}\textit{%#<i>#functors#</i>#%}%, which are functions from modules to modules. A canonical example of a functor is one that builds a data structure implementation from a module that describes a domain of keys and its associated comparison operations.
adamc@242 652
adam@367 653 When we add modules to a base language with dependent types, it becomes possible to use modules and functors to formalize kinds of reasoning that are common in algebra. For instance, this module signature captures the essence of the algebraic structure known as a group. A group consists of a carrier set [G], an associative binary operation [f], a left identity element [e] for [f], and an operation [i] that is a left inverse for [f].%\index{Vernacular commands!Module Type}% *)
adamc@242 654
adamc@235 655 Module Type GROUP.
adamc@235 656 Parameter G : Set.
adamc@235 657 Parameter f : G -> G -> G.
adamc@235 658 Parameter e : G.
adamc@235 659 Parameter i : G -> G.
adamc@235 660
adamc@235 661 Axiom assoc : forall a b c, f (f a b) c = f a (f b c).
adamc@235 662 Axiom ident : forall a, f e a = a.
adamc@235 663 Axiom inverse : forall a, f (i a) a = e.
adamc@235 664 End GROUP.
adamc@235 665
adam@367 666 (** Many useful theorems hold of arbitrary groups. We capture some such theorem statements in another module signature.%\index{Vernacular commands!Declare Module}% *)
adamc@242 667
adamc@235 668 Module Type GROUP_THEOREMS.
adamc@235 669 Declare Module M : GROUP.
adamc@235 670
adamc@235 671 Axiom ident' : forall a, M.f a M.e = a.
adamc@235 672
adamc@235 673 Axiom inverse' : forall a, M.f a (M.i a) = M.e.
adamc@235 674
adamc@235 675 Axiom unique_ident : forall e', (forall a, M.f e' a = a) -> e' = M.e.
adamc@235 676 End GROUP_THEOREMS.
adamc@235 677
adam@387 678 (** We implement generic proofs of these theorems with a functor, whose input is an arbitrary group [M]. %\index{Vernacular commands!Module}% *)
adamc@242 679
adam@387 680 Module GroupProofs (M : GROUP) : GROUP_THEOREMS with Module M := M.
adam@387 681 (** As in ML, Coq provides multiple options for ascribing signatures to modules. Here we use just the colon operator, which implements %\index{opaque ascription}\emph{%#<i>#opaque ascription#</i>#%}%, hiding all details of the module not exposed by the signature. Another option is %\index{transparent ascription}\emph{%#<i>#transparent ascription#</i>#%}% via the [<:] operator, which checks for signature compatibility without hiding implementation details. Here we stick with opaque ascription but employ the [with] operation to add more detail to a signature, exposing just those implementation details that we need to. For instance, here we expose the underlying group representation set and operator definitions. Without such a refinement, we would get an output module proving theorems about some unknown group, which is not very useful. Also note that opaque ascription can in Coq have some undesirable consequences without analogues in ML, since not just the types but also the %\emph{%#<i>#definitions#</i>#%}% of identifiers have significance in type checking and theorem proving. *)
adam@387 682
adamc@235 683 Module M := M.
adam@387 684 (** To ensure that the module we are building meets the [GROUP_THEOREMS] signature, we add an extra local name for [M], the functor argument. *)
adamc@235 685
adamc@235 686 Import M.
adam@387 687 (** It would be inconvenient to repeat the prefix [M.] everywhere in our theorem statements and proofs, so we bring all the identifiers of [M] into the local scope unqualified.
adam@387 688
adam@387 689 Now we are ready to prove the three theorems. The proofs are completely manual, which may seem ironic given the content of the previous sections! This illustrates another lesson, which is that short proof scripts that change infrequently may be worth leaving unautomated. It would take some effort to build suitable generic automation for these theorems about groups, so I stick with manual proof scripts to avoid distracting us from the main message of the section. We take the proofs from the Wikipedia page on elementary group theory. *)
adamc@235 690
adamc@235 691 Theorem inverse' : forall a, f a (i a) = e.
adamc@235 692 intro.
adamc@235 693 rewrite <- (ident (f a (i a))).
adamc@235 694 rewrite <- (inverse (f a (i a))) at 1.
adamc@235 695 rewrite assoc.
adamc@235 696 rewrite assoc.
adamc@235 697 rewrite <- (assoc (i a) a (i a)).
adamc@235 698 rewrite inverse.
adamc@235 699 rewrite ident.
adamc@235 700 apply inverse.
adamc@235 701 Qed.
adamc@235 702
adamc@235 703 Theorem ident' : forall a, f a e = a.
adamc@235 704 intro.
adamc@235 705 rewrite <- (inverse a).
adamc@235 706 rewrite <- assoc.
adamc@235 707 rewrite inverse'.
adamc@235 708 apply ident.
adamc@235 709 Qed.
adamc@235 710
adamc@235 711 Theorem unique_ident : forall e', (forall a, M.f e' a = a) -> e' = M.e.
adamc@235 712 intros.
adamc@235 713 rewrite <- (H e).
adamc@235 714 symmetry.
adamc@235 715 apply ident'.
adamc@235 716 Qed.
adam@387 717 End GroupProofs.
adamc@239 718
adamc@242 719 (** We can show that the integers with [+] form a group. *)
adamc@242 720
adamc@239 721 Require Import ZArith.
adamc@239 722 Open Scope Z_scope.
adamc@239 723
adamc@239 724 Module Int.
adamc@239 725 Definition G := Z.
adamc@239 726 Definition f x y := x + y.
adamc@239 727 Definition e := 0.
adamc@239 728 Definition i x := -x.
adamc@239 729
adamc@239 730 Theorem assoc : forall a b c, f (f a b) c = f a (f b c).
adamc@239 731 unfold f; crush.
adamc@239 732 Qed.
adamc@239 733 Theorem ident : forall a, f e a = a.
adamc@239 734 unfold f, e; crush.
adamc@239 735 Qed.
adamc@239 736 Theorem inverse : forall a, f (i a) a = e.
adamc@239 737 unfold f, i, e; crush.
adamc@239 738 Qed.
adamc@239 739 End Int.
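(** As a quick illustration of transparent ascription, mentioned above when we introduced the [<:] operator: the line below merely checks [Int] against the [GROUP] signature without hiding any of its definitions. The module name is arbitrary, and nothing later depends on it. *)

Module IntAsGroup <: GROUP := Int.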
adamc@239 740
adamc@242 741 (** Next, we can produce integer-specific versions of the generic group theorems. *)
adamc@242 742
adam@387 743 Module IntProofs := GroupProofs(Int).
adamc@239 744
adam@387 745 Check IntProofs.unique_ident.
adamc@242 746 (** %\vspace{-.15in}% [[
adam@387 747 IntProofs.unique_ident
adamc@242 748 : forall e' : Int.G, (forall a : Int.G, Int.f e' a = a) -> e' = Int.e
adam@302 749 ]]
adam@367 750
adam@367 751 Projections like [Int.G] are known to be definitionally equal to the concrete values we have assigned to them, so the above theorem yields as a trivial corollary the following more natural restatement: *)
adamc@239 752
adamc@239 753 Theorem unique_ident : forall e', (forall a, e' + a = a) -> e' = 0.
adam@368 754 (* begin thide *)
adam@387 755 exact IntProofs.unique_ident.
adamc@239 756 Qed.
adam@368 757 (* end thide *)
adamc@242 758
(** As in ML, the module system provides an effective way to structure large developments. Unlike in ML, Coq modules add no expressiveness; we can implement any module as an inhabitant of a dependent record type, as sketched below. It is the second-class nature of modules that makes them easier to use than dependent records in many cases. Because modules may only be used in quite restricted ways, it is easier to support convenient module coding through special commands and editing modes, as the above example demonstrates. An isomorphic implementation with records would have suffered from lack of such conveniences as module subtyping and importation of the fields of a module. On the other hand, all module values must be determined statically, so modules may not be computed, e.g., within the definitions of normal functions, based on particular function parameters. *)
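(** To make the comparison concrete, here is a sketch of the record-based alternative just mentioned; all of the names are arbitrary, and nothing later depends on these definitions. *)

Record group : Type := Group {
  carrier : Set;
  op : carrier -> carrier -> carrier;
  unit_elt : carrier;
  inv : carrier -> carrier;
  assoc_law : forall a b c, op (op a b) c = op a (op b c);
  unit_law : forall a, op unit_elt a = a;
  inv_law : forall a, op (inv a) a = unit_elt
}.

(** The integer instance is built by applying the record constructor to the components of [Int]. Every later use must go through explicit projections like [op] and [unit_elt], where the module-based version let us write [Int.f] and [Int.e], or even import them and drop the qualifiers. *)

Definition int_group : group :=
  Group Int.G Int.f Int.e Int.i Int.assoc Int.ident Int.inverse.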
adamc@243 760
adamc@243 761
adamc@243 762 (** * Build Processes *)
adamc@243 763
adamc@243 764 (** As in software development, large Coq projects are much more manageable when split across multiple files and when decomposed into libraries. Coq and Proof General provide very good support for these activities.
adamc@243 765
adam@367 766 Consider a library that we will name [Lib], housed in directory %\texttt{%#<tt>#LIB#</tt>#%}% and split between files %\texttt{%#<tt>#A.v#</tt>#%}%, %\texttt{%#<tt>#B.v#</tt>#%}%, and %\texttt{%#<tt>#C.v#</tt>#%}%. A simple %\index{Makefile}%Makefile will compile the library, relying on the standard Coq tool %\index{coq\_makefile}\texttt{%#<tt>#coq_makefile#</tt>#%}% to do the hard work.
adamc@243 767
adamc@243 768 <<
adamc@243 769 MODULES := A B C
adamc@243 770 VS := $(MODULES:%=%.v)
adamc@243 771
adamc@243 772 .PHONY: coq clean
adamc@243 773
adamc@243 774 coq: Makefile.coq
adam@369 775 $(MAKE) -f Makefile.coq
adamc@243 776
adamc@243 777 Makefile.coq: Makefile $(VS)
adamc@243 778 coq_makefile -R . Lib $(VS) -o Makefile.coq
adamc@243 779
adamc@243 780 clean:: Makefile.coq
adam@369 781 $(MAKE) -f Makefile.coq clean
adamc@243 782 rm -f Makefile.coq
adamc@243 783 >>
adamc@243 784
adamc@243 785 The Makefile begins by defining a variable %\texttt{%#<tt>#VS#</tt>#%}% holding the list of filenames to be included in the project. The primary target is %\texttt{%#<tt>#coq#</tt>#%}%, which depends on the construction of an auxiliary Makefile called %\texttt{%#<tt>#Makefile.coq#</tt>#%}%. Another rule explains how to build that file. We call %\texttt{%#<tt>#coq_makefile#</tt>#%}%, using the %\texttt{%#<tt>#-R#</tt>#%}% flag to specify that files in the current directory should be considered to belong to the library [Lib]. This Makefile will build a compiled version of each module, such that %\texttt{%#<tt>#X.v#</tt>#%}% is compiled into %\texttt{%#<tt>#X.vo#</tt>#%}%.
adamc@243 786
adamc@243 787 Now code in %\texttt{%#<tt>#B.v#</tt>#%}% may refer to definitions in %\texttt{%#<tt>#A.v#</tt>#%}% after running
adamc@243 788 [[
adamc@243 789 Require Import Lib.A.
adam@367 790 ]]
adam@367 791 %\vspace{-.15in}%Library [Lib] is presented as a module, containing a submodule [A], which contains the definitions from %\texttt{%#<tt>#A.v#</tt>#%}%. These are genuine modules in the sense of Coq's module system, and they may be passed to functors and so on.
adamc@243 792
adam@367 793 The command [Require Import] is a convenient combination of two more primitive commands. The %\index{Vernacular commands!Require}%[Require] command finds the %\texttt{%#<tt>#.vo#</tt>#%}% file containing the named module, ensuring that the module is loaded into memory. The %\index{Vernacular commands!Import}%[Import] command loads all top-level definitions of the named module into the current namespace, and it may be used with local modules that do not have corresponding %\texttt{%#<tt>#.vo#</tt>#%}% files. Another command, %\index{Vernacular commands!Load}%[Load], is for inserting the contents of a named file verbatim. It is generally better to use the module-based commands, since they avoid rerunning proof scripts, and they facilitate reorganization of directory structure without the need to change code.
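For instance, the two stages may be invoked separately:
[[
Require Lib.A.

Import Lib.A.
]]
%\vspace{-.15in}%After the [Require] alone, definitions are available only under qualified names prefixed by [Lib.A.]; the subsequent [Import] makes the same definitions available unqualified. (This split is shown only for illustration; the combined [Require Import] form remains the common case.)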
adamc@243 794
adamc@243 795 Now we would like to use our library from a different development, called [Client] and found in directory %\texttt{%#<tt>#CLIENT#</tt>#%}%, which has its own Makefile.
adamc@243 796
adamc@243 797 <<
adamc@243 798 MODULES := D E
adamc@243 799 VS := $(MODULES:%=%.v)
adamc@243 800
adamc@243 801 .PHONY: coq clean
adamc@243 802
adamc@243 803 coq: Makefile.coq
adam@369 804 $(MAKE) -f Makefile.coq
adamc@243 805
adamc@243 806 Makefile.coq: Makefile $(VS)
adamc@243 807 coq_makefile -R LIB Lib -R . Client $(VS) -o Makefile.coq
adamc@243 808
adamc@243 809 clean:: Makefile.coq
adam@369 810 $(MAKE) -f Makefile.coq clean
adamc@243 811 rm -f Makefile.coq
adamc@243 812 >>
adamc@243 813
adamc@243 814 We change the %\texttt{%#<tt>#coq_makefile#</tt>#%}% call to indicate where the library [Lib] is found. Now %\texttt{%#<tt>#D.v#</tt>#%}% and %\texttt{%#<tt>#E.v#</tt>#%}% can refer to definitions from [Lib] module [A] after running
adamc@243 815 [[
adamc@243 816 Require Import Lib.A.
adamc@243 817 ]]
adam@367 818 %\vspace{-.15in}\noindent{}%and %\texttt{%#<tt>#E.v#</tt>#%}% can refer to definitions from %\texttt{%#<tt>#D.v#</tt>#%}% by running
adamc@243 819 [[
adamc@243 820 Require Import Client.D.
adamc@243 821 ]]
adam@367 822 %\vspace{-.15in}%It can be useful to split a library into several files, but it is also inconvenient for client code to import library modules individually. We can get the best of both worlds by, for example, adding an extra source file %\texttt{%#<tt>#Lib.v#</tt>#%}% to [Lib]'s directory and Makefile, where that file contains just this line:%\index{Vernacular commands!Require Export}%
adamc@243 823 [[
adamc@243 824 Require Export Lib.A Lib.B Lib.C.
adamc@243 825 ]]
adam@367 826 %\vspace{-.15in}%Now client code can import all definitions from all of [Lib]'s modules simply by running
adamc@243 827 [[
adamc@243 828 Require Import Lib.
adamc@243 829 ]]
adam@367 830 %\vspace{-.15in}%The two Makefiles above share a lot of code, so, in practice, it is useful to define a common Makefile that is included by multiple library-specific Makefiles.
adamc@243 831
adamc@243 832 %\medskip%
adamc@243 833
adamc@243 834 The remaining ingredient is the proper way of editing library code files in Proof General. Recall this snippet of %\texttt{%#<tt>#.emacs#</tt>#%}% code from Chapter 2, which tells Proof General where to find the library associated with this book.
adamc@243 835
adamc@243 836 <<
adamc@243 837 (custom-set-variables
adamc@243 838 ...
adamc@243 839 '(coq-prog-args '("-I" "/path/to/cpdt/src"))
adamc@243 840 ...
adamc@243 841 )
adamc@243 842 >>
adamc@243 843
adamc@243 844 To do interactive editing of our current example, we just need to change the flags to point to the right places.
adamc@243 845
adamc@243 846 <<
adamc@243 847 (custom-set-variables
adamc@243 848 ...
adamc@243 849 ; '(coq-prog-args '("-I" "/path/to/cpdt/src"))
adamc@243 850 '(coq-prog-args '("-R" "LIB" "Lib" "-R" "CLIENT" "Client"))
adamc@243 851 ...
adamc@243 852 )
adamc@243 853 >>
adamc@243 854
adamc@243 855 When working on multiple projects, it is useful to leave multiple versions of this setting in your %\texttt{%#<tt>#.emacs#</tt>#%}% file, commenting out all but one of them at any moment in time. To switch between projects, change the commenting structure and restart Emacs. *)