Mercurial > cpdt > repo
comparison src/Large.v @ 413:6f0f80ffd5b6
Typesetting pass over Large
author | Adam Chlipala <adam@chlipala.net>
---|---
date | Fri, 08 Jun 2012 15:44:54 -0400
parents | 05efde66559d
children | 5f25705a10ea
412:d32de711635c | 413:6f0f80ffd5b6 |
---|---|
101 rewrite IHe2. | 101 rewrite IHe2. |
102 rewrite mult_plus_distr_l. | 102 rewrite mult_plus_distr_l. |
103 trivial. | 103 trivial. |
104 Qed. | 104 Qed. |
105 | 105 |
106 (** We pass %\index{tactics!induction}%[induction] an %\index{intro pattern}%_intro pattern_, using a [|] character to separate out instructions for the different inductive cases. Within a case, we write [?] to ask Coq to generate a name automatically, and we write an explicit name to assign that name to the corresponding new variable. It is apparent that, to use intro patterns to avoid proof brittleness, one needs to keep track of the seemingly unimportant details of the order in which variables are introduced. Thus, the script keeps working if we replace [e] by [x], but it has become more cluttered. Arguably, neither proof is particularly easy to follow. | 106 (** We pass %\index{tactics!induction}%[induction] an%\index{intro pattern}% _intro pattern_, using a [|] character to separate out instructions for the different inductive cases. Within a case, we write [?] to ask Coq to generate a name automatically, and we write an explicit name to assign that name to the corresponding new variable. It is apparent that, to use intro patterns to avoid proof brittleness, one needs to keep track of the seemingly unimportant details of the order in which variables are introduced. Thus, the script keeps working if we replace [e] by [x], but it has become more cluttered. Arguably, neither proof is particularly easy to follow. |
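The intro-pattern convention described above can be sketched for a hypothetical three-constructor expression type; the constructor argument placeholders and the names [IHe1]/[IHe2] below are illustrative assumptions, not names taken from this development:

```coq
(* For an [e : exp] built from [Const], [Plus], and [Mult], the [|]s
   separate the three inductive cases; [?] lets Coq invent each name,
   while [IHe1] and [IHe2] pin down the inductive hypotheses that the
   rest of the script will mention. *)
induction e as [ ? | ? IHe1 ? IHe2 | ? IHe1 ? IHe2 ]; crush.
```

The brittleness discussed above comes from exactly this pattern: its shape must track the constructor list and argument order of the inductive type.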
107 | 107 |
108 That category of complaint has to do with understanding proofs as static artifacts. As with programming in general, with serious projects, it tends to be much more important to be able to support evolution of proofs as specifications change. Unstructured proofs like the above examples can be very hard to update in concert with theorem statements. For instance, consider how the last proof script plays out when we modify [times] to introduce a bug. *) | 108 That category of complaint has to do with understanding proofs as static artifacts. As with programming in general, with serious projects, it tends to be much more important to be able to support evolution of proofs as specifications change. Unstructured proofs like the above examples can be very hard to update in concert with theorem statements. For instance, consider how the last proof script plays out when we modify [times] to introduce a bug. *) |
109 | 109 |
110 Reset times. | 110 Reset times. |
111 | 111 |
245 | 245 |
246 Theorem reassoc_correct : forall e, eval (reassoc e) = eval e. | 246 Theorem reassoc_correct : forall e, eval (reassoc e) = eval e. |
247 (* begin thide *) | 247 (* begin thide *) |
248 induction e; crush; | 248 induction e; crush; |
249 match goal with | 249 match goal with |
250 | [ |- context[match ?E with Const _ => _ | Plus _ _ => _ | Mult _ _ => _ end] ] => | 250 | [ |- context[match ?E with Const _ => _ | _ => _ end] ] => |
251 destruct E; crush | 251 destruct E; crush |
252 end. | 252 end. |
253 | 253 |
254 (** One subgoal remains: | 254 (** One subgoal remains: |
255 [[ | 255 [[ |
273 Hint Resolve rewr. | 273 Hint Resolve rewr. |
274 | 274 |
275 Theorem reassoc_correct : forall e, eval (reassoc e) = eval e. | 275 Theorem reassoc_correct : forall e, eval (reassoc e) = eval e. |
276 induction e; crush; | 276 induction e; crush; |
277 match goal with | 277 match goal with |
278 | [ |- context[match ?E with Const _ => _ | Plus _ _ => _ | Mult _ _ => _ end] ] => | 278 | [ |- context[match ?E with Const _ => _ | _ => _ end] ] => |
279 destruct E; crush | 279 destruct E; crush |
280 end. | 280 end. |
281 Qed. | 281 Qed. |
282 (* end thide *) | 282 (* end thide *) |
283 | 283 |
284 (** In the limit, a complicated inductive proof might rely on one hint for each inductive case. The lemma for each hint could restate the associated case. Compared to manual proof scripts, we arrive at more readable results. Scripts no longer need to depend on the order in which cases are generated. The lemmas are easier to digest separately than are fragments of tactic code, since lemma statements include complete proof contexts. Such contexts can only be extracted from monolithic manual proofs by stepping through scripts interactively. | 284 (** In the limit, a complicated inductive proof might rely on one hint for each inductive case. The lemma for each hint could restate the associated case. Compared to manual proof scripts, we arrive at more readable results. Scripts no longer need to depend on the order in which cases are generated. The lemmas are easier to digest separately than are fragments of tactic code, since lemma statements include complete proof contexts. Such contexts can only be extracted from monolithic manual proofs by stepping through scripts interactively. |
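A minimal sketch of the one-hint-per-case discipline, with a hypothetical lemma name and a proof elided by [Admitted]; the point is only that the lemma statement restates its case's full context, so the top-level script need not:

```coq
(* Hypothetical: a hint lemma restating the [Plus] case in full. *)
Lemma reassoc_Plus : forall e1 e2,
  eval (reassoc e1) = eval e1
  -> eval (reassoc e2) = eval e2
  -> eval (reassoc (Plus e1 e2)) = eval (Plus e1 e2).
Admitted.  (* proof elided in this sketch *)

Hint Resolve reassoc_Plus.

(* The main proof then stays order-independent:
   [induction e; crush.] *)
```

Because the lemma is self-contained, a reader can digest it without stepping through the main script interactively.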
285 | 285 |
286 The more common situation is that a large induction has several easy cases that automation makes short work of. In the remaining cases, automation performs some standard simplification. Among these cases, some may require quite involved proofs; such a case may deserve a hint lemma of its own, where the lemma statement may copy the simplified version of the case. Alternatively, the proof script for the main theorem may be extended with some automation code targeted at the specific case. Even such targeted scripting is more desirable than manual proving, because it may be read and understood without knowledge of a proof's hierarchical structure, case ordering, or name binding structure. | 286 The more common situation is that a large induction has several easy cases that automation makes short work of. In the remaining cases, automation performs some standard simplification. Among these cases, some may require quite involved proofs; such a case may deserve a hint lemma of its own, where the lemma statement may copy the simplified version of the case. Alternatively, the proof script for the main theorem may be extended with some automation code targeted at the specific case. Even such targeted scripting is more desirable than manual proving, because it may be read and understood without knowledge of a proof's hierarchical structure, case ordering, or name binding structure. |
287 | 287 |
288 A competing alternative to the common style of Coq tactics is the %\index{declarative proof scripts}%_declarative_ style, most frequently associated today with the %\index{Isar}%Isar%~\cite{Isar}% language. A declarative proof script is very explicit about subgoal structure and introduction of local names, aiming for human readability. The coding of proof automation is taken to be outside the scope of the proof language, an assumption related to the idea that it is not worth building new automation for each serious theorem. I have shown in this book many examples of theorem-specific automation, which I believe is crucial for scaling to significant results. Declarative proof scripts make it easier to read scripts to modify them for theorem statement changes, but the alternate %\index{adaptive proof scripts}%_adaptive_ style from this book allows use of the _same_ scripts for many versions of a theorem. | 288 A competing alternative to the common style of Coq tactics is the%\index{declarative proof scripts}% _declarative_ style, most frequently associated today with the %\index{Isar}%Isar%~\cite{Isar}% language. A declarative proof script is very explicit about subgoal structure and introduction of local names, aiming for human readability. The coding of proof automation is taken to be outside the scope of the proof language, an assumption related to the idea that it is not worth building new automation for each serious theorem. I have shown in this book many examples of theorem-specific automation, which I believe is crucial for scaling to significant results. Declarative proof scripts make it easier to read scripts to modify them for theorem statement changes, but the alternate%\index{adaptive proof scripts}% _adaptive_ style from this book allows use of the _same_ scripts for many versions of a theorem. |
289 | 289 |
290 Perhaps I am a pessimist for thinking that fully formal proofs will inevitably consist of details that are uninteresting to people, but it is my preference to focus on conveying proof-specific details through choice of lemmas. Additionally, adaptive Ltac scripts contain bits of automation that can be understood in isolation. For instance, in a big [repeat match] loop, each case can generally be digested separately, which is a big contrast from trying to understand the hierarchical structure of a script in a more common style. Adaptive scripts rely on variable binding, but generally only over very small scopes, whereas understanding a traditional script requires tracking the identities of local variables potentially across pages of code. | 290 Perhaps I am a pessimist for thinking that fully formal proofs will inevitably consist of details that are uninteresting to people, but it is my preference to focus on conveying proof-specific details through choice of lemmas. Additionally, adaptive Ltac scripts contain bits of automation that can be understood in isolation. For instance, in a big [repeat match] loop, each case can generally be digested separately, which is a big contrast from trying to understand the hierarchical structure of a script in a more common style. Adaptive scripts rely on variable binding, but generally only over very small scopes, whereas understanding a traditional script requires tracking the identities of local variables potentially across pages of code. |
291 | 291 |
292 One might also wonder why it makes sense to prove all theorems automatically (in the sense of adaptive proof scripts) but not construct all programs automatically. My view there is that _program synthesis_ is a very useful idea that deserves broader application! In practice, there are difficult obstacles in the way of finding a program automatically from its specification. A typical specification is not exhaustive in its description of program properties. For instance, details of performance on particular machine architectures are often omitted. As a result, a synthesized program may be correct in some sense while suffering from deficiencies in other senses. Program synthesis research will continue to come up with ways of dealing with this problem, but the situation for theorem proving is fundamentally different. Following mathematical practice, the only property of a formal proof that we care about is which theorem it proves, and it is trivial to check this property automatically. In other words, with a simple criterion for what makes a proof acceptable, automatic search is straightforward. Of course, in practice we also care about understandability of proofs to facilitate long-term maintenance; that is just what the techniques outlined above are meant to support, and the next section gives some related advice. *) | 292 One might also wonder why it makes sense to prove all theorems automatically (in the sense of adaptive proof scripts) but not construct all programs automatically. My view there is that _program synthesis_ is a very useful idea that deserves broader application! In practice, there are difficult obstacles in the way of finding a program automatically from its specification. A typical specification is not exhaustive in its description of program properties. For instance, details of performance on particular machine architectures are often omitted. As a result, a synthesized program may be correct in some sense while suffering from deficiencies in other senses. Program synthesis research will continue to come up with ways of dealing with this problem, but the situation for theorem proving is fundamentally different. Following mathematical practice, the only property of a formal proof that we care about is which theorem it proves, and it is trivial to check this property automatically. In other words, with a simple criterion for what makes a proof acceptable, automatic search is straightforward. Of course, in practice we also care about understandability of proofs to facilitate long-term maintenance; that is just what the techniques outlined above are meant to support, and the next section gives some related advice. *) |
293 | 293 |
334 | 334 |
335 (** The expression we want to destruct here turns out to be the discriminee of a [match], and we can easily enough write a tactic that destructs all such expressions. *) | 335 (** The expression we want to destruct here turns out to be the discriminee of a [match], and we can easily enough write a tactic that destructs all such expressions. *) |
336 | 336 |
337 Ltac t := | 337 Ltac t := |
338 repeat (match goal with | 338 repeat (match goal with |
339 | [ |- context[match ?E with NConst _ => _ | Plus _ _ => _ | 339 | [ |- context[match ?E with NConst _ => _ | _ => _ end] ] => |
340 | Eq _ _ => _ | BConst _ => _ | And _ _ => _ | |
341 | If _ _ _ _ => _ | Pair _ _ _ _ => _ | |
342 | Fst _ _ _ => _ | Snd _ _ _ => _ end] ] => | |
343 dep_destruct E | 340 dep_destruct E |
344 end; crush). | 341 end; crush). |
345 | 342 |
346 t. | 343 t. |
347 | 344 |
362 | 359 |
363 We need to expand our [t] tactic to handle this case. *) | 360 We need to expand our [t] tactic to handle this case. *) |
364 | 361 |
365 Ltac t' := | 362 Ltac t' := |
366 repeat (match goal with | 363 repeat (match goal with |
367 | [ |- context[match ?E with NConst _ => _ | Plus _ _ => _ | 364 | [ |- context[match ?E with NConst _ => _ | _ => _ end] ] => |
368 | Eq _ _ => _ | BConst _ => _ | And _ _ => _ | |
369 | If _ _ _ _ => _ | Pair _ _ _ _ => _ | |
370 | Fst _ _ _ => _ | Snd _ _ _ => _ end] ] => | |
371 dep_destruct E | 365 dep_destruct E |
372 | [ |- (if ?E then _ else _) = _ ] => destruct E | 366 | [ |- (if ?E then _ else _) = _ ] => destruct E |
373 end; crush). | 367 end; crush). |
374 | 368 |
375 t'. | 369 t'. |
380 | 374 |
381 (** A final revision of [t] finishes the proof. *) | 375 (** A final revision of [t] finishes the proof. *) |
382 | 376 |
383 Ltac t'' := | 377 Ltac t'' := |
384 repeat (match goal with | 378 repeat (match goal with |
385 | [ |- context[match ?E with NConst _ => _ | Plus _ _ => _ | 379 | [ |- context[match ?E with NConst _ => _ | _ => _ end] ] => |
386 | Eq _ _ => _ | BConst _ => _ | And _ _ => _ | |
387 | If _ _ _ _ => _ | Pair _ _ _ _ => _ | |
388 | Fst _ _ _ => _ | Snd _ _ _ => _ end] ] => | |
389 dep_destruct E | 380 dep_destruct E |
390 | [ |- (if ?E then _ else _) = _ ] => destruct E | 381 | [ |- (if ?E then _ else _) = _ ] => destruct E |
391 | [ |- context[match pairOut ?E with Some _ => _ | 382 | [ |- context[match pairOut ?E with Some _ => _ |
392 | None => _ end] ] => | 383 | None => _ end] ] => |
393 dep_destruct E | 384 dep_destruct E |
403 Reset t. | 394 Reset t. |
404 | 395 |
405 Theorem cfold_correct : forall t (e : exp t), expDenote e = expDenote (cfold e). | 396 Theorem cfold_correct : forall t (e : exp t), expDenote e = expDenote (cfold e). |
406 induction e; crush; | 397 induction e; crush; |
407 repeat (match goal with | 398 repeat (match goal with |
408 | [ |- context[match ?E with NConst _ => _ | Plus _ _ => _ | 399 | [ |- context[match ?E with NConst _ => _ | _ => _ end] ] => |
409 | Eq _ _ => _ | BConst _ => _ | And _ _ => _ | |
410 | If _ _ _ _ => _ | Pair _ _ _ _ => _ | |
411 | Fst _ _ _ => _ | Snd _ _ _ => _ end] ] => | |
412 dep_destruct E | 400 dep_destruct E |
413 | [ |- (if ?E then _ else _) = _ ] => destruct E | 401 | [ |- (if ?E then _ else _) = _ ] => destruct E |
414 | [ |- context[match pairOut ?E with Some _ => _ | 402 | [ |- context[match pairOut ?E with Some _ => _ |
415 | None => _ end] ] => | 403 | None => _ end] ] => |
416 dep_destruct E | 404 dep_destruct E |
433 | 421 |
434 Theorem reassoc_correct : forall e, eval (reassoc e) = eval e. | 422 Theorem reassoc_correct : forall e, eval (reassoc e) = eval e. |
435 (* begin thide *) | 423 (* begin thide *) |
436 induction e; crush; | 424 induction e; crush; |
437 match goal with | 425 match goal with |
438 | [ |- context[match ?E with Const _ => _ | Plus _ _ => _ | Mult _ _ => _ end] ] => | 426 | [ |- context[match ?E with Const _ => _ | _ => _ end] ] => |
439 destruct E; crush | 427 destruct E; crush |
440 end. | 428 end. |
441 | 429 |
442 (** One subgoal remains: | 430 (** One subgoal remains: |
443 | 431 |
449 The poorly chosen rewrite rule fired, changing the goal to a form where another hint no longer applies. Imagine that we are in the middle of a large development with many hints. How would we diagnose the problem? First, we might not be sure which case of the inductive proof has gone wrong. It is useful to separate out our automation procedure and apply it manually. *) | 437 The poorly chosen rewrite rule fired, changing the goal to a form where another hint no longer applies. Imagine that we are in the middle of a large development with many hints. How would we diagnose the problem? First, we might not be sure which case of the inductive proof has gone wrong. It is useful to separate out our automation procedure and apply it manually. *) |
450 | 438 |
451 Restart. | 439 Restart. |
452 | 440 |
453 Ltac t := crush; match goal with | 441 Ltac t := crush; match goal with |
454 | [ |- context[match ?E with Const _ => _ | Plus _ _ => _ | 442 | [ |- context[match ?E with Const _ => _ | _ => _ end] ] => |
455 | Mult _ _ => _ end] ] => | |
456 destruct E; crush | 443 destruct E; crush |
457 end. | 444 end. |
458 | 445 |
459 induction e. | 446 induction e. |
460 | 447 |
470 | 457 |
471 t. | 458 t. |
472 | 459 |
473 (** What is [t] doing to get us to this point? The %\index{tactics!info}%[info] command can help us answer this kind of question. *) | 460 (** What is [t] doing to get us to this point? The %\index{tactics!info}%[info] command can help us answer this kind of question. *) |
474 | 461 |
475 (** remove printing * *) | |
476 Undo. | 462 Undo. |
477 info t. | 463 info t. |
464 | |
478 (** %\vspace{-.15in}%[[ | 465 (** %\vspace{-.15in}%[[ |
479 == simpl in *; intuition; subst; autorewrite with core in *; | 466 == simpl in *; intuition; subst; autorewrite with core in *; |
480 simpl in *; intuition; subst; autorewrite with core in *; | 467 simpl in *; intuition; subst; autorewrite with core in *; |
481 simpl in *; intuition; subst; destruct (reassoc e2). | 468 simpl in *; intuition; subst; destruct (reassoc e2). |
482 simpl in *; intuition. | 469 simpl in *; intuition. |
530 | 517 |
531 The way a rewrite is displayed is somewhat baroque, but we can see that theorem [confounder] is the final culprit. At this point, we could remove that hint, prove an alternate version of the key lemma [rewr], or come up with some other remedy. Fixing this kind of problem tends to be relatively easy once the problem is revealed. *) | 518 The way a rewrite is displayed is somewhat baroque, but we can see that theorem [confounder] is the final culprit. At this point, we could remove that hint, prove an alternate version of the key lemma [rewr], or come up with some other remedy. Fixing this kind of problem tends to be relatively easy once the problem is revealed. *) |
532 | 519 |
533 Abort. | 520 Abort. |
534 (* end thide *) | 521 (* end thide *) |
535 | |
536 (** printing * $\times$ *) | |
537 | 522 |
538 (** Sometimes a change to a development has undesirable performance consequences, even if it does not prevent any old proof scripts from completing. If the performance consequences are severe enough, the proof scripts can be considered broken for practical purposes. | 523 (** Sometimes a change to a development has undesirable performance consequences, even if it does not prevent any old proof scripts from completing. If the performance consequences are severe enough, the proof scripts can be considered broken for practical purposes. |
539 | 524 |
540 Here is one example of a performance surprise. *) | 525 Here is one example of a performance surprise. *) |
541 | 526 |
637 (* end thide *) | 622 (* end thide *) |
638 End slow. | 623 End slow. |
639 | 624 |
640 (** As aggravating as the above situation may be, there is greater aggravation to be had from importing library modules with commands like %\index{Vernacular commands!Require Import}%[Require Import]. Such a command imports not just the Gallina terms from a module, but also all the hints for [auto], [eauto], and [autorewrite]. Some very recent versions of Coq include mechanisms for removing hints from databases, but the proper solution is to be very conservative in exporting hints from modules. Consider putting hints in named databases, so that they may be used only when called upon explicitly, as demonstrated in Chapter 13. | 625 (** As aggravating as the above situation may be, there is greater aggravation to be had from importing library modules with commands like %\index{Vernacular commands!Require Import}%[Require Import]. Such a command imports not just the Gallina terms from a module, but also all the hints for [auto], [eauto], and [autorewrite]. Some very recent versions of Coq include mechanisms for removing hints from databases, but the proper solution is to be very conservative in exporting hints from modules. Consider putting hints in named databases, so that they may be used only when called upon explicitly, as demonstrated in Chapter 13. |
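Concretely, a module author can register hints under a named database rather than the default one, so that importing the module pollutes nothing; the lemma and database names here are hypothetical:

```coq
(* Hints registered under a named database are inert on import... *)
Hint Resolve my_lemma : my_db.
Hint Rewrite my_rewrite : my_db.

(* ...and clients invoke them only when called upon explicitly:
   [auto with my_db.]  [autorewrite with my_db.] *)
```

Clients that never mention [my_db] are unaffected by the export, which is the conservatism recommended above.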
641 | 626 |
642 It is also easy to end up with a proof script that uses too much memory. As tactics run, they avoid generating proof terms, since serious proof search will consider many possible avenues, and we do not want to build proof terms for subproofs that end up unused. Instead, tactic execution maintains %\index{thunks}%_thunks_ (suspended computations, represented with closures), such that a tactic's proof-producing thunk is only executed when we run %\index{Vernacular commands!Qed}%[Qed]. These thunks can use up large amounts of space, such that a proof script exhausts available memory, even when we know that we could have used much less memory by forcing some thunks earlier. | 627 It is also easy to end up with a proof script that uses too much memory. As tactics run, they avoid generating proof terms, since serious proof search will consider many possible avenues, and we do not want to build proof terms for subproofs that end up unused. Instead, tactic execution maintains%\index{thunks}% _thunks_ (suspended computations, represented with closures), such that a tactic's proof-producing thunk is only executed when we run %\index{Vernacular commands!Qed}%[Qed]. These thunks can use up large amounts of space, such that a proof script exhausts available memory, even when we know that we could have used much less memory by forcing some thunks earlier. |
643 | 628 |
644 The %\index{tactics!abstract}%[abstract] tactical helps us force thunks by proving some subgoals as their own lemmas. For instance, a proof [induction x; crush] can in many cases be made to use significantly less peak memory by changing it to [induction x; abstract crush]. The main limitation of [abstract] is that it can only be applied to subgoals that are proved completely, with no undetermined unification variables remaining. Still, many large automated proofs can realize vast memory savings via [abstract]. *) | 629 The %\index{tactics!abstract}%[abstract] tactical helps us force thunks by proving some subgoals as their own lemmas. For instance, a proof [induction x; crush] can in many cases be made to use significantly less peak memory by changing it to [induction x; abstract crush]. The main limitation of [abstract] is that it can only be applied to subgoals that are proved completely, with no undetermined unification variables remaining. Still, many large automated proofs can realize vast memory savings via [abstract]. *) |
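As a sketch, the transformation is as small as it sounds: each subgoal that [crush] closes becomes its own opaque auxiliary lemma, so its proof thunk is forced immediately rather than held until [Qed]:

```coq
(* Before: every subproof's thunk is retained until [Qed]. *)
(* induction x; crush. *)

(* After: each [crush]-solved subgoal is sealed as a separate
   auxiliary lemma, releasing its memory early. *)
(* induction x; abstract crush. *)
```

The change is transparent to the statement being proved; only peak memory during [Qed] differs.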
645 | 630 |
646 | 631 |
647 (** * Modules *) | 632 (** * Modules *) |
648 | 633 |
649 (** Last chapter's examples of proof by reflection demonstrate opportunities for implementing abstract proof strategies with stronger formal guarantees than can be had with Ltac scripting. Coq's _module system_ provides another tool for more rigorous development of generic theorems. This feature is inspired by the module systems found in Standard ML%~\cite{modules}% and Objective Caml, and the discussion that follows assumes familiarity with the basics of one of those systems. | 634 (** Last chapter's examples of proof by reflection demonstrate opportunities for implementing abstract proof strategies with stronger formal guarantees than can be had with Ltac scripting. Coq's _module system_ provides another tool for more rigorous development of generic theorems. This feature is inspired by the module systems found in Standard ML%~\cite{modules}% and Objective Caml, and the discussion that follows assumes familiarity with the basics of one of those systems. |
650 | 635 |
651 ML modules facilitate the grouping of %\index{abstract type}%abstract types with operations over those types. Moreover, there is support for %\index{functor}%_functors_, which are functions from modules to modules. A canonical example of a functor is one that builds a data structure implementation from a module that describes a domain of keys and its associated comparison operations. | 636 ML modules facilitate the grouping of %\index{abstract type}%abstract types with operations over those types. Moreover, there is support for%\index{functor}% _functors_, which are functions from modules to modules. A canonical example of a functor is one that builds a data structure implementation from a module that describes a domain of keys and its associated comparison operations. |
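The canonical functor example can be sketched directly in Coq's module language; [KEY], [SortedList], and their members are hypothetical names for illustration:

```coq
(* A signature describing a domain of keys with a comparison... *)
Module Type KEY.
  Parameter t : Set.
  Parameter le : t -> t -> bool.
End KEY.

(* ...and a functor building a data structure over any such domain. *)
Module SortedList (K : KEY).
  Definition key := K.t.
  (* Operations would be implemented in terms of [K.le] here. *)
End SortedList.
```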
652 | 637 |
653 When we add modules to a base language with dependent types, it becomes possible to use modules and functors to formalize kinds of reasoning that are common in algebra. For instance, this module signature captures the essence of the algebraic structure known as a group. A group consists of a carrier set [G], an associative binary operation [f], a left identity element [e] for [f], and an operation [i] that is a left inverse for [f].%\index{Vernacular commands!Module Type}% *) | 638 When we add modules to a base language with dependent types, it becomes possible to use modules and functors to formalize kinds of reasoning that are common in algebra. For instance, this module signature captures the essence of the algebraic structure known as a group. A group consists of a carrier set [G], an associative binary operation [f], a left identity element [e] for [f], and an operation [i] that is a left inverse for [f].%\index{Vernacular commands!Module Type}% *) |
654 | 639 |
655 Module Type GROUP. | 640 Module Type GROUP. |
656 Parameter G : Set. | 641 Parameter G : Set. |
676 End GROUP_THEOREMS. | 661 End GROUP_THEOREMS. |
677 | 662 |
678 (** We implement generic proofs of these theorems with a functor, whose input is an arbitrary group [M]. %\index{Vernacular commands!Module}% *) | 663 (** We implement generic proofs of these theorems with a functor, whose input is an arbitrary group [M]. %\index{Vernacular commands!Module}% *) |
679 | 664 |
680 Module GroupProofs (M : GROUP) : GROUP_THEOREMS with Module M := M. | 665 Module GroupProofs (M : GROUP) : GROUP_THEOREMS with Module M := M. |
681 (** As in ML, Coq provides multiple options for ascribing signatures to modules. Here we use just the colon operator, which implements %\index{opaque ascription}%_opaque ascription_, hiding all details of the module not exposed by the signature. Another option is %\index{transparent ascription}%_transparent ascription_ via the [<:] operator, which checks for signature compatibility without hiding implementation details. Here we stick with opaque ascription but employ the [with] operation to add more detail to a signature, exposing just those implementation details that we need to. For instance, here we expose the underlying group representation set and operator definitions. Without such a refinement, we would get an output module proving theorems about some unknown group, which is not very useful. Also note that opaque ascription can in Coq have some undesirable consequences without analogues in ML, since not just the types but also the _definitions_ of identifiers have significance in type checking and theorem proving. *) | 666 (** As in ML, Coq provides multiple options for ascribing signatures to modules. Here we use just the colon operator, which implements%\index{opaque ascription}% _opaque ascription_, hiding all details of the module not exposed by the signature. Another option is%\index{transparent ascription}% _transparent ascription_ via the [<:] operator, which checks for signature compatibility without hiding implementation details. Here we stick with opaque ascription but employ the [with] operation to add more detail to a signature, exposing just those implementation details that we need to. For instance, here we expose the underlying group representation set and operator definitions. Without such a refinement, we would get an output module proving theorems about some unknown group, which is not very useful. Also note that opaque ascription can in Coq have some undesirable consequences without analogues in ML, since not just the types but also the _definitions_ of identifiers have significance in type checking and theorem proving. *) |
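The ascription variants just mentioned can be sketched side by side; [SIG], [Impl], and the member [t] are hypothetical names:

```coq
Module Opaque : SIG := Impl.      (* opaque: only [SIG]'s contents visible *)
Module Transp <: SIG := Impl.     (* transparent: definitions stay visible *)
Module Refined : SIG with Definition t := nat := Impl.
                                  (* opaque, but [t] is exposed as [nat] *)
```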
682 | 667 |
683 Module M := M. | 668 Module M := M. |
684 (** To ensure that the module we are building meets the [GROUP_THEOREMS] signature, we add an extra local name for [M], the functor argument. *) | 669 (** To ensure that the module we are building meets the [GROUP_THEOREMS] signature, we add an extra local name for [M], the functor argument. *) |
685 | 670 |
686 Import M. | 671 Import M. |