comparison src/StackMachine.v @ 398:05efde66559d
Get it working in Coq 8.4beta1; use nice coqdoc notation for italics
author: Adam Chlipala <adam@chlipala.net>
date: Wed, 06 Jun 2012 11:25:13 -0400
parents: cc8d0503619f
children: 5986e9fd40b5
397:d62ed7381a0b | 398:05efde66559d |
---|---|
1 (* Copyright (c) 2008-2011, Adam Chlipala | 1 (* Copyright (c) 2008-2012, Adam Chlipala |
2 * | 2 * |
3 * This work is licensed under a | 3 * This work is licensed under a |
4 * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 | 4 * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 |
5 * Unported License. | 5 * Unported License. |
6 * The license text is available at: | 6 * The license text is available at: |
83 | Times => mult | 83 | Times => mult |
84 end. | 84 end. |
85 | 85 |
86 ]] | 86 ]] |
87 | 87 |
88 Languages like Haskell and ML have a convenient %\index{principal types}\index{type inference}\emph{%#<i>#principal types#</i>#%}% property, which gives us strong guarantees about how effective type inference will be. Unfortunately, Coq's type system is so expressive that any kind of %``%#"#complete#"#%''% type inference is impossible, and the task even seems to be hard heuristically in practice. Nonetheless, Coq includes some very helpful heuristics, many of them copying the workings of Haskell and ML type-checkers for programs that fall in simple fragments of Coq's language. | 88 Languages like Haskell and ML have a convenient %\index{principal types}\index{type inference}%_principal types_ property, which gives us strong guarantees about how effective type inference will be. Unfortunately, Coq's type system is so expressive that any kind of %``%#"#complete#"#%''% type inference is impossible, and the task even seems to be hard heuristically in practice. Nonetheless, Coq includes some very helpful heuristics, many of them copying the workings of Haskell and ML type-checkers for programs that fall in simple fragments of Coq's language. |
89 | 89 |
90 This is as good a time as any to mention the preponderance of different languages associated with Coq. The theoretical foundation of Coq is a formal system called the %\index{Calculus of Inductive Constructions}\index{CIC|see{Calculus of Inductive Constructions}}\emph{%#<i>#Calculus of Inductive Constructions (CIC)#</i>#%}~\cite{CIC}%, which is an extension of the older %\index{Calculus of Constructions}\index{CoC|see{Calculus of Constructions}}\emph{%#<i>#Calculus of Constructions (CoC)#</i>#%}~\cite{CoC}%. CIC is quite a spartan foundation, which is helpful for proving metatheory but not so helpful for real development. Still, it is nice to know that it has been proved that CIC enjoys properties like %\index{strong normalization}\emph{%#<i>#strong normalization#</i>#%}~\cite{CIC}%, meaning that every program (and, more importantly, every proof term) terminates; and %\index{relative consistency}\emph{%#<i>#relative consistency#</i>#%}~\cite{SetsInTypes}% with systems like versions of %\index{Zermelo-Fraenkel set theory}%Zermelo-Fraenkel set theory, which roughly means that you can believe that Coq proofs mean that the corresponding propositions are %``%#"#really true,#"#%''% if you believe in set theory. | 90 This is as good a time as any to mention the preponderance of different languages associated with Coq. The theoretical foundation of Coq is a formal system called the %\index{Calculus of Inductive Constructions}\index{CIC|see{Calculus of Inductive Constructions}}%_Calculus of Inductive Constructions (CIC)_ %\cite{CIC}%, which is an extension of the older %\index{Calculus of Constructions}\index{CoC|see{Calculus of Constructions}}%_Calculus of Constructions (CoC)_ %\cite{CoC}%. CIC is quite a spartan foundation, which is helpful for proving metatheory but not so helpful for real development. Still, it is nice to know that it has been proved that CIC enjoys properties like %\index{strong normalization}%_strong normalization_ %\cite{CIC}%, meaning that every program (and, more importantly, every proof term) terminates; and %\index{relative consistency}%_relative consistency_ %\cite{SetsInTypes}% with systems like versions of %\index{Zermelo-Fraenkel set theory}%Zermelo-Fraenkel set theory, which roughly means that you can believe that Coq proofs mean that the corresponding propositions are %``%#"#really true,#"#%''% if you believe in set theory. |
91 | 91 |
92 Coq is actually based on an extension of CIC called %\index{Gallina}\emph{%#<i>#Gallina#</i>#%}%. The text after the [:=] and before the period in the last code example is a term of Gallina. Gallina adds many useful features that are not compiled internally to more primitive CIC features. The important metatheorems about CIC have not been extended to the full breadth of these features, but most Coq users do not seem to lose much sleep over this omission. | 92 Coq is actually based on an extension of CIC called %\index{Gallina}%_Gallina_. The text after the [:=] and before the period in the last code example is a term of Gallina. Gallina adds many useful features that are not compiled internally to more primitive CIC features. The important metatheorems about CIC have not been extended to the full breadth of these features, but most Coq users do not seem to lose much sleep over this omission. |
93 | 93 |
94 Next, there is %\index{Ltac}\emph{%#<i>#Ltac#</i>#%}%, Coq's domain-specific language for writing proofs and decision procedures. We will see some basic examples of Ltac later in this chapter, and much of this book is devoted to more involved Ltac examples. | 94 Next, there is %\index{Ltac}%_Ltac_, Coq's domain-specific language for writing proofs and decision procedures. We will see some basic examples of Ltac later in this chapter, and much of this book is devoted to more involved Ltac examples. |
95 | 95 |
96 Finally, commands like [Inductive] and [Definition] are part of %\index{Vernacular commands}\emph{%#<i>#the Vernacular#</i>#%}%, which includes all sorts of useful queries and requests to the Coq system. Every Coq source file is a series of vernacular commands, where many command forms take arguments that are Gallina or Ltac programs. (Actually, Coq source files are more like %\emph{%#<i>#trees#</i>#%}% of vernacular commands, thanks to various nested scoping constructs.) | 96 Finally, commands like [Inductive] and [Definition] are part of %\index{Vernacular commands}%_the Vernacular_, which includes all sorts of useful queries and requests to the Coq system. Every Coq source file is a series of vernacular commands, where many command forms take arguments that are Gallina or Ltac programs. (Actually, Coq source files are more like _trees_ of vernacular commands, thanks to various nested scoping constructs.) |
97 | 97 |
98 %\medskip% | 98 %\medskip% |
99 | 99 |
100 We can give a simple definition of the meaning of an expression:%\index{Vernacular commands!Fixpoint}% *) | 100 We can give a simple definition of the meaning of an expression:%\index{Vernacular commands!Fixpoint}% *) |
101 | 101 |
105 | Binop b e1 e2 => (binopDenote b) (expDenote e1) (expDenote e2) | 105 | Binop b e1 e2 => (binopDenote b) (expDenote e1) (expDenote e2) |
106 end. | 106 end. |
107 | 107 |
108 (** We declare explicitly that this is a recursive definition, using the keyword [Fixpoint]. The rest should be old hat for functional programmers. *) | 108 (** We declare explicitly that this is a recursive definition, using the keyword [Fixpoint]. The rest should be old hat for functional programmers. *) |
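Only the tail of this definition appears in the diff; the full [Fixpoint] presumably reads as follows, assuming the [exp] type with [Const] and [Binop] constructors defined earlier in the chapter:

    (* Sketch of the full denotation function whose last lines are shown above. *)
    Fixpoint expDenote (e : exp) : nat :=
      match e with
        | Const n => n
        | Binop b e1 e2 => (binopDenote b) (expDenote e1) (expDenote e2)
      end.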
109 | 109 |
110 (** It is convenient to be able to test definitions before starting to prove things about them. We can verify that our semantics is sensible by evaluating some sample uses, using the command %\index{Vernacular commands!Eval}%[Eval]. This command takes an argument expressing a %\index{reduction strategy}\emph{%#<i>#reduction strategy#</i>#%}%, or an %``%#"#order of evaluation.#"#%''% Unlike with ML, which hardcodes an %\emph{%#<i>#eager#</i>#%}% reduction strategy, or Haskell, which hardcodes a %\emph{%#<i>#lazy#</i>#%}% strategy, in Coq we are free to choose between these and many other orders of evaluation, because all Coq programs terminate. In fact, Coq silently checked %\index{termination checking}%termination of our [Fixpoint] definition above, using a simple heuristic based on monotonically decreasing size of arguments across recursive calls. Specifically, recursive calls must be made on arguments that were pulled out of the original recursive argument with [match] expressions. (In Chapter 7, we will see some ways of getting around this restriction, though simply removing the restriction would leave Coq useless as a theorem proving tool, for reasons we will start to learn about in the next chapter.) | 110 (** It is convenient to be able to test definitions before starting to prove things about them. We can verify that our semantics is sensible by evaluating some sample uses, using the command %\index{Vernacular commands!Eval}%[Eval]. This command takes an argument expressing a %\index{reduction strategy}%_reduction strategy_, or an %``%#"#order of evaluation.#"#%''% Unlike with ML, which hardcodes an _eager_ reduction strategy, or Haskell, which hardcodes a _lazy_ strategy, in Coq we are free to choose between these and many other orders of evaluation, because all Coq programs terminate. In fact, Coq silently checked %\index{termination checking}%termination of our [Fixpoint] definition above, using a simple heuristic based on monotonically decreasing size of arguments across recursive calls. Specifically, recursive calls must be made on arguments that were pulled out of the original recursive argument with [match] expressions. (In Chapter 7, we will see some ways of getting around this restriction, though simply removing the restriction would leave Coq useless as a theorem proving tool, for reasons we will start to learn about in the next chapter.) |
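A minimal illustration of the termination restriction just described, using a hypothetical definition (the name [countdown] is not from the book): the recursive call is on [n'], which [match] pulled out of the original argument [n], so Coq accepts it.

    (* Accepted by the termination checker: the recursive call is on a
       subterm [n'] extracted from the original argument [n] by [match]. *)
    Fixpoint countdown (n : nat) : nat :=
      match n with
        | O => O
        | S n' => countdown n'
      end.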
111 | 111 |
112 To return to our test evaluations, we run the [Eval] command using the [simpl] evaluation strategy, whose definition is best postponed until we have learned more about Coq's foundations, but which usually gets the job done. *) | 112 To return to our test evaluations, we run the [Eval] command using the [simpl] evaluation strategy, whose definition is best postponed until we have learned more about Coq's foundations, but which usually gets the job done. *) |
113 | 113 |
114 Eval simpl in expDenote (Const 42). | 114 Eval simpl in expDenote (Const 42). |
115 (** [= 42 : nat] *) | 115 (** [= 42 : nat] *) |
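The file presumably continues with a couple more sample evaluations in the same style; a sketch, using the [Binop], [Plus], and [Times] constructors defined earlier in the chapter:

    Eval simpl in expDenote (Binop Plus (Const 2) (Const 2)).
    (* = 4 : nat *)

    Eval simpl in expDenote (Binop Times (Binop Plus (Const 2) (Const 2)) (Const 7)).
    (* = 28 : nat *)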
201 (* begin hide *) | 201 (* begin hide *) |
202 Abort. | 202 Abort. |
203 (* end hide *) | 203 (* end hide *) |
204 (* begin thide *) | 204 (* begin thide *) |
205 | 205 |
206 (** Though a pencil-and-paper proof might clock out at this point, writing %``%#"#by a routine induction on [e],#"#%''% it turns out not to make sense to attack this proof directly. We need to use the standard trick of %\index{strengthening the induction hypothesis}\emph{%#<i>#strengthening the induction hypothesis#</i>#%}%. We do that by proving an auxiliary lemma, using the command [Lemma] that is a synonym for [Theorem], conventionally used for less important theorems that appear in the proofs of primary theorems.%\index{Vernacular commands!Lemma}% *) | 206 (** Though a pencil-and-paper proof might clock out at this point, writing %``%#"#by a routine induction on [e],#"#%''% it turns out not to make sense to attack this proof directly. We need to use the standard trick of %\index{strengthening the induction hypothesis}%_strengthening the induction hypothesis_. We do that by proving an auxiliary lemma, using the command [Lemma] that is a synonym for [Theorem], conventionally used for less important theorems that appear in the proofs of primary theorems.%\index{Vernacular commands!Lemma}% *) |
207 | 207 |
208 Lemma compile_correct' : forall e p s, | 208 Lemma compile_correct' : forall e p s, |
209 progDenote (compile e ++ p) s = progDenote p (expDenote e :: s). | 209 progDenote (compile e ++ p) s = progDenote p (expDenote e :: s). |
210 | 210 |
211 (** After the period in the [Lemma] command, we are in %\index{interactive proof-editing mode}\emph{%#<i>#the interactive proof-editing mode#</i>#%}%. We find ourselves staring at this ominous screen of text: | 211 (** After the period in the [Lemma] command, we are in %\index{interactive proof-editing mode}%_the interactive proof-editing mode_. We find ourselves staring at this ominous screen of text: |
212 | 212 |
213 [[ | 213 [[ |
214 1 subgoal | 214 1 subgoal |
215 | 215 |
216 ============================ | 216 ============================ |
221 | 221 |
222 Coq seems to be restating the lemma for us. What we are seeing is a limited case of a more general protocol for describing where we are in a proof. We are told that we have a single subgoal. In general, during a proof, we can have many pending %\index{subgoals}%subgoals, each of which is a logical proposition to prove. Subgoals can be proved in any order, but it usually works best to prove them in the order that Coq chooses. | 222 Coq seems to be restating the lemma for us. What we are seeing is a limited case of a more general protocol for describing where we are in a proof. We are told that we have a single subgoal. In general, during a proof, we can have many pending %\index{subgoals}%subgoals, each of which is a logical proposition to prove. Subgoals can be proved in any order, but it usually works best to prove them in the order that Coq chooses. |
223 | 223 |
224 Next in the output, we see our single subgoal described in full detail. There is a double-dashed line, above which would be our free variables and %\index{hypotheses}%hypotheses, if we had any. Below the line is the %\index{conclusion}%conclusion, which, in general, is to be proved from the hypotheses. | 224 Next in the output, we see our single subgoal described in full detail. There is a double-dashed line, above which would be our free variables and %\index{hypotheses}%hypotheses, if we had any. Below the line is the %\index{conclusion}%conclusion, which, in general, is to be proved from the hypotheses. |
225 | 225 |
226 We manipulate the proof state by running commands called %\index{tactics}\emph{%#<i>#tactics#</i>#%}%. Let us start out by running one of the most important tactics:%\index{tactics!induction}% | 226 We manipulate the proof state by running commands called %\index{tactics}%_tactics_. Let us start out by running one of the most important tactics:%\index{tactics!induction}% |
227 *) | 227 *) |
228 | 228 |
229 induction e. | 229 induction e. |
230 | 230 |
231 (** We declare that this proof will proceed by induction on the structure of the expression [e]. This swaps out our initial subgoal for two new subgoals, one for each case of the inductive proof: | 231 (** We declare that this proof will proceed by induction on the structure of the expression [e]. This swaps out our initial subgoal for two new subgoals, one for each case of the inductive proof: |
507 | 507 |
508 (** We need only to state the basic inductive proof scheme and call a tactic that automates the tedious reasoning in between. In contrast to the period tactic terminator from our last proof, the %\index{tactics!semicolon}%semicolon tactic separator supports structured, compositional proofs. The tactic [t1; t2] has the effect of running [t1] and then running [t2] on each remaining subgoal. The semicolon is one of the most fundamental building blocks of effective proof automation. The period terminator is very useful for exploratory proving, where you need to see intermediate proof states, but final proofs of any serious complexity should have just one period, terminating a single compound tactic that probably uses semicolons. | 508 (** We need only to state the basic inductive proof scheme and call a tactic that automates the tedious reasoning in between. In contrast to the period tactic terminator from our last proof, the %\index{tactics!semicolon}%semicolon tactic separator supports structured, compositional proofs. The tactic [t1; t2] has the effect of running [t1] and then running [t2] on each remaining subgoal. The semicolon is one of the most fundamental building blocks of effective proof automation. The period terminator is very useful for exploratory proving, where you need to see intermediate proof states, but final proofs of any serious complexity should have just one period, terminating a single compound tactic that probably uses semicolons. |
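The one-period proof script this paragraph describes is elided by the diff; a hedged sketch of what it presumably looks like, assuming the associativity rewrite hint registered earlier in the chapter so that [crush] can finish every subgoal that [induction] produces:

    (* Compound tactic: [crush] runs on each subgoal generated by [induction e]. *)
    Lemma compile_correct' : forall e p s,
      progDenote (compile e ++ p) s = progDenote p (expDenote e :: s).
      induction e; crush.
    Qed.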
509 | 509 |
510 The [crush] tactic comes from the library associated with this book and is not part of the Coq standard library. The book's library contains a number of other tactics that are especially helpful in highly-automated proofs. | 510 The [crush] tactic comes from the library associated with this book and is not part of the Coq standard library. The book's library contains a number of other tactics that are especially helpful in highly-automated proofs. |
511 | 511 |
512 The %\index{Vernacular commands!Qed}%[Qed] command checks that the proof is finished and, if so, saves it. The tactic commands we have written above are an example of a %\emph{%#<i>#proof script#</i>#%}%, or a series of Ltac programs; while [Qed] uses the result of the script to generate a %\emph{%#<i>#proof term#</i>#%}%, a well-typed term of Gallina. To believe that a theorem is true, we only need to trust that the (relatively simple) checker for proof terms is correct; the use of proof scripts is immaterial. Part I of this book will introduce the principles behind encoding all proofs as terms of Gallina. | 512 The %\index{Vernacular commands!Qed}%[Qed] command checks that the proof is finished and, if so, saves it. The tactic commands we have written above are an example of a _proof script_, or a series of Ltac programs; while [Qed] uses the result of the script to generate a _proof term_, a well-typed term of Gallina. To believe that a theorem is true, we only need to trust that the (relatively simple) checker for proof terms is correct; the use of proof scripts is immaterial. Part I of this book will introduce the principles behind encoding all proofs as terms of Gallina. |
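If you want to look at the Gallina proof term that [Qed] saved, the usual [Print] command works on theorems as well; for example:

    (* Display the proof term generated from the tactic script above. *)
    Print compile_correct'.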
513 | 513 |
514 The proof of our main theorem is now easy. We prove it with four period-terminated tactics, though separating them with semicolons would work as well; the version here is easier to step through.%\index{tactics!intros}% *) | 514 The proof of our main theorem is now easy. We prove it with four period-terminated tactics, though separating them with semicolons would work as well; the version here is easier to step through.%\index{tactics!intros}% *) |
515 | 515 |
516 Theorem compile_correct : forall e, progDenote (compile e) nil = Some (expDenote e :: nil). | 516 Theorem compile_correct : forall e, progDenote (compile e) nil = Some (expDenote e :: nil). |
517 intros. | 517 intros. |
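The diff cuts the script off after [intros]; the remaining steps are presumably along these lines, using the standard-library fact [app_nil_end] (stating [l = l ++ nil]) to introduce the empty continuation expected by the auxiliary lemma:

    (* Rewrite [compile e] to [compile e ++ nil], apply the lemma, then compute. *)
      rewrite (app_nil_end (compile e)).
      rewrite compile_correct'.
      reflexivity.
    Qed.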
579 | TPlus : tbinop Nat Nat Nat | 579 | TPlus : tbinop Nat Nat Nat |
580 | TTimes : tbinop Nat Nat Nat | 580 | TTimes : tbinop Nat Nat Nat |
581 | TEq : forall t, tbinop t t Bool | 581 | TEq : forall t, tbinop t t Bool |
582 | TLt : tbinop Nat Nat Bool. | 582 | TLt : tbinop Nat Nat Bool. |
583 | 583 |
584 (** The definition of [tbinop] is different from [binop] in an important way. Where we declared that [binop] has type [Set], here we declare that [tbinop] has type [type -> type -> type -> Set]. We define [tbinop] as an %\emph{%#<i>#indexed type family#</i>#%}%. Indexed inductive types are at the heart of Coq's expressive power; almost everything else of interest is defined in terms of them. | 584 (** The definition of [tbinop] is different from [binop] in an important way. Where we declared that [binop] has type [Set], here we declare that [tbinop] has type [type -> type -> type -> Set]. We define [tbinop] as an _indexed type family_. Indexed inductive types are at the heart of Coq's expressive power; almost everything else of interest is defined in terms of them. |
585 | 585 |
586 The intuitive explanation of [tbinop] is that a [tbinop t1 t2 t] is a binary operator whose operands should have types [t1] and [t2], and whose result has type [t]. For instance, constructor [TLt] (for less-than comparison of numbers) is assigned type [tbinop Nat Nat Bool], meaning the operator's arguments are naturals and its result is boolean. The type of [TEq] introduces a small bit of additional complication via polymorphism: we want to allow equality comparison of any two values of any type, as long as they have the %\emph{%#<i>#same#</i>#%}% type. | 586 The intuitive explanation of [tbinop] is that a [tbinop t1 t2 t] is a binary operator whose operands should have types [t1] and [t2], and whose result has type [t]. For instance, constructor [TLt] (for less-than comparison of numbers) is assigned type [tbinop Nat Nat Bool], meaning the operator's arguments are naturals and its result is boolean. The type of [TEq] introduces a small bit of additional complication via polymorphism: we want to allow equality comparison of any two values of any type, as long as they have the _same_ type. |
587 | 587 |
588 ML and Haskell have indexed algebraic datatypes. For instance, their list types are indexed by the type of data that the list carries. However, compared to Coq, ML and Haskell 98 place two important restrictions on datatype definitions. | 588 ML and Haskell have indexed algebraic datatypes. For instance, their list types are indexed by the type of data that the list carries. However, compared to Coq, ML and Haskell 98 place two important restrictions on datatype definitions. |
589 | 589 |
590 First, the indices of the range of each data constructor must be type variables bound at the top level of the datatype definition. There is no way to do what we did here, where we, for instance, say that [TPlus] is a constructor building a [tbinop] whose indices are all fixed at [Nat]. %\index{generalized algebraic datatypes}\index{GADTs|see{generalized algebraic datatypes}}\emph{%#<i>#Generalized algebraic datatypes (GADTs)#</i>#%}~\cite{GADT}% are a popular feature in %\index{GHC Haskell}%GHC Haskell and other languages that removes this first restriction. | 590 First, the indices of the range of each data constructor must be type variables bound at the top level of the datatype definition. There is no way to do what we did here, where we, for instance, say that [TPlus] is a constructor building a [tbinop] whose indices are all fixed at [Nat]. %\index{generalized algebraic datatypes}\index{GADTs|see{generalized algebraic datatypes}}%_Generalized algebraic datatypes (GADTs)_ %\cite{GADT}% are a popular feature in %\index{GHC Haskell}%GHC Haskell and other languages that removes this first restriction. |
591 | 591 |
592 The second restriction is not lifted by GADTs. In ML and Haskell, indices of types must be types and may not be %\emph{%#<i>#expressions#</i>#%}%. In Coq, types may be indexed by arbitrary Gallina terms. Type indices can live in the same universe as programs, and we can compute with them just like regular programs. Haskell supports a hobbled form of computation in type indices based on %\index{Haskell}%multi-parameter type classes, and recent extensions like type functions bring Haskell programming even closer to %``%#"#real#"#%''% functional programming with types, but, without dependent typing, there must always be a gap between how one programs with types and how one programs normally. | 592 The second restriction is not lifted by GADTs. In ML and Haskell, indices of types must be types and may not be _expressions_. In Coq, types may be indexed by arbitrary Gallina terms. Type indices can live in the same universe as programs, and we can compute with them just like regular programs. Haskell supports a hobbled form of computation in type indices based on %\index{Haskell}%multi-parameter type classes, and recent extensions like type functions bring Haskell programming even closer to %``%#"#real#"#%''% functional programming with types, but, without dependent typing, there must always be a gap between how one programs with types and how one programs normally. |
593 *) | 593 *) |
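A concrete instance of such computation with types, used below by the typed denotation functions, is a function mapping object-language types to Gallina types. A minimal sketch, assuming the two-constructor [type] definition ([Nat] and [Bool]) from earlier in the chapter:

    (* Interpret object-language types as Gallina types. *)
    Definition typeDenote (t : type) : Set :=
      match t with
        | Nat => nat
        | Bool => bool
      end.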
594 | 594 |
595 (** We can define a similar type family for typed expressions, where a term of type [texp t] can be assigned object language type [t]. (It is conventional in the world of interactive theorem proving to call the language of the proof assistant the %\index{meta language}\emph{%#<i>#meta language#</i>#%}% and a language being formalized the %\index{object language}\emph{%#<i>#object language#</i>#%}%.) *) | 595 (** We can define a similar type family for typed expressions, where a term of type [texp t] can be assigned object language type [t]. (It is conventional in the world of interactive theorem proving to call the language of the proof assistant the %\index{meta language}%_meta language_ and a language being formalized the %\index{object language}%_object language_.) *) |
596 | 596 |
597 Inductive texp : type -> Set := | 597 Inductive texp : type -> Set := |
598 | TNConst : nat -> texp Nat | 598 | TNConst : nat -> texp Nat |
599 | TBConst : bool -> texp Bool | 599 | TBConst : bool -> texp Bool |
600 | TBinop : forall t1 t2 t, tbinop t1 t2 t -> texp t1 -> texp t2 -> texp t. | 600 | TBinop : forall t1 t2 t, tbinop t1 t2 t -> texp t1 -> texp t2 -> texp t. |
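As a quick sanity check on how the indices fit together, here is a hypothetical example term (assuming, as in the book's source, that implicit arguments let us omit the type indices of the constructors):

    (* An equality comparison of two numeric constants is typed [texp Bool]. *)
    Check TBinop (TEq Nat) (TNConst 2) (TNConst 7).
    (* : texp Bool *)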
617 | TEq Nat => beq_nat | 617 | TEq Nat => beq_nat |
618 | TEq Bool => eqb | 618 | TEq Bool => eqb |
619 | TLt => leb | 619 | TLt => leb |
620 end. | 620 end. |
621 | 621 |
622 (** This function has just a few differences from the denotation functions we saw earlier. First, [tbinop] is an indexed type, so its indices become additional arguments to [tbinopDenote]. Second, we need to perform a genuine %\index{dependent pattern matching}\emph{%#<i>#dependent pattern match#</i>#%}%, where the necessary %\emph{%#<i>#type#</i>#%}% of each case body depends on the %\emph{%#<i>#value#</i>#%}% that has been matched. At this early stage, we will not go into detail on the many subtle aspects of Gallina that support dependent pattern-matching, but the subject is central to Part II of the book. | 622 (** This function has just a few differences from the denotation functions we saw earlier. First, [tbinop] is an indexed type, so its indices become additional arguments to [tbinopDenote]. Second, we need to perform a genuine %\index{dependent pattern matching}%_dependent pattern match_, where the necessary _type_ of each case body depends on the _value_ that has been matched. At this early stage, we will not go into detail on the many subtle aspects of Gallina that support dependent pattern-matching, but the subject is central to Part II of the book. |
623 | 623 |
624 The same tricks suffice to define an expression denotation function in an unsurprising way: | 624 The same tricks suffice to define an expression denotation function in an unsurprising way: |
625 *) | 625 *) |
626 | 626 |
627 Fixpoint texpDenote t (e : texp t) : typeDenote t := | 627 Fixpoint texpDenote t (e : texp t) : typeDenote t := |
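The diff shows only the header of this definition; the body presumably mirrors the untyped [expDenote], with a dependent [match] whose result type is [typeDenote t]:

    (* Sketch of the typed denotation function whose header is shown above. *)
    Fixpoint texpDenote t (e : texp t) : typeDenote t :=
      match e with
        | TNConst n => n
        | TBConst b => b
        | TBinop _ _ _ b e1 e2 => (tbinopDenote b) (texpDenote e1) (texpDenote e2)
      end.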
753 | TBConst b => TCons (TiBConst _ b) (TNil _) | 753 | TBConst b => TCons (TiBConst _ b) (TNil _) |
754 | TBinop _ _ _ b e1 e2 => tconcat (tcompile e2 _) | 754 | TBinop _ _ _ b e1 e2 => tconcat (tcompile e2 _) |
755 (tconcat (tcompile e1 _) (TCons (TiBinop _ b) (TNil _))) | 755 (tconcat (tcompile e1 _) (TCons (TiBinop _ b) (TNil _))) |
756 end. | 756 end. |
757 | 757 |
758 (** One interesting feature of the definition is the underscores appearing to the right of [=>] arrows. Haskell and ML programmers are quite familiar with compilers that infer type parameters to polymorphic values. In Coq, it is possible to go even further and ask the system to infer arbitrary terms, by writing underscores in place of specific values. You may have noticed that we have been calling functions without specifying all of their arguments. For instance, the recursive calls here to [tcompile] omit the [t] argument. Coq's %\emph{%#<i>#implicit argument#</i>#%}% mechanism automatically inserts underscores for arguments that it will probably be able to infer. Inference of such values is far from complete, though; generally, it only works in cases similar to those encountered with polymorphic type instantiation in Haskell and ML. | 758 (** One interesting feature of the definition is the underscores appearing to the right of [=>] arrows. Haskell and ML programmers are quite familiar with compilers that infer type parameters to polymorphic values. In Coq, it is possible to go even further and ask the system to infer arbitrary terms, by writing underscores in place of specific values. You may have noticed that we have been calling functions without specifying all of their arguments. For instance, the recursive calls here to [tcompile] omit the [t] argument. Coq's _implicit argument_ mechanism automatically inserts underscores for arguments that it will probably be able to infer. Inference of such values is far from complete, though; generally, it only works in cases similar to those encountered with polymorphic type instantiation in Haskell and ML. |
759 | 759 |
760 The underscores here are being filled in with stack types. That is, the Coq type inferencer is, in a sense, inferring something about the flow of control in the translated programs. We can take a look at exactly which values are filled in: *) | 760 The underscores here are being filled in with stack types. That is, the Coq type inferencer is, in a sense, inferring something about the flow of control in the translated programs. We can take a look at exactly which values are filled in: *) |
761 | 761 |
762 Print tcompile. | 762 Print tcompile. |
763 (** [[ | 763 (** [[ |
850 (* begin hide *) | 850 (* begin hide *) |
851 Hint Rewrite tconcat_correct. | 851 Hint Rewrite tconcat_correct. |
852 (* end hide *) | 852 (* end hide *) |
853 (** %\noindent%[Hint] %\coqdockw{%#<tt>#Rewrite#</tt>#%}% [tconcat_correct.] *) | 853 (** %\noindent%[Hint] %\coqdockw{%#<tt>#Rewrite#</tt>#%}% [tconcat_correct.] *) |
854 | 854 |
855 (** Here we meet the pervasive concept of a %\emph{%#<i>#hint#</i>#%}%. Many proofs can be found through exhaustive enumerations of combinations of possible proof steps; hints provide the set of steps to consider. The tactic [crush] is applying such brute force search for us silently, and it will consider more possibilities as we add more hints. This particular hint asks that the lemma be used for left-to-right rewriting, and we ask for the hint to be added to the hint database called [cpdt], which is the database used by [crush]. In general, segregating hints into different databases is helpful to control the performance of proof search, in cases where domain knowledge allows us to narrow the set of proof steps to be considered in brute force search. Part III of this book considers such pragmatic aspects of proof search in much more detail. | 855 (** Here we meet the pervasive concept of a _hint_. Many proofs can be found through exhaustive enumerations of combinations of possible proof steps; hints provide the set of steps to consider. The tactic [crush] is applying such brute force search for us silently, and it will consider more possibilities as we add more hints. This particular hint asks that the lemma be used for left-to-right rewriting, and we ask for the hint to be added to the hint database called [cpdt], which is the database used by [crush]. In general, segregating hints into different databases is helpful to control the performance of proof search, in cases where domain knowledge allows us to narrow the set of proof steps to be considered in brute force search. Part III of this book considers such pragmatic aspects of proof search in much more detail. |
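As a hypothetical illustration of the named-database form mentioned here (the book's [crush] arranges for its own database behind the scenes), a rewrite hint can be directed at a particular database like so:

    (* Add a left-to-right rewrite hint to the database named "cpdt". *)
    Hint Rewrite tconcat_correct : cpdt.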
856 | 856 |
857 Now we are ready to return to [tcompile_correct'], proving it automatically this time. *) | 857 Now we are ready to return to [tcompile_correct'], proving it automatically this time. *) |
858 | 858 |
859 Lemma tcompile_correct' : forall t (e : texp t) ts (s : vstack ts), | 859 Lemma tcompile_correct' : forall t (e : texp t) ts (s : vstack ts), |
860 tprogDenote (tcompile e ts) s = (texpDenote e, s). | 860 tprogDenote (tcompile e ts) s = (texpDenote e, s). |
872 tprogDenote (tcompile e nil) tt = (texpDenote e, tt). | 872 tprogDenote (tcompile e nil) tt = (texpDenote e, tt). |
873 crush. | 873 crush. |
874 Qed. | 874 Qed. |
875 (* end thide *) | 875 (* end thide *) |
876 | 876 |
877 (** It is probably worth emphasizing that we are doing more than building mathematical models. Our compilers are functional programs that can be executed efficiently. One strategy for doing so is based on %\index{program extraction}\emph{%#<i>#program extraction#</i>#%}%, which generates OCaml code from Coq developments. For instance, we run a command to output the OCaml version of [tcompile]:%\index{Vernacular commands!Extraction}% *) | 877 (** It is probably worth emphasizing that we are doing more than building mathematical models. Our compilers are functional programs that can be executed efficiently. One strategy for doing so is based on %\index{program extraction}%_program extraction_, which generates OCaml code from Coq developments. For instance, we run a command to output the OCaml version of [tcompile]:%\index{Vernacular commands!Extraction}% *) |
878 | 878 |
879 (* begin hide *) | 879 (* begin hide *) |
880 Extraction tcompile. | 880 Extraction tcompile. |
881 (* end hide *) | 881 (* end hide *) |
882 (** %\noindent\coqdockw{%#<tt>#Extraction#</tt>#%}%[ tcompile.] *) | 882 (** %\noindent\coqdockw{%#<tt>#Extraction#</tt>#%}%[ tcompile.] *) |
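As a hedged aside on the same command: to write the extracted OCaml to a file rather than displaying it in the response buffer, the file-producing form of the command may be used, for example:

    (* Write the extracted OCaml code for [tcompile] to a file. *)
    Extraction "tcompile.ml" tcompile.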