comparison src/Match.v @ 386:b911d0df5eee
A pass through Match
author | Adam Chlipala <adam@chlipala.net>
date | Thu, 12 Apr 2012 14:30:53 -0400
parents | 549d604c3d16
children | cc8d0503619f
385:bef6fb896edd | 386:b911d0df5eee |
---|---|
1 (* Copyright (c) 2008-2011, Adam Chlipala | 1 (* Copyright (c) 2008-2012, Adam Chlipala |
2 * | 2 * |
3 * This work is licensed under a | 3 * This work is licensed under a |
4 * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 | 4 * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 |
5 * Unported License. | 5 * Unported License. |
6 * The license text is available at: | 6 * The license text is available at: |
20 | 20 |
21 (** We have seen many examples of proof automation so far, some with tantalizing code snippets from Ltac, Coq's domain-specific language for proof search procedures. This chapter aims to give a bottom-up presentation of the features of Ltac, focusing in particular on the Ltac %\index{tactics!match}%[match] construct, which supports a novel approach to backtracking search. First, though, we will run through some useful automation tactics that are built into Coq. They are described in detail in the manual, so we only outline what is possible. *) | 21 (** We have seen many examples of proof automation so far, some with tantalizing code snippets from Ltac, Coq's domain-specific language for proof search procedures. This chapter aims to give a bottom-up presentation of the features of Ltac, focusing in particular on the Ltac %\index{tactics!match}%[match] construct, which supports a novel approach to backtracking search. First, though, we will run through some useful automation tactics that are built into Coq. They are described in detail in the manual, so we only outline what is possible. *) |
22 | 22 |
23 (** * Some Built-In Automation Tactics *) | 23 (** * Some Built-In Automation Tactics *) |
24 | 24 |
25 (** A number of tactics are called repeatedly by [crush]. The %\index{tactics!intuition}%[intuition] tactic simplifies propositional structure of goals. The %\index{tactics!congruence}%[congruence] tactic applies the rules of equality and congruence closure, plus properties of constructors of inductive types. The %\index{tactics!omega}%[omega] tactic provides a complete decision procedure for a theory that is called %\index{linear arithmetic}%quantifier-free linear arithmetic or %\index{Presburger arithmetic}%Presburger arithmetic, depending on whom you ask. That is, [omega] proves any goal that follows from looking only at parts of that goal that can be interpreted as propositional formulas whose atomic formulas are basic comparison operations on natural numbers or integers. | 25 (** A number of tactics are called repeatedly by [crush]. The %\index{tactics!intuition}%[intuition] tactic simplifies propositional structure of goals. The %\index{tactics!congruence}%[congruence] tactic applies the rules of equality and congruence closure, plus properties of constructors of inductive types. The %\index{tactics!omega}%[omega] tactic provides a complete decision procedure for a theory that is called %\index{linear arithmetic}%quantifier-free linear arithmetic or %\index{Presburger arithmetic}%Presburger arithmetic, depending on whom you ask. That is, [omega] proves any goal that follows from looking only at parts of that goal that can be interpreted as propositional formulas whose atomic formulas are basic comparison operations on natural numbers or integers, with operands built from constants, variables, addition, and subtraction (with multiplication by a constant available as a shorthand for addition or subtraction). |
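For a concrete feel for that fragment, here is a minimal sketch of a goal [omega] dispatches, assuming the standard library's [Omega] module is loaded (the theorem name is invented for illustration):

    Require Import Omega.

    (* Only variables, constants, addition, and constant multiples appear,
       so this is squarely within quantifier-free linear arithmetic. *)
    Theorem omega_demo : forall n m : nat, n + 2 * m >= m.
      intros; omega.
    Qed.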
26 | 26 |
27 The %\index{tactics!ring}%[ring] tactic solves goals by appealing to the axioms of rings or semi-rings (as in algebra), depending on the type involved. Coq developments may declare new types to be parts of rings and semi-rings by proving the associated axioms. There is a similar tactic %\index{tactics!field}\coqdockw{%#<tt>#field#</tt>#%}% for simplifying values in fields by conversion to fractions over rings. Both [ring] and %\coqdockw{%#<tt>#field#</tt>#%}% can only solve goals that are equalities. The %\index{tactics!fourier}\coqdockw{%#<tt>#fourier#</tt>#%}% tactic uses Fourier's method to prove inequalities over real numbers, which are axiomatized in the Coq standard library. | 27 The %\index{tactics!ring}%[ring] tactic solves goals by appealing to the axioms of rings or semi-rings (as in algebra), depending on the type involved. Coq developments may declare new types to be parts of rings and semi-rings by proving the associated axioms. There is a similar tactic %\index{tactics!field}\coqdockw{%#<tt>#field#</tt>#%}% for simplifying values in fields by conversion to fractions over rings. Both [ring] and %\coqdockw{%#<tt>#field#</tt>#%}% can only solve goals that are equalities. The %\index{tactics!fourier}\coqdockw{%#<tt>#fourier#</tt>#%}% tactic uses Fourier's method to prove inequalities over real numbers, which are axiomatized in the Coq standard library. |
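As a small hedged sketch, assuming the standard [ArithRing] module (which registers [nat] as a semi-ring) is loaded:

    Require Import ArithRing.

    (* A polynomial identity over the nat semi-ring, closed by [ring]. *)
    Theorem ring_demo : forall x y : nat,
      (x + y) * (x + y) = x * x + 2 * x * y + y * y.
      intros; ring.
    Qed.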
28 | 28 |
29 The %\index{setoids}\textit{%#<i>#setoid#</i>#%}% facility makes it possible to register new equivalence relations to be understood by tactics like [rewrite]. For instance, [Prop] is registered as a setoid with the equivalence relation %``%#"#if and only if.#"#%''% The ability to register new setoids can be very useful in proofs of a kind common in math, where all reasoning is done after %``%#"#modding out by a relation.#"#%''% | 29 The %\index{setoids}\textit{%#<i>#setoid#</i>#%}% facility makes it possible to register new equivalence relations to be understood by tactics like [rewrite]. For instance, [Prop] is registered as a setoid with the equivalence relation %``%#"#if and only if.#"#%''% The ability to register new setoids can be very useful in proofs of a kind common in math, where all reasoning is done after %``%#"#modding out by a relation.#"#%''% |
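A tiny sketch of the idea, assuming the standard [Setoid] module is loaded: [rewrite] can use an "if and only if" hypothesis directly on a propositional goal.

    Require Import Setoid.

    (* Rewriting the goal [P] with [Hiff : P <-> Q], treating [Prop] as a setoid. *)
    Theorem setoid_demo : forall P Q : Prop, (P <-> Q) -> Q -> P.
      intros P Q Hiff HQ.
      rewrite Hiff.
      assumption.
    Qed.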
30 | 30 |
192 Ltac extend pf := | 192 Ltac extend pf := |
193 let t := type of pf in | 193 let t := type of pf in |
194 notHyp t; generalize pf; intro. | 194 notHyp t; generalize pf; intro. |
195 (* end thide *) | 195 (* end thide *) |
196 | 196 |
197 (** We see the useful %\index{tactics!type of}%[type of] operator of Ltac. This operator could not be implemented in Gallina, but it is easy to support in Ltac. We end up with [t] bound to the type of [pf]. We check that [t] is not already present. If so, we use a [generalize]/[intro] combo to add a new hypothesis proved by [pf]. The tactic %\index{tactics!generalize}%[generalize] takes as input a term [t] (for instance, a proof of some proposition) and then changes the conclusion from [G] to [T -> G], where [T] is the type of [t] (for instance, the proposition proved by a proof given as argument). | 197 (** We see the useful %\index{tactics!type of}%[type of] operator of Ltac. This operator could not be implemented in Gallina, but it is easy to support in Ltac. We end up with [t] bound to the type of [pf]. We check that [t] is not already present. If so, we use a [generalize]/[intro] combo to add a new hypothesis proved by [pf]. The tactic %\index{tactics!generalize}%[generalize] takes as input a term [t] (for instance, a proof of some proposition) and then changes the conclusion from [G] to [T -> G], where [T] is the type of [t] (for instance, the proposition proved by the proof [t]). |
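For instance, a minimal sketch of the [generalize]/[intro] pattern on a trivial goal (names invented for illustration):

    Theorem generalize_demo : 1 + 1 = 2.
      (* [I] proves [True], so [generalize] changes the goal to [True -> 1 + 1 = 2]. *)
      generalize I.
      intro H.
      reflexivity.
    Qed.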
198 | 198 |
199 With these tactics defined, we can write a tactic [completer] for adding to the context all consequences of a set of simple first-order formulas. *) | 199 With these tactics defined, we can write a tactic [completer] for adding to the context all consequences of a set of simple first-order formulas. *) |
200 | 200 |
201 (* begin thide *) | 201 (* begin thide *) |
202 Ltac completer := | 202 Ltac completer := |
306 Abort. | 306 Abort. |
307 (* end thide *) | 307 (* end thide *) |
308 | 308 |
309 (** The problem is that unification variables may not contain locally bound variables. In this case, [?][P] would need to be bound to [x = x], which contains the local quantified variable [x]. By using a wildcard in the earlier version, we avoided this restriction. To understand why this applies to the [completer] tactics, recall that, in Coq, implication is shorthand for degenerate universal quantification where the quantified variable is not used. Nonetheless, in an Ltac pattern, Coq is happy to match a wildcard implication against a universal quantification. | 309 (** The problem is that unification variables may not contain locally bound variables. In this case, [?][P] would need to be bound to [x = x], which contains the local quantified variable [x]. By using a wildcard in the earlier version, we avoided this restriction. To understand why this applies to the [completer] tactics, recall that, in Coq, implication is shorthand for degenerate universal quantification where the quantified variable is not used. Nonetheless, in an Ltac pattern, Coq is happy to match a wildcard implication against a universal quantification. |
310 | 310 |
311 The Coq 8.2 release includes a special pattern form for a unification variable with an explicit set of free variables. That unification variable is then bound to a function from the free variables to the %``%#"#real#"#%''% value. In Coq 8.1 and earlier, there is no such workaround. | 311 The Coq 8.2 release includes a special pattern form for a unification variable with an explicit set of free variables. That unification variable is then bound to a function from the free variables to the %``%#"#real#"#%''% value. In Coq 8.1 and earlier, there is no such workaround. We will see an example of this fancier binding form in the next chapter. |
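For a rough flavor of that form (hedged; the precise syntax varies across Coq versions), the pattern below applies @?P to the bound variable, so P is bound to the function fun x : nat => x = x rather than to a term mentioning a local variable:

    Theorem second_order_demo : forall x : nat, x = x.
      match goal with
        | [ |- forall y, @?P y ] => idtac P   (* prints (fun x : nat => x = x) *)
      end.
      intros; reflexivity.
    Qed.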
312 | 312 |
313 No matter which version you use, it is important to be aware of this restriction. As we have alluded to, the restriction is the culprit behind the infinite-looping behavior of [completer']. We unintentionally match quantified facts with the modus ponens rule, circumventing the %``%#"#already present#"#%''% check and leading to different behavior, where the same fact may be added to the context repeatedly in an infinite loop. Our earlier [completer] tactic uses a modus ponens rule that matches the implication conclusion with a variable, which blocks matching against non-trivial universal quantifiers. *) | 313 No matter which Coq version you use, it is important to be aware of this restriction. As we have alluded to, the restriction is the culprit behind the infinite-looping behavior of [completer']. We unintentionally match quantified facts with the modus ponens rule, circumventing the %``%#"#already present#"#%''% check and leading to different behavior, where the same fact may be added to the context repeatedly in an infinite loop. Our earlier [completer] tactic uses a modus ponens rule that matches the implication conclusion with a variable, which blocks matching against non-trivial universal quantifiers. *) |
314 | 314 |
315 | 315 |
316 (** * Functional Programming in Ltac *) | 316 (** * Functional Programming in Ltac *) |
317 | 317 |
318 (* EX: Write a list length function in Ltac. *) | 318 (* EX: Write a list length function in Ltac. *) |
433 *) | 433 *) |
434 | 434 |
435 Abort. | 435 Abort. |
436 (* end thide *) | 436 (* end thide *) |
437 | 437 |
438 (** One other gotcha shows up when we want to debug our Ltac functional programs. We might expect the following code to work, to give us a version of [length] that prints a debug trace of the arguments it is called with. *) | 438 (** Each position within an Ltac script has a default applicable non-terminal, where [constr] and [ltac] are the main options worth thinking about, standing respectively for terms of Gallina and Ltac. The explicit colon notation can always be used to override the default non-terminal choice, though code being parsed as Gallina can no longer use such overrides. Within the [ltac] non-terminal, top-level function applications are treated as applications in Ltac, not Gallina; but the %\emph{%#<i>#arguments#</i>#%}% to such functions are parsed with [constr] by default. This choice may seem strange, until we realize that we have been relying on it all along in all the proof scripts we write! For instance, the [apply] tactic is an Ltac function, and it is natural to interpret its argument as a term of Gallina, not Ltac. We use an [ltac] prefix to parse Ltac function arguments as Ltac terms themselves, as in the call to [map] above. For some simple cases, Ltac terms may be passed without an extra prefix. For instance, an identifier that has an Ltac meaning but no Gallina meaning will be interpreted in Ltac automatically. |
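A tiny sketch of that distinction with a made-up combinator: the [ltac] prefix at the call site makes the argument parse as an Ltac tactic rather than as a Gallina term.

    (* A hypothetical combinator that runs its tactic argument twice. *)
    Ltac do_twice tac := tac; tac.

    Theorem do_twice_demo : True /\ True.
      (* First [constructor] applies [conj]; the second solves each [True] subgoal with [I]. *)
      do_twice ltac:(constructor).
    Qed.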
439 | |
440 One other gotcha shows up when we want to debug our Ltac functional programs. We might expect the following code to work, to give us a version of [length] that prints a debug trace of the arguments it is called with. *) | |
439 | 441 |
440 (* begin thide *) | 442 (* begin thide *) |
441 (* begin hide *) | 443 (* begin hide *) |
442 Reset length. | 444 Reset length. |
443 (* end hide *) | 445 (* end hide *) |
463 << | 465 << |
464 Error: variable n should be bound to a term. | 466 Error: variable n should be bound to a term. |
465 >> *) | 467 >> *) |
466 Abort. | 468 Abort. |
467 | 469 |
468 (** What is going wrong here? The answer has to do with the dual status of Ltac as both a purely functional and an imperative programming language. The basic programming language is purely functional, but tactic scripts are one %``%#"#datatype#"#%''% that can be returned by such programs, and Coq will run such a script using an imperative semantics that mutates proof states. Readers familiar with %\index{monad}\index{Haskell}%monadic programming in Haskell%~\cite{monads,IO}% may recognize a similarity. Side-effecting Haskell programs can be thought of as pure programs that return %\emph{%#<i>#the code of programs in an imperative language#</i>#%}%, where some out-of-band mechanism takes responsibility for running these derived programs. In this way, Haskell remains pure, while supporting usual input-output side effects and more. Ltac uses the same basic mechanism, but in a dynamically typed setting. Here the embedded imperative language includes all the tactics we have been applying so far. | 470 (** What is going wrong here? The answer has to do with the dual status of Ltac as both a purely functional and an imperative programming language. The basic programming language is purely functional, but tactic scripts are one %``%#"#datatype#"#%''% that can be returned by such programs, and Coq will run such a script using an imperative semantics that mutates proof states. Readers familiar with %\index{monad}\index{Haskell}%monadic programming in Haskell%~\cite{Monads,IO}% may recognize a similarity. Side-effecting Haskell programs can be thought of as pure programs that return %\emph{%#<i>#the code of programs in an imperative language#</i>#%}%, where some out-of-band mechanism takes responsibility for running these derived programs. In this way, Haskell remains pure, while supporting usual input-output side effects and more. Ltac uses the same basic mechanism, but in a dynamically typed setting. Here the embedded imperative language includes all the tactics we have been applying so far. |
469 | 471 |
470 Even basic [idtac] is an embedded imperative program, so we may not automatically mix it with purely functional code. In fact, a semicolon operator alone marks a span of Ltac code as an embedded tactic script. This makes some amount of sense, since pure functional languages have no need for sequencing: since they lack side effects, there is no reason to run an expression and then just throw away its value and move on to another expression. | 472 Even basic [idtac] is an embedded imperative program, so we may not automatically mix it with purely functional code. In fact, a semicolon operator alone marks a span of Ltac code as an embedded tactic script. This makes some amount of sense, since pure functional languages have no need for sequencing: since they lack side effects, there is no reason to run an expression and then just throw away its value and move on to another expression. |
471 | 473 |
472 The solution is like in Haskell: we must %``%#"#monadify#"#%''% our pure program to give it access to side effects. The trouble is that the embedded tactic language has no [return] construct. Proof scripts are about proving theorems, not calculating results. We can apply a somewhat awkward workaround that requires translating our program into %\index{continuation-passing style}\emph{%#<i>#continuation-passing style#</i>#%}%, a program structuring idea popular in functional programming. *) | 474 The solution is like in Haskell: we must %``%#"#monadify#"#%''% our pure program to give it access to side effects. The trouble is that the embedded tactic language has no [return] construct. Proof scripts are about proving theorems, not calculating results. We can apply a somewhat awkward workaround that requires translating our program into %\index{continuation-passing style}\emph{%#<i>#continuation-passing style#</i>#%}%, a program structuring idea popular in functional programming. *) |
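A hedged sketch of what such a continuation-passing version might look like (written with a primed name to keep it distinct from the definitions above): instead of returning a length, the tactic passes the result to a continuation [k], so the [idtac] trace sequences naturally with the rest of the computation.

    Ltac length' ls k :=
      idtac ls;                                  (* the debug trace *)
      match ls with
        | nil => k O
        | _ :: ?ls' => length' ls' ltac:(fun n => k (S n))
      end.

    Theorem cps_demo : True.
      (* Prints each suffix of the list, then adds a definition for the length 3. *)
      length' (1 :: 2 :: 3 :: nil) ltac:(fun n => pose n).
      constructor.
    Qed.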
473 | 475 |
497 ]] | 499 ]] |
498 *) | 500 *) |
499 Abort. | 501 Abort. |
500 (* end thide *) | 502 (* end thide *) |
501 | 503 |
502 (** We see exactly the trace of function arguments that we expected initially, and an examination of the proof state afterward would show that variable [n] has been added with value [3]. *) | 504 (** We see exactly the trace of function arguments that we expected initially, and an examination of the proof state afterward would show that variable [n] has been added with value [3]. |
505 | |
506 Considering the comparison with Haskell's IO monad, there is an important subtlety that deserves to be mentioned. A Haskell IO computation represents (theoretically speaking, at least) a transformer from one state of the real world to another, plus a pure value to return. Some of the state can be very specific to the program, as in the case of heap-allocated mutable references, but some can be along the lines of the favorite example %``%#"#launch missile,#"#%''% where the program has a side effect on the real world that is not possible to undo. | |
507 | |
508 In contrast, Ltac scripts can be thought of as controlling just two simple kinds of mutable state. First, there is the current sequence of proof subgoals. Second, there is a partial assignment of discovered values to unification variables introduced by proof search (for instance, by [eauto], as we saw in the previous chapter). Crucially, %\emph{%#<i>#every mutation of this state can be undone#</i>#%}% during backtracking introduced by [match], [auto], and other built-in Ltac constructs. Ltac proof scripts have state, but it is purely local, and all changes to it are reversible, which is a very useful semantics for proof search. *) | |
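A small sketch of that reversibility: the first branch below commits the unification variable to [1] and then fails, and the [match]'s backtracking rolls the instantiation back before the second branch runs.

    Theorem undo_demo : exists n : nat, n = 0.
      eexists.
      match goal with
        | [ |- _ = 0 ] => instantiate (1 := 1); reflexivity  (* fails on [1 = 0] *)
        | [ |- _ = 0 ] => instantiate (1 := 0); reflexivity  (* succeeds after rollback *)
      end.
    Qed.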
503 | 509 |
504 | 510 |
505 (** * Recursive Proof Search *) | 511 (** * Recursive Proof Search *) |
506 | 512 |
507 (** Deciding how to instantiate quantifiers is one of the hardest parts of automated first-order theorem proving. For a given problem, we can consider all possible bounded-length sequences of quantifier instantiations, applying only propositional reasoning at the end. This is probably a bad idea for almost all goals, but it makes for a nice example of recursive proof search procedures in Ltac. | 513 (** Deciding how to instantiate quantifiers is one of the hardest parts of automated first-order theorem proving. For a given problem, we can consider all possible bounded-length sequences of quantifier instantiations, applying only propositional reasoning at the end. This is probably a bad idea for almost all goals, but it makes for a nice example of recursive proof search procedures in Ltac. |
517 | [ H : forall x : ?T, _, x : ?T |- _ ] => generalize (H x); inster n' | 523 | [ H : forall x : ?T, _, x : ?T |- _ ] => generalize (H x); inster n' |
518 end | 524 end |
519 end. | 525 end. |
520 (* end thide *) | 526 (* end thide *) |
521 | 527 |
522 (** The tactic begins by applying propositional simplification. Next, it checks if any chain length remains. If so, it tries all possible ways of instantiating quantified hypotheses with properly typed local variables. It is critical to realize that, if the recursive call [inster n'] fails, then the [match goal] just seeks out another way of unifying its pattern against proof state. Thus, this small amount of code provides an elegant demonstration of how backtracking [match] enables exhaustive search. | 528 (** The tactic begins by applying propositional simplification. Next, it checks if any chain length remains, failing if not. If so, it tries all possible ways of instantiating quantified hypotheses with properly typed local variables. It is critical to realize that, if the recursive call [inster n'] fails, then the [match goal] just seeks out another way of unifying its pattern against proof state. Thus, this small amount of code provides an elegant demonstration of how backtracking [match] enables exhaustive search. |
523 | 529 |
524 We can verify the efficacy of [inster] with two short examples. The built-in [firstorder] tactic (with no extra arguments) is able to prove the first but not the second. *) | 530 We can verify the efficacy of [inster] with two short examples. The built-in [firstorder] tactic (with no extra arguments) is able to prove the first but not the second. *) |
525 | 531 |
526 Section test_inster. | 532 Section test_inster. |
527 Variable A : Set. | 533 Variable A : Set. |
541 Theorem test_inster2 : forall x y, x <> y -> P x -> Q (f y) -> Q (f x). | 547 Theorem test_inster2 : forall x y, x <> y -> P x -> Q (f y) -> Q (f x). |
542 inster 3. | 548 inster 3. |
543 Qed. | 549 Qed. |
544 End test_inster. | 550 End test_inster. |
545 | 551 |
546 (** The style employed in the definition of [inster] can seem very counterintuitive to functional programmers. Usually, functional programs accumulate state changes in explicit arguments to recursive functions. In Ltac, the state of the current subgoal is always implicit. Nonetheless, in contrast to general imperative programming, it is easy to undo any changes to this state, and indeed such %``%#"#undoing#"#%''% happens automatically at failures within [match]es. In this way, Ltac programming is similar to programming in Haskell with a stateful failure monad that supports a composition operator along the lines of the [first] tactical. The key pieces of state include not only the form of the goal, but also decisions about the values of unification variables. These decisions are rolled back with all the other state after failure. | 552 (** The style employed in the definition of [inster] can seem very counterintuitive to functional programmers. Usually, functional programs accumulate state changes in explicit arguments to recursive functions. In Ltac, the state of the current subgoal is always implicit. Nonetheless, recalling the discussion at the end of the last section, in contrast to general imperative programming, it is easy to undo any changes to this state, and indeed such %``%#"#undoing#"#%''% happens automatically at failures within [match]es. In this way, Ltac programming is similar to programming in Haskell with a stateful failure monad that supports a composition operator along the lines of the [first] tactical. |
547 | 553 |
548 Functional programming purists may react indignantly to the suggestion of programming this way. Nonetheless, as with other kinds of %``%#"#monadic programming,#"#%''% many problems are much simpler to solve with Ltac than they would be with explicit, pure proof manipulation in ML or Haskell. To demonstrate, we will write a basic simplification procedure for logical implications. | 554 Functional programming purists may react indignantly to the suggestion of programming this way. Nonetheless, as with other kinds of %``%#"#monadic programming,#"#%''% many problems are much simpler to solve with Ltac than they would be with explicit, pure proof manipulation in ML or Haskell. To demonstrate, we will write a basic simplification procedure for logical implications. |
549 | 555 |
550 This procedure is inspired by one for separation logic%~\cite{separation}%, where conjuncts in formulas are thought of as %``%#"#resources,#"#%''% such that we lose no completeness by %``%#"#crossing out#"#%''% equal conjuncts on the two sides of an implication. This process is complicated by the fact that, for reasons of modularity, our formulas can have arbitrary nested tree structure (branching at conjunctions) and may include existential quantifiers. It is helpful for the matching process to %``%#"#go under#"#%''% quantifiers and in fact decide how to instantiate existential quantifiers in the conclusion. | 556 This procedure is inspired by one for separation logic%~\cite{separation}%, where conjuncts in formulas are thought of as %``%#"#resources,#"#%''% such that we lose no completeness by %``%#"#crossing out#"#%''% equal conjuncts on the two sides of an implication. This process is complicated by the fact that, for reasons of modularity, our formulas can have arbitrary nested tree structure (branching at conjunctions) and may include existential quantifiers. It is helpful for the matching process to %``%#"#go under#"#%''% quantifiers and in fact decide how to instantiate existential quantifiers in the conclusion. |
551 | 557 |
680 Theorem imp_True : forall P, | 686 Theorem imp_True : forall P, |
681 P --> True. | 687 P --> True. |
682 imp. | 688 imp. |
683 Qed. | 689 Qed. |
684 | 690 |
685 (** Our final [matcher] tactic is now straightforward. First, we [intros] all variables into scope. Then we attempt simple premise simplifications, finishing the proof upon finding [False] and eliminating any existential quantifiers that we find. After that, we search through the conclusion. We remove [True] conjuncts, remove existential quantifiers by introducing unification variables for their bound variables, and search for matching premises to cancel. Finally, when no more progress is made, we see if the goal has become trivial and can be solved by [imp_True]. In each case, we use the tactic [simple apply] in place of [apply] to use a simpler, less expensive unification algorithm. *) | 691 (** Our final [matcher] tactic is now straightforward. First, we [intros] all variables into scope. Then we attempt simple premise simplifications, finishing the proof upon finding [False] and eliminating any existential quantifiers that we find. After that, we search through the conclusion. We remove [True] conjuncts, remove existential quantifiers by introducing unification variables for their bound variables, and search for matching premises to cancel. Finally, when no more progress is made, we see if the goal has become trivial and can be solved by [imp_True]. In each case, we use the tactic %\index{tactics!simple apply}%[simple apply] in place of [apply] to use a simpler, less expensive unification algorithm. *) |
686 | 692 |
687 Ltac matcher := | 693 Ltac matcher := |
688 intros; | 694 intros; |
689 repeat search_prem ltac:( simple apply False_prem || ( simple apply ex_prem; intro)); | 695 repeat search_prem ltac:( simple apply False_prem || ( simple apply ex_prem; intro)); |
690 repeat search_conc ltac:( simple apply True_conc || simple eapply ex_conc | 696 repeat search_conc ltac:( simple apply True_conc || simple eapply ex_conc |
744 (ex_conc (fun x0 : nat => P x0) x | 750 (ex_conc (fun x0 : nat => P x0) x |
745 (Match (P:=P x) (imp_True (P:=True)))))))) | 751 (Match (P:=P x) (imp_True (P:=True)))))))) |
746 : forall (P : nat -> Prop) (Q : Prop), | 752 : forall (P : nat -> Prop) (Q : Prop), |
747 (exists x : nat, P x /\ Q) --> Q /\ (exists x : nat, P x) | 753 (exists x : nat, P x /\ Q) --> Q /\ (exists x : nat, P x) |
748 ]] | 754 ]] |
749 *) | 755 |
756 This proof term is a mouthful, and we can be glad that we did not build it manually! *) | |
750 | 757 |
751 | 758 |
752 (** * Creating Unification Variables *) | 759 (** * Creating Unification Variables *) |
753 | 760 |
754 (** A final useful ingredient in tactic crafting is the ability to allocate new unification variables explicitly. Tactics like [eauto] introduce unification variable internally to support flexible proof search. While [eauto] and its relatives do %\textit{%#<i>#backward#</i>#%}% reasoning, we often want to do similar %\textit{%#<i>#forward#</i>#%}% reasoning, where unification variables can be useful for similar reasons. | 761 (** A final useful ingredient in tactic crafting is the ability to allocate new unification variables explicitly. Tactics like [eauto] introduce unification variables internally to support flexible proof search. While [eauto] and its relatives do %\textit{%#<i>#backward#</i>#%}% reasoning, we often want to do similar %\textit{%#<i>#forward#</i>#%}% reasoning, where unification variables can be useful for similar reasons. |
755 | 762 |
756 For example, we can write a tactic that instantiates the quantifiers of a universally quantified hypothesis. The tactic should not need to know what the appropriate instantiations are; rather, we want these choices filled with placeholders. We hope that, when we apply the specialized hypothesis later, syntactic unification will determine concrete values. | 763 For example, we can write a tactic that instantiates the quantifiers of a universally quantified hypothesis. The tactic should not need to know what the appropriate instantiations are; rather, we want these choices filled with placeholders. We hope that, when we apply the specialized hypothesis later, syntactic unification will determine concrete values. |
757 | 764 |
758 Before we are ready to write a tactic, we can try out its ingredients one at a time. *) | 765 Before we are ready to write a tactic, we can try out its ingredients one at a time. *) |
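A hedged sketch of how the first ingredient might be exercised, standing in for the elided setup; the steps shown below ([eval unfold], [specialize], and the final [apply]) pick up from this point. The [evar] tactic extends the context with a new variable defined to be a fresh unification variable (the display below calls it [?279]).

    Theorem evar_demo : (forall x : nat, S x > x) -> 2 > 1.
      intro H.
      (* The context now also contains [y := ?e : nat] for a fresh unification variable. *)
      evar (y : nat).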
759 | 766 |
780 ]] | 787 ]] |
781 | 788 |
782 The proof context is extended with a new variable [y], which has been assigned to be equal to a fresh unification variable [?279]. We want to instantiate [H] with [?279]. To get ahold of the new unification variable, rather than just its alias [y], we perform a trivial unfolding in the expression [y], using the %\index{tactics!eval}%[eval] Ltac construct, which works with the same reduction strategies that we have seen in tactics (e.g., [simpl], [compute], etc.). *) | 789 The proof context is extended with a new variable [y], which has been assigned to be equal to a fresh unification variable [?279]. We want to instantiate [H] with [?279]. To get ahold of the new unification variable, rather than just its alias [y], we perform a trivial unfolding in the expression [y], using the %\index{tactics!eval}%[eval] Ltac construct, which works with the same reduction strategies that we have seen in tactics (e.g., [simpl], [compute], etc.). *) |
783 | 790 |
784 let y' := eval unfold y in y in | 791 let y' := eval unfold y in y in |
785 clear y; generalize (H y'). | 792 clear y; specialize (H y'). |
786 | 793 |
787 (** [[ | 794 (** [[ |
788 H : forall x : nat, S x > x | 795 H : S ?279 > ?279 |
789 ============================ | 796 ============================ |
790 S ?279 > ?279 -> 2 > 1 | 797 2 > 1 |
791 | 798 |
792 ]] | 799 ]] |
793 | 800 |
794 Our instantiation was successful. We can finish by using the refined formula to replace the original. *) | 801 Our instantiation was successful. We can finish the proof by using [apply]'s unification to figure out the proper value of [?279]. *) |
795 | |
796 clear H; intro H. | |
797 | |
798 (** [[ | |
799 H : S ?281 > ?281 | |
800 ============================ | |
801 2 > 1 | |
802 | |
803 ]] | |
804 | |
805 We can finish the proof by using [apply]'s unification to figure out the proper value of [?281]. (The original unification variable was replaced by another, as often happens in the internals of the various tactics' implementations.) *) | |
806 | 802 |
807 apply H. | 803 apply H. |
808 Qed. | 804 Qed. |
809 | 805 |
810 (** Now we can write a tactic that encapsulates the pattern we just employed, instantiating all quantifiers of a particular hypothesis. *) | 806 (** Now we can write a tactic that encapsulates the pattern we just employed, instantiating all quantifiers of a particular hypothesis. *) |
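A hedged sketch of one way such a tactic might look (the name and details are illustrative, not necessarily the elided original): for each outermost quantifier, allocate a unification variable with [evar] and [specialize] the hypothesis to it.

    Ltac inster_all H :=
      repeat match type of H with
               | forall x : ?T, _ =>
                 (* Allocate a fresh unification variable of the quantifier's domain type. *)
                 let y := fresh "y" in
                   evar (y : T);
                   let y' := eval unfold y in y in
                     clear y; specialize (H y')
             end.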
855 ============================ | 851 ============================ |
856 exists u1 : B, exists u2 : B, P (f v1 v2) (g u1 u2) | 852 exists u1 : B, exists u2 : B, P (f v1 v2) (g u1 u2) |
857 | 853 |
858 ]] | 854 ]] |
859 | 855 |
860 [eauto] still cannot prove the goal, so we eliminate the two new existential quantifiers. (Recall that [ex] is the underlying type family to which uses of the [exists] syntax are compiled.) *) | 856 Normal [eauto] still cannot prove the goal, so we eliminate the two new existential quantifiers. (Recall that [ex] is the underlying type family to which uses of the [exists] syntax are compiled.) *) |
861 | 857 |
862 repeat match goal with | 858 repeat match goal with |
863 | [ H : ex _ |- _ ] => destruct H | 859 | [ H : ex _ |- _ ] => destruct H |
864 end. | 860 end. |
865 | 861 |
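The [equate] tactic used in the next proof forces two terms to be unified; a hedged sketch of one way to write such a tactic is to assert a trivial equality, letting unification instantiate any unification variables inside [x], and then discard the proof.

    Ltac equate x y :=
      let H := fresh "H" in
        assert (H : x = y) by reflexivity; clear H.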
998 Theorem t9 : exists p : nat * nat, fst p = 3. | 994 Theorem t9 : exists p : nat * nat, fst p = 3. |
999 econstructor; match goal with | 995 econstructor; match goal with |
1000 | [ |- fst ?x = 3 ] => equate x (3, 2) | 996 | [ |- fst ?x = 3 ] => equate x (3, 2) |
1001 end; reflexivity. | 997 end; reflexivity. |
1002 Qed. | 998 Qed. |
999 | |
1000 (** This technique is even more useful within recursive and iterative tactics that are meant to solve broad classes of goals. *) |