annotate src/Exercises.v @ 534:ed829eaa91b2

Builds with Coq 8.5beta2
author Adam Chlipala <adam@chlipala.net>
date Wed, 05 Aug 2015 14:46:55 -0400
parents 5f25705a10ea
children
rev   line source
adam@534 1 (* Copyright (c) 2008-2012, 2015, Adam Chlipala
adam@370 2 *
adam@370 3 * This work is licensed under a
adam@370 4 * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
adam@370 5 * Unported License.
adam@370 6 * The license text is available at:
adam@370 7 * http://creativecommons.org/licenses/by-nc-nd/3.0/
adam@370 8 *)
adam@370 9
adam@370 10 (* begin hide *)
adam@370 11 Require Import List.
adam@370 12
adam@534 13 Require Import Cpdt.CpdtTactics.
adam@370 14 (* end hide *)
adam@370 15
adam@370 16 (** These exercises were originally included inline in the text, but my latest feeling is that I don't have the time to maintain them at a quality level matching what I'm targeting for the rest of the book. I'm including them in this file for now. *)
adam@370 17
adam@370 18 (** * From InductiveTypes *)
adam@370 19
adam@370 20 (** %\begin{enumerate}%#<ol>#
adam@370 21
adam@370 22 %\item%#<li># Define an inductive type [truth] with three constructors, [Yes], [No], and [Maybe]. [Yes] stands for certain truth, [No] for certain falsehood, and [Maybe] for an unknown situation. Define %``%#"#not,#"#%''% %``%#"#and,#"#%''% and %``%#"#or#"#%''% for this replacement boolean algebra. Prove that your implementation of %``%#"#and#"#%''% is commutative and distributes over your implementation of %``%#"#or.#"#%''% (One possible shape for these definitions appears in the sketch just after this list.)#</li>#
adam@370 23
adam@370 24 %\item%#<li># Define an inductive type [slist] that implements lists with support for constant-time concatenation. This type should be polymorphic in a choice of type for data values in lists. The type [slist] should have three constructors, for empty lists, singleton lists, and concatenation. Define a function [flatten] that converts [slist]s to [list]s. (You will want to run [Require Import] %\coqdocconstructor{%#<tt>#List#</tt>#%}%[.] to bring list definitions into scope.) Finally, prove that [flatten] distributes over concatenation, where the two sides of your quantified equality will use the [slist] and [list] versions of concatenation, as appropriate. Recall from Chapter 2 that the infix operator [++] is syntactic sugar for the [list] concatenation function [app].#</li>#
adam@370 25
adam@370 26 %\item%#<li># Modify the first example language of Chapter 2 to include variables, where variables are represented with [nat]. Extend the syntax and semantics of expressions to accommodate the change. Your new [expDenote] function should take as an extra first argument a value of type [var -> nat], where [var] is a synonym for naturals-as-variables, and the function assigns a value to each variable. Define a constant folding function that does a bottom-up pass over an expression, at each stage replacing every binary operation on constants with an equivalent constant. Prove that constant folding preserves the meanings of expressions.#</li>#
adam@370 27
adam@370 28 %\item%#<li># Reimplement the second example language of Chapter 2 to use mutually inductive types instead of dependent types. That is, define two separate (non-dependent) inductive types [nat_exp] and [bool_exp] for expressions of the two different types, rather than a single indexed type. To keep things simple, you may consider only the binary operators that take naturals as operands. Add natural number variables to the language, as in the last exercise, and add an %``%#"#if#"#%''% expression form taking as arguments one boolean expression and two natural number expressions. Define semantics and constant-folding functions for this new language. Your constant folding should simplify not just binary operations (returning naturals or booleans) with known arguments, but also %``%#"#if#"#%''% expressions with known values for their test expressions but possibly undetermined %``%#"#then#"#%''% and %``%#"#else#"#%''% cases. Prove that constant-folding a natural number expression preserves its meaning.#</li>#
adam@370 29
adam@370 30 %\item%#<li># Define mutually inductive types of even and odd natural numbers, such that any natural number is isomorphic to a value of one of the two types. (This problem does not ask you to prove that correspondence, though some interpretations of the task may be interesting exercises.) Write a function that computes the sum of two even numbers, such that the function type guarantees that the output is even as well. Prove that this function is commutative.#</li>#
adam@370 31
adam@370 32 %\item%#<li># Using a reflexive inductive definition, define a type [nat_tree] of infinitary trees, with natural numbers at their leaves and a countable infinity of new trees branching out of each internal node. Define a function [increment] that increments the number in every leaf of a [nat_tree]. Define a function [leapfrog] over a natural [i] and a tree [nt]. [leapfrog] should recurse into the [i]th child of [nt], the [i+1]st child of that node, the [i+2]nd child of the next node, and so on, until reaching a leaf, in which case [leapfrog] should return the number at that leaf. Prove that calling [increment] on a tree increases by one the result of any call to [leapfrog] on that tree.#</li>#
adam@370 33
adam@370 34 %\item%#<li># Define a type of trees of trees of trees of (repeat to infinity). That is, define an inductive type [trexp], whose members are either base cases containing natural numbers or binary trees of [trexp]s. Base your definition on a parameterized binary tree type [btree] that you will also define, so that [trexp] is defined as a nested inductive type. Define a function [total] that sums all of the naturals at the leaves of a [trexp]. Define a function [increment] that increments every leaf of a [trexp] by one. Prove that, for all [tr], [total (increment tr) >= total tr]. On the way to finishing this proof, you will probably want to prove a lemma and add it as a hint using the syntax [Hint Resolve name_of_lemma.].#</li>#
adam@370 35
adam@370 36 %\item%#<li># Prove discrimination and injectivity theorems for the [nat_btree] type defined earlier in this chapter. In particular, without using the tactics [discriminate], [injection], or [congruence], prove that no leaf equals any node, and prove that two equal nodes carry the same natural number.#</li>#
adam@370 37
adam@370 38 #</ol>#%\end{enumerate}% *)
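
(** A possible starting point for the first exercise above, with the constructor names fixed by the exercise statement and everything else only a suggestion:
[[
Inductive truth : Set := Yes | No | Maybe.

Definition not (t : truth) : truth :=
  match t with
    | Yes => No
    | No => Yes
    | Maybe => Maybe
  end.
]]
The %``%#"#and#"#%''% and %``%#"#or#"#%''% operations follow the same pattern, treating [Maybe] as an unknown that certain falsehood or certain truth, respectively, can override. *)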
adam@370 39
adam@370 40
adam@370 41
adam@370 42 (** * From Predicates *)
adam@370 43
adam@370 44 (** %\begin{enumerate}%#<ol>#
adam@370 45
adam@370 46 %\item%#<li># Prove these tautologies of propositional logic, using only the tactics [apply], [assumption], %\coqdockw{%#<tt>#constructor#</tt>#%}%, [destruct], [intro], [intros], %\coqdockw{%#<tt>#left#</tt>#%}%, %\coqdockw{%#<tt>#right#</tt>#%}%, [split], and [unfold].
adam@370 47 %\begin{enumerate}%#<ol>#
adam@370 48 %\item%#<li># [(][True \/ False) /\ (][False \/ True)]#</li>#
adam@370 49 %\item%#<li># [P -> ~ ~ P]#</li>#
adam@370 50 %\item%#<li># [P /\ (][Q \/ R) -> (][P /\ Q) \/ (][P /\ R)]#</li>#
adam@370 51 #</ol> </li>#%\end{enumerate}%
adam@370 52
adam@370 53 %\item%#<li># Prove the following tautology of first-order logic, using only the tactics [apply], [assert], [assumption], [destruct], [eapply], %\coqdockw{%#<tt>#eassumption#</tt>#%}%, and %\coqdockw{%#<tt>#exists#</tt>#%}%. You will probably find the [assert] tactic useful for stating and proving an intermediate lemma, enabling a kind of %``%#"#forward reasoning,#"#%''% in contrast to the %``%#"#backward reasoning#"#%''% that is the default for Coq tactics. The tactic %\coqdockw{%#<tt>#eassumption#</tt>#%}% is a version of [assumption] that will do matching of unification variables. Let some variable [T] of type [Set] be the set of individuals. [x] is a constant symbol, [p] is a unary predicate symbol, [q] is a binary predicate symbol, and [f] is a unary function symbol.
adam@370 54 %\begin{enumerate}%#<ol>#
adam@370 55 %\item%#<li># [p x -> (][forall x, p x -> exists y, q x y) -> (][forall x y, q x y -> q y (f y)) -> exists z, q z (f z)]#</li>#
adam@370 56 #</ol> </li>#%\end{enumerate}%
adam@370 57
adam@370 58 %\item%#<li># Define an inductive predicate capturing when a natural number is an integer multiple of either 6 or 10. Prove that 13 does not satisfy your predicate, and prove that any number satisfying the predicate is not odd. It is probably easiest to prove the second theorem by indicating %``%#"#odd-ness#"#%''% as equality to [2 * n + 1] for some [n].#</li>#
adam@370 59
adam@370 60 %\item%#<li># Define a simple programming language, its semantics, and its typing rules, and then prove that well-typed programs cannot go wrong. Specifically:
adam@370 61 %\begin{enumerate}%#<ol>#
adam@370 62 %\item%#<li># Define [var] as a synonym for the natural numbers.#</li>#
adam@370 63 %\item%#<li># Define an inductive type [exp] of expressions, containing natural number constants, natural number addition, pairing of two other expressions, extraction of the first component of a pair, extraction of the second component of a pair, and variables (based on the [var] type you defined).#</li>#
adam@370 64 %\item%#<li># Define an inductive type [cmd] of commands, containing expressions and variable assignments. A variable assignment node should contain the variable being assigned, the expression being assigned to it, and the command to run afterward.#</li>#
adam@370 65 %\item%#<li># Define an inductive type [val] of values, containing natural number constants and pairings of values.#</li>#
adam@370 66 %\item%#<li># Define a type of variable assignments, which assign a value to each variable.#</li>#
adam@370 67 %\item%#<li># Define a big-step evaluation relation [eval], capturing what it means for an expression to evaluate to a value under a particular variable assignment. %``%#"#Big step#"#%''% means that the evaluation of every expression should be proved with a single instance of the inductive predicate you will define. For instance, %``%#"#[1 + 1] evaluates to [2] under assignment [va]#"#%''% should be derivable for any assignment [va].#</li>#
adam@370 68 %\item%#<li># Define a big-step evaluation relation [run], capturing what it means for a command to run to a value under a particular variable assignment. The value of a command is the result of evaluating its final expression.#</li>#
adam@370 69 %\item%#<li># Define a type of variable typings, which are like variable assignments, but map variables to types instead of values. You might use polymorphism to share some code with your variable assignments.#</li>#
adam@370 70 %\item%#<li># Define typing judgments for expressions, values, and commands. The expression and command cases will be in terms of a typing assignment.#</li>#
adam@370 71 %\item%#<li># Define a predicate [varsType] to express when a variable assignment and a variable typing agree on the types of variables.#</li>#
adam@370 72 %\item%#<li># Prove that any expression that has type [t] under variable typing [vt] evaluates under variable assignment [va] to some value that also has type [t] in [vt], as long as [va] and [vt] agree.#</li>#
adam@370 73 %\item%#<li># Prove that any command that has type [t] under variable typing [vt] evaluates under variable assignment [va] to some value that also has type [t] in [vt], as long as [va] and [vt] agree.#</li>#
adam@370 74 #</ol>#%\end{enumerate}%
adam@370 75 A few hints that may be helpful:
adam@370 76 %\begin{enumerate}%#<ol>#
adam@370 77 %\item%#<li># One easy way of defining variable assignments and typings is to define both as instances of a polymorphic map type. The map type at parameter [T] can be defined to be the type of arbitrary functions from variables to [T]. A helpful function for implementing insertion into such a functional map is [eq_nat_dec], which you can make available with [Require Import Arith.]. [eq_nat_dec] has a dependent type that tells you that it makes accurate decisions on whether two natural numbers are equal, but you can use it as if it returned a boolean, e.g., [if eq_nat_dec n m then E1 else E2]. (A sketch of such a functional map appears just after this list of hints.)#</li>#
adam@370 78 %\item%#<li># If you follow the last hint, you may find yourself writing a proof that involves an expression with [eq_nat_dec] that you would like to simplify. Running [destruct] on the particular call to [eq_nat_dec] should do the trick. You can automate this advice with a piece of Ltac: [[
adam@370 79 match goal with
adam@370 80 | [ |- context[eq_nat_dec ?X ?Y] ] => destruct (eq_nat_dec X Y)
adam@370 81 end
adam@370 82 ]]
adam@370 83 #</li>#
adam@370 84 %\item%#<li># You probably do not want to use an inductive definition for compatibility of variable assignments and typings.#</li>#
adam@370 85 %\item%#<li># The [CpdtTactics] module from this book contains a variant [crush'] of [crush]. [crush'] takes two arguments. The first argument is a list of lemmas and other functions to be tried automatically in %``%#"#forward reasoning#"#%''% style, where we add new facts without being sure yet that they link into a proof of the conclusion. The second argument is a list of predicates on which inversion should be attempted automatically. For instance, running [crush' (lemma1, lemma2) pred] will search for chances to apply [lemma1] and [lemma2] to hypotheses that are already available, adding the new concluded fact if suitable hypotheses can be found. Inversion will be attempted on any hypothesis using [pred], but only those inversions that narrow the field of possibilities to one possible rule will be kept. The format of the list arguments to [crush'] is that you can pass an empty list as [tt], a singleton list as the unadorned single element, and a multiple-element list as a tuple of the elements.#</li>#
adam@370 86 %\item%#<li># If you want [crush'] to apply polymorphic lemmas, you may have to do a little extra work, if the type parameter is not a free variable of your proof context (so that [crush'] does not know to try it). For instance, if you define a polymorphic map insert function [assign] of some type [forall T : Set, ...], and you want particular applications of [assign] added automatically with type parameter [U], you would need to include [assign] in the lemma list as [assign U] (if you have implicit arguments off) or [assign (T := U)] or [@assign U] (if you have implicit arguments on).#</li>#
adam@370 87 #</ol>#%\end{enumerate}%
adam@370 88
adam@370 89 #</li>#
adam@370 90
adam@370 91 #</ol>#%\end{enumerate}% *)
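
(** As an illustration of the first hint above, variable assignments and typings can both be instances of a functional map type with an insertion operation along these lines (the names [map] and [assign] echo the later hint about polymorphic lemmas; the details are only a sketch):
[[
Require Import Arith.

Definition var := nat.

Definition map (T : Set) := var -> T.

Definition assign (T : Set) (m : map T) (v : var) (t : T) : map T :=
  fun v' => if eq_nat_dec v v' then t else m v'.
]]
With this representation, variable assignments and variable typings are just two instances of [map], so they can share [assign] and any lemmas proved about it. *)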
adam@370 92
adam@370 93
adam@370 94 (** * From Coinductive *)
adam@370 95
adam@370 96 (** %\begin{enumerate}%#<ol>#
adam@370 97
adam@370 98 %\item%#<li># %\begin{enumerate}%#<ol>#
adam@370 99 %\item%#<li># Define a co-inductive type of infinite trees carrying data of a fixed parameter type. Each node should contain a data value and two child trees. (One possible declaration appears in the sketch just after this list.)#</li>#
adam@370 100 %\item%#<li># Define a function [everywhere] for building a tree with the same data value at every node.#</li>#
adam@370 101 %\item%#<li># Define a function [map] for building an output tree out of two input trees by traversing them in parallel and applying a two-argument function to their corresponding data values.#</li>#
adam@370 102 %\item%#<li># Define a tree [falses] where every node has the value [false].#</li>#
adam@370 103 %\item%#<li># Define a tree [true_false] where the root node has value [true], its children have value [false], all nodes at the next level have the value [true], and so on, alternating boolean values from level to level.#</li>#
adam@370 104 %\item%#<li># Prove that [true_false] is equal to the result of mapping the boolean %``%#"#or#"#%''% function [orb] over [true_false] and [falses]. You can make [orb] available with [Require Import Bool.]. You may find the lemma [orb_false_r] from the same module helpful. Your proof here should not be about the standard equality [=], but rather about some new equality relation that you define.#</li>#
adam@370 105 #</ol>#%\end{enumerate}% #</li>#
adam@370 106
adam@370 107 #</ol>#%\end{enumerate}% *)
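
(** For the first part of the exercise above, one possible declaration of the tree type (the names are only suggestions) is:
[[
CoInductive tree (A : Type) : Type :=
| Node : tree A -> A -> tree A -> tree A.
]]
Since every node is a [Node], functions like [everywhere] and [map] are written as [CoFixpoint]s whose corecursive calls appear only directly under the [Node] constructor, satisfying Coq's guardedness condition. *)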
adam@370 108
adam@370 109
adam@370 110 (** * From Subset *)
adam@370 111
adam@370 112 (** All of the notations defined in this chapter, plus some extras, are available for import from the module [MoreSpecif] of the book source.
adam@370 113
adam@370 114 %\begin{enumerate}%#<ol>#
adam@370 115 %\item%#<li># Write a function of type [forall n m : nat, {][n <= m} + {][n > m}]. That is, this function decides whether one natural is less than another, and its dependent type guarantees that its results are accurate.#</li>#
adam@370 116
adam@370 117 %\item%#<li># %\begin{enumerate}%#<ol>#
adam@370 118 %\item%#<li># Define [var], a type of propositional variables, as a synonym for [nat].#</li>#
adam@370 119 %\item%#<li># Define an inductive type [prop] of propositional logic formulas, consisting of variables, negation, and binary conjunction and disjunction. (One possible declaration appears in the sketch just after this list.)#</li>#
adam@370 120 %\item%#<li># Define a function [propDenote] from variable truth assignments and [prop]s to [Prop], based on the usual meanings of the connectives. Represent truth assignments as functions from [var] to [bool].#</li>#
adam@370 121 %\item%#<li># Define a function [bool_true_dec] that checks whether a boolean is true, with a maximally expressive dependent type. That is, the function should have type [forall b, {b = true} + {b = true -> False}]. #</li>#
adam@370 122 %\item%#<li># Define a function [decide] that determines whether a particular [prop] is true under a particular truth assignment. That is, the function should have type [forall (truth : var -> bool) (p : prop), {propDenote truth p} + {~ propDenote truth p}]. This function is probably easiest to write in the usual tactical style, instead of programming with [refine]. The function [bool_true_dec] may come in handy as a hint.#</li>#
adam@370 123 %\item%#<li># Define a function [negate] that returns a simplified version of the negation of a [prop]. That is, the function should have type [forall p : prop, {p' : prop | forall truth, propDenote truth p <-> ~ propDenote truth p'}]. To simplify a variable, just negate it. Simplify a negation by returning its argument. Simplify conjunctions and disjunctions using De Morgan's laws, negating the arguments recursively and switching the kind of connective. Your [decide] function may be useful in some of the proof obligations, even if you do not use it in the computational part of [negate]'s definition. Lemmas like [decide] allow us to compensate for the lack of a general Law of the Excluded Middle in CIC.#</li>#
adam@370 124 #</ol>#%\end{enumerate}% #</li>#
adam@370 125
adam@370 126 %\item%#<li># Implement the DPLL satisfiability decision procedure for boolean formulas in conjunctive normal form, with a dependent type that guarantees its correctness. An example of a reasonable type for this function would be [forall f : formula, {truth : tvals | formulaTrue truth f} + {][forall truth, ~ formulaTrue truth f}]. Implement at least %``%#"#the basic backtracking algorithm#"#%''% as defined here:
adam@370 127 %\begin{center}\url{http://en.wikipedia.org/wiki/DPLL_algorithm}\end{center}%
adam@370 128 #<blockquote><a href="http://en.wikipedia.org/wiki/DPLL_algorithm">http://en.wikipedia.org/wiki/DPLL_algorithm</a></blockquote>#
adam@370 129 It might also be instructive to implement the unit propagation and pure literal elimination optimizations described there or some other optimizations that have been used in modern SAT solvers.#</li>#
adam@370 130
adam@370 131 #</ol>#%\end{enumerate}% *)
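
(** For the second exercise above, its first two parts might be declared like this (the constructor names are only suggestions):
[[
Definition var := nat.

Inductive prop : Set :=
| PVar : var -> prop
| PNot : prop -> prop
| PAnd : prop -> prop -> prop
| POr : prop -> prop -> prop.
]]
*)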
adam@370 132
adam@370 133
adam@370 134 (** * From MoreDep *)
adam@370 135
adam@370 136 (** %\begin{enumerate}%#<ol>#
adam@370 137
adam@370 138 %\item%#<li># Define a kind of dependently typed lists, where a list's type index gives a lower bound on how many of its elements satisfy a particular predicate. In particular, for an arbitrary set [A] and a predicate [P] over it:
adam@370 139 %\begin{enumerate}%#<ol>#
adam@370 140 %\item%#<li># Define a type [plist : nat -> Set]. Each [plist n] should be a list of [A]s, where it is guaranteed that at least [n] distinct elements satisfy [P]. There is wide latitude in choosing how to encode this. You should try to avoid using subset types or any other mechanism based on annotating non-dependent types with propositions after-the-fact.#</li>#
adam@370 141 %\item%#<li># Define a version of list concatenation that works on [plist]s. The type of this new function should express as much information as possible about the output [plist].#</li>#
adam@370 142 %\item%#<li># Define a function [plistOut] for translating [plist]s to normal [list]s.#</li>#
adam@370 143 %\item%#<li># Define a function [plistIn] for translating [list]s to [plist]s. The type of [plistIn] should make it clear that the best bound on [P]-matching elements is chosen. You may assume that you are given a dependently typed function for deciding instances of [P].#</li>#
adam@370 144 %\item%#<li># Prove that, for any list [ls], [plistOut (plistIn ls) = ls]. This should be the only part of the exercise where you use tactic-based proving.#</li>#
adam@370 145 %\item%#<li># Define a function [grab : forall n (ls : plist (][S n)), sig P]. That is, when given a [plist] guaranteed to contain at least one element satisfying [P], [grab] produces such an element. The type family [sig] is the one we met earlier for sigma types (i.e., dependent pairs of programs and proofs), and [sig P] is extensionally equivalent to [{][x : A | P x}], though the latter form uses an eta-expansion of [P] instead of [P] itself as the predicate.#</li>#
adam@370 146 #</ol>#%\end{enumerate}% #</li>#
adam@370 147
adam@370 148 #</ol>#%\end{enumerate}% *)
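
(** For reference in the last part of the exercise above, [sig] is defined in the standard library essentially as follows (implicit-argument details aside), and [{][x : A | P x}] is notation for [sig (fun x : A => P x)]:
[[
Inductive sig (A : Type) (P : A -> Prop) : Type :=
| exist : forall x : A, P x -> sig A P.
]]
*)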
adam@370 149
adam@370 150
adam@370 151 (** * From DataStruct *)
adam@370 152
adam@370 153 (** remove printing * *)
adam@370 154
adam@370 155 (** Some of the type family definitions and associated functions from this chapter are duplicated in the [DepList] module of the book source. Some of their names have been changed to be more sensible in a general context.
adam@370 156
adam@370 157 %\begin{enumerate}%#<ol>#
adam@370 158
adam@370 159 %\item%#<li># Define a tree analogue of [hlist]. That is, define a parameterized type of binary trees with data at their leaves, and define a type family [htree] indexed by trees. The structure of an [htree] mirrors its index tree, with the type of each data element (data occurs only at the leaves) determined by applying a type function to the corresponding element of the index tree. Define a type standing for all possible paths from the root of a tree to leaves and use it to implement a function [tget] for extracting an element of an [htree] by path. Define a function [htmap2] for %``%#"#mapping over two trees in parallel.#"#%''% That is, [htmap2] takes in two [htree]s with the same index tree, and it forms a new [htree] with the same index by applying a binary function pointwise.
adam@370 160
adam@370 161 Repeat this process so that you implement each definition for each of the three definition styles covered in this chapter: inductive, recursive, and index function.#</li>#
adam@370 162
adam@370 163 %\item%#<li># Write a dependently typed interpreter for a simple programming language with ML-style pattern-matching, using one of the encodings of heterogeneous lists to represent the different branches of a [case] expression. (There are other ways to represent the same thing, but the point of this exercise is to practice using those heterogeneous list types.) The object language is defined informally by this grammar:
adam@370 164 [[
adam@370 165 t ::= bool | t + t
adam@370 166 p ::= x | b | inl p | inr p
adam@370 167 e ::= x | b | inl e | inr e | case e of [p => e]* | _ => e
adam@370 168 ]]
adam@370 169
adam@370 170 The non-terminal [x] stands for a variable, and [b] stands for a boolean constant. The production for [case] expressions means that a pattern-match includes zero or more pairs of patterns and expressions, along with a default case.
adam@370 171
adam@370 172 Your interpreter should be implemented in the style demonstrated in this chapter. That is, your definition of expressions should use dependent types and de Bruijn indices to combine syntax and typing rules, such that the type of an expression tells the types of variables that are in scope. You should implement a simple recursive function translating types [t] to [Set], and your interpreter should produce values in the image of this translation.#</li>#
adam@370 173
adam@370 174 #</ol>#%\end{enumerate}% *)
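
(** In the interpreter exercise just above, the type syntax [t ::= bool | t + t] and its interpretation into [Set] might look like this (the names are only suggestions):
[[
Inductive type : Set :=
| TBool : type
| TSum : type -> type -> type.

Fixpoint typeDenote (t : type) : Set :=
  match t with
    | TBool => bool
    | TSum t1 t2 => (typeDenote t1 + typeDenote t2)%type
  end.
]]
The expression family would then be indexed by a typing context (a [list type] of in-scope variables, in de Bruijn style) and a result [type], with the branches of a [case] expression collected in one of this chapter's heterogeneous list types. *)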
adam@370 175
adam@370 176
adam@370 177 (** * From Equality *)
adam@370 178
adam@370 179 (** %\begin{enumerate}%#<ol>#
adam@370 180
adam@370 181 %\item%#<li># Implement and prove correct a substitution function for simply typed lambda calculus. In particular:
adam@370 182 %\begin{enumerate}%#<ol>#
adam@370 183 %\item%#<li># Define a datatype [type] of lambda types, including just booleans and function types.#</li>#
adam@370 184 %\item%#<li># Define a type family [exp : list type -> type -> Type] of lambda expressions, including boolean constants, variables, and function application and abstraction. (One possible declaration of [type] and [exp] appears in the sketch just after this exercise.)#</li>#
adam@370 185 %\item%#<li># Implement a definitional interpreter for [exp]s, by way of a recursive function over expressions and substitutions for free variables, like in the related example from the last chapter.#</li>#
adam@370 186 %\item%#<li># Implement a function [subst : forall t' ts t, exp (t' :: ts) t -> exp ts t' -> exp ts t]. The type of the first expression indicates that its most recently bound free variable has type [t']. The second expression also has type [t'], and the job of [subst] is to substitute the second expression for every occurrence of the %``%#"#first#"#%''% variable of the first expression.#</li>#
adam@370 187 %\item%#<li># Prove that [subst] preserves program meanings. That is, prove
adam@370 188 [[
adam@370 189 forall t' ts t (e : exp (t' :: ts) t) (e' : exp ts t') (s : hlist typeDenote ts),
adam@370 190 expDenote (subst e e') s = expDenote e (expDenote e' s ::: s)
adam@370 191 ]]
adam@370 192 where [:::] is an infix operator for heterogeneous %``%#"#cons#"#%''% that is defined in the book's [DepList] module.#</li>#
adam@370 193 #</ol>#%\end{enumerate}%
adam@370 194 The material presented up to this point should be sufficient to enable a good solution of this exercise, with enough ingenuity. If you get stuck, it may be helpful to use the following structure. None of these elements need to appear in your solution, but we can at least guarantee that there is a reasonable solution based on them.
adam@370 195 %\begin{enumerate}%#<ol>#
adam@370 196 %\item%#<li># The [DepList] module will be useful. You can get the standard dependent list definitions there, instead of copying-and-pasting from the last chapter. It is worth reading the source for that module over, since it defines some new helpful functions and notations that we did not use last chapter.#</li>#
adam@370 197 %\item%#<li># Define a recursive function [liftVar : forall ts1 ts2 t t', member t (ts1 ++ ts2) -> member t (ts1 ++ t' :: ts2)]. This function should %``%#"#lift#"#%''% a de Bruijn variable so that its type refers to a new variable inserted somewhere in the index list.#</li>#
adam@370 198 %\item%#<li># Define a recursive function [lift' : forall ts t (e : exp ts t) ts1 ts2 t', ts = ts1 ++ ts2 -> exp (ts1 ++ t' :: ts2) t] which performs a similar lifting on an [exp]. The convoluted type is to get around restrictions on [match] annotations. We delay %``%#"#realizing#"#%''% that the first index of [e] is built with list concatenation until after a dependent [match], and the new explicit proof argument must be used to cast some terms that come up in the [match] body.#</li>#
adam@370 199 %\item%#<li># Define a function [lift : forall ts t t', exp ts t -> exp (t' :: ts) t], which handles simpler top-level lifts. This should be an easy one-liner based on [lift'].#</li>#
adam@399 200 %\item%#<li># Define a recursive function [substVar : forall ts1 ts2 t t', member t (ts1 ++ t' :: ts2) -> (t' = t) + member t (ts1 ++ ts2)]. This function is the workhorse behind substitution applied to a variable. It returns [inl] to indicate that the variable we pass to it is the variable that we are substituting for, and it returns [inr] to indicate that the variable we are examining is _not_ the one we are substituting for. In the first case, we get a proof that the necessary typing relationship holds, and, in the second case, we get the original variable modified to reflect the removal of the substitutee from the typing context.#</li>#
adam@370 201 %\item%#<li># Define a recursive function [subst' : forall ts t (e : exp ts t) ts1 t' ts2, ts = ts1 ++ t' :: ts2 -> exp (ts1 ++ ts2) t' -> exp (ts1 ++ ts2) t]. This is the workhorse of substitution in expressions, employing the same proof-passing trick as for [lift']. You will probably want to use [lift] somewhere in the definition of [subst'].#</li>#
adam@370 202 %\item%#<li># Now [subst] should be a one-liner, defined in terms of [subst'].#</li>#
adam@370 203 %\item%#<li># Prove a correctness theorem for each auxiliary function, leading up to the proof of [subst] correctness.#</li>#
adam@426 204 %\item%#<li># All of the reasoning about equality proofs in these theorems follows a regular pattern. If you have an equality proof that you want to replace with [eq_refl] somehow, run [generalize] on that proof variable. Your goal is to get to the point where you can [rewrite] with the original proof to change the type of the generalized version. To avoid type errors (the infamous %``%#"#second-order unification#"#%''% failure messages), it will be helpful to run [generalize] on other pieces of the proof context that mention the equality's lefthand side. You might also want to use [generalize dependent], which generalizes not just one variable but also all variables whose types depend on it. [generalize dependent] has the sometimes-helpful property of removing from the context all variables that it generalizes. Once you do manage the mind-bending trick of using the equality proof to rewrite its own type, you will be able to rewrite with [UIP_refl].#</li>#
adam@370 205 %\item%#<li># The [ext_eq] axiom from the end of this chapter is available in the Coq standard library as [functional_extensionality] in module [FunctionalExtensionality], and you will probably want to use it in the [lift'] and [subst'] correctness proofs.#</li>#
adam@370 206 %\item%#<li># The [change] tactic should come in handy in the proofs about [lift] and [subst], where you want to introduce %``%#"#extraneous#"#%''% list concatenations with [nil] to match the forms of earlier theorems.#</li>#
adam@370 207 %\item%#<li># Be careful about [destruct]ing a term %``%#"#too early.#"#%''% You can use [generalize] on proof terms to bring into the proof context any important propositions about the term. Then, when you [destruct] the term, it is updated in the extra propositions, too. The [case_eq] tactic is another alternative to this approach, based on saving an equality between the original term and its new form.#</li>#
adam@370 208 #</ol>#%\end{enumerate}%
adam@370 209 #</li>#
adam@370 210
adam@370 211 #</ol>#%\end{enumerate}% *)
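
(** For the first two parts of the exercise above, the syntax might be declared as follows, adapting the lambda calculus interpreter example from the previous chapter and reusing [member] from [DepList] for variables (the constructor names are only suggestions):
[[
Inductive type : Type :=
| Bool : type
| Arrow : type -> type -> type.

Inductive exp : list type -> type -> Type :=
| Const : forall ts, bool -> exp ts Bool
| Var : forall ts t, member t ts -> exp ts t
| App : forall ts dom ran, exp ts (Arrow dom ran) -> exp ts dom -> exp ts ran
| Abs : forall ts dom ran, exp (dom :: ts) ran -> exp ts (Arrow dom ran).
]]
The interpreter then maps [type]s to [Set] ([Bool] to [bool], [Arrow] to a Gallina function type) and takes an [hlist] of values for the free variables, as in that earlier example. *)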
adam@370 212
adam@370 213
adam@370 214 (** * From LogicProg *)
adam@370 215
adam@370 216 (** printing * $\cdot$ *)
adam@370 217
adam@370 218 (** %\begin{enumerate}%#<ol>#
adam@370 219
adam@370 220 %\item%#<li># I did a Google search for group theory and found #<a href="http://dogschool.tripod.com/housekeeping.html">#a page that proves some standard theorems#</a>#%\footnote{\url{http://dogschool.tripod.com/housekeeping.html}}%. This exercise is about proving all of the theorems on that page automatically.
adam@370 221
adam@370 222 For the purposes of this exercise, a group is a set [G], a binary function [f] over [G], an identity element [e] of [G], and a unary inverse function [i] for [G]. The following laws define correct choices of these parameters. We follow standard practice in algebra, where all variables that we mention are quantified universally implicitly at the start of a fact. We write infix [*] for [f], and you can set up the same sort of notation in your code with a command like [Infix "*" := f.].
adam@370 223
adam@370 224 %\begin{itemize}%#<ul>#
adam@370 225 %\item%#<li># %\textbf{%#<b>#Associativity#</b>#%}%: [(a * b) * c = a * (b * c)]#</li>#
adam@370 226 %\item%#<li># %\textbf{%#<b>#Right Identity#</b>#%}%: [a * e = a]#</li>#
adam@370 227 %\item%#<li># %\textbf{%#<b>#Right Inverse#</b>#%}%: [a * i a = e]#</li>#
adam@370 228 #</ul> </li>#%\end{itemize}%
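
For concreteness, the parameters and axioms might be declared as follows; the axiom names match the rewrite-hint names used in a later exercise (in the Match section), but otherwise the details are only a suggestion:
[[
Variable G : Set.
Variable f : G -> G -> G.
Infix "*" := f.
Variable e : G.
Variable i : G -> G.

Axiom assoc : forall a b c : G, (a * b) * c = a * (b * c).
Axiom right_identity : forall a : G, a * e = a.
Axiom right_inverse : forall a : G, a * i a = e.
]]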
adam@370 229
adam@370 230 The task in this exercise is to prove each of the following theorems for all groups, where we define a group exactly as above. There is a wrinkle: every theorem or lemma must be proved by either a single call to [crush] or a single call to [eauto]! It is allowed to pass numeric arguments to [eauto], where appropriate. Recall that a numeric argument sets the depth of proof search, where 5 is the default. Lower values can speed up execution when a proof exists within the bound. Higher values may be necessary to find more involved proofs.
adam@370 231
adam@370 232 %\begin{itemize}%#<ul>#
adam@370 233 %\item%#<li># %\textbf{%#<b>#Characterizing Identity#</b>#%}%: [a * a = a -> a = e]#</li>#
adam@370 234 %\item%#<li># %\textbf{%#<b>#Left Inverse#</b>#%}%: [i a * a = e]#</li>#
adam@370 235 %\item%#<li># %\textbf{%#<b>#Left Identity#</b>#%}%: [e * a = a]#</li>#
adam@370 236 %\item%#<li># %\textbf{%#<b>#Uniqueness of Left Identity#</b>#%}%: [p * a = a -> p = e]#</li>#
adam@370 237 %\item%#<li># %\textbf{%#<b>#Uniqueness of Right Inverse#</b>#%}%: [a * b = e -> b = i a]#</li>#
adam@370 238 %\item%#<li># %\textbf{%#<b>#Uniqueness of Left Inverse#</b>#%}%: [a * b = e -> a = i b]#</li>#
adam@370 239 %\item%#<li># %\textbf{%#<b>#Right Cancellation#</b>#%}%: [a * x = b * x -> a = b]#</li>#
adam@370 240 %\item%#<li># %\textbf{%#<b>#Left Cancellation#</b>#%}%: [x * a = x * b -> a = b]#</li>#
adam@370 241 %\item%#<li># %\textbf{%#<b>#Distributivity of Inverse#</b>#%}%: [i (a * b) = i b * i a]#</li>#
adam@370 242 %\item%#<li># %\textbf{%#<b>#Double Inverse#</b>#%}%: [i (][i a) = a]#</li>#
adam@370 243 %\item%#<li># %\textbf{%#<b>#Identity Inverse#</b>#%}%: [i e = e]#</li>#
adam@370 244 #</ul> </li>#%\end{itemize}%
adam@370 245
adam@370 246 One more use of tactics is allowed in this problem. The following lemma captures one common pattern of reasoning in algebra proofs: *)
adam@370 247
adam@370 248 (* begin hide *)
adam@370 249 Variable G : Set.
adam@370 250 Variable f : G -> G -> G.
adam@370 251 Infix "*" := f.
adam@370 252 (* end hide *)
adam@370 253
adam@370 254 Lemma mult_both : forall a b c d1 d2,
adam@370 255 a * c = d1
adam@370 256 -> b * c = d2
adam@370 257 -> a = b
adam@370 258 -> d1 = d2.
adam@370 259 crush.
adam@370 260 Qed.
adam@370 261
adam@370 262 (** That is, we know some equality [a = b], which is the third hypothesis above. We derive a further equality by multiplying both sides by [c], to yield [a * c = b * c]. Next, we do algebraic simplification on both sides of this new equality, represented by the first two hypotheses above. The final result is a new theorem of algebra.
adam@370 263
adam@370 264 The next chapter introduces more details of programming in Ltac, but here is a quick teaser that will be useful in this problem. Include the following hint command before you start proving the main theorems of this exercise: *)
adam@370 265
adam@370 266 Hint Extern 100 (_ = _) =>
adam@370 267 match goal with
adam@370 268 | [ _ : True |- _ ] => fail 1
adam@370 269 | _ => assert True by constructor; eapply mult_both
adam@370 270 end.
adam@370 271
adam@399 272 (** This hint has the effect of applying [mult_both] _at most once_ during a proof. After the next chapter, it should be clear why the hint has that effect, but for now treat it as a useful black box. Simply using [Hint Resolve mult_both] would increase proof search time unacceptably, because there are just too many ways to use [mult_both] repeatedly within a proof.
adam@370 273
adam@370 274 The order of the theorems above is itself a meta-level hint, since I found that order to work well for allowing the use of earlier theorems as hints in the proofs of later theorems.
adam@370 275
adam@370 276 The key to this problem is coming up with further lemmas like [mult_both] that formalize common patterns of reasoning in algebraic proofs. These lemmas need to be more than sound: they must also fit well with the way that [eauto] does proof search. For instance, if we had given [mult_both] a traditional statement, we probably would have avoided %``%#"#pointless#"#%''% equalities like [a = b], which could be eliminated simply by replacing all occurrences of [b] with [a]. However, the resulting theorem would not work as well with automated proof search! Every additional hint you come up with should be registered with [Hint Resolve], so each lemma statement needs to be in a form that [eauto] understands %``%#"#natively.#"#%''%
adam@370 277
adam@370 278 I recommend testing a few simple rules corresponding to common steps in algebraic proofs. You can apply them manually with any tactics you like (e.g., [apply] or [eapply]) to figure out what approaches work, and then switch to [eauto] once you have the full set of hints.
adam@370 279
adam@370 280 I also proved a few hint lemmas tailored to particular theorems, but which do not give common algebraic simplification rules. You will probably want to use some, too, in cases where [eauto] does not find a proof within a reasonable amount of time. In total, besides the main theorems to be proved, my sample solution includes 6 lemmas, with a mix of the two kinds of lemmas. You may use more in your solution, but I suggest trying to minimize the number.
adam@370 281
adam@370 282 #</ol>#%\end{enumerate}% *)
adam@370 283
adam@370 284
adam@370 285 (** * From Match *)
adam@370 286
adam@370 287 (** %\begin{enumerate}%#<ol>#
adam@370 288
adam@370 289 %\item%#<li># An anonymous Coq fan from the Internet was excited to come up with this tactic definition shortly after getting started learning Ltac: *)
adam@370 290
adam@370 291 Ltac deSome :=
adam@370 292 match goal with
adam@370 293 | [ H : Some _ = Some _ |- _ ] => injection H; clear H; intros; subst; deSome
adam@370 294 | _ => reflexivity
adam@370 295 end.
adam@370 296
adam@370 297 (** Without lifting a finger, exciting theorems can be proved: *)
adam@370 298
adam@370 299 Theorem test : forall (a b c d e f g : nat),
adam@370 300 Some a = Some b
adam@370 301 -> Some b = Some c
adam@370 302 -> Some e = Some c
adam@370 303 -> Some f = Some g
adam@370 304 -> c = a.
adam@370 305 intros; deSome.
adam@370 306 Qed.
adam@370 307
adam@370 308 (** Unfortunately, this tactic exhibits some degenerate behavior. Consider the following example: *)
adam@370 309
adam@370 310 Theorem test2 : forall (a x1 y1 x2 y2 x3 y3 x4 y4 x5 y5 x6 y6 : nat),
adam@370 311 Some x1 = Some y1
adam@370 312 -> Some x2 = Some y2
adam@370 313 -> Some x3 = Some y3
adam@370 314 -> Some x4 = Some y4
adam@370 315 -> Some x5 = Some y5
adam@370 316 -> Some x6 = Some y6
adam@370 317 -> Some a = Some a
adam@370 318 -> x1 = x2.
adam@370 319 intros.
adam@370 320 Time try deSome.
adam@370 321 Abort.
adam@370 322
adam@370 323 (* begin hide *)
adam@370 324 Reset test.
adam@370 325 (* end hide *)
adam@370 326
adam@370 327 (** This (failed) proof already takes about one second on my workstation. I hope a pattern in the theorem statement is clear; this is a representative of a class of theorems, where we may add more matched pairs of [x] and [y] variables, with equality hypotheses between them. The running time of [deSome] is exponential in the number of such hypotheses.
adam@370 328
adam@370 329 The task in this exercise is twofold. First, figure out why [deSome] exhibits exponential behavior for this class of examples and record your explanation in a comment. Second, write an improved version of [deSome] that runs in polynomial time.#</li>#
adam@370 330
adam@370 331 %\item%#<li># Sometimes it can be convenient to know that a proof attempt is doomed because the theorem is false. For instance, here are three non-theorems about lists: *)
adam@370 332
adam@370 333 Theorem test1 : forall A (ls1 ls2 : list A), ls1 ++ ls2 = ls2 ++ ls1.
adam@370 334 (* begin hide *)
adam@370 335 Abort.
adam@370 336 (* end hide *)
adam@370 337
adam@370 338 Theorem test2 : forall A (ls1 ls2 : list A), length (ls1 ++ ls2) = length ls1 - length ls2.
adam@370 339 (* begin hide *)
adam@370 340 Abort.
adam@370 341 (* end hide *)
adam@370 342
adam@370 343 Theorem test3 : forall A (ls : list A), length (rev ls) - 3 = 0.
adam@370 344 (* begin hide *)
adam@370 345 Abort.
adam@370 346 (* end hide *)
adam@370 347
adam@370 348 (** The task in this exercise is to write a tactic that disproves these and many other related %``%#"#theorems#"#%''% about lists. Your tactic should follow a simple brute-force enumeration strategy, considering all [list bool] values with length up to some bound given by the user, as a [nat] argument to the tactic. A successful invocation should add a new hypothesis of the negation of the theorem (guaranteeing that the tactic has made a sound decision about falsehood).
adam@370 349
adam@370 350 A few hints: A good starting point is to pattern-match the conclusion formula and use the [assert] tactic on its negation. An [assert] invocation may include a [by] clause to specify a tactic to use to prove the assertion.
adam@370 351
adam@370 352 The idea in this exercise is to disprove a quantified formula by finding instantiations for the quantifiers that make it manifestly false. Recall the [specialize] tactic for specializing a hypothesis to particular quantifier instantiations. When you have instantiated quantifiers fully, [discriminate] is a good choice to derive a contradiction. (It at least works for the three examples above and is smart enough for this exercise's purposes.) The [type of] Ltac construct may be useful to analyze the type of a hypothesis to choose how to instantiate its quantifiers. (A hand-worked instance of this recipe for the first non-theorem appears just after this list of exercises.)
adam@370 353
adam@370 354 To enumerate all boolean lists up to a certain length, it will be helpful to write a recursive tactic in continuation-passing style, where the continuation is meant to be called on each candidate list.
adam@370 355
adam@370 356 Remember that arguments to Ltac functions may not be type-checked in contexts large enough to allow usual implicit argument inference, so instead of [nil] it will be useful to write [@][nil bool], which specifies the usually implicit argument explicitly.#</li>#
adam@370 357
adam@370 358 %\item%#<li># Some theorems involving existential quantifiers are easy to prove with [eauto]. *)
adam@370 359
adam@370 360 Theorem test1 : exists x, x = 0.
adam@370 361 eauto.
adam@370 362 Qed.
adam@370 363
adam@370 364 (** Others are harder. The problem with the next theorem is that the existentially quantified variable does not appear in the rest of the theorem, so [eauto] has no way to deduce its value. However, we know that we might as well instantiate that variable to [tt], the only value of type [unit]. *)
adam@370 365
adam@370 366 Theorem test2 : exists x : unit, 0 = 0.
adam@370 367 (* begin hide *)
adam@370 368 eauto.
adam@370 369 Abort.
adam@370 370 (* end hide *)
adam@370 371
adam@370 372 (** We also run into trouble in the next theorem, because [eauto] does not understand the [fst] and [snd] projection functions for pairs. *)
adam@370 373
adam@370 374 Theorem test3 : exists x : nat * nat, fst x = 7 /\ snd x = 2 + fst x.
adam@370 375 (* begin hide *)
adam@370 376 eauto.
adam@370 377 Abort.
adam@370 378 (* end hide *)
adam@370 379
adam@370 380 (** Both problems show up in this monster example. *)
adam@370 381
adam@370 382 Theorem test4 : exists x : (unit * nat) * (nat * bool),
adam@370 383 snd (fst x) = 7 /\ fst (snd x) = 2 + snd (fst x) /\ snd (snd x) = true.
adam@370 384 (* begin hide *)
adam@370 385 eauto.
adam@370 386 Abort.
adam@370 387 (* end hide *)
adam@370 388
adam@370 389 (** The task in this problem is to write a tactic that preprocesses such goals so that [eauto] can finish them. Your tactic should serve as a complete proof of each of the above examples, along with the wide class of similar examples. The key smarts that your tactic will bring are: first, it introduces separate unification variables for all the %``%#"#leaf types#"#%''% of compound types built out of pairs; and second, leaf unification variables of type [unit] are simply replaced by [tt].
adam@370 390
adam@370 391 A few hints: The following tactic is more convenient than direct use of the built-in tactic [evar], for generation of new unification variables: *)
adam@370 392
adam@370 393 Ltac makeEvar T k := let x := fresh in
adam@370 394 evar (x : T); let y := eval unfold x in x in clear x; k y.
adam@370 395
adam@370 396 (** remove printing exists *)
adam@370 397
adam@370 398 (** This is a continuation-passing style tactic. For instance, when the goal begins with existential quantification over a type [T], the following tactic invocation will create a new unification variable to use as the quantifier instantiation:
adam@370 399
adam@370 400 [makeEvar T ltac:(][fun x => exists x)] *)
adam@370 401
adam@370 402 (** printing exists $\exists$ *)
adam@370 403
adam@370 404 (** Recall that [exists] formulas are desugared to uses of the [ex] inductive family. In particular, a pattern like the following can be used to extract the domain of an [exists] quantifier into variable [T]:
adam@370 405
adam@370 406 [| ]#[#%[%[ |- ex (][A := ?][T) _ ]#]#%]%[ => ...]
adam@370 407
adam@370 408 The [equate] tactic used as an example in this chapter will probably be useful for unifying two terms, for instance when the first is a unification variable whose value you want to set.
adam@370 409 [[
adam@370 410 Ltac equate E1 E2 := let H := fresh in
adam@370 411 assert (H : E1 = E2) by reflexivity; clear H.
adam@370 412 ]]
adam@370 413
adam@370 414 Finally, there are some minor complications surrounding overloading of the [*] operator for both numeric multiplication and Cartesian product for sets (i.e., pair types). To ensure that an Ltac pattern is using the type version, write it like this:
adam@370 415
adam@370 416 [| (?T1 * ?T2)%][type => ...]#</li>#
adam@370 417
adam@370 418 %\item%#<li># An exercise in the last chapter dealt with automating proofs about groups using [eauto], where we must prove some odd-looking theorems to push proof search in a direction where unification does all the work. Algebraic proofs consist mostly of rewriting with equations, so we might hope that the [autorewrite] tactic would yield more natural automated proofs. Indeed, consider this example within the same formulation of group theory that we dealt with last chapter, where each of the three axioms has been added to the rewrite hint database [cpdt] using [Hint Rewrite]:
adam@370 419 [[
adam@370 420 Theorem test1 : forall a b, a * b * i b = a.
adam@370 421 intros; autorewrite with cpdt; reflexivity.
adam@370 422 Qed.
adam@370 423 ]]
adam@370 424
adam@370 425 So far so good. However, consider this further example:
adam@370 426 [[
adam@370 427 Theorem test2 : forall a, a * e * i a * i e = e.
adam@370 428 intros; autorewrite with cpdt.
adam@370 429 ]]
adam@370 430
adam@370 431 The goal is merely reduced to [a * (][i a * i e) = e], which of course [reflexivity] cannot prove. The essential problem is that [autorewrite] does not do backtracking search. Instead, it follows a %``%#"#greedy#"#%''% approach, at each stage choosing a rewrite to perform and then never allowing that rewrite to be undone. An early mistake can doom the whole process.
adam@370 432
adam@370 433 The task in this problem is to use Ltac to implement a backtracking version of [autorewrite] that works much like [eauto], in that its inputs are a database of hint lemmas and a bound on search depth. Here our search trees will have uses of [rewrite] at their nodes, rather than uses of [eapply] as in the case of [eauto], and proofs must be finished by [reflexivity].
adam@370 434
adam@370 435 An invocation to the tactic to prove [test2] might look like this:
adam@370 436 [[
adam@370 437 rewriter (right_identity, (right_inverse, tt)) 3.
adam@370 438 ]]
adam@370 439
adam@370 440 The first argument gives the set of lemmas to consider, as a kind of list encoded with pair types. Such a format cannot be analyzed directly by Gallina programs, but Ltac allows us much more freedom to deconstruct syntax. For example, to case analyze such a list found in a variable [x], we need only write:
adam@370 441 [[
adam@370 442 match x with
adam@370 443 | (?lemma, ?more) => ...
adam@370 444 end
adam@370 445 ]]
adam@370 446
adam@370 447 In the body of the case analysis, [lemma] will be bound to the first lemma, and [more] will be bound to the remaining lemmas. There is no need to consider a case for [tt], our stand-in for [nil]. This is because lack of any matching pattern will trigger failure, which is exactly the outcome we would like upon reaching the end of the lemma list without finding one that applies. The tactic will fail, triggering backtracking to some previous [match].
adam@370 448
adam@370 449 There are different kinds of backtracking, corresponding to different sorts of decisions to be made. The examples considered above can be handled with backtracking that only reconsiders decisions about the order in which to apply rewriting lemmas. A full-credit solution need only handle that kind of backtracking, considering all rewriting sequences up to the length bound passed to your tactic. A good test of this level of applicability is to prove both [test1] and [test2] above. However, some theorems could only be proved using a smarter tactic that considers not only order of rewriting lemma uses, but also choice of arguments to the lemmas. That is, at some points in a proof, the same lemma may apply at multiple places within the goal formula, and some choices may lead to stuck proof states while others lead to success. For an extra challenge (without any impact on the grade for the problem), you might try beefing up your tactic to do backtracking on argument choice, too.#</li>#
adam@370 450
adam@370 451 #</ol>#%\end{enumerate}% *)
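
(** As a sanity check for the disproving exercise above, here is the kind of reasoning its tactic is meant to automate, carried out by hand for the first non-theorem (the theorem name here is made up for illustration):
[[
Theorem test1_false : ~ (forall A (ls1 ls2 : list A), ls1 ++ ls2 = ls2 ++ ls1).
  intro H.
  specialize (H bool (true :: nil) (false :: nil)).
  discriminate H.
Qed.
]]
The tactic's job is to find such instantiations automatically, enumerating [list bool] candidates up to the given length bound and trying [discriminate] after each full specialization. *)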
adam@370 452
adam@370 453
adam@370 454 (** * From Reflection *)
adam@370 455
adam@370 456 (** remove printing * *)
adam@370 457
adam@370 458 (** %\begin{enumerate}%#<ol>#
adam@370 459
adam@370 460 %\item%#<li># Implement a reflective procedure for normalizing systems of linear equations over rational numbers. In particular, the tactic should identify all hypotheses that are linear equations over rationals where the equation righthand sides are constants. It should normalize each hypothesis to have a lefthand side that is a sum of products of constants and variables, with no variable appearing multiple times. Then, your tactic should add together all of these equations to form a single new equation, possibly clearing the original equations. Some coefficients may cancel in the addition, reducing the number of variables that appear.
adam@370 461
adam@370 462 To work with rational numbers, import module [QArith] and use [Local Open Scope Q_scope]. All of the usual arithmetic operator notations will then work with rationals, and there are shorthands for constants 0 and 1. Other rationals must be written as [num # den] for numerator [num] and denominator [den]. Use the infix operator [==] in place of [=], to deal with different ways of expressing the same number as a fraction. For instance, a theorem and proof like this one should work with your tactic:
adam@370 463 [[
adam@370 464 Theorem t2 : forall x y z, (2 # 1) * (x - (3 # 2) * y) == 15 # 1
adam@370 465 -> z + (8 # 1) * x == 20 # 1
adam@370 466 -> (-6 # 2) * y + (10 # 1) * x + z == 35 # 1.
adam@370 467 intros; reifyContext; assumption.
adam@370 468 Qed.
adam@370 469 ]]
adam@370 470
adam@370 471 Your solution can work in any way that involves reifying syntax and doing most calculation with a Gallina function. These hints outline a particular possible solution. Throughout, the [ring] tactic will be helpful for proving many simple facts about rationals, and tactics like [rewrite] are correctly overloaded to work with rational equality [==].
adam@370 472
adam@370 473 %\begin{enumerate}%#<ol>#
adam@370 474 %\item%#<li># Define an inductive type [exp] of expressions over rationals (which inhabit the Coq type [Q]). Include variables (represented as natural numbers), constants, addition, subtraction, and multiplication. (One possible declaration appears in the sketch at the end of this exercise.)#</li>#
adam@370 475 %\item%#<li># Define a function [lookup] for reading an element out of a list of rationals, by its position in the list.#</li>#
adam@370 476 %\item%#<li># Define a function [expDenote] that translates [exp]s, along with lists of rationals representing variable values, to [Q].#</li>#
adam@370 477 %\item%#<li># Define a recursive function [eqsDenote] over [list (exp * Q)], characterizing when all of the equations are true.#</li>#
adam@370 478 %\item%#<li># Fix a representation [lhs] of flattened expressions. Where [len] is the number of variables, represent a flattened equation as [ilist Q len]. Each position of the list gives the coefficient of the corresponding variable.#</li>#
adam@370 479 %\item%#<li># Write a recursive function [linearize] that takes a constant [k] and an expression [e] and optionally returns an [lhs] equivalent to [k * e]. This function returns [None] when it discovers that the input expression is not linear. The parameter [len] of [lhs] should be a parameter of [linearize], too. The functions [singleton], [everywhere], and [map2] from [DepList] will probably be helpful. It is also helpful to know that [Qplus] is the identifier for rational addition.#</li>#
adam@370 480 %\item%#<li># Write a recursive function [linearizeEqs : list (exp * Q) -> option (lhs * Q)]. This function linearizes all of the equations in the list in turn, building up the sum of the equations. It returns [None] if the linearization of any constituent equation fails.#</li>#
adam@370 481 %\item%#<li># Define a denotation function for [lhs].#</li>#
adam@370 482 %\item%#<li># Prove that, when [exp] linearization succeeds on constant [k] and expression [e], the linearized version has the same meaning as [k * e].#</li>#
adam@370 483 %\item%#<li># Prove that, when [linearizeEqs] succeeds on an equation list [eqs], then the final summed-up equation is true whenever the original equation list is true.#</li>#
adam@370 484 %\item%#<li># Write a tactic [findVarsHyps] to search through all equalities on rationals in the context, recursing through addition, subtraction, and multiplication to find the list of expressions that should be treated as variables. This list should be suitable as an argument to [expDenote] and [eqsDenote], associating a [Q] value to each natural number that stands for a variable.#</li>#
adam@370 485 %\item%#<li># Write a tactic [reify] to reify a [Q] expression into [exp], with respect to a given list of variable values.#</li>#
adam@370 486 %\item%#<li># Write a tactic [reifyEqs] to reify a formula that begins with a sequence of implications from linear equalities whose lefthand sides are expressed with [expDenote]. This tactic should build a [list (exp * Q)] representing the equations. Remember to give an explicit type annotation when returning a nil list, as in [constr:(][@][nil (exp * Q))].#</li>#
adam@370 487 %\item%#<li># Now this final tactic should do the job:
adam@370 488 [[
adam@370 489 Ltac reifyContext :=
adam@370 490 let ls := findVarsHyps in
adam@370 491 repeat match goal with
adam@370 492 | [ H : ?e == ?num # ?den |- _ ] =>
adam@370 493 let r := reify ls e in
adam@370 494 change (expDenote ls r == num # den) in H;
adam@370 495 generalize H
adam@370 496 end;
adam@370 497 match goal with
adam@370 498 | [ |- ?g ] => let re := reifyEqs g in
adam@370 499 intros;
adam@370 500 let H := fresh "H" in
adam@370 501 assert (H : eqsDenote ls re); [ simpl in *; tauto
adam@370 502 | repeat match goal with
adam@370 503 | [ H : expDenote _ _ == _ |- _ ] => clear H
adam@370 504 end;
adam@370 505 generalize (linearizeEqsCorrect ls re H); clear H; simpl;
adam@370 506 match goal with
adam@370 507 | [ |- ?X == ?Y -> _ ] =>
adam@370 508 ring_simplify X Y; intro
adam@370 509 end ]
adam@370 510 end.
adam@370 511 ]]
adam@370 512
adam@370 513 #</ol>#%\end{enumerate}%
adam@370 514 #</li>#
adam@370 515
adam@370 516 #</ol>#%\end{enumerate}% *)
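
(** For the first steps of the exercise above, the reified syntax and the [lookup] function might look like the following (the constructor names are only suggestions; the rest is fixed by the exercise statement):
[[
Require Import QArith.
Local Open Scope Q_scope.

Definition var := nat.

Inductive exp : Set :=
| Var : var -> exp
| Const : Q -> exp
| Plus : exp -> exp -> exp
| Minus : exp -> exp -> exp
| Mult : exp -> exp -> exp.

Fixpoint lookup (vs : list Q) (n : var) : Q :=
  match vs, n with
    | nil, _ => 0
    | v :: _, O => v
    | _ :: vs', S n' => lookup vs' n'
  end.
]]
[expDenote] can then fold [Qplus], [Qminus], and [Qmult] over this syntax, using [lookup] at [Var] nodes. *)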