annotate src/Reflection.v @ 536:d65e9c1c9041

Touch-ups in 8.4
author Adam Chlipala <adam@chlipala.net>
date Wed, 05 Aug 2015 18:07:57 -0400
parents ed829eaa91b2
children 2c8c693ddaba
rev   line source
adam@534 1 (* Copyright (c) 2008-2012, 2015, Adam Chlipala
adamc@142 2 *
adamc@142 3 * This work is licensed under a
adamc@142 4 * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
adamc@142 5 * Unported License.
adamc@142 6 * The license text is available at:
adamc@142 7 * http://creativecommons.org/licenses/by-nc-nd/3.0/
adamc@142 8 *)
adamc@142 9
adamc@142 10 (* begin hide *)
adamc@142 11 Require Import List.
adamc@142 12
adam@534 13 Require Import Cpdt.CpdtTactics Cpdt.MoreSpecif.
adamc@142 14
adamc@142 15 Set Implicit Arguments.
adam@534 16 Set Asymmetric Patterns.
adamc@142 17 (* end hide *)
adamc@142 18
adamc@142 19
adamc@142 20 (** %\chapter{Proof by Reflection}% *)
adamc@142 21
adam@412 22 (** The last chapter highlighted a very heuristic approach to proving. In this chapter, we will study an alternative technique,%\index{proof by reflection}% _proof by reflection_ %\cite{reflection}%. We will write, in Gallina, decision procedures with proofs of correctness, and we will appeal to these procedures in writing very short proofs. Such a proof is checked by running the decision procedure. The term _reflection_ applies because we will need to translate Gallina propositions into values of inductive types representing syntax, so that Gallina programs may analyze them, and translating such a term back to the original form is called _reflecting_ it. *)
adamc@142 23
adamc@142 24
adamc@142 25 (** * Proving Evenness *)
adamc@142 26
adamc@142 27 (** Proving that particular natural number constants are even is certainly something we would rather have happen automatically. The Ltac-programming techniques that we learned in the last chapter make it easy to implement such a procedure. *)
adamc@142 28
adamc@142 29 Inductive isEven : nat -> Prop :=
adamc@144 30 | Even_O : isEven O
adamc@144 31 | Even_SS : forall n, isEven n -> isEven (S (S n)).
adamc@142 32
adamc@148 33 (* begin thide *)
adamc@142 34 Ltac prove_even := repeat constructor.
adamc@148 35 (* end thide *)
adamc@142 36
adamc@142 37 Theorem even_256 : isEven 256.
adamc@142 38 prove_even.
adamc@142 39 Qed.
adamc@142 40
adamc@142 41 Print even_256.
adamc@221 42 (** %\vspace{-.15in}% [[
adamc@142 43 even_256 =
adamc@142 44 Even_SS
adamc@142 45 (Even_SS
adamc@142 46 (Even_SS
adamc@142 47 (Even_SS
adamc@142 48 ]]
adamc@142 49
adam@360 50 %\noindent%...and so on. This procedure always works (at least on machines with infinite resources), but it has a serious drawback, which we see when we print the proof it generates that 256 is even. The final proof term has length super-linear in the input value. Coq's implicit arguments mechanism is hiding the values given for parameter [n] of [Even_SS], which is why the proof term only appears linear here. Also, proof terms are represented internally as syntax trees, with opportunity for sharing of node representations, but in this chapter we will measure proof term size as simple textual length or as the number of nodes in the term's syntax tree, two measures that are approximately equivalent. Sometimes apparently large proof terms have enough internal sharing that they take up less memory than we expect, but one avoids having to reason about such sharing by ensuring that the size of a sharing-free version of a term is low enough.
adam@360 51
adam@360 52 Superlinear evenness proof terms seem like a shame, since we could write a trivial and trustworthy program to verify evenness of constants. The proof checker could simply call our program where needed.
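
For instance, a short boolean checker in this style would do the job (a hypothetical sketch, with the made-up name [check_even_bool]; the chapter instead develops checkers that return proofs):
[[
Fixpoint check_even_bool (n : nat) : bool :=
  match n with
    | 0 => true
    | 1 => false
    | S (S n') => check_even_bool n'
  end.
]]
Running such a function takes time linear in the (unary) input, and its output is a single [bool] rather than a proof tree whose size grows with the input.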
adamc@142 53
adamc@142 54 It is also unfortunate not to have static typing guarantees that our tactic always behaves appropriately. Other invocations of similar tactics might fail with dynamic type errors, and we would not know about the bugs behind these errors until we happened to attempt to prove complex enough goals.
adamc@142 55
adam@508 56 The techniques of proof by reflection address both complaints. We will be able to write proofs like the one in the example above with constant size overhead beyond the size of the input, and we will do it with verified decision procedures written in Gallina.
adamc@142 57
adamc@221 58 For this example, we begin by using a type from the [MoreSpecif] module (included in the book source) to write a certified evenness checker. *)
adamc@142 59
adam@432 60 (* begin hide *)
adam@437 61 (* begin thide *)
adam@432 62 Definition paartial := partial.
adam@437 63 (* end thide *)
adam@432 64 (* end hide *)
adam@432 65
adamc@142 66 Print partial.
adamc@221 67 (** %\vspace{-.15in}% [[
adamc@221 68 Inductive partial (P : Prop) : Set := Proved : P -> [P] | Uncertain : [P]
adamc@221 69 ]]
adamc@142 70
adamc@221 71 A [partial P] value is an optional proof of [P]. The notation [[P]] stands for [partial P]. *)
adamc@142 72
adamc@221 73 Local Open Scope partial_scope.
adamc@142 74
adamc@142 75 (** We bring into scope some notations for the [partial] type. These overlap with some of the notations we have seen previously for specification types, so they were placed in a separate scope that needs separate opening. *)
adamc@142 76
adamc@148 77 (* begin thide *)
adam@297 78 Definition check_even : forall n : nat, [isEven n].
adamc@142 79 Hint Constructors isEven.
adamc@142 80
adamc@142 81 refine (fix F (n : nat) : [isEven n] :=
adamc@221 82 match n with
adamc@142 83 | 0 => Yes
adamc@142 84 | 1 => No
adamc@142 85 | S (S n') => Reduce (F n')
adamc@142 86 end); auto.
adamc@142 87 Defined.
adamc@142 88
adam@461 89 (** The function [check_even] may be viewed as a _verified decision procedure_, because its type guarantees that it never returns %\coqdocnotation{%#<tt>#Yes#</tt>#%}% for inputs that are not even.
adam@360 90
adam@360 91 Now we can use dependent pattern-matching to write a function that performs a surprising feat. When given a [partial P], this function [partialOut] returns a proof of [P] if the [partial] value contains a proof, and it returns a (useless) proof of [True] otherwise. From the standpoint of ML and Haskell programming, it seems impossible to write such a type, but it is trivial with a [return] annotation. *)
adamc@142 92
adamc@142 93 Definition partialOut (P : Prop) (x : [P]) :=
adamc@142 94 match x return (match x with
adamc@142 95 | Proved _ => P
adamc@142 96 | Uncertain => True
adamc@142 97 end) with
adamc@142 98 | Proved pf => pf
adamc@142 99 | Uncertain => I
adamc@142 100 end.
adamc@142 101
adam@289 102 (** It may seem strange to define a function like this. However, it turns out to be very useful in writing a reflective version of our earlier [prove_even] tactic: *)
adamc@142 103
adamc@142 104 Ltac prove_even_reflective :=
adamc@142 105 match goal with
adamc@142 106 | [ |- isEven ?N] => exact (partialOut (check_even N))
adamc@142 107 end.
adamc@148 108 (* end thide *)
adamc@142 109
adam@432 110 (** We identify which natural number we are considering, and we "prove" its evenness by pulling the proof out of the appropriate [check_even] call. Recall that the %\index{tactics!exact}%[exact] tactic proves a proposition [P] when given a proof term of precisely type [P]. *)
adamc@142 111
adamc@142 112 Theorem even_256' : isEven 256.
adamc@142 113 prove_even_reflective.
adamc@142 114 Qed.
adamc@142 115
adamc@142 116 Print even_256'.
adamc@221 117 (** %\vspace{-.15in}% [[
adamc@142 118 even_256' = partialOut (check_even 256)
adamc@142 119 : isEven 256
adamc@142 120 ]]
adamc@142 121
adam@289 122 We can see a constant wrapper around the object of the proof. For any even number, this form of proof will suffice. The size of the proof term is now linear in the number being checked, containing two repetitions of the unary form of that number, one of which is hidden above within the implicit argument to [partialOut].
adam@289 123
adam@289 124 What happens if we try the tactic with an odd number? *)
adamc@142 125
adamc@142 126 Theorem even_255 : isEven 255.
adam@360 127 (** %\vspace{-.275in}%[[
adamc@142 128 prove_even_reflective.
adam@360 129 ]]
adamc@142 130
adam@360 131 <<
adamc@142 132 User error: No matching clauses for match goal
adam@360 133 >>
adamc@142 134
adamc@142 135 Thankfully, the tactic fails. To see more precisely what goes wrong, we can manually run the body of the [match].
adamc@142 136
adam@360 137 %\vspace{-.15in}%[[
adamc@142 138 exact (partialOut (check_even 255)).
adam@360 139 ]]
adamc@142 140
adam@360 141 <<
adamc@142 142 Error: The term "partialOut (check_even 255)" has type
adamc@142 143 "match check_even 255 with
adamc@142 144 | Yes => isEven 255
adamc@142 145 | No => True
adamc@142 146 end" while it is expected to have type "isEven 255"
adam@360 147 >>
adamc@142 148
adam@461 149 As usual, the type checker performs no reductions to simplify error messages. If we reduced the first term ourselves, we would see that [check_even 255] reduces to a %\coqdocnotation{%#<tt>#No#</tt>#%}%, so that the first term is equivalent to [True], which certainly does not unify with [isEven 255]. *)
adamc@221 150
adamc@142 151 Abort.
adamc@143 152
adam@362 153 (** Our tactic [prove_even_reflective] is reflective because it performs a proof search process (a trivial one, in this case) wholly within Gallina, where the only use of Ltac is to translate a goal into an appropriate use of [check_even]. *)
adamc@143 154
adam@360 155
adam@360 156 (** * Reifying the Syntax of a Trivial Tautology Language *)
adamc@143 157
adamc@143 158 (** We might also like to have reflective proofs of trivial tautologies like this one: *)
adamc@143 159
adamc@143 160 Theorem true_galore : (True /\ True) -> (True \/ (True /\ (True -> True))).
adamc@143 161 tauto.
adamc@143 162 Qed.
adamc@143 163
adam@432 164 (* begin hide *)
adam@437 165 (* begin thide *)
adam@432 166 Definition tg := (and_ind, or_introl).
adam@437 167 (* end thide *)
adam@432 168 (* end hide *)
adam@432 169
adamc@143 170 Print true_galore.
adamc@221 171 (** %\vspace{-.15in}% [[
adamc@143 172 true_galore =
adamc@143 173 fun H : True /\ True =>
adamc@143 174 and_ind (fun _ _ : True => or_introl (True /\ (True -> True)) I) H
adamc@143 175 : True /\ True -> True \/ True /\ (True -> True)
adamc@143 176 ]]
adamc@143 177
adamc@143 178 As we might expect, the proof that [tauto] builds contains explicit applications of natural deduction rules. For large formulas, this can add a linear amount of proof size overhead, beyond the size of the input.
adamc@143 179
adam@432 180 To write a reflective procedure for this class of goals, we will need to get into the actual "reflection" part of "proof by reflection." It is impossible to case-analyze a [Prop] in any way in Gallina. We must%\index{reification}% _reify_ [Prop] into some type that we _can_ analyze. This inductive type is a good candidate: *)
adamc@143 181
adamc@148 182 (* begin thide *)
adamc@143 183 Inductive taut : Set :=
adamc@143 184 | TautTrue : taut
adamc@143 185 | TautAnd : taut -> taut -> taut
adamc@143 186 | TautOr : taut -> taut -> taut
adamc@143 187 | TautImp : taut -> taut -> taut.
adamc@143 188
adam@432 189 (** We write a recursive function to _reflect_ this syntax back to [Prop]. Such functions are also called%\index{interpretation function}% _interpretation functions_, and we have used them in previous examples to give semantics to small programming languages. *)
adamc@143 190
adamc@143 191 Fixpoint tautDenote (t : taut) : Prop :=
adamc@143 192 match t with
adamc@143 193 | TautTrue => True
adamc@143 194 | TautAnd t1 t2 => tautDenote t1 /\ tautDenote t2
adamc@143 195 | TautOr t1 t2 => tautDenote t1 \/ tautDenote t2
adamc@143 196 | TautImp t1 t2 => tautDenote t1 -> tautDenote t2
adamc@143 197 end.
adamc@143 198
adamc@143 199 (** It is easy to prove that every formula in the range of [tautDenote] is true. *)
adamc@143 200
adamc@143 201 Theorem tautTrue : forall t, tautDenote t.
adamc@143 202 induction t; crush.
adamc@143 203 Qed.
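
(** Already we can see the essence of the reflective step: applying [tautTrue] to a reified syntax tree yields, by mere computation of [tautDenote], a proof of the corresponding formula. For example, this check (a hypothetical one-liner, not needed below) is accepted because [tautDenote (TautAnd TautTrue TautTrue)] reduces to [True /\ True]:
[[
Check (tautTrue (TautAnd TautTrue TautTrue) : True /\ True).
]]
*)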
adamc@143 204
adam@360 205 (** To use [tautTrue] to prove particular formulas, we need to implement the syntax reification process. A recursive Ltac function does the job. *)
adamc@143 206
adam@360 207 Ltac tautReify P :=
adamc@143 208 match P with
adamc@143 209 | True => TautTrue
adamc@143 210 | ?P1 /\ ?P2 =>
adam@360 211 let t1 := tautReify P1 in
adam@360 212 let t2 := tautReify P2 in
adamc@143 213 constr:(TautAnd t1 t2)
adamc@143 214 | ?P1 \/ ?P2 =>
adam@360 215 let t1 := tautReify P1 in
adam@360 216 let t2 := tautReify P2 in
adamc@143 217 constr:(TautOr t1 t2)
adamc@143 218 | ?P1 -> ?P2 =>
adam@360 219 let t1 := tautReify P1 in
adam@360 220 let t2 := tautReify P2 in
adamc@143 221 constr:(TautImp t1 t2)
adamc@143 222 end.
adamc@143 223
adam@461 224 (** With [tautReify] available, it is easy to finish our reflective tactic. We look at the goal formula, reify it, and apply [tautTrue] to the reified formula. *)
adamc@143 225
adamc@143 226 Ltac obvious :=
adamc@143 227 match goal with
adamc@143 228 | [ |- ?P ] =>
adam@360 229 let t := tautReify P in
adamc@143 230 exact (tautTrue t)
adamc@143 231 end.
adamc@143 232
adamc@143 233 (** We can verify that [obvious] solves our original example, with a proof term that does not mention details of the proof. *)
adamc@148 234 (* end thide *)
adamc@143 235
adamc@143 236 Theorem true_galore' : (True /\ True) -> (True \/ (True /\ (True -> True))).
adamc@143 237 obvious.
adamc@143 238 Qed.
adamc@143 239
adamc@143 240 Print true_galore'.
adamc@221 241 (** %\vspace{-.15in}% [[
adamc@143 242 true_galore' =
adamc@143 243 tautTrue
adamc@143 244 (TautImp (TautAnd TautTrue TautTrue)
adamc@143 245 (TautOr TautTrue (TautAnd TautTrue (TautImp TautTrue TautTrue))))
adamc@143 246 : True /\ True -> True \/ True /\ (True -> True)
adamc@143 247 ]]
adamc@143 248
adam@508 249 It is worth considering how the reflective tactic improves on a pure-Ltac implementation. The formula reification process is just as ad-hoc as before, so we gain little there. In general, proofs will be more complicated than formula translation, and the "generic proof rule" that we apply here _is_ on much better formal footing than a recursive Ltac function. The dependent type of the proof guarantees that it "works" on any input formula. This benefit is in addition to the proof-size improvement that we have already seen.
adam@360 250
adam@360 251 It may also be worth pointing out that our previous example of evenness testing used a function [partialOut] for sound handling of input goals that the verified decision procedure fails to prove. Here, we prove that our procedure [tautTrue] (recall that an inductive proof may be viewed as a recursive procedure) is able to prove any goal representable in [taut], so no extra step is necessary. *)
adamc@144 252
adamc@144 253
adamc@145 254 (** * A Monoid Expression Simplifier *)
adamc@145 255
adam@432 256 (** Proof by reflection does not require encoding of all of the syntax in a goal. We can insert "variables" in our syntax types to allow injection of arbitrary pieces, even if we cannot apply specialized reasoning to them. In this section, we explore that possibility by writing a tactic for normalizing monoid equations. *)
adamc@146 257
adamc@145 258 Section monoid.
adamc@145 259 Variable A : Set.
adamc@145 260 Variable e : A.
adamc@145 261 Variable f : A -> A -> A.
adamc@145 262
adamc@145 263 Infix "+" := f.
adamc@145 264
adamc@145 265 Hypothesis assoc : forall a b c, (a + b) + c = a + (b + c).
adamc@145 266 Hypothesis identl : forall a, e + a = a.
adamc@145 267 Hypothesis identr : forall a, a + e = a.
adamc@145 268
adamc@146 269 (** We add variables and hypotheses characterizing an arbitrary instance of the algebraic structure of monoids. We have an associative binary operator and an identity element for it.
adamc@146 270
adam@432 271 It is easy to define an expression tree type for monoid expressions. A [Var] constructor is a "catch-all" case for subexpressions that we cannot model. These subexpressions could be actual Gallina variables, or they could just use functions that our tactic is unable to understand. *)
adamc@146 272
adamc@148 273 (* begin thide *)
adamc@145 274 Inductive mexp : Set :=
adamc@145 275 | Ident : mexp
adamc@145 276 | Var : A -> mexp
adamc@145 277 | Op : mexp -> mexp -> mexp.
adamc@145 278
adam@360 279 (** Next, we write an interpretation function. *)
adamc@146 280
adamc@145 281 Fixpoint mdenote (me : mexp) : A :=
adamc@145 282 match me with
adamc@145 283 | Ident => e
adamc@145 284 | Var v => v
adamc@145 285 | Op me1 me2 => mdenote me1 + mdenote me2
adamc@145 286 end.
adamc@145 287
adamc@146 288 (** We will normalize expressions by flattening them into lists, via associativity, so it is helpful to have a denotation function for lists of monoid values. *)
adamc@146 289
adamc@145 290 Fixpoint mldenote (ls : list A) : A :=
adamc@145 291 match ls with
adamc@145 292 | nil => e
adamc@145 293 | x :: ls' => x + mldenote ls'
adamc@145 294 end.
adamc@145 295
adamc@146 296 (** The flattening function itself is easy to implement. *)
adamc@146 297
adamc@145 298 Fixpoint flatten (me : mexp) : list A :=
adamc@145 299 match me with
adamc@145 300 | Ident => nil
adamc@145 301 | Var x => x :: nil
adamc@145 302 | Op me1 me2 => flatten me1 ++ flatten me2
adamc@145 303 end.
adamc@145 304
adam@461 305 (** This function has a straightforward correctness proof in terms of our [denote] functions. *)
adamc@146 306
adamc@146 307 Lemma flatten_correct' : forall ml2 ml1,
adamc@146 308 mldenote ml1 + mldenote ml2 = mldenote (ml1 ++ ml2).
adamc@145 309 induction ml1; crush.
adamc@145 310 Qed.
adamc@145 311
adamc@145 312 Theorem flatten_correct : forall me, mdenote me = mldenote (flatten me).
adamc@145 313 Hint Resolve flatten_correct'.
adamc@145 314
adamc@145 315 induction me; crush.
adamc@145 316 Qed.
adamc@145 317
adamc@146 318 (** Now it is easy to prove a theorem that will be the main tool behind our simplification tactic. *)
adamc@146 319
adamc@146 320 Theorem monoid_reflect : forall me1 me2,
adamc@146 321 mldenote (flatten me1) = mldenote (flatten me2)
adamc@146 322 -> mdenote me1 = mdenote me2.
adamc@145 323 intros; repeat rewrite flatten_correct; assumption.
adamc@145 324 Qed.
adamc@145 325
adam@360 326 (** We implement reification into the [mexp] type. *)
adamc@146 327
adam@360 328 Ltac reify me :=
adamc@146 329 match me with
adamc@145 330 | e => Ident
adamc@146 331 | ?me1 + ?me2 =>
adam@360 332 let r1 := reify me1 in
adam@360 333 let r2 := reify me2 in
adamc@145 334 constr:(Op r1 r2)
adamc@146 335 | _ => constr:(Var me)
adamc@145 336 end.
adamc@145 337
adam@360 338 (** The final [monoid] tactic works on goals that equate two monoid terms. We reify each and change the goal to refer to the reified versions, finishing off by applying [monoid_reflect] and simplifying uses of [mldenote]. Recall that the %\index{tactics!change}%[change] tactic replaces a conclusion formula with another that is definitionally equal to it. *)
adamc@146 339
adamc@145 340 Ltac monoid :=
adamc@145 341 match goal with
adamc@146 342 | [ |- ?me1 = ?me2 ] =>
adam@360 343 let r1 := reify me1 in
adam@360 344 let r2 := reify me2 in
adamc@145 345 change (mdenote r1 = mdenote r2);
adam@360 346 apply monoid_reflect; simpl
adamc@145 347 end.
adamc@145 348
adamc@146 349 (** We can make short work of theorems like this one: *)
adamc@146 350
adamc@148 351 (* end thide *)
adamc@148 352
adamc@145 353 Theorem t1 : forall a b c d, a + b + c + d = a + (b + c) + d.
adamc@146 354 intros; monoid.
adamc@146 355 (** [[
adamc@146 356 ============================
adamc@146 357 a + (b + (c + (d + e))) = a + (b + (c + (d + e)))
adamc@221 358
adamc@146 359 ]]
adamc@146 360
adam@360 361 Our tactic has canonicalized both sides of the equality, such that we can finish the proof by reflexivity. *)
adamc@146 362
adamc@145 363 reflexivity.
adamc@145 364 Qed.
adamc@146 365
adamc@146 366 (** It is interesting to look at the form of the proof. *)
adamc@146 367
adamc@146 368 Print t1.
adamc@221 369 (** %\vspace{-.15in}% [[
adamc@146 370 t1 =
adamc@146 371 fun a b c d : A =>
adamc@146 372 monoid_reflect (Op (Op (Op (Var a) (Var b)) (Var c)) (Var d))
adamc@146 373 (Op (Op (Var a) (Op (Var b) (Var c))) (Var d))
adam@426 374 (eq_refl (a + (b + (c + (d + e)))))
adamc@146 375 : forall a b c d : A, a + b + c + d = a + (b + c) + d
adamc@146 376 ]]
adamc@146 377
adam@360 378 The proof term contains only restatements of the equality operands in reified form, followed by a use of reflexivity on the shared canonical form. *)
adamc@221 379
adamc@145 380 End monoid.
adamc@145 381
adam@360 382 (** Extensions of this basic approach are used in the implementations of the %\index{tactics!ring}%[ring] and %\index{tactics!field}%[field] tactics that come packaged with Coq. *)
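
(** For instance, once the standard library's semiring structure for [nat] has been registered (as the [ArithRing] module does), [ring] dispatches arithmetic identities through just this kind of reification and normalization. Here is a hypothetical usage sketch (the theorem name is made up and not part of this development):
[[
Require Import ArithRing.

Theorem square_of_sum : forall a b : nat,
  (a + b) * (a + b) = a * a + 2 * (a * b) + b * b.
  intros; ring.
Qed.
]]
*)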
adamc@146 383
adamc@145 384
adamc@144 385 (** * A Smarter Tautology Solver *)
adamc@144 386
adam@412 387 (** Now we are ready to revisit our earlier tautology solver example. We want to broaden the scope of the tactic to include formulas whose truth is not syntactically apparent. We will want to allow injection of arbitrary formulas, like we allowed arbitrary monoid expressions in the last example. Since we are working in a richer theory, it is important to be able to use equalities between different injected formulas. For instance, we cannot prove [P -> P] by translating the formula into a value like [Imp (Var P) (Var P)], because a Gallina function has no way of comparing the two [P]s for equality.
adamc@147 388
adam@360 389 To arrive at a nice implementation satisfying these criteria, we introduce the %\index{tactics!quote}%[quote] tactic and its associated library. *)
adamc@147 390
adamc@144 391 Require Import Quote.
adamc@144 392
adamc@148 393 (* begin thide *)
adamc@144 394 Inductive formula : Set :=
adamc@144 395 | Atomic : index -> formula
adamc@144 396 | Truth : formula
adamc@144 397 | Falsehood : formula
adamc@144 398 | And : formula -> formula -> formula
adamc@144 399 | Or : formula -> formula -> formula
adamc@144 400 | Imp : formula -> formula -> formula.
adam@362 401 (* end thide *)
adamc@144 402
adam@360 403 (** The type %\index{Gallina terms!index}%[index] comes from the [Quote] library and represents a countable variable type. The rest of [formula]'s definition should be old hat by now.
adamc@147 404
adam@484 405 The [quote] tactic will implement injection from [Prop] into [formula] for us, but it is not quite as smart as we might like. In particular, it wants to treat function types specially, so it gets confused if function types are part of the structure we want to encode syntactically. To trick [quote] into not noticing our uses of function types to express logical implication, we will need to declare a wrapper definition for implication, as we did in the last chapter. *)
adamc@144 406
adamc@144 407 Definition imp (P1 P2 : Prop) := P1 -> P2.
adamc@144 408 Infix "-->" := imp (no associativity, at level 95).
adamc@144 409
adamc@147 410 (** Now we can define our denotation function. *)
adamc@147 411
adamc@147 412 Definition asgn := varmap Prop.
adamc@147 413
adam@362 414 (* begin thide *)
adamc@144 415 Fixpoint formulaDenote (atomics : asgn) (f : formula) : Prop :=
adamc@144 416 match f with
adamc@144 417 | Atomic v => varmap_find False v atomics
adamc@144 418 | Truth => True
adamc@144 419 | Falsehood => False
adamc@144 420 | And f1 f2 => formulaDenote atomics f1 /\ formulaDenote atomics f2
adamc@144 421 | Or f1 f2 => formulaDenote atomics f1 \/ formulaDenote atomics f2
adamc@144 422 | Imp f1 f2 => formulaDenote atomics f1 --> formulaDenote atomics f2
adamc@144 423 end.
adam@362 424 (* end thide *)
adamc@144 425
adam@461 426 (** The %\index{Gallina terms!varmap}%[varmap] type family implements maps from [index] values. In this case, we define an assignment as a map from variables to [Prop]s. Our interpretation function [formulaDenote] works with an assignment, and we use the [varmap_find] function to consult the assignment in the [Atomic] case. The first argument to [varmap_find] is a default value, in case the variable is not found. *)
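
(** For example (a small hypothetical check, not needed in what follows), the path "go left, then stop" selects the value at the root of the left subtree:
[[
Eval compute in varmap_find 0 (Left_idx End_idx)
  (Node_vm 1 (Node_vm 2 (Empty_vm nat) (Empty_vm nat)) (Empty_vm nat)).
]]
This should reduce to [2], while a path that runs off the end of the tree falls back on the default value [0]. *)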
adamc@147 427
adamc@144 428 Section my_tauto.
adamc@144 429 Variable atomics : asgn.
adamc@144 430
adamc@144 431 Definition holds (v : index) := varmap_find False v atomics.
adamc@144 432
adamc@147 433 (** We define some shorthand for a particular variable being true, and now we are ready to define some helpful functions based on the [ListSet] module of the standard library, which (unsurprisingly) presents a view of lists as sets. *)
adamc@147 434
adamc@144 435 Require Import ListSet.
adamc@144 436
adamc@144 437 Definition index_eq : forall x y : index, {x = y} + {x <> y}.
adamc@144 438 decide equality.
adamc@144 439 Defined.
adamc@144 440
adamc@144 441 Definition add (s : set index) (v : index) := set_add index_eq v s.
adamc@147 442
adamc@221 443 Definition In_dec : forall v (s : set index), {In v s} + {~ In v s}.
adamc@221 444 Local Open Scope specif_scope.
adamc@144 445
adamc@221 446 intro; refine (fix F (s : set index) : {In v s} + {~ In v s} :=
adamc@221 447 match s with
adamc@144 448 | nil => No
adamc@144 449 | v' :: s' => index_eq v' v || F s'
adamc@144 450 end); crush.
adamc@144 451 Defined.
adamc@144 452
adamc@147 453 (** We define what it means for all members of an index set to represent true propositions, and we prove some lemmas about this notion. *)
adamc@147 454
adamc@144 455 Fixpoint allTrue (s : set index) : Prop :=
adamc@144 456 match s with
adamc@144 457 | nil => True
adamc@144 458 | v :: s' => holds v /\ allTrue s'
adamc@144 459 end.
adamc@144 460
adamc@144 461 Theorem allTrue_add : forall v s,
adamc@144 462 allTrue s
adamc@144 463 -> holds v
adamc@144 464 -> allTrue (add s v).
adamc@144 465 induction s; crush;
adamc@144 466 match goal with
adamc@144 467 | [ |- context[if ?E then _ else _] ] => destruct E
adamc@144 468 end; crush.
adamc@144 469 Qed.
adamc@144 470
adamc@144 471 Theorem allTrue_In : forall v s,
adamc@144 472 allTrue s
adamc@144 473 -> set_In v s
adamc@144 474 -> varmap_find False v atomics.
adamc@144 475 induction s; crush.
adamc@144 476 Qed.
adamc@144 477
adamc@144 478 Hint Resolve allTrue_add allTrue_In.
adamc@144 479
adamc@221 480 Local Open Scope partial_scope.
adamc@144 481
adam@484 482 (** Now we can write a function [forward] that implements deconstruction of hypotheses, expanding a compound formula into a set of sets of atomic formulas covering all possible cases introduced with use of [Or]. To handle consideration of multiple cases, the function takes in a continuation argument, which will be called once for each case.
adam@484 483
adam@484 484 The [forward] function has a dependent type, in the style of Chapter 6, guaranteeing correctness. The arguments to [forward] are a goal formula [f], a set [known] of atomic formulas that we may assume are true, a hypothesis formula [hyp], and a success continuation [cont] that we call when we have extended [known] to hold new truths implied by [hyp]. *)
adamc@147 485
adam@297 486 Definition forward : forall (f : formula) (known : set index) (hyp : formula)
adam@297 487 (cont : forall known', [allTrue known' -> formulaDenote atomics f]),
adam@297 488 [allTrue known -> formulaDenote atomics hyp -> formulaDenote atomics f].
adamc@144 489 refine (fix F (f : formula) (known : set index) (hyp : formula)
adamc@221 490 (cont : forall known', [allTrue known' -> formulaDenote atomics f])
adamc@144 491 : [allTrue known -> formulaDenote atomics hyp -> formulaDenote atomics f] :=
adamc@221 492 match hyp with
adamc@144 493 | Atomic v => Reduce (cont (add known v))
adamc@144 494 | Truth => Reduce (cont known)
adamc@144 495 | Falsehood => Yes
adamc@144 496 | And h1 h2 =>
adamc@144 497 Reduce (F (Imp h2 f) known h1 (fun known' =>
adamc@144 498 Reduce (F f known' h2 cont)))
adamc@144 499 | Or h1 h2 => F f known h1 cont && F f known h2 cont
adamc@144 500 | Imp _ _ => Reduce (cont known)
adamc@144 501 end); crush.
adamc@144 502 Defined.
adamc@144 503
adamc@147 504 (** A [backward] function implements analysis of the final goal. It calls [forward] to handle implications. *)
adamc@147 505
adam@362 506 (* begin thide *)
adam@297 507 Definition backward : forall (known : set index) (f : formula),
adam@297 508 [allTrue known -> formulaDenote atomics f].
adamc@221 509 refine (fix F (known : set index) (f : formula)
adamc@221 510 : [allTrue known -> formulaDenote atomics f] :=
adamc@221 511 match f with
adamc@144 512 | Atomic v => Reduce (In_dec v known)
adamc@144 513 | Truth => Yes
adamc@144 514 | Falsehood => No
adamc@144 515 | And f1 f2 => F known f1 && F known f2
adamc@144 516 | Or f1 f2 => F known f1 || F known f2
adamc@144 517 | Imp f1 f2 => forward f2 known f1 (fun known' => F known' f2)
adamc@144 518 end); crush; eauto.
adamc@144 519 Defined.
adam@362 520 (* end thide *)
adamc@144 521
adamc@147 522 (** A simple wrapper around [backward] gives us the usual type of a partial decision procedure. *)
adamc@147 523
adam@297 524 Definition my_tauto : forall f : formula, [formulaDenote atomics f].
adam@362 525 (* begin thide *)
adamc@144 526 intro; refine (Reduce (backward nil f)); crush.
adamc@144 527 Defined.
adam@362 528 (* end thide *)
adamc@144 529 End my_tauto.
adamc@144 530
adam@360 531 (** Our final tactic implementation is now fairly straightforward. First, we [intro] all quantifiers that do not bind [Prop]s. Then we call the [quote] tactic, which implements the reification for us. Finally, we are able to construct an exact proof via [partialOut] and the [my_tauto] Gallina function. *)
adamc@147 532
adamc@144 533 Ltac my_tauto :=
adamc@144 534 repeat match goal with
adamc@144 535 | [ |- forall x : ?P, _ ] =>
adamc@144 536 match type of P with
adamc@144 537 | Prop => fail 1
adamc@144 538 | _ => intro
adamc@144 539 end
adamc@144 540 end;
adamc@144 541 quote formulaDenote;
adamc@144 542 match goal with
adamc@144 543 | [ |- formulaDenote ?m ?f ] => exact (partialOut (my_tauto m f))
adamc@144 544 end.
adamc@148 545 (* end thide *)
adamc@144 546
adamc@147 547 (** A few examples demonstrate how the tactic works. *)
adamc@147 548
adamc@144 549 Theorem mt1 : True.
adamc@144 550 my_tauto.
adamc@144 551 Qed.
adamc@144 552
adamc@144 553 Print mt1.
adamc@221 554 (** %\vspace{-.15in}% [[
adamc@147 555 mt1 = partialOut (my_tauto (Empty_vm Prop) Truth)
adamc@147 556 : True
adamc@147 557 ]]
adamc@147 558
adamc@147 559 We see [my_tauto] applied with an empty [varmap], since every subformula is handled by [formulaDenote]. *)
adamc@144 560
adamc@144 561 Theorem mt2 : forall x y : nat, x = y --> x = y.
adamc@144 562 my_tauto.
adamc@144 563 Qed.
adamc@144 564
adam@432 565 (* begin hide *)
adam@437 566 (* begin thide *)
adam@432 567 Definition nvm := (Node_vm, Empty_vm, End_idx, Left_idx, Right_idx).
adam@437 568 (* end thide *)
adam@432 569 (* end hide *)
adam@432 570
adamc@144 571 Print mt2.
adamc@221 572 (** %\vspace{-.15in}% [[
adamc@147 573 mt2 =
adamc@147 574 fun x y : nat =>
adamc@147 575 partialOut
adamc@147 576 (my_tauto (Node_vm (x = y) (Empty_vm Prop) (Empty_vm Prop))
adamc@147 577 (Imp (Atomic End_idx) (Atomic End_idx)))
adamc@147 578 : forall x y : nat, x = y --> x = y
adamc@147 579 ]]
adamc@147 580
adamc@147 581 Crucially, both instances of [x = y] are represented with the same index, [End_idx]. The value of this index only needs to appear once in the [varmap], whose form reveals that [varmap]s are represented as binary trees, where [index] values denote paths from tree roots to leaves. *)
adamc@144 582
adamc@144 583 Theorem mt3 : forall x y z,
adamc@144 584 (x < y /\ y > z) \/ (y > z /\ x < S y)
adamc@144 585 --> y > z /\ (x < y \/ x < S y).
adamc@144 586 my_tauto.
adamc@144 587 Qed.
adamc@144 588
adamc@144 589 Print mt3.
adamc@221 590 (** %\vspace{-.15in}% [[
adamc@147 591 fun x y z : nat =>
adamc@147 592 partialOut
adamc@147 593 (my_tauto
adamc@147 594 (Node_vm (x < S y) (Node_vm (x < y) (Empty_vm Prop) (Empty_vm Prop))
adamc@147 595 (Node_vm (y > z) (Empty_vm Prop) (Empty_vm Prop)))
adamc@147 596 (Imp
adamc@147 597 (Or (And (Atomic (Left_idx End_idx)) (Atomic (Right_idx End_idx)))
adamc@147 598 (And (Atomic (Right_idx End_idx)) (Atomic End_idx)))
adamc@147 599 (And (Atomic (Right_idx End_idx))
adamc@147 600 (Or (Atomic (Left_idx End_idx)) (Atomic End_idx)))))
adamc@147 601 : forall x y z : nat,
adamc@147 602 x < y /\ y > z \/ y > z /\ x < S y --> y > z /\ (x < y \/ x < S y)
adamc@147 603 ]]
adamc@147 604
adamc@147 605 Our goal contained three distinct atomic formulas, and we see that a three-element [varmap] is generated.
adamc@147 606
adamc@147 607 It can be interesting to observe differences between the level of repetition in proof terms generated by [my_tauto] and [tauto] for especially trivial theorems. *)
adamc@144 608
adamc@144 609 Theorem mt4 : True /\ True /\ True /\ True /\ True /\ True /\ False --> False.
adamc@144 610 my_tauto.
adamc@144 611 Qed.
adamc@144 612
adamc@144 613 Print mt4.
adamc@221 614 (** %\vspace{-.15in}% [[
adamc@147 615 mt4 =
adamc@147 616 partialOut
adamc@147 617 (my_tauto (Empty_vm Prop)
adamc@147 618 (Imp
adamc@147 619 (And Truth
adamc@147 620 (And Truth
adamc@147 621 (And Truth (And Truth (And Truth (And Truth Falsehood))))))
adamc@147 622 Falsehood))
adamc@147 623 : True /\ True /\ True /\ True /\ True /\ True /\ False --> False
adam@302 624 ]]
adam@302 625 *)
adamc@144 626
adamc@144 627 Theorem mt4' : True /\ True /\ True /\ True /\ True /\ True /\ False -> False.
adamc@144 628 tauto.
adamc@144 629 Qed.
adamc@144 630
adam@432 631 (* begin hide *)
adam@437 632 (* begin thide *)
adam@432 633 Definition fi := False_ind.
adam@437 634 (* end thide *)
adam@432 635 (* end hide *)
adam@432 636
adamc@144 637 Print mt4'.
adamc@221 638 (** %\vspace{-.15in}% [[
adamc@147 639 mt4' =
adamc@147 640 fun H : True /\ True /\ True /\ True /\ True /\ True /\ False =>
adamc@147 641 and_ind
adamc@147 642 (fun (_ : True) (H1 : True /\ True /\ True /\ True /\ True /\ False) =>
adamc@147 643 and_ind
adamc@147 644 (fun (_ : True) (H3 : True /\ True /\ True /\ True /\ False) =>
adamc@147 645 and_ind
adamc@147 646 (fun (_ : True) (H5 : True /\ True /\ True /\ False) =>
adamc@147 647 and_ind
adamc@147 648 (fun (_ : True) (H7 : True /\ True /\ False) =>
adamc@147 649 and_ind
adamc@147 650 (fun (_ : True) (H9 : True /\ False) =>
adamc@147 651 and_ind (fun (_ : True) (H11 : False) => False_ind False H11)
adamc@147 652 H9) H7) H5) H3) H1) H
adamc@147 653 : True /\ True /\ True /\ True /\ True /\ True /\ False -> False
adam@302 654 ]]
adam@360 655
adam@360 656 The traditional [tauto] tactic introduces a quadratic blow-up in the size of the proof term, whereas proofs produced by [my_tauto] always have linear size. *)
adamc@147 657
adam@361 658 (** ** Manual Reification of Terms with Variables *)
adam@361 659
adam@362 660 (* begin thide *)
adam@361 661 (** The action of the [quote] tactic above may seem like magic. Somehow it performs equality comparison between subterms of arbitrary types, so that these subterms may be represented with the same reified variable. While [quote] is implemented in OCaml, we can code the reification process completely in Ltac, as well. To make our job simpler, we will represent variables as [nat]s, indexing into a simple list of variable values that may be referenced.
adam@361 662
adam@485 663 Step one of the process is to crawl over a term, building a duplicate-free list of all values that appear in positions we will encode as variables. A useful helper function adds an element to a list, preventing duplicates. Note how we use Ltac pattern matching to implement an equality test on Gallina terms; this is simple syntactic equality, not even the richer definitional equality. We also represent lists as nested tuples, to allow different list elements to have different Gallina types. *)
adam@361 664
adam@361 665 Ltac inList x xs :=
adam@361 666 match xs with
adam@361 667 | tt => false
adam@361 668 | (x, _) => true
adam@361 669 | (_, ?xs') => inList x xs'
adam@361 670 end.
adam@361 671
adam@361 672 Ltac addToList x xs :=
adam@361 673 let b := inList x xs in
adam@361 674 match b with
adam@361 675 | true => xs
adam@361 676 | false => constr:(x, xs)
adam@361 677 end.
adam@361 678
adam@361 679 (** Now we can write our recursive function to calculate the list of variable values we will want to use to represent a term. *)
adam@361 680
adam@361 681 Ltac allVars xs e :=
adam@361 682 match e with
adam@361 683 | True => xs
adam@361 684 | False => xs
adam@361 685 | ?e1 /\ ?e2 =>
adam@361 686 let xs := allVars xs e1 in
adam@361 687 allVars xs e2
adam@361 688 | ?e1 \/ ?e2 =>
adam@361 689 let xs := allVars xs e1 in
adam@361 690 allVars xs e2
adam@361 691 | ?e1 -> ?e2 =>
adam@361 692 let xs := allVars xs e1 in
adam@361 693 allVars xs e2
adam@361 694 | _ => addToList e xs
adam@361 695 end.
adam@361 696
adam@361 697 (** We will also need a way to map a value to its position in a list. *)
adam@361 698
adam@361 699 Ltac lookup x xs :=
adam@361 700 match xs with
adam@361 701 | (x, _) => O
adam@361 702 | (_, ?xs') =>
adam@361 703 let n := lookup x xs' in
adam@361 704 constr:(S n)
adam@361 705 end.
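
(** We can watch [allVars] and [lookup] at work from inside a trivial proof (a hypothetical experiment, not needed below):
[[
Goal True.
  let xs := allVars tt ((1 = 2 \/ 2 = 3) /\ 1 = 2) in
  let n := lookup (1 = 2) xs in
    pose xs; pose n.
]]
The first [pose] should add the duplicate-free nested tuple [(2 = 3, (1 = 2, tt))] to the context, and the second should add [1], the position of [1 = 2] within that tuple. *)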
adam@361 706
adam@461 707 (** The next building block is a procedure for reifying a term, given a list of all allowed variable values. We are free to make this procedure partial, where tactic failure may be triggered upon attempting to reify a term containing subterms not included in the list of variables. The type of the output term is a copy of [formula] where [index] is replaced by [nat], in the type of the constructor for atomic formulas. *)
adam@361 708
adam@361 709 Inductive formula' : Set :=
adam@361 710 | Atomic' : nat -> formula'
adam@361 711 | Truth' : formula'
adam@361 712 | Falsehood' : formula'
adam@361 713 | And' : formula' -> formula' -> formula'
adam@361 714 | Or' : formula' -> formula' -> formula'
adam@361 715 | Imp' : formula' -> formula' -> formula'.
adam@361 716
adam@361 717 (** Note that, when we write our own Ltac procedure, we can work directly with the normal [->] operator, rather than needing to introduce a wrapper for it. *)
adam@361 718
adam@361 719 Ltac reifyTerm xs e :=
adam@361 720 match e with
adam@432 721 | True => constr:(Truth')
adam@432 722 | False => constr:(Falsehood')
adam@361 723 | ?e1 /\ ?e2 =>
adam@361 724 let p1 := reifyTerm xs e1 in
adam@361 725 let p2 := reifyTerm xs e2 in
adam@361 726 constr:(And' p1 p2)
adam@361 727 | ?e1 \/ ?e2 =>
adam@361 728 let p1 := reifyTerm xs e1 in
adam@361 729 let p2 := reifyTerm xs e2 in
adam@361 730 constr:(Or' p1 p2)
adam@361 731 | ?e1 -> ?e2 =>
adam@361 732 let p1 := reifyTerm xs e1 in
adam@361 733 let p2 := reifyTerm xs e2 in
adam@361 734 constr:(Imp' p1 p2)
adam@361 735 | _ =>
adam@361 736 let n := lookup e xs in
adam@361 737 constr:(Atomic' n)
adam@361 738 end.
adam@361 739
adam@361 740 (** Finally, we bring all the pieces together. *)
adam@361 741
adam@361 742 Ltac reify :=
adam@361 743 match goal with
adam@361 744 | [ |- ?G ] => let xs := allVars tt G in
adam@361 745 let p := reifyTerm xs G in
adam@361 746 pose p
adam@361 747 end.
adam@361 748
adam@361 749 (** A quick test verifies that we are doing reification correctly. *)
adam@361 750
adam@361 751 Theorem mt3' : forall x y z,
adam@361 752 (x < y /\ y > z) \/ (y > z /\ x < S y)
adam@361 753 -> y > z /\ (x < y \/ x < S y).
adam@361 754 do 3 intro; reify.
adam@361 755
adam@361 756 (** Our simple tactic adds the translated term as a new variable:
adam@361 757 [[
adam@361 758 f := Imp'
adam@361 759 (Or' (And' (Atomic' 2) (Atomic' 1)) (And' (Atomic' 1) (Atomic' 0)))
adam@361 760 (And' (Atomic' 1) (Or' (Atomic' 2) (Atomic' 0))) : formula'
adam@361 761 ]]
adam@361 762 *)
adam@361 763 Abort.
adam@361 764
adam@361 765 (** More work would be needed to complete the reflective tactic, as we must connect our new syntax type with the real meanings of formulas, but the details are the same as in our prior implementation with [quote]. *)
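
(** For instance, the interpretation function could look just like [formulaDenote], but resolving variables by position in a [list Prop] built from the tuple that [allVars] computes. A hypothetical sketch of that missing piece (the name [formulaDenote'] is ours):
[[
Fixpoint formulaDenote' (atomics : list Prop) (f : formula') : Prop :=
  match f with
    | Atomic' n => nth n atomics False
    | Truth' => True
    | Falsehood' => False
    | And' f1 f2 => formulaDenote' atomics f1 /\ formulaDenote' atomics f2
    | Or' f1 f2 => formulaDenote' atomics f1 \/ formulaDenote' atomics f2
    | Imp' f1 f2 => formulaDenote' atomics f1 -> formulaDenote' atomics f2
  end.
]]
The reflective tactic would then [change] the goal into an application of [formulaDenote'] to such a list and the reified formula, and finish with a verified procedure over [formula'], just as [my_tauto] does over [formula]. *)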
adam@362 766 (* end thide *)
adam@378 767
adam@378 768
adam@378 769 (** * Building a Reification Tactic that Recurses Under Binders *)
adam@378 770
adam@378 771 (** All of our examples so far have stayed away from reifying the syntax of terms that use such features as quantifiers and [fun] function abstractions. Such cases are complicated by the fact that different subterms may be allowed to reference different sets of free variables. Some cleverness is needed to clear this hurdle, but a few simple patterns will suffice. Consider this example of a simple dependently typed term language, where a function abstraction body is represented conveniently with a Coq function. *)
adam@378 772
adam@378 773 Inductive type : Type :=
adam@378 774 | Nat : type
adam@378 775 | NatFunc : type -> type.
adam@378 776
adam@378 777 Inductive term : type -> Type :=
adam@378 778 | Const : nat -> term Nat
adam@378 779 | Plus : term Nat -> term Nat -> term Nat
adam@378 780 | Abs : forall t, (nat -> term t) -> term (NatFunc t).
adam@378 781
adam@378 782 Fixpoint typeDenote (t : type) : Type :=
adam@378 783 match t with
adam@378 784 | Nat => nat
adam@378 785 | NatFunc t => nat -> typeDenote t
adam@378 786 end.
adam@378 787
adam@378 788 Fixpoint termDenote t (e : term t) : typeDenote t :=
adam@378 789 match e with
adam@378 790 | Const n => n
adam@378 791 | Plus e1 e2 => termDenote e1 + termDenote e2
adam@378 792 | Abs _ e1 => fun x => termDenote (e1 x)
adam@378 793 end.
adam@378 794
adam@484 795 (** Here is a %\%naive%{}% first attempt at a reification tactic. *)
adam@378 796
adam@534 797 (* begin hide *)
adam@534 798 Definition red_herring := O.
adam@534 799 (* end hide *)
adam@378 800 Ltac refl' e :=
adam@378 801 match e with
adam@378 802 | ?E1 + ?E2 =>
adam@378 803 let r1 := refl' E1 in
adam@378 804 let r2 := refl' E2 in
adam@378 805 constr:(Plus r1 r2)
adam@378 806
adam@378 807 | fun x : nat => ?E1 =>
adam@378 808 let r1 := refl' E1 in
adam@378 809 constr:(Abs (fun x => r1 x))
adam@378 810
adam@378 811 | _ => constr:(Const e)
adam@378 812 end.
adam@378 813
adam@484 814 (** Recall that a regular Ltac pattern variable [?X] only matches terms that _do not mention new variables introduced within the pattern_. In our %\%naive%{}% implementation, the case for matching function abstractions matches the function body in a way that prevents it from mentioning the function argument! Our code above plays fast and loose with the function body in a way that leads to independent problems, but we could change the code so that it indeed handles function abstractions that ignore their arguments.
adam@378 815
adam@484 816 To handle functions in general, we will use the pattern variable form [@?X], which allows [X] to mention newly introduced variables that are declared explicitly. A use of [@?X] must be followed by a list of the local variables that may be mentioned. The variable [X] then comes to stand for a Gallina function over the values of those variables. For instance: *)
adam@378 817
adam@378 818 Reset refl'.
adam@534 819 (* begin hide *)
adam@534 820 Reset red_herring.
adam@534 821 Definition red_herring := O.
adam@534 822 (* end hide *)
adam@378 823 Ltac refl' e :=
adam@378 824 match e with
adam@378 825 | ?E1 + ?E2 =>
adam@378 826 let r1 := refl' E1 in
adam@378 827 let r2 := refl' E2 in
adam@378 828 constr:(Plus r1 r2)
adam@378 829
adam@378 830 | fun x : nat => @?E1 x =>
adam@378 831 let r1 := refl' E1 in
adam@378 832 constr:(Abs r1)
adam@378 833
adam@378 834 | _ => constr:(Const e)
adam@378 835 end.
adam@378 836
adam@398 837 (** Now, in the abstraction case, we bind [E1] as a function from an [x] value to the value of the abstraction body. Unfortunately, our recursive call there is not destined for success. It will match the same abstraction pattern and trigger another recursive call, and so on through infinite recursion. One last refactoring yields a working procedure. The key idea is to consider every input to [refl'] as _a function over the values of variables introduced during recursion_. *)
adam@378 838
adam@378 839 Reset refl'.
adam@534 840 (* begin hide *)
adam@534 841 Reset red_herring.
adam@534 842 (* end hide *)
adam@378 843 Ltac refl' e :=
adam@378 844 match eval simpl in e with
adam@378 845 | fun x : ?T => @?E1 x + @?E2 x =>
adam@378 846 let r1 := refl' E1 in
adam@378 847 let r2 := refl' E2 in
adam@378 848 constr:(fun x => Plus (r1 x) (r2 x))
adam@378 849
adam@378 850 | fun (x : ?T) (y : nat) => @?E1 x y =>
adam@378 851 let r1 := refl' (fun p : T * nat => E1 (fst p) (snd p)) in
adam@534 852 constr:(fun u => Abs (fun v => r1 (u, v)))
adam@378 853
adam@378 854 | _ => constr:(fun x => Const (e x))
adam@378 855 end.
adam@378 856
adam@378 857 (** Note how now even the addition case works in terms of functions, with [@?X] patterns. The abstraction case introduces a new variable by extending the type used to represent the free variables. In particular, the argument to [refl'] used type [T] to represent all free variables. We extend the type to [T * nat] for the type representing free variable values within the abstraction body. A bit of bookkeeping with pairs and their projections produces an appropriate version of the abstraction body to pass in a recursive call. To ensure that all this repackaging of terms does not interfere with pattern matching, we add an extra [simpl] reduction on the function argument, in the first line of the body of [refl'].
adam@378 858
adam@378 859 Now one more tactic provides an example of how to apply reification. Let us consider goals that are equalities between terms that can be reified. We want to change such goals into equalities between appropriate calls to [termDenote]. *)
adam@378 860
adam@378 861 Ltac refl :=
adam@378 862 match goal with
adam@378 863 | [ |- ?E1 = ?E2 ] =>
adam@378 864 let E1' := refl' (fun _ : unit => E1) in
adam@378 865 let E2' := refl' (fun _ : unit => E2) in
adam@378 866 change (termDenote (E1' tt) = termDenote (E2' tt));
adam@378 867 cbv beta iota delta [fst snd]
adam@378 868 end.
adam@378 869
adam@378 870 Goal (fun (x y : nat) => x + y + 13) = (fun (_ z : nat) => z).
adam@378 871 refl.
adam@378 872 (** %\vspace{-.15in}%[[
adam@378 873 ============================
adam@378 874 termDenote
adam@378 875 (Abs
adam@378 876 (fun y : nat =>
adam@378 877 Abs (fun y0 : nat => Plus (Plus (Const y) (Const y0)) (Const 13)))) =
adam@378 878 termDenote (Abs (fun _ : nat => Abs (fun y0 : nat => Const y0)))
adam@378 879 ]]
adam@378 880 *)
adam@378 881
adam@378 882 Abort.
adam@378 883
adam@484 884 (** Our encoding here uses Coq functions to represent binding within the terms we reify, which makes it difficult to implement certain functions over reified terms. An alternative would be to represent variables with numbers. This can be done by writing a slightly smarter reification function that identifies variable references by detecting when term arguments are just compositions of [fst] and [snd]; from the order of the compositions we may read off the variable number. We leave the details as an exercise (though not a trivial one!) for the reader. *)