annotate src/Reflection.v @ 485:2d66421b8aa1

Small wording change suggested by a proofreader
author Adam Chlipala <adam@chlipala.net>
date Sun, 06 Jan 2013 16:34:22 -0500
parents 5025a401ad9e
children 36428dfcde84
rev   line source
adam@379 1 (* Copyright (c) 2008-2012, Adam Chlipala
adamc@142 2 *
adamc@142 3 * This work is licensed under a
adamc@142 4 * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
adamc@142 5 * Unported License.
adamc@142 6 * The license text is available at:
adamc@142 7 * http://creativecommons.org/licenses/by-nc-nd/3.0/
adamc@142 8 *)
adamc@142 9
adamc@142 10 (* begin hide *)
adamc@142 11 Require Import List.
adamc@142 12
adam@314 13 Require Import CpdtTactics MoreSpecif.
adamc@142 14
adamc@142 15 Set Implicit Arguments.
adamc@142 16 (* end hide *)
adamc@142 17
adamc@142 18
adamc@142 19 (** %\chapter{Proof by Reflection}% *)
adamc@142 20
adam@412 21 (** The last chapter highlighted a very heuristic approach to proving. In this chapter, we will study an alternative technique,%\index{proof by reflection}% _proof by reflection_ %\cite{reflection}%. We will write, in Gallina, decision procedures with proofs of correctness, and we will appeal to these procedures in writing very short proofs. Such a proof is checked by running the decision procedure. The term _reflection_ applies because we will need to translate Gallina propositions into values of inductive types representing syntax, so that Gallina programs may analyze them, and translating such a term back to the original form is called _reflecting_ it. *)
adamc@142 22
adamc@142 23
adamc@142 24 (** * Proving Evenness *)
adamc@142 25
adamc@142 26 (** Proving that particular natural number constants are even is certainly something we would rather have happen automatically. The Ltac-programming techniques that we learned in the last chapter make it easy to implement such a procedure. *)
adamc@142 27
adamc@142 28 Inductive isEven : nat -> Prop :=
adamc@144 29 | Even_O : isEven O
adamc@144 30 | Even_SS : forall n, isEven n -> isEven (S (S n)).
adamc@142 31
adamc@148 32 (* begin thide *)
adamc@142 33 Ltac prove_even := repeat constructor.
adamc@148 34 (* end thide *)
adamc@142 35
adamc@142 36 Theorem even_256 : isEven 256.
adamc@142 37 prove_even.
adamc@142 38 Qed.
adamc@142 39
adamc@142 40 Print even_256.
adamc@221 41 (** %\vspace{-.15in}% [[
adamc@142 42 even_256 =
adamc@142 43 Even_SS
adamc@142 44 (Even_SS
adamc@142 45 (Even_SS
adamc@142 46 (Even_SS
adamc@142 47 ]]
adamc@142 48
adam@360 49 %\noindent%...and so on. This procedure always works (at least on machines with infinite resources), but it has a serious drawback, which we see when we print the proof it generates that 256 is even. The final proof term has length super-linear in the input value. Coq's implicit arguments mechanism is hiding the values given for parameter [n] of [Even_SS], which is why the proof term only appears linear here. Also, proof terms are represented internally as syntax trees, with opportunity for sharing of node representations, but in this chapter we will measure proof term size as simple textual length or as the number of nodes in the term's syntax tree, two measures that are approximately equivalent. Sometimes apparently large proof terms have enough internal sharing that they take up less memory than we expect, but one avoids having to reason about such sharing by ensuring that the size of a sharing-free version of a term is low enough.
adam@360 50
adam@360 51 Superlinear evenness proof terms seem like a shame, since we could write a trivial and trustworthy program to verify evenness of constants. The proof checker could simply call our program where needed.
adamc@142 52
adamc@142 53 It is also unfortunate not to have static typing guarantees that our tactic always behaves appropriately. Other invocations of similar tactics might fail with dynamic type errors, and we would not know about the bugs behind these errors until we happened to attempt to prove complex enough goals.
adamc@142 54
adamc@142 55 The techniques of proof by reflection address both complaints. We will be able to write proofs like this with constant size overhead beyond the size of the input, and we will do it with verified decision procedures written in Gallina.
adamc@142 56
adamc@221 57 For this example, we begin by using a type from the [MoreSpecif] module (included in the book source) to write a certified evenness checker. *)
adamc@142 58
adam@432 59 (* begin hide *)
adam@437 60 (* begin thide *)
adam@432 61 Definition paartial := partial.
adam@437 62 (* end thide *)
adam@432 63 (* end hide *)
adam@432 64
adamc@142 65 Print partial.
adamc@221 66 (** %\vspace{-.15in}% [[
adamc@221 67 Inductive partial (P : Prop) : Set := Proved : P -> [P] | Uncertain : [P]
adamc@221 68 ]]
adamc@142 69
adamc@221 70 A [partial P] value is an optional proof of [P]. The notation [[P]] stands for [partial P]. *)
adamc@142 71
adamc@221 72 Local Open Scope partial_scope.
adamc@142 73
adamc@142 74 (** We bring into scope some notations for the [partial] type. These overlap with some of the notations we have seen previously for specification types, so they were placed in a separate scope that needs separate opening. *)
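
(** For reference, the notations used below behave roughly as follows; this is a paraphrase of the definitions in [MoreSpecif], not the literal source.
[[
Notation "'Yes'" := (Proved _) : partial_scope.
Notation "'No'" := (Uncertain _) : partial_scope.
Notation "'Reduce' v" := (if v then Yes else No) (at level 50) : partial_scope.
]]
That is, [Yes] and [No] inject into [partial], leaving any needed proof as an obligation for [refine] to turn into a subgoal, while [Reduce] converts a decision about one proposition into a decision about another. The [&&] and [||] notations used later in the chapter are the analogous "and" and "or" combinations of two such decisions. *)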
adamc@142 75
adamc@148 76 (* begin thide *)
adam@297 77 Definition check_even : forall n : nat, [isEven n].
adamc@142 78 Hint Constructors isEven.
adamc@142 79
adamc@142 80 refine (fix F (n : nat) : [isEven n] :=
adamc@221 81 match n with
adamc@142 82 | 0 => Yes
adamc@142 83 | 1 => No
adamc@142 84 | S (S n') => Reduce (F n')
adamc@142 85 end); auto.
adamc@142 86 Defined.
adamc@142 87
adam@461 88 (** The function [check_even] may be viewed as a _verified decision procedure_, because its type guarantees that it never returns %\coqdocnotation{%#<tt>#Yes#</tt>#%}% for inputs that are not even.
adam@360 89
adam@360 90 Now we can use dependent pattern-matching to write a function that performs a surprising feat. When given a [partial P], this function [partialOut] returns a proof of [P] if the [partial] value contains a proof, and it returns a (useless) proof of [True] otherwise. From the standpoint of ML and Haskell programming, it seems impossible to write such a type, but it is trivial with a [return] annotation. *)
adamc@142 91
adamc@142 92 Definition partialOut (P : Prop) (x : [P]) :=
adamc@142 93 match x return (match x with
adamc@142 94 | Proved _ => P
adamc@142 95 | Uncertain => True
adamc@142 96 end) with
adamc@142 97 | Proved pf => pf
adamc@142 98 | Uncertain => I
adamc@142 99 end.
adamc@142 100
adam@289 101 (** It may seem strange to define a function like this. However, it turns out to be very useful in writing a reflective version of our earlier [prove_even] tactic: *)
adamc@142 102
adamc@142 103 Ltac prove_even_reflective :=
adamc@142 104 match goal with
adamc@142 105 | [ |- isEven ?N] => exact (partialOut (check_even N))
adamc@142 106 end.
adamc@148 107 (* end thide *)
adamc@142 108
adam@432 109 (** We identify which natural number we are considering, and we "prove" its evenness by pulling the proof out of the appropriate [check_even] call. Recall that the %\index{tactics!exact}%[exact] tactic proves a proposition [P] when given a proof term of precisely type [P]. *)
adamc@142 110
adamc@142 111 Theorem even_256' : isEven 256.
adamc@142 112 prove_even_reflective.
adamc@142 113 Qed.
adamc@142 114
adamc@142 115 Print even_256'.
adamc@221 116 (** %\vspace{-.15in}% [[
adamc@142 117 even_256' = partialOut (check_even 256)
adamc@142 118 : isEven 256
adamc@142 119 ]]
adamc@142 120
adam@289 121 We can see a constant wrapper around the object of the proof. For any even number, this form of proof will suffice. The size of the proof term is now linear in the number being checked, containing two repetitions of the unary form of that number, one of which is hidden above within the implicit argument to [partialOut].
adam@289 122
adam@289 123 What happens if we try the tactic with an odd number? *)
adamc@142 124
adamc@142 125 Theorem even_255 : isEven 255.
adam@360 126 (** %\vspace{-.275in}%[[
adamc@142 127 prove_even_reflective.
adam@360 128 ]]
adamc@142 129
adam@360 130 <<
adamc@142 131 User error: No matching clauses for match goal
adam@360 132 >>
adamc@142 133
adamc@142 134 Thankfully, the tactic fails. To see more precisely what goes wrong, we can manually run the body of the [match].
adamc@142 135
adam@360 136 %\vspace{-.15in}%[[
adamc@142 137 exact (partialOut (check_even 255)).
adam@360 138 ]]
adamc@142 139
adam@360 140 <<
adamc@142 141 Error: The term "partialOut (check_even 255)" has type
adamc@142 142 "match check_even 255 with
adamc@142 143 | Yes => isEven 255
adamc@142 144 | No => True
adamc@142 145 end" while it is expected to have type "isEven 255"
adam@360 146 >>
adamc@142 147
adam@461 148 As usual, the type checker performs no reductions to simplify error messages. If we reduced the first term ourselves, we would see that [check_even 255] reduces to a %\coqdocnotation{%#<tt>#No#</tt>#%}%, so that the first term is equivalent to [True], which certainly does not unify with [isEven 255]. *)
adamc@221 149
adamc@142 150 Abort.
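
(** As a sanity check (an extra illustration, not in the original text), we can compute the decision procedure directly and observe that it returns the uncertain case:
[[
Eval compute in check_even 255.
(* The result is built with [Uncertain] (the [No] case), which is why
   [partialOut] applied to it can produce only a proof of [True]. *)
]]
*)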
adamc@143 151
adam@362 152 (** Our tactic [prove_even_reflective] is reflective because it performs a proof search process (a trivial one, in this case) wholly within Gallina, where the only use of Ltac is to translate a goal into an appropriate use of [check_even]. *)
adamc@143 153
adam@360 154
adam@360 155 (** * Reifying the Syntax of a Trivial Tautology Language *)
adamc@143 156
adamc@143 157 (** We might also like to have reflective proofs of trivial tautologies like this one: *)
adamc@143 158
adamc@143 159 Theorem true_galore : (True /\ True) -> (True \/ (True /\ (True -> True))).
adamc@143 160 tauto.
adamc@143 161 Qed.
adamc@143 162
adam@432 163 (* begin hide *)
adam@437 164 (* begin thide *)
adam@432 165 Definition tg := (and_ind, or_introl).
adam@437 166 (* end thide *)
adam@432 167 (* end hide *)
adam@432 168
adamc@143 169 Print true_galore.
adamc@221 170 (** %\vspace{-.15in}% [[
adamc@143 171 true_galore =
adamc@143 172 fun H : True /\ True =>
adamc@143 173 and_ind (fun _ _ : True => or_introl (True /\ (True -> True)) I) H
adamc@143 174 : True /\ True -> True \/ True /\ (True -> True)
adamc@143 175 ]]
adamc@143 176
adamc@143 177 As we might expect, the proof that [tauto] builds contains explicit applications of natural deduction rules. For large formulas, this can add a linear amount of proof size overhead, beyond the size of the input.
adamc@143 178
adam@432 179 To write a reflective procedure for this class of goals, we will need to get into the actual "reflection" part of "proof by reflection." It is impossible to case-analyze a [Prop] in any way in Gallina. We must%\index{reification}% _reify_ [Prop] into some type that we _can_ analyze. This inductive type is a good candidate: *)
adamc@143 180
adamc@148 181 (* begin thide *)
adamc@143 182 Inductive taut : Set :=
adamc@143 183 | TautTrue : taut
adamc@143 184 | TautAnd : taut -> taut -> taut
adamc@143 185 | TautOr : taut -> taut -> taut
adamc@143 186 | TautImp : taut -> taut -> taut.
adamc@143 187
adam@432 188 (** We write a recursive function to _reflect_ this syntax back to [Prop]. Such functions are also called%\index{interpretation function}% _interpretation functions_, and we have used them in previous examples to give semantics to small programming languages. *)
adamc@143 189
adamc@143 190 Fixpoint tautDenote (t : taut) : Prop :=
adamc@143 191 match t with
adamc@143 192 | TautTrue => True
adamc@143 193 | TautAnd t1 t2 => tautDenote t1 /\ tautDenote t2
adamc@143 194 | TautOr t1 t2 => tautDenote t1 \/ tautDenote t2
adamc@143 195 | TautImp t1 t2 => tautDenote t1 -> tautDenote t2
adamc@143 196 end.
adamc@143 197
adamc@143 198 (** It is easy to prove that every formula in the range of [tautDenote] is true. *)
adamc@143 199
adamc@143 200 Theorem tautTrue : forall t, tautDenote t.
adamc@143 201 induction t; crush.
adamc@143 202 Qed.
adamc@143 203
adam@360 204 (** To use [tautTrue] to prove particular formulas, we need to implement the syntax reification process. A recursive Ltac function does the job. *)
adamc@143 205
adam@360 206 Ltac tautReify P :=
adamc@143 207 match P with
adamc@143 208 | True => TautTrue
adamc@143 209 | ?P1 /\ ?P2 =>
adam@360 210 let t1 := tautReify P1 in
adam@360 211 let t2 := tautReify P2 in
adamc@143 212 constr:(TautAnd t1 t2)
adamc@143 213 | ?P1 \/ ?P2 =>
adam@360 214 let t1 := tautReify P1 in
adam@360 215 let t2 := tautReify P2 in
adamc@143 216 constr:(TautOr t1 t2)
adamc@143 217 | ?P1 -> ?P2 =>
adam@360 218 let t1 := tautReify P1 in
adam@360 219 let t2 := tautReify P2 in
adamc@143 220 constr:(TautImp t1 t2)
adamc@143 221 end.
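
(** Before plugging [tautReify] into a tactic, we can test it on its own. (This quick check is an illustration added here, not part of the original development.)
[[
Goal True /\ (True -> True).
  match goal with
    | [ |- ?P ] => let t := tautReify P in pose t
  end.
  (* A local definition equal to [TautAnd TautTrue (TautImp TautTrue TautTrue)]
     now appears in the context. *)
Abort.
]]
*)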
adamc@143 222
adam@461 223 (** With [tautReify] available, it is easy to finish our reflective tactic. We look at the goal formula, reify it, and apply [tautTrue] to the reified formula. *)
adamc@143 224
adamc@143 225 Ltac obvious :=
adamc@143 226 match goal with
adamc@143 227 | [ |- ?P ] =>
adam@360 228 let t := tautReify P in
adamc@143 229 exact (tautTrue t)
adamc@143 230 end.
adamc@143 231
adamc@143 232 (** We can verify that [obvious] solves our original example, with a proof term that does not mention details of the proof. *)
adamc@148 233 (* end thide *)
adamc@143 234
adamc@143 235 Theorem true_galore' : (True /\ True) -> (True \/ (True /\ (True -> True))).
adamc@143 236 obvious.
adamc@143 237 Qed.
adamc@143 238
adamc@143 239 Print true_galore'.
adamc@221 240 (** %\vspace{-.15in}% [[
adamc@143 241 true_galore' =
adamc@143 242 tautTrue
adamc@143 243 (TautImp (TautAnd TautTrue TautTrue)
adamc@143 244 (TautOr TautTrue (TautAnd TautTrue (TautImp TautTrue TautTrue))))
adamc@143 245 : True /\ True -> True \/ True /\ (True -> True)
adamc@143 246 ]]
adamc@143 247
adam@432 248 It is worth considering how the reflective tactic improves on a pure-Ltac implementation. The formula reification process is just as ad-hoc as before, so we gain little there. In general, proofs will be more complicated than formula translation, and the "generic proof rule" that we apply here _is_ on much better formal footing than a recursive Ltac function. The dependent type of the proof guarantees that it "works" on any input formula. This is all in addition to the proof-size improvement that we have already seen.
adam@360 249
adam@360 250 It may also be worth pointing out that our previous example of evenness testing used a function [partialOut] for sound handling of input goals that the verified decision procedure fails to prove. Here, we prove that our procedure [tautTrue] (recall that an inductive proof may be viewed as a recursive procedure) is able to prove any goal representable in [taut], so no extra step is necessary. *)
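
(** One limitation is worth noting before moving on (this aside is an addition, not part of the original text): [obvious] only handles goals built from [True], [/\], [\/], and [->]. On any other atomic formula, [tautReify] finds no matching case, so the tactic fails.
[[
Theorem not_so_obvious : True /\ 1 = 1.
  obvious.
  (* The tactic fails, since [tautReify] has no case for the atom [1 = 1].
     Handling such atoms is exactly what the smarter tautology solver later
     in this chapter adds. *)
Abort.
]]
*)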
adamc@144 251
adamc@144 252
adamc@145 253 (** * A Monoid Expression Simplifier *)
adamc@145 254
adam@432 255 (** Proof by reflection does not require encoding of all of the syntax in a goal. We can insert "variables" in our syntax types to allow injection of arbitrary pieces, even if we cannot apply specialized reasoning to them. In this section, we explore that possibility by writing a tactic for normalizing monoid equations. *)
adamc@146 256
adamc@145 257 Section monoid.
adamc@145 258 Variable A : Set.
adamc@145 259 Variable e : A.
adamc@145 260 Variable f : A -> A -> A.
adamc@145 261
adamc@145 262 Infix "+" := f.
adamc@145 263
adamc@145 264 Hypothesis assoc : forall a b c, (a + b) + c = a + (b + c).
adamc@145 265 Hypothesis identl : forall a, e + a = a.
adamc@145 266 Hypothesis identr : forall a, a + e = a.
adamc@145 267
adamc@146 268 (** We add variables and hypotheses characterizing an arbitrary instance of the algebraic structure of monoids. We have an associative binary operator and an identity element for it.
adamc@146 269
adam@432 270 It is easy to define an expression tree type for monoid expressions. A [Var] constructor is a "catch-all" case for subexpressions that we cannot model. These subexpressions could be actual Gallina variables, or they could just use functions that our tactic is unable to understand. *)
adamc@146 271
adamc@148 272 (* begin thide *)
adamc@145 273 Inductive mexp : Set :=
adamc@145 274 | Ident : mexp
adamc@145 275 | Var : A -> mexp
adamc@145 276 | Op : mexp -> mexp -> mexp.
adamc@145 277
adam@360 278 (** Next, we write an interpretation function. *)
adamc@146 279
adamc@145 280 Fixpoint mdenote (me : mexp) : A :=
adamc@145 281 match me with
adamc@145 282 | Ident => e
adamc@145 283 | Var v => v
adamc@145 284 | Op me1 me2 => mdenote me1 + mdenote me2
adamc@145 285 end.
adamc@145 286
adamc@146 287 (** We will normalize expressions by flattening them into lists, via associativity, so it is helpful to have a denotation function for lists of monoid values. *)
adamc@146 288
adamc@145 289 Fixpoint mldenote (ls : list A) : A :=
adamc@145 290 match ls with
adamc@145 291 | nil => e
adamc@145 292 | x :: ls' => x + mldenote ls'
adamc@145 293 end.
adamc@145 294
adamc@146 295 (** The flattening function itself is easy to implement. *)
adamc@146 296
adamc@145 297 Fixpoint flatten (me : mexp) : list A :=
adamc@145 298 match me with
adamc@145 299 | Ident => nil
adamc@145 300 | Var x => x :: nil
adamc@145 301 | Op me1 me2 => flatten me1 ++ flatten me2
adamc@145 302 end.
adamc@145 303
adam@461 304 (** This function has a straightforward correctness proof in terms of our [denote] functions. *)
adamc@146 305
adamc@146 306 Lemma flatten_correct' : forall ml2 ml1,
adamc@146 307 mldenote ml1 + mldenote ml2 = mldenote (ml1 ++ ml2).
adamc@145 308 induction ml1; crush.
adamc@145 309 Qed.
adamc@145 310
adamc@145 311 Theorem flatten_correct : forall me, mdenote me = mldenote (flatten me).
adamc@145 312 Hint Resolve flatten_correct'.
adamc@145 313
adamc@145 314 induction me; crush.
adamc@145 315 Qed.
adamc@145 316
adamc@146 317 (** Now it is easy to prove a theorem that will be the main tool behind our simplification tactic. *)
adamc@146 318
adamc@146 319 Theorem monoid_reflect : forall me1 me2,
adamc@146 320 mldenote (flatten me1) = mldenote (flatten me2)
adamc@146 321 -> mdenote me1 = mdenote me2.
adamc@145 322 intros; repeat rewrite flatten_correct; assumption.
adamc@145 323 Qed.
adamc@145 324
adam@360 325 (** We implement reification into the [mexp] type. *)
adamc@146 326
adam@360 327 Ltac reify me :=
adamc@146 328 match me with
adamc@145 329 | e => Ident
adamc@146 330 | ?me1 + ?me2 =>
adam@360 331 let r1 := reify me1 in
adam@360 332 let r2 := reify me2 in
adamc@145 333 constr:(Op r1 r2)
adamc@146 334 | _ => constr:(Var me)
adamc@145 335 end.
adamc@145 336
adam@360 337 (** The final [monoid] tactic works on goals that equate two monoid terms. We reify each and change the goal to refer to the reified versions, finishing off by applying [monoid_reflect] and simplifying uses of [mldenote]. Recall that the %\index{tactics!change}%[change] tactic replaces a conclusion formula with another that is definitionally equal to it. *)
adamc@146 338
adamc@145 339 Ltac monoid :=
adamc@145 340 match goal with
adamc@146 341 | [ |- ?me1 = ?me2 ] =>
adam@360 342 let r1 := reify me1 in
adam@360 343 let r2 := reify me2 in
adamc@145 344 change (mdenote r1 = mdenote r2);
adam@360 345 apply monoid_reflect; simpl
adamc@145 346 end.
adamc@145 347
adamc@146 348 (** We can make short work of theorems like this one: *)
adamc@146 349
adamc@148 350 (* end thide *)
adamc@148 351
adamc@145 352 Theorem t1 : forall a b c d, a + b + c + d = a + (b + c) + d.
adamc@146 353 intros; monoid.
adamc@146 354 (** [[
adamc@146 355 ============================
adamc@146 356 a + (b + (c + (d + e))) = a + (b + (c + (d + e)))
adamc@221 357
adamc@146 358 ]]
adamc@146 359
adam@360 360 Our tactic has canonicalized both sides of the equality, such that we can finish the proof by reflexivity. *)
adamc@146 361
adamc@145 362 reflexivity.
adamc@145 363 Qed.
adamc@146 364
adamc@146 365 (** It is interesting to look at the form of the proof. *)
adamc@146 366
adamc@146 367 Print t1.
adamc@221 368 (** %\vspace{-.15in}% [[
adamc@146 369 t1 =
adamc@146 370 fun a b c d : A =>
adamc@146 371 monoid_reflect (Op (Op (Op (Var a) (Var b)) (Var c)) (Var d))
adamc@146 372 (Op (Op (Var a) (Op (Var b) (Var c))) (Var d))
adam@426 373 (eq_refl (a + (b + (c + (d + e)))))
adamc@146 374 : forall a b c d : A, a + b + c + d = a + (b + c) + d
adamc@146 375 ]]
adamc@146 376
adam@360 377 The proof term contains only restatements of the equality operands in reified form, followed by a use of reflexivity on the shared canonical form. *)
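
(** As a further check (an extra example, not in the original text), occurrences of the identity [e] are normalized away as well, since [reify] maps them to [Ident] and [flatten] drops them:
[[
Theorem t2 : forall a b, a + e + b = a + b.
  intros; monoid.
  (* Both sides canonicalize to [a + (b + e)]. *)
  reflexivity.
Qed.
]]
*)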
adamc@221 378
adamc@145 379 End monoid.
adamc@145 380
adam@360 381 (** Extensions of this basic approach are used in the implementations of the %\index{tactics!ring}%[ring] and %\index{tactics!field}%[field] tactics that come packaged with Coq. *)
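
(** For instance, [ring] canonicalizes polynomial expressions over (semi)ring structures such as [nat]. A small usage sketch follows; this is standard-library functionality rather than part of this chapter's development, and the exact [Require]s may vary across Coq versions.
[[
Require Import Arith Ring.

Theorem ring_example : forall x y : nat,
  (x + y) * (x + y) = x * x + 2 * x * y + y * y.
  intros; ring.
Qed.
]]
*)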
adamc@146 382
adamc@145 383
adamc@144 384 (** * A Smarter Tautology Solver *)
adamc@144 385
adam@412 386 (** Now we are ready to revisit our earlier tautology solver example. We want to broaden the scope of the tactic to include formulas whose truth is not syntactically apparent. We will want to allow injection of arbitrary formulas, like we allowed arbitrary monoid expressions in the last example. Since we are working in a richer theory, it is important to be able to use equalities between different injected formulas. For instance, we cannot prove [P -> P] by translating the formula into a value like [Imp (Var P) (Var P)], because a Gallina function has no way of comparing the two [P]s for equality.
adamc@147 387
adam@360 388 To arrive at a nice implementation satisfying these criteria, we introduce the %\index{tactics!quote}%[quote] tactic and its associated library. *)
adamc@147 389
adamc@144 390 Require Import Quote.
adamc@144 391
adamc@148 392 (* begin thide *)
adamc@144 393 Inductive formula : Set :=
adamc@144 394 | Atomic : index -> formula
adamc@144 395 | Truth : formula
adamc@144 396 | Falsehood : formula
adamc@144 397 | And : formula -> formula -> formula
adamc@144 398 | Or : formula -> formula -> formula
adamc@144 399 | Imp : formula -> formula -> formula.
adam@362 400 (* end thide *)
adamc@144 401
adam@360 402 (** The type %\index{Gallina terms!index}%[index] comes from the [Quote] library and represents a countable variable type. The rest of [formula]'s definition should be old hat by now.
adamc@147 403
adam@484 404 The [quote] tactic will implement injection from [Prop] into [formula] for us, but it is not quite as smart as we might like. In particular, it wants to treat function types specially, so it gets confused if function types are part of the structure we want to encode syntactically. To trick [quote] into not noticing our uses of function types to express logical implication, we will need to declare a wrapper definition for implication, as we did in the last chapter. *)
adamc@144 405
adamc@144 406 Definition imp (P1 P2 : Prop) := P1 -> P2.
adamc@144 407 Infix "-->" := imp (no associativity, at level 95).
adamc@144 408
adamc@147 409 (** Now we can define our denotation function. *)
adamc@147 410
adamc@147 411 Definition asgn := varmap Prop.
adamc@147 412
adam@362 413 (* begin thide *)
adamc@144 414 Fixpoint formulaDenote (atomics : asgn) (f : formula) : Prop :=
adamc@144 415 match f with
adamc@144 416 | Atomic v => varmap_find False v atomics
adamc@144 417 | Truth => True
adamc@144 418 | Falsehood => False
adamc@144 419 | And f1 f2 => formulaDenote atomics f1 /\ formulaDenote atomics f2
adamc@144 420 | Or f1 f2 => formulaDenote atomics f1 \/ formulaDenote atomics f2
adamc@144 421 | Imp f1 f2 => formulaDenote atomics f1 --> formulaDenote atomics f2
adamc@144 422 end.
adam@362 423 (* end thide *)
adamc@144 424
adam@461 425 (** The %\index{Gallina terms!varmap}%[varmap] type family implements maps from [index] values. In this case, we define an assignment as a map from variables to [Prop]s. Our interpretation function [formulaDenote] works with an assignment, and we use the [varmap_find] function to consult the assignment in the [Atomic] case. The first argument to [varmap_find] is a default value, in case the variable is not found. *)
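
(** For reference, the relevant [Quote] definitions look approximately like this (a paraphrase, not the literal library source):
[[
Inductive index : Set :=
| Left_idx : index -> index
| Right_idx : index -> index
| End_idx : index.

Inductive varmap (A : Type) : Type :=
| Empty_vm : varmap A
| Node_vm : A -> varmap A -> varmap A -> varmap A.
]]
An [index] is a path into such a binary tree: [End_idx] selects the value at the current node, while [Left_idx] and [Right_idx] descend into subtrees. *)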
adamc@147 426
adamc@144 427 Section my_tauto.
adamc@144 428 Variable atomics : asgn.
adamc@144 429
adamc@144 430 Definition holds (v : index) := varmap_find False v atomics.
adamc@144 431
adamc@147 432 (** We define some shorthand for a particular variable being true, and now we are ready to define some helpful functions based on the [ListSet] module of the standard library, which (unsurprisingly) presents a view of lists as sets. *)
adamc@147 433
adamc@144 434 Require Import ListSet.
adamc@144 435
adamc@144 436 Definition index_eq : forall x y : index, {x = y} + {x <> y}.
adamc@144 437 decide equality.
adamc@144 438 Defined.
adamc@144 439
adamc@144 440 Definition add (s : set index) (v : index) := set_add index_eq v s.
adamc@147 441
adamc@221 442 Definition In_dec : forall v (s : set index), {In v s} + {~ In v s}.
adamc@221 443 Local Open Scope specif_scope.
adamc@144 444
adamc@221 445 intro; refine (fix F (s : set index) : {In v s} + {~ In v s} :=
adamc@221 446 match s with
adamc@144 447 | nil => No
adamc@144 448 | v' :: s' => index_eq v' v || F s'
adamc@144 449 end); crush.
adamc@144 450 Defined.
adamc@144 451
adamc@147 452 (** We define what it means for all members of an index set to represent true propositions, and we prove some lemmas about this notion. *)
adamc@147 453
adamc@144 454 Fixpoint allTrue (s : set index) : Prop :=
adamc@144 455 match s with
adamc@144 456 | nil => True
adamc@144 457 | v :: s' => holds v /\ allTrue s'
adamc@144 458 end.
adamc@144 459
adamc@144 460 Theorem allTrue_add : forall v s,
adamc@144 461 allTrue s
adamc@144 462 -> holds v
adamc@144 463 -> allTrue (add s v).
adamc@144 464 induction s; crush;
adamc@144 465 match goal with
adamc@144 466 | [ |- context[if ?E then _ else _] ] => destruct E
adamc@144 467 end; crush.
adamc@144 468 Qed.
adamc@144 469
adamc@144 470 Theorem allTrue_In : forall v s,
adamc@144 471 allTrue s
adamc@144 472 -> set_In v s
adamc@144 473 -> varmap_find False v atomics.
adamc@144 474 induction s; crush.
adamc@144 475 Qed.
adamc@144 476
adamc@144 477 Hint Resolve allTrue_add allTrue_In.
adamc@144 478
adamc@221 479 Local Open Scope partial_scope.
adamc@144 480
adam@484 481 (** Now we can write a function [forward] that implements deconstruction of hypotheses, expanding a compound formula into a set of sets of atomic formulas covering all possible cases introduced by uses of [Or]. To handle consideration of multiple cases, the function takes in a continuation argument, which will be called once for each case.
adam@484 482
adam@484 483 The [forward] function has a dependent type, in the style of Chapter 6, guaranteeing correctness. The arguments to [forward] are a goal formula [f], a set [known] of atomic formulas that we may assume are true, a hypothesis formula [hyp], and a success continuation [cont] that we call when we have extended [known] to hold new truths implied by [hyp]. *)
adamc@147 484
adam@297 485 Definition forward : forall (f : formula) (known : set index) (hyp : formula)
adam@297 486 (cont : forall known', [allTrue known' -> formulaDenote atomics f]),
adam@297 487 [allTrue known -> formulaDenote atomics hyp -> formulaDenote atomics f].
adamc@144 488 refine (fix F (f : formula) (known : set index) (hyp : formula)
adamc@221 489 (cont : forall known', [allTrue known' -> formulaDenote atomics f])
adamc@144 490 : [allTrue known -> formulaDenote atomics hyp -> formulaDenote atomics f] :=
adamc@221 491 match hyp with
adamc@144 492 | Atomic v => Reduce (cont (add known v))
adamc@144 493 | Truth => Reduce (cont known)
adamc@144 494 | Falsehood => Yes
adamc@144 495 | And h1 h2 =>
adamc@144 496 Reduce (F (Imp h2 f) known h1 (fun known' =>
adamc@144 497 Reduce (F f known' h2 cont)))
adamc@144 498 | Or h1 h2 => F f known h1 cont && F f known h2 cont
adamc@144 499 | Imp _ _ => Reduce (cont known)
adamc@144 500 end); crush.
adamc@144 501 Defined.
adamc@144 502
adamc@147 503 (** A [backward] function implements analysis of the final goal. It calls [forward] to handle implications. *)
adamc@147 504
adam@362 505 (* begin thide *)
adam@297 506 Definition backward : forall (known : set index) (f : formula),
adam@297 507 [allTrue known -> formulaDenote atomics f].
adamc@221 508 refine (fix F (known : set index) (f : formula)
adamc@221 509 : [allTrue known -> formulaDenote atomics f] :=
adamc@221 510 match f with
adamc@144 511 | Atomic v => Reduce (In_dec v known)
adamc@144 512 | Truth => Yes
adamc@144 513 | Falsehood => No
adamc@144 514 | And f1 f2 => F known f1 && F known f2
adamc@144 515 | Or f1 f2 => F known f1 || F known f2
adamc@144 516 | Imp f1 f2 => forward f2 known f1 (fun known' => F known' f2)
adamc@144 517 end); crush; eauto.
adamc@144 518 Defined.
adam@362 519 (* end thide *)
adamc@144 520
adamc@147 521 (** A simple wrapper around [backward] gives us the usual type of a partial decision procedure. *)
adamc@147 522
adam@297 523 Definition my_tauto : forall f : formula, [formulaDenote atomics f].
adam@362 524 (* begin thide *)
adamc@144 525 intro; refine (Reduce (backward nil f)); crush.
adamc@144 526 Defined.
adam@362 527 (* end thide *)
adamc@144 528 End my_tauto.
adamc@144 529
adam@360 530 (** Our final tactic implementation is now fairly straightforward. First, we [intro] all quantifiers that do not bind [Prop]s. Then we call the [quote] tactic, which implements the reification for us. Finally, we are able to construct an exact proof via [partialOut] and the [my_tauto] Gallina function. *)
adamc@147 531
adamc@144 532 Ltac my_tauto :=
adamc@144 533 repeat match goal with
adamc@144 534 | [ |- forall x : ?P, _ ] =>
adamc@144 535 match type of P with
adamc@144 536 | Prop => fail 1
adamc@144 537 | _ => intro
adamc@144 538 end
adamc@144 539 end;
adamc@144 540 quote formulaDenote;
adamc@144 541 match goal with
adamc@144 542 | [ |- formulaDenote ?m ?f ] => exact (partialOut (my_tauto m f))
adamc@144 543 end.
adamc@148 544 (* end thide *)
adamc@144 545
adamc@147 546 (** A few examples demonstrate how the tactic works. *)
adamc@147 547
adamc@144 548 Theorem mt1 : True.
adamc@144 549 my_tauto.
adamc@144 550 Qed.
adamc@144 551
adamc@144 552 Print mt1.
adamc@221 553 (** %\vspace{-.15in}% [[
adamc@147 554 mt1 = partialOut (my_tauto (Empty_vm Prop) Truth)
adamc@147 555 : True
adamc@147 556 ]]
adamc@147 557
adamc@147 558 We see [my_tauto] applied with an empty [varmap], since every subformula is handled by [formulaDenote]. *)
adamc@144 559
adamc@144 560 Theorem mt2 : forall x y : nat, x = y --> x = y.
adamc@144 561 my_tauto.
adamc@144 562 Qed.
adamc@144 563
adam@432 564 (* begin hide *)
adam@437 565 (* begin thide *)
adam@432 566 Definition nvm := (Node_vm, Empty_vm, End_idx, Left_idx, Right_idx).
adam@437 567 (* end thide *)
adam@432 568 (* end hide *)
adam@432 569
adamc@144 570 Print mt2.
adamc@221 571 (** %\vspace{-.15in}% [[
adamc@147 572 mt2 =
adamc@147 573 fun x y : nat =>
adamc@147 574 partialOut
adamc@147 575 (my_tauto (Node_vm (x = y) (Empty_vm Prop) (Empty_vm Prop))
adamc@147 576 (Imp (Atomic End_idx) (Atomic End_idx)))
adamc@147 577 : forall x y : nat, x = y --> x = y
adamc@147 578 ]]
adamc@147 579
adamc@147 580 Crucially, both instances of [x = y] are represented with the same index, [End_idx]. The value of this index only needs to appear once in the [varmap], whose form reveals that [varmap]s are represented as binary trees, where [index] values denote paths from tree roots to leaves. *)
adamc@144 581
adamc@144 582 Theorem mt3 : forall x y z,
adamc@144 583 (x < y /\ y > z) \/ (y > z /\ x < S y)
adamc@144 584 --> y > z /\ (x < y \/ x < S y).
adamc@144 585 my_tauto.
adamc@144 586 Qed.
adamc@144 587
adamc@144 588 Print mt3.
adamc@221 589 (** %\vspace{-.15in}% [[
adamc@147 590 fun x y z : nat =>
adamc@147 591 partialOut
adamc@147 592 (my_tauto
adamc@147 593 (Node_vm (x < S y) (Node_vm (x < y) (Empty_vm Prop) (Empty_vm Prop))
adamc@147 594 (Node_vm (y > z) (Empty_vm Prop) (Empty_vm Prop)))
adamc@147 595 (Imp
adamc@147 596 (Or (And (Atomic (Left_idx End_idx)) (Atomic (Right_idx End_idx)))
adamc@147 597 (And (Atomic (Right_idx End_idx)) (Atomic End_idx)))
adamc@147 598 (And (Atomic (Right_idx End_idx))
adamc@147 599 (Or (Atomic (Left_idx End_idx)) (Atomic End_idx)))))
adamc@147 600 : forall x y z : nat,
adamc@147 601 x < y /\ y > z \/ y > z /\ x < S y --> y > z /\ (x < y \/ x < S y)
adamc@147 602 ]]
adamc@147 603
adamc@147 604 Our goal contained three distinct atomic formulas, and we see that a three-element [varmap] is generated.
adamc@147 605
adamc@147 606 It can be interesting to observe differences between the level of repetition in proof terms generated by [my_tauto] and [tauto] for especially trivial theorems. *)
adamc@144 607
adamc@144 608 Theorem mt4 : True /\ True /\ True /\ True /\ True /\ True /\ False --> False.
adamc@144 609 my_tauto.
adamc@144 610 Qed.
adamc@144 611
adamc@144 612 Print mt4.
adamc@221 613 (** %\vspace{-.15in}% [[
adamc@147 614 mt4 =
adamc@147 615 partialOut
adamc@147 616 (my_tauto (Empty_vm Prop)
adamc@147 617 (Imp
adamc@147 618 (And Truth
adamc@147 619 (And Truth
adamc@147 620 (And Truth (And Truth (And Truth (And Truth Falsehood))))))
adamc@147 621 Falsehood))
adamc@147 622 : True /\ True /\ True /\ True /\ True /\ True /\ False --> False
adam@302 623 ]]
adam@302 624 *)
adamc@144 625
adamc@144 626 Theorem mt4' : True /\ True /\ True /\ True /\ True /\ True /\ False -> False.
adamc@144 627 tauto.
adamc@144 628 Qed.
adamc@144 629
adam@432 630 (* begin hide *)
adam@437 631 (* begin thide *)
adam@432 632 Definition fi := False_ind.
adam@437 633 (* end thide *)
adam@432 634 (* end hide *)
adam@432 635
adamc@144 636 Print mt4'.
adamc@221 637 (** %\vspace{-.15in}% [[
adamc@147 638 mt4' =
adamc@147 639 fun H : True /\ True /\ True /\ True /\ True /\ True /\ False =>
adamc@147 640 and_ind
adamc@147 641 (fun (_ : True) (H1 : True /\ True /\ True /\ True /\ True /\ False) =>
adamc@147 642 and_ind
adamc@147 643 (fun (_ : True) (H3 : True /\ True /\ True /\ True /\ False) =>
adamc@147 644 and_ind
adamc@147 645 (fun (_ : True) (H5 : True /\ True /\ True /\ False) =>
adamc@147 646 and_ind
adamc@147 647 (fun (_ : True) (H7 : True /\ True /\ False) =>
adamc@147 648 and_ind
adamc@147 649 (fun (_ : True) (H9 : True /\ False) =>
adamc@147 650 and_ind (fun (_ : True) (H11 : False) => False_ind False H11)
adamc@147 651 H9) H7) H5) H3) H1) H
adamc@147 652 : True /\ True /\ True /\ True /\ True /\ True /\ False -> False
adam@302 653 ]]
adam@360 654
adam@360 655 The traditional [tauto] tactic introduces a quadratic blow-up in the size of the proof term, whereas proofs produced by [my_tauto] always have linear size. *)
adamc@147 656
adam@361 657 (** ** Manual Reification of Terms with Variables *)
adam@361 658
adam@362 659 (* begin thide *)
adam@361 660 (** The action of the [quote] tactic above may seem like magic. Somehow it performs equality comparison between subterms of arbitrary types, so that these subterms may be represented with the same reified variable. While [quote] is implemented in OCaml, we can code the reification process completely in Ltac, as well. To make our job simpler, we will represent variables as [nat]s, indexing into a simple list of variable values that may be referenced.
adam@361 661
adam@485 662 Step one of the process is to crawl over a term, building a duplicate-free list of all values that appear in positions we will encode as variables. A useful helper function adds an element to a list, preventing duplicates. Note how we use Ltac pattern matching to implement an equality test on Gallina terms; this is simple syntactic equality, not even the richer definitional equality. We also represent lists as nested tuples, to allow different list elements to have different Gallina types. *)
adam@361 663
adam@361 664 Ltac inList x xs :=
adam@361 665 match xs with
adam@361 666 | tt => false
adam@361 667 | (x, _) => true
adam@361 668 | (_, ?xs') => inList x xs'
adam@361 669 end.
adam@361 670
adam@361 671 Ltac addToList x xs :=
adam@361 672 let b := inList x xs in
adam@361 673 match b with
adam@361 674 | true => xs
adam@361 675 | false => constr:(x, xs)
adam@361 676 end.
adam@361 677
adam@361 678 (** Now we can write our recursive function to calculate the list of variable values we will want to use to represent a term. *)
adam@361 679
adam@361 680 Ltac allVars xs e :=
adam@361 681 match e with
adam@361 682 | True => xs
adam@361 683 | False => xs
adam@361 684 | ?e1 /\ ?e2 =>
adam@361 685 let xs := allVars xs e1 in
adam@361 686 allVars xs e2
adam@361 687 | ?e1 \/ ?e2 =>
adam@361 688 let xs := allVars xs e1 in
adam@361 689 allVars xs e2
adam@361 690 | ?e1 -> ?e2 =>
adam@361 691 let xs := allVars xs e1 in
adam@361 692 allVars xs e2
adam@361 693 | _ => addToList e xs
adam@361 694 end.
adam@361 695
adam@361 696 (** We will also need a way to map a value to its position in a list. *)
adam@361 697
adam@361 698 Ltac lookup x xs :=
adam@361 699 match xs with
adam@361 700 | (x, _) => O
adam@361 701 | (_, ?xs') =>
adam@361 702 let n := lookup x xs' in
adam@361 703 constr:(S n)
adam@361 704 end.
adam@361 705
adam@461 706 (** The next building block is a procedure for reifying a term, given a list of all allowed variable values. We are free to make this procedure partial, where tactic failure may be triggered upon attempting to reify a term containing subterms not included in the list of variables. The type of the output term is a copy of [formula] where [index] is replaced by [nat], in the type of the constructor for atomic formulas. *)
adam@361 707
adam@361 708 Inductive formula' : Set :=
adam@361 709 | Atomic' : nat -> formula'
adam@361 710 | Truth' : formula'
adam@361 711 | Falsehood' : formula'
adam@361 712 | And' : formula' -> formula' -> formula'
adam@361 713 | Or' : formula' -> formula' -> formula'
adam@361 714 | Imp' : formula' -> formula' -> formula'.
adam@361 715
adam@361 716 (** Note that, when we write our own Ltac procedure, we can work directly with the normal [->] operator, rather than needing to introduce a wrapper for it. *)
adam@361 717
adam@361 718 Ltac reifyTerm xs e :=
adam@361 719 match e with
adam@432 720 | True => constr:(Truth')
adam@432 721 | False => constr:(Falsehood')
adam@361 722 | ?e1 /\ ?e2 =>
adam@361 723 let p1 := reifyTerm xs e1 in
adam@361 724 let p2 := reifyTerm xs e2 in
adam@361 725 constr:(And' p1 p2)
adam@361 726 | ?e1 \/ ?e2 =>
adam@361 727 let p1 := reifyTerm xs e1 in
adam@361 728 let p2 := reifyTerm xs e2 in
adam@361 729 constr:(Or' p1 p2)
adam@361 730 | ?e1 -> ?e2 =>
adam@361 731 let p1 := reifyTerm xs e1 in
adam@361 732 let p2 := reifyTerm xs e2 in
adam@361 733 constr:(Imp' p1 p2)
adam@361 734 | _ =>
adam@361 735 let n := lookup e xs in
adam@361 736 constr:(Atomic' n)
adam@361 737 end.
adam@361 738
adam@361 739 (** Finally, we bring all the pieces together. *)
adam@361 740
adam@361 741 Ltac reify :=
adam@361 742 match goal with
adam@361 743 | [ |- ?G ] => let xs := allVars tt G in
adam@361 744 let p := reifyTerm xs G in
adam@361 745 pose p
adam@361 746 end.
adam@361 747
adam@361 748 (** A quick test verifies that we are doing reification correctly. *)
adam@361 749
adam@361 750 Theorem mt3' : forall x y z,
adam@361 751 (x < y /\ y > z) \/ (y > z /\ x < S y)
adam@361 752 -> y > z /\ (x < y \/ x < S y).
adam@361 753 do 3 intro; reify.
adam@361 754
adam@361 755 (** Our simple tactic adds the translated term as a new variable:
adam@361 756 [[
adam@361 757 f := Imp'
adam@361 758 (Or' (And' (Atomic' 2) (Atomic' 1)) (And' (Atomic' 1) (Atomic' 0)))
adam@361 759 (And' (Atomic' 1) (Or' (Atomic' 2) (Atomic' 0))) : formula'
adam@361 760 ]]
adam@361 761 *)
adam@361 762 Abort.
adam@361 763
adam@361 764 (** More work would be needed to complete the reflective tactic, as we must connect our new syntax type with the real meanings of formulas, but the details are the same as in our prior implementation with [quote]. *)
adam@362 765 (* end thide *)
adam@378 766
adam@378 767
adam@378 768 (** * Building a Reification Tactic that Recurses Under Binders *)
adam@378 769
adam@378 770 (** All of our examples so far have stayed away from reifying the syntax of terms that use such features as quantifiers and [fun] function abstractions. Such cases are complicated by the fact that different subterms may be allowed to reference different sets of free variables. Some cleverness is needed to clear this hurdle, but a few simple patterns will suffice. Consider this example of a simple dependently typed term language, where a function abstraction body is represented conveniently with a Coq function. *)
adam@378 771
adam@378 772 Inductive type : Type :=
adam@378 773 | Nat : type
adam@378 774 | NatFunc : type -> type.
adam@378 775
adam@378 776 Inductive term : type -> Type :=
adam@378 777 | Const : nat -> term Nat
adam@378 778 | Plus : term Nat -> term Nat -> term Nat
adam@378 779 | Abs : forall t, (nat -> term t) -> term (NatFunc t).
adam@378 780
adam@378 781 Fixpoint typeDenote (t : type) : Type :=
adam@378 782 match t with
adam@378 783 | Nat => nat
adam@378 784 | NatFunc t => nat -> typeDenote t
adam@378 785 end.
adam@378 786
adam@378 787 Fixpoint termDenote t (e : term t) : typeDenote t :=
adam@378 788 match e with
adam@378 789 | Const n => n
adam@378 790 | Plus e1 e2 => termDenote e1 + termDenote e2
adam@378 791 | Abs _ e1 => fun x => termDenote (e1 x)
adam@378 792 end.
adam@378 793
adam@484 794 (** Here is a %\%naive%{}% first attempt at a reification tactic. *)
adam@378 795
adam@378 796 Ltac refl' e :=
adam@378 797 match e with
adam@378 798 | ?E1 + ?E2 =>
adam@378 799 let r1 := refl' E1 in
adam@378 800 let r2 := refl' E2 in
adam@378 801 constr:(Plus r1 r2)
adam@378 802
adam@378 803 | fun x : nat => ?E1 =>
adam@378 804 let r1 := refl' E1 in
adam@378 805 constr:(Abs (fun x => r1 x))
adam@378 806
adam@378 807 | _ => constr:(Const e)
adam@378 808 end.
adam@378 809
adam@484 810 (** Recall that a regular Ltac pattern variable [?X] only matches terms that _do not mention new variables introduced within the pattern_. In our %\%naive%{}% implementation, the case for matching function abstractions matches the function body in a way that prevents it from mentioning the function argument! Our code above plays fast and loose with the function body in a way that leads to independent problems, but we could change the code so that it indeed handles function abstractions that ignore their arguments.
adam@378 811
adam@484 812 To handle functions in general, we will use the pattern variable form [@?X], which allows [X] to mention newly introduced variables that are declared explicitly. A use of [@?X] must be followed by a list of the local variables that may be mentioned. The variable [X] then comes to stand for a Gallina function over the values of those variables. For instance: *)
adam@378 813
adam@378 814 Reset refl'.
adam@378 815 Ltac refl' e :=
adam@378 816 match e with
adam@378 817 | ?E1 + ?E2 =>
adam@378 818 let r1 := refl' E1 in
adam@378 819 let r2 := refl' E2 in
adam@378 820 constr:(Plus r1 r2)
adam@378 821
adam@378 822 | fun x : nat => @?E1 x =>
adam@378 823 let r1 := refl' E1 in
adam@378 824 constr:(Abs r1)
adam@378 825
adam@378 826 | _ => constr:(Const e)
adam@378 827 end.
adam@378 828
adam@398 829 (** Now, in the abstraction case, we bind [E1] as a function from an [x] value to the value of the abstraction body. Unfortunately, our recursive call there is not destined for success. It will match the same abstraction pattern and trigger another recursive call, and so on through infinite recursion. One last refactoring yields a working procedure. The key idea is to consider every input to [refl'] as _a function over the values of variables introduced during recursion_. *)
adam@378 830
adam@378 831 Reset refl'.
adam@378 832 Ltac refl' e :=
adam@378 833 match eval simpl in e with
adam@378 834 | fun x : ?T => @?E1 x + @?E2 x =>
adam@378 835 let r1 := refl' E1 in
adam@378 836 let r2 := refl' E2 in
adam@378 837 constr:(fun x => Plus (r1 x) (r2 x))
adam@378 838
adam@378 839 | fun (x : ?T) (y : nat) => @?E1 x y =>
adam@378 840 let r1 := refl' (fun p : T * nat => E1 (fst p) (snd p)) in
adam@378 841 constr:(fun x => Abs (fun y => r1 (x, y)))
adam@378 842
adam@378 843 | _ => constr:(fun x => Const (e x))
adam@378 844 end.
adam@378 845
adam@378 846 (** Note how now even the addition case works in terms of functions, with [@?X] patterns. The abstraction case introduces a new variable by extending the type used to represent the free variables. In particular, the argument to [refl'] used type [T] to represent all free variables. We extend the type to [T * nat] for the type representing free variable values within the abstraction body. A bit of bookkeeping with pairs and their projections produces an appropriate version of the abstraction body to pass in a recursive call. To ensure that all this repackaging of terms does not interfere with pattern matching, we add an extra [simpl] reduction on the function argument, in the first line of the body of [refl'].
adam@378 847
adam@378 848 Now one more tactic provides an example of how to apply reification. Let us consider goals that are equalities between terms that can be reified. We want to change such goals into equalities between appropriate calls to [termDenote]. *)
adam@378 849
adam@378 850 Ltac refl :=
adam@378 851 match goal with
adam@378 852 | [ |- ?E1 = ?E2 ] =>
adam@378 853 let E1' := refl' (fun _ : unit => E1) in
adam@378 854 let E2' := refl' (fun _ : unit => E2) in
adam@378 855 change (termDenote (E1' tt) = termDenote (E2' tt));
adam@378 856 cbv beta iota delta [fst snd]
adam@378 857 end.
adam@378 858
adam@378 859 Goal (fun (x y : nat) => x + y + 13) = (fun (_ z : nat) => z).
adam@378 860 refl.
adam@378 861 (** %\vspace{-.15in}%[[
adam@378 862 ============================
adam@378 863 termDenote
adam@378 864 (Abs
adam@378 865 (fun y : nat =>
adam@378 866 Abs (fun y0 : nat => Plus (Plus (Const y) (Const y0)) (Const 13)))) =
adam@378 867 termDenote (Abs (fun _ : nat => Abs (fun y0 : nat => Const y0)))
adam@378 868 ]]
adam@378 869 *)
adam@378 870
adam@378 871 Abort.
adam@378 872
adam@484 873 (** Our encoding here uses Coq functions to represent binding within the terms we reify, which makes it difficult to implement certain functions over reified terms. An alternative would be to represent variables with numbers. This can be done by writing a slightly smarter reification function that identifies variable references by detecting when term arguments are just compositions of [fst] and [snd]; from the order of the compositions we may read off the variable number. We leave the details as an exercise (though not a trivial one!) for the reader. *)