annotate src/Reflection.v @ 384:eb0fa506d04c

Strengthen last theorem of Coinductive
author Adam Chlipala <adam@chlipala.net>
date Sun, 08 Apr 2012 11:11:27 -0400
parents e23d41ae63be
children 05efde66559d
rev   line source
adam@379 1 (* Copyright (c) 2008-2012, Adam Chlipala
adamc@142 2 *
adamc@142 3 * This work is licensed under a
adamc@142 4 * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
adamc@142 5 * Unported License.
adamc@142 6 * The license text is available at:
adamc@142 7 * http://creativecommons.org/licenses/by-nc-nd/3.0/
adamc@142 8 *)
adamc@142 9
adamc@142 10 (* begin hide *)
adamc@142 11 Require Import List.
adamc@142 12
adam@314 13 Require Import CpdtTactics MoreSpecif.
adamc@142 14
adamc@142 15 Set Implicit Arguments.
adamc@142 16 (* end hide *)
adamc@142 17
adamc@142 18
adamc@142 19 (** %\chapter{Proof by Reflection}% *)
adamc@142 20
adam@360 21 (** The last chapter highlighted a very heuristic approach to proving. In this chapter, we will study an alternative technique, %\index{proof by reflection}\textit{%#<i>#proof by reflection#</i>#%}~\cite{reflection}%. We will write, in Gallina, decision procedures with proofs of correctness, and we will appeal to these procedures in writing very short proofs. Such a proof is checked by running the decision procedure. The term %\textit{%#<i>#reflection#</i>#%}% applies because we will need to translate Gallina propositions into values of inductive types representing syntax, so that Gallina programs may analyze them. *)
adamc@142 22
adamc@142 23
adamc@142 24 (** * Proving Evenness *)
adamc@142 25
adamc@142 26 (** Proving that particular natural number constants are even is certainly something we would rather have happen automatically. The Ltac-programming techniques that we learned in the last chapter make it easy to implement such a procedure. *)
adamc@142 27
adamc@142 28 Inductive isEven : nat -> Prop :=
adamc@144 29 | Even_O : isEven O
adamc@144 30 | Even_SS : forall n, isEven n -> isEven (S (S n)).
adamc@142 31
adamc@148 32 (* begin thide *)
adamc@142 33 Ltac prove_even := repeat constructor.
adamc@148 34 (* end thide *)
adamc@142 35
adamc@142 36 Theorem even_256 : isEven 256.
adamc@142 37 prove_even.
adamc@142 38 Qed.
adamc@142 39
adamc@142 40 Print even_256.
adamc@221 41 (** %\vspace{-.15in}% [[
adamc@142 42 even_256 =
adamc@142 43 Even_SS
adamc@142 44 (Even_SS
adamc@142 45 (Even_SS
adamc@142 46 (Even_SS
adamc@221 47
adamc@142 48 ]]
adamc@142 49
adam@360 50 %\noindent%...and so on. This procedure always works (at least on machines with infinite resources), but it has a serious drawback, which we see when we print the proof it generates that 256 is even. The final proof term has length super-linear in the input value. Coq's implicit arguments mechanism is hiding the values given for parameter [n] of [Even_SS], which is why the proof term only appears linear here. Also, proof terms are represented internally as syntax trees, with opportunity for sharing of node representations, but in this chapter we will measure proof term size as simple textual length or as the number of nodes in the term's syntax tree, two measures that are approximately equivalent. Sometimes apparently large proof terms have enough internal sharing that they take up less memory than we expect, but one avoids having to reason about such sharing by ensuring that the size of a sharing-free version of a term is low enough.
adam@360 51
adam@360 52 Superlinear evenness proof terms seem like a shame, since we could write a trivial and trustworthy program to verify evenness of constants. The proof checker could simply call our program where needed.
adamc@142 53
adamc@142 54 It is also unfortunate not to have static typing guarantees that our tactic always behaves appropriately. Other invocations of similar tactics might fail with dynamic type errors, and we would not know about the bugs behind these errors until we happened to attempt to prove complex enough goals.
adamc@142 55
adamc@142 56 The techniques of proof by reflection address both complaints. We will be able to write proofs like this with constant size overhead beyond the size of the input, and we will do it with verified decision procedures written in Gallina.
adamc@142 57
adamc@221 58 For this example, we begin by using a type from the [MoreSpecif] module (included in the book source) to write a certified evenness checker. *)
adamc@142 59
adamc@142 60 Print partial.
adamc@221 61 (** %\vspace{-.15in}% [[
adamc@221 62 Inductive partial (P : Prop) : Set := Proved : P -> [P] | Uncertain : [P]
adamc@221 63
adamc@221 64 ]]
adamc@142 65
adamc@221 66 A [partial P] value is an optional proof of [P]. The notation [[P]] stands for [partial P]. *)
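
(** For instance, we can package a concrete proof as a [partial] value and ask Coq for its type: *)

Check (Proved Even_O).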
adamc@142 67
adamc@221 68 Local Open Scope partial_scope.
adamc@142 69
adamc@142 70 (** We bring into scope some notations for the [partial] type. These overlap with some of the notations we have seen previously for specification types, so they were placed in a separate scope that needs separate opening. *)
adamc@142 71
adamc@148 72 (* begin thide *)
adam@297 73 Definition check_even : forall n : nat, [isEven n].
adamc@142 74 Hint Constructors isEven.
adamc@142 75
adamc@142 76 refine (fix F (n : nat) : [isEven n] :=
adamc@221 77 match n with
adamc@142 78 | 0 => Yes
adamc@142 79 | 1 => No
adamc@142 80 | S (S n') => Reduce (F n')
adamc@142 81 end); auto.
adamc@142 82 Defined.
adamc@142 83
adam@360 84 (** The function [check_even] may be viewed as a %\emph{%#<i>#verified decision procedure#</i>#%}%, because its type guarantees that it never returns [Yes] for inputs that are not even.
adam@360 85
adam@360 86 Now we can use dependent pattern-matching to write a function that performs a surprising feat. When given a [partial P], this function [partialOut] returns a proof of [P] if the [partial] value contains a proof, and it returns a (useless) proof of [True] otherwise. From the standpoint of ML and Haskell programming, it seems impossible to write such a type, but it is trivial with a [return] annotation. *)
adamc@142 87
adamc@142 88 Definition partialOut (P : Prop) (x : [P]) :=
adamc@142 89 match x return (match x with
adamc@142 90 | Proved _ => P
adamc@142 91 | Uncertain => True
adamc@142 92 end) with
adamc@142 93 | Proved pf => pf
adamc@142 94 | Uncertain => I
adamc@142 95 end.
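
(** For instance, applying [partialOut] to results of [check_even] either recovers an actual evenness proof or degenerates to the trivial proof [I] of [True]: *)

Eval compute in partialOut (check_even 2).
Eval compute in partialOut (check_even 3).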
adamc@142 96
adam@289 97 (** It may seem strange to define a function like this. However, it turns out to be very useful in writing a reflective version of our earlier [prove_even] tactic: *)
adamc@142 98
adamc@142 99 Ltac prove_even_reflective :=
adamc@142 100 match goal with
adamc@142 101 | [ |- isEven ?N] => exact (partialOut (check_even N))
adamc@142 102 end.
adamc@148 103 (* end thide *)
adamc@142 104
adam@360 105 (** We identify which natural number we are considering, and we %``%#"#prove#"#%''% its evenness by pulling the proof out of the appropriate [check_even] call. Recall that the %\index{tactics!exact}%[exact] tactic proves a proposition [P] when given a proof term of precisely type [P]. *)
adamc@142 106
adamc@142 107 Theorem even_256' : isEven 256.
adamc@142 108 prove_even_reflective.
adamc@142 109 Qed.
adamc@142 110
adamc@142 111 Print even_256'.
adamc@221 112 (** %\vspace{-.15in}% [[
adamc@142 113 even_256' = partialOut (check_even 256)
adamc@142 114 : isEven 256
adamc@142 115 ]]
adamc@142 116
adam@289 117 We can see a constant wrapper around the object of the proof. For any even number, this form of proof will suffice. The size of the proof term is now linear in the number being checked, containing two repetitions of the unary form of that number, one of which is hidden above within the implicit argument to [partialOut].
adam@289 118
adam@289 119 What happens if we try the tactic with an odd number? *)
adamc@142 120
adamc@142 121 Theorem even_255 : isEven 255.
adam@360 122 (** %\vspace{-.275in}%[[
adamc@142 123 prove_even_reflective.
adam@360 124 ]]
adamc@142 125
adam@360 126 <<
adamc@142 127 User error: No matching clauses for match goal
adam@360 128 >>
adamc@142 129
adamc@142 130 Thankfully, the tactic fails. To see more precisely what goes wrong, we can manually run the body of the [match].
adamc@142 131
adam@360 132 %\vspace{-.15in}%[[
adamc@142 133 exact (partialOut (check_even 255)).
adam@360 134 ]]
adamc@142 135
adam@360 136 <<
adamc@142 137 Error: The term "partialOut (check_even 255)" has type
adamc@142 138 "match check_even 255 with
adamc@142 139 | Yes => isEven 255
adamc@142 140 | No => True
adamc@142 141 end" while it is expected to have type "isEven 255"
adam@360 142 >>
adamc@142 143
adam@360 144 As usual, the type checker performs no reductions to simplify error messages. If we reduced the first term ourselves, we would see that [check_even 255] reduces to a [No], so that the first term is equivalent to [True], which certainly does not unify with [isEven 255]. *)
adamc@221 145
adamc@142 146 Abort.
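
(** We can confirm that reduction directly; the query below should compute to the [Uncertain] branch, reflecting that the checker cannot certify [isEven 255]. *)

Eval compute in check_even 255.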
adamc@143 147
adam@362 148 (** Our tactic [prove_even_reflective] is reflective because it performs a proof search process (a trivial one, in this case) wholly within Gallina, where the only use of Ltac is to translate a goal into an appropriate use of [check_even]. *)
adamc@143 149
adam@360 150
adam@360 151 (** * Reifying the Syntax of a Trivial Tautology Language *)
adamc@143 152
adamc@143 153 (** We might also like to have reflective proofs of trivial tautologies like this one: *)
adamc@143 154
adamc@143 155 Theorem true_galore : (True /\ True) -> (True \/ (True /\ (True -> True))).
adamc@143 156 tauto.
adamc@143 157 Qed.
adamc@143 158
adamc@143 159 Print true_galore.
adamc@221 160 (** %\vspace{-.15in}% [[
adamc@143 161 true_galore =
adamc@143 162 fun H : True /\ True =>
adamc@143 163 and_ind (fun _ _ : True => or_introl (True /\ (True -> True)) I) H
adamc@143 164 : True /\ True -> True \/ True /\ (True -> True)
adamc@221 165
adamc@143 166 ]]
adamc@143 167
adamc@143 168 As we might expect, the proof that [tauto] builds contains explicit applications of natural deduction rules. For large formulas, this can add a linear amount of proof size overhead, beyond the size of the input.
adamc@143 169
adam@360 170 To write a reflective procedure for this class of goals, we will need to get into the actual %``%#"#reflection#"#%''% part of %``%#"#proof by reflection.#"#%''% It is impossible to case-analyze a [Prop] in any way in Gallina. We must %\index{reification}\textit{%#<i>#reify#</i>#%}% [Prop] into some type that we %\textit{%#<i>#can#</i>#%}% analyze. This inductive type is a good candidate: *)
adamc@143 171
adamc@148 172 (* begin thide *)
adamc@143 173 Inductive taut : Set :=
adamc@143 174 | TautTrue : taut
adamc@143 175 | TautAnd : taut -> taut -> taut
adamc@143 176 | TautOr : taut -> taut -> taut
adamc@143 177 | TautImp : taut -> taut -> taut.
adamc@143 178
adam@360 179 (** We write a recursive function to %\emph{%#<i>#reflect#</i>#%}% this syntax back to [Prop]. Such functions are also called %\index{interpretation function}\emph{%#<i>#interpretation functions#</i>#%}%, and we have used them in previous examples to give semantics to small programming languages. *)
adamc@143 180
adamc@143 181 Fixpoint tautDenote (t : taut) : Prop :=
adamc@143 182 match t with
adamc@143 183 | TautTrue => True
adamc@143 184 | TautAnd t1 t2 => tautDenote t1 /\ tautDenote t2
adamc@143 185 | TautOr t1 t2 => tautDenote t1 \/ tautDenote t2
adamc@143 186 | TautImp t1 t2 => tautDenote t1 -> tautDenote t2
adamc@143 187 end.
adamc@143 188
adamc@143 189 (** It is easy to prove that every formula in the range of [tautDenote] is true. *)
adamc@143 190
adamc@143 191 Theorem tautTrue : forall t, tautDenote t.
adamc@143 192 induction t; crush.
adamc@143 193 Qed.
adamc@143 194
adam@360 195 (** To use [tautTrue] to prove particular formulas, we need to implement the syntax reification process. A recursive Ltac function does the job. *)
adamc@143 196
adam@360 197 Ltac tautReify P :=
adamc@143 198 match P with
adamc@143 199 | True => TautTrue
adamc@143 200 | ?P1 /\ ?P2 =>
adam@360 201 let t1 := tautReify P1 in
adam@360 202 let t2 := tautReify P2 in
adamc@143 203 constr:(TautAnd t1 t2)
adamc@143 204 | ?P1 \/ ?P2 =>
adam@360 205 let t1 := tautReify P1 in
adam@360 206 let t2 := tautReify P2 in
adamc@143 207 constr:(TautOr t1 t2)
adamc@143 208 | ?P1 -> ?P2 =>
adam@360 209 let t1 := tautReify P1 in
adam@360 210 let t2 := tautReify P2 in
adamc@143 211 constr:(TautImp t1 t2)
adamc@143 212 end.
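
(** Before wiring [tautReify] into a tactic, we can run it by hand inside a throwaway proof script, where [idtac] prints the constructed [taut] value: *)

Goal True.
  let t := tautReify constr:(True /\ (True -> True)) in idtac t.
Abort.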
adamc@143 213
adam@360 214 (** With [tautReify] available, it is easy to finish our reflective tactic. We look at the goal formula, reify it, and apply [tautTrue] to the reified formula. *)
adamc@143 215
adamc@143 216 Ltac obvious :=
adamc@143 217 match goal with
adamc@143 218 | [ |- ?P ] =>
adam@360 219 let t := tautReify P in
adamc@143 220 exact (tautTrue t)
adamc@143 221 end.
adamc@143 222
adamc@143 223 (** We can verify that [obvious] solves our original example, with a proof term that does not mention details of the proof. *)
adamc@148 224 (* end thide *)
adamc@143 225
adamc@143 226 Theorem true_galore' : (True /\ True) -> (True \/ (True /\ (True -> True))).
adamc@143 227 obvious.
adamc@143 228 Qed.
adamc@143 229
adamc@143 230 Print true_galore'.
adamc@143 231
adamc@221 232 (** %\vspace{-.15in}% [[
adamc@143 233 true_galore' =
adamc@143 234 tautTrue
adamc@143 235 (TautImp (TautAnd TautTrue TautTrue)
adamc@143 236 (TautOr TautTrue (TautAnd TautTrue (TautImp TautTrue TautTrue))))
adamc@143 237 : True /\ True -> True \/ True /\ (True -> True)
adamc@143 238 ]]
adamc@143 239
adam@360 240 It is worth considering how the reflective tactic improves on a pure-Ltac implementation. The formula reification process is just as ad-hoc as before, so we gain little there. In general, proofs will be more complicated than formula translation, and the %``%#"#generic proof rule#"#%''% that we apply here %\textit{%#<i>#is#</i>#%}% on much better formal footing than a recursive Ltac function. The dependent type of the proof guarantees that it %``%#"#works#"#%''% on any input formula. This is all in addition to the proof-size improvement that we have already seen.
adam@360 241
adam@360 242 It may also be worth pointing out that our previous example of evenness testing used a function [partialOut] for sound handling of input goals that the verified decision procedure fails to prove. Here, we prove that our procedure [tautTrue] (recall that an inductive proof may be viewed as a recursive procedure) is able to prove any goal representable in [taut], so no extra step is necessary. *)
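
(** Since [tautTrue] covers every [taut], the same one-line proof script disposes of other formulas in this fragment; for instance: *)

Theorem true_galore'' : True -> True /\ (True \/ True).
  obvious.
Qed.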
adamc@144 243
adamc@144 244
adamc@145 245 (** * A Monoid Expression Simplifier *)
adamc@145 246
adam@289 247 (** Proof by reflection does not require encoding of all of the syntax in a goal. We can insert %``%#"#variables#"#%''% in our syntax types to allow injection of arbitrary pieces, even if we cannot apply specialized reasoning to them. In this section, we explore that possibility by writing a tactic for normalizing monoid equations. *)
adamc@146 248
adamc@145 249 Section monoid.
adamc@145 250 Variable A : Set.
adamc@145 251 Variable e : A.
adamc@145 252 Variable f : A -> A -> A.
adamc@145 253
adamc@145 254 Infix "+" := f.
adamc@145 255
adamc@145 256 Hypothesis assoc : forall a b c, (a + b) + c = a + (b + c).
adamc@145 257 Hypothesis identl : forall a, e + a = a.
adamc@145 258 Hypothesis identr : forall a, a + e = a.
adamc@145 259
adamc@146 260 (** We add variables and hypotheses characterizing an arbitrary instance of the algebraic structure of monoids. We have an associative binary operator and an identity element for it.
adamc@146 261
adam@289 262 It is easy to define an expression tree type for monoid expressions. A [Var] constructor is a %``%#"#catch-all#"#%''% case for subexpressions that we cannot model. These subexpressions could be actual Gallina variables, or they could just use functions that our tactic is unable to understand. *)
adamc@146 263
adamc@148 264 (* begin thide *)
adamc@145 265 Inductive mexp : Set :=
adamc@145 266 | Ident : mexp
adamc@145 267 | Var : A -> mexp
adamc@145 268 | Op : mexp -> mexp -> mexp.
adamc@145 269
adam@360 270 (** Next, we write an interpretation function. *)
adamc@146 271
adamc@145 272 Fixpoint mdenote (me : mexp) : A :=
adamc@145 273 match me with
adamc@145 274 | Ident => e
adamc@145 275 | Var v => v
adamc@145 276 | Op me1 me2 => mdenote me1 + mdenote me2
adamc@145 277 end.
adamc@145 278
adamc@146 279 (** We will normalize expressions by flattening them into lists, via associativity, so it is helpful to have a denotation function for lists of monoid values. *)
adamc@146 280
adamc@145 281 Fixpoint mldenote (ls : list A) : A :=
adamc@145 282 match ls with
adamc@145 283 | nil => e
adamc@145 284 | x :: ls' => x + mldenote ls'
adamc@145 285 end.
adamc@145 286
adamc@146 287 (** The flattening function itself is easy to implement. *)
adamc@146 288
adamc@145 289 Fixpoint flatten (me : mexp) : list A :=
adamc@145 290 match me with
adamc@145 291 | Ident => nil
adamc@145 292 | Var x => x :: nil
adamc@145 293 | Op me1 me2 => flatten me1 ++ flatten me2
adamc@145 294 end.
adamc@145 295
adamc@146 296 (** [flatten] has a straightforward correctness proof in terms of our denotation functions [mdenote] and [mldenote]. *)
adamc@146 297
adamc@146 298 Lemma flatten_correct' : forall ml2 ml1,
adamc@146 299 mldenote ml1 + mldenote ml2 = mldenote (ml1 ++ ml2).
adamc@145 300 induction ml1; crush.
adamc@145 301 Qed.
adamc@145 302
adamc@145 303 Theorem flatten_correct : forall me, mdenote me = mldenote (flatten me).
adamc@145 304 Hint Resolve flatten_correct'.
adamc@145 305
adamc@145 306 induction me; crush.
adamc@145 307 Qed.
adamc@145 308
adamc@146 309 (** Now it is easy to prove a theorem that will be the main tool behind our simplification tactic. *)
adamc@146 310
adamc@146 311 Theorem monoid_reflect : forall me1 me2,
adamc@146 312 mldenote (flatten me1) = mldenote (flatten me2)
adamc@146 313 -> mdenote me1 = mdenote me2.
adamc@145 314 intros; repeat rewrite flatten_correct; assumption.
adamc@145 315 Qed.
adamc@145 316
adam@360 317 (** We implement reification into the [mexp] type. *)
adamc@146 318
adam@360 319 Ltac reify me :=
adamc@146 320 match me with
adamc@145 321 | e => Ident
adamc@146 322 | ?me1 + ?me2 =>
adam@360 323 let r1 := reify me1 in
adam@360 324 let r2 := reify me2 in
adamc@145 325 constr:(Op r1 r2)
adamc@146 326 | _ => constr:(Var me)
adamc@145 327 end.
adamc@145 328
adam@360 329 (** The final [monoid] tactic works on goals that equate two monoid terms. We reify each and change the goal to refer to the reified versions, finishing off by applying [monoid_reflect] and simplifying uses of [mldenote]. Recall that the %\index{tactics!change}%[change] tactic replaces a conclusion formula with another that is definitionally equal to it. *)
adamc@146 330
adamc@145 331 Ltac monoid :=
adamc@145 332 match goal with
adamc@146 333 | [ |- ?me1 = ?me2 ] =>
adam@360 334 let r1 := reify me1 in
adam@360 335 let r2 := reify me2 in
adamc@145 336 change (mdenote r1 = mdenote r2);
adam@360 337 apply monoid_reflect; simpl
adamc@145 338 end.
adamc@145 339
adamc@146 340 (** We can make short work of theorems like this one: *)
adamc@146 341
adamc@148 342 (* end thide *)
adamc@148 343
adamc@145 344 Theorem t1 : forall a b c d, a + b + c + d = a + (b + c) + d.
adamc@146 345 intros; monoid.
adamc@146 346 (** [[
adamc@146 347 ============================
adamc@146 348 a + (b + (c + (d + e))) = a + (b + (c + (d + e)))
adamc@221 349
adamc@146 350 ]]
adamc@146 351
adam@360 352 Our tactic has canonicalized both sides of the equality, such that we can finish the proof by reflexivity. *)
adamc@146 353
adamc@145 354 reflexivity.
adamc@145 355 Qed.
adamc@146 356
adamc@146 357 (** It is interesting to look at the form of the proof. *)
adamc@146 358
adamc@146 359 Print t1.
adamc@221 360 (** %\vspace{-.15in}% [[
adamc@146 361 t1 =
adamc@146 362 fun a b c d : A =>
adamc@146 363 monoid_reflect (Op (Op (Op (Var a) (Var b)) (Var c)) (Var d))
adamc@146 364 (Op (Op (Var a) (Op (Var b) (Var c))) (Var d))
adamc@146 365 (refl_equal (a + (b + (c + (d + e)))))
adamc@146 366 : forall a b c d : A, a + b + c + d = a + (b + c) + d
adamc@146 367 ]]
adamc@146 368
adam@360 369 The proof term contains only restatements of the equality operands in reified form, followed by a use of reflexivity on the shared canonical form. *)
adamc@221 370
adamc@145 371 End monoid.
adamc@145 372
adam@360 373 (** Extensions of this basic approach are used in the implementations of the %\index{tactics!ring}%[ring] and %\index{tactics!field}%[field] tactics that come packaged with Coq. *)
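
(** For example, assuming the standard library's [ArithRing] declaration of the [nat] semiring, [ring] dispatches equations that mix associativity, commutativity, and distributivity: *)

Require Import ArithRing.

Theorem ring_example : forall a b c : nat, a * (b + c) = c * a + a * b.
  intros; ring.
Qed.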
adamc@146 374
adamc@145 375
adamc@144 376 (** * A Smarter Tautology Solver *)
adamc@144 377
adam@360 378 (** Now we are ready to revisit our earlier tautology solver example. We want to broaden the scope of the tactic to include formulas whose truth is not syntactically apparent. We will want to allow injection of arbitrary formulas, like we allowed arbitrary monoid expressions in the last example. Since we are working in a richer theory, it is important to be able to use equalities between different injected formulas. For instance, we cannot prove [P -> P] by translating the formula into a value like [Imp (][Var P) (][Var P)], because a Gallina function has no way of comparing the two [P]s for equality.
adamc@147 379
adam@360 380 To arrive at a nice implementation satisfying these criteria, we introduce the %\index{tactics!quote}%[quote] tactic and its associated library. *)
adamc@147 381
adamc@144 382 Require Import Quote.
adamc@144 383
adamc@148 384 (* begin thide *)
adamc@144 385 Inductive formula : Set :=
adamc@144 386 | Atomic : index -> formula
adamc@144 387 | Truth : formula
adamc@144 388 | Falsehood : formula
adamc@144 389 | And : formula -> formula -> formula
adamc@144 390 | Or : formula -> formula -> formula
adamc@144 391 | Imp : formula -> formula -> formula.
adam@362 392 (* end thide *)
adamc@144 393
adam@360 394 (** The type %\index{Gallina terms!index}%[index] comes from the [Quote] library and represents a countable variable type. The rest of [formula]'s definition should be old hat by now.
adamc@147 395
adamc@147 396 The [quote] tactic will implement injection from [Prop] into [formula] for us, but it is not quite as smart as we might like. In particular, it interprets implications incorrectly, so we will need to declare a wrapper definition for implication, as we did in the last chapter. *)
adamc@144 397
adamc@144 398 Definition imp (P1 P2 : Prop) := P1 -> P2.
adamc@144 399 Infix "-->" := imp (no associativity, at level 95).
adamc@144 400
adamc@147 401 (** Now we can define our denotation function. *)
adamc@147 402
adamc@147 403 Definition asgn := varmap Prop.
adamc@147 404
adam@362 405 (* begin thide *)
adamc@144 406 Fixpoint formulaDenote (atomics : asgn) (f : formula) : Prop :=
adamc@144 407 match f with
adamc@144 408 | Atomic v => varmap_find False v atomics
adamc@144 409 | Truth => True
adamc@144 410 | Falsehood => False
adamc@144 411 | And f1 f2 => formulaDenote atomics f1 /\ formulaDenote atomics f2
adamc@144 412 | Or f1 f2 => formulaDenote atomics f1 \/ formulaDenote atomics f2
adamc@144 413 | Imp f1 f2 => formulaDenote atomics f1 --> formulaDenote atomics f2
adamc@144 414 end.
adam@362 415 (* end thide *)
adamc@144 416
adam@360 417 (** The %\index{Gallina terms!varmap}%[varmap] type family implements maps from [index] values. In this case, we define an assignment as a map from variables to [Prop]s. Our interpretation function [formulaDenote] works with an assignment, and we use the [varmap_find] function to consult the assignment in the [Atomic] case. The first argument to [varmap_find] is a default value, in case the variable is not found. *)
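
(** To make these pieces concrete, we can build a one-variable assignment by hand, using the [Node_vm] and [Empty_vm] constructors from the [Quote] library, and compute the denotation of a reified implication over it; the result should simplify to [1 = 1 --> 1 = 1]. *)

Eval simpl in formulaDenote (Node_vm (1 = 1) (Empty_vm Prop) (Empty_vm Prop))
  (Imp (Atomic End_idx) (Atomic End_idx)).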
adamc@147 418
adamc@144 419 Section my_tauto.
adamc@144 420 Variable atomics : asgn.
adamc@144 421
adamc@144 422 Definition holds (v : index) := varmap_find False v atomics.
adamc@144 423
adamc@147 424 (** We define some shorthand for a particular variable being true, and now we are ready to define some helpful functions based on the [ListSet] module of the standard library, which (unsurprisingly) presents a view of lists as sets. *)
adamc@147 425
adamc@144 426 Require Import ListSet.
adamc@144 427
adamc@144 428 Definition index_eq : forall x y : index, {x = y} + {x <> y}.
adamc@144 429 decide equality.
adamc@144 430 Defined.
adamc@144 431
adamc@144 432 Definition add (s : set index) (v : index) := set_add index_eq v s.
adamc@147 433
adamc@221 434 Definition In_dec : forall v (s : set index), {In v s} + {~ In v s}.
adamc@221 435 Local Open Scope specif_scope.
adamc@144 436
adamc@221 437 intro; refine (fix F (s : set index) : {In v s} + {~ In v s} :=
adamc@221 438 match s with
adamc@144 439 | nil => No
adamc@144 440 | v' :: s' => index_eq v' v || F s'
adamc@144 441 end); crush.
adamc@144 442 Defined.
adamc@144 443
adamc@147 444 (** We define what it means for all members of an index set to represent true propositions, and we prove some lemmas about this notion. *)
adamc@147 445
adamc@144 446 Fixpoint allTrue (s : set index) : Prop :=
adamc@144 447 match s with
adamc@144 448 | nil => True
adamc@144 449 | v :: s' => holds v /\ allTrue s'
adamc@144 450 end.
adamc@144 451
adamc@144 452 Theorem allTrue_add : forall v s,
adamc@144 453 allTrue s
adamc@144 454 -> holds v
adamc@144 455 -> allTrue (add s v).
adamc@144 456 induction s; crush;
adamc@144 457 match goal with
adamc@144 458 | [ |- context[if ?E then _ else _] ] => destruct E
adamc@144 459 end; crush.
adamc@144 460 Qed.
adamc@144 461
adamc@144 462 Theorem allTrue_In : forall v s,
adamc@144 463 allTrue s
adamc@144 464 -> set_In v s
adamc@144 465 -> varmap_find False v atomics.
adamc@144 466 induction s; crush.
adamc@144 467 Qed.
adamc@144 468
adamc@144 469 Hint Resolve allTrue_add allTrue_In.
adamc@144 470
adamc@221 471 Local Open Scope partial_scope.
adamc@144 472
adam@353 473 (** Now we can write a function [forward] which implements deconstruction of hypotheses. It has a dependent type, in the style of Chapter 6, guaranteeing correctness. The arguments to [forward] are a goal formula [f], a set [known] of atomic formulas that we may assume are true, a hypothesis formula [hyp], and a success continuation [cont] that we call when we have extended [known] to hold new truths implied by [hyp]. *)
adamc@147 474
adam@297 475 Definition forward : forall (f : formula) (known : set index) (hyp : formula)
adam@297 476 (cont : forall known', [allTrue known' -> formulaDenote atomics f]),
adam@297 477 [allTrue known -> formulaDenote atomics hyp -> formulaDenote atomics f].
adamc@144 478 refine (fix F (f : formula) (known : set index) (hyp : formula)
adamc@221 479 (cont : forall known', [allTrue known' -> formulaDenote atomics f])
adamc@144 480 : [allTrue known -> formulaDenote atomics hyp -> formulaDenote atomics f] :=
adamc@221 481 match hyp with
adamc@144 482 | Atomic v => Reduce (cont (add known v))
adamc@144 483 | Truth => Reduce (cont known)
adamc@144 484 | Falsehood => Yes
adamc@144 485 | And h1 h2 =>
adamc@144 486 Reduce (F (Imp h2 f) known h1 (fun known' =>
adamc@144 487 Reduce (F f known' h2 cont)))
adamc@144 488 | Or h1 h2 => F f known h1 cont && F f known h2 cont
adamc@144 489 | Imp _ _ => Reduce (cont known)
adamc@144 490 end); crush.
adamc@144 491 Defined.
adamc@144 492
adamc@147 493 (** A [backward] function implements analysis of the final goal. It calls [forward] to handle implications. *)
adamc@147 494
adam@362 495 (* begin thide *)
adam@297 496 Definition backward : forall (known : set index) (f : formula),
adam@297 497 [allTrue known -> formulaDenote atomics f].
adamc@221 498 refine (fix F (known : set index) (f : formula)
adamc@221 499 : [allTrue known -> formulaDenote atomics f] :=
adamc@221 500 match f with
adamc@144 501 | Atomic v => Reduce (In_dec v known)
adamc@144 502 | Truth => Yes
adamc@144 503 | Falsehood => No
adamc@144 504 | And f1 f2 => F known f1 && F known f2
adamc@144 505 | Or f1 f2 => F known f1 || F known f2
adamc@144 506 | Imp f1 f2 => forward f2 known f1 (fun known' => F known' f2)
adamc@144 507 end); crush; eauto.
adamc@144 508 Defined.
adam@362 509 (* end thide *)
adamc@144 510
adamc@147 511 (** A simple wrapper around [backward] gives us the usual type of a partial decision procedure. *)
adamc@147 512
adam@297 513 Definition my_tauto : forall f : formula, [formulaDenote atomics f].
adam@362 514 (* begin thide *)
adamc@144 515 intro; refine (Reduce (backward nil f)); crush.
adamc@144 516 Defined.
adam@362 517 (* end thide *)
adamc@144 518 End my_tauto.
adamc@144 519
adam@360 520 (** Our final tactic implementation is now fairly straightforward. First, we [intro] all quantifiers that do not bind [Prop]s. Then we call the [quote] tactic, which implements the reification for us. Finally, we are able to construct an exact proof via [partialOut] and the [my_tauto] Gallina function. *)
adamc@147 521
adamc@144 522 Ltac my_tauto :=
adamc@144 523 repeat match goal with
adamc@144 524 | [ |- forall x : ?P, _ ] =>
adamc@144 525 match type of P with
adamc@144 526 | Prop => fail 1
adamc@144 527 | _ => intro
adamc@144 528 end
adamc@144 529 end;
adamc@144 530 quote formulaDenote;
adamc@144 531 match goal with
adamc@144 532 | [ |- formulaDenote ?m ?f ] => exact (partialOut (my_tauto m f))
adamc@144 533 end.
adamc@148 534 (* end thide *)
adamc@144 535
adamc@147 536 (** A few examples demonstrate how the tactic works. *)
adamc@147 537
adamc@144 538 Theorem mt1 : True.
adamc@144 539 my_tauto.
adamc@144 540 Qed.
adamc@144 541
adamc@144 542 Print mt1.
adamc@221 543 (** %\vspace{-.15in}% [[
adamc@147 544 mt1 = partialOut (my_tauto (Empty_vm Prop) Truth)
adamc@147 545 : True
adamc@147 546 ]]
adamc@147 547
adamc@147 548 We see [my_tauto] applied with an empty [varmap], since every subformula is handled by [formulaDenote]. *)
adamc@144 549
adamc@144 550 Theorem mt2 : forall x y : nat, x = y --> x = y.
adamc@144 551 my_tauto.
adamc@144 552 Qed.
adamc@144 553
adamc@144 554 Print mt2.
adamc@221 555 (** %\vspace{-.15in}% [[
adamc@147 556 mt2 =
adamc@147 557 fun x y : nat =>
adamc@147 558 partialOut
adamc@147 559 (my_tauto (Node_vm (x = y) (Empty_vm Prop) (Empty_vm Prop))
adamc@147 560 (Imp (Atomic End_idx) (Atomic End_idx)))
adamc@147 561 : forall x y : nat, x = y --> x = y
adamc@147 562 ]]
adamc@147 563
adamc@147 564 Crucially, both instances of [x = y] are represented with the same index, [End_idx]. The value of this index only needs to appear once in the [varmap], whose form reveals that [varmap]s are represented as binary trees, where [index] values denote paths from tree roots to leaves. *)
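
(** We can check the path-based lookup directly; following [Left_idx End_idx] should select the value stored at the root of the left subtree, here [2 = 2]. *)

Eval compute in varmap_find False (Left_idx End_idx)
  (Node_vm (1 = 1) (Node_vm (2 = 2) (Empty_vm Prop) (Empty_vm Prop))
    (Node_vm (3 = 3) (Empty_vm Prop) (Empty_vm Prop))).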
adamc@144 565
adamc@144 566 Theorem mt3 : forall x y z,
adamc@144 567 (x < y /\ y > z) \/ (y > z /\ x < S y)
adamc@144 568 --> y > z /\ (x < y \/ x < S y).
adamc@144 569 my_tauto.
adamc@144 570 Qed.
adamc@144 571
adamc@144 572 Print mt3.
adamc@221 573 (** %\vspace{-.15in}% [[
adamc@147 574 fun x y z : nat =>
adamc@147 575 partialOut
adamc@147 576 (my_tauto
adamc@147 577 (Node_vm (x < S y) (Node_vm (x < y) (Empty_vm Prop) (Empty_vm Prop))
adamc@147 578 (Node_vm (y > z) (Empty_vm Prop) (Empty_vm Prop)))
adamc@147 579 (Imp
adamc@147 580 (Or (And (Atomic (Left_idx End_idx)) (Atomic (Right_idx End_idx)))
adamc@147 581 (And (Atomic (Right_idx End_idx)) (Atomic End_idx)))
adamc@147 582 (And (Atomic (Right_idx End_idx))
adamc@147 583 (Or (Atomic (Left_idx End_idx)) (Atomic End_idx)))))
adamc@147 584 : forall x y z : nat,
adamc@147 585 x < y /\ y > z \/ y > z /\ x < S y --> y > z /\ (x < y \/ x < S y)
adamc@147 586 ]]
adamc@147 587
adamc@147 588 Our goal contained three distinct atomic formulas, and we see that a three-element [varmap] is generated.
adamc@147 589
adamc@147 590 It can be interesting to observe differences between the level of repetition in proof terms generated by [my_tauto] and [tauto] for especially trivial theorems. *)
adamc@144 591
adamc@144 592 Theorem mt4 : True /\ True /\ True /\ True /\ True /\ True /\ False --> False.
adamc@144 593 my_tauto.
adamc@144 594 Qed.
adamc@144 595
adamc@144 596 Print mt4.
adamc@221 597 (** %\vspace{-.15in}% [[
adamc@147 598 mt4 =
adamc@147 599 partialOut
adamc@147 600 (my_tauto (Empty_vm Prop)
adamc@147 601 (Imp
adamc@147 602 (And Truth
adamc@147 603 (And Truth
adamc@147 604 (And Truth (And Truth (And Truth (And Truth Falsehood))))))
adamc@147 605 Falsehood))
adamc@147 606 : True /\ True /\ True /\ True /\ True /\ True /\ False --> False
adam@302 607 ]]
adam@302 608 *)
adamc@144 609
adamc@144 610 Theorem mt4' : True /\ True /\ True /\ True /\ True /\ True /\ False -> False.
adamc@144 611 tauto.
adamc@144 612 Qed.
adamc@144 613
adamc@144 614 Print mt4'.
adamc@221 615 (** %\vspace{-.15in}% [[
adamc@147 616 mt4' =
adamc@147 617 fun H : True /\ True /\ True /\ True /\ True /\ True /\ False =>
adamc@147 618 and_ind
adamc@147 619 (fun (_ : True) (H1 : True /\ True /\ True /\ True /\ True /\ False) =>
adamc@147 620 and_ind
adamc@147 621 (fun (_ : True) (H3 : True /\ True /\ True /\ True /\ False) =>
adamc@147 622 and_ind
adamc@147 623 (fun (_ : True) (H5 : True /\ True /\ True /\ False) =>
adamc@147 624 and_ind
adamc@147 625 (fun (_ : True) (H7 : True /\ True /\ False) =>
adamc@147 626 and_ind
adamc@147 627 (fun (_ : True) (H9 : True /\ False) =>
adamc@147 628 and_ind (fun (_ : True) (H11 : False) => False_ind False H11)
adamc@147 629 H9) H7) H5) H3) H1) H
adamc@147 630 : True /\ True /\ True /\ True /\ True /\ True /\ False -> False
adam@302 631 ]]
adam@360 632
adam@360 633 The traditional [tauto] tactic introduces a quadratic blow-up in the size of the proof term, whereas proofs produced by [my_tauto] always have linear size. *)
adamc@147 634
adam@361 635 (** ** Manual Reification of Terms with Variables *)
adam@361 636
adam@362 637 (* begin thide *)
adam@361 638 (** The action of the [quote] tactic above may seem like magic. Somehow it performs equality comparison between subterms of arbitrary types, so that these subterms may be represented with the same reified variable. While [quote] is implemented in OCaml, we can code the reification process completely in Ltac, as well. To make our job simpler, we will represent variables as [nat]s, indexing into a simple list of variable values that may be referenced.
adam@361 639
adam@361 640 Step one of the process is to crawl over a term, building a duplicate-free list of all values that appear in positions we will encode as variables. A useful helper function adds an element to a list, maintaining lack of duplicates. Note how we use Ltac pattern matching to implement an equality test on Gallina terms; this is simple syntactic equality, not even the richer definitional equality. We also represent lists as nested tuples, to allow different list elements to have different Gallina types. *)
adam@361 641
adam@361 642 Ltac inList x xs :=
adam@361 643 match xs with
adam@361 644 | tt => false
adam@361 645 | (x, _) => true
adam@361 646 | (_, ?xs') => inList x xs'
adam@361 647 end.
adam@361 648
adam@361 649 Ltac addToList x xs :=
adam@361 650 let b := inList x xs in
adam@361 651 match b with
adam@361 652 | true => xs
adam@361 653 | false => constr:(x, xs)
adam@361 654 end.
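
(** We can watch these helpers at work inside a throwaway proof script; [idtac] prints the constructed list, and the duplicate [True] should not be added twice. *)

Goal True.
  let xs := addToList True tt in
  let xs := addToList False xs in
  let xs := addToList True xs in
  idtac xs.
Abort.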
adam@361 655
adam@361 656 (** Now we can write our recursive function to calculate the list of variable values we will want to use to represent a term. *)
adam@361 657
adam@361 658 Ltac allVars xs e :=
adam@361 659 match e with
adam@361 660 | True => xs
adam@361 661 | False => xs
adam@361 662 | ?e1 /\ ?e2 =>
adam@361 663 let xs := allVars xs e1 in
adam@361 664 allVars xs e2
adam@361 665 | ?e1 \/ ?e2 =>
adam@361 666 let xs := allVars xs e1 in
adam@361 667 allVars xs e2
adam@361 668 | ?e1 -> ?e2 =>
adam@361 669 let xs := allVars xs e1 in
adam@361 670 allVars xs e2
adam@361 671 | _ => addToList e xs
adam@361 672 end.
adam@361 673
adam@361 674 (** We will also need a way to map a value to its position in a list. *)
adam@361 675
adam@361 676 Ltac lookup x xs :=
adam@361 677 match xs with
adam@361 678 | (x, _) => O
adam@361 679 | (_, ?xs') =>
adam@361 680 let n := lookup x xs' in
adam@361 681 constr:(S n)
adam@361 682 end.
adam@361 683
adam@361 684 (** The next building block is a procedure for reifying a term, given a list of all allowed variable values. We are free to make this procedure partial, where tactic failure may be triggered upon attempting to reify a term containing subterms not included in the list of variables. The type of the output term is a copy of [formula] where [index] is replaced by [nat] in the type of the constructor for atomic formulas. *)
adam@361 685
adam@361 686 Inductive formula' : Set :=
adam@361 687 | Atomic' : nat -> formula'
adam@361 688 | Truth' : formula'
adam@361 689 | Falsehood' : formula'
adam@361 690 | And' : formula' -> formula' -> formula'
adam@361 691 | Or' : formula' -> formula' -> formula'
adam@361 692 | Imp' : formula' -> formula' -> formula'.
adam@361 693
adam@361 694 (** Note that, when we write our own Ltac procedure, we can work directly with the normal [->] operator, rather than needing to introduce a wrapper for it. *)
adam@361 695
adam@361 696 Ltac reifyTerm xs e :=
adam@361 697 match e with
adam@361 698 | True => Truth'
adam@361 699 | False => Falsehood'
adam@361 700 | ?e1 /\ ?e2 =>
adam@361 701 let p1 := reifyTerm xs e1 in
adam@361 702 let p2 := reifyTerm xs e2 in
adam@361 703 constr:(And' p1 p2)
adam@361 704 | ?e1 \/ ?e2 =>
adam@361 705 let p1 := reifyTerm xs e1 in
adam@361 706 let p2 := reifyTerm xs e2 in
adam@361 707 constr:(Or' p1 p2)
adam@361 708 | ?e1 -> ?e2 =>
adam@361 709 let p1 := reifyTerm xs e1 in
adam@361 710 let p2 := reifyTerm xs e2 in
adam@361 711 constr:(Imp' p1 p2)
adam@361 712 | _ =>
adam@361 713 let n := lookup e xs in
adam@361 714 constr:(Atomic' n)
adam@361 715 end.
adam@361 716
adam@361 717 (** Finally, we bring all the pieces together. *)
adam@361 718
adam@361 719 Ltac reify :=
adam@361 720 match goal with
adam@361 721 | [ |- ?G ] => let xs := allVars tt G in
adam@361 722 let p := reifyTerm xs G in
adam@361 723 pose p
adam@361 724 end.
adam@361 725
adam@361 726 (** A quick test verifies that we are doing reification correctly. *)
adam@361 727
adam@361 728 Theorem mt3' : forall x y z,
adam@361 729 (x < y /\ y > z) \/ (y > z /\ x < S y)
adam@361 730 -> y > z /\ (x < y \/ x < S y).
adam@361 731 do 3 intro; reify.
adam@361 732
adam@361 733 (** Our simple tactic adds the translated term as a new variable:
adam@361 734 [[
adam@361 735 f := Imp'
adam@361 736 (Or' (And' (Atomic' 2) (Atomic' 1)) (And' (Atomic' 1) (Atomic' 0)))
adam@361 737 (And' (Atomic' 1) (Or' (Atomic' 2) (Atomic' 0))) : formula'
adam@361 738 ]]
adam@361 739 *)
adam@361 740 Abort.
adam@361 741
adam@361 742 (** More work would be needed to complete the reflective tactic, as we must connect our new syntax type with the real meanings of formulas, but the details are the same as in our prior implementation with [quote]. *)
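
(** As a minimal sketch of that connection, assuming we convert the nested-tuple variable list into a [list Prop], a denotation function for [formula'] can interpret variable indices with the standard [nth] function, defaulting to [False]: *)

Fixpoint formulaDenote' (atomics : list Prop) (f : formula') : Prop :=
  match f with
    | Atomic' n => nth n atomics False
    | Truth' => True
    | Falsehood' => False
    | And' f1 f2 => formulaDenote' atomics f1 /\ formulaDenote' atomics f2
    | Or' f1 f2 => formulaDenote' atomics f1 \/ formulaDenote' atomics f2
    | Imp' f1 f2 => formulaDenote' atomics f1 -> formulaDenote' atomics f2
  end.

(** An Ltac wrapper would then [change] the goal into an application of [formulaDenote'] to the reified syntax, at which point a verified procedure in the style of [my_tauto] could take over. *)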
adam@362 743 (* end thide *)
adam@378 744
adam@378 745
adam@378 746 (** * Building a Reification Tactic that Recurses Under Binders *)
adam@378 747
adam@378 748 (** All of our examples so far have stayed away from reifying the syntax of terms that use such features as quantifiers and [fun] function abstractions. Such cases are complicated by the fact that different subterms may be allowed to reference different sets of free variables. Some cleverness is needed to clear this hurdle, but a few simple patterns will suffice. Consider this example of a simple dependently typed term language, where a function abstraction body is represented conveniently with a Coq function. *)
adam@378 749
adam@378 750 Inductive type : Type :=
adam@378 751 | Nat : type
adam@378 752 | NatFunc : type -> type.
adam@378 753
adam@378 754 Inductive term : type -> Type :=
adam@378 755 | Const : nat -> term Nat
adam@378 756 | Plus : term Nat -> term Nat -> term Nat
adam@378 757 | Abs : forall t, (nat -> term t) -> term (NatFunc t).
adam@378 758
adam@378 759 Fixpoint typeDenote (t : type) : Type :=
adam@378 760 match t with
adam@378 761 | Nat => nat
adam@378 762 | NatFunc t => nat -> typeDenote t
adam@378 763 end.
adam@378 764
adam@378 765 Fixpoint termDenote t (e : term t) : typeDenote t :=
adam@378 766 match e with
adam@378 767 | Const n => n
adam@378 768 | Plus e1 e2 => termDenote e1 + termDenote e2
adam@378 769 | Abs _ e1 => fun x => termDenote (e1 x)
adam@378 770 end.
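
(** To see the higher-order treatment of binders in action, we can evaluate the denotation of a small abstraction; the result should reduce to [fun x : nat => x + 7]. *)

Eval simpl in termDenote (Abs (fun x : nat => Plus (Const x) (Const 7))).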
adam@378 771
adam@378 772 (** Here is a naive first attempt at a reification tactic. *)
adam@378 773
adam@378 774 Ltac refl' e :=
adam@378 775 match e with
adam@378 776 | ?E1 + ?E2 =>
adam@378 777 let r1 := refl' E1 in
adam@378 778 let r2 := refl' E2 in
adam@378 779 constr:(Plus r1 r2)
adam@378 780
adam@378 781 | fun x : nat => ?E1 =>
adam@378 782 let r1 := refl' E1 in
adam@378 783 constr:(Abs (fun x => r1 x))
adam@378 784
adam@378 785 | _ => constr:(Const e)
adam@378 786 end.
adam@378 787
adam@378 788 (** Recall that a regular Ltac pattern variable [?X] only matches terms that %\emph{%#<i>#do not mention new variables introduced within the pattern#</i>#%}%. In our naive implementation, the case for matching function abstractions matches the function body in a way that prevents it from mentioning the function argument! Our code above plays fast and loose with the function body in a way that leads to independent problems, but we could change the code so that it indeed handles function abstractions that ignore their arguments.
adam@378 789
adam@378 790 To handle functions in general, we will use the pattern variable form [@?X], which allows [X] to mention newly introduced variables that are declared explicitly. For instance: *)
adam@378 791
adam@378 792 Reset refl'.
adam@378 793 Ltac refl' e :=
adam@378 794 match e with
adam@378 795 | ?E1 + ?E2 =>
adam@378 796 let r1 := refl' E1 in
adam@378 797 let r2 := refl' E2 in
adam@378 798 constr:(Plus r1 r2)
adam@378 799
adam@378 800 | fun x : nat => @?E1 x =>
adam@378 801 let r1 := refl' E1 in
adam@378 802 constr:(Abs r1)
adam@378 803
adam@378 804 | _ => constr:(Const e)
adam@378 805 end.
adam@378 806
adam@378 807 (** Now, in the abstraction case, we bind [E1] as a function from an [x] value to the value of the abstraction body. Unfortunately, our recursive call there is not destined for success. It will match the same abstraction pattern and trigger another recursive call, and so on through infinite recursion. One last refactoring yields a working procedure. The key idea is to consider every input to [refl'] as %\emph{%#<i>#a function over the values of variables introduced during recursion#</i>#%}%. *)
adam@378 808
adam@378 809 Reset refl'.
adam@378 810 Ltac refl' e :=
adam@378 811 match eval simpl in e with
adam@378 812 | fun x : ?T => @?E1 x + @?E2 x =>
adam@378 813 let r1 := refl' E1 in
adam@378 814 let r2 := refl' E2 in
adam@378 815 constr:(fun x => Plus (r1 x) (r2 x))
adam@378 816
adam@378 817 | fun (x : ?T) (y : nat) => @?E1 x y =>
adam@378 818 let r1 := refl' (fun p : T * nat => E1 (fst p) (snd p)) in
adam@378 819 constr:(fun x => Abs (fun y => r1 (x, y)))
adam@378 820
adam@378 821 | _ => constr:(fun x => Const (e x))
adam@378 822 end.
adam@378 823
adam@378 824 (** Note how now even the addition case works in terms of functions, with [@?X] patterns. The abstraction case introduces a new variable by extending the type used to represent the free variables. In particular, the argument to [refl'] used type [T] to represent all free variables. We extend the type to [T * nat] for the type representing free variable values within the abstraction body. A bit of bookkeeping with pairs and their projections produces an appropriate version of the abstraction body to pass in a recursive call. To ensure that all this repackaging of terms does not interfere with pattern matching, we add an extra [simpl] reduction on the function argument, in the first line of the body of [refl'].
adam@378 825
adam@378 826 Now one more tactic provides an example of how to apply reification. Let us consider goals that are equalities between terms that can be reified. We want to change such goals into equalities between appropriate calls to [termDenote]. *)
adam@378 827
adam@378 828 Ltac refl :=
adam@378 829 match goal with
adam@378 830 | [ |- ?E1 = ?E2 ] =>
adam@378 831 let E1' := refl' (fun _ : unit => E1) in
adam@378 832 let E2' := refl' (fun _ : unit => E2) in
adam@378 833 change (termDenote (E1' tt) = termDenote (E2' tt));
adam@378 834 cbv beta iota delta [fst snd]
adam@378 835 end.
adam@378 836
adam@378 837 Goal (fun (x y : nat) => x + y + 13) = (fun (_ z : nat) => z).
adam@378 838 refl.
adam@378 839 (** %\vspace{-.15in}%[[
adam@378 840 ============================
adam@378 841 termDenote
adam@378 842 (Abs
adam@378 843 (fun y : nat =>
adam@378 844 Abs (fun y0 : nat => Plus (Plus (Const y) (Const y0)) (Const 13)))) =
adam@378 845 termDenote (Abs (fun _ : nat => Abs (fun y0 : nat => Const y0)))
adam@378 846 ]]
adam@378 847 *)
adam@378 848
adam@378 849 Abort.
adam@378 850
adam@378 851 (** Our encoding here uses Coq functions to represent binding within the terms we reify, which makes it difficult to implement certain functions over reified terms. An alternative would be to represent variables with numbers. This can be done by writing a slightly smarter reification function that detects variable references by noticing when term arguments are just compositions of [fst] and [snd]; from the order of the compositions we may read off the variable number. We leave the details as an exercise for the reader. *)