annotate src/Reflection.v @ 147:f103f28a6b57

my_tauto prose
author Adam Chlipala <adamc@hcoop.net>
date Tue, 28 Oct 2008 16:31:55 -0400
parents 2779c651406a
children 0952c4ba209b
rev   line source
adamc@142 1 (* Copyright (c) 2008, Adam Chlipala
adamc@142 2 *
adamc@142 3 * This work is licensed under a
adamc@142 4 * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
adamc@142 5 * Unported License.
adamc@142 6 * The license text is available at:
adamc@142 7 * http://creativecommons.org/licenses/by-nc-nd/3.0/
adamc@142 8 *)
adamc@142 9
adamc@142 10 (* begin hide *)
adamc@142 11 Require Import List.
adamc@142 12
adamc@142 13 Require Import Tactics MoreSpecif.
adamc@142 14
adamc@142 15 Set Implicit Arguments.
adamc@142 16 (* end hide *)
adamc@142 17
adamc@142 18
adamc@142 19 (** %\chapter{Proof by Reflection}% *)
adamc@142 20
adamc@142 21 (** The last chapter highlighted a very heuristic approach to proving. In this chapter, we will study an alternative technique, %\textit{%#<i>#proof by reflection#</i>#%}%. We will write, in Gallina, decision procedures with proofs of correctness, and we will appeal to these procedures in writing very short proofs. Such a proof is checked by running the decision procedure. The term %\textit{%#<i>#reflection#</i>#%}% applies because we will need to translate Gallina propositions into values of inductive types representing syntax, so that Gallina programs may analyze them. *)
adamc@142 22
adamc@142 23
adamc@142 24 (** * Proving Evenness *)
adamc@142 25
adamc@142 26 (** Proving that particular natural number constants are even is certainly something we would rather have happen automatically. The Ltac-programming techniques that we learned in the last chapter make it easy to implement such a procedure. *)
adamc@142 27
adamc@142 28 Inductive isEven : nat -> Prop :=
adamc@144 29 | Even_O : isEven O
adamc@144 30 | Even_SS : forall n, isEven n -> isEven (S (S n)).
adamc@142 31
adamc@142 32 Ltac prove_even := repeat constructor.
adamc@142 33
adamc@142 34 Theorem even_256 : isEven 256.
adamc@142 35 prove_even.
adamc@142 36 Qed.
adamc@142 37
adamc@142 38 Print even_256.
adamc@142 39 (** [[
adamc@142 40
adamc@142 41 even_256 =
adamc@142 42 Even_SS
adamc@142 43 (Even_SS
adamc@142 44 (Even_SS
adamc@142 45 (Even_SS
adamc@142 46 ]]
adamc@142 47
adamc@142 48 ...and so on. This procedure always works (at least on machines with infinite resources), but it has a serious drawback, which we see when we print the proof it generates that 256 is even. The final proof term has length linear in the input value. This seems like a shame, since we could write a trivial and trustworthy program to verify evenness of constants. The proof checker could simply call our program where needed.
adamc@142 49
adamc@142 50 It is also unfortunate not to have static typing guarantees that our tactic always behaves appropriately. Other invocations of similar tactics might fail with dynamic type errors, and we would not know about the bugs behind these errors until we happened to attempt to prove complex enough goals.
adamc@142 51
adamc@142 52 The techniques of proof by reflection address both complaints. We will be able to write proofs like this with constant size overhead beyond the size of the input, and we will do it with verified decision procedures written in Gallina.
adamc@142 53
adamc@142 54 For this example, we begin by using a type from the [MoreSpecif] module to write a certified evenness checker. *)
adamc@142 55
adamc@142 56 Print partial.
adamc@142 57 (** [[
adamc@142 58
adamc@142 59 Inductive partial (P : Prop) : Set := Proved : P -> [P] | Uncertain : [P]
adamc@142 60 ]] *)
adamc@142 61
adamc@142 62 (** A [partial P] value is an optional proof of [P]. The notation [[P]] stands for [partial P]. *)
adamc@142 63
adamc@142 64 Open Local Scope partial_scope.
adamc@142 65
adamc@142 66 (** We bring into scope some notations for the [partial] type. These overlap with some of the notations we have seen previously for specification types, so they were placed in a separate scope that needs separate opening. *)
adamc@142 67
adamc@142 68 Definition check_even (n : nat) : [isEven n].
adamc@142 69 Hint Constructors isEven.
adamc@142 70
adamc@142 71 refine (fix F (n : nat) : [isEven n] :=
adamc@142 72 match n return [isEven n] with
adamc@142 73 | 0 => Yes
adamc@142 74 | 1 => No
adamc@142 75 | S (S n') => Reduce (F n')
adamc@142 76 end); auto.
adamc@142 77 Defined.
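(** As a quick sanity check (an addition here, not in the original text), we can run the new procedure directly:

[[

Eval compute in check_even 4.

Eval compute in check_even 5.

]]

   The first call reduces to a [Proved] value carrying an evenness proof, while the second reduces to an [Uncertain] value, since 5 is odd. *)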
adamc@142 78
adamc@142 79 (** We can use dependent pattern-matching to write a function that performs a surprising feat. When given a [partial P], this function [partialOut] returns a proof of [P] if the [partial] value contains a proof, and it returns a (useless) proof of [True] otherwise. From the standpoint of ML and Haskell programming, it seems impossible to write such a type, but it is trivial with a [return] annotation. *)
adamc@142 80
adamc@142 81 Definition partialOut (P : Prop) (x : [P]) :=
adamc@142 82 match x return (match x with
adamc@142 83 | Proved _ => P
adamc@142 84 | Uncertain => True
adamc@142 85 end) with
adamc@142 86 | Proved pf => pf
adamc@142 87 | Uncertain => I
adamc@142 88 end.
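(** For instance (an illustrative check, not from the original text), we can ask for the type of [partialOut] applied to one of our certified evenness tests:

[[

Check partialOut (check_even 4).

]]

   Coq reports the dependent [match] type from [partialOut]'s definition, which reduces to [isEven 4], because [check_even 4] computes to a [Proved] value. *)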
adamc@142 89
adamc@142 90 (** It may seem strange to define a function like this. However, it turns out to be very useful in writing a reflective version of our earlier [prove_even] tactic: *)
adamc@142 91
adamc@142 92 Ltac prove_even_reflective :=
adamc@142 93 match goal with
adamc@142 94 | [ |- isEven ?N] => exact (partialOut (check_even N))
adamc@142 95 end.
adamc@142 96
adamc@142 97 (** We identify which natural number we are considering, and we "prove" its evenness by pulling the proof out of the appropriate [check_even] call. *)
adamc@142 98
adamc@142 99 Theorem even_256' : isEven 256.
adamc@142 100 prove_even_reflective.
adamc@142 101 Qed.
adamc@142 102
adamc@142 103 Print even_256'.
adamc@142 104 (** [[
adamc@142 105
adamc@142 106 even_256' = partialOut (check_even 256)
adamc@142 107 : isEven 256
adamc@142 108 ]]
adamc@142 109
adamc@142 110 We can see a constant wrapper around the object of the proof. For any even number, this form of proof will suffice. What happens if we try the tactic with an odd number? *)
adamc@142 111
adamc@142 112 Theorem even_255 : isEven 255.
adamc@142 113 (** [[
adamc@142 114
adamc@142 115 prove_even_reflective.
adamc@142 116
adamc@142 117 [[
adamc@142 118
adamc@142 119 User error: No matching clauses for match goal
adamc@142 120 ]]
adamc@142 121
adamc@142 122 Thankfully, the tactic fails. To see more precisely what goes wrong, we can manually run the body of the [match].
adamc@142 123
adamc@142 124 [[
adamc@142 125
adamc@142 126 exact (partialOut (check_even 255)).
adamc@142 127
adamc@142 128 [[
adamc@142 129
adamc@142 130 Error: The term "partialOut (check_even 255)" has type
adamc@142 131 "match check_even 255 with
adamc@142 132 | Yes => isEven 255
adamc@142 133 | No => True
adamc@142 134 end" while it is expected to have type "isEven 255"
adamc@142 135 ]]
adamc@142 136
adamc@142 137 As usual, the type-checker performs no reductions to simplify error messages. If we reduced the first term ourselves, we would see that [check_even 255] reduces to a [No], so that the first term is equivalent to [True], which certainly does not unify with [isEven 255]. *)
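(** Indeed, we can confirm the reduction directly (a check added here, not in the original):

[[

Eval compute in check_even 255.

]]

   The result is an [Uncertain] value, so [partialOut] can only hand us a proof of [True], never of [isEven 255]. *)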
adamc@142 138 Abort.
adamc@143 139
adamc@143 140
adamc@143 141 (** * Reflecting the Syntax of a Trivial Tautology Language *)
adamc@143 142
adamc@143 143 (** We might also like to have reflective proofs of trivial tautologies like this one: *)
adamc@143 144
adamc@143 145 Theorem true_galore : (True /\ True) -> (True \/ (True /\ (True -> True))).
adamc@143 146 tauto.
adamc@143 147 Qed.
adamc@143 148
adamc@143 149 Print true_galore.
adamc@143 150
adamc@143 151 (** [[
adamc@143 152
adamc@143 153 true_galore =
adamc@143 154 fun H : True /\ True =>
adamc@143 155 and_ind (fun _ _ : True => or_introl (True /\ (True -> True)) I) H
adamc@143 156 : True /\ True -> True \/ True /\ (True -> True)
adamc@143 157 ]]
adamc@143 158
adamc@143 159 As we might expect, the proof that [tauto] builds contains explicit applications of natural deduction rules. For large formulas, this can add a linear amount of proof size overhead, beyond the size of the input.
adamc@143 160
adamc@143 161 To write a reflective procedure for this class of goals, we will need to get into the actual "reflection" part of "proof by reflection." It is impossible to case-analyze a [Prop] in any way in Gallina. We must %\textit{%#<i>#reflect#</i>#%}% [Prop] into some type that we %\textit{%#<i>#can#</i>#%}% analyze. This inductive type is a good candidate: *)
adamc@143 162
adamc@143 163 Inductive taut : Set :=
adamc@143 164 | TautTrue : taut
adamc@143 165 | TautAnd : taut -> taut -> taut
adamc@143 166 | TautOr : taut -> taut -> taut
adamc@143 167 | TautImp : taut -> taut -> taut.
adamc@143 168
adamc@143 169 (** We write a recursive function to "unreflect" this syntax back to [Prop]. *)
adamc@143 170
adamc@143 171 Fixpoint tautDenote (t : taut) : Prop :=
adamc@143 172 match t with
adamc@143 173 | TautTrue => True
adamc@143 174 | TautAnd t1 t2 => tautDenote t1 /\ tautDenote t2
adamc@143 175 | TautOr t1 t2 => tautDenote t1 \/ tautDenote t2
adamc@143 176 | TautImp t1 t2 => tautDenote t1 -> tautDenote t2
adamc@143 177 end.
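(** For example (an illustrative computation, not in the original), the syntax tree corresponding to our earlier tautology denotes exactly the expected proposition:

[[

Eval simpl in tautDenote
  (TautImp (TautAnd TautTrue TautTrue)
    (TautOr TautTrue (TautAnd TautTrue (TautImp TautTrue TautTrue)))).

]]

   The result is [True /\ True -> True \/ True /\ (True -> True)]. *)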
adamc@143 178
adamc@143 179 (** It is easy to prove that every formula in the range of [tautDenote] is true. *)
adamc@143 180
adamc@143 181 Theorem tautTrue : forall t, tautDenote t.
adamc@143 182 induction t; crush.
adamc@143 183 Qed.
adamc@143 184
adamc@143 185 (** To use [tautTrue] to prove particular formulas, we need to implement the syntax reflection process. A recursive Ltac function does the job. *)
adamc@143 186
adamc@143 187 Ltac tautReflect P :=
adamc@143 188 match P with
adamc@143 189 | True => TautTrue
adamc@143 190 | ?P1 /\ ?P2 =>
adamc@143 191 let t1 := tautReflect P1 in
adamc@143 192 let t2 := tautReflect P2 in
adamc@143 193 constr:(TautAnd t1 t2)
adamc@143 194 | ?P1 \/ ?P2 =>
adamc@143 195 let t1 := tautReflect P1 in
adamc@143 196 let t2 := tautReflect P2 in
adamc@143 197 constr:(TautOr t1 t2)
adamc@143 198 | ?P1 -> ?P2 =>
adamc@143 199 let t1 := tautReflect P1 in
adamc@143 200 let t2 := tautReflect P2 in
adamc@143 201 constr:(TautImp t1 t2)
adamc@143 202 end.
adamc@143 203
adamc@143 204 (** With [tautReflect] available, it is easy to finish our reflective tactic. We look at the goal formula, reflect it, and apply [tautTrue] to the reflected formula. *)
adamc@143 205
adamc@143 206 Ltac obvious :=
adamc@143 207 match goal with
adamc@143 208 | [ |- ?P ] =>
adamc@143 209 let t := tautReflect P in
adamc@143 210 exact (tautTrue t)
adamc@143 211 end.
adamc@143 212
adamc@143 213 (** We can verify that [obvious] solves our original example, with a proof term that does not mention details of the proof. *)
adamc@143 214
adamc@143 215 Theorem true_galore' : (True /\ True) -> (True \/ (True /\ (True -> True))).
adamc@143 216 obvious.
adamc@143 217 Qed.
adamc@143 218
adamc@143 219 Print true_galore'.
adamc@143 220
adamc@143 221 (** [[
adamc@143 222
adamc@143 223 true_galore' =
adamc@143 224 tautTrue
adamc@143 225 (TautImp (TautAnd TautTrue TautTrue)
adamc@143 226 (TautOr TautTrue (TautAnd TautTrue (TautImp TautTrue TautTrue))))
adamc@143 227 : True /\ True -> True \/ True /\ (True -> True)
adamc@143 228
adamc@143 229 ]]
adamc@143 230
adamc@143 231 It is worth considering how the reflective tactic improves on a pure-Ltac implementation. The formula reflection process is just as ad-hoc as before, so we gain little there. In general, proofs will be more complicated than formula translation, and the "generic proof rule" that we apply here %\textit{%#<i>#is#</i>#%}% on much better formal footing than a recursive Ltac function. The dependent type of the proof guarantees that it "works" on any input formula. This is all in addition to the proof-size improvement that we have already seen. *)
adamc@144 232
adamc@144 233
adamc@145 234 (** * A Monoid Expression Simplifier *)
adamc@145 235
adamc@146 236 (** Proof by reflection does not require encoding of all of the syntax in a goal. We can insert "variables" in our syntax types to allow injection of arbitrary pieces, even if we cannot apply specialized reasoning to them. In this section, we explore that possibility by writing a tactic for normalizing monoid equations. *)
adamc@146 237
adamc@145 238 Section monoid.
adamc@145 239 Variable A : Set.
adamc@145 240 Variable e : A.
adamc@145 241 Variable f : A -> A -> A.
adamc@145 242
adamc@145 243 Infix "+" := f.
adamc@145 244
adamc@145 245 Hypothesis assoc : forall a b c, (a + b) + c = a + (b + c).
adamc@145 246 Hypothesis identl : forall a, e + a = a.
adamc@145 247 Hypothesis identr : forall a, a + e = a.
adamc@145 248
adamc@146 249 (** We add variables and hypotheses characterizing an arbitrary instance of the algebraic structure of monoids. We have an associative binary operator and an identity element for it.
adamc@146 250
adamc@146 251 It is easy to define an expression tree type for monoid expressions. A [Var] constructor is a "catch-all" case for subexpressions that we cannot model. These subexpressions could be actual Gallina variables, or they could just use functions that our tactic is unable to understand. *)
adamc@146 252
adamc@145 253 Inductive mexp : Set :=
adamc@145 254 | Ident : mexp
adamc@145 255 | Var : A -> mexp
adamc@145 256 | Op : mexp -> mexp -> mexp.
adamc@145 257
adamc@146 258 (** Next, we write an "un-reflect" function. *)
adamc@146 259
adamc@145 260 Fixpoint mdenote (me : mexp) : A :=
adamc@145 261 match me with
adamc@145 262 | Ident => e
adamc@145 263 | Var v => v
adamc@145 264 | Op me1 me2 => mdenote me1 + mdenote me2
adamc@145 265 end.
adamc@145 266
adamc@146 267 (** We will normalize expressions by flattening them into lists, via associativity, so it is helpful to have a denotation function for lists of monoid values. *)
adamc@146 268
adamc@145 269 Fixpoint mldenote (ls : list A) : A :=
adamc@145 270 match ls with
adamc@145 271 | nil => e
adamc@145 272 | x :: ls' => x + mldenote ls'
adamc@145 273 end.
adamc@145 274
adamc@146 275 (** The flattening function itself is easy to implement. *)
adamc@146 276
adamc@145 277 Fixpoint flatten (me : mexp) : list A :=
adamc@145 278 match me with
adamc@145 279 | Ident => nil
adamc@145 280 | Var x => x :: nil
adamc@145 281 | Op me1 me2 => flatten me1 ++ flatten me2
adamc@145 282 end.
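(** For instance (a small illustration added here, assuming some values [a b c : A] are in scope), flattening discards [Ident] leaves and appends the variable lists of subterms:

[[

Eval simpl in flatten (Op (Op (Var a) (Var b)) (Op Ident (Var c))).

]]

   The result is the list [a :: b :: c :: nil], whose [mldenote] is the right-associated sum [a + (b + (c + e))]. *)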
adamc@145 283
adamc@146 284 (** [flatten] has a straightforward correctness proof in terms of our [denote] functions. *)
adamc@146 285
adamc@146 286 Lemma flatten_correct' : forall ml2 ml1,
adamc@146 287 mldenote ml1 + mldenote ml2 = mldenote (ml1 ++ ml2).
adamc@145 288 induction ml1; crush.
adamc@145 289 Qed.
adamc@145 290
adamc@145 291 Theorem flatten_correct : forall me, mdenote me = mldenote (flatten me).
adamc@145 292 Hint Resolve flatten_correct'.
adamc@145 293
adamc@145 294 induction me; crush.
adamc@145 295 Qed.
adamc@145 296
adamc@146 297 (** Now it is easy to prove a theorem that will be the main tool behind our simplification tactic. *)
adamc@146 298
adamc@146 299 Theorem monoid_reflect : forall me1 me2,
adamc@146 300 mldenote (flatten me1) = mldenote (flatten me2)
adamc@146 301 -> mdenote me1 = mdenote me2.
adamc@145 302 intros; repeat rewrite flatten_correct; assumption.
adamc@145 303 Qed.
adamc@145 304
adamc@146 305 (** We implement reflection into the [mexp] type. *)
adamc@146 306
adamc@146 307 Ltac reflect me :=
adamc@146 308 match me with
adamc@145 309 | e => Ident
adamc@146 310 | ?me1 + ?me2 =>
adamc@146 311 let r1 := reflect me1 in
adamc@146 312 let r2 := reflect me2 in
adamc@145 313 constr:(Op r1 r2)
adamc@146 314 | _ => constr:(Var me)
adamc@145 315 end.
adamc@145 316
adamc@146 317 (** The final [monoid] tactic works on goals that equate two monoid terms. We reflect each and change the goal to refer to the reflected versions, finishing off by applying [monoid_reflect] and simplifying uses of [mldenote]. *)
adamc@146 318
adamc@145 319 Ltac monoid :=
adamc@145 320 match goal with
adamc@146 321 | [ |- ?me1 = ?me2 ] =>
adamc@146 322 let r1 := reflect me1 in
adamc@146 323 let r2 := reflect me2 in
adamc@145 324 change (mdenote r1 = mdenote r2);
adamc@145 325 apply monoid_reflect; simpl mldenote
adamc@145 326 end.
adamc@145 327
adamc@146 328 (** We can make short work of theorems like this one: *)
adamc@146 329
adamc@145 330 Theorem t1 : forall a b c d, a + b + c + d = a + (b + c) + d.
adamc@146 331 intros; monoid.
adamc@146 332 (** [[
adamc@146 333
adamc@146 334 ============================
adamc@146 335 a + (b + (c + (d + e))) = a + (b + (c + (d + e)))
adamc@146 336 ]]
adamc@146 337
adamc@146 338 [monoid] has canonicalized both sides of the equality, such that we can finish the proof by reflexivity. *)
adamc@146 339
adamc@145 340 reflexivity.
adamc@145 341 Qed.
adamc@146 342
adamc@146 343 (** It is interesting to look at the form of the proof. *)
adamc@146 344
adamc@146 345 Print t1.
adamc@146 346 (** [[
adamc@146 347
adamc@146 348 t1 =
adamc@146 349 fun a b c d : A =>
adamc@146 350 monoid_reflect (Op (Op (Op (Var a) (Var b)) (Var c)) (Var d))
adamc@146 351 (Op (Op (Var a) (Op (Var b) (Var c))) (Var d))
adamc@146 352 (refl_equal (a + (b + (c + (d + e)))))
adamc@146 353 : forall a b c d : A, a + b + c + d = a + (b + c) + d
adamc@146 354 ]]
adamc@146 355
adamc@146 356 The proof term contains only restatements of the equality operands in reflected form, followed by a use of reflexivity on the shared canonical form. *)
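(** The tactic also handles occurrences of the identity element; for instance (an extra example added here, with a hypothetical theorem name):

[[

Theorem t2 : forall a b c, (a + e) + (b + c) = a + (b + (c + e)).
  intros; monoid; reflexivity.
Qed.

]]

   Both sides flatten to the list [a :: b :: c :: nil], so the canonical forms coincide. *)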
adamc@145 357 End monoid.
adamc@145 358
adamc@146 359 (** Extensions of this basic approach are used in the implementations of the [ring] and [field] tactics that come packaged with Coq. *)
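(** For instance (a usage sketch added here, not part of this development; it assumes the standard [ArithRing] instance for [nat]), [ring] normalizes both sides of semiring equations in one step:

[[

Require Import ArithRing.

Theorem ring_example : forall x y : nat,
  (x + y) * (x + y) = x * x + 2 * x * y + y * y.
  intros; ring.
Qed.

]]
*)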
adamc@146 360
adamc@145 361
adamc@144 362 (** * A Smarter Tautology Solver *)
adamc@144 363
adamc@147 364 (** Now we are ready to revisit our earlier tautology solver example. We want to broaden the scope of the tactic to include formulas whose truth is not syntactically apparent. We will want to allow injection of arbitrary formulas, just as we allowed arbitrary monoid expressions in the last example. Since we are working in a richer theory, it is important to be able to use equalities between different injected formulas. For instance, we cannot prove [P -> P] by translating the formula into a value like [Imp (Var P) (Var P)], because a Gallina function has no way of comparing the two [P]s for equality.
adamc@147 365
adamc@147 366 To arrive at a nice implementation satisfying these criteria, we introduce the [quote] tactic and its associated library. *)
adamc@147 367
adamc@144 368 Require Import Quote.
adamc@144 369
adamc@144 370 Inductive formula : Set :=
adamc@144 371 | Atomic : index -> formula
adamc@144 372 | Truth : formula
adamc@144 373 | Falsehood : formula
adamc@144 374 | And : formula -> formula -> formula
adamc@144 375 | Or : formula -> formula -> formula
adamc@144 376 | Imp : formula -> formula -> formula.
adamc@144 377
adamc@147 378 (** The type [index] comes from the [Quote] library and represents a countable variable type. The rest of [formula]'s definition should be old hat by now.
adamc@147 379
adamc@147 380 The [quote] tactic will implement injection from [Prop] into [formula] for us, but it is not quite as smart as we might like. In particular, it interprets implications incorrectly, so we will need to declare a wrapper definition for implication, as we did in the last chapter. *)
adamc@144 381
adamc@144 382 Definition imp (P1 P2 : Prop) := P1 -> P2.
adamc@144 383 Infix "-->" := imp (no associativity, at level 95).
adamc@144 384
adamc@147 385 (** Now we can define our denotation function. *)
adamc@147 386
adamc@147 387 Definition asgn := varmap Prop.
adamc@147 388
adamc@144 389 Fixpoint formulaDenote (atomics : asgn) (f : formula) : Prop :=
adamc@144 390 match f with
adamc@144 391 | Atomic v => varmap_find False v atomics
adamc@144 392 | Truth => True
adamc@144 393 | Falsehood => False
adamc@144 394 | And f1 f2 => formulaDenote atomics f1 /\ formulaDenote atomics f2
adamc@144 395 | Or f1 f2 => formulaDenote atomics f1 \/ formulaDenote atomics f2
adamc@144 396 | Imp f1 f2 => formulaDenote atomics f1 --> formulaDenote atomics f2
adamc@144 397 end.
adamc@144 398
adamc@147 399 (** The [varmap] type family implements maps from [index] values. In this case, we define an assignment as a map from variables to [Prop]s. [formulaDenote] works with an assignment, and we use the [varmap_find] function to consult the assignment in the [Atomic] case. The first argument to [varmap_find] is a default value, in case the variable is not found. *)
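(** For example (an illustrative computation, not in the original), with a singleton assignment the lone [index] value [End_idx] looks up the root of the map:

[[

Eval compute in formulaDenote
  (Node_vm (isEven 0) (Empty_vm Prop) (Empty_vm Prop))
  (And Truth (Atomic End_idx)).

]]

   The result is the proposition [True /\ isEven 0]. *)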
adamc@147 400
adamc@144 401 Section my_tauto.
adamc@144 402 Variable atomics : asgn.
adamc@144 403
adamc@144 404 Definition holds (v : index) := varmap_find False v atomics.
adamc@144 405
adamc@147 406 (** We define some shorthand for a particular variable being true, and now we are ready to define some helpful functions based on the [ListSet] module of the standard library, which (unsurprisingly) presents a view of lists as sets. *)
adamc@147 407
adamc@144 408 Require Import ListSet.
adamc@144 409
adamc@144 410 Definition index_eq : forall x y : index, {x = y} + {x <> y}.
adamc@144 411 decide equality.
adamc@144 412 Defined.
adamc@144 413
adamc@144 414 Definition add (s : set index) (v : index) := set_add index_eq v s.
adamc@147 415
adamc@144 416 Definition In_dec : forall v (s : set index), {In v s} + {~In v s}.
adamc@144 417 Open Local Scope specif_scope.
adamc@144 418
adamc@144 419 intro; refine (fix F (s : set index) : {In v s} + {~In v s} :=
adamc@144 420 match s return {In v s} + {~In v s} with
adamc@144 421 | nil => No
adamc@144 422 | v' :: s' => index_eq v' v || F s'
adamc@144 423 end); crush.
adamc@144 424 Defined.
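(** For instance (a small check added here, not in the original), [add] really behaves as set insertion, ignoring duplicates:

[[

Eval compute in add (add (End_idx :: nil) End_idx) (Left_idx End_idx).

]]

   Adding [End_idx] a second time leaves the set unchanged, so the result is [End_idx :: Left_idx End_idx :: nil]. *)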
adamc@144 425
adamc@147 426 (** We define what it means for all members of an index set to represent true propositions, and we prove some lemmas about this notion. *)
adamc@147 427
adamc@144 428 Fixpoint allTrue (s : set index) : Prop :=
adamc@144 429 match s with
adamc@144 430 | nil => True
adamc@144 431 | v :: s' => holds v /\ allTrue s'
adamc@144 432 end.
adamc@144 433
adamc@144 434 Theorem allTrue_add : forall v s,
adamc@144 435 allTrue s
adamc@144 436 -> holds v
adamc@144 437 -> allTrue (add s v).
adamc@144 438 induction s; crush;
adamc@144 439 match goal with
adamc@144 440 | [ |- context[if ?E then _ else _] ] => destruct E
adamc@144 441 end; crush.
adamc@144 442 Qed.
adamc@144 443
adamc@144 444 Theorem allTrue_In : forall v s,
adamc@144 445 allTrue s
adamc@144 446 -> set_In v s
adamc@144 447 -> varmap_find False v atomics.
adamc@144 448 induction s; crush.
adamc@144 449 Qed.
adamc@144 450
adamc@144 451 Hint Resolve allTrue_add allTrue_In.
adamc@144 452
adamc@144 453 Open Local Scope partial_scope.
adamc@144 454
adamc@147 455 (** Now we can write a function [forward] which implements deconstruction of hypotheses. It has a dependent type, in the style of Chapter 6, guaranteeing correctness. The arguments to [forward] are a goal formula [f], a set [known] of atomic formulas that we may assume are true, a hypothesis formula [hyp], and a success continuation [cont] that we call when we have extended [known] to hold new truths implied by [hyp]. *)
adamc@147 456
adamc@144 457 Definition forward (f : formula) (known : set index) (hyp : formula)
adamc@144 458 (cont : forall known', [allTrue known' -> formulaDenote atomics f])
adamc@144 459 : [allTrue known -> formulaDenote atomics hyp -> formulaDenote atomics f].
adamc@144 460 refine (fix F (f : formula) (known : set index) (hyp : formula)
adamc@144 461 (cont : forall known', [allTrue known' -> formulaDenote atomics f]){struct hyp}
adamc@144 462 : [allTrue known -> formulaDenote atomics hyp -> formulaDenote atomics f] :=
adamc@144 463 match hyp return [allTrue known -> formulaDenote atomics hyp -> formulaDenote atomics f] with
adamc@144 464 | Atomic v => Reduce (cont (add known v))
adamc@144 465 | Truth => Reduce (cont known)
adamc@144 466 | Falsehood => Yes
adamc@144 467 | And h1 h2 =>
adamc@144 468 Reduce (F (Imp h2 f) known h1 (fun known' =>
adamc@144 469 Reduce (F f known' h2 cont)))
adamc@144 470 | Or h1 h2 => F f known h1 cont && F f known h2 cont
adamc@144 471 | Imp _ _ => Reduce (cont known)
adamc@144 472 end); crush.
adamc@144 473 Defined.
adamc@144 474
adamc@147 475 (** A [backward] function implements analysis of the final goal. It calls [forward] to handle implications. *)
adamc@147 476
adamc@144 477 Definition backward (known : set index) (f : formula) : [allTrue known -> formulaDenote atomics f].
adamc@144 478 refine (fix F (known : set index) (f : formula) : [allTrue known -> formulaDenote atomics f] :=
adamc@144 479 match f return [allTrue known -> formulaDenote atomics f] with
adamc@144 480 | Atomic v => Reduce (In_dec v known)
adamc@144 481 | Truth => Yes
adamc@144 482 | Falsehood => No
adamc@144 483 | And f1 f2 => F known f1 && F known f2
adamc@144 484 | Or f1 f2 => F known f1 || F known f2
adamc@144 485 | Imp f1 f2 => forward f2 known f1 (fun known' => F known' f2)
adamc@144 486 end); crush; eauto.
adamc@144 487 Defined.
adamc@144 488
adamc@147 489 (** A simple wrapper around [backward] gives us the usual type of a partial decision procedure. *)
adamc@147 490
adamc@144 491 Definition my_tauto (f : formula) : [formulaDenote atomics f].
adamc@144 492 intro; refine (Reduce (backward nil f)); crush.
adamc@144 493 Defined.
adamc@144 494 End my_tauto.
adamc@144 495
adamc@147 496 (** Our final tactic implementation is now fairly straightforward. First, we [intro] all quantifiers that do not bind [Prop]s. Then we call the [quote] tactic, which implements the reflection for us. Finally, we are able to construct an exact proof via [partialOut] and the [my_tauto] Gallina function. *)
adamc@147 497
adamc@144 498 Ltac my_tauto :=
adamc@144 499 repeat match goal with
adamc@144 500 | [ |- forall x : ?P, _ ] =>
adamc@144 501 match type of P with
adamc@144 502 | Prop => fail 1
adamc@144 503 | _ => intro
adamc@144 504 end
adamc@144 505 end;
adamc@144 506 quote formulaDenote;
adamc@144 507 match goal with
adamc@144 508 | [ |- formulaDenote ?m ?f ] => exact (partialOut (my_tauto m f))
adamc@144 509 end.
adamc@144 510
adamc@147 511 (** A few examples demonstrate how the tactic works. *)
adamc@147 512
adamc@144 513 Theorem mt1 : True.
adamc@144 514 my_tauto.
adamc@144 515 Qed.
adamc@144 516
adamc@144 517 Print mt1.
adamc@147 518 (** [[
adamc@147 519
adamc@147 520 mt1 = partialOut (my_tauto (Empty_vm Prop) Truth)
adamc@147 521 : True
adamc@147 522 ]]
adamc@147 523
adamc@147 524 We see [my_tauto] applied with an empty [varmap], since every subformula is handled by [formulaDenote]. *)
adamc@144 525
adamc@144 526 Theorem mt2 : forall x y : nat, x = y --> x = y.
adamc@144 527 my_tauto.
adamc@144 528 Qed.
adamc@144 529
adamc@144 530 Print mt2.
adamc@147 531 (** [[
adamc@147 532
adamc@147 533 mt2 =
adamc@147 534 fun x y : nat =>
adamc@147 535 partialOut
adamc@147 536 (my_tauto (Node_vm (x = y) (Empty_vm Prop) (Empty_vm Prop))
adamc@147 537 (Imp (Atomic End_idx) (Atomic End_idx)))
adamc@147 538 : forall x y : nat, x = y --> x = y
adamc@147 539 ]]
adamc@147 540
adamc@147 541 Crucially, both instances of [x = y] are represented with the same index, [End_idx]. The value of this index only needs to appear once in the [varmap], whose form reveals that [varmap]s are represented as binary trees, where [index] values denote paths from tree roots to leaves. *)
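(** To spell out the path encoding (an added check, not from the original text), we can query a two-level [varmap] directly:

[[

Eval compute in varmap_find False (Left_idx End_idx)
  (Node_vm (0 = 0) (Node_vm (1 = 1) (Empty_vm Prop) (Empty_vm Prop)) (Empty_vm Prop)).

]]

   [Left_idx] selects the left subtree and [End_idx] stops at its root, so the result is [1 = 1]; the index [End_idx] alone would instead return the root value [0 = 0]. *)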
adamc@144 542
adamc@144 543 Theorem mt3 : forall x y z,
adamc@144 544 (x < y /\ y > z) \/ (y > z /\ x < S y)
adamc@144 545 --> y > z /\ (x < y \/ x < S y).
adamc@144 546 my_tauto.
adamc@144 547 Qed.
adamc@144 548
adamc@144 549 Print mt3.
adamc@147 550 (** [[
adamc@147 551
adamc@147 552 fun x y z : nat =>
adamc@147 553 partialOut
adamc@147 554 (my_tauto
adamc@147 555 (Node_vm (x < S y) (Node_vm (x < y) (Empty_vm Prop) (Empty_vm Prop))
adamc@147 556 (Node_vm (y > z) (Empty_vm Prop) (Empty_vm Prop)))
adamc@147 557 (Imp
adamc@147 558 (Or (And (Atomic (Left_idx End_idx)) (Atomic (Right_idx End_idx)))
adamc@147 559 (And (Atomic (Right_idx End_idx)) (Atomic End_idx)))
adamc@147 560 (And (Atomic (Right_idx End_idx))
adamc@147 561 (Or (Atomic (Left_idx End_idx)) (Atomic End_idx)))))
adamc@147 562 : forall x y z : nat,
adamc@147 563 x < y /\ y > z \/ y > z /\ x < S y --> y > z /\ (x < y \/ x < S y)
adamc@147 564 ]]
adamc@147 565
adamc@147 566 Our goal contained three distinct atomic formulas, and we see that a three-element [varmap] is generated.
adamc@147 567
adamc@147 568 It can be interesting to observe differences between the level of repetition in proof terms generated by [my_tauto] and [tauto] for especially trivial theorems. *)
adamc@144 569
adamc@144 570 Theorem mt4 : True /\ True /\ True /\ True /\ True /\ True /\ False --> False.
adamc@144 571 my_tauto.
adamc@144 572 Qed.
adamc@144 573
adamc@144 574 Print mt4.
adamc@147 575 (** [[
adamc@147 576
adamc@147 577 mt4 =
adamc@147 578 partialOut
adamc@147 579 (my_tauto (Empty_vm Prop)
adamc@147 580 (Imp
adamc@147 581 (And Truth
adamc@147 582 (And Truth
adamc@147 583 (And Truth (And Truth (And Truth (And Truth Falsehood))))))
adamc@147 584 Falsehood))
adamc@147 585 : True /\ True /\ True /\ True /\ True /\ True /\ False --> False
adamc@147 586 ]] *)
adamc@144 587
adamc@144 588 Theorem mt4' : True /\ True /\ True /\ True /\ True /\ True /\ False -> False.
adamc@144 589 tauto.
adamc@144 590 Qed.
adamc@144 591
adamc@144 592 Print mt4'.
adamc@147 593 (** [[
adamc@147 594
adamc@147 595 mt4' =
adamc@147 596 fun H : True /\ True /\ True /\ True /\ True /\ True /\ False =>
adamc@147 597 and_ind
adamc@147 598 (fun (_ : True) (H1 : True /\ True /\ True /\ True /\ True /\ False) =>
adamc@147 599 and_ind
adamc@147 600 (fun (_ : True) (H3 : True /\ True /\ True /\ True /\ False) =>
adamc@147 601 and_ind
adamc@147 602 (fun (_ : True) (H5 : True /\ True /\ True /\ False) =>
adamc@147 603 and_ind
adamc@147 604 (fun (_ : True) (H7 : True /\ True /\ False) =>
adamc@147 605 and_ind
adamc@147 606 (fun (_ : True) (H9 : True /\ False) =>
adamc@147 607 and_ind (fun (_ : True) (H11 : False) => False_ind False H11)
adamc@147 608 H9) H7) H5) H3) H1) H
adamc@147 609 : True /\ True /\ True /\ True /\ True /\ True /\ False -> False
adamc@147 610 ]] *)
adamc@147 611