changeset 488:31258618ef73

Incorporate feedback from Nathan Collins
author      Adam Chlipala
date        Tue, 08 Jan 2013 14:38:56 -0500
parents     8bfb27cf0121 a95af5a59990
files       src/DataStruct.v src/Equality.v src/Generic.v src/InductiveTypes.v src/Match.v src/Predicates.v src/Universes.v
diffstat    7 files changed, 20 insertions(+), 20 deletions(-)
--- a/src/DataStruct.v	Mon Jan 07 15:23:16 2013 -0500
+++ b/src/DataStruct.v	Tue Jan 08 14:38:56 2013 -0500
@@ -584,7 +584,7 @@

Implicit Arguments Node [A n].

-(** We can redefine [sum] and [inc] for our new [tree] type.  Again, it is useful to define a generic fold function first.  This time, it takes in a function whose range is some [ffin] type, and it folds another function over the results of calling the first function at every possible [ffin] value. *)
+(** We can redefine [sum] and [inc] for our new [tree] type.  Again, it is useful to define a generic fold function first.  This time, it takes in a function whose domain is some [ffin] type, and it folds another function over the results of calling the first function at every possible [ffin] value. *)

Section rifoldr.
Variables A B : Set.
--- a/src/Equality.v	Mon Jan 07 15:23:16 2013 -0500
+++ b/src/Equality.v	Tue Jan 08 14:38:56 2013 -0500
@@ -628,7 +628,7 @@
end
]]

-        The conclusion has gotten markedly simpler.  It seems counterintuitive that we can have an easier time of proving a more general theorem, but that is exactly the case here and for many other proofs that use dependent types heavily.  Speaking informally, the reason why this kind of activity helps is that [match] annotations only support variables in certain positions.  By reducing more elements of a goal to variables, built-in tactics can have more success building [match] terms under the hood.
+        The conclusion has gotten markedly simpler.  It seems counterintuitive that we can have an easier time of proving a more general theorem, but that is exactly the case here and for many other proofs that use dependent types heavily.  Speaking informally, the reason why this kind of activity helps is that [match] annotations contain some positions where only variables are allowed.  By reducing more elements of a goal to variables, built-in tactics can have more success building [match] terms under the hood.

In this case, it is helpful to generalize over our two proofs as well. *)

--- a/src/Generic.v	Mon Jan 07 15:23:16 2013 -0500
+++ b/src/Generic.v	Tue Jan 08 14:38:56 2013 -0500
@@ -26,7 +26,7 @@

(** * Reifying Datatype Definitions *)

-(** The key to generic programming with dependent types is%\index{universe types}% _universe types_.  This concept should not be confused with the idea of _universes_ from the metatheory of CIC and related languages.  Rather, the idea of universe types is to define inductive types that provide _syntactic representations_ of Coq types.  We cannot directly write CIC programs that do case analysis on types, but we _can_ case analyze on reified syntactic versions of those types.
+(** The key to generic programming with dependent types is%\index{universe types}% _universe types_.  This concept should not be confused with the idea of _universes_ from the metatheory of CIC and related languages, which we will study in more detail in the next chapter.  Rather, the idea of universe types is to define inductive types that provide _syntactic representations_ of Coq types.  We cannot directly write CIC programs that do case analysis on types, but we _can_ case analyze on reified syntactic versions of those types.

Thus, to begin, we must define a syntactic representation of some class of datatypes.  In this chapter, our running example will have to do with basic algebraic datatypes, of the kind found in ML and Haskell, but without additional bells and whistles like type parameters and mutually recursive definitions.

@@ -138,7 +138,7 @@
Definition size T dt (fx : fixDenote T dt) : T -> nat :=
fx nat (hmake (B := constructorDenote nat) (fun _ _ r => foldr plus 1 r) dt).

-(** Our definition is parameterized over a recursion scheme [fx].  We instantiate [fx] by passing it the function result type and a set of function cases, where we build the latter with [hmake].  The function argument to [hmake] takes three arguments: the representation of a constructor, its non-recursive arguments, and the results of recursive calls on all of its recursive arguments.  We only need the recursive call results here, so we call them [r] and bind the other two inputs with wildcards.  The actual case body is simple: we add together the recursive call results and increment the result by one (to account for the current constructor).  This [foldr] function is an [hlist]-specific version defined in the [DepList] module.
+(** Our definition is parameterized over a recursion scheme [fx].  We instantiate [fx] by passing it the function result type and a set of function cases, where we build the latter with [hmake].  The function argument to [hmake] takes three arguments: the representation of a constructor, its non-recursive arguments, and the results of recursive calls on all of its recursive arguments.  We only need the recursive call results here, so we call them [r] and bind the other two inputs with wildcards.  The actual case body is simple: we add together the recursive call results and increment the result by one (to account for the current constructor).  This [foldr] function is an [ilist]-specific version defined in the [DepList] module.

It is instructive to build [fixDenote] values for our example types and see what specialized [size] functions result from them. *)

@@ -292,7 +292,7 @@
::: HNil) bool_fix.
(** %\vspace{-.15in}% [[
= fun b : bool => if b then "true()" else "false()"
-   : bool -> s
+   : bool -> string
]]
*)

@@ -656,7 +656,7 @@
(* end thide *)
(* end hide *)

-  (** The base case is discharged automatically, and the inductive case looks like this, where [H] is the outer IH (for induction over [T] values) and [IHn] is the inner IH (for induction over the recursive arguments).
+  (** The base case is discharged automatically, and the inductive case looks like this, where [H] is the outer IH (for induction over [T] values) and [IHr] is the inner IH (for induction over the recursive arguments).
[[
H : forall i : fin (S n),
fx T
--- a/src/InductiveTypes.v	Mon Jan 07 15:23:16 2013 -0500
+++ b/src/InductiveTypes.v	Tue Jan 08 14:38:56 2013 -0500
@@ -507,7 +507,7 @@
forall l : list T, P l
]]

-Thus, even though we just saw that [T] is added as an extra argument to the constructor [Cons], the inductive case in the type of [list_ind] (i.e., the third line of the type) includes no quantifier for [T], even though all of the other arguments are quantified explicitly.  Parameters in other inductive definitions are treated similarly in stating induction principles. *)
+Thus, despite a very real sense in which the type [T] is an argument to the constructor [Cons], the inductive case in the type of [list_ind] (i.e., the third line of the type) includes no quantifier for [T], even though all of the other arguments are quantified explicitly.  Parameters in other inductive definitions are treated similarly in stating induction principles. *)

(** * Mutually Inductive Types *)
--- a/src/Match.v	Mon Jan 07 15:23:16 2013 -0500
+++ b/src/Match.v	Tue Jan 08 14:38:56 2013 -0500
@@ -582,13 +582,13 @@
imp.
Qed.

-Theorem assoc_prem1 : forall P Q R S,
+Theorem pick_prem1 : forall P Q R S,
(P /\ (Q /\ R) --> S)
-> ((P /\ Q) /\ R --> S).
imp.
Qed.

-Theorem assoc_prem2 : forall P Q R S,
+Theorem pick_prem2 : forall P Q R S,
(Q /\ (P /\ R) --> S)
-> ((P /\ Q) /\ R --> S).
imp.
@@ -600,13 +600,13 @@
imp.
Qed.

-Theorem assoc_conc1 : forall P Q R S,
+Theorem pick_conc1 : forall P Q R S,
(S --> P /\ (Q /\ R))
-> (S --> (P /\ Q) /\ R).
imp.
Qed.

-Theorem assoc_conc2 : forall P Q R S,
+Theorem pick_conc2 : forall P Q R S,
(S --> Q /\ (P /\ R))
-> (S --> (P /\ Q) /\ R).
imp.
@@ -626,8 +626,8 @@
|| (apply and_True_prem; tac)
|| match P with
| ?P1 /\ ?P2 =>
-           (apply assoc_prem1; search P1)
-           || (apply assoc_prem2; search P2)
+           (apply pick_prem1; search P1)
+           || (apply pick_prem2; search P2)
end
in match goal with
| [ |- ?P /\ _ --> _ ] => search P
@@ -637,7 +637,7 @@

(** To understand how [search_prem] works, we turn first to the final [match].  If the premise begins with a conjunction, we call the [search] procedure on each of the conjuncts, or only the first conjunct, if that already yields a case where [tac] does not fail.  The call [search P] expects and maintains the invariant that the premise is of the form [P /\ Q] for some [Q].  We pass [P] explicitly as a kind of decreasing induction measure, to avoid looping forever when [tac] always fails.  The second [match] case calls a commutativity lemma to realize this invariant, before passing control to [search].  The final [match] case tries applying [tac] directly and then, if that fails, changes the form of the goal by adding an extraneous [True] conjunct and calls [tac] again.  The %\index{tactics!progress}%[progress] tactical fails when its argument tactic succeeds without changing the current subgoal.

-   The [search] function itself tries the same tricks as in the last case of the final [match].  Additionally, if neither works, it checks if [P] is a conjunction.  If so, it calls itself recursively on each conjunct, first applying associativity lemmas to maintain the goal-form invariant.
+   The [search] function itself tries the same tricks as in the last case of the final [match], using the [||] operator as a shorthand for trying one tactic and then, if the first fails, trying another.  Additionally, if neither works, it checks if [P] is a conjunction.  If so, it calls itself recursively on each conjunct, first applying associativity lemmas to maintain the goal-form invariant.

We will also want a dual function [search_conc], which does tree search through an [imp] conclusion. *)

@@ -647,8 +647,8 @@
|| (apply and_True_conc; tac)
|| match P with
| ?P1 /\ ?P2 =>
-           (apply assoc_conc1; search P1)
-           || (apply assoc_conc2; search P2)
+           (apply pick_conc1; search P1)
+           || (apply pick_conc2; search P2)
end
in match goal with
| [ |- _ --> ?P /\ _ ] => search P
@@ -718,7 +718,7 @@
(** %\vspace{-.15in}% [[
t2 =
fun P Q : Prop =>
-comm_prem (assoc_prem1 (assoc_prem2 (False_prem (P:=P /\ P /\ Q) (P /\ Q))))
+comm_prem (pick_prem1 (pick_prem2 (False_prem (P:=P /\ P /\ Q) (P /\ Q))))
: forall P Q : Prop, Q /\ (P /\ False) /\ P --> P /\ Q
]]

@@ -750,7 +750,7 @@
and_True_prem
(ex_prem (P:=fun x : nat => P x /\ Q)
(fun x : nat =>
-      assoc_prem2
+      pick_prem2
(Match (P:=Q)
(and_True_conc
(ex_conc (fun x0 : nat => P x0) x
--- a/src/Predicates.v	Mon Jan 07 15:23:16 2013 -0500
+++ b/src/Predicates.v	Tue Jan 08 14:38:56 2013 -0500
@@ -129,7 +129,7 @@
: Prop -> Prop
]]

-     We see that [not] is just shorthand for implication of [False].  We can use that fact explicitly in proofs.  The syntax [~ P] expands to [not P]. *)
+     We see that [not] is just shorthand for implication of [False].  We can use that fact explicitly in proofs.  The syntax [~ P] %(written with a tilde in ASCII)% expands to [not P]. *)

Theorem arith_neq' : ~ (2 + 2 = 5).
(* begin thide *)
--- a/src/Universes.v	Mon Jan 07 15:23:16 2013 -0500
+++ b/src/Universes.v	Tue Jan 08 14:38:56 2013 -0500
@@ -91,7 +91,7 @@

In the outputs of our first [Check] query, we see that the type level of [Set]'s type is [(0)+1]. Here [0] stands for the level of [Set], and we increment it to arrive at the level that _classifies_ [Set].

-  In the second query's output, we see that the occurrence of [Type] that we check is assigned a fresh%\index{universe variable}% _universe variable_ [Top.3].  The output type increments [Top.3] to move up a level in the universe hierarchy.  As we write code that uses definitions whose types mention universe variables, unification may refine the values of those variables.  Luckily, the user rarely has to worry about the details.
+  In the third query's output, we see that the occurrence of [Type] that we check is assigned a fresh%\index{universe variable}% _universe variable_ [Top.3].  The output type increments [Top.3] to move up a level in the universe hierarchy.  As we write code that uses definitions whose types mention universe variables, unification may refine the values of those variables.  Luckily, the user rarely has to worry about the details.

Another crucial concept in CIC is%\index{predicativity}% _predicativity_.  Consider these queries. *)