changeset 398:05efde66559d

Get it working in Coq 8.4beta1; use nice coqdoc notation for italics
author Adam Chlipala <adam@chlipala.net>
date Wed, 06 Jun 2012 11:25:13 -0400
parents d62ed7381a0b
children 5986e9fd40b5
files src/Coinductive.v src/DataStruct.v src/Equality.v src/GeneralRec.v src/Generic.v src/InductiveTypes.v src/Intro.v src/Large.v src/LogicProg.v src/Match.v src/MoreDep.v src/Predicates.v src/ProgLang.v src/Reflection.v src/StackMachine.v src/Subset.v src/Universes.v
diffstat 17 files changed, 254 insertions(+), 243 deletions(-) [+]
line wrap: on
line diff
--- a/src/Coinductive.v	Sun May 06 17:15:15 2012 -0400
+++ b/src/Coinductive.v	Wed Jun 06 11:25:13 2012 -0400
@@ -1,4 +1,4 @@
-(* Copyright (c) 2008-2011, Adam Chlipala
+(* Copyright (c) 2008-2012, Adam Chlipala
  * 
  * This work is licensed under a
  * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
@@ -22,7 +22,7 @@
 
 %\index{laziness}%Laziness is easy to implement in Haskell, where all the definitions in a program may be thought of as mutually recursive.  In such an unconstrained setting, it is easy to implement an infinite loop when you really meant to build an infinite list, where any finite prefix of the list should be forceable in finite time.  Haskell programmers learn how to avoid such slip-ups.  In Coq, such a laissez-faire policy is not good enough.
 
-We spent some time in the last chapter discussing the %\index{Curry-Howard correspondence}%Curry-Howard isomorphism, where proofs are identified with functional programs.  In such a setting, infinite loops, intended or otherwise, are disastrous.  If Coq allowed the full breadth of definitions that Haskell did, we could code up an infinite loop and use it to prove any proposition vacuously.  That is, the addition of general recursion would make CIC %\textit{%#<i>#inconsistent#</i>#%}%.  For an arbitrary proposition [P], we could write:
+We spent some time in the last chapter discussing the %\index{Curry-Howard correspondence}%Curry-Howard isomorphism, where proofs are identified with functional programs.  In such a setting, infinite loops, intended or otherwise, are disastrous.  If Coq allowed the full breadth of definitions that Haskell did, we could code up an infinite loop and use it to prove any proposition vacuously.  That is, the addition of general recursion would make CIC _inconsistent_.  For an arbitrary proposition [P], we could write:
 [[
 Fixpoint bad (u : unit) : P := bad u.
 ]]
@@ -33,12 +33,12 @@
 
 One solution is to use types to contain the possibility of non-termination.  For instance, we can create a %``%#"#non-termination monad,#"#%''% inside which we must write all of our general-recursive programs; several such approaches are surveyed in Chapter 7.  This is a heavyweight solution, and so we would like to avoid it whenever possible.
 
-Luckily, Coq has special support for a class of lazy data structures that happens to contain most examples found in Haskell.  That mechanism, %\index{co-inductive types}\textit{%#<i>#co-inductive types#</i>#%}%, is the subject of this chapter. *)
+Luckily, Coq has special support for a class of lazy data structures that happens to contain most examples found in Haskell.  That mechanism, %\index{co-inductive types}%_co-inductive types_, is the subject of this chapter. *)
 
 
 (** * Computing with Infinite Data *)
 
-(** Let us begin with the most basic type of infinite data, %\textit{%#<i>#streams#</i>#%}%, or lazy lists.%\index{Vernacular commands!CoInductive}% *)
+(** Let us begin with the most basic type of infinite data, _streams_, or lazy lists.%\index{Vernacular commands!CoInductive}% *)
 
 Section stream.
   Variable A : Type.
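A minimal sketch of the co-inductive stream type introduced here, assuming we are inside [Section stream] with [A] in scope, might read:

  (* Like [list], but with no [Nil] constructor, so every value is infinite. *)
  CoInductive stream : Type :=
  | Cons : A -> stream -> stream.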
@@ -49,7 +49,7 @@
 
 (** The definition is surprisingly simple.  Starting from the definition of [list], we just need to change the keyword [Inductive] to [CoInductive].  We could have left a [Nil] constructor in our definition, but we will leave it out to force all of our streams to be infinite.
 
-   How do we write down a stream constant?  Obviously simple application of constructors is not good enough, since we could only denote finite objects that way.  Rather, whereas recursive definitions were necessary to %\textit{%#<i>#use#</i>#%}% values of recursive inductive types effectively, here we find that we need %\index{co-recursive definitions}\textit{%#<i>#co-recursive definitions#</i>#%}% to %\textit{%#<i>#build#</i>#%}% values of co-inductive types effectively.
+   How do we write down a stream constant?  Obviously simple application of constructors is not good enough, since we could only denote finite objects that way.  Rather, whereas recursive definitions were necessary to _use_ values of recursive inductive types effectively, here we find that we need %\index{co-recursive definitions}%_co-recursive definitions_ to _build_ values of co-inductive types effectively.
 
    We can define a stream consisting only of zeroes.%\index{Vernacular commands!CoFixpoint}% *)
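A sketch of the constant promised here, assuming the [stream] type and [Cons] constructor from the section above, could be:

(* The co-recursive call [zeroes] appears directly under the [Cons] constructor,
   satisfying the guardedness condition discussed below. *)
CoFixpoint zeroes : stream nat := Cons 0 zeroes.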
 
@@ -98,9 +98,9 @@
 
 (* end thide *)
 
-(** So far, it looks like co-inductive types might be a magic bullet, allowing us to import all of the Haskeller's usual tricks.  However, there are important restrictions that are dual to the restrictions on the use of inductive types.  Fixpoints %\textit{%#<i>#consume#</i>#%}% values of inductive types, with restrictions on which %\textit{%#<i>#arguments#</i>#%}% may be passed in recursive calls.  Dually, co-fixpoints %\textit{%#<i>#produce#</i>#%}% values of co-inductive types, with restrictions on what may be done with the %\textit{%#<i>#results#</i>#%}% of co-recursive calls.
+(** So far, it looks like co-inductive types might be a magic bullet, allowing us to import all of the Haskeller's usual tricks.  However, there are important restrictions that are dual to the restrictions on the use of inductive types.  Fixpoints _consume_ values of inductive types, with restrictions on which _arguments_ may be passed in recursive calls.  Dually, co-fixpoints _produce_ values of co-inductive types, with restrictions on what may be done with the _results_ of co-recursive calls.
 
-The restriction for co-inductive types shows up as the %\index{guardedness condition}\textit{%#<i>#guardedness condition#</i>#%}%, and it can be broken into two parts.  First, consider this stream definition, which would be legal in Haskell.
+The restriction for co-inductive types shows up as the %\index{guardedness condition}%_guardedness condition_, and it can be broken into two parts.  First, consider this stream definition, which would be legal in Haskell.
 [[
 CoFixpoint looper : stream nat := looper.
 ]]
@@ -114,7 +114,7 @@
 unguarded recursive call in "looper"
 >>
 
-The rule we have run afoul of here is that %\textit{%#<i>#every co-recursive call must be guarded by a constructor#</i>#%}%; that is, every co-recursive call must be a direct argument to a constructor of the co-inductive type we are generating.  It is a good thing that this rule is enforced.  If the definition of [looper] were accepted, our [approx] function would run forever when passed [looper], and we would have fallen into inconsistency.
+The rule we have run afoul of here is that _every co-recursive call must be guarded by a constructor_; that is, every co-recursive call must be a direct argument to a constructor of the co-inductive type we are generating.  It is a good thing that this rule is enforced.  If the definition of [looper] were accepted, our [approx] function would run forever when passed [looper], and we would have fallen into inconsistency.
 
 Some familiar functions are easy to write in co-recursive fashion. *)
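One such function is a stream version of [map]; a sketch, assuming the [stream] type above and [Set Implicit Arguments], follows.

Section map.
  Variables A B : Type.
  Variable f : A -> B.

  (* The co-recursive call [map t] is an immediate argument to [Cons]. *)
  CoFixpoint map (s : stream A) : stream B :=
    match s with
      | Cons h t => Cons (f h) (map t)
    end.
End map.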
 
@@ -130,7 +130,7 @@
 
 (** This code is a literal copy of that for the list [map] function, with the [Nil] case removed and [Fixpoint] changed to [CoFixpoint].  Many other standard functions on lazy data structures can be implemented just as easily.  Some, like [filter], cannot be implemented.  Since the predicate passed to [filter] may reject every element of the stream, we cannot satisfy the guardedness condition.
 
-   The implications of the condition can be subtle.  To illustrate how, we start off with another co-recursive function definition that %\textit{%#<i>#is#</i>#%}% legal.  The function [interleave] takes two streams and produces a new stream that alternates between their elements. *)
+   The implications of the condition can be subtle.  To illustrate how, we start off with another co-recursive function definition that _is_ legal.  The function [interleave] takes two streams and produces a new stream that alternates between their elements. *)
 
 Section interleave.
   Variable A : Type.
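A sketch of [interleave], with [A] in scope from the enclosing section, might be:

  (* Each co-recursive call sits under two [Cons] constructors, so guardedness holds. *)
  CoFixpoint interleave (s1 s2 : stream A) : stream A :=
    match s1, s2 with
      | Cons h1 t1, Cons h2 t2 => Cons h1 (Cons h2 (interleave t1 t2))
    end.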
@@ -186,7 +186,7 @@
 Clearly in this case the [map'] calls are not immediate arguments to constructors, so we violate the guardedness condition. *)
 (* end thide *)
 
-(** A more interesting question is why that condition is the right one.  We can make an intuitive argument that the original [map'] definition is perfectly reasonable and denotes a well-understood transformation on streams, such that every output would behave properly with [approx].  The guardedness condition is an example of a syntactic check for %\index{productivity}\emph{%#<i>#productivity#</i>#%}% of co-recursive definitions.  A productive definition can be thought of as one whose outputs can be forced in finite time to any finite approximation level, as with [approx].  If we replaced the guardedness condition with more involved checks, we might be able to detect and allow a broader range of productive definitions.  However, mistakes in these checks could cause inconsistency, and programmers would need to understand the new, more complex checks.  Coq's design strikes a balance between consistency and simplicity with its choice of guard condition, though we can imagine other worthwhile balances being struck, too. *)
+(** A more interesting question is why that condition is the right one.  We can make an intuitive argument that the original [map'] definition is perfectly reasonable and denotes a well-understood transformation on streams, such that every output would behave properly with [approx].  The guardedness condition is an example of a syntactic check for %\index{productivity}%_productivity_ of co-recursive definitions.  A productive definition can be thought of as one whose outputs can be forced in finite time to any finite approximation level, as with [approx].  If we replaced the guardedness condition with more involved checks, we might be able to detect and allow a broader range of productive definitions.  However, mistakes in these checks could cause inconsistency, and programmers would need to understand the new, more complex checks.  Coq's design strikes a balance between consistency and simplicity with its choice of guard condition, though we can imagine other worthwhile balances being struck, too. *)
 
 
 (** * Infinite Proofs *)
@@ -205,7 +205,7 @@
 
 Abort.
 
-(** Co-inductive datatypes make sense by analogy from Haskell.  What we need now is a %\textit{%#<i>#co-inductive proposition#</i>#%}%.  That is, we want to define a proposition whose proofs may be infinite, subject to the guardedness condition.  The idea of infinite proofs does not show up in usual mathematics, but it can be very useful (unsurprisingly) for reasoning about infinite data structures.  Besides examples from Haskell, infinite data and proofs will also turn out to be useful for modelling inherently infinite mathematical objects, like program executions.
+(** Co-inductive datatypes make sense by analogy from Haskell.  What we need now is a _co-inductive proposition_.  That is, we want to define a proposition whose proofs may be infinite, subject to the guardedness condition.  The idea of infinite proofs does not show up in usual mathematics, but it can be very useful (unsurprisingly) for reasoning about infinite data structures.  Besides examples from Haskell, infinite data and proofs will also turn out to be useful for modelling inherently infinite mathematical objects, like program executions.
 
 We are ready for our first %\index{co-inductive predicates}%co-inductive predicate. *)
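Such a predicate for stream equality might be sketched as follows, inside a section with [A : Type] in scope:

  (* Two streams are [stream_eq] when their heads agree and their tails are
     again related, co-recursively. *)
  CoInductive stream_eq : stream A -> stream A -> Prop :=
  | Stream_eq : forall h t1 t2,
    stream_eq t1 t2
    -> stream_eq (Cons h t1) (Cons h t2).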
 
@@ -262,7 +262,7 @@
 Guarded.
 ]]
 
-     Running [Guarded] here gives us the same error message that we got when we tried to run [Qed].  In larger proofs, [Guarded] can be helpful in detecting problems %\textit{%#<i>#before#</i>#%}% we think we are ready to run [Qed].
+     Running [Guarded] here gives us the same error message that we got when we tried to run [Qed].  In larger proofs, [Guarded] can be helpful in detecting problems _before_ we think we are ready to run [Qed].
      
      We need to start the co-induction by applying [stream_eq]'s constructor.  To do that, we need to know that both arguments to the predicate are [Cons]es.  Informally, this is trivial, but [simpl] is not able to help us. *)
 
@@ -344,7 +344,7 @@
 
 (** Why did this silly-looking trick help?  The answer has to do with the constraints placed on Coq's evaluation rules by the need for termination.  The [cofix]-related restriction that foiled our first attempt at using [simpl] is dual to a restriction for [fix].  In particular, an application of an anonymous [fix] only reduces when the top-level structure of the recursive argument is known.  Otherwise, we would be unfolding the recursive definition ad infinitum.
 
-   Fixpoints only reduce when enough is known about the %\textit{%#<i>#definitions#</i>#%}% of their arguments.  Dually, co-fixpoints only reduce when enough is known about %\textit{%#<i>#how their results will be used#</i>#%}%.  In particular, a [cofix] is only expanded when it is the discriminee of a [match].  Rewriting with our superficially silly lemma wrapped new [match]es around the two [cofix]es, triggering reduction.
+   Fixpoints only reduce when enough is known about the _definitions_ of their arguments.  Dually, co-fixpoints only reduce when enough is known about _how their results will be used_.  In particular, a [cofix] is only expanded when it is the discriminee of a [match].  Rewriting with our superficially silly lemma wrapped new [match]es around the two [cofix]es, triggering reduction.
 
    If [cofix]es reduced haphazardly, it would be easy to run into infinite loops in evaluation, since we are, after all, building infinite objects.
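The superficially silly lemma mentioned above might look something like this sketch, with the wrapper name chosen for illustration:

Definition frob A (s : stream A) : stream A :=
  match s with
    | Cons h t => Cons h t
  end.

(* [frob s] equals [s], but rewriting with this fact wraps a [match] around a
   term, which is exactly what a [cofix] needs in order to reduce. *)
Theorem frob_eq : forall A (s : stream A), s = frob s.
  destruct s; reflexivity.
Qed.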
 
@@ -362,9 +362,9 @@
 
    %\medskip%
 
-   Must we always be cautious with automation in proofs by co-induction?  Induction seems to have dual versions of the same pitfalls inherent in it, and yet we avoid those pitfalls by encapsulating safe Curry-Howard recursion schemes inside named induction principles.  It turns out that we can usually do the same with %\index{co-induction principles}\emph{%#<i>#co-induction principles#</i>#%}%.  Let us take that tack here, so that we can arrive at an [induction x; crush]-style proof for [ones_eq'].
+   Must we always be cautious with automation in proofs by co-induction?  Induction seems to have dual versions of the same pitfalls inherent in it, and yet we avoid those pitfalls by encapsulating safe Curry-Howard recursion schemes inside named induction principles.  It turns out that we can usually do the same with %\index{co-induction principles}%_co-induction principles_.  Let us take that tack here, so that we can arrive at an [induction x; crush]-style proof for [ones_eq'].
 
-   An induction principle is parameterized over a predicate characterizing what we mean to prove, %\emph{%#<i>#as a function of the inductive fact that we already know#</i>#%}%.  Dually, a co-induction principle ought to be parameterized over a predicate characterizing what we mean to prove, %\emph{%#<i>#as a function of the arguments to the co-inductive predicate that we are trying to prove#</i>#%}%.
+   An induction principle is parameterized over a predicate characterizing what we mean to prove, _as a function of the inductive fact that we already know_.  Dually, a co-induction principle ought to be parameterized over a predicate characterizing what we mean to prove, _as a function of the arguments to the co-inductive predicate that we are trying to prove_.
 
    To state a useful principle for [stream_eq], it will be useful first to define the stream head function. *)
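Sketches of the head function, along with the matching tail function used by the hypotheses below, might read (assuming [Set Implicit Arguments]):

Definition hd A (s : stream A) : A :=
  match s with
    | Cons x _ => x
  end.

Definition tl A (s : stream A) : stream A :=
  match s with
    | Cons _ s' => s'
  end.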
 
@@ -382,7 +382,7 @@
 
   Hypothesis Cons_case_hd : forall s1 s2, R s1 s2 -> hd s1 = hd s2.
   Hypothesis Cons_case_tl : forall s1 s2, R s1 s2 -> R (tl s1) (tl s2).
-  (** Two hypotheses characterize what makes a good choice of [R]: it enforces equality of stream heads, and it is %``%#<i>#hereditary#</i>#%''% in the sense that an [R] stream pair passes on %``%#"#[R]-ness#"#%''% to its tails.  An established technical term for such a relation is %\index{bisimulation}\emph{%#<i>#bisimulation#</i>#%}%. *)
+  (** Two hypotheses characterize what makes a good choice of [R]: it enforces equality of stream heads, and it is %``%#"#hereditary#"#%''% in the sense that an [R] stream pair passes on %``%#"#[R]-ness#"#%''% to its tails.  An established technical term for such a relation is %\index{bisimulation}%_bisimulation_. *)
 
   (** Now it is straightforward to prove the principle, which says that any stream pair in [R] is equal.  The reader may wish to step through the proof script to see what is going on. *)
 
@@ -519,7 +519,7 @@
 
 (** * Simple Modeling of Non-Terminating Programs *)
 
-(** We close the chapter with a quick motivating example for more complex uses of co-inductive types.  We will define a co-inductive semantics for a simple imperative programming language and use that semantics to prove the correctness of a trivial optimization that removes spurious additions by 0.  We follow the technique of %\index{co-inductive big-step operational semantics}\emph{%#<i>#co-inductive big-step operational semantics#</i>#%}~\cite{BigStep}%.
+(** We close the chapter with a quick motivating example for more complex uses of co-inductive types.  We will define a co-inductive semantics for a simple imperative programming language and use that semantics to prove the correctness of a trivial optimization that removes spurious additions by 0.  We follow the technique of %\index{co-inductive big-step operational semantics}%_co-inductive big-step operational semantics_ %\cite{BigStep}%.
 
    We define a suggestive synonym for [nat], as we will consider programs with infinitely many variables, represented as [nat]s. *)
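One plausible shape for these definitions, consistent with the later uses of [vars] and [set] in this chapter ([beq_nat] comes from the standard [Arith] library; the exact bodies here are assumptions):

Require Import Arith.

Definition var := nat.

(* A program state assigns a natural number to every variable. *)
Definition vars := var -> nat.

(* Update one variable, leaving the rest of the state unchanged. *)
Definition set (vs : vars) (v : var) (n : nat) : vars :=
  fun v' => if beq_nat v v' then n else vs v'.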
 
@@ -553,7 +553,7 @@
 | Seq : cmd -> cmd -> cmd
 | While : exp -> cmd -> cmd.
 
-(** We could define an inductive relation to characterize the results of command evaluation.  However, such a relation would not capture %\emph{%#<i>#nonterminating#</i>#%}% executions.  With a co-inductive relation, we can capture both cases.  The parameters of the relation are an initial state, a command, and a final state.  A program that does not terminate in a particular initial state is related to %\emph{%#<i>#any#</i>#%}% final state. *)
+(** We could define an inductive relation to characterize the results of command evaluation.  However, such a relation would not capture _nonterminating_ executions.  With a co-inductive relation, we can capture both cases.  The parameters of the relation are an initial state, a command, and a final state.  A program that does not terminate in a particular initial state is related to _any_ final state. *)
 
 CoInductive evalCmd : vars -> cmd -> vars -> Prop :=
 | EvalAssign : forall vs v e, evalCmd vs (Assign v e) (set vs v (evalExp vs e))
@@ -582,7 +582,7 @@
     -> (evalExp vs1 e = 0 /\ vs3 = vs1)
     \/ exists vs2, evalExp vs1 e <> 0 /\ R vs1 c vs2 /\ R vs2 (While e c) vs3.
 
-  (** The proof is routine.  We make use of a form of %\index{tactics!destruct}%[destruct] that takes an %\index{intro pattern}\emph{%#<i>#intro pattern#</i>#%}% in an [as] clause.  These patterns control how deeply we break apart the components of an inductive value, and we refer the reader to the Coq manual for more details. *)
+  (** The proof is routine.  We make use of a form of %\index{tactics!destruct}%[destruct] that takes an %\index{intro pattern}%_intro pattern_ in an [as] clause.  These patterns control how deeply we break apart the components of an inductive value, and we refer the reader to the Coq manual for more details. *)
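A tiny, self-contained illustration of such a pattern, unrelated to the present development, shows one [as] clause taking apart a disjunction, a conjunction, and an existential in a single step:

Lemma intro_pattern_demo : forall P Q : Prop,
  (P /\ Q) \/ (exists n : nat, n = 0) -> True.
  intros P Q H; destruct H as [[HP HQ] | [n Hn]]; exact I.
Qed.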
 
   Theorem evalCmd_coind : forall vs1 c vs2, R vs1 c vs2 -> evalCmd vs1 c vs2.
     cofix; intros; destruct c.
--- a/src/DataStruct.v	Sun May 06 17:15:15 2012 -0400
+++ b/src/DataStruct.v	Wed Jun 06 11:25:13 2012 -0400
@@ -1,4 +1,4 @@
-(* Copyright (c) 2008-2011, Adam Chlipala
+(* Copyright (c) 2008-2012, Adam Chlipala
  * 
  * This work is licensed under a
  * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
@@ -369,7 +369,7 @@
 
 (** * Recursive Type Definitions *)
 
-(** %\index{recursive type definition}%There is another style of datatype definition that leads to much simpler definitions of the [get] and [hget] definitions above.  Because Coq supports %``%#"#type-level computation,#"#%''% we can redo our inductive definitions as %\textit{%#<i>#recursive#</i>#%}% definitions. *)
+(** %\index{recursive type definition}%There is another style of datatype definition that leads to much simpler versions of the [get] and [hget] definitions above.  Because Coq supports %``%#"#type-level computation,#"#%''% we can redo our inductive definitions as _recursive_ definitions. *)
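For instance, a length-indexed list can be recast as a type-level recursive function rather than an inductive family.  A sketch (the section structure here is an assumption) might be:

Section filist.
  Variable A : Set.

  (* A list of length [n] is an [n]-fold nested pair of [A]s. *)
  Fixpoint filist (n : nat) : Set :=
    match n with
      | O => unit
      | S n' => A * filist n'
    end%type.
End filist.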
 
 (* EX: Come up with an alternate [ilist] definition that makes it easier to write [get]. *)
 
@@ -568,7 +568,7 @@
 
   The special-case rule for nested datatypes only works with nested uses of other inductive types, which could be replaced with uses of new mutually inductive types.  We defined [filist] recursively, so it may not be used for nested recursion.
 
-  Our final solution uses yet another of the inductive definition techniques introduced in Chapter 3, %\index{reflexive inductive type}%reflexive types.  Instead of merely using [fin] to get elements out of [ilist], we can %\textit{%#<i>#define#</i>#%}% [ilist] in terms of [fin].  For the reasons outlined above, it turns out to be easier to work with [ffin] in place of [fin]. *)
+  Our final solution uses yet another of the inductive definition techniques introduced in Chapter 3, %\index{reflexive inductive type}%reflexive types.  Instead of merely using [fin] to get elements out of [ilist], we can _define_ [ilist] in terms of [fin].  For the reasons outlined above, it turns out to be easier to work with [ffin] in place of [fin]. *)
 
   Inductive tree : Set :=
   | Leaf : A -> tree
@@ -608,7 +608,7 @@
     | Node _ f => Node (fun idx => inc (f idx))
   end.
 
-(** Now we are ready to prove the theorem where we got stuck before.  We will not need to define any new induction principle, but it %\textit{%#<i>#will#</i>#%}% be helpful to prove some lemmas. *)
+(** Now we are ready to prove the theorem where we got stuck before.  We will not need to define any new induction principle, but it _will_ be helpful to prove some lemmas. *)
 
 Lemma plus_ge : forall x1 y1 x2 y2,
   x1 >= x2
@@ -818,7 +818,7 @@
             end; crush).
 Qed.
 
-(** It is also useful to know that the result of a call to [cond] is not changed by substituting new tests and bodies functions, so long as the new functions have the same input-output behavior as the old.  It turns out that, in Coq, it is not possible to prove in general that functions related in this way are equal.  We treat this issue with our discussion of axioms in a later chapter.  For now, it suffices to prove that the particular function [cond] is %\textit{%#<i>#extensional#</i>#%}%; that is, it is unaffected by substitution of functions with input-output equivalents. *)
+(** It is also useful to know that the result of a call to [cond] is not changed by substituting new tests and bodies functions, so long as the new functions have the same input-output behavior as the old.  It turns out that, in Coq, it is not possible to prove in general that functions related in this way are equal.  We treat this issue with our discussion of axioms in a later chapter.  For now, it suffices to prove that the particular function [cond] is _extensional_; that is, it is unaffected by substitution of functions with input-output equivalents. *)
 
 Lemma cond_ext : forall (A : Set) (default : A) n (tests tests' : ffin n -> bool)
   (bodies bodies' : ffin n -> A),
--- a/src/Equality.v	Sun May 06 17:15:15 2012 -0400
+++ b/src/Equality.v	Wed Jun 06 11:25:13 2012 -0400
@@ -1,4 +1,4 @@
-(* Copyright (c) 2008-2011, Adam Chlipala
+(* Copyright (c) 2008-2012, Adam Chlipala
  * 
  * This work is licensed under a
  * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
@@ -23,7 +23,7 @@
 
 (** * The Definitional Equality *)
 
-(** We have seen many examples so far where proof goals follow %``%#"#by computation.#"#%''%  That is, we apply computational reduction rules to reduce the goal to a normal form, at which point it follows trivially.  Exactly when this works and when it does not depends on the details of Coq's %\index{definitional equality}\textit{%#<i>#definitional equality#</i>#%}%.  This is an untyped binary relation appearing in the formal metatheory of CIC.  CIC contains a typing rule allowing the conclusion $E : T$ from the premise $E : T'$ and a proof that $T$ and $T'$ are definitionally equal.
+(** We have seen many examples so far where proof goals follow %``%#"#by computation.#"#%''%  That is, we apply computational reduction rules to reduce the goal to a normal form, at which point it follows trivially.  Exactly when this works and when it does not depends on the details of Coq's %\index{definitional equality}%_definitional equality_.  This is an untyped binary relation appearing in the formal metatheory of CIC.  CIC contains a typing rule allowing the conclusion $E : T$ from the premise $E : T'$ and a proof that $T$ and $T'$ are definitionally equal.
 
    The %\index{tactics!cbv}%[cbv] tactic will help us illustrate the rules of Coq's definitional equality.  We redefine the natural number predecessor function in a somewhat convoluted way and construct a manual proof that it returns [0] when applied to [1]. *)
 
@@ -106,7 +106,7 @@
      = fun x : nat => (fix id' (n : nat) : nat := n) x
 ]]
 
-By running [compute], we ask Coq to run reduction steps until no more apply, so why do we see an application of a known function, where clearly no beta reduction has been performed?  The answer has to do with ensuring termination of all Gallina programs.  One candidate rule would say that we apply recursive definitions wherever possible.  However, this would clearly lead to nonterminating reduction sequences, since the function may appear fully applied within its own definition, and we would naively %``%#"#simplify#"#%''% such applications immediately.  Instead, Coq only applies the beta rule for a recursive function when %\emph{%#<i>#the top-level structure of the recursive argument is known#</i>#%}%.  For [id'] above, we have only one argument [n], so clearly it is the recursive argument, and the top-level structure of [n] is known when the function is applied to [O] or to some [S e] term.  The variable [x] is neither, so reduction is blocked.
+By running [compute], we ask Coq to run reduction steps until no more apply, so why do we see an application of a known function, where clearly no beta reduction has been performed?  The answer has to do with ensuring termination of all Gallina programs.  One candidate rule would say that we apply recursive definitions wherever possible.  However, this would clearly lead to nonterminating reduction sequences, since the function may appear fully applied within its own definition, and we would naively %``%#"#simplify#"#%''% such applications immediately.  Instead, Coq only applies the beta rule for a recursive function when _the top-level structure of the recursive argument is known_.  For [id'] above, we have only one argument [n], so clearly it is the recursive argument, and the top-level structure of [n] is known when the function is applied to [O] or to some [S e] term.  The variable [x] is neither, so reduction is blocked.
 
 What are recursive arguments in general?  Every recursive function is compiled by Coq to a %\index{Gallina terms!fix}%[fix] expression, for anonymous definition of recursive functions.  Further, every [fix] with multiple arguments has one designated as the recursive argument via a [struct] annotation.  The recursive argument is the one that must decrease across recursive calls, to appease Coq's termination checker.  Coq will generally infer which argument is recursive, though we may also specify it manually, if we want to tweak reduction behavior.  For instance, consider this definition of a function to add two lists of [nat]s elementwise: *)
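A sketch of what such a definition might look like (the name [addLists] is an assumption):

(* Coq will infer one of the two arguments, typically [ls1], as the [struct] argument. *)
Fixpoint addLists (ls1 ls2 : list nat) : list nat :=
  match ls1, ls2 with
    | n1 :: ls1', n2 :: ls2' => n1 + n2 :: addLists ls1' ls2'
    | _, _ => nil
  end.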
 
@@ -167,7 +167,7 @@
 
 %\medskip%
 
-   The standard [eq] relation is critically dependent on the definitional equality.  The relation [eq] is often called a %\index{propositional equality}\textit{%#<i>#propositional equality#</i>#%}%, because it reifies definitional equality as a proposition that may or may not hold.  Standard axiomatizations of an equality predicate in first-order logic define equality in terms of properties it has, like reflexivity, symmetry, and transitivity.  In contrast, for [eq] in Coq, those properties are implicit in the properties of the definitional equality, which are built into CIC's metatheory and the implementation of Gallina.  We could add new rules to the definitional equality, and [eq] would keep its definition and methods of use.
+   The standard [eq] relation is critically dependent on the definitional equality.  The relation [eq] is often called a %\index{propositional equality}%_propositional equality_, because it reifies definitional equality as a proposition that may or may not hold.  Standard axiomatizations of an equality predicate in first-order logic define equality in terms of properties it has, like reflexivity, symmetry, and transitivity.  In contrast, for [eq] in Coq, those properties are implicit in the properties of the definitional equality, which are built into CIC's metatheory and the implementation of Gallina.  We could add new rules to the definitional equality, and [eq] would keep its definition and methods of use.
 
    This all may make it sound like the choice of [eq]'s definition is unimportant.  To the contrary, in this chapter, we will see examples where alternate definitions may simplify proofs.  Before that point, I will introduce proof methods for goals that use proofs of the standard propositional equality %``%#"#as data.#"#%''% *)
 
@@ -326,7 +326,7 @@
       end.
 (* end thide *)
 
-  (** Surprisingly, what seems at first like a %\textit{%#<i>#simpler#</i>#%}% lemma is harder to prove. *)
+  (** Surprisingly, what seems at first like a _simpler_ lemma is harder to prove. *)
 
   Lemma lemma2 : forall (x : A) (pf : x = x), O = match pf with refl_equal => O end.
 (* begin thide *)
@@ -381,9 +381,9 @@
  "x = x'"
 >>
 
-     The type error comes from our [return] annotation.  In that annotation, the [as]-bound variable [pf'] has type [x = x'], referring to the [in]-bound variable [x'].  To do a dependent [match], we %\textit{%#<i>#must#</i>#%}% choose a fresh name for the second argument of [eq].  We are just as constrained to use the %``%#"#real#"#%''% value [x] for the first argument.  Thus, within the [return] clause, the proof we are matching on %\textit{%#<i>#must#</i>#%}% equate two non-matching terms, which makes it impossible to equate that proof with reflexivity.
+     The type error comes from our [return] annotation.  In that annotation, the [as]-bound variable [pf'] has type [x = x'], referring to the [in]-bound variable [x'].  To do a dependent [match], we _must_ choose a fresh name for the second argument of [eq].  We are just as constrained to use the %``%#"#real#"#%''% value [x] for the first argument.  Thus, within the [return] clause, the proof we are matching on _must_ equate two non-matching terms, which makes it impossible to equate that proof with reflexivity.
 
-     Nonetheless, it turns out that, with one catch, we %\textit{%#<i>#can#</i>#%}% prove this lemma. *)
+     Nonetheless, it turns out that, with one catch, we _can_ prove this lemma. *)
 
   Lemma lemma3 : forall (x : A) (pf : x = x), pf = refl_equal x.
     intros; apply UIP_refl.
@@ -395,7 +395,7 @@
      : forall (U : Type) (x : U) (p : x = x), p = refl_equal x
      ]]
 
-     The theorem %\index{Gallina terms!UIP\_refl}%[UIP_refl] comes from the [Eqdep] module of the standard library.  Do the Coq authors know of some clever trick for building such proofs that we have not seen yet?  If they do, they did not use it for this proof.  Rather, the proof is based on an %\textit{%#<i>#axiom#</i>#%}%. *)
+     The theorem %\index{Gallina terms!UIP\_refl}%[UIP_refl] comes from the [Eqdep] module of the standard library.  Do the Coq authors know of some clever trick for building such proofs that we have not seen yet?  If they do, they did not use it for this proof.  Rather, the proof is based on an _axiom_. *)
 
   Print eq_rect_eq.
   (** %\vspace{-.15in}% [[
@@ -416,7 +416,7 @@
            end
      ]]
 
-     Perhaps surprisingly, we cannot prove [eq_rect_eq] from within Coq.  This proposition is introduced as an %\index{axioms}%axiom; that is, a proposition asserted as true without proof.  We cannot assert just any statement without proof.  Adding [False] as an axiom would allow us to prove any proposition, for instance, defeating the point of using a proof assistant.  In general, we need to be sure that we never assert %\textit{%#<i>#inconsistent#</i>#%}% sets of axioms.  A set of axioms is inconsistent if its conjunction implies [False].  For the case of [eq_rect_eq], consistency has been verified outside of Coq via %``%#"#informal#"#%''% metatheory%~\cite{AxiomK}%, in a study that also established unprovability of the axiom in CIC.
+     Perhaps surprisingly, we cannot prove [eq_rect_eq] from within Coq.  This proposition is introduced as an %\index{axioms}%axiom; that is, a proposition asserted as true without proof.  We cannot assert just any statement without proof.  Adding [False] as an axiom would allow us to prove any proposition, for instance, defeating the point of using a proof assistant.  In general, we need to be sure that we never assert _inconsistent_ sets of axioms.  A set of axioms is inconsistent if its conjunction implies [False].  For the case of [eq_rect_eq], consistency has been verified outside of Coq via %``%#"#informal#"#%''% metatheory%~\cite{AxiomK}%, in a study that also established unprovability of the axiom in CIC.
 
       This axiom is equivalent to another that is more commonly known and mentioned in type theory circles. *)
 
@@ -470,7 +470,7 @@
  while it is expected to have type "fhlist B (ls1 ++ ls2 ++ ls3)"
 >>
 
-     This first cut at the theorem statement does not even type-check.  We know that the two [fhlist] types appearing in the error message are always equal, by associativity of normal list append, but this fact is not apparent to the type checker.  This stems from the fact that Coq's equality is %\index{intensional type theory}\textit{%#<i>#intensional#</i>#%}%, in the sense that type equality theorems can never be applied after the fact to get a term to type-check.  Instead, we need to make use of equality explicitly in the theorem statement. *)
+     This first cut at the theorem statement does not even type-check.  We know that the two [fhlist] types appearing in the error message are always equal, by associativity of normal list append, but this fact is not apparent to the type checker.  This stems from the fact that Coq's equality is %\index{intensional type theory}%_intensional_, in the sense that type equality theorems can never be applied after the fact to get a term to type-check.  Instead, we need to make use of equality explicitly in the theorem statement. *)
 
   Theorem fhapp_ass : forall ls1 ls2 ls3
     (pf : (ls1 ++ ls2) ++ ls3 = ls1 ++ (ls2 ++ ls3))
@@ -570,7 +570,7 @@
    end
         ]]
 
-        We have made an important bit of progress, as now only a single call to [fhapp] appears in the conclusion, repeated twice.  Trying case analysis on our proofs still will not work, but there is a move we can make to enable it.  Not only does just one call to [fhapp] matter to us now, but it also %\textit{%#<i>#does not matter what the result of the call is#</i>#%}%.  In other words, the subgoal should remain true if we replace this [fhapp] call with a fresh variable.  The %\index{tactics!generalize}%[generalize] tactic helps us do exactly that. *)
+        We have made an important bit of progress, as now only a single call to [fhapp] appears in the conclusion, repeated twice.  Trying case analysis on our proofs still will not work, but there is a move we can make to enable it.  Not only does just one call to [fhapp] matter to us now, but it also _does not matter what the result of the call is_.  In other words, the subgoal should remain true if we replace this [fhapp] call with a fresh variable.  The %\index{tactics!generalize}%[generalize] tactic helps us do exactly that. *)
 
     generalize (fhapp (fhapp b hls2) hls3).
     (** [[
@@ -604,7 +604,7 @@
 
         To an experienced dependent types hacker, the appearance of this goal term calls for a celebration.  The formula has a critical property that indicates that our problems are over.  To get our proofs into the right form to apply [UIP_refl], we need to use associativity of list append to rewrite their types.  We could not do that before because other parts of the goal require the proofs to retain their original types.  In particular, the call to [fhapp] that we generalized must have type [(ls1 ++ ls2) ++ ls3], for some values of the list variables.  If we rewrite the type of the proof used to type-cast this value to something like [ls1 ++ ls2 ++ ls3 = ls1 ++ ls2 ++ ls3], then the lefthand side of the equality would no longer match the type of the term we are trying to cast.
 
-        However, now that we have generalized over the [fhapp] call, the type of the term being type-cast appears explicitly in the goal and %\textit{%#<i>#may be rewritten as well#</i>#%}%.  In particular, the final masterstroke is rewriting everywhere in our goal using associativity of list append. *)
+        However, now that we have generalized over the [fhapp] call, the type of the term being type-cast appears explicitly in the goal and _may be rewritten as well_.  In particular, the final masterstroke is rewriting everywhere in our goal using associativity of list append. *)
 
     rewrite app_ass.
     (** [[
@@ -638,7 +638,7 @@
 
 (** * Heterogeneous Equality *)
 
-(** There is another equality predicate, defined in the %\index{Gallina terms!JMeq}%[JMeq] module of the standard library, implementing %\index{heterogeneous equality}\textit{%#<i>#heterogeneous equality#</i>#%}%. *)
+(** There is another equality predicate, defined in the %\index{Gallina terms!JMeq}%[JMeq] module of the standard library, implementing %\index{heterogeneous equality}%_heterogeneous equality_. *)
 
 Print JMeq.
 (** %\vspace{-.15in}% [[
@@ -646,7 +646,7 @@
     JMeq_refl : JMeq x x
     ]]
 
-The identity [JMeq] stands for %\index{John Major equality}``%#"#John Major equality,#"#%''% a name coined by Conor McBride%~\cite{JMeq}% as a sort of pun about British politics.  [JMeq] starts out looking a lot like [eq].  The crucial difference is that we may use [JMeq] %\textit{%#<i>#on arguments of different types#</i>#%}%.  For instance, a lemma that we failed to establish before is trivial with [JMeq].  It makes for prettier theorem statements to define some syntactic shorthand first. *)
+The identity [JMeq] stands for %\index{John Major equality}``%#"#John Major equality,#"#%''% a name coined by Conor McBride%~\cite{JMeq}% as a sort of pun about British politics.  [JMeq] starts out looking a lot like [eq].  The crucial difference is that we may use [JMeq] _on arguments of different types_.  For instance, a lemma that we failed to establish before is trivial with [JMeq].  It makes for prettier theorem statements to define some syntactic shorthand first. *)
 
 Infix "==" := JMeq (at level 70, no associativity).
 
@@ -788,18 +788,18 @@
        P x -> forall y : A, y = x -> P y
 ]]
 
-The corresponding lemma used for [JMeq] in the proof of [pair_cong] is %\index{Gallina terms!JMeq\_rew\_r}%[JMeq_rew_r], which, confusingly, is defined by [rewrite] as needed, so it is not available for checking until after we apply it. *)
+The corresponding lemma used for [JMeq] in the proof of [pair_cong] is %\index{Gallina terms!internal\_JMeq\_rew\_r}%[internal_JMeq_rew_r], which, confusingly, is defined by [rewrite] as needed, so it is not available for checking until after we apply it. *)
 
-Check JMeq_rew_r.
+Check internal_JMeq_rew_r.
 (** %\vspace{-.15in}%[[
-JMeq_rew_r
+internal_JMeq_rew_r
      : forall (A : Type) (x : A) (B : Type) (b : B)
          (P : forall B0 : Type, B0 -> Type), P B b -> x == b -> P A x
 ]]
 
-The key difference is that, where the [eq] lemma is parametrized on a predicate of type [A -> Prop], the [JMeq] lemma is parameterized on a predicate of type more like [forall A : Type, A -> Prop].  To apply [eq_ind_r] with a proof of [x = y], it is only necessary to rearrange the goal into an application of a [fun] abstraction to [y].  In contrast, to apply [JMeq_rew_r], it is necessary to rearrange the goal to an application of a [fun] abstraction to both [y] and %\emph{%#<i>#its type#</i>#%}%.  In other words, the predicate must be %\emph{%#<i>#polymorphic#</i>#%}% in [y]'s type; any type must make sense, from a type-checking standpoint.  There may be cases where the former rearrangement is easy to do in a type-correct way, but the second rearrangement done naively leads to a type error.
+The key difference is that, where the [eq] lemma is parameterized on a predicate of type [A -> Prop], the [JMeq] lemma is parameterized on a predicate of type more like [forall A : Type, A -> Prop].  To apply [eq_ind_r] with a proof of [x = y], it is only necessary to rearrange the goal into an application of a [fun] abstraction to [y].  In contrast, to apply [internal_JMeq_rew_r], it is necessary to rearrange the goal to an application of a [fun] abstraction to both [y] and _its type_.  In other words, the predicate must be _polymorphic_ in [y]'s type; any type must make sense, from a type-checking standpoint.  There may be cases where the former rearrangement is easy to do in a type-correct way, but the second rearrangement done naively leads to a type error.
 
-When [rewrite] cannot figure out how to apply [JMeq_rew_r] for [x == y] where [x] and [y] have the same type, the tactic can instead use an alternate theorem, which is easy to prove as a composition of [eq_ind_r] and [JMeq_eq]. *)
+When [rewrite] cannot figure out how to apply [internal_JMeq_rew_r] for [x == y] where [x] and [y] have the same type, the tactic can instead use an alternate theorem, which is easy to prove as a composition of [eq_ind_r] and [JMeq_eq]. *)
 
 Check JMeq_ind_r.
 (** %\vspace{-.15in}%[[
@@ -808,7 +808,7 @@
        P x -> forall y : A, y == x -> P y
 ]]
 
-Ironically, where in the proof of [fhapp_ass'] we used [rewrite app_ass] to make it clear that a use of [JMeq] was actually homogeneously typed, we created a situation where [rewrite] applied the axiom-based [JMeq_ind_r] instead of the axiom-free [JMeq_rew_r]!
+Ironically, where in the proof of [fhapp_ass'] we used [rewrite app_ass] to make it clear that a use of [JMeq] was actually homogeneously typed, we created a situation where [rewrite] applied the axiom-based [JMeq_ind_r] instead of the axiom-free [internal_JMeq_rew_r]!
 
 For another simple example, consider this theorem that applies a heterogeneous equality to prove a congruence fact. *)
 
@@ -830,7 +830,7 @@
 *)
   apply JMeq_ind_r with (x := m); auto.
 
-  (** However, we run into trouble trying to get the goal into a form compatible with [JMeq_rew_r.] *)
+  (** However, we run into trouble trying to get the goal into a form compatible with [internal_JMeq_rew_r]. *)
   Undo 2.
 (** %\vspace{-.15in}%[[
   pattern nat, n.
@@ -850,7 +850,7 @@
 
 Abort.
 
-(** Why did we not run into this problem in our proof of [fhapp_ass'']?  The reason is that the pair constructor is polymorphic in the types of the pair components, while functions like [S] are not polymorphic at all.  Use of such non-polymorphic functions with [JMeq] tends to push toward use of axioms.  The example with [nat] here is a bit unrealistic; more likely cases would involve functions that have %\emph{%#<i>#some#</i>#%}% polymorphism, but not enough to allow abstractions of the sort we attempted above with [pattern].  For instance, we might have an equality between two lists, where the goal only type-checks when the terms involved really are lists, though everything is polymorphic in the types of list data elements.  The #<a href="http://www.mpi-sws.org/~gil/Heq/">#Heq%\footnote{\url{http://www.mpi-sws.org/~gil/Heq/}}%#</a># library builds up a slightly different foundation to help avoid such problems. *)
+(** Why did we not run into this problem in our proof of [fhapp_ass'']?  The reason is that the pair constructor is polymorphic in the types of the pair components, while functions like [S] are not polymorphic at all.  Use of such non-polymorphic functions with [JMeq] tends to push toward use of axioms.  The example with [nat] here is a bit unrealistic; more likely cases would involve functions that have _some_ polymorphism, but not enough to allow abstractions of the sort we attempted above with [pattern].  For instance, we might have an equality between two lists, where the goal only type-checks when the terms involved really are lists, though everything is polymorphic in the types of list data elements.  The #<a href="http://www.mpi-sws.org/~gil/Heq/">#Heq%\footnote{\url{http://www.mpi-sws.org/~gil/Heq/}}%#</a># library builds up a slightly different foundation to help avoid such problems. *)
 
 
 (** * Equivalence of Equality Axioms *)
@@ -934,7 +934,7 @@
    Theorem S_eta : S = (fun n => S n).
    ]]
 
-   Unfortunately, this theorem is not provable in CIC without additional axioms.  None of the definitional equality rules force function equality to be %\index{extensionality of function equality}\textit{%#<i>#extensional#</i>#%}%.  That is, the fact that two functions return equal results on equal inputs does not imply that the functions are equal.  We %\textit{%#<i>#can#</i>#%}% assert function extensionality as an axiom. *)
+   Unfortunately, this theorem is not provable in CIC without additional axioms.  None of the definitional equality rules force function equality to be %\index{extensionality of function equality}%_extensional_.  That is, the fact that two functions return equal results on equal inputs does not imply that the functions are equal.  We _can_ assert function extensionality as an axiom. *)
 
 (* begin thide *)
 Axiom ext_eq : forall A B (f g : A -> B),
@@ -989,4 +989,4 @@
 Qed.
 (* end thide *)
 
-(** Unlike in the case of [eq_rect_eq], we have no way of deriving this axiom of %\index{functional extensionality}\emph{%#<i>#functional extensionality#</i>#%}% for types with decidable equality.  To allow equality reasoning without axioms, it may be worth rewriting a development to replace functions with alternate representations, such as finite map types for which extensionality is derivable in CIC. *)
+(** Unlike in the case of [eq_rect_eq], we have no way of deriving this axiom of %\index{functional extensionality}%_functional extensionality_ for types with decidable equality.  To allow equality reasoning without axioms, it may be worth rewriting a development to replace functions with alternate representations, such as finite map types for which extensionality is derivable in CIC. *)
--- a/src/GeneralRec.v	Sun May 06 17:15:15 2012 -0400
+++ b/src/GeneralRec.v	Wed Jun 06 11:25:13 2012 -0400
@@ -20,16 +20,16 @@
 
 (** Termination of all programs is a crucial property of Gallina.  Non-terminating programs introduce logical inconsistency, where any theorem can be proved with an infinite loop.  Coq uses a small set of conservative, syntactic criteria to check termination of all recursive definitions.  These criteria are insufficient to support the natural encodings of a variety of important programming idioms.  Further, since Coq makes it so convenient to encode mathematics computationally, with functional programs, we may find ourselves wanting to employ more complicated recursion in mathematical definitions.
 
-   What exactly are the conservative criteria that we run up against?  For %\emph{%#<i>#recursive#</i>#%}% definitions, recursive calls are only allowed on %\emph{%#<i>#syntactic subterms#</i>#%}% of the original primary argument, a restriction known as %\index{primitive recursion}\emph{%#<i>#primitive recursion#</i>#%}%.  In fact, Coq's handling of reflexive inductive types (those defined in terms of functions returning the same type) gives a bit more flexibility than in traditional primitive recursion, but the term is still applied commonly.  In Chapter 5, we saw how %\emph{%#<i>#co-recursive#</i>#%}% definitions are checked against a syntactic guardness condition that guarantees productivity.
+   What exactly are the conservative criteria that we run up against?  For _recursive_ definitions, recursive calls are only allowed on _syntactic subterms_ of the original primary argument, a restriction known as %\index{primitive recursion}%_primitive recursion_.  In fact, Coq's handling of reflexive inductive types (those defined in terms of functions returning the same type) gives a bit more flexibility than in traditional primitive recursion, but the term is still applied commonly.  In Chapter 5, we saw how _co-recursive_ definitions are checked against a syntactic guardedness condition that guarantees productivity.
 
    Many natural recursion patterns satisfy neither condition.  For instance, there is our simple running example in this chapter, merge sort.  We will study three different approaches to more flexible recursion, and the latter two of the approaches will even support definitions that may fail to terminate on certain inputs, without any up-front characterization of which inputs those may be.
 
-   Before proceeding, it is important to note that the problem here is not as fundamental as it may appear.  The final example of Chapter 5 demonstrated what is called a %\index{deep embedding}\emph{%#<i>#deep embedding#</i>#%}% of the syntax and semantics of a programming language.  That is, we gave a mathematical definition of a language of programs and their meanings.  This language clearly admitted non-termination, and we could think of writing all our sophisticated recursive functions with such explicit syntax types.  However, in doing so, we forfeit our chance to take advantage of Coq's very good built-in support for reasoning about Gallina programs.  We would rather use a %\index{shallow embedding}\emph{%#<i>#shallow embedding#</i>#%}%, where we model informal constructs by encoding them as normal Gallina programs.  Each of the three techniques of this chapter follows that style. *)
+   Before proceeding, it is important to note that the problem here is not as fundamental as it may appear.  The final example of Chapter 5 demonstrated what is called a %\index{deep embedding}%_deep embedding_ of the syntax and semantics of a programming language.  That is, we gave a mathematical definition of a language of programs and their meanings.  This language clearly admitted non-termination, and we could think of writing all our sophisticated recursive functions with such explicit syntax types.  However, in doing so, we forfeit our chance to take advantage of Coq's very good built-in support for reasoning about Gallina programs.  We would rather use a %\index{shallow embedding}%_shallow embedding_, where we model informal constructs by encoding them as normal Gallina programs.  Each of the three techniques of this chapter follows that style. *)
 
 
 (** * Well-Founded Recursion *)
 
-(** The essence of terminating recursion is that there are no infinite chains of nested recursive calls.  This intuition is commonly mapped to the mathematical idea of a %\index{well-founded relation}\emph{%#<i>#well-founded relation#</i>#%}%, and the associated standard technique in Coq is %\index{well-founded recursion}\emph{%#<i>#well-founded recursion#</i>#%}%.  The syntactic-subterm relation that Coq applies by default is well-founded, but many cases demand alternate well-founded relations.  To demonstrate, let us see where we get stuck on attempting a standard merge sort implementation. *)
+(** The essence of terminating recursion is that there are no infinite chains of nested recursive calls.  This intuition is commonly mapped to the mathematical idea of a %\index{well-founded relation}%_well-founded relation_, and the associated standard technique in Coq is %\index{well-founded recursion}%_well-founded recursion_.  The syntactic-subterm relation that Coq applies by default is well-founded, but many cases demand alternate well-founded relations.  To demonstrate, let us see where we get stuck on attempting a standard merge sort implementation. *)
 
 Section mergeSort.
   Variable A : Type.
@@ -90,7 +90,7 @@
 fun (A : Type) (R : A -> A -> Prop) => forall a : A, Acc R a
 ]]
 
-The bulk of the definitional work devolves to the %\index{accessibility relation}\index{Gallina terms!Acc}\emph{%#<i>#accessibility#</i>#%}% relation [Acc], whose definition we may also examine. *)
+The bulk of the definitional work devolves to the %\index{accessibility relation}\index{Gallina terms!Acc}%_accessibility_ relation [Acc], whose definition we may also examine. *)
 
   Print Acc.
 (** %\vspace{-.15in}% [[
@@ -162,7 +162,7 @@
     red; intro; eapply lengthOrder_wf'; eauto.
   Defined.
 
-  (** Notice that we end these proofs with %\index{Vernacular commands!Defined}%[Defined], not [Qed].  Recall that [Defined] marks the theorems as %\emph{transparent}%, so that the details of their proofs may be used during program execution.  Why could such details possibly matter for computation?  It turns out that [Fix] satisfies the primitive recursion restriction by declaring itself as %\emph{%#<i>#recursive in the structure of [Acc] proofs#</i>#%}%.  This is possible because [Acc] proofs follow a predictable inductive structure.  We must do work, as in the last theorem's proof, to establish that all elements of a type belong to [Acc], but the automatic unwinding of those proofs during recursion is straightforward.  If we ended the proof with [Qed], the proof details would be hidden from computation, in which case the unwinding process would get stuck.
+  (** Notice that we end these proofs with %\index{Vernacular commands!Defined}%[Defined], not [Qed].  Recall that [Defined] marks the theorems as _transparent_, so that the details of their proofs may be used during program execution.  Why could such details possibly matter for computation?  It turns out that [Fix] satisfies the primitive recursion restriction by declaring itself as _recursive in the structure of [Acc] proofs_.  This is possible because [Acc] proofs follow a predictable inductive structure.  We must do work, as in the last theorem's proof, to establish that all elements of a type belong to [Acc], but the automatic unwinding of those proofs during recursion is straightforward.  If we ended the proof with [Qed], the proof details would be hidden from computation, in which case the unwinding process would get stuck.
 
      To justify our two recursive [mergeSort] calls, we will also need to prove that [partition] respects the [lengthOrder] relation.  These proofs, too, must be kept transparent, to avoid stuckness of [Fix] evaluation. *)
 
@@ -224,7 +224,7 @@
     else ls.
   intros; apply (Fix_eq (@lengthOrder_wf A) (fun _ => list A)); intros.
 
-  (** The library theorem [Fix_eq] imposes one more strange subgoal upon us.  We must prove that the function body is unable to distinguish between %``%#"#self#"#%''% arguments that map equal inputs to equal outputs.  One might think this should be true of any Gallina code, but in fact this general %\index{extensionality}\emph{%#<i>#function extensionality#</i>#%}% property is neither provable nor disprovable within Coq.  The type of [Fix_eq] makes clear what we must show manually: *)
+  (** The library theorem [Fix_eq] imposes one more strange subgoal upon us.  We must prove that the function body is unable to distinguish between %``%#"#self#"#%''% arguments that map equal inputs to equal outputs.  One might think this should be true of any Gallina code, but in fact this general %\index{extensionality}%_function extensionality_ property is neither provable nor disprovable within Coq.  The type of [Fix_eq] makes clear what we must show manually: *)
 
   Check Fix_eq.
 (** %\vspace{-.15in}%[[
@@ -278,7 +278,7 @@
 
 (** * A Non-Termination Monad Inspired by Domain Theory *)
 
-(** The key insights of %\index{domain theory}%domain theory%~\cite{WinskelDomains}% inspire the next approach to modeling non-termination.  Domain theory is based on %\emph{%#<i>#information orders#</i>#%}% that relate values representing computation results, according to how much information these values convey.  For instance, a simple domain might include values %``%#"#the program does not terminate#"#%''% and %``%#"#the program terminates with the answer 5.#"#%''%  The former is considered to be an %\emph{%#<i>#approximation#</i>#%}% of the latter, while the latter is %\emph{%#<i>#not#</i>#%}% an approximation of %``%#"#the program terminates with the answer 6.#"#%''%  The details of domain theory will not be important in what follows; we merely borrow the notion of an approximation ordering on computation results.
+(** The key insights of %\index{domain theory}%domain theory%~\cite{WinskelDomains}% inspire the next approach to modeling non-termination.  Domain theory is based on _information orders_ that relate values representing computation results, according to how much information these values convey.  For instance, a simple domain might include values %``%#"#the program does not terminate#"#%''% and %``%#"#the program terminates with the answer 5.#"#%''%  The former is considered to be an _approximation_ of the latter, while the latter is _not_ an approximation of %``%#"#the program terminates with the answer 6.#"#%''%  The details of domain theory will not be important in what follows; we merely borrow the notion of an approximation ordering on computation results.
 
    Consider this definition of a type of computations. *)
 
@@ -295,7 +295,7 @@
 	-> forall (n' : nat), n' >= n
 	  -> f n' = Some v}.
 
-  (** A computation is fundamentally a function [f] from an %\emph{%#<i>#approximation level#</i>#%}% [n] to an optional result.  Intuitively, higher [n] values enable termination in more cases than lower values.  A call to [f] may return [None] to indicate that [n] was not high enough to run the computation to completion; higher [n] values may yield [Some].  Further, the proof obligation within the sigma type asserts that [f] is %\emph{%#<i>#monotone#</i>#%}% in an appropriate sense: when some [n] is sufficient to produce termination, so are all higher [n] values, and they all yield the same program result [v].
+  (** A computation is fundamentally a function [f] from an _approximation level_ [n] to an optional result.  Intuitively, higher [n] values enable termination in more cases than lower values.  A call to [f] may return [None] to indicate that [n] was not high enough to run the computation to completion; higher [n] values may yield [Some].  Further, the proof obligation within the sigma type asserts that [f] is _monotone_ in an appropriate sense: when some [n] is sufficient to produce termination, so are all higher [n] values, and they all yield the same program result [v].
 
   It is easy to define a relation characterizing when a computation runs to a particular result at a particular approximation level. *)
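 (** One natural way to write such a relation (a sketch only, not necessarily the book's exact code; here [computation A] names the sigma type above, and [proj1_sig] extracts its underlying function):
 [[
 Definition runTo (A : Type) (m : computation A) (n : nat) (v : A) :=
   proj1_sig m n = Some v.
 ]]
 *)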
 
@@ -457,7 +457,7 @@
   run.
 Qed.
 
-(** Now we come to the piece most directly inspired by domain theory.  We want to support general recursive function definitions, but domain theory tells us that not all definitions are reasonable; some fail to be %\emph{%#<i>#continuous#</i>#%}% and thus represent unrealizable computations.  To formalize an analogous notion of continuity for our non-termination monad, we write down the approximation relation on computation results that we have had in mind all along. *)
+(** Now we come to the piece most directly inspired by domain theory.  We want to support general recursive function definitions, but domain theory tells us that not all definitions are reasonable; some fail to be _continuous_ and thus represent unrealizable computations.  To formalize an analogous notion of continuity for our non-termination monad, we write down the approximation relation on computation results that we have had in mind all along. *)
 
 Section lattice.
   Variable A : Type.
@@ -768,7 +768,7 @@
 | Ret : A -> comp A
 | Bnd : forall B, comp B -> (B -> comp A) -> comp A.
 
-(** This example shows off Coq's support for %\index{recursively non-uniform parameters}\emph{%#<i>#recursively non-uniform parameters#</i>#%}%, as in the case of the parameter [A] declared above, where each constructor's type ends in [comp A], but there is a recursive use of [comp] with a different parameter [B].  Beside that technical wrinkle, we see the simplest possible definition of a monad, via a type whose two constructors are precisely the monad operators.
+(** This example shows off Coq's support for %\index{recursively non-uniform parameters}%_recursively non-uniform parameters_, as in the case of the parameter [A] declared above, where each constructor's type ends in [comp A], but there is a recursive use of [comp] with a different parameter [B].  Besides that technical wrinkle, we see the simplest possible definition of a monad, via a type whose two constructors are precisely the monad operators.
 
    It is easy to define the semantics of terminating [comp] computations. *)
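 (** One plausible rendering (a sketch only; the constructor names are illustrative, and it assumes the usual [Set Implicit Arguments]) is a big-step judgment relating a terminating computation to its result:
 [[
 Inductive exec : forall A, comp A -> A -> Prop :=
 | ExecRet : forall A (v : A), exec (Ret v) v
 | ExecBnd : forall A B (c : comp B) (v : B) (f : B -> comp A) (v' : A),
   exec c v
   -> exec (f v) v'
   -> exec (Bnd c f) v'.
 ]]
 *)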
 
--- a/src/Generic.v	Sun May 06 17:15:15 2012 -0400
+++ b/src/Generic.v	Wed Jun 06 11:25:13 2012 -0400
@@ -1,4 +1,4 @@
-(* Copyright (c) 2008-2010, Adam Chlipala
+(* Copyright (c) 2008-2010, 2012, Adam Chlipala
  * 
  * This work is licensed under a
  * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
@@ -18,13 +18,13 @@
 
 (** %\chapter{Generic Programming}% *)
 
-(** %\index{generic programming}\textit{%#<i>#Generic programming#</i>#%}% makes it possible to write functions that operate over different types of data.  %\index{parametric polymorphism}%Parametric polymorphism in ML and Haskell is one of the simplest examples.  ML-style %\index{module systems}%module systems%~\cite{modules}% and Haskell %\index{type classes}%type classes%~\cite{typeclasses}% are more flexible cases.  These language features are often not as powerful as we would like.  For instance, while Haskell includes a type class classifying those types whose values can be pretty-printed, per-type pretty-printing is usually either implemented manually or implemented via a %\index{deriving clauses}%[deriving] clause%~\cite{deriving}%, which triggers ad-hoc code generation.  Some clever encoding tricks have been used to achieve better within Haskell and other languages, but we can do %\index{datatype-generic programming}\emph{%#<i>#datatype-generic programming#</i>#%}% much more cleanly with dependent types.  Thanks to the expressive power of CIC, we need no special language support.
+(** %\index{generic programming}%_Generic programming_ makes it possible to write functions that operate over different types of data.  %\index{parametric polymorphism}%Parametric polymorphism in ML and Haskell is one of the simplest examples.  ML-style %\index{module systems}%module systems%~\cite{modules}% and Haskell %\index{type classes}%type classes%~\cite{typeclasses}% are more flexible cases.  These language features are often not as powerful as we would like.  For instance, while Haskell includes a type class classifying those types whose values can be pretty-printed, per-type pretty-printing is usually either implemented manually or implemented via a %\index{deriving clauses}%[deriving] clause%~\cite{deriving}%, which triggers ad-hoc code generation.  Some clever encoding tricks have been used to achieve better within Haskell and other languages, but we can do %\index{datatype-generic programming}%_datatype-generic programming_ much more cleanly with dependent types.  Thanks to the expressive power of CIC, we need no special language support.
 
    Generic programming can often be very useful in Coq developments, so we devote this chapter to studying it.  In a proof assistant, there is the new possibility of generic proofs about generic programs, which we also devote some space to. *)
 
 (** * Reflecting Datatype Definitions *)
 
-(** The key to generic programming with dependent types is %\index{universe types}\textit{%#<i>#universe types#</i>#%}%.  This concept should not be confused with the idea of %\textit{%#<i>#universes#</i>#%}% from the metatheory of CIC and related languages.  Rather, the idea of universe types is to define inductive types that provide %\textit{%#<i>#syntactic representations#</i>#%}% of Coq types.  We cannot directly write CIC programs that do case analysis on types, but we %\textit{%#<i>#can#</i>#%}% case analyze on reflected syntactic versions of those types.
+(** The key to generic programming with dependent types is %\index{universe types}%_universe types_.  This concept should not be confused with the idea of _universes_ from the metatheory of CIC and related languages.  Rather, the idea of universe types is to define inductive types that provide _syntactic representations_ of Coq types.  We cannot directly write CIC programs that do case analysis on types, but we _can_ case analyze on reflected syntactic versions of those types.
 
    Thus, to begin, we must define a syntactic representation of some class of datatypes.  In this chapter, our running example will have to do with basic algebraic datatypes, of the kind found in ML and Haskell, but without additional bells and whistles like type parameters and mutually recursive definitions.
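 A sketch of one such representation (reconstructed here from the [tree_dt] example below, so the field and constructor names are illustrative rather than verbatim): each constructor is described by the type of its non-recursive data together with a count of recursive arguments, and a datatype is just a list of constructor descriptions.
 [[
 Record constructor : Type := Con {
   nonrecursive : Type;
   recursive : nat
 }.
 Definition datatype := list constructor.
 ]]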
 
@@ -69,7 +69,7 @@
 (* begin thide *)
 Definition tree_dt (A : Type) : datatype := Con A 0 :: Con unit 2 :: nil.
 
-(** Each datatype representation stands for a family of inductive types.  For a specific real datatype and a reputed representation for it, it is useful to define a type of %\textit{%#<i>#evidence#</i>#%}% that the datatype is compatible with the encoding. *)
+(** Each datatype representation stands for a family of inductive types.  For a specific real datatype and a reputed representation for it, it is useful to define a type of _evidence_ that the datatype is compatible with the encoding. *)
 
 Section denote.
   Variable T : Type.
@@ -116,7 +116,7 @@
 
 (* EX: Define a generic [size] function. *)
 
-(** We built these encodings of datatypes to help us write datatype-generic recursive functions.  To do so, we will want a reflected representation of a %\index{recursion schemes}\textit{%#<i>#recursion scheme#</i>#%}% for each type, similar to the [T_rect] principle generated automatically for an inductive definition of [T].  A clever reuse of [datatypeDenote] yields a short definition. *)
+(** We built these encodings of datatypes to help us write datatype-generic recursive functions.  To do so, we will want a reflected representation of a %\index{recursion schemes}%_recursion scheme_ for each type, similar to the [T_rect] principle generated automatically for an inductive definition of [T].  A clever reuse of [datatypeDenote] yields a short definition. *)
 
 (* begin thide *)
 Definition fixDenote (T : Type) (dt : datatype) :=
--- a/src/InductiveTypes.v	Sun May 06 17:15:15 2012 -0400
+++ b/src/InductiveTypes.v	Wed Jun 06 11:25:13 2012 -0400
@@ -1,4 +1,4 @@
-(* Copyright (c) 2008-2011, Adam Chlipala
+(* Copyright (c) 2008-2012, Adam Chlipala
  * 
  * This work is licensed under a
  * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
@@ -29,7 +29,7 @@
 
 (** Mainstream presentations of mathematics treat proofs as objects that exist outside of the universe of mathematical objects.  However, for a variety of reasoning tasks, it is convenient to encode proofs, traditional mathematical objects, and programs within a single formal language.  Validity checks on mathematical objects are useful in any setting, to catch typos and other uninteresting errors.  The benefits of static typing for programs are widely recognized, and Coq brings those benefits to both mathematical objects and programs via a uniform mechanism.  In fact, from this point on, we will not bother to distinguish between programs and mathematical objects.  Many mathematical formalisms are most easily encoded in terms of programs.
 
-Proofs are fundamentally different from programs, because any two proofs of a theorem are considered equivalent, from a formal standpoint if not from an engineering standpoint.  However, we can use the same type-checking technology to check proofs as we use to validate our programs.  This is the %\index{Curry-Howard correspondence}\emph{%#<i>#Curry-Howard correspondence#</i>#%}~\cite{Curry,Howard}%, an approach for relating proofs and programs.  We represent mathematical theorems as types, such that a theorem's proofs are exactly those programs that type-check at the corresponding type.
+Proofs are fundamentally different from programs, because any two proofs of a theorem are considered equivalent, from a formal standpoint if not from an engineering standpoint.  However, we can use the same type-checking technology to check proofs as we use to validate our programs.  This is the %\index{Curry-Howard correspondence}%_Curry-Howard correspondence_ %\cite{Curry,Howard}%, an approach for relating proofs and programs.  We represent mathematical theorems as types, such that a theorem's proofs are exactly those programs that type-check at the corresponding type.
 
 The last chapter's example already snuck in an instance of Curry-Howard.  We used the token [->] to stand for both function types and logical implications.  One reasonable conclusion upon seeing this might be that some fancy overloading of notations is at work.  In fact, functions and implications are precisely identical according to Curry-Howard!  That is, they are just two ways of describing the same computational phenomenon.
 
@@ -65,11 +65,11 @@
 Check (fun x : False => match x with end : True).
 (** [: False -> True] *)
 
-(** Every one of these example programs whose type looks like a logical formula is a %\index{proof term}\emph{%#<i>#proof term#</i>#%}%.  We use that name for any Gallina term of a logical type, and we will elaborate shortly on what makes a type logical.
+(** Every one of these example programs whose type looks like a logical formula is a %\index{proof term}%_proof term_.  We use that name for any Gallina term of a logical type, and we will elaborate shortly on what makes a type logical.
 
 In the rest of this chapter, we will introduce different ways of defining types.  Every example type can be interpreted alternatively as a type of programs or %\index{proposition}%propositions (i.e., formulas or theorem statements).
 
-One of the first types we introduce will be [bool], with constructors [true] and [false].  Newcomers to Coq often wonder about the distinction between [True] and [true] and the distinction between [False] and [false].  One glib answer is that [True] and [False] are types, but [true] and [false] are not.  A more useful answer is that Coq's metatheory guarantees that any term of type [bool] %\emph{%#<i>#evaluates#</i>#%}% to either [true] or [false].  This means that we have an %\emph{%#<i>#algorithm#</i>#%}% for answering any question phrased as an expression of type [bool].  Conversely, most propositions do not evaluate to [True] or [False]; the language of inductively defined propositions is much richer than that.  We ought to be glad that we have no algorithm for deciding mathematical truth, since otherwise it would be clear that we could not formalize undecidable properties, like most any properties of general-purpose programs. *)
+One of the first types we introduce will be [bool], with constructors [true] and [false].  Newcomers to Coq often wonder about the distinction between [True] and [true] and the distinction between [False] and [false].  One glib answer is that [True] and [False] are types, but [true] and [false] are not.  A more useful answer is that Coq's metatheory guarantees that any term of type [bool] _evaluates_ to either [true] or [false].  This means that we have an _algorithm_ for answering any question phrased as an expression of type [bool].  Conversely, most propositions do not evaluate to [True] or [False]; the language of inductively defined propositions is much richer than that.  We ought to be glad that we have no algorithm for deciding mathematical truth, since otherwise it would be clear that we could not formalize undecidable properties, like most any properties of general-purpose programs. *)
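 (** As a quick freestanding illustration of the contrast (not code from this chapter), a [bool]-valued question is settled by computation alone: *)

 Eval compute in andb true (negb false).

 (** This reduces to [true].  A proposition like [True /\ False], by contrast, is not a program we can run; it is only settled once we produce a proof or a refutation. *)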
 
 
 (** * Enumerations *)
@@ -118,7 +118,7 @@
 
 %\noindent%...which corresponds to %``%#"#proof by case analysis#"#%''% in classical math.  For non-recursive inductive types, the two tactics will always have identical behavior.  Often case analysis is sufficient, even in proofs about recursive types, and it is nice to avoid introducing unneeded induction hypotheses.
 
-What exactly %\textit{%#<i>#is#</i>#%}% the %\index{induction principles}%induction principle for [unit]?  We can ask Coq: *)
+What exactly _is_ the %\index{induction principles}%induction principle for [unit]?  We can ask Coq: *)
 
 Check unit_ind.
 (** [unit_ind : forall P : unit -> Prop, P tt -> forall u : unit, P u] *)
@@ -290,7 +290,7 @@
 (* begin thide *)
   induction n.
 
-(** Our first subgoal is [plus O O = O], which %\textit{%#<i>#is#</i>#%}% trivial by computation. *)
+(** Our first subgoal is [plus O O = O], which _is_ trivial by computation. *)
 
   reflexivity.
 
@@ -348,7 +348,7 @@
 
 (** [injection] refers to a premise by number, adding new equalities between the corresponding arguments of equated terms that are formed with the same constructor.  We end up needing to prove [n = m -> n = m], so it is unsurprising that a tactic named [trivial] is able to finish the proof.
 
-There is also a very useful tactic called %\index{tactics!congruence}%[congruence] that can prove this theorem immediately.  [congruence] generalizes [discriminate] and [injection], and it also adds reasoning about the general properties of equality, such as that a function returns equal results on equal arguments.  That is, [congruence] is a %\index{theory of equality and uninterpreted functions}\textit{%#<i>#complete decision procedure for the theory of equality and uninterpreted functions#</i>#%}%, plus some smarts about inductive types.
+There is also a very useful tactic called %\index{tactics!congruence}%[congruence] that can prove this theorem immediately.  [congruence] generalizes [discriminate] and [injection], and it also adds reasoning about the general properties of equality, such as that a function returns equal results on equal arguments.  That is, [congruence] is a %\index{theory of equality and uninterpreted functions}%_complete decision procedure for the theory of equality and uninterpreted functions_, plus some smarts about inductive types.
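 As a freestanding taste of that power (not this chapter's running example), [congruence] combines constructor reasoning with congruence of function application in a single step:
 [[
 Theorem congruence_demo : forall (f : nat -> nat) (x y : nat),
   f x = y -> S (f x) = S y.
   intros; congruence.
 Qed.
 ]]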
 
 %\medskip%
 
@@ -470,7 +470,7 @@
 Qed.
 (* end thide *)
 
-(** There is a useful shorthand for writing many definitions that share the same parameter, based on Coq's %\index{sections}\index{Vernacular commands!Section}\index{Vernacular commands!Variable}\textit{%#<i>#section#</i>#%}% mechanism.  The following block of code is equivalent to the above: *)
+(** There is a useful shorthand for writing many definitions that share the same parameter, based on Coq's %\index{sections}\index{Vernacular commands!Section}\index{Vernacular commands!Variable}%_section_ mechanism.  The following block of code is equivalent to the above: *)
 
 (* begin hide *)
 Reset list.
@@ -644,14 +644,14 @@
 
 (** * Reflexive Types *)
 
-(** A kind of inductive type called a %\textit{%#<i>#reflexive type#</i>#%}% is defined in terms of functions that have the type being defined as their range.  One very useful class of examples is in modeling variable binders.  Our example will be an encoding of the syntax of first-order logic.  Since the idea of syntactic encodings of logic may require a bit of acclimation, let us first consider a simpler formula type for a subset of propositional logic. *)
+(** A kind of inductive type called a _reflexive type_ is defined in terms of functions that have the type being defined as their range.  One very useful class of examples is in modeling variable binders.  Our example will be an encoding of the syntax of first-order logic.  Since the idea of syntactic encodings of logic may require a bit of acclimation, let us first consider a simpler formula type for a subset of propositional logic. *)
 
 Inductive pformula : Set :=
 | Truth : pformula
 | Falsehood : pformula
 | Conjunction : pformula -> pformula -> pformula.
 
-(** A key distinction here is between, for instance, the %\emph{%#<i>#syntax#</i>#%}% [Truth] and its %\emph{%#<i>#semantics#</i>#%}% [True].  We can make the semantics explicit with a recursive function.  This function uses the infix operator %\index{Gallina operators!/\textbackslash}%[/\], which desugars to uses of the type family %\index{Gallina terms!and}%[and] from the standard library.  The family [and] implements conjunction, the [Prop] Curry-Howard analogue of the usual pair type from functional programming (which is the type family %\index{Gallina terms!prod}%[prod] in Coq's standard library). *)
+(** A key distinction here is between, for instance, the _syntax_ [Truth] and its _semantics_ [True].  We can make the semantics explicit with a recursive function.  This function uses the infix operator %\index{Gallina operators!/\textbackslash}%[/\], which desugars to uses of the type family %\index{Gallina terms!and}%[and] from the standard library.  The family [and] implements conjunction, the [Prop] Curry-Howard analogue of the usual pair type from functional programming (which is the type family %\index{Gallina terms!prod}%[prod] in Coq's standard library). *)
 
 Fixpoint pformulaDenote (f : pformula) : Prop :=
   match f with
@@ -712,13 +712,13 @@
  
 ]]
 
-Focusing on the [Forall] case, which comes third, we see that we are allowed to assume that the theorem holds %\textit{%#<i>#for any application of the argument function [f1]#</i>#%}%.  That is, Coq induction principles do not follow a simple rule that the textual representations of induction variables must get shorter in appeals to induction hypotheses.  Luckily for us, the people behind the metatheory of Coq have verified that this flexibility does not introduce unsoundness.
+Focusing on the [Forall] case, which comes third, we see that we are allowed to assume that the theorem holds _for any application of the argument function [f1]_.  That is, Coq induction principles do not follow a simple rule that the textual representations of induction variables must get shorter in appeals to induction hypotheses.  Luckily for us, the people behind the metatheory of Coq have verified that this flexibility does not introduce unsoundness.
 
 %\medskip%
 
 Up to this point, we have seen how to encode in Coq more and more of what is possible with algebraic datatypes in %\index{Haskell}%Haskell and %\index{ML}%ML.  This may have given the inaccurate impression that inductive types are a strict extension of algebraic datatypes.  In fact, Coq must rule out some types allowed by Haskell and ML, for reasons of soundness.  Reflexive types provide our first good example of such a case.
 
-Given our last example of an inductive type, many readers are probably eager to try encoding the syntax of %\index{lambda calculus}%lambda calculus.  Indeed, the function-based representation technique that we just used, called %\index{higher-order abstract syntax}\index{HOAS|see{higher-order abstract syntax}}\textit{%#<i>#higher-order abstract syntax (HOAS)#</i>#%}~\cite{HOAS}%, is the representation of choice for lambda calculi in %\index{Twelf}%Twelf and in many applications implemented in Haskell and ML.  Let us try to import that choice to Coq: *)
+Given our last example of an inductive type, many readers are probably eager to try encoding the syntax of %\index{lambda calculus}%lambda calculus.  Indeed, the function-based representation technique that we just used, called %\index{higher-order abstract syntax}\index{HOAS|see{higher-order abstract syntax}}%_higher-order abstract syntax (HOAS)_ %\cite{HOAS}%, is the representation of choice for lambda calculi in %\index{Twelf}%Twelf and in many applications implemented in Haskell and ML.  Let us try to import that choice to Coq: *)
 (** [[
 Inductive term : Set :=
 | App : term -> term -> term
@@ -729,7 +729,7 @@
 Error: Non strictly positive occurrence of "term" in "(term -> term) -> term"
 >>
 
-We have run afoul of the %\index{strict positivity requirement}\index{positivity requirement}\textit{%#<i>#strict positivity requirement#</i>#%}% for inductive definitions, which says that the type being defined may not occur to the left of an arrow in the type of a constructor argument.  It is important that the type of a constructor is viewed in terms of a series of arguments and a result, since obviously we need recursive occurrences to the lefts of the outermost arrows if we are to have recursive occurrences at all.  Our candidate definition above violates the positivity requirement because it involves an argument of type [term -> term], where the type [term] that we are defining appears to the left of an arrow.  The candidate type of [App] is fine, however, since every occurrence of [term] is either a constructor argument or the final result type.
+We have run afoul of the %\index{strict positivity requirement}\index{positivity requirement}%_strict positivity requirement_ for inductive definitions, which says that the type being defined may not occur to the left of an arrow in the type of a constructor argument.  It is important that the type of a constructor is viewed in terms of a series of arguments and a result, since obviously we need recursive occurrences to the lefts of the outermost arrows if we are to have recursive occurrences at all.  Our candidate definition above violates the positivity requirement because it involves an argument of type [term -> term], where the type [term] that we are defining appears to the left of an arrow.  The candidate type of [App] is fine, however, since every occurrence of [term] is either a constructor argument or the final result type.
 
 Why must Coq enforce this restriction?  Imagine that our last definition had been accepted, allowing us to write this function:
 
@@ -812,7 +812,7 @@
  
 ]]
 
-The only new wrinkle here is the annotations on the [match] expression.  This is a %\index{dependent pattern matching}\emph{%#<i>#dependently typed#</i>#%}% pattern match, because the %\emph{%#<i>#type#</i>#%}% of the expression depends on the %\emph{%#<i>#value#</i>#%}% being matched on.  Of course, for this example, the dependency is degenerate; the value being matched on has type [unit], so it may only take on a single known value, [tt].  We will meet more involved examples later, especially in Part II of the book.
+The only new wrinkle here is the annotations on the [match] expression.  This is a %\index{dependent pattern matching}%_dependently typed_ pattern match, because the _type_ of the expression depends on the _value_ being matched on.  Of course, for this example, the dependency is degenerate; the value being matched on has type [unit], so it may only take on a single known value, [tt].  We will meet more involved examples later, especially in Part II of the book.
 
 %\index{type inference}%Type inference for dependent pattern matching is undecidable, which can be proved by reduction from %\index{higher-order unification}%higher-order unification%~\cite{HOU}%.  Thus, we often find ourselves needing to annotate our programs in a way that explains dependencies to the type checker.  In the example of [unit_rect], we have an %\index{Gallina terms!as}%[as] clause, which binds a name for the discriminee; and a %\index{Gallina terms!return}%[return] clause, which gives a way to compute the [match] result type as a function of the discriminee.
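 For a freestanding illustration of those annotations (not this chapter's code), here is a [match] whose result type varies with the [bool] being matched, using both an [as] clause and a [return] clause:
 [[
 Definition nat_or_unit (b : bool) : if b then nat else unit :=
   match b as b0 return (if b0 then nat else unit) with
     | true => 0
     | false => tt
   end.
 ]]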
 
@@ -957,7 +957,7 @@
 | NLeaf' : nat_tree
 | NNode' : nat -> list nat_tree -> nat_tree.
 
-(** This is an example of a %\index{nested inductive type}\textit{%#<i>#nested#</i>#%}% inductive type definition, because we use the type we are defining as an argument to a parametrized type family.  Coq will not allow all such definitions; it effectively pretends that we are defining [nat_tree] mutually with a version of [list] specialized to [nat_tree], checking that the resulting expanded definition satisfies the usual rules.  For instance, if we replaced [list] with a type family that used its parameter as a function argument, then the definition would be rejected as violating the positivity restriction.
+(** This is an example of a %\index{nested inductive type}%_nested_ inductive type definition, because we use the type we are defining as an argument to a parametrized type family.  Coq will not allow all such definitions; it effectively pretends that we are defining [nat_tree] mutually with a version of [list] specialized to [nat_tree], checking that the resulting expanded definition satisfies the usual rules.  For instance, if we replaced [list] with a type family that used its parameter as a function argument, then the definition would be rejected as violating the positivity restriction.
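 Concretely, it is roughly as if we had asked Coq to check a mutually inductive definition along these lines (an illustrative expansion, not code that appears in the book):
 [[
 Inductive nat_tree : Set :=
 | NLeaf' : nat_tree
 | NNode' : nat -> list_nat_tree -> nat_tree
 with list_nat_tree : Set :=
 | LNil : list_nat_tree
 | LCons : nat_tree -> list_nat_tree -> list_nat_tree.
 ]]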
 
 Like we encountered for mutual inductive types, we find that the automatically generated induction principle for [nat_tree] is too weak. *)
 
@@ -1070,7 +1070,7 @@
           end) ls)
     end.
 
-  (** We include an anonymous [fix] version of [list_nat_tree_ind] that is literally %\textit{%#<i>#nested#</i>#%}% inside the definition of the recursive function corresponding to the inductive definition that had the nested use of [list].   *)
+  (** We include an anonymous [fix] version of [list_nat_tree_ind] that is literally _nested_ inside the definition of the recursive function corresponding to the inductive definition that had the nested use of [list].   *)
 
 End nat_tree_ind'.
 
--- a/src/Intro.v	Sun May 06 17:15:15 2012 -0400
+++ b/src/Intro.v	Wed Jun 06 11:25:13 2012 -0400
@@ -22,7 +22,7 @@
 
 Many other recent projects have attracted attention by proving important theorems using computer proof assistant software.  For instance, the L4.verified project%~\cite{seL4}% has given a mechanized proof of correctness for a realistic microkernel, using the Isabelle/HOL proof assistant%~\cite{Isabelle/HOL}\index{Isabelle/HOL}%.  The amount of ongoing work in the area is so large that I cannot hope to list all the recent successes, so from this point I will assume that the reader is convinced both that we ought to want machine-checked proofs and that they seem to be feasible to produce.  (To readers not yet convinced, I suggest a Web search for %``%#"#machine-checked proof#"#%''%!)
 
-The idea of %\index{certified program}\emph{%#<i>#certified program#</i>#%}% features prominently in this book's title.  Here the word %``%#"#certified#"#%''% does %\emph{%#<i>#not#</i>#%}% refer to governmental rules for how the reliability of engineered systems may be demonstrated to sufficiently high standards.  Rather, this concept of certification, a standard one in the programming languages and formal methods communities, has to do with the idea of a %\emph{%#<i>#certificate#</i>#%}%, or formal mathematical artifact proving that a program meets its specification.  Government certification procedures rarely provide strong mathematical guarantees, while certified programming provides guarantees about as strong as anything we could hope for.  We trust the definition of a foundational mathematical logic, we trust an implementation of the logic, and we trust that we have encoded our informal intent properly in formal specifications, but little else is left open as an opportunity to certify incorrect software.  For programs like compilers that run in batch mode, the notion of a %\index{certifying program}\emph{%#<i>#certifying#</i>#%}% program is also common, where each run of the program outputs both an answer and a proof that it is correct.  Certified software can be considered to subsume certifying software, and this book focuses on the certified case, while also introducing principles and techniques of general interest for stating and proving theorems in Coq.
+The idea of %\index{certified program}%_certified program_ features prominently in this book's title.  Here the word %``%#"#certified#"#%''% does _not_ refer to governmental rules for how the reliability of engineered systems may be demonstrated to sufficiently high standards.  Rather, this concept of certification, a standard one in the programming languages and formal methods communities, has to do with the idea of a _certificate_, or formal mathematical artifact proving that a program meets its specification.  Government certification procedures rarely provide strong mathematical guarantees, while certified programming provides guarantees about as strong as anything we could hope for.  We trust the definition of a foundational mathematical logic, we trust an implementation of the logic, and we trust that we have encoded our informal intent properly in formal specifications, but little else is left open as an opportunity to certify incorrect software.  For programs like compilers that run in batch mode, the notion of a %\index{certifying program}%_certifying_ program is also common, where each run of the program outputs both an answer and a proof that it is correct.  Certified software can be considered to subsume certifying software, and this book focuses on the certified case, while also introducing principles and techniques of general interest for stating and proving theorems in Coq.
 
 %\medskip%
 
@@ -68,27 +68,27 @@
 (**
 %\index{higher-order vs. first-order languages}%There is no reason to give up the familiar comforts of functional programming when you start writing certified programs.  All of the tools I listed are based on functional programming languages, which means you can use them without their proof-related aspects to write and run regular programs.
 
-%\index{ACL2}%ACL2 is notable in this field for having only a %\textit{%#<i>#first-order#</i>#%}% language at its foundation.  That is, you cannot work with functions over functions and all those other treats of functional programming.  By giving up this facility, ACL2 can make broader assumptions about how well its proof automation will work, but we can generally recover the same advantages in other proof assistants when we happen to be programming in first-order fragments.
+%\index{ACL2}%ACL2 is notable in this field for having only a _first-order_ language at its foundation.  That is, you cannot work with functions over functions and all those other treats of functional programming.  By giving up this facility, ACL2 can make broader assumptions about how well its proof automation will work, but we can generally recover the same advantages in other proof assistants when we happen to be programming in first-order fragments.
 *)
 
 
 (** ** Dependent Types *)
 
 (**
-A language with %\textit{%#<i>#dependent types#</i>#%}% may include references to programs inside of types.  For instance, the type of an array might include a program expression giving the size of the array, making it possible to verify absence of out-of-bounds accesses statically.  Dependent types can go even further than this, effectively capturing any correctness property in a type.  For instance, later in this book, we will see how to give a Mini-ML compiler a type that guarantees that it maps well-typed source programs to well-typed target programs.
+A language with _dependent types_ may include references to programs inside of types.  For instance, the type of an array might include a program expression giving the size of the array, making it possible to verify absence of out-of-bounds accesses statically.  Dependent types can go even further than this, effectively capturing any correctness property in a type.  For instance, later in this book, we will see how to give a Mini-ML compiler a type that guarantees that it maps well-typed source programs to well-typed target programs.
 
-%\index{ACL2}%ACL2 and %\index{HOL}%HOL lack dependent types outright.  %\index{PVS}%PVS and %\index{Twelf}%Twelf each supports a different strict subset of Coq's dependent type language.  Twelf's type language is restricted to a bare-bones, monomorphic lambda calculus, which places serious restrictions on how complicated %\textit{%#<i>#computations inside types#</i>#%}% can be.  This restriction is important for the soundness argument behind Twelf's approach to representing and checking proofs.
+%\index{ACL2}%ACL2 and %\index{HOL}%HOL lack dependent types outright.  %\index{PVS}%PVS and %\index{Twelf}%Twelf each support a different strict subset of Coq's dependent type language.  Twelf's type language is restricted to a bare-bones, monomorphic lambda calculus, which places serious restrictions on how complicated _computations inside types_ can be.  This restriction is important for the soundness argument behind Twelf's approach to representing and checking proofs.
 
-In contrast, %\index{PVS}%PVS's dependent types are much more general, but they are squeezed inside the single mechanism of %\textit{%#<i>#subset types#</i>#%}%, where a normal type is refined by attaching a predicate over its elements.  Each member of the subset type is an element of the base type that satisfies the predicate.  Chapter 7 of this book introduces that style of programming in Coq, while the remaining chapters of Part II deal with features of dependent typing in Coq that go beyond what PVS supports.
+In contrast, %\index{PVS}%PVS's dependent types are much more general, but they are squeezed inside the single mechanism of _subset types_, where a normal type is refined by attaching a predicate over its elements.  Each member of the subset type is an element of the base type that satisfies the predicate.  Chapter 7 of this book introduces that style of programming in Coq, while the remaining chapters of Part II deal with features of dependent typing in Coq that go beyond what PVS supports.
 
-Dependent types are not just useful because they help you express correctness properties in types.  Dependent types also often let you write certified programs %\textit{%#<i>#without writing anything that looks like a proof#</i>#%}%.  Even with subset types, which for many contexts can be used to express any relevant property with enough acrobatics, the human driving the proof assistant usually has to build some proofs explicitly.  Writing formal proofs is hard, so we want to avoid it as far as possible, so dependent types are invaluable.
+Dependent types are not just useful because they help you express correctness properties in types.  Dependent types also often let you write certified programs _without writing anything that looks like a proof_.  Even with subset types, which for many contexts can be used to express any relevant property with enough acrobatics, the human driving the proof assistant usually has to build some proofs explicitly.  Writing formal proofs is hard, so we want to avoid it as far as possible, which makes dependent types invaluable.
 
 *)
 
 (** ** An Easy-to-Check Kernel Proof Language *)
 
 (**
-%\index{de Bruijn criterion}%Scores of automated decision procedures are useful in practical theorem proving, but it is unfortunate to have to trust in the correct implementation of each procedure.  Proof assistants satisfy the %``%#"#de Bruijn criterion#"#%''% when they produce %\textit{%#<i>#proof terms#</i>#%}% in small kernel languages, even when they use complicated and extensible procedures to seek out proofs in the first place.  These core languages have feature complexity on par with what you find in proposals for formal foundations for mathematics (e.g., ZF set theory).  To believe a proof, we can ignore the possibility of bugs during %\textit{%#<i>#search#</i>#%}% and just rely on a (relatively small) proof-checking kernel that we apply to the %\textit{%#<i>#result#</i>#%}% of the search.
+%\index{de Bruijn criterion}%Scores of automated decision procedures are useful in practical theorem proving, but it is unfortunate to have to trust in the correct implementation of each procedure.  Proof assistants satisfy the %``%#"#de Bruijn criterion#"#%''% when they produce _proof terms_ in small kernel languages, even when they use complicated and extensible procedures to seek out proofs in the first place.  These core languages have feature complexity on par with what you find in proposals for formal foundations for mathematics (e.g., ZF set theory).  To believe a proof, we can ignore the possibility of bugs during _search_ and just rely on a (relatively small) proof-checking kernel that we apply to the _result_ of the search.
 
 Coq meets the de Bruijn criterion, while %\index{ACL2}%ACL2 and %\index{PVS}%PVS do not, as they employ fancy decision procedures that produce no %``%#"#evidence trails#"#%''% justifying their results.
 *)
@@ -108,9 +108,9 @@
 (** ** Proof by Reflection *)
 
 (**
-%\index{reflection}\index{proof by reflection}%A surprising wealth of benefits follow from choosing a proof language that integrates a rich notion of computation.  Coq includes programs and proof terms in the same syntactic class.  This makes it easy to write programs that compute proofs.  With rich enough dependent types, such programs are %\textit{%#<i>#certified decision procedures#</i>#%}%.  In such cases, these certified procedures can be put to good use %\textit{%#<i>#without ever running them#</i>#%}%!  Their types guarantee that, if we did bother to run them, we would receive proper %``%#"#ground#"#%''% proofs.
+%\index{reflection}\index{proof by reflection}%A surprising wealth of benefits follow from choosing a proof language that integrates a rich notion of computation.  Coq includes programs and proof terms in the same syntactic class.  This makes it easy to write programs that compute proofs.  With rich enough dependent types, such programs are _certified decision procedures_.  In such cases, these certified procedures can be put to good use _without ever running them_!  Their types guarantee that, if we did bother to run them, we would receive proper %``%#"#ground#"#%''% proofs.
 
-The critical ingredient for this technique, many of whose instances are referred to as %\textit{%#<i>#proof by reflection#</i>#%}%, is a way of inducing non-trivial computation inside of logical propositions during proof checking.  Further, most of these instances require dependent types to make it possible to state the appropriate theorems.  Of the proof assistants I listed, only Coq really provides this support.
+The critical ingredient for this technique, many of whose instances are referred to as _proof by reflection_, is a way of inducing non-trivial computation inside of logical propositions during proof checking.  Further, most of these instances require dependent types to make it possible to state the appropriate theorems.  Of the proof assistants I listed, only Coq really provides this support.
 *)
 
 
@@ -141,7 +141,7 @@
 (**
 I try to keep the required background knowledge to a minimum in this book.  I will assume familiarity with the material from usual discrete math and logic courses taken by undergraduate computer science majors, and I will assume that readers have significant experience programming in one of the ML dialects, in Haskell, or in some other, closely related language.  Experience with only dynamically typed functional languages might lead to befuddlement in some places, but a reader who has come to understand Scheme deeply will probably be fine.
 
-My background is in programming languages, formal semantics, and program verification.  I sometimes use examples from that domain.  As a reference on these topics, I recommend %\emph{%#<i>#Types and Programming Languages#</i>#%}~\cite{TAPL}%, by Benjamin C. Pierce; however, I have tried to choose examples so that they may be understood without background in semantics.
+My background is in programming languages, formal semantics, and program verification.  I sometimes use examples from that domain.  As a reference on these topics, I recommend _Types and Programming Languages_ %\cite{TAPL}%, by Benjamin C. Pierce; however, I have tried to choose examples so that they may be understood without background in semantics.
 *)
 
 
@@ -179,7 +179,7 @@
 (** ** On the Tactic Library *)
 
 (**
-To make it possible to start from fancy proof automation, rather than working up to it, I have included with the book source a library of %\emph{%#<i>#tactics#</i>#%}%, or programs that find proofs, since the built-in Coq tactics do not support a high enough level of automation.  I use these tactics even from the first chapter with code examples.
+To make it possible to start from fancy proof automation, rather than working up to it, I have included with the book source a library of _tactics_, or programs that find proofs, since the built-in Coq tactics do not support a high enough level of automation.  I use these tactics even from the first chapter with code examples.
 
 Some readers have asked about the pragmatics of using this tactic library in their own developments.  My position there is that this tactic library was designed with the specific examples of the book in mind; I do not recommend using it in other settings.  Part III should impart the necessary skills to reimplement these tactics and beyond.  One generally deals with undecidable problems in interactive theorem proving, so there can be no tactic that solves all goals, though the %\index{tactics!crush}%[crush] tactic that we will meet soon may sometimes feel like that!  There are still very useful tricks found in the implementations of [crush] and its cousins, so it may be useful to examine the commented source file %\texttt{%#<tt>#CpdtTactics.v#</tt>.#%}.%  I implement a new tactic library for each new project, since each project involves a different mix of undecidable theories where a different set of heuristics turns out to work well; and that is what I recommend others do, too.
 *)
@@ -187,7 +187,7 @@
 (** ** Installation and Emacs Set-Up *)
 
 (**
-At the start of the next chapter, I assume that you have installed Coq and Proof General.  The code in this book is tested with Coq versions 8.3pl2 and 8.4 beta, though parts may work with other versions.
+At the start of the next chapter, I assume that you have installed Coq and Proof General.  The code in this book is tested with Coq version 8.4beta1, though parts may work with other versions.
 
 %\index{Proof General|(}%To set up your Proof General environment to process the source to this chapter, a few simple steps are required.
 
@@ -208,7 +208,7 @@
 )#</pre>#%\end{verbatim}%
 The extra arguments demonstrated here are the proper choices for working with the code for this book.  The ellipses stand for other Emacs customization settings you may already have.  It can be helpful to save several alternate sets of flags in your %\texttt{%#<tt>#.emacs#</tt>#%}% file, with all but one commented out within the %\texttt{%#<tt>#custom-set-variables#</tt>#%}% block at any given time.
 
-Alternatively, Proof General configuration can be set on a per-directory basis, using a %\index{.dir-locals.el file@\texttt{.dir-locals.el} file}\texttt{%#<i>#.dir-locals.el#</i>#%}% file in the directory of the source files for which you want the settings to apply.  Here is an example that could be written in such a file to enable use of the book source.  Note the need to include an argument that starts Coq in Emacs support mode.
+Alternatively, Proof General configuration can be set on a per-directory basis, using a %\index{.dir-locals.el file@\texttt{.dir-locals.el} file}\texttt{%#<tt>#.dir-locals.el#</tt>#%}% file in the directory of the source files for which you want the settings to apply.  Here is an example that could be written in such a file to enable use of the book source.  Note the need to include an argument that starts Coq in Emacs support mode.
 %\begin{verbatim}%#<pre>#((coq-mode . ((coq-prog-args . ("-emacs-U" "-I" "DIR/src")))))#</pre>#%\end{verbatim}%
  #</li>#
 
--- a/src/Large.v	Sun May 06 17:15:15 2012 -0400
+++ b/src/Large.v	Wed Jun 06 11:25:13 2012 -0400
@@ -103,7 +103,7 @@
   trivial.
 Qed.
 
-(** We pass %\index{tactics!induction}%[induction] an %\index{intro pattern}\textit{%#<i>#intro pattern#</i>#%}%, using a [|] character to separate out instructions for the different inductive cases.  Within a case, we write [?] to ask Coq to generate a name automatically, and we write an explicit name to assign that name to the corresponding new variable.  It is apparent that, to use intro patterns to avoid proof brittleness, one needs to keep track of the seemingly unimportant facts of the orders in which variables are introduced.  Thus, the script keeps working if we replace [e] by [x], but it has become more cluttered.  Arguably, neither proof is particularly easy to follow.
+(** We pass %\index{tactics!induction}%[induction] an %\index{intro pattern}%_intro pattern_, using a [|] character to separate out instructions for the different inductive cases.  Within a case, we write [?] to ask Coq to generate a name automatically, and we write an explicit name to assign that name to the corresponding new variable.  It is apparent that, to use intro patterns to avoid proof brittleness, one needs to keep track of the seemingly unimportant facts of the orders in which variables are introduced.  Thus, the script keeps working if we replace [e] by [x], but it has become more cluttered.  Arguably, neither proof is particularly easy to follow.
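 For a freestanding illustration of the syntax (not this chapter's example), here is the same idea on a simple arithmetic fact, naming only the induction hypothesis and letting Coq choose the variable's name:
 [[
 Theorem plus_0 : forall n, n + 0 = n.
   induction n as [ | ? IHn ]; simpl; [ reflexivity | rewrite IHn; reflexivity ].
 Qed.
 ]]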
 
    That category of complaint has to do with understanding proofs as static artifacts.  As with programming in general, with serious projects, it tends to be much more important to be able to support evolution of proofs as specifications change.  Unstructured proofs like the above examples can be very hard to update in concert with theorem statements.  For instance, consider how the last proof script plays out when we modify [times] to introduce a bug. *)
 
@@ -133,7 +133,7 @@
 
 Abort.
 
-(** Can you spot what went wrong, without stepping through the script step-by-step?  The problem is that [trivial] never fails.  Originally, [trivial] had been succeeding in proving an equality that follows by reflexivity.  Our change to [times] leads to a case where that equality is no longer true.  The invocation [trivial] happily leaves the false equality in place, and we continue on to the span of tactics intended for the second inductive case.  Unfortunately, those tactics end up being applied to the %\textit{%#<i>#first#</i>#%}% case instead.
+(** Can you spot what went wrong, without stepping through the script step-by-step?  The problem is that [trivial] never fails.  Originally, [trivial] had been succeeding in proving an equality that follows by reflexivity.  Our change to [times] leads to a case where that equality is no longer true.  The invocation [trivial] happily leaves the false equality in place, and we continue on to the span of tactics intended for the second inductive case.  Unfortunately, those tactics end up being applied to the _first_ case instead.
 
    The problem with [trivial] could be %``%#"#solved#"#%''% by writing, e.g., [solve [ trivial ]] instead, so that an error is signaled early on if something unexpected happens.  However, the root problem is that the syntax of a tactic invocation does not imply how many subgoals it produces.  Much more confusing instances of this problem are possible.  For example, if a lemma [L] is modified to take an extra hypothesis, then uses of [apply L] will generate more subgoals than before.  Old unstructured proof scripts will become hopelessly jumbled, with tactics applied to inappropriate subgoals.  Because of the lack of structure, there is usually relatively little to be gleaned from knowledge of the precise point in a proof script where an error is raised. *)
 
@@ -220,7 +220,7 @@
 
 (** This style is motivated by a hard truth: one person's manual proof script is almost always mostly inscrutable to most everyone else.  I claim that step-by-step formal proofs are a poor way of conveying information.  Thus, we might as well cut out the steps and automate as much as possible.
 
-   What about the illustrative value of proofs?  Most informal proofs are read to convey the big ideas of proofs.  How can reading [induction e; crush] convey any big ideas?  My position is that any ideas that standard automation can find are not very big after all, and the %\textit{%#<i>#real#</i>#%}% big ideas should be expressed through lemmas that are added as hints.
+   What about the illustrative value of proofs?  Most informal proofs are read to convey the big ideas of proofs.  How can reading [induction e; crush] convey any big ideas?  My position is that any ideas that standard automation can find are not very big after all, and the _real_ big ideas should be expressed through lemmas that are added as hints.
 
    An example should help illustrate what I mean.  Consider this function, which rewrites an expression using associativity of addition and multiplication. *)
 
@@ -285,11 +285,11 @@
 
    The more common situation is that a large induction has several easy cases that automation makes short work of.  In the remaining cases, automation performs some standard simplification.  Among these cases, some may require quite involved proofs; such a case may deserve a hint lemma of its own, where the lemma statement may copy the simplified version of the case.  Alternatively, the proof script for the main theorem may be extended with some automation code targeted at the specific case.  Even such targeted scripting is more desirable than manual proving, because it may be read and understood without knowledge of a proof's hierarchical structure, case ordering, or name binding structure.
 
-   A competing alternative to the common style of Coq tactics is the %\index{declarative proof scripts}\emph{%#<i>#declarative#</i>#%}% style, most frequently associated today with the %\index{Isar}%Isar%~\cite{Isar}% language.  A declarative proof script is very explicit about subgoal structure and introduction of local names, aiming for human readability.  The coding of proof automation is taken to be outside the scope of the proof language, an assumption related to the idea that it is not worth building new automation for each serious theorem.  I have shown in this book many examples of theorem-specific automation, which I believe is crucial for scaling to significant results.  Declarative proof scripts make it easier to read scripts to modify them for theorem statement changes, but the alternate %\index{adaptive proof scripts}\emph{%#<i>#adaptive#</i>#%}% style from this book allows use of the %\emph{%#<i>#same#</i>#%}% scripts for many versions of a theorem.
+   A competing alternative to the common style of Coq tactics is the %\index{declarative proof scripts}%_declarative_ style, most frequently associated today with the %\index{Isar}%Isar%~\cite{Isar}% language.  A declarative proof script is very explicit about subgoal structure and introduction of local names, aiming for human readability.  The coding of proof automation is taken to be outside the scope of the proof language, an assumption related to the idea that it is not worth building new automation for each serious theorem.  I have shown in this book many examples of theorem-specific automation, which I believe is crucial for scaling to significant results.  Declarative proof scripts make it easier to read scripts to modify them for theorem statement changes, but the alternate %\index{adaptive proof scripts}%_adaptive_ style from this book allows use of the _same_ scripts for many versions of a theorem.
 
    Perhaps I am a pessimist for thinking that fully formal proofs will inevitably consist of details that are uninteresting to people, but it is my preference to focus on conveying proof-specific details through choice of lemmas.  Additionally, adaptive Ltac scripts contain bits of automation that can be understood in isolation.  For instance, in a big [repeat match] loop, each case can generally be digested separately, which is a big contrast from trying to understand the hierarchical structure of a script in a more common style.  Adaptive scripts rely on variable binding, but generally only over very small scopes, whereas understanding a traditional script requires tracking the identities of local variables potentially across pages of code.
 
-   One might also wonder why it makes sense to prove all theorems automatically (in the sense of adaptive proof scripts) but not construct all programs automatically.  My view there is that %\emph{%#<i>#program synthesis#</i>#%}% is a very useful idea that deserves broader application!  In practice, there are difficult obstacles in the way of finding a program automatically from its specification.  A typical specification is not exhaustive in its description of program properties.  For instance, details of performance on particular machine architectures are often omitted.  As a result, a synthesized program may be correct in some sense while suffering from deficiencies in other senses.  Program synthesis research will continue to come up with ways of dealing with this problem, but the situation for theorem proving is fundamentally similar.  Following mathematical practice, the only property of a formal proof that we care about is which theorem it proves, and it is trivial to check this property automatically.  In other words, with a simple criterion for what makes a proof acceptable, automatic search is straightforward.  Of course, in practice we also care about understandability of proofs to facilitate long-term maintenance, and that is just what the techniques outlined above are meant to support, and the next section gives some related advice. *)
+   One might also wonder why it makes sense to prove all theorems automatically (in the sense of adaptive proof scripts) but not construct all programs automatically.  My view there is that _program synthesis_ is a very useful idea that deserves broader application!  In practice, there are difficult obstacles in the way of finding a program automatically from its specification.  A typical specification is not exhaustive in its description of program properties.  For instance, details of performance on particular machine architectures are often omitted.  As a result, a synthesized program may be correct in some sense while suffering from deficiencies in other senses.  Program synthesis research will continue to come up with ways of dealing with this problem, but the situation for theorem proving is fundamentally different.  Following mathematical practice, the only property of a formal proof that we care about is which theorem it proves, and it is trivial to check this property automatically.  In other words, with a simple criterion for what makes a proof acceptable, automatic search is straightforward.  Of course, in practice we also care about understandability of proofs to facilitate long-term maintenance, and that is just what the techniques outlined above are meant to support, and the next section gives some related advice. *)
 
 
 (** * Debugging and Maintaining Automation *)
@@ -639,16 +639,16 @@
 
 (** As aggravating as the above situation may be, there is greater aggravation to be had from importing library modules with commands like %\index{Vernacular commands!Require Import}%[Require Import].  Such a command imports not just the Gallina terms from a module, but also all the hints for [auto], [eauto], and [autorewrite].  Some very recent versions of Coq include mechanisms for removing hints from databases, but the proper solution is to be very conservative in exporting hints from modules.  Consider putting hints in named databases, so that they may be used only when called upon explicitly, as demonstrated in Chapter 13.
 
-It is also easy to end up with a proof script that uses too much memory.  As tactics run, they avoid generating proof terms, since serious proof search will consider many possible avenues, and we do not want to build proof terms for subproofs that end up unused.  Instead, tactic execution maintains %\index{thunks}\textit{%#<i>#thunks#</i>#%}% (suspended computations, represented with closures), such that a tactic's proof-producing thunk is only executed when we run %\index{Vernacular commands!Qed}%[Qed].  These thunks can use up large amounts of space, such that a proof script exhausts available memory, even when we know that we could have used much less memory by forcing some thunks earlier.
+It is also easy to end up with a proof script that uses too much memory.  As tactics run, they avoid generating proof terms, since serious proof search will consider many possible avenues, and we do not want to build proof terms for subproofs that end up unused.  Instead, tactic execution maintains %\index{thunks}%_thunks_ (suspended computations, represented with closures), such that a tactic's proof-producing thunk is only executed when we run %\index{Vernacular commands!Qed}%[Qed].  These thunks can use up large amounts of space, such that a proof script exhausts available memory, even when we know that we could have used much less memory by forcing some thunks earlier.
 
    The %\index{tactics!abstract}%[abstract] tactical helps us force thunks by proving some subgoals as their own lemmas.  For instance, a proof [induction x; crush] can in many cases be made to use significantly less peak memory by changing it to [induction x; abstract crush].  The main limitation of [abstract] is that it can only be applied to subgoals that are proved completely, with no undetermined unification variables remaining.  Still, many large automated proofs can realize vast memory savings via [abstract]. *)
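
(** As a tiny self-contained illustration (a hypothetical lemma standing in for a larger development), the following script seals each subgoal generated by [induction] into its own lemma, so its proof term is forced immediately rather than held in a thunk: *)

Theorem plus_n_O_abstract : forall n : nat, n + 0 = n.
  induction n; abstract (simpl; congruence).
Qed.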
 
 
 (** * Modules *)
 
-(** Last chapter's examples of proof by reflection demonstrate opportunities for implementing abstract proof strategies with stronger formal guarantees than can be had with Ltac scripting.  Coq's %\textit{%#<i>#module system#</i>#%}% provides another tool for more rigorous development of generic theorems.  This feature is inspired by the module systems found in Standard ML%~\cite{modules}% and Objective Caml, and the discussion that follows assumes familiarity with the basics of one of those systems.
+(** Last chapter's examples of proof by reflection demonstrate opportunities for implementing abstract proof strategies with stronger formal guarantees than can be had with Ltac scripting.  Coq's _module system_ provides another tool for more rigorous development of generic theorems.  This feature is inspired by the module systems found in Standard ML%~\cite{modules}% and Objective Caml, and the discussion that follows assumes familiarity with the basics of one of those systems.
 
-   ML modules facilitate the grouping of %\index{abstract type}%abstract types with operations over those types.  Moreover, there is support for %\index{functor}\textit{%#<i>#functors#</i>#%}%, which are functions from modules to modules.  A canonical example of a functor is one that builds a data structure implementation from a module that describes a domain of keys and its associated comparison operations.
+   ML modules facilitate the grouping of %\index{abstract type}%abstract types with operations over those types.  Moreover, there is support for %\index{functor}%_functors_, which are functions from modules to modules.  A canonical example of a functor is one that builds a data structure implementation from a module that describes a domain of keys and its associated comparison operations.
 
    When we add modules to a base language with dependent types, it becomes possible to use modules and functors to formalize kinds of reasoning that are common in algebra.  For instance, this module signature captures the essence of the algebraic structure known as a group.  A group consists of a carrier set [G], an associative binary operation [f], a left identity element [e] for [f], and an operation [i] that is a left inverse for [f].%\index{Vernacular commands!Module Type}% *)
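
(** A sketch of such a signature, following the description above (the actual [GROUP] definition may differ in minor details): *)

Module Type GROUP.
  Parameter G : Set.
  Parameter f : G -> G -> G.
  Parameter e : G.
  Parameter i : G -> G.

  Axiom assoc : forall a b c : G, f (f a b) c = f a (f b c).
  Axiom ident : forall a : G, f e a = a.
  Axiom inverse : forall a : G, f (i a) a = e.
End GROUP.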
 
@@ -678,7 +678,7 @@
 (** We implement generic proofs of these theorems with a functor, whose input is an arbitrary group [M]. %\index{Vernacular commands!Module}% *)
 
 Module GroupProofs (M : GROUP) : GROUP_THEOREMS with Module M := M.
-  (** As in ML, Coq provides multiple options for ascribing signatures to modules.  Here we use just the colon operator, which implements %\index{opaque ascription}\emph{%#<i>#opaque ascription#</i>#%}%, hiding all details of the module not exposed by the signature.  Another option is %\index{transparent ascription}\emph{%#<i>#transparent ascription#</i>#%}% via the [<:] operator, which checks for signature compatibility without hiding implementation details.  Here we stick with opaque ascription but employ the [with] operation to add more detail to a signature, exposing just those implementation details that we need to.  For instance, here we expose the underlying group representation set and operator definitions.  Without such a refinement, we would get an output module proving theorems about some unknown group, which is not very useful.  Also note that opaque ascription can in Coq have some undesirable consequences without analogues in ML, since not just the types but also the %\emph{%#<i>#definitions#</i>#%}% of identifiers have significance in type checking and theorem proving. *)
+  (** As in ML, Coq provides multiple options for ascribing signatures to modules.  Here we use just the colon operator, which implements %\index{opaque ascription}%_opaque ascription_, hiding all details of the module not exposed by the signature.  Another option is %\index{transparent ascription}%_transparent ascription_ via the [<:] operator, which checks for signature compatibility without hiding implementation details.  Here we stick with opaque ascription but employ the [with] operation to add more detail to a signature, exposing just those implementation details that we need to.  For instance, here we expose the underlying group representation set and operator definitions.  Without such a refinement, we would get an output module proving theorems about some unknown group, which is not very useful.  Also note that opaque ascription can in Coq have some undesirable consequences without analogues in ML, since not just the types but also the _definitions_ of identifiers have significance in type checking and theorem proving. *)
 
   Module M := M.
   (** To ensure that the module we are building meets the [GROUP_THEOREMS] signature, we add an extra local name for [M], the functor argument. *)
@@ -854,7 +854,7 @@
 
    When working on multiple projects, it is useful to leave multiple versions of this setting in your %\texttt{%#<tt>#.emacs#</tt>#%}% file, commenting out all but one of them at any moment in time.  To switch between projects, change the commenting structure and restart Emacs.
 
-   Alternatively, we can revisit the directory-local settings approach and write the following into a file %\texttt{%#<tt>#.dir-locals.el#</tt>#%}% in %\texttt{%#<i>#CLIENT#</i>#%}%:
+   Alternatively, we can revisit the directory-local settings approach and write the following into a file %\texttt{%#<tt>#.dir-locals.el#</tt>#%}% in %\texttt{%#<tt>#CLIENT#</tt>#%}%:
 
 <<
 ((coq-mode . ((coq-prog-args .
--- a/src/LogicProg.v	Sun May 06 17:15:15 2012 -0400
+++ b/src/LogicProg.v	Wed Jun 06 11:25:13 2012 -0400
@@ -155,7 +155,7 @@
 Qed.
 (* end thide *)
 
-(** The two key components of logic programming are %\index{backtracking}\emph{%#<i>#backtracking#</i>#%}% and %\index{unification}\emph{%#<i>#unification#</i>#%}%.  To see these techniques in action, consider this further silly example.  Here our candidate proof steps will be reflexivity and quantifier instantiation. *)
+(** The two key components of logic programming are %\index{backtracking}%_backtracking_ and %\index{unification}%_unification_.  To see these techniques in action, consider this further silly example.  Here our candidate proof steps will be reflexivity and quantifier instantiation. *)
 
 Example seven_minus_three : exists x, x + 3 = 7.
 (* begin thide *)
@@ -170,7 +170,7 @@
   Error: Impossible to unify "7" with "0 + 3".
 >>
 
-This seems to be a dead end.  Let us %\emph{%#<i>#backtrack#</i>#%}% to the point where we ran [apply] and make a better alternate choice.
+This seems to be a dead end.  Let us _backtrack_ to the point where we ran [apply] and make a better alternate choice.
 *)
 
 Restart.
@@ -185,7 +185,7 @@
 
 Example seven_minus_three' : exists x, plusR x 3 7.
 (* begin thide *)
-(** We could attempt to guess the quantifier instantiation manually as before, but here there is no need.  Instead of [apply], we use %\index{tactics!eapply}%[eapply] instead, which proceeds with placeholder %\index{unification variable}\emph{%#<i>#unification variables#</i>#%}% standing in for those parameters we wish to postpone guessing. *)
+(** We could attempt to guess the quantifier instantiation manually as before, but here there is no need.  Instead of [apply], we use %\index{tactics!eapply}%[eapply] instead, which proceeds with placeholder %\index{unification variable}%_unification variables_ standing in for those parameters we wish to postpone guessing. *)
 
   eapply ex_intro.
 (** [[
@@ -380,7 +380,7 @@
   Abort.
 End slow.
 
-(** Sometimes, though, transitivity is just what is needed to get a proof to go through automatically with [eauto].  For those cases, we can use named %\index{hint databases}\emph{%#<i>#hint databases#</i>#%}% to segragate hints into different groups that may be called on as needed.  Here we put [eq_trans] into the database [slow]. *)
+(** Sometimes, though, transitivity is just what is needed to get a proof to go through automatically with [eauto].  For those cases, we can use named %\index{hint databases}%_hint databases_ to segregate hints into different groups that may be called on as needed.  Here we put [eq_trans] into the database [slow]. *)
 
 (* begin thide *)
 Hint Resolve eq_trans : slow.
@@ -413,7 +413,7 @@
 Qed.
 (* end thide *)
 
-(** When we %\emph{%#<i>#do#</i>#%}% need transitivity, we ask for it explicitly. *)
+(** When we _do_ need transitivity, we ask for it explicitly. *)
 
 Example needs_trans : forall x y, 1 + x = y
   -> y = 2
@@ -487,7 +487,7 @@
 Existential 2 = ?20252 : [ |- nat]
 >>
 
-Coq complains that we finished the proof without determining the values of some unification variables created during proof search.  The error message may seem a bit silly, since %\emph{%#<i>#any#</i>#%}% value of type [nat] (for instance, 0) can be plugged in for either variable!  However, for more complex types, finding their inhabitants may be as complex as theorem-proving in general.
+Coq complains that we finished the proof without determining the values of some unification variables created during proof search.  The error message may seem a bit silly, since _any_ value of type [nat] (for instance, 0) can be plugged in for either variable!  However, for more complex types, finding their inhabitants may be as complex as theorem-proving in general.
 
 The %\index{Vernacular commands!Show Proof}%[Show Proof] command shows exactly which proof term [eauto] has found, with the undetermined unification variables appearing explicitly where they are used. *)
 
@@ -595,7 +595,7 @@
 Print length_and_sum''.
 (* end hide *)
 
-(** We could continue through exercises of this kind, but even more interesting than finding lists automatically is finding %\emph{%#<i>#programs#</i>#%}% automatically. *)
+(** We could continue through exercises of this kind, but even more interesting than finding lists automatically is finding _programs_ automatically. *)
 
 
 (** * Synthesizing Programs *)
@@ -753,7 +753,7 @@
 
 Hint Resolve plus_0 times_1.
 
-(** We finish with one more arithmetic lemma that is particularly specialized to this theorem.  This fact happens to follow by the axioms of the %\emph{%#<i>#ring#</i>#%}% algebraic structure, so, since the naturals form a ring, we can use the built-in tactic %\index{tactics!ring}%[ring]. *)
+(** We finish with one more arithmetic lemma that is particularly specialized to this theorem.  This fact happens to follow from the axioms of the _ring_ algebraic structure, so, since the naturals form a ring, we can use the built-in tactic %\index{tactics!ring}%[ring]. *)
 
 Require Import Arith Ring.
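
(** For instance (a hypothetical lemma, separate from the specialized one just mentioned), [ring] dispatches any equation that holds by the semi-ring axioms for [nat]: *)

Lemma ring_demo : forall x y : nat, (x + y) * 2 = 2 * x + y + y.
  intros; ring.
Qed.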
 
@@ -781,7 +781,7 @@
 
 (** * More on [auto] Hints *)
 
-(** Let us stop at this point and take stock of the possibilities for [auto] and [eauto] hints.  Hints are contained within %\textit{%#<i>#hint databases#</i>#%}%, which we have seen extended in many examples so far.  When no hint database is specified, a default database is used.  Hints in the default database are always used by [auto] or [eauto].  The chance to extend hint databases imperatively is important, because, in Ltac programming, we cannot create %``%#"#global variables#"#%''% whose values can be extended seamlessly by different modules in different source files.  We have seen the advantages of hints so far, where [crush] can be defined once and for all, while still automatically applying the hints we add throughout developments.  In fact, [crush] is defined in terms of [auto], which explains how we achieve this extensibility.  Other user-defined tactics can take similar advantage of [auto] and [eauto].
+(** Let us stop at this point and take stock of the possibilities for [auto] and [eauto] hints.  Hints are contained within _hint databases_, which we have seen extended in many examples so far.  When no hint database is specified, a default database is used.  Hints in the default database are always used by [auto] or [eauto].  The chance to extend hint databases imperatively is important, because, in Ltac programming, we cannot create %``%#"#global variables#"#%''% whose values can be extended seamlessly by different modules in different source files.  We have seen the advantages of hints so far, where [crush] can be defined once and for all, while still automatically applying the hints we add throughout developments.  In fact, [crush] is defined in terms of [auto], which explains how we achieve this extensibility.  Other user-defined tactics can take similar advantage of [auto] and [eauto].
 
 The basic hints for [auto] and [eauto] are %\index{Vernacular commands!Hint Immediate}%[Hint Immediate lemma], asking to try solving a goal immediately by applying a lemma and discharging any hypotheses with a single proof step each; %\index{Vernacular commands!Hint Resolve}%[Resolve lemma], which does the same but may add new premises that are themselves to be subjects of nested proof search; %\index{Vernacular commands!Hint Constructors}%[Constructors type], which acts like [Resolve] applied to every constructor of an inductive type; and %\index{Vernacular commands!Hint Unfold}%[Unfold ident], which tries unfolding [ident] when it appears at the head of a proof goal.  Each of these [Hint] commands may be used with a suffix, as in [Hint Resolve lemma : my_db].  This adds the hint only to the specified database, so that it would only be used by, for instance, [auto with my_db].  An additional argument to [auto] specifies the maximum depth of proof trees to search in depth-first order, as in [auto 8] or [auto 8 with my_db].  The default depth is 5.
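
(** As a small self-contained sketch (with a hypothetical predicate and database name), here is how constructor hints and named databases fit together: *)

Inductive isEven : nat -> Prop :=
| EvenO : isEven O
| EvenSS : forall n, isEven n -> isEven (S (S n)).

Hint Constructors isEven : my_db.

Example isEven_4 : isEven 4.
  auto with my_db.
Qed.

(** Without the [with my_db] suffix, plain [auto] would not see these constructor hints, since they live only in the named database. *)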
 
@@ -842,7 +842,7 @@
 User error: Bound head variable
 >>
 
-   Coq's [auto] hint databases work as tables mapping %\index{head symbol}\textit{%#<i>#head symbols#</i>#%}% to lists of tactics to try.  Because of this, the constant head of an [Extern] pattern must be determinable statically.  In our first [Extern] hint, the head symbol was [not], since [x <> y] desugars to [not (eq x y)]; and, in the second example, the head symbol was [P].
+   Coq's [auto] hint databases work as tables mapping %\index{head symbol}%_head symbols_ to lists of tactics to try.  Because of this, the constant head of an [Extern] pattern must be determinable statically.  In our first [Extern] hint, the head symbol was [not], since [x <> y] desugars to [not (eq x y)]; and, in the second example, the head symbol was [P].
 
    Fortunately, a more basic form of [Hint Extern] also applies.  We may simply leave out the pattern to the left of the [=>], incorporating the corresponding logic into the Ltac script. *)
 
@@ -851,7 +851,7 @@
     | [ H : forall x, ?P x /\ _ |- ?P ?X ] => apply (proj1 (H X))
   end.
 
-(** Be forewarned that a [Hint Extern] of this kind will be applied at %\emph{%#<i>#every#</i>#%}% node of a proof tree, so an expensive Ltac script may slow proof search significantly. *)
+(** Be forewarned that a [Hint Extern] of this kind will be applied at _every_ node of a proof tree, so an expensive Ltac script may slow proof search significantly. *)
 
 
 (** * Rewrite Hints *)
--- a/src/Match.v	Sun May 06 17:15:15 2012 -0400
+++ b/src/Match.v	Wed Jun 06 11:25:13 2012 -0400
@@ -26,7 +26,7 @@
 
    The %\index{tactics!ring}%[ring] tactic solves goals by appealing to the axioms of rings or semi-rings (as in algebra), depending on the type involved.  Coq developments may declare new types to be parts of rings and semi-rings by proving the associated axioms.  There is a similar tactic %\index{tactics!field}\coqdockw{%#<tt>#field#</tt>#%}% for simplifying values in fields by conversion to fractions over rings.  Both [ring] and %\coqdockw{%#<tt>#field#</tt>#%}% can only solve goals that are equalities.  The %\index{tactics!fourier}\coqdockw{%#<tt>#fourier#</tt>#%}% tactic uses Fourier's method to prove inequalities over real numbers, which are axiomatized in the Coq standard library.
 
-   The %\index{setoids}\textit{%#<i>#setoid#</i>#%}% facility makes it possible to register new equivalence relations to be understood by tactics like [rewrite].  For instance, [Prop] is registered as a setoid with the equivalence relation %``%#"#if and only if.#"#%''%  The ability to register new setoids can be very useful in proofs of a kind common in math, where all reasoning is done after %``%#"#modding out by a relation.#"#%''%
+   The %\index{setoids}%_setoid_ facility makes it possible to register new equivalence relations to be understood by tactics like [rewrite].  For instance, [Prop] is registered as a setoid with the equivalence relation %``%#"#if and only if.#"#%''%  The ability to register new setoids can be very useful in proofs of a kind common in math, where all reasoning is done after %``%#"#modding out by a relation.#"#%''%
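
(** As a small example (a hypothetical lemma), once the [Setoid] library is loaded, [rewrite] accepts an [iff] fact directly: *)

Require Import Setoid.

Lemma iff_rewrite_demo : forall P Q : Prop, (P <-> Q) -> Q -> P.
  intros P Q Hiff HQ.
  rewrite Hiff.
  assumption.
Qed.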
 
    There are several other built-in %``%#"#black box#"#%''% automation tactics, which one can learn about by perusing the Coq manual.  The real promise of Coq, though, is in the coding of problem-specific tactics with Ltac. *)
 
@@ -59,9 +59,9 @@
 Qed.
 (* end thide *)
 
-(** The %\index{tactics!repeat}%[repeat] that we use here is called a %\index{tactical}\textit{%#<i>#tactical#</i>#%}%, or tactic combinator.  The behavior of [repeat t] is to loop through running [t], running [t] on all generated subgoals, running [t] on %\textit{%#<i>#their#</i>#%}% generated subgoals, and so on.  When [t] fails at any point in this search tree, that particular subgoal is left to be handled by later tactics.  Thus, it is important never to use [repeat] with a tactic that always succeeds.
+(** The %\index{tactics!repeat}%[repeat] that we use here is called a %\index{tactical}%_tactical_, or tactic combinator.  The behavior of [repeat t] is to loop through running [t], running [t] on all generated subgoals, running [t] on _their_ generated subgoals, and so on.  When [t] fails at any point in this search tree, that particular subgoal is left to be handled by later tactics.  Thus, it is important never to use [repeat] with a tactic that always succeeds.
 
-   Another very useful Ltac building block is %\index{context patterns}\textit{%#<i>#context patterns#</i>#%}%. *)
+   Another very useful Ltac building block is %\index{context patterns}%_context patterns_. *)
 
 (* begin thide *)
 Ltac find_if_inside :=
@@ -133,7 +133,7 @@
 
 It is tempting to assume that [match] works like it does in ML.  In fact, there are a few critical differences in its behavior.  One is that we may include arbitrary expressions in patterns, instead of being restricted to variables and constructors.  Another is that the same variable may appear multiple times, inducing an implicit equality constraint.
 
-There is a related pair of two other differences that are much more important than the others.  The [match] construct has a %\textit{%#<i>#backtracking semantics for failure#</i>#%}%.  In ML, pattern matching works by finding the first pattern to match and then executing its body.  If the body raises an exception, then the overall match raises the same exception.  In Coq, failures in case bodies instead trigger continued search through the list of cases.
+There is a related pair of other differences that are much more important than the others.  The [match] construct has a _backtracking semantics for failure_.  In ML, pattern matching works by finding the first pattern to match and then executing its body.  If the body raises an exception, then the overall match raises the same exception.  In Coq, failures in case bodies instead trigger continued search through the list of cases.
 
 For instance, this (unnecessarily verbose) proof script works: *)
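
(** (A sketch of the kind of script meant, on a trivially provable goal.  The first case's body fails, and evaluation backtracks to the second case.) *)

Theorem backtrack_demo : True.
  match goal with
    | [ |- _ ] => intro
    | [ |- True ] => constructor
  end.
Qed.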
 
@@ -148,7 +148,7 @@
 
 (** The first case matches trivially, but its body tactic fails, since the conclusion does not begin with a quantifier or implication.  In a similar ML match, that would mean that the whole pattern-match fails.  In Coq, we backtrack and try the next pattern, which also matches.  Its body tactic succeeds, so the overall tactic succeeds as well.
 
-   The example shows how failure can move to a different pattern within a [match].  Failure can also trigger an attempt to find %\textit{%#<i>#a different way of matching a single pattern#</i>#%}%.  Consider another example: *)
+   The example shows how failure can move to a different pattern within a [match].  Failure can also trigger an attempt to find _a different way of matching a single pattern_.  Consider another example: *)
 
 Theorem m2 : forall P Q R : Prop, P -> Q -> R -> Q.
   intros; match goal with
@@ -435,7 +435,7 @@
 Abort.
 (* end thide *)
 
-(** Each position within an Ltac script has a default applicable non-terminal, where [constr] and [ltac] are the main options worth thinking about, standing respectively for terms of Gallina and Ltac.  The explicit colon notation can always be used to override the default non-terminal choice, though code being parsed as Gallina can no longer use such overrides.  Within the [ltac] non-terminal, top-level function applications are treated as applications in Ltac, not Gallina; but the %\emph{%#<i>#arguments#</i>#%}% to such functions are parsed with [constr] by default.  This choice may seem strange, until we realize that we have been relying on it all along in all the proof scripts we write!  For instance, the [apply] tactic is an Ltac function, and it is natural to interpret its argument as a term of Gallina, not Ltac.  We use an [ltac] prefix to parse Ltac function arguments as Ltac terms themselves, as in the call to [map] above.  For some simple cases, Ltac terms may be passed without an extra prefix.  For instance, an identifier that has an Ltac meaning but no Gallina meaning will be interpreted in Ltac automatically.
+(** Each position within an Ltac script has a default applicable non-terminal, where [constr] and [ltac] are the main options worth thinking about, standing respectively for terms of Gallina and Ltac.  The explicit colon notation can always be used to override the default non-terminal choice, though code being parsed as Gallina can no longer use such overrides.  Within the [ltac] non-terminal, top-level function applications are treated as applications in Ltac, not Gallina; but the _arguments_ to such functions are parsed with [constr] by default.  This choice may seem strange, until we realize that we have been relying on it all along in all the proof scripts we write!  For instance, the [apply] tactic is an Ltac function, and it is natural to interpret its argument as a term of Gallina, not Ltac.  We use an [ltac] prefix to parse Ltac function arguments as Ltac terms themselves, as in the call to [map] above.  For some simple cases, Ltac terms may be passed without an extra prefix.  For instance, an identifier that has an Ltac meaning but no Gallina meaning will be interpreted in Ltac automatically.
 
 One other gotcha shows up when we want to debug our Ltac functional programs.  We might expect the following code to work, to give us a version of [length] that prints a debug trace of the arguments it is called with. *)
 
@@ -467,11 +467,11 @@
 >> *)
 Abort.
 
-(** What is going wrong here?  The answer has to do with the dual status of Ltac as both a purely functional and an imperative programming language.  The basic programming language is purely functional, but tactic scripts are one %``%#"#datatype#"#%''% that can be returned by such programs, and Coq will run such a script using an imperative semantics that mutates proof states.  Readers familiar with %\index{monad}\index{Haskell}%monadic programming in Haskell%~\cite{Monads,IO}% may recognize a similarity.  Side-effecting Haskell programs can be thought of as pure programs that return %\emph{%#<i>#the code of programs in an imperative language#</i>#%}%, where some out-of-band mechanism takes responsibility for running these derived programs.  In this way, Haskell remains pure, while supporting usual input-output side effects and more.  Ltac uses the same basic mechanism, but in a dynamically typed setting.  Here the embedded imperative language includes all the tactics we have been applying so far.
+(** What is going wrong here?  The answer has to do with the dual status of Ltac as both a purely functional and an imperative programming language.  The basic programming language is purely functional, but tactic scripts are one %``%#"#datatype#"#%''% that can be returned by such programs, and Coq will run such a script using an imperative semantics that mutates proof states.  Readers familiar with %\index{monad}\index{Haskell}%monadic programming in Haskell%~\cite{Monads,IO}% may recognize a similarity.  Side-effecting Haskell programs can be thought of as pure programs that return _the code of programs in an imperative language_, where some out-of-band mechanism takes responsibility for running these derived programs.  In this way, Haskell remains pure, while supporting usual input-output side effects and more.  Ltac uses the same basic mechanism, but in a dynamically typed setting.  Here the embedded imperative language includes all the tactics we have been applying so far.
 
    Even basic [idtac] is an embedded imperative program, so we may not automatically mix it with purely functional code.  In fact, a semicolon operator alone marks a span of Ltac code as an embedded tactic script.  This makes some amount of sense, since pure functional languages have no need for sequencing: since they lack side effects, there is no reason to run an expression and then just throw away its value and move on to another expression.
 
-   The solution is like in Haskell: we must %``%#"#monadify#"#%''% our pure program to give it access to side effects.  The trouble is that the embedded tactic language has no [return] construct.  Proof scripts are about proving theorems, not calculating results.  We can apply a somewhat awkward workaround that requires translating our program into %\index{continuation-passing style}\emph{%#<i>#continuation-passing style#</i>#%}~\cite{continuations}%, a program structuring idea popular in functional programming. *)
+   The solution is like in Haskell: we must %``%#"#monadify#"#%''% our pure program to give it access to side effects.  The trouble is that the embedded tactic language has no [return] construct.  Proof scripts are about proving theorems, not calculating results.  We can apply a somewhat awkward workaround that requires translating our program into %\index{continuation-passing style}%_continuation-passing style_ %\cite{continuations}%, a program structuring idea popular in functional programming. *)
 
 (* begin hide *)
 Reset length.
@@ -486,7 +486,7 @@
   end.
 (* end thide *)
 
-(** The new [length] takes a new input: a %\emph{%#<i>#continuation#</i>#%}% [k], which is a function to be called to continue whatever proving process we were in the middle of when we called [length].  The argument passed to [k] may be thought of as the return value of [length]. *)
+(** The new [length] takes a new input: a _continuation_ [k], which is a function to be called to continue whatever proving process we were in the middle of when we called [length].  The argument passed to [k] may be thought of as the return value of [length]. *)
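
(** For instance, a continuation-passing [length] might be written as follows (a sketch; the chapter's actual definition may differ in detail): *)

Ltac length_cps ls k :=
  idtac ls;
  match ls with
    | nil => k O
    | _ :: ?ls' => length_cps ls' ltac:(fun n => k (S n))
  end.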
 
 (* begin thide *)
 Goal False.
@@ -505,7 +505,7 @@
 
    Considering the comparison with Haskell's IO monad, there is an important subtlety that deserves to be mentioned.  A Haskell IO computation represents (theoretically speaking, at least) a transformer from one state of the real world to another, plus a pure value to return.  Some of the state can be very specific to the program, as in the case of heap-allocated mutable references, but some can be along the lines of the favorite example %``%#"#launch missile,#"#%''% where the program has a side effect on the real world that is not possible to undo.
 
-   In contrast, Ltac scripts can be thought of as controlling just two simple kinds of mutable state.  First, there is the current sequence of proof subgoals.  Second, there is a partial assignment of discovered values to unification variables introduced by proof search (for instance, by [eauto], as we saw in the previous chapter).  Crucially, %\emph{%#<i>#every mutation of this state can be undone#</i>#%}% during backtracking introduced by [match], [auto], and other built-in Ltac constructs.  Ltac proof scripts have state, but it is purely local, and all changes to it are reversible, which is a very useful semantics for proof search. *)
+   In contrast, Ltac scripts can be thought of as controlling just two simple kinds of mutable state.  First, there is the current sequence of proof subgoals.  Second, there is a partial assignment of discovered values to unification variables introduced by proof search (for instance, by [eauto], as we saw in the previous chapter).  Crucially, _every mutation of this state can be undone_ during backtracking introduced by [match], [auto], and other built-in Ltac constructs.  Ltac proof scripts have state, but it is purely local, and all changes to it are reversible, which is a very useful semantics for proof search. *)
 
 
 (** * Recursive Proof Search *)
@@ -758,7 +758,7 @@
 
 (** * Creating Unification Variables *)
 
-(** A final useful ingredient in tactic crafting is the ability to allocate new unification variables explicitly.  Tactics like [eauto] introduce unification variables internally to support flexible proof search.  While [eauto] and its relatives do %\textit{%#<i>#backward#</i>#%}% reasoning, we often want to do similar %\textit{%#<i>#forward#</i>#%}% reasoning, where unification variables can be useful for similar reasons.
+(** A final useful ingredient in tactic crafting is the ability to allocate new unification variables explicitly.  Tactics like [eauto] introduce unification variables internally to support flexible proof search.  While [eauto] and its relatives do _backward_ reasoning, we often want to do similar _forward_ reasoning, where unification variables can be useful for similar reasons.
 
    For example, we can write a tactic that instantiates the quantifiers of a universally quantified hypothesis.  The tactic should not need to know what the appropriate instantiations are; rather, we want these choices filled with placeholders.  We hope that, when we apply the specialized hypothesis later, syntactic unification will determine concrete values.
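
(** A sketch of one way to write such a tactic (hypothetical name and implementation), using [evar] to allocate a placeholder at the quantified type and [specialize] to instantiate the hypothesis with it: *)

Ltac inst_forall H :=
  repeat match type of H with
           | forall x : ?T, _ =>
             let x := fresh "x" in
               evar (x : T);
               let x' := eval unfold x in x in
                 clear x; specialize (H x')
         end.

(** After [inst_forall H] on a hypothesis [H : forall x y, P x y], the hypothesis is left at a type of the form [P ?x ?y], with fresh unification variables standing in for the quantified arguments. *)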
 
--- a/src/MoreDep.v	Sun May 06 17:15:15 2012 -0400
+++ b/src/MoreDep.v	Wed Jun 06 11:25:13 2012 -0400
@@ -36,7 +36,7 @@
 
 (** We see that, within its section, [ilist] is given type [nat -> Set].  Previously, every inductive type we have seen has either had plain [Set] as its type or has been a predicate with some type ending in [Prop].  The full generality of inductive definitions lets us integrate the expressivity of predicates directly into our normal programming.
 
-   The [nat] argument to [ilist] tells us the length of the list.  The types of [ilist]'s constructors tell us that a [Nil] list has length [O] and that a [Cons] list has length one greater than the length of its tail.  We may apply [ilist] to any natural number, even natural numbers that are only known at runtime.  It is this breaking of the %\index{phase distinction}\textit{%#<i>#phase distinction#</i>#%}% that characterizes [ilist] as %\textit{%#<i>#dependently typed#</i>#%}%.
+   The [nat] argument to [ilist] tells us the length of the list.  The types of [ilist]'s constructors tell us that a [Nil] list has length [O] and that a [Cons] list has length one greater than the length of its tail.  We may apply [ilist] to any natural number, even natural numbers that are only known at runtime.  It is this breaking of the %\index{phase distinction}%_phase distinction_ that characterizes [ilist] as _dependently typed_.
 
    In expositions of list types, we usually see the length function defined first, but here that would not be a very productive function to code.  Instead, let us implement list concatenation. *)
 
@@ -56,11 +56,11 @@
     end.
 (* end thide *)
 
-(** Using [return] alone allowed us to express a dependency of the [match] result type on the %\textit{%#<i>#value#</i>#%}% of the discriminee.  What %\index{Gallina terms!in}%[in] adds to our arsenal is a way of expressing a dependency on the %\textit{%#<i>#type#</i>#%}% of the discriminee.  Specifically, the [n1] in the [in] clause above is a %\textit{%#<i>#binding occurrence#</i>#%}% whose scope is the [return] clause.
+(** Using [return] alone allowed us to express a dependency of the [match] result type on the _value_ of the discriminee.  What %\index{Gallina terms!in}%[in] adds to our arsenal is a way of expressing a dependency on the _type_ of the discriminee.  Specifically, the [n1] in the [in] clause above is a _binding occurrence_ whose scope is the [return] clause.
 
-We may use [in] clauses only to bind names for the arguments of an inductive type family.  That is, each [in] clause must be an inductive type family name applied to a sequence of underscores and variable names of the proper length.  The positions for %\textit{%#<i>#parameters#</i>#%}% to the type family must all be underscores.  Parameters are those arguments declared with section variables or with entries to the left of the first colon in an inductive definition.  They cannot vary depending on which constructor was used to build the discriminee, so Coq prohibits pointless matches on them.  It is those arguments defined in the type to the right of the colon that we may name with [in] clauses.
+We may use [in] clauses only to bind names for the arguments of an inductive type family.  That is, each [in] clause must be an inductive type family name applied to a sequence of underscores and variable names of the proper length.  The positions for _parameters_ to the type family must all be underscores.  Parameters are those arguments declared with section variables or with entries to the left of the first colon in an inductive definition.  They cannot vary depending on which constructor was used to build the discriminee, so Coq prohibits pointless matches on them.  It is those arguments defined in the type to the right of the colon that we may name with [in] clauses.
 
-Our [app] function could be typed in so-called %\index{stratified type systems}\textit{%#<i>#stratified#</i>#%}% type systems, which avoid true dependency.  That is, we could consider the length indices to lists to live in a separate, compile-time-only universe from the lists themselves.  This stratification between a compile-time universe and a run-time universe, with no references to the latter in the former, gives rise to the terminology %``%#"#stratified.#"#%''%  Our next example would be harder to implement in a stratified system.  We write an injection function from regular lists to length-indexed lists.  A stratified implementation would need to duplicate the definition of lists across compile-time and run-time versions, and the run-time versions would need to be indexed by the compile-time versions. *)
+Our [app] function could be typed in so-called %\index{stratified type systems}%_stratified_ type systems, which avoid true dependency.  That is, we could consider the length indices to lists to live in a separate, compile-time-only universe from the lists themselves.  This stratification between a compile-time universe and a run-time universe, with no references to the latter in the former, gives rise to the terminology %``%#"#stratified.#"#%''%  Our next example would be harder to implement in a stratified system.  We write an injection function from regular lists to length-indexed lists.  A stratified implementation would need to duplicate the definition of lists across compile-time and run-time versions, and the run-time versions would need to be indexed by the compile-time versions. *)
 
 (* EX: Implement injection from normal lists *)
 
@@ -110,7 +110,7 @@
 Error: Non exhaustive pattern-matching: no clause found for pattern Nil
 >>
 
-Unlike in ML, we cannot use inexhaustive pattern matching, because there is no conception of a %\texttt{%#<tt>#Match#</tt>#%}% exception to be thrown.  In fact, recent versions of Coq %\textit{%#<i>#do#</i>#%}% allow this, by implicit translation to a [match] that considers all constructors.  It is educational to discover that encoding ourselves directly.  We might try using an [in] clause somehow.
+Unlike in ML, we cannot use inexhaustive pattern matching, because there is no conception of a %\texttt{%#<tt>#Match#</tt>#%}% exception to be thrown.  In fact, recent versions of Coq _do_ allow this, by implicit translation to a [match] that considers all constructors.  It is educational to discover that encoding ourselves directly.  We might try using an [in] clause somehow.
 
 [[
 Definition hd n (ls : ilist (S n)) : A :=
@@ -123,7 +123,7 @@
 Error: The reference n was not found in the current environment
 >>
 
-In this and other cases, we feel like we want [in] clauses with type family arguments that are not variables.  Unfortunately, Coq only supports variables in those positions.  A completely general mechanism could only be supported with a solution to the problem of higher-order unification%~\cite{HOU}%, which is undecidable.  There %\textit{%#<i>#are#</i>#%}% useful heuristics for handling non-variable indices which are gradually making their way into Coq, but we will spend some time in this and the next few chapters on effective pattern matching on dependent types using only the primitive [match] annotations.
+In this and other cases, we feel like we want [in] clauses with type family arguments that are not variables.  Unfortunately, Coq only supports variables in those positions.  A completely general mechanism could only be supported with a solution to the problem of higher-order unification%~\cite{HOU}%, which is undecidable.  There _are_ useful heuristics for handling non-variable indices which are gradually making their way into Coq, but we will spend some time in this and the next few chapters on effective pattern matching on dependent types using only the primitive [match] annotations.
 
 Our final, working attempt at [hd] uses an auxiliary function and a surprising [return] annotation. *)
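
(** A self-contained sketch of that trick, using a toy length-indexed type of naturals in place of [ilist] (the chapter's real definitions live in a section with an arbitrary element type): *)

Inductive vec : nat -> Set :=
| VNil : vec O
| VCons : forall n, nat -> vec n -> vec (S n).

(* The [return] clause computes a different result type for each length
   index, so the impossible nil case may return a dummy value of type unit. *)
Definition vhd' n (v : vec n) :=
  match v in (vec n) return (match n with O => unit | S _ => nat end) with
    | VNil => tt
    | VCons _ h _ => h
  end.

Definition vhd n (v : vec (S n)) : nat := vhd' (S n) v.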
 
@@ -155,11 +155,11 @@
 
 (** * The One Rule of Dependent Pattern Matching in Coq *)
 
-(** The rest of this chapter will demonstrate a few other elegant applications of dependent types in Coq.  Readers encountering such ideas for the first time often feel overwhelmed, concluding that there is some magic at work whereby Coq sometimes solves the halting problem for the programmer and sometimes does not, applying automated program understanding in a way far beyond what is found in conventional languages.  The point of this section is to cut off that sort of thinking right now!  Dependent type-checking in Coq follows just a few algorithmic rules.  Chapters 10 and 12 introduce many of those rules more formally, and the main additional rule is centered on %\index{dependent pattern matching}\emph{%#<i>#dependent pattern matching#</i>#%}% of the kind we met in the previous section.
+(** The rest of this chapter will demonstrate a few other elegant applications of dependent types in Coq.  Readers encountering such ideas for the first time often feel overwhelmed, concluding that there is some magic at work whereby Coq sometimes solves the halting problem for the programmer and sometimes does not, applying automated program understanding in a way far beyond what is found in conventional languages.  The point of this section is to cut off that sort of thinking right now!  Dependent type-checking in Coq follows just a few algorithmic rules.  Chapters 10 and 12 introduce many of those rules more formally, and the main additional rule is centered on %\index{dependent pattern matching}%_dependent pattern matching_ of the kind we met in the previous section.
 
-A dependent pattern match is a [match] expression where the type of the overall [match] is a function of the value and/or the type of the %\index{discriminee}\emph{%#<i>#discriminee#</i>#%}%, the value being matched on.  In other words, the [match] type %\emph{%#<i>#depends#</i>#%}% on the discriminee.
+A dependent pattern match is a [match] expression where the type of the overall [match] is a function of the value and/or the type of the %\index{discriminee}%_discriminee_, the value being matched on.  In other words, the [match] type _depends_ on the discriminee.
 
-When exactly will Coq accept a dependent pattern match as well-typed?  Some other dependently typed languages employ fancy decision procedures to determine when programs satisfy their very expressive types.  The situation in Coq is just the opposite.  Only very straightforward symbolic rules are applied.  Such a design choice has its drawbacks, as it forces programmers to do more work to convince the type checker of program validity.  However, the great advantage of a simple type checking algorithm is that its action on %\emph{%#<i>#invalid#</i>#%}% programs is easier to understand!
+When exactly will Coq accept a dependent pattern match as well-typed?  Some other dependently typed languages employ fancy decision procedures to determine when programs satisfy their very expressive types.  The situation in Coq is just the opposite.  Only very straightforward symbolic rules are applied.  Such a design choice has its drawbacks, as it forces programmers to do more work to convince the type checker of program validity.  However, the great advantage of a simple type checking algorithm is that its action on _invalid_ programs is easier to understand!
 
 We come now to the one rule of dependent pattern matching in Coq.  A general dependent pattern match assumes this form (with unnecessary parentheses included to make the syntax easier to parse):
 [[
@@ -179,12 +179,12 @@
 
 This is an exhaustive description of the ways to specify how to take advantage of which pattern has matched!  No other mechanisms come into play.  For instance, there is no way to specify that the types of certain free variables should be refined based on which pattern has matched.  In the rest of the book, we will learn design patterns for achieving similar effects, where each technique leads to an encoding only in terms of [in], [as], and [return] clauses.
 
-A few details have been omitted above.  In Chapter 3, we learned that inductive type families may have both %\index{parameters}\emph{%#<i>#parameters#</i>#%}% and regular arguments.  Within an [in] clause, a parameter position must have the wildcard [_] written, instead of a variable.  (In general, Coq uses wildcard [_]'s either to indicate pattern variables that will not be mentioned again or to indicate positions where we would like type inference to infer the appropriate terms.)  Furthermore, recent Coq versions are adding more and more heuristics to infer dependent [match] annotations in certain conditions.  The general annotation inference problem is undecidable, so there will always be serious limitations on how much work these heuristics can do.  When in doubt about why a particular dependent [match] is failing to type-check, add an explicit [return] annotation!  At that point, the mechanical rule sketched in this section will provide a complete account of %``%#"#what the type checker is thinking.#"#%''%  Be sure to avoid the common pitfall of writing a [return] annotation that does not mention any variables bound by [in] or [as]; such a [match] will never refine typing requirements based on which pattern has matched.  (One simple exception to this rule is that, when the discriminee is a variable, that same variable may be treated as if it were repeated as an [as] clause.) *)
+A few details have been omitted above.  In Chapter 3, we learned that inductive type families may have both %\index{parameters}%_parameters_ and regular arguments.  Within an [in] clause, a parameter position must have the wildcard [_] written, instead of a variable.  (In general, Coq uses wildcard [_]'s either to indicate pattern variables that will not be mentioned again or to indicate positions where we would like type inference to infer the appropriate terms.)  Furthermore, recent Coq versions are adding more and more heuristics to infer dependent [match] annotations in certain conditions.  The general annotation inference problem is undecidable, so there will always be serious limitations on how much work these heuristics can do.  When in doubt about why a particular dependent [match] is failing to type-check, add an explicit [return] annotation!  At that point, the mechanical rule sketched in this section will provide a complete account of %``%#"#what the type checker is thinking.#"#%''%  Be sure to avoid the common pitfall of writing a [return] annotation that does not mention any variables bound by [in] or [as]; such a [match] will never refine typing requirements based on which pattern has matched.  (One simple exception to this rule is that, when the discriminee is a variable, that same variable may be treated as if it were repeated as an [as] clause.) *)
 
 
 (** * A Tagless Interpreter *)
 
-(** A favorite example for motivating the power of functional programming is implementation of a simple expression language interpreter.  In ML and Haskell, such interpreters are often implemented using an algebraic datatype of values, where at many points it is checked that a value was built with the right constructor of the value type.  With dependent types, we can implement a %\index{tagless interpreters}\textit{%#<i>#tagless#</i>#%}% interpreter that both removes this source of runtime inefficiency and gives us more confidence that our implementation is correct. *)
+(** A favorite example for motivating the power of functional programming is implementation of a simple expression language interpreter.  In ML and Haskell, such interpreters are often implemented using an algebraic datatype of values, where at many points it is checked that a value was built with the right constructor of the value type.  With dependent types, we can implement a %\index{tagless interpreters}%_tagless_ interpreter that both removes this source of runtime inefficiency and gives us more confidence that our implementation is correct. *)
 
 Inductive type : Set :=
 | Nat : type
@@ -206,7 +206,7 @@
 
 (** We have a standard algebraic datatype [type], defining a type language of naturals, booleans, and product (pair) types.  Then we have the indexed inductive type [exp], where the argument to [exp] tells us the encoded type of an expression.  In effect, we are defining the typing rules for expressions simultaneously with the syntax.
 
-   We can give types and expressions semantics in a new style, based critically on the chance for %\textit{%#<i>#type-level computation#</i>#%}%. *)
+   We can give types and expressions semantics in a new style, based critically on the chance for _type-level computation_. *)
 
 Fixpoint typeDenote (t : type) : Set :=
   match t with
@@ -215,7 +215,7 @@
     | Prod t1 t2 => typeDenote t1 * typeDenote t2
   end%type.
 
-(** The [typeDenote] function compiles types of our object language into %``%#"#native#"#%''% Coq types.  It is deceptively easy to implement.  The only new thing we see is the [%][type] annotation, which tells Coq to parse the [match] expression using the notations associated with types.  Without this annotation, the [*] would be interpreted as multiplication on naturals, rather than as the product type constructor.  The token [type] is one example of an identifer bound to a %\textit{%#<i>#notation scope#</i>#%}%.  In this book, we will not go into more detail on notation scopes, but the Coq manual can be consulted for more information.
+(** The [typeDenote] function compiles types of our object language into %``%#"#native#"#%''% Coq types.  It is deceptively easy to implement.  The only new thing we see is the [%][type] annotation, which tells Coq to parse the [match] expression using the notations associated with types.  Without this annotation, the [*] would be interpreted as multiplication on naturals, rather than as the product type constructor.  The token [type] is one example of an identifier bound to a _notation scope_.  In this book, we will not go into more detail on notation scopes, but the Coq manual can be consulted for more information.
 
    We can define a function [expDenote] that is typed in terms of [typeDenote]. *)
 
@@ -473,7 +473,7 @@
  
    ]]
 
-   We see that [IHt1] is %\textit{%#<i>#almost#</i>#%}% the fact we need, but it is not quite strong enough.  We will need to strengthen our induction hypothesis to get the proof to go through. *)
+   We see that [IHt1] is _almost_ the fact we need, but it is not quite strong enough.  We will need to strengthen our induction hypothesis to get the proof to go through. *)
 
 Abort.
 
@@ -573,9 +573,9 @@
       end t2
   end.
 
-(** We apply a trick that I call the %\index{convoy pattern}\textit{%#<i>#convoy pattern#</i>#%}%.  Recall that [match] annotations only make it possible to describe a dependence of a [match] %\textit{%#<i>#result type#</i>#%}% on the discriminee.  There is no automatic refinement of the types of free variables.  However, it is possible to effect such a refinement by finding a way to encode free variable type dependencies in the [match] result type, so that a [return] clause can express the connection.
+(** We apply a trick that I call the %\index{convoy pattern}%_convoy pattern_.  Recall that [match] annotations only make it possible to describe a dependence of a [match] _result type_ on the discriminee.  There is no automatic refinement of the types of free variables.  However, it is possible to effect such a refinement by finding a way to encode free variable type dependencies in the [match] result type, so that a [return] clause can express the connection.
 
-   In particular, we can extend the [match] to return %\textit{%#<i>#functions over the free variables whose types we want to refine#</i>#%}%.  In the case of [balance1], we only find ourselves wanting to refine the type of one tree variable at a time.  We match on one subtree of a node, and we want the type of the other subtree to be refined based on what we learn.  We indicate this with a [return] clause starting like [rbtree _ n -> ...], where [n] is bound in an [in] pattern.  Such a [match] expression is applied immediately to the %``%#"#old version#"#%''% of the variable to be refined, and the type checker is happy.
+   In particular, we can extend the [match] to return _functions over the free variables whose types we want to refine_.  In the case of [balance1], we only find ourselves wanting to refine the type of one tree variable at a time.  We match on one subtree of a node, and we want the type of the other subtree to be refined based on what we learn.  We indicate this with a [return] clause starting like [rbtree _ n -> ...], where [n] is bound in an [in] pattern.  Such a [match] expression is applied immediately to the %``%#"#old version#"#%''% of the variable to be refined, and the type checker is happy.
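
(** A minimal, self-contained sketch of the pattern, independent of the red-black trees (a hypothetical safe-predecessor function): matching on [n] alone would not refine the type of [pf], so the [match] returns a function over [pf] and is applied to it immediately. *)

Definition pred_safe (n : nat) (pf : n <> 0) : nat :=
  match n return n <> 0 -> nat with
    | 0 => fun pf' => match pf' eq_refl return nat with end
    | S n' => fun _ => n'
  end pf.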
 
    Here is the symmetric function [balance2], for cases where the possibly invalid tree appears on the right rather than on the left. *)
 
@@ -634,7 +634,7 @@
             end (ins b)
     end.
 
-  (** The one new trick is a variation of the convoy pattern.  In each of the last two pattern matches, we want to take advantage of the typing connection between the trees [a] and [b].  We might naively apply the convoy pattern directly on [a] in the first [match] and on [b] in the second.  This satisfies the type checker per se, but it does not satisfy the termination checker.  Inside each [match], we would be calling [ins] recursively on a locally bound variable.  The termination checker is not smart enough to trace the dataflow into that variable, so the checker does not know that this recursive argument is smaller than the original argument.  We make this fact clearer by applying the convoy pattern on %\textit{%#<i>#the result of a recursive call#</i>#%}%, rather than just on that call's argument.
+  (** The one new trick is a variation of the convoy pattern.  In each of the last two pattern matches, we want to take advantage of the typing connection between the trees [a] and [b].  We might naively apply the convoy pattern directly on [a] in the first [match] and on [b] in the second.  This satisfies the type checker per se, but it does not satisfy the termination checker.  Inside each [match], we would be calling [ins] recursively on a locally bound variable.  The termination checker is not smart enough to trace the dataflow into that variable, so the checker does not know that this recursive argument is smaller than the original argument.  We make this fact clearer by applying the convoy pattern on _the result of a recursive call_, rather than just on that call's argument.
 
      Finally, we are in the home stretch of our effort to define [insert].  We just need a few more definitions of non-recursive functions.  First, we need to give the final characterization of [insert]'s return type.  Inserting into a red-rooted tree gives a black-rooted tree where black depth has increased, and inserting into a black-rooted tree gives a tree where black depth has stayed the same and where the root is an arbitrary color. *)
 
@@ -772,7 +772,7 @@
   End present.
 End insert.
 
-(** We can generate executable OCaml code with the command %\index{Vernacular commands!Recursive Extraction}%[Recursive Extraction insert], which also automatically outputs the OCaml versions of all of [insert]'s dependencies.  In our previous extractions, we wound up with clean OCaml code.  Here, we find uses of %\index{Obj.magic}\texttt{%#<tt>#Obj.magic#</tt>#%}%, OCaml's unsafe cast operator for tweaking the apparent type of an expression in an arbitrary way.  Casts appear for this example because the return type of [insert] depends on the %\textit{%#<i>#value#</i>#%}% of the function's argument, a pattern which OCaml cannot handle.  Since Coq's type system is much more expressive than OCaml's, such casts are unavoidable in general.  Since the OCaml type-checker is no longer checking full safety of programs, we must rely on Coq's extractor to use casts only in provably safe ways. *)
+(** We can generate executable OCaml code with the command %\index{Vernacular commands!Recursive Extraction}%[Recursive Extraction insert], which also automatically outputs the OCaml versions of all of [insert]'s dependencies.  In our previous extractions, we wound up with clean OCaml code.  Here, we find uses of %\index{Obj.magic}\texttt{%#<tt>#Obj.magic#</tt>#%}%, OCaml's unsafe cast operator for tweaking the apparent type of an expression in an arbitrary way.  Casts appear for this example because the return type of [insert] depends on the _value_ of the function's argument, a pattern which OCaml cannot handle.  Since Coq's type system is much more expressive than OCaml's, such casts are unavoidable in general.  Since the OCaml type-checker is no longer checking full safety of programs, we must rely on Coq's extractor to use casts only in provably safe ways. *)
 
 (* begin hide *)
 Recursive Extraction insert.
@@ -1237,7 +1237,7 @@
 Eval hnf in matches a_b "b".
 (* end hide *)
 
-(** Many regular expression matching problems are easy to test.  The reader may run each of the following queries to verify that it gives the correct answer.  We use evaluation strategy %\index{tactics!hnf}%[hnf] to reduce each term to %\index{head-normal form}\emph{%#<i>#head-normal form#</i>#%}%, where the datatype constructor used to build its value is known. *)
+(** Many regular expression matching problems are easy to test.  The reader may run each of the following queries to verify that it gives the correct answer.  We use evaluation strategy %\index{tactics!hnf}%[hnf] to reduce each term to %\index{head-normal form}%_head-normal form_, where the datatype constructor used to build its value is known. *)
 
 Example a_star := Star (Char "a"%char).
 Eval hnf in matches a_star "".
--- a/src/Predicates.v	Sun May 06 17:15:15 2012 -0400
+++ b/src/Predicates.v	Wed Jun 06 11:25:13 2012 -0400
@@ -54,9 +54,9 @@
 
 (** Recall that [unit] is the type with only one value, and [True] is the proposition that always holds.  Despite this superficial difference between the two concepts, in both cases we can use the same inductive definition mechanism.  The connection goes further than this.  We see that we arrive at the definition of [True] by replacing [unit] by [True], [tt] by [I], and [Set] by [Prop].  The first two of these differences are superficial changes of names, while the third difference is the crucial one for separating programs from proofs.  A term [T] of type [Set] is a type of programs, and a term of type [T] is a program.  A term [T] of type [Prop] is a logical proposition, and its proofs are of type [T].  Chapter 12 goes into more detail about the theoretical differences between [Prop] and [Set].  For now, we will simply follow common intuitions about what a proof is.
 
-The type [unit] has one value, [tt].  The type [True] has one proof, [I].  Why distinguish between these two types?  Many people who have read about Curry-Howard in an abstract context and not put it to use in proof engineering answer that the two types in fact %\textit{%#<i>#should not#</i>#%}% be distinguished.  There is a certain aesthetic appeal to this point of view, but I want to argue that it is best to treat Curry-Howard very loosely in practical proving.  There are Coq-specific reasons for preferring the distinction, involving efficient compilation and avoidance of paradoxes in the presence of classical math, but I will argue that there is a more general principle that should lead us to avoid conflating programming and proving.
+The type [unit] has one value, [tt].  The type [True] has one proof, [I].  Why distinguish between these two types?  Many people who have read about Curry-Howard in an abstract context and not put it to use in proof engineering answer that the two types in fact _should not_ be distinguished.  There is a certain aesthetic appeal to this point of view, but I want to argue that it is best to treat Curry-Howard very loosely in practical proving.  There are Coq-specific reasons for preferring the distinction, involving efficient compilation and avoidance of paradoxes in the presence of classical math, but I will argue that there is a more general principle that should lead us to avoid conflating programming and proving.
 
-The essence of the argument is roughly this: to an engineer, not all functions of type [A -> B] are created equal, but all proofs of a proposition [P -> Q] are.  This idea is known as %\index{proof irrelevance}\textit{%#<i>#proof irrelevance#</i>#%}%, and its formalizations in logics prevent us from distinguishing between alternate proofs of the same proposition.  Proof irrelevance is compatible with, but not derivable in, Gallina.  Apart from this theoretical concern, I will argue that it is most effective to do engineering with Coq by employing different techniques for programs versus proofs.  Most of this book is organized around that distinction, describing how to program, by applying standard functional programming techniques in the presence of dependent types; and how to prove, by writing custom Ltac decision procedures.
+The essence of the argument is roughly this: to an engineer, not all functions of type [A -> B] are created equal, but all proofs of a proposition [P -> Q] are.  This idea is known as %\index{proof irrelevance}%_proof irrelevance_, and its formalizations in logics prevent us from distinguishing between alternate proofs of the same proposition.  Proof irrelevance is compatible with, but not derivable in, Gallina.  Apart from this theoretical concern, I will argue that it is most effective to do engineering with Coq by employing different techniques for programs versus proofs.  Most of this book is organized around that distinction, describing how to program, by applying standard functional programming techniques in the presence of dependent types; and how to prove, by writing custom Ltac decision procedures.
 
 With that perspective in mind, this chapter is sort of a mirror image of the last chapter, introducing how to define predicates with inductive definitions.  We will point out similarities in places, but much of the effective Coq user's bag of tricks is disjoint for predicates versus %``%#"#datatypes.#"#%''%  This chapter is also a covert introduction to dependent types, which are the foundation on which interesting inductive predicates are built, though we will rely on tactics to build dependently-typed proof terms for us for now.  A future chapter introduces more manual application of dependent types. *)
 
@@ -381,11 +381,11 @@
 
 (** One potential point of confusion in the presentation so far is the distinction between [bool] and [Prop].  [bool] is a datatype whose two values are [true] and [false], while [Prop] is a more primitive type that includes among its members [True] and [False].  Why not collapse these two concepts into one, and why must there be more than two states of mathematical truth?
 
-The answer comes from the fact that Coq implements %\index{constructive logic}\textit{%#<i>#constructive#</i>#%}% or %\index{intuitionistic logic|see{constructive logic}}\textit{%#<i>#intuitionistic#</i>#%}% logic, in contrast to the %\index{classical logic}\textit{%#<i>#classical#</i>#%}% logic that you may be more familiar with.  In constructive logic, classical tautologies like [~ ~ P -> P] and [P \/ ~ P] do not always hold.  In general, we can only prove these tautologies when [P] is %\index{decidability}\textit{%#<i>#decidable#</i>#%}%, in the sense of %\index{computability|see{decidability}}%computability theory.  The Curry-Howard encoding that Coq uses for [or] allows us to extract either a proof of [P] or a proof of [~ P] from any proof of [P \/ ~ P].  Since our proofs are just functional programs which we can run, a general %\index{law of the excluded middle}%law of the excluded middle would give us a decision procedure for the halting problem, where the instantiations of [P] would be formulas like %``%#"#this particular Turing machine halts.#"#%''%
+The answer comes from the fact that Coq implements %\index{constructive logic}%_constructive_ or %\index{intuitionistic logic|see{constructive logic}}%_intuitionistic_ logic, in contrast to the %\index{classical logic}%_classical_ logic that you may be more familiar with.  In constructive logic, classical tautologies like [~ ~ P -> P] and [P \/ ~ P] do not always hold.  In general, we can only prove these tautologies when [P] is %\index{decidability}%_decidable_, in the sense of %\index{computability|see{decidability}}%computability theory.  The Curry-Howard encoding that Coq uses for [or] allows us to extract either a proof of [P] or a proof of [~ P] from any proof of [P \/ ~ P].  Since our proofs are just functional programs which we can run, a general %\index{law of the excluded middle}%law of the excluded middle would give us a decision procedure for the halting problem, where the instantiations of [P] would be formulas like %``%#"#this particular Turing machine halts.#"#%''%
 
 Hence the distinction between [bool] and [Prop].  Programs of type [bool] are computational by construction; we can always run them to determine their results.  Many [Prop]s are undecidable, and so we can write more expressive formulas with [Prop]s than with [bool]s, but the inevitable consequence is that we cannot simply %``%#"#run a [Prop] to determine its truth.#"#%''%
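(** To make the contrast concrete, here is a throwaway pair of definitions, not part of this chapter's development: the [bool]-valued test can simply be run, while the corresponding [Prop] must be proved.
[[
Definition is_zero (n : nat) : bool :=
  match n with
    | O => true
    | _ => false
  end.

Eval compute in is_zero 3.
(* = false : bool *)

Theorem three_not_zero : 3 <> 0.
Proof.
  discriminate.
Qed.
]]
*)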
 
-Constructive logic lets us define all of the logical connectives in an aesthetically-appealing way, with orthogonal inductive definitions.  That is, each connective is defined independently using a simple, shared mechanism.  Constructivity also enables a trick called %\index{program extraction}\textit{%#<i>#program extraction#</i>#%}%, where we write programs by phrasing them as theorems to be proved.  Since our proofs are just functional programs, we can extract executable programs from our final proofs, which we could not do as naturally with classical proofs.
+Constructive logic lets us define all of the logical connectives in an aesthetically-appealing way, with orthogonal inductive definitions.  That is, each connective is defined independently using a simple, shared mechanism.  Constructivity also enables a trick called %\index{program extraction}%_program extraction_, where we write programs by phrasing them as theorems to be proved.  Since our proofs are just functional programs, we can extract executable programs from our final proofs, which we could not do as naturally with classical proofs.
 
 We will see more about Coq's program extraction facility in a later chapter.  However, I think it is worth interjecting another warning at this point, following up on the prior warning about taking the Curry-Howard correspondence too literally.  It is possible to write programs by theorem-proving methods in Coq, but hardly anyone does it.  It is almost always most useful to maintain the distinction between programs and proofs.  If you write a program by proving a theorem, you are likely to run into algorithmic inefficiencies that you introduced in your proof to make it easier to prove.  It is a shame to have to worry about such situations while proving tricky theorems, and it is a happy state of affairs that you almost certainly will not need to, with the ideal of extracting programs from proofs being confined mostly to theoretical studies. *)
 
@@ -492,9 +492,9 @@
 (* end thide *)
 Qed.
 
-(** We can call [isZero] a %\index{judgment}\textit{%#<i>#judgment#</i>#%}%, in the sense often used in the semantics of programming languages.  Judgments are typically defined in the style of %\index{natural deduction}\textit{%#<i>#natural deduction#</i>#%}%, where we write a number of %\index{inference rules}\textit{%#<i>#inference rules#</i>#%}% with premises appearing above a solid line and a conclusion appearing below the line.  In this example, the sole constructor [IsZero] of [isZero] can be thought of as the single inference rule for deducing [isZero], with nothing above the line and [isZero 0] below it.  The proof of [isZero_zero] demonstrates how we can apply an inference rule.
+(** We can call [isZero] a %\index{judgment}%_judgment_, in the sense often used in the semantics of programming languages.  Judgments are typically defined in the style of %\index{natural deduction}%_natural deduction_, where we write a number of %\index{inference rules}%_inference rules_ with premises appearing above a solid line and a conclusion appearing below the line.  In this example, the sole constructor [IsZero] of [isZero] can be thought of as the single inference rule for deducing [isZero], with nothing above the line and [isZero 0] below it.  The proof of [isZero_zero] demonstrates how we can apply an inference rule.
 
-The definition of [isZero] differs in an important way from all of the other inductive definitions that we have seen in this and the previous chapter.  Instead of writing just [Set] or [Prop] after the colon, here we write [nat -> Prop].  We saw examples of parameterized types like [list], but there the parameters appeared with names %\textit{%#<i>#before#</i>#%}% the colon.  Every constructor of a parameterized inductive type must have a range type that uses the same parameter, whereas the form we use here enables us to use different arguments to the type for different constructors.
+The definition of [isZero] differs in an important way from all of the other inductive definitions that we have seen in this and the previous chapter.  Instead of writing just [Set] or [Prop] after the colon, here we write [nat -> Prop].  We saw examples of parameterized types like [list], but there the parameters appeared with names _before_ the colon.  Every constructor of a parameterized inductive type must have a range type that uses the same parameter, whereas the form we use here enables us to use different arguments to the type for different constructors.
 
 For instance, our definition [isZero] makes the predicate provable only when the argument is [0].  We can see that the concept of equality is somehow implicit in the inductive definition mechanism.  The way this is accomplished is similar to the way that logic variables are used in %\index{Prolog}%Prolog, and it is a very powerful mechanism that forms a foundation for formalizing all of mathematics.  In fact, though it is natural to think of inductive types as folding in the functionality of equality, in Coq, the true situation is reversed, with equality defined as just another inductive type!%\index{Gallina terms!eq}\index{Gallina terms!refl\_equal}% *)
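(** To see a rule with a premise above the line, here is another illustrative judgment in the same style (not part of this development), whose index varies across constructors:
[[
Inductive le' (n : nat) : nat -> Prop :=
| Le'Refl : le' n n
| Le'Step : forall m, le' n m -> le' n (S m).

Theorem le'_1_3 : le' 1 3.
Proof.
  repeat constructor.
Qed.
]]
The proof applies [Le'Step] twice and then [Le'Refl], mirroring a small derivation tree on paper. *)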
 
@@ -507,7 +507,7 @@
  
   ]]
 
-  Behind the scenes, uses of infix [=] are expanded to instances of [eq].  We see that [eq] has both a parameter [x] that is fixed and an extra unnamed argument of the same type.  The type of [eq] allows us to state any equalities, even those that are provably false.  However, examining the type of equality's sole constructor [refl_equal], we see that we can only %\textit{%#<i>#prove#</i>#%}% equality when its two arguments are syntactically equal.  This definition turns out to capture all of the basic properties of equality, and the equality-manipulating tactics that we have seen so far, like [reflexivity] and [rewrite], are implemented treating [eq] as just another inductive type with a well-chosen definition.  Another way of stating that definition is: equality is defined as the least reflexive relation.
+  Behind the scenes, uses of infix [=] are expanded to instances of [eq].  We see that [eq] has both a parameter [x] that is fixed and an extra unnamed argument of the same type.  The type of [eq] allows us to state any equalities, even those that are provably false.  However, examining the type of equality's sole constructor [refl_equal], we see that we can only _prove_ equality when its two arguments are syntactically equal.  This definition turns out to capture all of the basic properties of equality, and the equality-manipulating tactics that we have seen so far, like [reflexivity] and [rewrite], are implemented treating [eq] as just another inductive type with a well-chosen definition.  Another way of stating that definition is: equality is defined as the least reflexive relation.
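(** For instance, because [2 + 2] and [4] are convertible, the constructor applies directly in this trivial example:
[[
Theorem two_plus_two : 2 + 2 = 4.
Proof.
  exact (refl_equal 4).  (* or simply: reflexivity *)
Qed.
]]
*)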
 
 Returning to the example of [isZero], we can see how to work with hypotheses that use this predicate. *)
 
@@ -739,7 +739,7 @@
   IHn : forall m : nat, even n -> even m -> even (n + m)
   ]]
 
-  Unfortunately, the goal mentions [n0] where it would need to mention [n] to match [IHn].  We could keep looking for a way to finish this proof from here, but it turns out that we can make our lives much easier by changing our basic strategy.  Instead of inducting on the structure of [n], we should induct %\textit{%#<i>#on the structure of one of the [even] proofs#</i>#%}%.  This technique is commonly called %\index{rule induction}\textit{%#<i>#rule induction#</i>#%}% in programming language semantics.  In the setting of Coq, we have already seen how predicates are defined using the same inductive type mechanism as datatypes, so the fundamental unity of rule induction with %``%#"#normal#"#%''% induction is apparent.
+  Unfortunately, the goal mentions [n0] where it would need to mention [n] to match [IHn].  We could keep looking for a way to finish this proof from here, but it turns out that we can make our lives much easier by changing our basic strategy.  Instead of inducting on the structure of [n], we should induct _on the structure of one of the [even] proofs_.  This technique is commonly called %\index{rule induction}%_rule induction_ in programming language semantics.  In the setting of Coq, we have already seen how predicates are defined using the same inductive type mechanism as datatypes, so the fundamental unity of rule induction with %``%#"#normal#"#%''% induction is apparent.
 
   Recall that tactics like [induction] and [destruct] may be passed numbers to refer to unnamed lefthand sides of implications in the conclusion, where the argument [n] refers to the [n]th such hypothesis. *)
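(** As a tiny, self-contained illustration of that numeric form, [destruct 1] below refers to the unnamed hypothesis [A \/ B]:
[[
Theorem or_comm' : forall A B : Prop, A \/ B -> B \/ A.
Proof.
  destruct 1; [ right | left ]; assumption.
Qed.
]]
*)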
 
@@ -892,7 +892,7 @@
   intros; eapply even_contra'; eauto.
 Qed.
 
-(** We use a variant %\index{tactics!apply}%[eapply] of [apply] which has the same relationship to [apply] as [eauto] has to [auto].  An invocation of [apply] only succeeds if all arguments to the rule being used can be determined from the form of the goal, whereas [eapply] will introduce unification variables for undetermined arguments.  In this case, [eauto] is able to determine the right values for those unification variables, using (unsurprisingly) a variant of the classic algorithm for %\emph{%#<i>#unification#</i>#%}~\cite{unification}%.
+(** We use a variant %\index{tactics!apply}%[eapply] of [apply] which has the same relationship to [apply] as [eauto] has to [auto].  An invocation of [apply] only succeeds if all arguments to the rule being used can be determined from the form of the goal, whereas [eapply] will introduce unification variables for undetermined arguments.  In this case, [eauto] is able to determine the right values for those unification variables, using (unsurprisingly) a variant of the classic algorithm for _unification_ %\cite{unification}%.
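(** Here is a toy example of the difference, separate from the [even] development.  The middle value [y] never appears in the conclusion of [Rtrans], so [apply Rtrans] would fail with an uninstantiated variable, while [eapply] introduces a unification variable that [eauto] later fills in from the hypotheses.
[[
Section eapply_example.
  Variable R : nat -> nat -> Prop.
  Hypothesis Rtrans : forall x y z, R x y -> R y z -> R x z.

  Theorem R_step2 : forall a b c, R a b -> R b c -> R a c.
  Proof.
    intros; eapply Rtrans; eauto.
  Qed.
End eapply_example.
]]
*)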
 
 By considering an alternate attempt at proving the lemma, we can see another common pitfall of inductive proofs in Coq.  Imagine that we had tried to prove [even_contra'] with all of the [forall] quantifiers moved to the front of the lemma statement. *)
 
@@ -913,7 +913,7 @@
  
    ]]
 
-   We are out of luck here.  The inductive hypothesis is trivially true, since its assumption is false.  In the version of this proof that succeeded, [IHeven] had an explicit quantification over [n].  This is because the quantification of [n] %\textit{%#<i>#appeared after the thing we are inducting on#</i>#%}% in the theorem statement.  In general, quantified variables and hypotheses that appear before the induction object in the theorem statement stay fixed throughout the inductive proof.  Variables and hypotheses that are quantified after the induction object may be varied explicitly in uses of inductive hypotheses. *)
+   We are out of luck here.  The inductive hypothesis is trivially true, since its assumption is false.  In the version of this proof that succeeded, [IHeven] had an explicit quantification over [n].  This is because the quantification of [n] _appeared after the thing we are inducting on_ in the theorem statement.  In general, quantified variables and hypotheses that appear before the induction object in the theorem statement stay fixed throughout the inductive proof.  Variables and hypotheses that are quantified after the induction object may be varied explicitly in uses of inductive hypotheses. *)
 
 Abort.
 
--- a/src/ProgLang.v	Sun May 06 17:15:15 2012 -0400
+++ b/src/ProgLang.v	Wed Jun 06 11:25:13 2012 -0400
@@ -82,7 +82,7 @@
 
 (** * Dependent de Bruijn Indices *)
 
-(** The first encoding is one we met first in Chapter 9, the %\emph{%#<i>#dependent de Bruijn index#</i>#%}% encoding.  We represent program syntax terms in a type familiy parametrized by a list of types, representing the %\emph{%#<i>#typing context#</i>#%}%, or information on which free variables are in scope and what their types are.  Variables are represented in a way isomorphic to the natural numbers, where number 0 represents the first element in the context, number 1 the second element, and so on.  Actually, instead of numbers, we use the [member] dependent type family from Chapter 9. *)
+(** The first encoding is one we met first in Chapter 9, the _dependent de Bruijn index_ encoding.  We represent program syntax terms in a type family parametrized by a list of types, representing the _typing context_, or information on which free variables are in scope and what their types are.  Variables are represented in a way isomorphic to the natural numbers, where number 0 represents the first element in the context, number 1 the second element, and so on.  Actually, instead of numbers, we use the [member] dependent type family from Chapter 9. *)
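(** For reference, [member] has roughly the following shape; this is only a sketch, and the Chapter 9 version is stated with section variables shared by other definitions.
[[
Require Import List.

Section member.
  Variable A : Type.
  Variable elm : A.

  Inductive member : list A -> Type :=
  | HFirst : forall ls, member (elm :: ls)
  | HNext : forall x ls, member ls -> member (x :: ls).
End member.
]]
A value of [member ls], for a fixed element [elm], encodes a position at which [elm] occurs in [ls], much like a de Bruijn index. *)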
 
 Module FirstOrder.
 
@@ -124,7 +124,7 @@
       | Let _ _ _ e1 e2 => fun s => termDenote e2 (termDenote e1 s ::: s)
     end.
 
-  (** With this term representation, some program transformations are easy to implement and prove correct.  Certainly we would be worried if this were not the the case for the %\emph{%#<i>#identity#</i>#%}% transformation, which takes a term apart and reassembles it. *)
+  (** With this term representation, some program transformations are easy to implement and prove correct.  Certainly we would be worried if this were not the case for the _identity_ transformation, which takes a term apart and reassembles it. *)
 
   Fixpoint ident G t (e : term G t) : term G t :=
     match e with
@@ -144,17 +144,27 @@
     induction e; t.
   Qed.
 
-  (** A slightly more ambitious transformation belongs to the family of %\emph{%#<i>#constant folding#</i>#%}% optimizations we have used as examples in other chapters. *)
+  (** A slightly more ambitious transformation belongs to the family of _constant folding_ optimizations we have used as examples in other chapters. *)
+
+  Axiom admit : forall T, T.
 
   Fixpoint cfold G t (e : term G t) : term G t :=
     match e with
       | Plus G e1 e2 =>
         let e1' := cfold e1 in
         let e2' := cfold e2 in
-          match e1', e2' return _ with
-            | Const _ n1, Const _ n2 => Const (n1 + n2)
-            | _, _ => Plus e1' e2'
-          end
+        let maybeOpt := match e1' return _ with
+                          | Const _ n1 =>
+                            match e2' return _ with
+                              | Const _ n2 => Some (Const (n1 + n2))
+                              | _ => None
+                            end
+                          | _ => None
+                        end in
+        match maybeOpt with
+          | None => Plus e1' e2'
+          | Some e' => e'
+        end
 
       | Abs _ _ _ e1 => Abs (cfold e1)
       | App _ _ _ e1 e2 => App (cfold e1) (cfold e2)
@@ -178,9 +188,9 @@
               end; t).
   Qed.
 
-  (** The transformations we have tried so far have been straightforward because they do not have interesting effects on the variable binding structure of terms.  The dependent de Bruijn representation is called %\index{first-order syntax}\emph{%#<i>#first-order#</i>#%}% because it encodes variable identity explicitly; all such representations incur bookkeeping overheads in transformations that rearrange binding structure.
+  (** The transformations we have tried so far have been straightforward because they do not have interesting effects on the variable binding structure of terms.  The dependent de Bruijn representation is called %\index{first-order syntax}%_first-order_ because it encodes variable identity explicitly; all such representations incur bookkeeping overheads in transformations that rearrange binding structure.
 
-     As an example of a tricky transformation, consider one that removes all uses of %``%#"#[let x = e1 in e2]#"#%''% by substituting [e1] for [x] in [e2].  We will implement the translation by pairing the %``%#"#compile-time#"#%''% typing environment with a %``%#"#run-time#"#%''% value environment or %\emph{%#<i>#substitution#</i>#%}%, mapping each variable to a value to be substituted for it.  Such a substitute term may be placed within a program in a position with a larger typing environment than applied at the point where the substitute term was chosen.  To support such context transplantation, we need %\emph{%#<i>#lifting#</i>#%}%, a standard de Bruijn indices operation.  With dependent typing, lifting corresponds to weakening for typing judgments.
+     As an example of a tricky transformation, consider one that removes all uses of %``%#"#[let x = e1 in e2]#"#%''% by substituting [e1] for [x] in [e2].  We will implement the translation by pairing the %``%#"#compile-time#"#%''% typing environment with a %``%#"#run-time#"#%''% value environment or _substitution_, mapping each variable to a value to be substituted for it.  Such a substitute term may be placed within a program in a position with a larger typing environment than the one in effect at the point where the substitute term was chosen.  To support such context transplantation, we need _lifting_, a standard de Bruijn indices operation.  With dependent typing, lifting corresponds to weakening for typing judgments.
 
      The fundamental goal of lifting is to add a new variable to a typing context, maintaining the validity of a term in the expanded context.  To express the operation of adding a type to a context, we use a helper function [insertAt]. *)
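(** The helper has roughly this recursion structure, sketched here over an arbitrary element type; the file's own definition is specialized to typing contexts.
[[
Require Import List.

Fixpoint insertAt (A : Type) (x : A) (G : list A) (n : nat) : list A :=
  match n with
    | O => x :: G
    | S n' => match G with
                | nil => x :: G
                | x' :: G' => x' :: insertAt A x G' n'
              end
  end.
]]
*)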
 
@@ -222,7 +232,7 @@
       | Let _ _ _ e1 e2 => Let (lift' t' n e1) (lift' t' (S n) e2)
     end.
 
-  (** In the [Let] removal transformation, we only need to apply lifting to add a new variable at the %\emph{%#<i>#beginning#</i>#%}% of a typing context, so we package lifting into this final, simplified form. *)
+  (** In the [Let] removal transformation, we only need to apply lifting to add a new variable at the _beginning_ of a typing context, so we package lifting into this final, simplified form. *)
 
   Definition lift G t' t (e : term G t) : term (t' :: G) t :=
     lift' t' O e.
@@ -308,11 +318,11 @@
 
 (** * Parametric Higher-Order Abstract Syntax *)
 
-(** In contrast to first-order encodings, %\index{higher-order syntax}\emph{%#<i>#higher-order#</i>#%}% encodings avoid explicit modeling of variable identity.  Instead, the binding constructs of an %\index{object language}\emph{%#<i>#object language#</i>#%}% (the language being formalized) can be represented using the binding constructs of the %\index{meta language}\emph{%#<i>#meta language#</i>#%}% (the language in which the formalization is done).  The best known higher-order encoding is called %\index{higher-order abstract syntax}\index{HOAS}\emph{%#<i>#higher-order abstract syntax (HOAS)#</i>#%}~\cite{HOAS}%, and we can start by attempting to apply it directly in Coq. *)
+(** In contrast to first-order encodings, %\index{higher-order syntax}%_higher-order_ encodings avoid explicit modeling of variable identity.  Instead, the binding constructs of an %\index{object language}%_object language_ (the language being formalized) can be represented using the binding constructs of the %\index{meta language}%_meta language_ (the language in which the formalization is done).  The best known higher-order encoding is called %\index{higher-order abstract syntax}\index{HOAS}%_higher-order abstract syntax (HOAS)_ %\cite{HOAS}%, and we can start by attempting to apply it directly in Coq. *)
 
 Module HigherOrder.
 
-  (** With HOAS, each object language binding construct is represented with a %\emph{%#<i>#function#</i>#%}% of the meta language.  Here is what we get if we apply that idea within an inductive definition of term syntax. *)
+  (** With HOAS, each object language binding construct is represented with a _function_ of the meta language.  Here is what we get if we apply that idea within an inductive definition of term syntax. *)
 
 (** %\vspace{-.15in}%[[
   Inductive term : type -> Type :=
@@ -327,7 +337,7 @@
 
    However, Coq rejects this definition for failing to meet the %\index{strict positivity restriction}%strict positivity restriction.  For instance, the constructor [Abs] takes an argument that is a function over the same type family [term] that we are defining.  Inductive definitions of this kind can be used to write non-terminating Gallina programs, which breaks the consistency of Coq's logic.
 
-   An alternate higher-order encoding is %\index{parametric higher-order abstract syntax}\index{PHOAS}\emph{%#<i>#parametric HOAS#</i>#%}%, as introduced by Washburn and Weirich%~\cite{BGB}% for Haskell and tweaked by me%~\cite{PhoasICFP08}% for use in Coq.  Here the idea is to parametrize the syntax type by a type family standing for a %\emph{%#<i>#representation of variables#</i>#%}%. *)
+   An alternate higher-order encoding is %\index{parametric higher-order abstract syntax}\index{PHOAS}%_parametric HOAS_, as introduced by Washburn and Weirich%~\cite{BGB}% for Haskell and tweaked by me%~\cite{PhoasICFP08}% for use in Coq.  Here the idea is to parametrize the syntax type by a type family standing for a _representation of variables_. *)
 
   Section var.
     Variable var : type -> Type.
@@ -348,13 +358,13 @@
   Implicit Arguments Const [var].
   Implicit Arguments Abs [var dom ran].
 
-  (** Coq accepts this definition because our embedded functions now merely take %\emph{%#<i>#variables#</i>#%}% as arguments, instead of arbitrary terms.  One might wonder whether there is an easy loophole to exploit here, instantiating the parameter [var] as [term] itself.  However, to do that, we would need to choose a variable representation for this nested mention of [term], and so on through an infinite descent into [term] arguments.
+  (** Coq accepts this definition because our embedded functions now merely take _variables_ as arguments, instead of arbitrary terms.  One might wonder whether there is an easy loophole to exploit here, instantiating the parameter [var] as [term] itself.  However, to do that, we would need to choose a variable representation for this nested mention of [term], and so on through an infinite descent into [term] arguments.
 
   We write the final type of a closed term using polymorphic quantification over all possible choices of [var] type family. *)
 
   Definition Term t := forall var, term var t.
 
-  (** Here are the new representations of the example terms from the last section.  Note how each is written as a function over a [var] choice, such that the specific choice has no impact on the %\emph{%#<i>#structure#</i>#%}% of the term. *)
+  (** Here are the new representations of the example terms from the last section.  Note how each is written as a function over a [var] choice, such that the specific choice has no impact on the _structure_ of the term. *)
 
   Example add : Term (Func Nat (Func Nat Nat)) := fun var =>
     Abs (fun x => Abs (fun y => Plus (Var x) (Var y))).
@@ -362,7 +372,7 @@
   Example three_the_hard_way : Term Nat := fun var =>
     App (App (add var) (Const 1)) (Const 2).
 
-  (** The argument [var] does not even appear in the function body for [add].  How can that be?  By giving our terms expressive types, we allow Coq to infer many arguments for us.  In fact, we do not even need to name the [var] argument!  Even though these formal parameters appear as underscores, they %\emph{%#<i>#are#</i>#%}% mentioned in the function bodies that type inference calculates. *)
+  (** The argument [var] does not even appear in the function body for [add].  How can that be?  By giving our terms expressive types, we allow Coq to infer many arguments for us.  In fact, we do not even need to name the [var] argument!  Even though these formal parameters appear as underscores, they _are_ mentioned in the function bodies that type inference calculates. *)
 
   Example add' : Term (Func Nat (Func Nat Nat)) := fun _ =>
     Abs (fun x => Abs (fun y => Plus (Var x) (Var y))).
@@ -373,7 +383,7 @@
 
   (** ** Functional Programming with PHOAS *)
 
-  (** It may not be at all obvious that the PHOAS representation admits the crucial computable operations.  The key to effective deconstruction of PHOAS terms is one principle: treat the [var] parameter as an unconstrained choice of %\emph{%#<i>#which data should be annotated on each variable#</i>#%}%.  We will begin with a simple example, that of counting how many variable nodes appear in a PHOAS term.  This operation requires no data annotated on variables, so we simply annotate variables with [unit] values.  Note that, when we go under binders in the cases for [Abs] and [Let], we must provide the data value to annotate on the new variable we pass beneath.  For our current choice of [unit] data, we always pass [tt]. *)
+  (** It may not be at all obvious that the PHOAS representation admits the crucial computable operations.  The key to effective deconstruction of PHOAS terms is one principle: treat the [var] parameter as an unconstrained choice of _which data should be annotated on each variable_.  We will begin with a simple example, that of counting how many variable nodes appear in a PHOAS term.  This operation requires no data annotated on variables, so we simply annotate variables with [unit] values.  Note that, when we go under binders in the cases for [Abs] and [Let], we must provide the data value to annotate on the new variable we pass beneath.  For our current choice of [unit] data, we always pass [tt]. *)
 
   Fixpoint countVars t (e : term (fun _ => unit) t) : nat :=
     match e with
@@ -427,7 +437,7 @@
      ]]
      *)
 
-  (** However, it is not necessary to convert to first-order form to support many common operations on terms.  For instance, we can implement substitution of one term in another.  The key insight here is to %\emph{%#<i>#tag variables with terms#</i>#%}%, so that, on encountering a variable, we can simply replace it by the term in its tag.  We will call this function initially on a term with exactly one free variable, tagged with the appropriate substitute.  During recursion, new variables are added, but they are only tagged with their own term equivalents.  Note that this function [squash] is parametrized over a specific [var] choice. *)
+  (** However, it is not necessary to convert to first-order form to support many common operations on terms.  For instance, we can implement substitution of one term in another.  The key insight here is to _tag variables with terms_, so that, on encountering a variable, we can simply replace it by the term in its tag.  We will call this function initially on a term with exactly one free variable, tagged with the appropriate substitute.  During recursion, new variables are added, but they are only tagged with their own term equivalents.  Note that this function [squash] is parametrized over a specific [var] choice. *)
 
   Fixpoint squash var t (e : term (term var) t) : term var t :=
     match e with
@@ -463,7 +473,7 @@
                (Const 1)) (Const 2)) (Const 3)
      ]]
 
-     One further development, which may seem surprising at first, is that we can also implement a usual term denotation function, when we %\emph{%#<i>#tag variables with their denotations#</i>#%}%. *)
+     One further development, which may seem surprising at first, is that we can also implement a usual term denotation function, when we _tag variables with their denotations_. *)
 
   Fixpoint termDenote t (e : term typeDenote t) : typeDenote t :=
     match e with
@@ -694,11 +704,11 @@
   
   (** ** Establishing Term Well-Formedness *)
 
-  (** Can there be values of type [Term t] that are not well-formed according to [Wf]?  We expect that Gallina satisfies key %\index{parametricity}\emph{%#<i>#parametricity#</i>#%}~\cite{parametricity}% properties, which indicate how polymorphic types may only be inhabited by specific values.  We omit details of parametricity theorems here, but [forall t (E : Term t), Wf E] follows the flavor of such theorems.  One option would be to assert that fact as an axiom, %``%#"#proving#"#%''% that any output of any of our translations is well-formed.  We could even prove the soundness of the theorem on paper meta-theoretically, say by considering some particular model of CIC.
+  (** Can there be values of type [Term t] that are not well-formed according to [Wf]?  We expect that Gallina satisfies key %\index{parametricity}%_parametricity_ %\cite{parametricity}% properties, which indicate how polymorphic types may only be inhabited by specific values.  We omit details of parametricity theorems here, but [forall t (E : Term t), Wf E] follows the flavor of such theorems.  One option would be to assert that fact as an axiom, %``%#"#proving#"#%''% that any output of any of our translations is well-formed.  We could even prove the soundness of the theorem on paper meta-theoretically, say by considering some particular model of CIC.
 
      To be more cautious, we could prove [Wf] for every term that interests us, threading such proofs through all transformations.  Here is an example exercise of that kind, for [Unlet].
 
-     First, we prove that [wf] is %\emph{%#<i>#monotone#</i>#%}%, in that a given instance continues to hold as we add new variable pairs to the variable isomorphism. *)
+     First, we prove that [wf] is _monotone_, in that a given instance continues to hold as we add new variable pairs to the variable isomorphism. *)
 
   Hint Constructors wf.
   Hint Extern 1 (In _ _) => simpl; tauto.
--- a/src/Reflection.v	Sun May 06 17:15:15 2012 -0400
+++ b/src/Reflection.v	Wed Jun 06 11:25:13 2012 -0400
@@ -18,7 +18,7 @@
 
 (** %\chapter{Proof by Reflection}% *)
 
-(** The last chapter highlighted a very heuristic approach to proving.  In this chapter, we will study an alternative technique, %\index{proof by reflection}\textit{%#<i>#proof by reflection#</i>#%}~\cite{reflection}%.  We will write, in Gallina, decision procedures with proofs of correctness, and we will appeal to these procedures in writing very short proofs.  Such a proof is checked by running the decision procedure.  The term %\textit{%#<i>#reflection#</i>#%}% applies because we will need to translate Gallina propositions into values of inductive types representing syntax, so that Gallina programs may analyze them. *)
+(** The last chapter highlighted a very heuristic approach to proving.  In this chapter, we will study an alternative technique, %\index{proof by reflection}%_proof by reflection_ %\cite{reflection}%.  We will write, in Gallina, decision procedures with proofs of correctness, and we will appeal to these procedures in writing very short proofs.  Such a proof is checked by running the decision procedure.  The term _reflection_ applies because we will need to translate Gallina propositions into values of inductive types representing syntax, so that Gallina programs may analyze them. *)
 
 
 (** * Proving Evenness *)
@@ -81,7 +81,7 @@
     end); auto.
 Defined.
 
-(** The function [check_even] may be viewed as a %\emph{%#<i>#verified decision procedure#</i>#%}%, because its type guarantees that it never returns [Yes] for inputs that are not even.
+(** The function [check_even] may be viewed as a _verified decision procedure_, because its type guarantees that it never returns [Yes] for inputs that are not even.
 
    Now we can use dependent pattern-matching to write a function that performs a surprising feat.  When given a [partial P], this function [partialOut] returns a proof of [P] if the [partial] value contains a proof, and it returns a (useless) proof of [True] otherwise.  From the standpoint of ML and Haskell programming, it seems impossible to write such a type, but it is trivial with a [return] annotation. *)
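(** The same idea can be conveyed with only standard library types: the [return] clause computes a different result type for each branch, so the [Some] branch may return the proof itself.  This is just a sketch with [option]; the actual development uses the richer [partial] type.
[[
Definition optionOut (P : Prop) (x : option P) :=
  match x return (match x with
                    | Some _ => P
                    | None => True
                  end) with
    | Some pf => pf
    | None => I
  end.
]]
*)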
 
@@ -167,7 +167,7 @@
 
     As we might expect, the proof that [tauto] builds contains explicit applications of natural deduction rules.  For large formulas, this can add a linear amount of proof size overhead, beyond the size of the input.
 
-   To write a reflective procedure for this class of goals, we will need to get into the actual %``%#"#reflection#"#%''% part of %``%#"#proof by reflection.#"#%''%  It is impossible to case-analyze a [Prop] in any way in Gallina.  We must %\index{reification}\textit{%#<i>#reify#</i>#%}% [Prop] into some type that we %\textit{%#<i>#can#</i>#%}% analyze.  This inductive type is a good candidate: *)
+   To write a reflective procedure for this class of goals, we will need to get into the actual %``%#"#reflection#"#%''% part of %``%#"#proof by reflection.#"#%''%  It is impossible to case-analyze a [Prop] in any way in Gallina.  We must %\index{reification}%_reify_ [Prop] into some type that we _can_ analyze.  This inductive type is a good candidate: *)
 
 (* begin thide *)
 Inductive taut : Set :=
@@ -176,7 +176,7 @@
 | TautOr : taut -> taut -> taut
 | TautImp : taut -> taut -> taut.
 
-(** We write a recursive function to %\emph{%#<i>#reflect#</i>#%}% this syntax back to [Prop].  Such functions are also called %\index{interpretation function}\emph{%#<i>#interpretation functions#</i>#%}%, and have used them in previous examples to give semantics to small programming languages. *)
+(** We write a recursive function to _reflect_ this syntax back to [Prop].  Such functions are also called %\index{interpretation function}%_interpretation functions_, and we have used them in previous examples to give semantics to small programming languages. *)
 
 Fixpoint tautDenote (t : taut) : Prop :=
   match t with
@@ -237,7 +237,7 @@
      : True /\ True -> True \/ True /\ (True -> True)
     ]]
 
-    It is worth considering how the reflective tactic improves on a pure-Ltac implementation.  The formula reification process is just as ad-hoc as before, so we gain little there.  In general, proofs will be more complicated than formula translation, and the %``%#"#generic proof rule#"#%''% that we apply here %\textit{%#<i>#is#</i>#%}% on much better formal footing than a recursive Ltac function.  The dependent type of the proof guarantees that it %``%#"#works#"#%''% on any input formula.  This is all in addition to the proof-size improvement that we have already seen.
+    It is worth considering how the reflective tactic improves on a pure-Ltac implementation.  The formula reification process is just as ad-hoc as before, so we gain little there.  In general, proofs will be more complicated than formula translation, and the %``%#"#generic proof rule#"#%''% that we apply here _is_ on much better formal footing than a recursive Ltac function.  The dependent type of the proof guarantees that it %``%#"#works#"#%''% on any input formula.  This is all in addition to the proof-size improvement that we have already seen.
 
     It may also be worth pointing out that our previous example of evenness testing used a function [partialOut] for sound handling of input goals that the verified decision procedure fails to prove.  Here, we prove that our procedure [tautTrue] (recall that an inductive proof may be viewed as a recursive procedure) is able to prove any goal representable in [taut], so no extra step is necessary. *)
 
@@ -785,7 +785,7 @@
     | _ => constr:(Const e)
   end.
 
-(** Recall that a regular Ltac pattern variable [?X] only matches terms that %\emph{%#<i>#do not mention new variables introduced within the pattern#</i>#%}%.  In our naive implementation, the case for matching function abstractions matches the function body in a way that prevents it from mentioning the function argument!  Our code above plays fast and loose with the function body in a way that leads to independent problems, but we could change the code so that it indeed handles function abstractions that ignore their arguments.
+(** Recall that a regular Ltac pattern variable [?X] only matches terms that _do not mention new variables introduced within the pattern_.  In our naive implementation, the case for matching function abstractions matches the function body in a way that prevents it from mentioning the function argument!  Our code above plays fast and loose with the function body in a way that leads to independent problems, but we could change the code so that it indeed handles function abstractions that ignore their arguments.
 
    To handle functions in general, we will use the pattern variable form [@?X], which allows [X] to mention newly introduced variables that are declared explicitly.  For instance: *)
 
@@ -804,7 +804,7 @@
     | _ => constr:(Const e)
   end.
 
-(** Now, in the abstraction case, we bind [E1] as a function from an [x] value to the value of the abstraction body.  Unfortunately, our recursive call there is not destined for success.  It will match the same abstraction pattern and trigger another recursive call, and so on through infinite recursion.  One last refactoring yields a working procedure.  The key idea is to consider every input to [refl'] as %\emph{%#<i>#a function over the values of variables introduced during recursion#</i>#%}%. *)
+(** Now, in the abstraction case, we bind [E1] as a function from an [x] value to the value of the abstraction body.  Unfortunately, our recursive call there is not destined for success.  It will match the same abstraction pattern and trigger another recursive call, and so on through infinite recursion.  One last refactoring yields a working procedure.  The key idea is to consider every input to [refl'] as _a function over the values of variables introduced during recursion_. *)
 
 Reset refl'.
 Ltac refl' e :=
--- a/src/StackMachine.v	Sun May 06 17:15:15 2012 -0400
+++ b/src/StackMachine.v	Wed Jun 06 11:25:13 2012 -0400
@@ -1,4 +1,4 @@
-(* Copyright (c) 2008-2011, Adam Chlipala
+(* Copyright (c) 2008-2012, Adam Chlipala
  * 
  * This work is licensed under a
  * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
@@ -85,15 +85,15 @@
 
 ]]
 
-Languages like Haskell and ML have a convenient %\index{principal types}\index{type inference}\emph{%#<i>#principal types#</i>#%}% property, which gives us strong guarantees about how effective type inference will be.  Unfortunately, Coq's type system is so expressive that any kind of %``%#"#complete#"#%''% type inference is impossible, and the task even seems to be hard heuristically in practice.  Nonetheless, Coq includes some very helpful heuristics, many of them copying the workings of Haskell and ML type-checkers for programs that fall in simple fragments of Coq's language.
+Languages like Haskell and ML have a convenient %\index{principal types}\index{type inference}%_principal types_ property, which gives us strong guarantees about how effective type inference will be.  Unfortunately, Coq's type system is so expressive that any kind of %``%#"#complete#"#%''% type inference is impossible, and the task even seems to be hard heuristically in practice.  Nonetheless, Coq includes some very helpful heuristics, many of them copying the workings of Haskell and ML type-checkers for programs that fall in simple fragments of Coq's language.
 
-This is as good a time as any to mention the preponderance of different languages associated with Coq.  The theoretical foundation of Coq is a formal system called the %\index{Calculus of Inductive Constructions}\index{CIC|see{Calculus of Inductive Constructions}}\emph{%#<i>#Calculus of Inductive Constructions (CIC)#</i>#%}~\cite{CIC}%, which is an extension of the older %\index{Calculus of Constructions}\index{CoC|see{Calculus of Constructions}}\emph{%#<i>#Calculus of Constructions (CoC)#</i>#%}~\cite{CoC}%.  CIC is quite a spartan foundation, which is helpful for proving metatheory but not so helpful for real development.  Still, it is nice to know that it has been proved that CIC enjoys properties like %\index{strong normalization}\emph{%#<i>#strong normalization#</i>#%}~\cite{CIC}%, meaning that every program (and, more importantly, every proof term) terminates; and %\index{relative consistency}\emph{%#<i>#relative consistency#</i>#%}~\cite{SetsInTypes}% with systems like versions of %\index{Zermelo-Fraenkel set theory}%Zermelo-Fraenkel set theory, which roughly means that you can believe that Coq proofs mean that the corresponding propositions are %``%#"#really true,#"#%''% if you believe in set theory.
+This is as good a time as any to mention the preponderance of different languages associated with Coq.  The theoretical foundation of Coq is a formal system called the %\index{Calculus of Inductive Constructions}\index{CIC|see{Calculus of Inductive Constructions}}%_Calculus of Inductive Constructions (CIC)_ %\cite{CIC}%, which is an extension of the older %\index{Calculus of Constructions}\index{CoC|see{Calculus of Constructions}}%_Calculus of Constructions (CoC)_ %\cite{CoC}%.  CIC is quite a spartan foundation, which is helpful for proving metatheory but not so helpful for real development.  Still, it is nice to know that it has been proved that CIC enjoys properties like %\index{strong normalization}%_strong normalization_ %\cite{CIC}%, meaning that every program (and, more importantly, every proof term) terminates; and %\index{relative consistency}%_relative consistency_ %\cite{SetsInTypes}% with systems like versions of %\index{Zermelo-Fraenkel set theory}%Zermelo-Fraenkel set theory, which roughly means that you can believe that Coq proofs mean that the corresponding propositions are %``%#"#really true,#"#%''% if you believe in set theory.
 
-Coq is actually based on an extension of CIC called %\index{Gallina}\emph{%#<i>#Gallina#</i>#%}%.  The text after the [:=] and before the period in the last code example is a term of Gallina.  Gallina adds many useful features that are not compiled internally to more primitive CIC features.  The important metatheorems about CIC have not been extended to the full breadth of these features, but most Coq users do not seem to lose much sleep over this omission.
+Coq is actually based on an extension of CIC called %\index{Gallina}%_Gallina_.  The text after the [:=] and before the period in the last code example is a term of Gallina.  Gallina adds many useful features that are not compiled internally to more primitive CIC features.  The important metatheorems about CIC have not been extended to the full breadth of these features, but most Coq users do not seem to lose much sleep over this omission.
 
-Next, there is %\index{Ltac}\emph{%#<i>#Ltac#</i>#%}%, Coq's domain-specific language for writing proofs and decision procedures. We will see some basic examples of Ltac later in this chapter, and much of this book is devoted to more involved Ltac examples.
+Next, there is %\index{Ltac}%_Ltac_, Coq's domain-specific language for writing proofs and decision procedures. We will see some basic examples of Ltac later in this chapter, and much of this book is devoted to more involved Ltac examples.
 
-Finally, commands like [Inductive] and [Definition] are part of %\index{Vernacular commands}\emph{%#<i>#the Vernacular#</i>#%}%, which includes all sorts of useful queries and requests to the Coq system.  Every Coq source file is a series of vernacular commands, where many command forms take arguments that are Gallina or Ltac programs.  (Actually, Coq source files are more like %\emph{%#<i>#trees#</i>#%}% of vernacular commands, thanks to various nested scoping constructs.)
+Finally, commands like [Inductive] and [Definition] are part of %\index{Vernacular commands}%_the Vernacular_, which includes all sorts of useful queries and requests to the Coq system.  Every Coq source file is a series of vernacular commands, where many command forms take arguments that are Gallina or Ltac programs.  (Actually, Coq source files are more like _trees_ of vernacular commands, thanks to various nested scoping constructs.)
 
 %\medskip%
 
@@ -107,7 +107,7 @@
 
 (** We declare explicitly that this is a recursive definition, using the keyword [Fixpoint].  The rest should be old hat for functional programmers. *)
 
-(** It is convenient to be able to test definitions before starting to prove things about them.  We can verify that our semantics is sensible by evaluating some sample uses, using the command %\index{Vernacular commands!Eval}%[Eval].  This command takes an argument expressing a %\index{reduction strategy}\emph{%#<i>#reduction strategy#</i>#%}%, or an %``%#"#order of evaluation.#"#%''%  Unlike with ML, which hardcodes an %\emph{%#<i>#eager#</i>#%}% reduction strategy, or Haskell, which hardcodes a %\emph{%#<i>#lazy#</i>#%}% strategy, in Coq we are free to choose between these and many other orders of evaluation, because all Coq programs terminate.  In fact, Coq silently checked %\index{termination checking}%termination of our [Fixpoint] definition above, using a simple heuristic based on monotonically decreasing size of arguments across recursive calls.  Specifically, recursive calls must be made on arguments that were pulled out of the original recursive argument with [match] expressions.  (In Chapter 7, we will see some ways of getting around this restriction, though simply removing the restriction would leave Coq useless as a theorem proving tool, for reasons we will start to learn about in the next chapter.)
+(** It is convenient to be able to test definitions before starting to prove things about them.  We can verify that our semantics is sensible by evaluating some sample uses, using the command %\index{Vernacular commands!Eval}%[Eval].  This command takes an argument expressing a %\index{reduction strategy}%_reduction strategy_, or an %``%#"#order of evaluation.#"#%''%  Unlike with ML, which hardcodes an _eager_ reduction strategy, or Haskell, which hardcodes a _lazy_ strategy, in Coq we are free to choose between these and many other orders of evaluation, because all Coq programs terminate.  In fact, Coq silently checked %\index{termination checking}%termination of our [Fixpoint] definition above, using a simple heuristic based on monotonically decreasing size of arguments across recursive calls.  Specifically, recursive calls must be made on arguments that were pulled out of the original recursive argument with [match] expressions.  (In Chapter 7, we will see some ways of getting around this restriction, though simply removing the restriction would leave Coq useless as a theorem proving tool, for reasons we will start to learn about in the next chapter.)
 
 To return to our test evaluations, we run the [Eval] command using the [simpl] evaluation strategy, whose definition is best postponed until we have learned more about Coq's foundations, but which usually gets the job done. *)
 
@@ -203,12 +203,12 @@
 (* end hide *)
 (* begin thide *)
 
-(** Though a pencil-and-paper proof might clock out at this point, writing %``%#"#by a routine induction on [e],#"#%''% it turns out not to make sense to attack this proof directly.  We need to use the standard trick of %\index{strengthening the induction hypothesis}\emph{%#<i>#strengthening the induction hypothesis#</i>#%}%.  We do that by proving an auxiliary lemma, using the command [Lemma] that is a synonym for [Theorem], conventionally used for less important theorems that appear in the proofs of primary theorems.%\index{Vernacular commands!Lemma}% *)
+(** Though a pencil-and-paper proof might clock out at this point, writing %``%#"#by a routine induction on [e],#"#%''% it turns out not to make sense to attack this proof directly.  We need to use the standard trick of %\index{strengthening the induction hypothesis}%_strengthening the induction hypothesis_.  We do that by proving an auxiliary lemma, using the command [Lemma] that is a synonym for [Theorem], conventionally used for less important theorems that appear in the proofs of primary theorems.%\index{Vernacular commands!Lemma}% *)
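As a generic illustration of this trick (an editor's sketch, using a hypothetical tail-recursive summation unrelated to the compiler):

[[
Require Import List.

Fixpoint sum (ls : list nat) : nat :=
  match ls with
    | nil => O
    | n :: ls' => n + sum ls'
  end.

Fixpoint sum_acc (acc : nat) (ls : list nat) : nat :=
  match ls with
    | nil => acc
    | n :: ls' => sum_acc (n + acc) ls'
  end.

Lemma sum_acc_general : forall ls acc, sum_acc acc ls = sum ls + acc.
]]

Trying to prove [forall ls, sum_acc 0 ls = sum ls] directly by induction on [ls] gets stuck, because the accumulator changes across recursive calls; the generalized lemma goes through by induction on [ls] plus simple arithmetic, and it implies the original statement by instantiating [acc] with [0].  The compiler lemma below plays the same role, generalizing over the rest of the program [p] and the starting stack [s].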
 
 Lemma compile_correct' : forall e p s,
   progDenote (compile e ++ p) s = progDenote p (expDenote e :: s).
 
-(** After the period in the [Lemma] command, we are in %\index{interactive proof-editing mode}\emph{%#<i>#the interactive proof-editing mode#</i>#%}%.  We find ourselves staring at this ominous screen of text:
+(** After the period in the [Lemma] command, we are in %\index{interactive proof-editing mode}%_the interactive proof-editing mode_.  We find ourselves staring at this ominous screen of text:
 
 [[
 1 subgoal
@@ -223,7 +223,7 @@
 
 Next in the output, we see our single subgoal described in full detail.  There is a double-dashed line, above which would be our free variables and %\index{hypotheses}%hypotheses, if we had any.  Below the line is the %\index{conclusion}%conclusion, which, in general, is to be proved from the hypotheses.
 
-We manipulate the proof state by running commands called %\index{tactics}\emph{%#<i>#tactics#</i>#%}%.  Let us start out by running one of the most important tactics:%\index{tactics!induction}%
+We manipulate the proof state by running commands called %\index{tactics}%_tactics_.  Let us start out by running one of the most important tactics:%\index{tactics!induction}%
 *)
 
   induction e.
@@ -509,7 +509,7 @@
 
 The [crush] tactic comes from the library associated with this book and is not part of the Coq standard library.  The book's library contains a number of other tactics that are especially helpful in highly-automated proofs.
 
-The %\index{Vernacular commands!Qed}%[Qed] command checks that the proof is finished and, if so, saves it.  The tactic commands we have written above are an example of a %\emph{%#<i>#proof script#</i>#%}%, or a series of Ltac programs; while [Qed] uses the result of the script to generate a %\emph{%#<i>#proof term#</i>#%}%, a well-typed term of Gallina.  To believe that a theorem is true, we only need to trust that the (relatively simple) checker for proof terms is correct; the use of proof scripts is immaterial.  Part I of this book will introduce the principles behind encoding all proofs as terms of Gallina.
+The %\index{Vernacular commands!Qed}%[Qed] command checks that the proof is finished and, if so, saves it.  The tactic commands we have written above are an example of a _proof script_, or a series of Ltac programs; while [Qed] uses the result of the script to generate a _proof term_, a well-typed term of Gallina.  To believe that a theorem is true, we only need to trust that the (relatively simple) checker for proof terms is correct; the use of proof scripts is immaterial.  Part I of this book will introduce the principles behind encoding all proofs as terms of Gallina.
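To make the distinction concrete (an editor's sketch): even a one-tactic script produces a Gallina term that we can inspect.

[[
Theorem zero_plus_zero : 0 + 0 = 0.
  reflexivity.
Qed.

Print zero_plus_zero.
]]

The [Print] command responds with (roughly) [zero_plus_zero = eq_refl : 0 + 0 = 0]; the script was just a way of constructing that term.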
 
 The proof of our main theorem is now easy.  We prove it with four period-terminated tactics, though separating them with semicolons would work as well; the version here is easier to step through.%\index{tactics!intros}% *)
 
@@ -581,18 +581,18 @@
 | TEq : forall t, tbinop t t Bool
 | TLt : tbinop Nat Nat Bool.
 
-(** The definition of [tbinop] is different from [binop] in an important way.  Where we declared that [binop] has type [Set], here we declare that [tbinop] has type [type -> type -> type -> Set].  We define [tbinop] as an %\emph{%#<i>#indexed type family#</i>#%}%.  Indexed inductive types are at the heart of Coq's expressive power; almost everything else of interest is defined in terms of them.
+(** The definition of [tbinop] is different from [binop] in an important way.  Where we declared that [binop] has type [Set], here we declare that [tbinop] has type [type -> type -> type -> Set].  We define [tbinop] as an _indexed type family_.  Indexed inductive types are at the heart of Coq's expressive power; almost everything else of interest is defined in terms of them.
 
-The inuitive explanation of [tbinop] is that a [tbinop t1 t2 t] is a binary operator whose operands should have types [t1] and [t2], and whose result has type [t].  For instance, constructor [TLt] (for less-than comparison of numbers) is assigned type [tbinop Nat Nat Bool], meaning the operator's arguments are naturals and its result is boolean.  The type of [TEq] introduces a small bit of additional complication via polymorphism: we want to allow equality comparison of any two values of any type, as long as they have the %\emph{%#<i>#same#</i>#%}% type.
+The intuitive explanation of [tbinop] is that a [tbinop t1 t2 t] is a binary operator whose operands should have types [t1] and [t2], and whose result has type [t].  For instance, constructor [TLt] (for less-than comparison of numbers) is assigned type [tbinop Nat Nat Bool], meaning the operator's arguments are naturals and its result is boolean.  The type of [TEq] introduces a small bit of additional complication via polymorphism: we want to allow equality comparison of any two values of any type, as long as they have the _same_ type.
 
 ML and Haskell have indexed algebraic datatypes.  For instance, their list types are indexed by the type of data that the list carries.  However, compared to Coq, ML and Haskell 98 place two important restrictions on datatype definitions.
 
-First, the indices of the range of each data constructor must be type variables bound at the top level of the datatype definition.  There is no way to do what we did here, where we, for instance, say that [TPlus] is a constructor building a [tbinop] whose indices are all fixed at [Nat].  %\index{generalized algebraic datatypes}\index{GADTs|see{generalized algebraic datatypes}}\emph{%#<i>#Generalized algebraic datatypes (GADTs)#</i>#%}~\cite{GADT}% are a popular feature in %\index{GHC Haskell}%GHC Haskell and other languages that removes this first restriction.
+First, the indices of the range of each data constructor must be type variables bound at the top level of the datatype definition.  There is no way to do what we did here, where we, for instance, say that [TPlus] is a constructor building a [tbinop] whose indices are all fixed at [Nat].  %\index{generalized algebraic datatypes}\index{GADTs|see{generalized algebraic datatypes}}%_Generalized algebraic datatypes (GADTs)_ %\cite{GADT}% are a popular feature in %\index{GHC Haskell}%GHC Haskell and other languages that removes this first restriction.
 
-The second restriction is not lifted by GADTs.  In ML and Haskell, indices of types must be types and may not be %\emph{%#<i>#expressions#</i>#%}%.  In Coq, types may be indexed by arbitrary Gallina terms.  Type indices can live in the same universe as programs, and we can compute with them just like regular programs.  Haskell supports a hobbled form of computation in type indices based on %\index{Haskell}%multi-parameter type classes, and recent extensions like type functions bring Haskell programming even closer to %``%#"#real#"#%''% functional programming with types, but, without dependent typing, there must always be a gap between how one programs with types and how one programs normally.
+The second restriction is not lifted by GADTs.  In ML and Haskell, indices of types must be types and may not be _expressions_.  In Coq, types may be indexed by arbitrary Gallina terms.  Type indices can live in the same universe as programs, and we can compute with them just like regular programs.  Haskell supports a hobbled form of computation in type indices based on %\index{Haskell}%multi-parameter type classes, and recent extensions like type functions bring Haskell programming even closer to %``%#"#real#"#%''% functional programming with types, but, without dependent typing, there must always be a gap between how one programs with types and how one programs normally.
 *)
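Another classic indexed family (an editor's aside, not part of this chapter's development) is the type of lists indexed by their lengths:

[[
Inductive ilist : nat -> Set :=
| INil : ilist O
| ICons : forall n, nat -> ilist n -> ilist (S n).
]]

Here [INil] always has index [O], and [ICons] always produces an index one greater than its tail's, so the index of an [ilist] records its length by construction.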
 
-(** We can define a similar type family for typed expressions, where a term of type [texp t] can be assigned object language type [t].  (It is conventional in the world of interactive theorem proving to call the language of the proof assistant the %\index{meta language}\emph{%#<i>#meta language#</i>#%}% and a language being formalized the %\index{object language}\emph{%#<i>#object language#</i>#%}%.) *)
+(** We can define a similar type family for typed expressions, where a term of type [texp t] can be assigned object language type [t].  (It is conventional in the world of interactive theorem proving to call the language of the proof assistant the %\index{meta language}%_meta language_ and a language being formalized the %\index{object language}%_object language_.) *)
 
 Inductive texp : type -> Set :=
 | TNConst : nat -> texp Nat
@@ -619,7 +619,7 @@
     | TLt => leb
   end.
 
-(** This function has just a few differences from the denotation functions we saw earlier.  First, [tbinop] is an indexed type, so its indices become additional arguments to [tbinopDenote].  Second, we need to perform a genuine %\index{dependent pattern matching}\emph{%#<i>#dependent pattern match#</i>#%}%, where the necessary %\emph{%#<i>#type#</i>#%}% of each case body depends on the %\emph{%#<i>#value#</i>#%}% that has been matched.  At this early stage, we will not go into detail on the many subtle aspects of Gallina that support dependent pattern-matching, but the subject is central to Part II of the book.
+(** This function has just a few differences from the denotation functions we saw earlier.  First, [tbinop] is an indexed type, so its indices become additional arguments to [tbinopDenote].  Second, we need to perform a genuine %\index{dependent pattern matching}%_dependent pattern match_, where the necessary _type_ of each case body depends on the _value_ that has been matched.  At this early stage, we will not go into detail on the many subtle aspects of Gallina that support dependent pattern-matching, but the subject is central to Part II of the book.
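A tiny standalone instance of such a match (an editor's sketch, with hypothetical definitions [flag] and [flagDenote]):

[[
Inductive flag : bool -> Set :=
| FTrue : flag true
| FFalse : flag false.

Definition flagDenote b (f : flag b) : if b then unit else nat :=
  match f in flag b return (if b then unit else nat) with
    | FTrue => tt
    | FFalse => 0
  end.
]]

In the [FTrue] branch the index [b] is known to be [true], so the required type is [unit]; in the [FFalse] branch it is known to be [false], so the required type is [nat].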
 
 The same tricks suffice to define an expression denotation function in an unsurprising way:
 *)
@@ -755,7 +755,7 @@
       (tconcat (tcompile e1 _) (TCons (TiBinop _ b) (TNil _)))
   end.
 
-(** One interesting feature of the definition is the underscores appearing to the right of [=>] arrows.  Haskell and ML programmers are quite familiar with compilers that infer type parameters to polymorphic values.  In Coq, it is possible to go even further and ask the system to infer arbitrary terms, by writing underscores in place of specific values.  You may have noticed that we have been calling functions without specifying all of their arguments.  For instance, the recursive calls here to [tcompile] omit the [t] argument.  Coq's %\emph{%#<i>#implicit argument#</i>#%}% mechanism automatically inserts underscores for arguments that it will probably be able to infer.  Inference of such values is far from complete, though; generally, it only works in cases similar to those encountered with polymorphic type instantiation in Haskell and ML.
+(** One interesting feature of the definition is the underscores appearing to the right of [=>] arrows.  Haskell and ML programmers are quite familiar with compilers that infer type parameters to polymorphic values.  In Coq, it is possible to go even further and ask the system to infer arbitrary terms, by writing underscores in place of specific values.  You may have noticed that we have been calling functions without specifying all of their arguments.  For instance, the recursive calls here to [tcompile] omit the [t] argument.  Coq's _implicit argument_ mechanism automatically inserts underscores for arguments that it will probably be able to infer.  Inference of such values is far from complete, though; generally, it only works in cases similar to those encountered with polymorphic type instantiation in Haskell and ML.
 
 The underscores here are being filled in with stack types.  That is, the Coq type inferencer is, in a sense, inferring something about the flow of control in the translated programs.  We can take a look at exactly which values are filled in: *)
 
@@ -852,7 +852,7 @@
 (* end hide *)
 (** %\noindent%[Hint] %\coqdockw{%#<tt>#Rewrite#</tt>#%}% [tconcat_correct.] *)
 
-(** Here we meet the pervasive concept of a %\emph{%#<i>#hint#</i>#%}%.  Many proofs can be found through exhaustive enumerations of combinations of possible proof steps; hints provide the set of steps to consider.  The tactic [crush] is applying such brute force search for us silently, and it will consider more possibilities as we add more hints.  This particular hint asks that the lemma be used for left-to-right rewriting, and we ask for the hint to be added to the hint database called [cpdt], which is the database used by [crush].  In general, segragating hints into different databases is helpful to control the performance of proof search, in cases where domain knowledge allows us to narrow the set of proof steps to be considered in brute force search.  Part III of this book considers such pragmatic aspects of proof search in much more detail.
+(** Here we meet the pervasive concept of a _hint_.  Many proofs can be found through exhaustive enumerations of combinations of possible proof steps; hints provide the set of steps to consider.  The tactic [crush] is applying such brute force search for us silently, and it will consider more possibilities as we add more hints.  This particular hint asks that the lemma be used for left-to-right rewriting, and we ask for the hint to be added to the hint database called [cpdt], which is the database used by [crush].  In general, segregating hints into different databases is helpful to control the performance of proof search, in cases where domain knowledge allows us to narrow the set of proof steps to be considered in brute force search.  Part III of this book considers such pragmatic aspects of proof search in much more detail.
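The same mechanism works with any database name.  For instance (an editor's sketch using only the standard library, with [my_db] a hypothetical database name):

[[
Require Import Arith.

Hint Rewrite plus_0_r : my_db.

Theorem uses_hint : forall n, (n + 0) + 0 = n.
  intros; autorewrite with my_db; reflexivity.
Qed.
]]

Here [autorewrite with my_db] applies the registered rewrite left-to-right until no occurrence of the pattern remains, leaving a goal that [reflexivity] closes.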
 
 Now we are ready to return to [tcompile_correct'], proving it automatically this time. *)
 
@@ -874,7 +874,7 @@
 Qed.
 (* end thide *)
 
-(** It is probably worth emphasizing that we are doing more than building mathematical models.  Our compilers are functional programs that can be executed efficiently.  One strategy for doing so is based on %\index{program extraction}\emph{%#<i>#program extraction#</i>#%}%, which generates OCaml code from Coq developments.  For instance, we run a command to output the OCaml version of [tcompile]:%\index{Vernacular commands!Extraction}% *)
+(** It is probably worth emphasizing that we are doing more than building mathematical models.  Our compilers are functional programs that can be executed efficiently.  One strategy for doing so is based on %\index{program extraction}%_program extraction_, which generates OCaml code from Coq developments.  For instance, we run a command to output the OCaml version of [tcompile]:%\index{Vernacular commands!Extraction}% *)
 
 (* begin hide *)
 Extraction tcompile.
--- a/src/Subset.v	Sun May 06 17:15:15 2012 -0400
+++ b/src/Subset.v	Wed Jun 06 11:25:13 2012 -0400
@@ -20,7 +20,7 @@
 
 \chapter{Subset Types and Variations}% *)
 
-(** So far, we have seen many examples of what we might call %``%#"#classical program verification.#"#%''%  We write programs, write their specifications, and then prove that the programs satisfy their specifications.  The programs that we have written in Coq have been normal functional programs that we could just as well have written in Haskell or ML.  In this chapter, we start investigating uses of %\index{dependent types}\textit{%#<i>#dependent types#</i>#%}% to integrate programming, specification, and proving into a single phase.  The techniques we will learn make it possible to reduce the cost of program verification dramatically. *)
+(** So far, we have seen many examples of what we might call %``%#"#classical program verification.#"#%''%  We write programs, write their specifications, and then prove that the programs satisfy their specifications.  The programs that we have written in Coq have been normal functional programs that we could just as well have written in Haskell or ML.  In this chapter, we start investigating uses of %\index{dependent types}%_dependent types_ to integrate programming, specification, and proving into a single phase.  The techniques we will learn make it possible to reduce the cost of program verification dramatically. *)
 
 
 (** * Introducing Subset Types *)
@@ -69,9 +69,9 @@
     | S n' => fun _ => n'
   end.
 
-(** We expand the type of [pred] to include a %\textit{%#<i>#proof#</i>#%}% that its argument [n] is greater than 0.  When [n] is 0, we use the proof to derive a contradiction, which we can use to build a value of any type via a vacuous pattern match.  When [n] is a successor, we have no need for the proof and just return the answer.  The proof argument can be said to have a %\textit{%#<i>#dependent#</i>#%}% type, because its type depends on the %\textit{%#<i>#value#</i>#%}% of the argument [n].
+(** We expand the type of [pred] to include a _proof_ that its argument [n] is greater than 0.  When [n] is 0, we use the proof to derive a contradiction, which we can use to build a value of any type via a vacuous pattern match.  When [n] is a successor, we have no need for the proof and just return the answer.  The proof argument can be said to have a _dependent_ type, because its type depends on the _value_ of the argument [n].
 
-   Coq's [Eval] command can execute particular invocations of [pred_strong1] just as easily as it can execute more traditional functional programs.  Note that Coq has decided that argument [n] of [pred_strong1] can be made %\textit{%#<i>#implicit#</i>#%}%, since it can be deduced from the type of the second argument, so we need not write [n] in function calls. *)
+   Coq's [Eval] command can execute particular invocations of [pred_strong1] just as easily as it can execute more traditional functional programs.  Note that Coq has decided that argument [n] of [pred_strong1] can be made _implicit_, since it can be deduced from the type of the second argument, so we need not write [n] in function calls. *)
 
 Theorem two_gt0 : 2 > 0.
   crush.
@@ -104,7 +104,7 @@
 
 The term [zgtz pf] fails to type-check.  Somehow the type checker has failed to take into account information that follows from which [match] branch that term appears in.  The problem is that, by default, [match] does not let us use such implied information.  To get refined typing, we must always rely on [match] annotations, either written explicitly or inferred.
 
-In this case, we must use a [return] annotation to declare the relationship between the %\textit{%#<i>#value#</i>#%}% of the [match] discriminee and the %\textit{%#<i>#type#</i>#%}% of the result.  There is no annotation that lets us declare a relationship between the discriminee and the type of a variable that is already in scope; hence, we delay the binding of [pf], so that we can use the [return] annotation to express the needed relationship.
+In this case, we must use a [return] annotation to declare the relationship between the _value_ of the [match] discriminee and the _type_ of the result.  There is no annotation that lets us declare a relationship between the discriminee and the type of a variable that is already in scope; hence, we delay the binding of [pf], so that we can use the [return] annotation to express the needed relationship.
 
 We are lucky that Coq's heuristics infer the [return] clause (specifically, [return n > 0 -> nat]) for us in this case. *)
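For a minimal standalone example of a [return] annotation (an editor's sketch, with a hypothetical function [natOrBool]), here the type of the result depends on the value of the discriminee:

[[
Definition natOrBool (b : bool) : if b then nat else bool :=
  match b return (if b then nat else bool) with
    | true => 0
    | false => false
  end.
]]

In the [true] branch the result must be a [nat], and in the [false] branch it must be a [bool]; the [return] clause records exactly that dependency.  (Coq's heuristics may well infer the clause here, too, but writing it out shows the general mechanism.)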
 
@@ -114,7 +114,7 @@
     | S n' => fun _ => n'
   end.
 
-(** By making explicit the functional relationship between value [n] and the result type of the [match], we guide Coq toward proper type checking.  The clause for this example follows by simple copying of the original annotation on the definition.  In general, however, the [match] annotation inference problem is undecidable.  The known undecidable problem of %\index{higher-order unification}\textit{%#<i>#higher-order unification#</i>#%}~\cite{HOU}% reduces to the [match] type inference problem.  Over time, Coq is enhanced with more and more heuristics to get around this problem, but there must always exist [match]es whose types Coq cannot infer without annotations.
+(** By making explicit the functional relationship between value [n] and the result type of the [match], we guide Coq toward proper type checking.  The clause for this example follows by simple copying of the original annotation on the definition.  In general, however, the [match] annotation inference problem is undecidable.  The known undecidable problem of %\index{higher-order unification}%_higher-order unification_ %\cite{HOU}% reduces to the [match] type inference problem.  Over time, Coq is enhanced with more and more heuristics to get around this problem, but there must always exist [match]es whose types Coq cannot infer without annotations.
 
 Let us now take a look at the OCaml code Coq generates for [pred_strong1]. *)
 
@@ -138,7 +138,7 @@
 
 (** The proof argument has disappeared!  We get exactly the OCaml code we would have written manually.  This is our first demonstration of the main technically interesting feature of Coq program extraction: program components of type [Prop] are erased systematically.
 
-We can reimplement our dependently typed [pred] based on %\index{subset types}\textit{%#<i>#subset types#</i>#%}%, defined in the standard library with the type family %\index{Gallina terms!sig}%[sig]. *)
+We can reimplement our dependently typed [pred] based on %\index{subset types}%_subset types_, defined in the standard library with the type family %\index{Gallina terms!sig}%[sig]. *)
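Before printing [sig]'s definition, a quick sketch of how such a type is inhabited (an editor's illustration): [sig]'s constructor [exist] packages a value with a proof about it, so the [two_gt0] proof above lets us build a positive natural together with evidence of its positivity.

[[
Check exist (fun n => n > 0) 2 two_gt0.
]]

Coq answers with (roughly) [exist (fun n : nat => n > 0) 2 two_gt0 : {n : nat | n > 0}].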
 
 Print sig.
 (** %\vspace{-.15in}% [[
@@ -268,7 +268,7 @@
 (* end thide *)
 Defined.
 
-(** We end the %``%#"#proof#"#%''% with %\index{Vernacular commands!Defined}%[Defined] instead of [Qed], so that the definition we constructed remains visible.  This contrasts to the case of ending a proof with [Qed], where the details of the proof are hidden afterward.  (More formally, [Defined] marks an identifier as %\index{transparent}\emph{%#<i>#transparent#</i>#%}%, allowing it to be unfolded; while [Qed] marks an identifier as %\index{opaque}\emph{%#<i>#opaque#</i>#%}%, preventing unfolding.)  Let us see what our proof script constructed. *)
+(** We end the %``%#"#proof#"#%''% with %\index{Vernacular commands!Defined}%[Defined] instead of [Qed], so that the definition we constructed remains visible.  This contrasts to the case of ending a proof with [Qed], where the details of the proof are hidden afterward.  (More formally, [Defined] marks an identifier as %\index{transparent}%_transparent_, allowing it to be unfolded; while [Qed] marks an identifier as %\index{opaque}%_opaque_, preventing unfolding.)  Let us see what our proof script constructed. *)
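Before looking at that term, a standalone illustration of the transparent/opaque distinction (an editor's sketch, with hypothetical constants [three_transparent] and [three_opaque]; declaring a number with [Lemma] is unusual but legal, and it lets us end with [Qed]):

[[
Definition three_transparent : nat.
  exact 3.
Defined.

Lemma three_opaque : nat.
  exact 3.
Qed.

Eval compute in three_transparent.
Eval compute in three_opaque.
]]

The first [Eval] returns [3], since [Defined] left the constant transparent and free to unfold; the second returns just [three_opaque], since [Qed] sealed it as opaque.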
 
 Print pred_strong4.
 (** %\vspace{-.15in}% [[
@@ -636,7 +636,7 @@
                            end)
 (right associativity, at level 60).
 
-(** The meaning of [x <- e1; e2] is: First run [e1].  If it fails to find an answer, then announce failure for our derived computation, too.  If [e1] %\textit{%#<i>#does#</i>#%}% find an answer, pass that answer on to [e2] to find the final result.  The variable [x] can be considered bound in [e2].
+(** The meaning of [x <- e1; e2] is: First run [e1].  If it fails to find an answer, then announce failure for our derived computation, too.  If [e1] _does_ find an answer, pass that answer on to [e2] to find the final result.  The variable [x] can be considered bound in [e2].
 
    This notation is very helpful for composing richly typed procedures.  For instance, here is a very simple implementation of a function to take the predecessors of two naturals at once. *)
 
--- a/src/Universes.v	Sun May 06 17:15:15 2012 -0400
+++ b/src/Universes.v	Wed Jun 06 11:25:13 2012 -0400
@@ -15,6 +15,10 @@
 Set Implicit Arguments.
 (* end hide *)
 
+(** printing $ %({}*% #(<a/>*# *)
+(** printing ^ %*{})% #*<a/>)# *)
+
+
 
 (** %\chapter{Universes and Axioms}% *)
 
@@ -52,7 +56,7 @@
  
      ]]
 
-  The type [Set] may be considered as the set of all sets, a concept that set theory handles in terms of %\index{class (in set theory)}\textit{%#<i>#classes#</i>#%}%.  In Coq, this more general notion is [Type]. *)
+  The type [Set] may be considered as the set of all sets, a concept that set theory handles in terms of %\index{class (in set theory)}%_classes_.  In Coq, this more general notion is [Type]. *)
 
 Check Type.
 (** %\vspace{-.15in}% [[
@@ -72,10 +76,7 @@
   nat
      : Set
      ]]
-     *)
-
-(** printing $ %({}*% #(<a/>*# *)
-(** printing ^ %*{})% #*<a/>)# *)
+*)
 
 Check Set.
 (** %\vspace{-.15in}% [[
@@ -94,11 +95,11 @@
 
   Occurrences of [Type] are annotated with some additional information, inside comments.  These annotations have to do with the secret behind [Type]: it really stands for an infinite hierarchy of types.  The type of [Set] is [Type(0)], the type of [Type(0)] is [Type(1)], the type of [Type(1)] is [Type(2)], and so on.  This is how we avoid the %``%#"#[Type : Type]#"#%''% paradox.  As a convenience, the universe hierarchy drives Coq's one variety of subtyping.  Any term whose type is [Type] at level [i] is automatically also described by [Type] at level [j] when [j > i].
 
-  In the outputs of our first [Check] query, we see that the type level of [Set]'s type is [(0)+1]. Here [0] stands for the level of [Set], and we increment it to arrive at the level that %\textit{%#<i>#classifies#</i>#%}% [Set].
+  In the output of our first [Check] query, we see that the type level of [Set]'s type is [(0)+1].  Here [0] stands for the level of [Set], and we increment it to arrive at the level that _classifies_ [Set].
 
-  In the second query's output, we see that the occurrence of [Type] that we check is assigned a fresh %\index{universe variable}\textit{%#<i>#universe variable#</i>#%}% [Top.3].  The output type increments [Top.3] to move up a level in the universe hierarchy.  As we write code that uses definitions whose types mention universe variables, unification may refine the values of those variables.  Luckily, the user rarely has to worry about the details.
+  In the second query's output, we see that the occurrence of [Type] that we check is assigned a fresh %\index{universe variable}%_universe variable_ [Top.3].  The output type increments [Top.3] to move up a level in the universe hierarchy.  As we write code that uses definitions whose types mention universe variables, unification may refine the values of those variables.  Luckily, the user rarely has to worry about the details.
 
-  Another crucial concept in CIC is %\index{predicativity}\textit{%#<i>#predicativity#</i>#%}%.  Consider these queries. *)
+  Another crucial concept in CIC is %\index{predicativity}%_predicativity_.  Consider these queries. *)
 
 Check forall T : nat, fin T.
 (** %\vspace{-.15in}% [[
@@ -175,7 +176,7 @@
 Error: Universe inconsistency (cannot enforce Top.16 < Top.16).
 >>
 
-  %\index{universe inconsistency}%This error message reminds us that the universe variable for [T] still exists, even though it is usually hidden.  To apply [id] to itself, that variable would need to be less than itself in the type hierarchy.  Universe inconsistency error messages announce cases like this one where a term could only type-check by violating an implied constraint over universe variables.  Such errors demonstrate that [Type] is %\textit{%#<i>#predicative#</i>#%}%, where this word has a CIC meaning closely related to its usual mathematical meaning.  A predicative system enforces the constraint that, for any object of quantified type, none of those quantifiers may ever be instantiated with the object itself.  %\index{impredicativity}%Impredicativity is associated with popular paradoxes in set theory, involving inconsistent constructions like %``%#"#the set of all sets that do not contain themselves#"#%''% (%\index{Russell's paradox}%Russell's paradox).  Similar paradoxes would result from uncontrolled impredicativity in Coq. *)
+  %\index{universe inconsistency}%This error message reminds us that the universe variable for [T] still exists, even though it is usually hidden.  To apply [id] to itself, that variable would need to be less than itself in the type hierarchy.  Universe inconsistency error messages announce cases like this one where a term could only type-check by violating an implied constraint over universe variables.  Such errors demonstrate that [Type] is _predicative_, where this word has a CIC meaning closely related to its usual mathematical meaning.  A predicative system enforces the constraint that, for any object of quantified type, none of those quantifiers may ever be instantiated with the object itself.  %\index{impredicativity}%Impredicativity is associated with popular paradoxes in set theory, involving inconsistent constructions like %``%#"#the set of all sets that do not contain themselves#"#%''% (%\index{Russell's paradox}%Russell's paradox).  Similar paradoxes would result from uncontrolled impredicativity in Coq. *)
 
 
 (** ** Inductive Definitions *)
@@ -193,7 +194,7 @@
 Error: Large non-propositional inductive types must be in Type.
 >>
 
-   This definition is %\index{large inductive types}\textit{%#<i>#large#</i>#%}% in the sense that at least one of its constructors takes an argument whose type has type [Type].  Coq would be inconsistent if we allowed definitions like this one in their full generality.  Instead, we must change [exp] to live in [Type].  We will go even further and move [exp]'s index to [Type] as well. *)
+   This definition is %\index{large inductive types}%_large_ in the sense that at least one of its constructors takes an argument whose type has type [Type].  Coq would be inconsistent if we allowed definitions like this one in their full generality.  Instead, we must change [exp] to live in [Type].  We will go even further and move [exp]'s index to [Type] as well. *)
 
 Inductive exp : Type -> Type :=
 | Const : forall T, T -> exp T
@@ -250,7 +251,7 @@
  
   ]]
 
-  We see that the index type of [exp] has been assigned to universe level [Top.8].  In addition, each of the four occurrences of [Type] in the types of the constructors gets its own universe variable.  Each of these variables appears explicitly in the type of [exp].  In particular, any type [exp T] lives at a universe level found by incrementing by one the maximum of the four argument variables.  A consequence of this is that [exp] %\textit{%#<i>#must#</i>#%}% live at a higher universe level than any type which may be passed to one of its constructors.  This consequence led to the universe inconsistency.
+  We see that the index type of [exp] has been assigned to universe level [Top.8].  In addition, each of the four occurrences of [Type] in the types of the constructors gets its own universe variable.  Each of these variables appears explicitly in the type of [exp].  In particular, any type [exp T] lives at a universe level found by incrementing by one the maximum of the four argument variables.  A consequence of this is that [exp] _must_ live at a higher universe level than any type which may be passed to one of its constructors.  This consequence led to the universe inconsistency.
 
   Strangely, the universe variable [Top.8] only appears in one place.  Is there no restriction imposed on which types are valid arguments to [exp]?  In fact, there is a restriction, but it only appears in a global set of universe constraints that are maintained %``%#"#off to the side,#"#%''% not appearing explicitly in types.  We can print the current database.%\index{Vernacular commands!Print Universes}% *)
 
@@ -282,7 +283,7 @@
 
   %\medskip%
 
-  Something interesting is revealed in the annotated definition of [prod].  A type [prod A B] lives at a universe that is the maximum of the universes of [A] and [B].  From our earlier experiments, we might expect that [prod]'s universe would in fact need to be %\textit{%#<i>#one higher#</i>#%}% than the maximum.  The critical difference is that, in the definition of [prod], [A] and [B] are defined as %\textit{%#<i>#parameters#</i>#%}%; that is, they appear named to the left of the main colon, rather than appearing (possibly unnamed) to the right.
+  Something interesting is revealed in the annotated definition of [prod].  A type [prod A B] lives at a universe that is the maximum of the universes of [A] and [B].  From our earlier experiments, we might expect that [prod]'s universe would in fact need to be _one higher_ than the maximum.  The critical difference is that, in the definition of [prod], [A] and [B] are defined as _parameters_; that is, they appear named to the left of the main colon, rather than appearing (possibly unnamed) to the right.
 
   Parameters are not as flexible as normal inductive type arguments.  The range types of all of the constructors of a parameterized type must share the same parameters.  Nonetheless, when it is possible to define a polymorphic type in this way, we gain the ability to use the new type family in more ways, without triggering universe inconsistencies.  For instance, nested pairs of types are perfectly legal. *)
 
@@ -371,7 +372,7 @@
 Error: Impossible to unify "?35 = ?34" with "unit = unit".
 >>
 
-Coq tells us that we cannot, in fact, apply our lemma [symmetry] here, but the error message seems defective.  In particular, one might think that [apply] should unify [?35] and [?34] with [unit] to ensure that the unification goes through.  In fact, the problem is in a part of the unification problem that is %\emph{%#<i>#not#</i>#%}% shown to us in this error message!
+Coq tells us that we cannot, in fact, apply our lemma [symmetry] here, but the error message seems defective.  In particular, one might think that [apply] should unify [?35] and [?34] with [unit] to ensure that the unification goes through.  In fact, the problem is in a part of the unification problem that is _not_ shown to us in this error message!
 
 The following command is the secret to getting better error messages in such cases: *)
 
@@ -383,7 +384,7 @@
 Error: Impossible to unify "@eq Type ?46 ?45" with "@eq Set unit unit".
 >>
 
-Now we can see the problem: it is the first, %\emph{%#<i>#implicit#</i>#%}% argument to the underlying equality function [eq] that disagrees across the two terms.  The universe [Set] may be both an element and a subtype of [Type], but the two are not definitionally equal. *)
+Now we can see the problem: it is the first, _implicit_ argument to the underlying equality function [eq] that disagrees across the two terms.  The universe [Set] may be both an element and a subtype of [Type], but the two are not definitionally equal. *)
 
 Abort.
 
@@ -429,7 +430,7 @@
 "?99 = 0".
 >>
 
-  The problem here is that variable [x] was introduced by [destruct] %\emph{%#<i>#after#</i>#%}% we introduced [?99] with [eexists], so the instantiation of [?99] may not mention [x].  A simple reordering of the proof solves the problem. *)
+  The problem here is that variable [x] was introduced by [destruct] _after_ we introduced [?99] with [eexists], so the instantiation of [?99] may not mention [x].  A simple reordering of the proof solves the problem. *)
 
   Restart.
   destruct 1 as [x]; apply ex_intro with x; symmetry; assumption.
@@ -485,7 +486,7 @@
 
   This restriction matches informal practice.  We think of programs and proofs as clearly separated, and, outside of constructive logic, the idea of computing with proofs is ill-formed.  The distinction also has practical importance in Coq, where it affects the behavior of extraction.
 
-  Recall that %\index{program extraction}%extraction is Coq's facility for translating Coq developments into programs in general-purpose programming languages like OCaml.  Extraction %\textit{%#<i>#erases#</i>#%}% proofs and leaves programs intact.  A simple example with [sig] and [ex] demonstrates the distinction. *)
+  Recall that %\index{program extraction}%extraction is Coq's facility for translating Coq developments into programs in general-purpose programming languages like OCaml.  Extraction _erases_ proofs and leaves programs intact.  A simple example with [sig] and [ex] demonstrates the distinction. *)
 
 Definition sym_sig (x : sig (fun n => n = 0)) : sig (fun n => 0 = n) :=
   match x with
@@ -517,11 +518,11 @@
 
 Extraction is very helpful as an optimization over programs that contain proofs.  In languages like Haskell, advanced features make it possible to program with proofs, as a way of convincing the type checker to accept particular definitions.  Unfortunately, when proofs are encoded as values in GADTs%~\cite{GADT}%, these proofs exist at runtime and consume resources.  In contrast, with Coq, as long as all proofs are kept within [Prop], extraction is guaranteed to erase them.
 
-Many fans of the %\index{Curry-Howard correspondence}%Curry-Howard correspondence support the idea of %\textit{%#<i>#extracting programs from proofs#</i>#%}%.  In reality, few users of Coq and related tools do any such thing.  Instead, extraction is better thought of as an optimization that reduces the runtime costs of expressive typing.
+Many fans of the %\index{Curry-Howard correspondence}%Curry-Howard correspondence support the idea of _extracting programs from proofs_.  In reality, few users of Coq and related tools do any such thing.  Instead, extraction is better thought of as an optimization that reduces the runtime costs of expressive typing.
 
 %\medskip%
 
-We have seen two of the differences between proofs and programs: proofs are subject to an elimination restriction and are elided by extraction.  The remaining difference is that [Prop] is %\index{impredicativity}\textit{%#<i>#impredicative#</i>#%}%, as this example shows. *)
+We have seen two of the differences between proofs and programs: proofs are subject to an elimination restriction and are elided by extraction.  The remaining difference is that [Prop] is %\index{impredicativity}%_impredicative_, as this example shows. *)
 
 Check forall P Q : Prop, P \/ Q -> Q \/ P.
 (** %\vspace{-.15in}% [[
@@ -601,7 +602,7 @@
 
 (** * Axioms *)
 
-(** While the specific logic Gallina is hardcoded into Coq's implementation, it is possible to add certain logical rules in a controlled way.  In other words, Coq may be used to reason about many different refinements of Gallina where strictly more theorems are provable.  We achieve this by asserting %\index{axioms}\textit{%#<i>#axioms#</i>#%}% without proof.
+(** While the specific logic Gallina is hardcoded into Coq's implementation, it is possible to add certain logical rules in a controlled way.  In other words, Coq may be used to reason about many different refinements of Gallina where strictly more theorems are provable.  We achieve this by asserting %\index{axioms}%_axioms_ without proof.
 
    We will motivate the idea by touring through some standard axioms, as enumerated in Coq's online FAQ.  I will add additional commentary as appropriate. *)
 
@@ -627,7 +628,7 @@
 
 (** This kind of %``%#"#axiomatic presentation#"#%''% of a theory is very common outside of higher-order logic.  However, in Coq, it is almost always preferable to stick to defining your objects, functions, and predicates via inductive definitions and functional programming.
 
-   In general, there is a significant burden associated with any use of axioms.  It is easy to assert a set of axioms that together is %\index{inconsistent axioms}\textit{%#<i>#inconsistent#</i>#%}%.   That is, a set of axioms may imply [False], which allows any theorem to be proved, which defeats the purpose of a proof assistant.  For example, we could assert the following axiom, which is consistent by itself but inconsistent when combined with [classic]. *)
+   In general, there is a significant burden associated with any use of axioms.  It is easy to assert a set of axioms that together is %\index{inconsistent axioms}%_inconsistent_.   That is, a set of axioms may imply [False], which allows any theorem to be proved, which defeats the purpose of a proof assistant.  For example, we could assert the following axiom, which is consistent by itself but inconsistent when combined with [classic]. *)
 
 Axiom not_classic : ~ forall P : Prop, P \/ ~ P.
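Indeed, with both axioms asserted, one could derive a contradiction along roughly these lines (an editor's sketch; the development's own derivation is elided from this diff):

[[
Theorem contradictory : False.
  generalize classic not_classic; tauto.
Qed.
]]

After that, every proposition becomes provable.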
 
@@ -641,11 +642,11 @@
 
 Reset not_classic.
 
-(** On the subject of the law of the excluded middle itself, this axiom is usually quite harmless, and many practical Coq developments assume it.  It has been proved metatheoretically to be consistent with CIC.  Here, %``%#"#proved metatheoretically#"#%''% means that someone proved on paper that excluded middle holds in a %\textit{%#<i>#model#</i>#%}% of CIC in set theory%~\cite{SetsInTypes}%.  All of the other axioms that we will survey in this section hold in the same model, so they are all consistent together.
+(** On the subject of the law of the excluded middle itself, this axiom is usually quite harmless, and many practical Coq developments assume it.  It has been proved metatheoretically to be consistent with CIC.  Here, %``%#"#proved metatheoretically#"#%''% means that someone proved on paper that excluded middle holds in a _model_ of CIC in set theory%~\cite{SetsInTypes}%.  All of the other axioms that we will survey in this section hold in the same model, so they are all consistent together.
 
-   Recall that Coq implements %\index{constructive logic}\textit{%#<i>#constructive#</i>#%}% logic by default, where excluded middle is not provable.  Proofs in constructive logic can be thought of as programs.  A [forall] quantifier denotes a dependent function type, and a disjunction denotes a variant type.  In such a setting, excluded middle could be interpreted as a decision procedure for arbitrary propositions, which computability theory tells us cannot exist.  Thus, constructive logic with excluded middle can no longer be associated with our usual notion of programming.
+   Recall that Coq implements %\index{constructive logic}%_constructive_ logic by default, where excluded middle is not provable.  Proofs in constructive logic can be thought of as programs.  A [forall] quantifier denotes a dependent function type, and a disjunction denotes a variant type.  In such a setting, excluded middle could be interpreted as a decision procedure for arbitrary propositions, which computability theory tells us cannot exist.  Thus, constructive logic with excluded middle can no longer be associated with our usual notion of programming.
 
-   Given all this, why is it all right to assert excluded middle as an axiom?  The intuitive justification is that the elimination restriction for [Prop] prevents us from treating proofs as programs.  An excluded middle axiom that quantified over [Set] instead of [Prop] %\textit{%#<i>#would#</i>#%}% be problematic.  If a development used that axiom, we would not be able to extract the code to OCaml (soundly) without implementing a genuine universal decision procedure.  In contrast, values whose types belong to [Prop] are always erased by extraction, so we sidestep the axiom's algorithmic consequences.
+   Given all this, why is it all right to assert excluded middle as an axiom?  The intuitive justification is that the elimination restriction for [Prop] prevents us from treating proofs as programs.  An excluded middle axiom that quantified over [Set] instead of [Prop] _would_ be problematic.  If a development used that axiom, we would not be able to extract the code to OCaml (soundly) without implementing a genuine universal decision procedure.  In contrast, values whose types belong to [Prop] are always erased by extraction, so we sidestep the axiom's algorithmic consequences.
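For instance, a hypothetical informative variant, whose disjunction lives in [Set] rather than [Prop] (an editor's sketch of the problematic shape, not something to assert):

[[
Axiom classic_informative : forall P : Prop, {P} + {~ P}.
]]

A development using this axiom could branch on arbitrary propositions inside programs, so extraction could not erase it; producing OCaml code would then require a genuine universal decision procedure.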
 
    Because the proper use of axioms is so precarious, there are helpful commands for determining which axioms a theorem relies on.%\index{Vernacular commands!Print Assumptions}% *)
 
@@ -676,7 +677,7 @@
   classic : forall P : Prop, P \/ ~ P
   ]]
 
-  It is possible to avoid this dependence in some specific cases, where excluded middle %\textit{%#<i>#is#</i>#%}% provable, for decidable families of propositions. *)
+  It is possible to avoid this dependence in some specific cases, where excluded middle _is_ provable, for decidable families of propositions. *)
 
 Theorem nat_eq_dec : forall n m : nat, n = m \/ n <> m.
   induction n; destruct m; intuition; generalize (IHn m); intuition.
@@ -693,7 +694,7 @@
 
   %\bigskip%
 
-  Mainstream mathematical practice assumes excluded middle, so it can be useful to have it available in Coq developments, though it is also nice to know that a theorem is proved in a simpler formal system than classical logic.  There is a similar story for %\index{proof irrelevance}\textit{%#<i>#proof irrelevance#</i>#%}%, which simplifies proof issues that would not even arise in mainstream math. *)
+  Mainstream mathematical practice assumes excluded middle, so it can be useful to have it available in Coq developments, though it is also nice to know that a theorem is proved in a simpler formal system than classical logic.  There is a similar story for %\index{proof irrelevance}%_proof irrelevance_, which simplifies proof issues that would not even arise in mainstream math. *)
 
 Require Import ProofIrrelevance.
 Print proof_irrelevance.
@@ -793,7 +794,7 @@
 
 (** Some Coq axioms are also points of contention in mainstream math.  The most prominent example is the %\index{axiom of choice}%axiom of choice.  In fact, there are multiple versions that we might consider, and, considered in isolation, none of these versions means quite what it means in classical set theory.
 
-   First, it is possible to implement a choice operator %\textit{%#<i>#without#</i>#%}% axioms in some potentially surprising cases. *)
+   First, it is possible to implement a choice operator _without_ axioms in some potentially surprising cases. *)
 
 Require Import ConstructiveEpsilon.
 Check constructive_definite_description.
@@ -812,7 +813,7 @@
 Closed under the global context
 >>
 
-  This function transforms a decidable predicate [P] into a function that produces an element satisfying [P] from a proof that such an element exists.  The functions [f] and [g], in conjunction with an associated injectivity property, are used to express the idea that the set [A] is countable.  Under these conditions, a simple brute force algorithm gets the job done: we just enumerate all elements of [A], stopping when we find one satisfying [P].  The existence proof, specified in terms of %\textit{%#<i>#unique#</i>#%}% existence [exists!], guarantees termination.  The definition of this operator in Coq uses some interesting techniques, as seen in the implementation of the [ConstructiveEpsilon] module.
+  This function transforms a decidable predicate [P] into a function that produces an element satisfying [P] from a proof that such an element exists.  The functions [f] and [g], in conjunction with an associated injectivity property, are used to express the idea that the set [A] is countable.  Under these conditions, a simple brute force algorithm gets the job done: we just enumerate all elements of [A], stopping when we find one satisfying [P].  The existence proof, specified in terms of _unique_ existence [exists!], guarantees termination.  The definition of this operator in Coq uses some interesting techniques, as seen in the implementation of the [ConstructiveEpsilon] module.
 
   Countable choice is provable in set theory without appealing to the general axiom of choice.  To support the more general principle in Coq, we must also add an axiom.  Here is a functional version of the axiom of unique choice. *)
 
@@ -859,7 +860,7 @@
 
 (** ** Axioms and Computation *)
 
-(** One additional axiom-related wrinkle arises from an aspect of Gallina that is very different from set theory: a notion of %\textit{%#<i>#computational equivalence#</i>#%}% is central to the definition of the formal system.  Axioms tend not to play well with computation.  Consider this example.  We start by implementing a function that uses a type equality proof to perform a safe type-cast. *)
+(** One additional axiom-related wrinkle arises from an aspect of Gallina that is very different from set theory: a notion of _computational equivalence_ is central to the definition of the formal system.  Axioms tend not to play well with computation.  Consider this example.  We start by implementing a function that uses a type equality proof to perform a safe type-cast. *)
 
 Definition cast (x y : Set) (pf : x = y) (v : x) : y :=
   match pf with
@@ -908,7 +909,7 @@
      ....
      ]]
 
-  We elide most of the details.  A very unwieldy tree of nested matches on equality proofs appears.  This time evaluation really %\textit{%#<i>#is#</i>#%}% stuck on a use of an axiom.
+  We elide most of the details.  A very unwieldy tree of nested matches on equality proofs appears.  This time evaluation really _is_ stuck on a use of an axiom.
 
   If we are careful in using tactics to prove an equality, we can still compute with casts over the proof. *)
 
@@ -931,7 +932,7 @@
 
 (** ** Methods for Avoiding Axioms *)
 
-(** The last section demonstrated one reason to avoid axioms: they interfere with computational behavior of terms.  A further reason is to reduce the philosophical commitment of a theorem.  The more axioms one assumes, the harder it becomes to convince oneself that the formal system corresponds appropriately to one's intuitions.  A refinement of this last point, in applications like %\index{proof-carrying code}%proof-carrying code%~\cite{PCC}% in computer security, has to do with minimizing the size of a %\index{trusted code base}\emph{%#<i>#trusted code base#</i>#%}%.  To convince ourselves that a theorem is true, we must convince ourselves of the correctness of the program that checks the theorem.  Axioms effectively become new source code for the checking program, increasing the effort required to perform a correctness audit.
+(** The last section demonstrated one reason to avoid axioms: they interfere with computational behavior of terms.  A further reason is to reduce the philosophical commitment of a theorem.  The more axioms one assumes, the harder it becomes to convince oneself that the formal system corresponds appropriately to one's intuitions.  A refinement of this last point, in applications like %\index{proof-carrying code}%proof-carrying code%~\cite{PCC}% in computer security, has to do with minimizing the size of a %\index{trusted code base}%_trusted code base_.  To convince ourselves that a theorem is true, we must convince ourselves of the correctness of the program that checks the theorem.  Axioms effectively become new source code for the checking program, increasing the effort required to perform a correctness audit.
 
    An earlier section gave one example of avoiding an axiom.  We proved that [pred_strong1] is agnostic to details of the proofs passed to it as arguments, by unfolding the definition of the function.  A %``%#"#simpler#"#%''% proof keeps the function definition opaque and instead applies a proof irrelevance axiom.  By accepting a more complex proof, we reduce our philosophical commitment and trusted base.  (By the way, the less-than relation that the proofs in question here prove turns out to admit proof irrelevance as a theorem provable within normal Gallina!)
 
@@ -1109,7 +1110,7 @@
 
 %\medskip%
 
-To close the chapter, we consider one final way to avoid dependence on axioms.  Often this task is equivalent to writing definitions such that they %\emph{%#<i>#compute#</i>#%}%.  That is, we want Coq's normal reduction to be able to run certain programs to completion.  Here is a simple example where such computation can get stuck.  In proving properties of such functions, we would need to apply axioms like %\index{axiom K}%K manually to make progress.
+To close the chapter, we consider one final way to avoid dependence on axioms.  Often this task is equivalent to writing definitions such that they _compute_.  That is, we want Coq's normal reduction to be able to run certain programs to completion.  Here is a simple example where such computation can get stuck.  In proving properties of such functions, we would need to apply axioms like %\index{axiom K}%K manually to make progress.
 
 Imagine we are working with %\index{deep embedding}%deeply embedded syntax of some programming language, where each term is considered to be in the scope of a number of free variables that hold normal Coq values.  To enforce proper typing, we will need to model a Coq typing environment somehow.  One natural choice is as a list of types, where variable number [i] will be treated as a reference to the [i]th element of the list. *)
 
@@ -1173,7 +1174,7 @@
      = 3
 ]]
 
-We have not hit the problem yet, since we proceeded with a concrete equality proof for [myNatIndex_ok].  However, consider a case where we want to reason about the behavior of [getNat] %\emph{%#<i>#independently#</i>#%}% of a specific proof. *)
+We have not hit the problem yet, since we proceeded with a concrete equality proof for [myNatIndex_ok].  However, consider a case where we want to reason about the behavior of [getNat] _independently_ of a specific proof. *)
 
 Theorem getNat_is_reasonable : forall pf, getNat myValues myNatIndex pf = 3.
   intro; compute.
@@ -1226,7 +1227,7 @@
   Variable types : list Set.
   Variable natIndex : nat.
 
-  (** Here is the trick: instead of asserting properties about the list [types], we build a %``%#"#new#"#%''% list that is %\emph{%#<i>#guaranteed by construction#</i>#%}% to have those properties. *)
+  (** Here is the trick: instead of asserting properties about the list [types], we build a %``%#"#new#"#%''% list that is _guaranteed by construction_ to have those properties. *)
 
   Definition types' := update types natIndex nat.
 
@@ -1254,7 +1255,7 @@
     end.
 End withTypes'.
 
-(** Now the surprise comes in how easy it is to %\emph{%#<i>#use#</i>#%}% [getNat'].  While typing works by modification of a types list, we can choose parameters so that the modification has no effect. *)
+(** Now the surprise comes in how easy it is to _use_ [getNat'].  While typing works by modification of a types list, we can choose parameters so that the modification has no effect. *)
 
 Theorem getNat_is_reasonable : getNat' myTypes myNatIndex myValues = 3.
   reflexivity.