Proof by Reflection
Proving Evenness
Inductive isEven : nat -> Prop :=
| Even_O : isEven O
| Even_SS : forall n, isEven n -> isEven (S (S n)).
Ltac prove_even := repeat constructor.
Theorem even_256 : isEven 256.
prove_even.
Qed.
Print even_256.
even_256 =
Even_SS
  (Even_SS
     (Even_SS
        (Even_SS
           ...
The printed term continues in this way: the proof is a chain of 128 applications of Even_SS, so its size is linear in the number whose evenness we prove. To avoid building such large proof terms, we can instead package a verified decision procedure for evenness and obtain each proof from a single call to it. The procedure is built with the partial type from the book's MoreSpecif module.
Print partial.
Inductive partial (P : Prop) : Set := Proved : P -> [P] | Uncertain : [P]
Local Open Scope partial_scope.
We bring into scope some notations for the partial type. These overlap with some of the notations we have seen previously for specification types, so they were placed in a separate scope that needs separate opening.
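As a small illustration (not part of the original development), here are two partial values built directly from the constructors printed above; the @ forms avoid depending on which arguments are implicit:
Definition yes_example : partial True := @Proved True I.
Definition no_example : partial (1 = 2) := @Uncertain (1 = 2).
A Proved value carries an actual proof of its proposition, while Uncertain carries nothing, which is what lets a decision procedure give up without lying.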
Definition check_even : forall n : nat, [isEven n].
Hint Constructors isEven.
refine (fix F (n : nat) : [isEven n] :=
match n with
| 0 => Yes
| 1 => No
| S (S n') => Reduce (F n')
end); auto.
Defined.
The function check_even may be viewed as a verified decision procedure, because its type guarantees that it never returns Yes for inputs that are not even.
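For instance, we can run the procedure directly; this quick check is not in the original text. On an even input the result is a Proved value (the Yes case), while on an odd input it is Uncertain (the No case):
Eval compute in check_even 4.
Eval compute in check_even 5.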
Now we can use dependent pattern-matching to write a function that performs a surprising feat. When given a partial P, this function partialOut returns a proof of P if the partial value contains a proof, and it returns a (useless) proof of True otherwise. From the standpoint of ML and Haskell programming, it seems impossible to write such a type, but it is trivial with a return annotation.
Definition partialOut (P : Prop) (x : [P]) :=
match x return (match x with
| Proved _ => P
| Uncertain => True
end) with
| Proved pf => pf
| Uncertain => I
end.
It may seem strange to define a function like this. However, it turns out to be very useful in writing a reflective version of our earlier prove_even tactic:
Ltac prove_even_reflective :=
match goal with
| [ |- isEven ?N] => exact (partialOut (check_even N))
end.
We identify which natural number we are considering, and we "prove" its evenness by pulling the proof out of the appropriate check_even call. Recall that the exact tactic proves a proposition P when given a proof term of precisely type P.
Theorem even_256' : isEven 256.
  prove_even_reflective.
Qed.

Print even_256'.
even_256' = partialOut (check_even 256)
  : isEven 256
On the other hand, the tactic fails on odd numbers.
Theorem even_255 : isEven 255.
  prove_even_reflective.
User error: No matching clauses for match goal
The goal does match the isEven ?N pattern, but check_even 255 computes to No, so partialOut (check_even 255) is a proof of True rather than of isEven 255; the exact call fails, and Ltac reports that no clause of the match succeeded. Attempting the exact call by hand shows the mismatch directly:
  exact (partialOut (check_even 255)).
Error: The term "partialOut (check_even 255)" has type
  "match check_even 255 with
   | Yes => isEven 255
   | No => True
   end" while it is expected to have type "isEven 255"
Abort.
Our tactic prove_even_reflective is reflective because it performs a proof search process (a trivial one, in this case) wholly within Gallina, where the only use of Ltac is to translate a goal into an appropriate use of check_even.
Reifying the Syntax of a Trivial Tautology Language
We might also like to have reflective proofs of trivial tautologies like this one:
Theorem true_galore : (True /\ True) -> (True \/ (True /\ (True -> True))).
tauto.
Qed.
Print true_galore.
true_galore =
fun H : True /\ True =>
and_ind (fun _ _ : True => or_introl (True /\ (True -> True)) I) H
: True /\ True -> True \/ True /\ (True -> True)
It is impossible to case-analyze a Prop directly in Gallina, so to write a reflective procedure for this class of goals we first reify the syntax of such formulas into an inductive type that Gallina functions can analyze.
Inductive taut : Set :=
| TautTrue : taut
| TautAnd : taut -> taut -> taut
| TautOr : taut -> taut -> taut
| TautImp : taut -> taut -> taut.
We write a recursive function to reflect this syntax back to Prop. Such functions are also called interpretation functions, and we have used them in previous examples to give semantics to small programming languages.
Fixpoint tautDenote (t : taut) : Prop :=
match t with
| TautTrue => True
| TautAnd t1 t2 => tautDenote t1 /\ tautDenote t2
| TautOr t1 t2 => tautDenote t1 \/ tautDenote t2
| TautImp t1 t2 => tautDenote t1 -> tautDenote t2
end.
It is easy to prove that every formula in the range of tautDenote is true.
Theorem tautTrue : forall t, tautDenote t.
  induction t; crush.
Qed.
To use tautTrue to prove particular formulas, we need to implement the syntax reification process. A recursive Ltac function does the job.
Ltac tautReify P :=
match P with
| True => TautTrue
| ?P1 /\ ?P2 =>
let t1 := tautReify P1 in
let t2 := tautReify P2 in
constr:(TautAnd t1 t2)
| ?P1 \/ ?P2 =>
let t1 := tautReify P1 in
let t2 := tautReify P2 in
constr:(TautOr t1 t2)
| ?P1 -> ?P2 =>
let t1 := tautReify P1 in
let t2 := tautReify P2 in
constr:(TautImp t1 t2)
end.
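Before packaging tautReify into a tactic, it is easy to test it interactively; this throwaway check is not part of the original development. Running it prints the expected encoding TautImp (TautAnd TautTrue TautTrue) TautTrue:
Goal True.
  let t := tautReify ((True /\ True) -> True) in idtac t.
Abort.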
With tautReify available, it is easy to finish our reflective tactic. We look at the goal formula, reify it, and apply tautTrue to the reified formula.
Ltac obvious :=
  match goal with
    | [ |- ?P ] =>
      let t := tautReify P in
        exact (tautTrue t)
  end.
We can verify that obvious solves our original example, with a proof term that does not mention details of the proof.
Theorem true_galore' : (True /\ True) -> (True \/ (True /\ (True -> True))).
obvious.
Qed.
Print true_galore'.
true_galore' =
tautTrue
(TautImp (TautAnd TautTrue TautTrue)
(TautOr TautTrue (TautAnd TautTrue (TautImp TautTrue TautTrue))))
: True /\ True -> True \/ True /\ (True -> True)
A Monoid Expression Simplifier
Section monoid.
Variable A : Set.
Variable e : A.
Variable f : A -> A -> A.
Infix "+" := f.
Hypothesis assoc : forall a b c, (a + b) + c = a + (b + c).
Hypothesis identl : forall a, e + a = a.
Hypothesis identr : forall a, a + e = a.
We add variables and hypotheses characterizing an arbitrary instance of the algebraic structure of monoids. We have an associative binary operator and an identity element for it.
It is easy to define an expression tree type for monoid expressions. A Var constructor is a "catch-all" case for subexpressions that we cannot model. These subexpressions could be actual Gallina variables, or they could just use functions that our tactic is unable to understand.
Inductive mexp : Set :=
| Ident : mexp
| Var : A -> mexp
| Op : mexp -> mexp -> mexp.
Next, we write an interpretation function.
Fixpoint mdenote (me : mexp) : A :=
match me with
| Ident => e
| Var v => v
| Op me1 me2 => mdenote me1 + mdenote me2
end.
We will normalize expressions by flattening them into lists, via associativity, so it is helpful to have a denotation function for lists of monoid values.
Fixpoint mldenote (ls : list A) : A :=
  match ls with
    | nil => e
    | x :: ls' => x + mldenote ls'
  end.
The flattening function itself is easy to implement.
Fixpoint flatten (me : mexp) : list A :=
match me with
| Ident => nil
| Var x => x :: nil
| Op me1 me2 => flatten me1 ++ flatten me2
end.
This function has a straightforward correctness proof in terms of our denote functions.
Lemma flatten_correct' : forall ml2 ml1,
mldenote ml1 + mldenote ml2 = mldenote (ml1 ++ ml2).
induction ml1; crush.
Qed.
Theorem flatten_correct : forall me, mdenote me = mldenote (flatten me).
Hint Resolve flatten_correct'.
induction me; crush.
Qed.
Now it is easy to prove a theorem that will be the main tool behind our simplification tactic.
Theorem monoid_reflect : forall me1 me2,
mldenote (flatten me1) = mldenote (flatten me2)
-> mdenote me1 = mdenote me2.
intros; repeat rewrite flatten_correct; assumption.
Qed.
We implement reification into the mexp type.
Ltac reify me :=
match me with
| e => Ident
| ?me1 + ?me2 =>
let r1 := reify me1 in
let r2 := reify me2 in
constr:(Op r1 r2)
| _ => constr:(Var me)
end.
The final monoid tactic works on goals that equate two monoid terms. We reify each and change the goal to refer to the reified versions, finishing off by applying monoid_reflect and simplifying uses of mldenote. Recall that the change tactic replaces a conclusion formula with another that is definitionally equal to it.
Ltac monoid :=
match goal with
| [ |- ?me1 = ?me2 ] =>
let r1 := reify me1 in
let r2 := reify me2 in
change (mdenote r1 = mdenote r2);
apply monoid_reflect; simpl
end.
We can make short work of theorems like this one:
Theorem t1 : forall a b c d : A, a + b + c + d = a + (b + c) + d.
  intros; monoid.
  ============================
   a + (b + (c + (d + e))) = a + (b + (c + (d + e)))
Both sides of the equality have been canonicalized, so the proof finishes with reflexivity.
reflexivity.
Qed.
It is interesting to look at the form of the proof.
t1 =
fun a b c d : A =>
monoid_reflect (Op (Op (Op (Var a) (Var b)) (Var c)) (Var d))
(Op (Op (Var a) (Op (Var b) (Var c))) (Var d))
(eq_refl (a + (b + (c + (d + e)))))
: forall a b c d : A, a + b + c + d = a + (b + c) + d
Extensions of this basic approach are used in the implementations of the ring and field tactics that come packaged with Coq.
A Smarter Tautology Solver
Now we are ready to revisit our earlier tautology solver example. We want to broaden the scope of the tactic to include formulas whose truth is not syntactically apparent. We will want to allow injection of arbitrary formulas, as we allowed arbitrary monoid expressions in the last example. Since we are working in a richer theory, it is important to be able to use equalities between different injected formulas. For instance, we cannot prove P -> P by translating the formula into a value like Imp (Var P) (Var P), because a Gallina function has no way of comparing the two Ps for equality.
To arrive at a nice implementation satisfying these criteria, we introduce the quote tactic and its associated library.
Require Import Quote.
Inductive formula : Set :=
| Atomic : index -> formula
| Truth : formula
| Falsehood : formula
| And : formula -> formula -> formula
| Or : formula -> formula -> formula
| Imp : formula -> formula -> formula.
The type index comes from the Quote library and represents a countable variable type. The rest of formula's definition should be old hat by now.
The quote tactic will implement injection from Prop into formula for us, but it is not quite as smart as we might like. In particular, it wants to treat function types specially, so it gets confused if function types are part of the structure we want to encode syntactically. To trick quote into not noticing our uses of function types to express logical implication, we will need to declare a wrapper definition for implication, as we did in the last chapter.
Definition imp (P1 P2 : Prop) := P1 -> P2.
Infix "-->" := imp (no associativity, at level 95).
Now we can define our denotation function.
Definition asgn := varmap Prop.
Fixpoint formulaDenote (atomics : asgn) (f : formula) : Prop :=
match f with
| Atomic v => varmap_find False v atomics
| Truth => True
| Falsehood => False
| And f1 f2 => formulaDenote atomics f1 /\ formulaDenote atomics f2
| Or f1 f2 => formulaDenote atomics f1 \/ formulaDenote atomics f2
| Imp f1 f2 => formulaDenote atomics f1 --> formulaDenote atomics f2
end.
The varmap type family implements maps from index values. In this case, we define an assignment as a map from variables to Props. Our interpretation function formulaDenote works with an assignment, and we use the varmap_find function to consult the assignment in the Atomic case. The first argument to varmap_find is a default value, in case the variable is not found.
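As a small concrete check, not from the original text but using only constructors that also appear in the printed proof terms below, looking up End_idx in a one-node varmap returns the stored proposition rather than the default:
Example varmap_demo :
  varmap_find False End_idx (Node_vm (1 = 1) (Empty_vm Prop) (Empty_vm Prop))
  = (1 = 1).
Proof. reflexivity. Qed.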
Section my_tauto.
Variable atomics : asgn.
Definition holds (v : index) := varmap_find False v atomics.
We define some shorthand for a particular variable being true, and now we are ready to define some helpful functions based on the ListSet module of the standard library, which (unsurprisingly) presents a view of lists as sets.
Require Import ListSet.
Definition index_eq : forall x y : index, {x = y} + {x <> y}.
decide equality.
Defined.
Definition add (s : set index) (v : index) := set_add index_eq v s.
Definition In_dec : forall v (s : set index), {In v s} + {~ In v s}.
Local Open Scope specif_scope.
intro; refine (fix F (s : set index) : {In v s} + {~ In v s} :=
match s with
| nil => No
| v' :: s' => index_eq v' v || F s'
end); crush.
Defined.
We define what it means for all members of an index set to represent true propositions, and we prove some lemmas about this notion.
Fixpoint allTrue (s : set index) : Prop :=
match s with
| nil => True
| v :: s' => holds v /\ allTrue s'
end.
Theorem allTrue_add : forall v s,
allTrue s
-> holds v
-> allTrue (add s v).
induction s; crush;
match goal with
| [ |- context[if ?E then _ else _] ] => destruct E
end; crush.
Qed.
Theorem allTrue_In : forall v s,
allTrue s
-> set_In v s
-> varmap_find False v atomics.
induction s; crush.
Qed.
Hint Resolve allTrue_add allTrue_In.
Local Open Scope partial_scope.
Now we can write a function forward that implements deconstruction of hypotheses, expanding a compound formula into a set of sets of atomic formulas covering all possible cases introduced with use of Or. To handle consideration of multiple cases, the function takes in a continuation argument, which will be called once for each case.
The forward function has a dependent type, in the style of Chapter 6, guaranteeing correctness. The arguments to forward are a goal formula f, a set known of atomic formulas that we may assume are true, a hypothesis formula hyp, and a success continuation cont that we call when we have extended known to hold new truths implied by hyp.
Definition forward : forall (f : formula) (known : set index) (hyp : formula)
(cont : forall known', [allTrue known' -> formulaDenote atomics f]),
[allTrue known -> formulaDenote atomics hyp -> formulaDenote atomics f].
refine (fix F (f : formula) (known : set index) (hyp : formula)
(cont : forall known', [allTrue known' -> formulaDenote atomics f])
: [allTrue known -> formulaDenote atomics hyp -> formulaDenote atomics f] :=
match hyp with
| Atomic v => Reduce (cont (add known v))
| Truth => Reduce (cont known)
| Falsehood => Yes
| And h1 h2 =>
Reduce (F (Imp h2 f) known h1 (fun known' =>
Reduce (F f known' h2 cont)))
| Or h1 h2 => F f known h1 cont && F f known h2 cont
| Imp _ _ => Reduce (cont known)
end); crush.
Defined.
A backward function implements analysis of the final goal. It calls forward to handle implications.
Definition backward : forall (known : set index) (f : formula),
  [allTrue known -> formulaDenote atomics f].
refine (fix F (known : set index) (f : formula)
: [allTrue known -> formulaDenote atomics f] :=
match f with
| Atomic v => Reduce (In_dec v known)
| Truth => Yes
| Falsehood => No
| And f1 f2 => F known f1 && F known f2
| Or f1 f2 => F known f1 || F known f2
| Imp f1 f2 => forward f2 known f1 (fun known' => F known' f2)
end); crush; eauto.
Defined.
A simple wrapper around backward gives us the usual type of a partial decision procedure.
Definition my_tauto : forall f : formula, [formulaDenote atomics f].
intro; refine (Reduce (backward nil f)); crush.
Defined.
End my_tauto.
Our final tactic implementation is now fairly straightforward. First, we intro all quantifiers that do not bind Props. Then we call the quote tactic, which implements the reification for us. Finally, we are able to construct an exact proof via partialOut and the my_tauto Gallina function.
Ltac my_tauto :=
repeat match goal with
| [ |- forall x : ?P, _ ] =>
match type of P with
| Prop => fail 1
| _ => intro
end
end;
quote formulaDenote;
match goal with
| [ |- formulaDenote ?m ?f ] => exact (partialOut (my_tauto m f))
end.
A few examples demonstrate how the tactic works.
Theorem mt1 : True.
  my_tauto.
Qed.

Print mt1.
mt1 = partialOut (my_tauto (Empty_vm Prop) Truth)
  : True
Theorem mt2 : forall x y : nat, x = y --> x = y.
  my_tauto.
Qed.

Print mt2.
mt2 =
fun x y : nat =>
partialOut
  (my_tauto (Node_vm (x = y) (Empty_vm Prop) (Empty_vm Prop))
     (Imp (Atomic End_idx) (Atomic End_idx)))
  : forall x y : nat, x = y --> x = y
Theorem mt3 : forall x y z,
(x < y /\ y > z) \/ (y > z /\ x < S y)
--> y > z /\ (x < y \/ x < S y).
my_tauto.
Qed.
Print mt3.
mt3 =
fun x y z : nat =>
partialOut
(my_tauto
(Node_vm (x < S y) (Node_vm (x < y) (Empty_vm Prop) (Empty_vm Prop))
(Node_vm (y > z) (Empty_vm Prop) (Empty_vm Prop)))
(Imp
(Or (And (Atomic (Left_idx End_idx)) (Atomic (Right_idx End_idx)))
(And (Atomic (Right_idx End_idx)) (Atomic End_idx)))
(And (Atomic (Right_idx End_idx))
(Or (Atomic (Left_idx End_idx)) (Atomic End_idx)))))
: forall x y z : nat,
x < y /\ y > z \/ y > z /\ x < S y --> y > z /\ (x < y \/ x < S y)
Theorem mt4 : True /\ True /\ True /\ True /\ True /\ True /\ False --> False.
my_tauto.
Qed.
Print mt4.
mt4 =
partialOut
(my_tauto (Empty_vm Prop)
(Imp
(And Truth
(And Truth
(And Truth (And Truth (And Truth (And Truth Falsehood))))))
Falsehood))
: True /\ True /\ True /\ True /\ True /\ True /\ False --> False
Theorem mt4' : True /\ True /\ True /\ True /\ True /\ True /\ False -> False.
tauto.
Qed.
Print mt4'.
mt4' =
fun H : True /\ True /\ True /\ True /\ True /\ True /\ False =>
and_ind
(fun (_ : True) (H1 : True /\ True /\ True /\ True /\ True /\ False) =>
and_ind
(fun (_ : True) (H3 : True /\ True /\ True /\ True /\ False) =>
and_ind
(fun (_ : True) (H5 : True /\ True /\ True /\ False) =>
and_ind
(fun (_ : True) (H7 : True /\ True /\ False) =>
and_ind
(fun (_ : True) (H9 : True /\ False) =>
and_ind (fun (_ : True) (H11 : False) => False_ind False H11)
H9) H7) H5) H3) H1) H
: True /\ True /\ True /\ True /\ True /\ True /\ False -> False
Manual Reification of Terms with Variables
The action of the quote tactic above may seem like magic. Somehow it performs equality comparison between subterms of arbitrary types, so that these subterms may be represented with the same reified variable. While quote is implemented in OCaml, we can code the reification process completely in Ltac, as well. To make our job simpler, we will represent variables as nats, indexing into a simple list of variable values that may be referenced.
Step one of the process is to crawl over a term, building a duplicate-free list of all values that appear in positions we will encode as variables. A useful helper function adds an element to a list, preventing duplicates. Note how we use Ltac pattern matching to implement an equality test on Gallina terms; this is simple syntactic equality, not even the richer definitional equality. We also represent lists as nested tuples, to allow different list elements to have different Gallina types.
Ltac inList x xs :=
match xs with
| tt => false
| (x, _) => true
| (_, ?xs') => inList x xs'
end.
Ltac addToList x xs :=
let b := inList x xs in
match b with
| true => xs
| false => constr:(x, xs)
end.
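These helpers are easy to exercise inside a throwaway proof; the following check is not part of the original development. Adding 1, then 2, then 1 again builds the nested tuple (2, (1, tt)), with the duplicate suppressed:
Goal True.
  let xs := addToList 1 tt in
  let xs := addToList 2 xs in
  let xs := addToList 1 xs in
  idtac xs.
Abort.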
Now we can write our recursive function to calculate the list of variable values we will want to use to represent a term.
Ltac allVars xs e :=
match e with
| True => xs
| False => xs
| ?e1 /\ ?e2 =>
let xs := allVars xs e1 in
allVars xs e2
| ?e1 \/ ?e2 =>
let xs := allVars xs e1 in
allVars xs e2
| ?e1 -> ?e2 =>
let xs := allVars xs e1 in
allVars xs e2
| _ => addToList e xs
end.
We will also need a way to map a value to its position in a list.
Ltac lookup x xs :=
match xs with
| (x, _) => O
| (_, ?xs') =>
let n := lookup x xs' in
constr:(S n)
end.
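Again as a quick check that is not in the original text, looking up 1 in the nested tuple (2, (1, tt)) returns its position, printed as 1 (that is, S O):
Goal True.
  let xs := constr:((2, (1, tt))) in
  let n := lookup 1 xs in
  idtac n.
Abort.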
The next building block is a procedure for reifying a term, given a list of all allowed variable values. We are free to make this procedure partial, where tactic failure may be triggered upon attempting to reify a term containing subterms not included in the list of variables. The type of the output term is a copy of formula where index is replaced by nat, in the type of the constructor for atomic formulas.
Inductive formula' : Set :=
| Atomic' : nat -> formula'
| Truth' : formula'
| Falsehood' : formula'
| And' : formula' -> formula' -> formula'
| Or' : formula' -> formula' -> formula'
| Imp' : formula' -> formula' -> formula'.
Note that, when we write our own Ltac procedure, we can work directly with the normal -> operator, rather than needing to introduce a wrapper for it.
Ltac reifyTerm xs e :=
match e with
| True => constr:(Truth')
| False => constr:(Falsehood')
| ?e1 /\ ?e2 =>
let p1 := reifyTerm xs e1 in
let p2 := reifyTerm xs e2 in
constr:(And' p1 p2)
| ?e1 \/ ?e2 =>
let p1 := reifyTerm xs e1 in
let p2 := reifyTerm xs e2 in
constr:(Or' p1 p2)
| ?e1 -> ?e2 =>
let p1 := reifyTerm xs e1 in
let p2 := reifyTerm xs e2 in
constr:(Imp' p1 p2)
| _ =>
let n := lookup e xs in
constr:(Atomic' n)
end.
Finally, we bring all the pieces together.
Ltac reify :=
match goal with
| [ |- ?G ] => let xs := allVars tt G in
let p := reifyTerm xs G in
pose p
end.
A quick test verifies that we are doing reification correctly.
Theorem mt3' : forall x y z,
(x < y /\ y > z) \/ (y > z /\ x < S y)
-> y > z /\ (x < y \/ x < S y).
do 3 intro; reify.
Our simple tactic adds the translated term as a new variable:
f := Imp'
(Or' (And' (Atomic' 2) (Atomic' 1)) (And' (Atomic' 1) (Atomic' 0)))
(And' (Atomic' 1) (Or' (Atomic' 2) (Atomic' 0))) : formula'
Abort.
More work would be needed to complete the reflective tactic, as we must connect our new syntax type with the real meanings of formulas, but the details are the same as in our prior implementation with quote.
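To give a flavor of what that connection could look like, here is one possible denotation function for formula'; this sketch is not in the original text, and it assumes we switch the environment representation from a varmap to a plain list Prop, indexed by position with the standard library's nth:
Require Import List.

Fixpoint formulaDenote' (atomics : list Prop) (f : formula') : Prop :=
  match f with
    | Atomic' n => nth n atomics False
    | Truth' => True
    | Falsehood' => False
    | And' f1 f2 => formulaDenote' atomics f1 /\ formulaDenote' atomics f2
    | Or' f1 f2 => formulaDenote' atomics f1 \/ formulaDenote' atomics f2
    | Imp' f1 f2 => formulaDenote' atomics f1 -> formulaDenote' atomics f2
  end.
A complete tactic would additionally convert the nested-tuple list built by allVars into a corresponding list Prop and prove a soundness theorem in the style of my_tauto.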
Building a Reification Tactic that Recurses Under Binders
Inductive type : Type :=
| Nat : type
| NatFunc : type -> type.
Inductive term : type -> Type :=
| Const : nat -> term Nat
| Plus : term Nat -> term Nat -> term Nat
| Abs : forall t, (nat -> term t) -> term (NatFunc t).
Fixpoint typeDenote (t : type) : Type :=
match t with
| Nat => nat
| NatFunc t => nat -> typeDenote t
end.
Fixpoint termDenote t (e : term t) : typeDenote t :=
match e with
| Const n => n
| Plus e1 e2 => termDenote e1 + termDenote e2
| Abs _ e1 => fun x => termDenote (e1 x)
end.
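As a quick sanity check that is not part of the original development (and that relies on the chapter's usual Set Implicit Arguments setting, which the recursive calls above already assume), the denotation of a reified one-argument function behaves as expected when applied:
Example termDenote_demo :
  termDenote (Abs (fun x => Plus (Const x) (Const 1))) 3 = 4.
Proof. reflexivity. Qed.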
Here is a naive first attempt at a reification tactic.
Ltac refl' e :=
match e with
| ?E1 + ?E2 =>
let r1 := refl' E1 in
let r2 := refl' E2 in
constr:(Plus r1 r2)
| fun x : nat => ?E1 =>
let r1 := refl' E1 in
constr:(Abs (fun x => r1 x))
| _ => constr:(Const e)
end.
Recall that a regular Ltac pattern variable ?X only matches terms that do not mention new variables introduced within the pattern. In our naive implementation, the case for matching function abstractions matches the function body in a way that prevents it from mentioning the function argument! Our code above plays fast and loose with the function body in a way that leads to independent problems, but we could change the code so that it indeed handles function abstractions that ignore their arguments.
To handle functions in general, we will use the pattern variable form @?X, which allows X to mention newly introduced variables that are declared explicitly. A use of @?X must be followed by a list of the local variables that may be mentioned. The variable X then comes to stand for a Gallina function over the values of those variables. For instance:
Reset refl'.
Ltac refl' e :=
match e with
| ?E1 + ?E2 =>
let r1 := refl' E1 in
let r2 := refl' E2 in
constr:(Plus r1 r2)
| fun x : nat => @?E1 x =>
let r1 := refl' E1 in
constr:(Abs r1)
| _ => constr:(Const e)
end.
Now, in the abstraction case, we bind E1 as a function from an x value to the value of the abstraction body. Unfortunately, our recursive call there is not destined for success. It will match the same abstraction pattern and trigger another recursive call, and so on through infinite recursion. One last refactoring yields a working procedure. The key idea is to consider every input to refl' as a function over the values of variables introduced during recursion.
Reset refl'.
Ltac refl' e :=
match eval simpl in e with
| fun x : ?T => @?E1 x + @?E2 x =>
let r1 := refl' E1 in
let r2 := refl' E2 in
constr:(fun x => Plus (r1 x) (r2 x))
| fun (x : ?T) (y : nat) => @?E1 x y =>
let r1 := refl' (fun p : T * nat => E1 (fst p) (snd p)) in
constr:(fun x => Abs (fun y => r1 (x, y)))
| _ => constr:(fun x => Const (e x))
end.
Note how now even the addition case works in terms of functions, with @?X patterns. The abstraction case introduces a new variable by extending the type used to represent the free variables. In particular, the argument to refl' used type T to represent all free variables. We extend the type to T * nat for the type representing free variable values within the abstraction body. A bit of bookkeeping with pairs and their projections produces an appropriate version of the abstraction body to pass in a recursive call. To ensure that all this repackaging of terms does not interfere with pattern matching, we add an extra simpl reduction on the function argument, in the first line of the body of refl'.
Now one more tactic provides an example of how to apply reification. Let us consider goals that are equalities between terms that can be reified. We want to change such goals into equalities between appropriate calls to termDenote.
Ltac refl :=
match goal with
| [ |- ?E1 = ?E2 ] =>
let E1' := refl' (fun _ : unit => E1) in
let E2' := refl' (fun _ : unit => E2) in
change (termDenote (E1' tt) = termDenote (E2' tt));
cbv beta iota delta [fst snd]
end.
Goal (fun (x y : nat) => x + y + 13) = (fun (_ z : nat) => z).
refl.
============================
termDenote
(Abs
(fun y : nat =>
Abs (fun y0 : nat => Plus (Plus (Const y) (Const y0)) (Const 13)))) =
termDenote (Abs (fun _ : nat => Abs (fun y0 : nat => Const y0)))
Abort.
Our encoding here uses Coq functions to represent binding within the terms we reify, which makes it difficult to implement certain functions over reified terms. An alternative would be to represent variables with numbers. This can be done by writing a slightly smarter reification function that identifies variable references by detecting when term arguments are just compositions of fst and snd; from the order of the compositions we may read off the variable number. We leave the details as an exercise (though not a trivial one!) for the reader.