# Inductive Predicates

Witness the close similarity between the types unit and True from the standard library:

Print unit.

Inductive unit : Set := tt : unit

Print True.

Inductive True : Prop := I : True

The close similarity of these two definitions suggests a reading of the Curry-Howard correspondence on which programs and proofs *should not* be distinguished. There is a certain aesthetic appeal to this point of view, but I want to argue that it is best to treat Curry-Howard very loosely in practical proving. There are Coq-specific reasons for preferring the distinction, involving efficient compilation and avoidance of paradoxes in the presence of classical math, but I will argue that there is a more general principle that should lead us to avoid conflating programming and proving.

The idea that all proofs of the same proposition are interchangeable is known as *proof irrelevance*, and its formalizations in logics prevent us from distinguishing between alternate proofs of the same proposition. Proof irrelevance is compatible with, but not derivable in, Gallina. Apart from this theoretical concern, I will argue that it is most effective to do engineering with Coq by employing different techniques for programs versus proofs. Most of this book is organized around that distinction, describing how to program, by applying standard functional programming techniques in the presence of dependent types; and how to prove, by writing custom Ltac decision procedures.

# Propositional Logic

In Coq, the most basic propositional connective is implication, written ->, which we have already used in almost every proof. Rather than being defined inductively, implication is built into Coq as the function type constructor.
We have also already seen the definition of True. For a demonstration of a lower-level way of establishing proofs of inductive predicates, we turn to the trivial theorem that True itself is provable.

We may always use the apply tactic to take a proof step based on applying a particular constructor of the inductive predicate that we are trying to establish. Sometimes there is only one constructor that could possibly apply, in which case a shortcut is available:
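As a concrete illustration, the trivial theorem about True can be proved both ways; the theorem names here are illustrative:

```coq
Theorem obvious : True.
  (* Apply the sole constructor I explicitly. *)
  apply I.
Qed.

Theorem obvious' : True.
  (* The constructor shortcut finds the applicable constructor for us. *)
  constructor.
Qed.
```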

There is also a predicate False, which is the Curry-Howard mirror image of Empty_set from the last chapter.

Print False.

Inductive False : Prop :=

In a consistent context, we can never build a proof of False. In inconsistent contexts that appear in the courses of proofs, it is usually easiest to proceed by demonstrating the inconsistency with an explicit proof of False.

At this point, we have an inconsistent hypothesis 2 + 2 = 5, so the specific conclusion is not important. We use the elimtype tactic. For a full description of it, see the Coq manual. For our purposes, we only need the variant elimtype False, which lets us replace any conclusion formula with False, because any fact follows from an inconsistent context.

elimtype False.

H : 2 + 2 = 5

============================

False

crush.

Qed.
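Putting the steps together, a self-contained version of this proof might read as follows; the theorem name and the conclusion equation are illustrative assumptions, and the standard-library tactic discriminate stands in for the book's crush:

```coq
Theorem arith_neq : 2 + 2 = 5 -> 9 + 9 = 835.
  intro H.
  (* Replace the arbitrary conclusion with False. *)
  elimtype False.
  (* H : 2 + 2 = 5 normalizes to 4 = 5, which discriminate refutes. *)
  discriminate H.
Qed.
```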

A related notion to False is logical negation.

Print not.

not = fun A : Prop => A -> False

: Prop -> Prop

============================

2 + 2 = 5 -> False

crush.

Qed.
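A self-contained script for this negation proof, with the theorem name assumed and discriminate playing the role of crush:

```coq
Theorem arith_neq' : ~ (2 + 2 = 5).
  (* Unfolding not exposes the implication 2 + 2 = 5 -> False. *)
  unfold not.
  intro H.
  discriminate H.
Qed.
```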

We also have conjunction, which we introduced in the last chapter.

Print and.

Inductive and (A : Prop) (B : Prop) : Prop := conj : A -> B -> A /\ B

destruct 1.

H : P

H0 : Q

============================

Q /\ P

split.

2 subgoals

H : P

H0 : Q

============================

Q

subgoal 2 is

P

assumption.

assumption.

Qed.
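The fragments above assemble into this complete proof; here P and Q are quantified explicitly rather than introduced by a surrounding Section:

```coq
Theorem and_comm' : forall P Q : Prop, P /\ Q -> Q /\ P.
  (* Case analysis on the conjunction yields its two component proofs. *)
  destruct 1.
  split.
  assumption.
  assumption.
Qed.
```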

Coq disjunction is called or and abbreviated with the infix operator \/.

Print or.

Inductive or (A : Prop) (B : Prop) : Prop :=

or_introl : A -> A \/ B | or_intror : B -> A \/ B

As in the proof for and, we begin with case analysis, though this time we are met by two cases instead of one.

destruct 1.

2 subgoals

H : P

============================

Q \/ P

subgoal 2 is

Q \/ P

right; assumption.

left; assumption.

Qed.
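Again assembling the fragments into a standalone script, with explicit quantification in place of Section variables:

```coq
Theorem or_comm' : forall P Q : Prop, P \/ Q -> Q \/ P.
  (* destruct gives one subgoal per disjunct. *)
  destruct 1.
  right; assumption.
  left; assumption.
Qed.
```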

It would be a shame to have to plod manually through all proofs about propositional logic. Luckily, there is no need. One of the most basic Coq automation tactics is tauto, which is a complete decision procedure for constructive propositional logic. (More on what "constructive" means in the next section.) We can use tauto to dispatch all of the purely propositional theorems we have proved so far.
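For instance, disjunction commutativity falls to tauto in one step:

```coq
Theorem or_comm'' : forall P Q : Prop, P \/ Q -> Q \/ P.
  (* tauto is a complete decision procedure for constructive
     propositional logic, so no manual case analysis is needed. *)
  tauto.
Qed.
```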

Sometimes propositional reasoning forms important plumbing for the proof of a theorem, but we still need to apply some other smarts about, say, arithmetic. The tactic intuition is a generalization of tauto that proves everything it can using propositional reasoning. When some further facts must be established to finish the proof, intuition uses propositional laws to simplify them as far as possible. Consider this example, which uses the list concatenation operator

`++` from the standard library.

Theorem arith_comm : forall ls1 ls2 : list nat,

length ls1 = length ls2 \/ length ls1 + length ls2 = 6

-> length (ls1 ++ ls2) = 6 \/ length ls1 = length ls2.

intuition.

A lot of the proof structure has been generated for us by intuition, but the final proof depends on a fact about lists. The remaining subgoal hints at what cleverness we need to inject.

ls1 : list nat

ls2 : list nat

H0 : length ls1 + length ls2 = 6

============================

length (ls1 ++ ls2) = 6 \/ length ls1 = length ls2

We can see that we need a theorem about lengths of concatenated lists, which we proved last chapter and is also in the standard library.

rewrite app_length.

ls1 : list nat

ls2 : list nat

H0 : length ls1 + length ls2 = 6

============================

length ls1 + length ls2 = 6 \/ length ls1 = length ls2

tauto.

Qed.

The intuition tactic is one of the main bits of glue in the implementation of crush, so, with a little help, we can get a short automated proof of the theorem.

Theorem arith_comm' : forall ls1 ls2 : list nat,

length ls1 = length ls2 \/ length ls1 + length ls2 = 6

-> length (ls1 ++ ls2) = 6 \/ length ls1 = length ls2.

Hint Rewrite app_length.

crush.

Qed.

End Propositional.

Ending the section here has the same effect as always. Each of our propositional theorems becomes universally quantified over the propositional variables that we used.
# What Does It Mean to Be Constructive?

One potential point of confusion in the presentation so far is the distinction between bool and Prop. The datatype bool is built from two values true and false, while Prop is a more primitive type that includes among its members True and False. Why not collapse these two concepts into one, and why must there be more than two states of mathematical truth, True and False?

The answer comes from the fact that Coq implements *constructive* or *intuitionistic* logic, in contrast to the *classical* logic that you may be more familiar with. In constructive logic, classical tautologies like ~ ~ P -> P and P \/ ~ P do not always hold. In general, we can only prove these tautologies when P is *decidable*, in the sense of computability theory. The Curry-Howard encoding that Coq uses for or allows us to extract either a proof of P or a proof of ~ P from any proof of P \/ ~ P. Since our proofs are just functional programs which we can run, a general law of the excluded middle would give us a decision procedure for the halting problem, where the instantiations of P would be formulas like "this particular Turing machine halts."

A similar paradoxical situation would result if every proposition evaluated to either True or False. Evaluation in Coq is decidable, so we would be limiting ourselves to decidable propositions only.

Hence the distinction between bool and Prop. Programs of type bool are computational by construction; we can always run them to determine their results. Many Props are undecidable, and so we can write more expressive formulas with Props than with bools, but the inevitable consequence is that we cannot simply "run a Prop to determine its truth."

Constructive logic lets us define all of the logical connectives in an aesthetically appealing way, with orthogonal inductive definitions. That is, each connective is defined independently using a simple, shared mechanism. Constructivity also enables a trick called *program extraction*, where we write programs by phrasing them as theorems to be proved. Since our proofs are just functional programs, we can extract executable programs from our final proofs, which we could not do as naturally with classical proofs.

We will see more about Coq's program extraction facility in a later chapter. However, I think it is worth interjecting another warning at this point, following up on the prior warning about taking the Curry-Howard correspondence too literally. It is possible to write programs by theorem-proving methods in Coq, but hardly anyone does it. It is almost always most useful to maintain the distinction between programs and proofs. If you write a program by proving a theorem, you are likely to run into algorithmic inefficiencies that you introduced in your proof to make it easier to prove. It is a shame to have to worry about such situations while proving tricky theorems, and it is a happy state of affairs that you almost certainly will not need to, with the ideal of extracting programs from proofs being confined mostly to theoretical studies.

# First-Order Logic

The forall connective of first-order logic, which we have seen in many examples so far, is built into Coq. Getting ahead of ourselves a bit, we can see it as the dependent function type constructor. In fact, implication and universal quantification are just different syntactic shorthands for the same Coq mechanism. A formula P -> Q is equivalent to forall x : P, Q, where x does not appear in Q. That is, the "real" type of the implication says "for every proof of P, there exists a proof of Q."

Existential quantification is defined in the standard library.

Print ex.

Inductive ex (A : Type) (P : A -> Prop) : Prop :=

ex_intro : forall x : A, P x -> ex P

We can start this proof with a tactic exists, which should not be confused with the formula constructor shorthand of the same name.

exists 1.

The conclusion is replaced with a version using the existential witness that we announced.

============================

1 + 1 = 2

reflexivity.

Qed.
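The full script, with a theorem statement reconstructed to match the goal displays above:

```coq
Theorem exist1 : exists x : nat, x + 1 = 2.
  (* Announce 1 as the existential witness. *)
  exists 1.
  (* The goal 1 + 1 = 2 holds by computation. *)
  reflexivity.
Qed.
```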

We can also use tactics to reason about existential hypotheses.

We start by case analysis on the proof of the existential fact.

destruct 1.

n : nat

m : nat

x : nat

H : n + x = m

============================

n <= m

crush.

Qed.
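A standalone version; the statement is reconstructed from the context display, and standard-library lemmas replace crush:

```coq
Require Import Arith.

Theorem exist2 : forall n m : nat, (exists x : nat, n + x = m) -> n <= m.
  destruct 1.
  (* H : n + x = m; substituting for m leaves the goal n <= n + x. *)
  subst.
  apply le_plus_l.
Qed.
```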

The tactic intuition has a first-order cousin called firstorder, which proves many formulas when only first-order reasoning is needed, and it tries to perform first-order simplifications in any case. First-order reasoning is much harder than propositional reasoning, so firstorder is much more likely than intuition to get stuck in a way that makes it run for long enough to be useless.
# Predicates with Implicit Equality

We start our exploration of a more complicated class of predicates with a simple example: an alternative way of characterizing when a natural number is zero.

Inductive isZero : nat -> Prop :=

| IsZero : isZero 0.

Theorem isZero_zero : isZero 0.

constructor.

Qed.

We can call isZero a *judgment*, in the sense often used in the semantics of programming languages. Judgments are typically defined in the style of *natural deduction*, where we write a number of *inference rules* with premises appearing above a solid line and a conclusion appearing below the line. In this example, the sole constructor IsZero of isZero can be thought of as the single inference rule for deducing isZero, with nothing above the line and isZero 0 below it. The proof of isZero_zero demonstrates how we can apply an inference rule. (Readers not familiar with formal semantics should not worry about not following this paragraph!)

The definition of isZero differs in an important way from all of the other inductive definitions that we have seen in this and the previous chapter. Instead of writing just Set or Prop after the colon, here we write nat -> Prop. We saw examples of parameterized types like list, but there the parameters appeared with names *before* the colon. Every constructor of a parameterized inductive type must have a range type that uses the same parameter, whereas the form we use here enables us to choose different arguments to the type for different constructors.

For instance, our definition isZero makes the predicate provable only when the argument is 0. We can see that the concept of equality is somehow implicit in the inductive definition mechanism. The way this is accomplished is similar to the way that logic variables are used in Prolog (but worry not if you are not familiar with Prolog), and it is a very powerful mechanism that forms a foundation for formalizing all of mathematics. In fact, though it is natural to think of inductive types as folding in the functionality of equality, in Coq, the true situation is reversed, with equality defined as just another inductive type!

Print eq.

Inductive eq (A : Type) (x : A) : A -> Prop := eq_refl : x = x

Examining the type of equality's sole constructor eq_refl, we see that we can only *prove* equality when its two arguments are syntactically equal. This definition turns out to capture all of the basic properties of equality, and the equality-manipulating tactics that we have seen so far, like reflexivity and rewrite, are implemented treating eq as just another inductive type with a well-chosen definition. Another way of stating that definition is: equality is defined as the least reflexive relation.

We want to proceed by cases on the proof of the assumption about isZero.

destruct 1.

n : nat

============================

n + 0 = n

crush.

Qed.
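The theorem under proof here, reconstructed in full; symmetry plus the standard-library lemma plus_n_O replace crush:

```coq
Theorem isZero_plus : forall n m : nat, isZero m -> n + m = n.
  (* Case analysis on the isZero proof unifies m with 0. *)
  destruct 1.
  (* Goal: n + 0 = n, the mirror image of plus_n_O : n = n + 0. *)
  symmetry; apply plus_n_O.
Qed.
```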

Another example seems at first like it should admit an analogous proof, but in fact provides a demonstration of one of the most basic gotchas of Coq proving.

Let us try a proof by cases on the assumption, as in the last proof.

destruct 1.

============================

False

Undo.

inversion 1.

Qed.

What does inversion do? Think of it as a version of destruct that does its best to take advantage of the structure of arguments to inductive types. In this case, inversion completed the proof immediately, because it was able to detect that we were using isZero with an impossible argument.
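In full, assuming the theorem statement isZero 1 -> False:

```coq
Theorem isZero_contra : isZero 1 -> False.
  (* inversion notices that no constructor can produce isZero 1. *)
  inversion 1.
Qed.
```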
Sometimes using destruct when you should have used inversion can lead to confusing results. To illustrate, consider an alternate proof attempt for the last theorem.

============================

1 + 1 = 4

Abort.

To see more clearly what is happening, we can consider the type of isZero's induction principle.

isZero_ind

: forall P : nat -> Prop, P 0 -> forall n : nat, isZero n -> P n

# Recursive Predicates

A simple example is a predicate capturing evenness of the natural numbers:

Inductive even : nat -> Prop :=

| EvenO : even O

| EvenSS : forall n, even n -> even (S (S n)).

Think of even as another judgment defined by natural deduction rules. The rule EvenO has nothing above the line and even O below the line, and EvenSS is a rule with even n above the line and even (S (S n)) below.
The proof techniques of the last section are easily adapted.

Theorem even_0 : even 0.

constructor.

Qed.

Theorem even_4 : even 4.

constructor; constructor; constructor.

Qed.

It is not hard to see that sequences of constructor applications like the above can get tedious. We can avoid them using Coq's hint facility, with a new Hint variant that asks to consider all constructors of an inductive type during proof search. The tactic auto performs exhaustive proof search up to a fixed depth, considering only the proof steps we have registered as hints.
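The Hint variant in question is Hint Constructors; with it registered, the proof of even 4 collapses to one auto call:

```coq
Hint Constructors even.

Theorem even_4' : even 4.
  (* auto searches with EvenO and EvenSS as candidate proof steps. *)
  auto.
Qed.
```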

We may also use inversion with even.

Theorem even_1_contra : even 1 -> False.

inversion 1.

Qed.

Theorem even_3_contra : even 3 -> False.

inversion 1.

H : even 3

n : nat

H1 : even 1

H0 : n = 1

============================

False

inversion H1.

Qed.

We can also do inductive proofs about even; for instance, that the sum of two even numbers is even.

It seems a reasonable first choice to proceed by induction on n.

induction n; crush.

n : nat

IHn : forall m : nat, even n -> even m -> even (n + m)

m : nat

H : even (S n)

H0 : even m

============================

even (S (n + m))

inversion H.

n : nat

IHn : forall m : nat, even n -> even m -> even (n + m)

m : nat

H : even (S n)

H0 : even m

n0 : nat

H2 : even n0

H1 : S n0 = n

============================

even (S (S n0 + m))

simpl.

constructor.

============================

even (n0 + m)

IHn : forall m : nat, even n -> even m -> even (n + m)

What we need is induction *on the structure of one of the even proofs*. This technique is commonly called *rule induction* in programming language semantics. In the setting of Coq, we have already seen how predicates are defined using the same inductive type mechanism as datatypes, so the fundamental unity of rule induction with "normal" induction is apparent.

Restart.

induction 1.

m : nat

============================

even m -> even (0 + m)

subgoal 2 is

even m -> even (S (S n) + m)

crush.

Now we focus on the second case:

intro.

m : nat

n : nat

H : even n

IHeven : even m -> even (n + m)

H0 : even m

============================

even (S (S n) + m)

simpl; constructor.

============================

even (n + m)

apply IHeven; assumption.

In fact, crush can handle all of the details of the proof once we declare the induction strategy.

Restart.

induction 1; crush.

Qed.
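Without crush, the same induction strategy can be carried out with primitive tactics, which makes the two cases explicit; the theorem name is illustrative:

```coq
Theorem even_plus' : forall n m : nat, even n -> even m -> even (n + m).
  induction 1; intro; simpl.
  (* Case EvenO: 0 + m reduces to m, so the hypothesis even m applies. *)
  assumption.
  (* Case EvenSS: peel two successors, then use the inductive hypothesis. *)
  constructor; apply IHeven; assumption.
Qed.
```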

Induction on recursive predicates has similar pitfalls to those we encountered with inversion in the last section. Consider an attempt to prove that S (n + n) is never even by inducting directly on the evenness hypothesis: because the predicate's argument is not a variable, the case analysis discards the structure of the index, leaving subgoals where we must prove False with no usable assumptions.

n : nat

============================

False

subgoal 2 is

False

Abort.

Lemma even_contra' : forall n', even n' -> forall n, n' = S (n + n) -> False.

induction 1; crush.

At this point, it is useful to consider all cases of n and n0 being zero or nonzero. Only one of these cases has any trickiness to it.

destruct n; destruct n0; crush.

n : nat

H : even (S n)

IHeven : forall n0 : nat, S n = S (n0 + n0) -> False

n0 : nat

H0 : S n = n0 + S n0

============================

False

SearchRewrite (_ + S _).

plus_n_Sm : forall n m : nat, S (n + m) = n + S m

rewrite <- plus_n_Sm in H0.

The induction hypothesis lets us complete the proof, if we use a variant of apply that has a with clause to give instantiations of quantified variables.

apply IHeven with n0; assumption.

As usual, we can rewrite the proof to avoid referencing any locally generated names, which makes our proof script more readable and more robust to changes in the theorem statement. We use the notation <- to request a hint that does right-to-left rewriting, just like we can with the rewrite tactic.

Restart.

Hint Rewrite <- plus_n_Sm.

induction 1; crush;

match goal with

| [ H : S ?N = ?N0 + ?N0 |- _ ] => destruct N; destruct N0

end; crush.

Qed.

We write the proof in a way that avoids the use of local variable or hypothesis names, using the match tactic form to do pattern-matching on the goal. We use unification variables prefixed by question marks in the pattern, and we take advantage of the possibility to mention a unification variable twice in one pattern, to enforce equality between occurrences. The hint to rewrite with plus_n_Sm in a particular direction saves us from having to figure out the right place to apply that theorem.
The original theorem now follows trivially from our lemma, using a new tactic eauto, a fancier version of auto whose explanation we postpone to Chapter 13.
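Reconstructing the elided final script (the theorem statement is an assumption consistent with the lemma above):

```coq
Theorem even_contra : forall n, even (S (n + n)) -> False.
  (* eapply defers choosing the lemma's quantified arguments,
     leaving unification variables that eauto then fills in. *)
  intros; eapply even_contra'; eauto.
Qed.
```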

We use a variant eapply of apply which has the same relationship to apply as eauto has to auto. An invocation of apply only succeeds if all arguments to the rule being used can be determined from the form of the goal, whereas eapply will introduce unification variables for undetermined arguments. In this case, eauto is able to determine the right values for those unification variables, using (unsurprisingly) a variant of the classic algorithm for *unification*.

By considering an alternate attempt at proving the lemma, we can see another common pitfall of inductive proofs in Coq. Imagine that we had tried to prove even_contra' with all of the forall quantifiers moved to the front of the lemma statement.

Lemma even_contra'' : forall n' n, even n' -> n' = S (n + n) -> False.

induction 1; crush;

match goal with

| [ H : S ?N = ?N0 + ?N0 |- _ ] => destruct N; destruct N0

end; crush.

One subgoal remains:

n : nat

H : even (S (n + n))

IHeven : S (n + n) = S (S (S (n + n))) -> False

============================

False

We are out of luck here. The inductive hypothesis is trivially true, since its assumption is false. In the version of this proof that succeeded, IHeven had an explicit quantification over n. This is because the quantification of n *appeared after the thing we are inducting on* in the theorem statement. In general, quantified variables and hypotheses that appear before the induction object in the theorem statement stay fixed throughout the inductive proof. Variables and hypotheses that are quantified after the induction object may be varied explicitly in uses of inductive hypotheses.

Abort.

Why should Coq implement induction this way? One answer is that it avoids burdening this basic tactic with additional heuristic smarts, but that is not the whole picture. Imagine that induction analyzed dependencies among variables and reordered quantifiers to preserve as much freedom as possible in later uses of inductive hypotheses. This could make the inductive hypotheses more complex, which could in turn cause particular automation machinery to fail when it would have succeeded before. In general, we want to avoid quantifiers in our proofs whenever we can, and that goal is furthered by the refactoring that the induction tactic forces us to do.