comparison src/MoreDep.v @ 338:c7faf3551c5d

Pass over MoreDep
author Adam Chlipala <adam@chlipala.net>
date Mon, 10 Oct 2011 16:01:31 -0400
parents 1f57a8d0ed3d
children ad315efc3b6b
337:4186722d329b 338:c7faf3551c5d
18 18
19 (** %\chapter{More Dependent Types}% *) 19 (** %\chapter{More Dependent Types}% *)
20 20
21 (** Subset types and their relatives help us integrate verification with programming. Though they reorganize the certified programmer's workflow, they tend not to have deep effects on proofs. We write largely the same proofs as we would for classical verification, with some of the structure moved into the programs themselves. It turns out that, when we use dependent types to their full potential, we warp the development and proving process even more than that, picking up %``%#"#free theorems#"#%''% to the extent that often a certified program is hardly more complex than its uncertified counterpart in Haskell or ML. 21 (** Subset types and their relatives help us integrate verification with programming. Though they reorganize the certified programmer's workflow, they tend not to have deep effects on proofs. We write largely the same proofs as we would for classical verification, with some of the structure moved into the programs themselves. It turns out that, when we use dependent types to their full potential, we warp the development and proving process even more than that, picking up %``%#"#free theorems#"#%''% to the extent that often a certified program is hardly more complex than its uncertified counterpart in Haskell or ML.
22 22
23 In particular, we have only scratched the tip of the iceberg that is Coq's inductive definition mechanism. The inductive types we have seen so far have their counterparts in the other proof assistants that we surveyed in Chapter 1. This chapter explores the strange new world of dependent inductive datatypes (that is, dependent inductive types outside [Prop]), a possibility which sets Coq apart from all of the competition not based on type theory. *) 23 In particular, we have only scratched the tip of the iceberg that is Coq's inductive definition mechanism. The inductive types we have seen so far have their counterparts in the other proof assistants that we surveyed in Chapter 1. This chapter explores the strange new world of dependent inductive datatypes (that is, dependent inductive types outside [Prop]), a possibility that sets Coq apart from all of the competition not based on type theory. *)
24 24
25 25
26 (** * Length-Indexed Lists *) 26 (** * Length-Indexed Lists *)
27 27
28 (** Many introductions to dependent types start out by showing how to use them to eliminate array bounds checks. When the type of an array tells you how many elements it has, your compiler can detect out-of-bounds dereferences statically. Since we are working in a pure functional language, the next best thing is length-indexed lists, which the following code defines. *) 28 (** Many introductions to dependent types start out by showing how to use them to eliminate array bounds checks%\index{array bounds checks}%. When the type of an array tells you how many elements it has, your compiler can detect out-of-bounds dereferences statically. Since we are working in a pure functional language, the next best thing is length-indexed lists%\index{length-indexed lists}%, which the following code defines. *)
29 29
30 Section ilist. 30 Section ilist.
31 Variable A : Set. 31 Variable A : Set.
32 32
33 Inductive ilist : nat -> Set := 33 Inductive ilist : nat -> Set :=
34 | Nil : ilist O 34 | Nil : ilist O
35 | Cons : forall n, A -> ilist n -> ilist (S n). 35 | Cons : forall n, A -> ilist n -> ilist (S n).
36 36
37 (** We see that, within its section, [ilist] is given type [nat -> Set]. Previously, every inductive type we have seen has either had plain [Set] as its type or has been a predicate with some type ending in [Prop]. The full generality of inductive definitions lets us integrate the expressivity of predicates directly into our normal programming. 37 (** We see that, within its section, [ilist] is given type [nat -> Set]. Previously, every inductive type we have seen has either had plain [Set] as its type or has been a predicate with some type ending in [Prop]. The full generality of inductive definitions lets us integrate the expressivity of predicates directly into our normal programming.
38 38
39 The [nat] argument to [ilist] tells us the length of the list. The types of [ilist]'s constructors tell us that a [Nil] list has length [O] and that a [Cons] list has length one greater than the length of its sublist. We may apply [ilist] to any natural number, even natural numbers that are only known at runtime. It is this breaking of the %\textit{%#<i>#phase distinction#</i>#%}% that characterizes [ilist] as %\textit{%#<i>#dependently typed#</i>#%}%. 39 The [nat] argument to [ilist] tells us the length of the list. The types of [ilist]'s constructors tell us that a [Nil] list has length [O] and that a [Cons] list has length one greater than the length of its tail. We may apply [ilist] to any natural number, even natural numbers that are only known at runtime. It is this breaking of the %\index{phase distinction}\textit{%#<i>#phase distinction#</i>#%}% that characterizes [ilist] as %\textit{%#<i>#dependently typed#</i>#%}%.
40 40
41 In expositions of list types, we usually see the length function defined first, but here that would not be a very productive function to code. Instead, let us implement list concatenation. *) 41 In expositions of list types, we usually see the length function defined first, but here that would not be a very productive function to code. Instead, let us implement list concatenation. *)
42 42
43 Fixpoint app n1 (ls1 : ilist n1) n2 (ls2 : ilist n2) : ilist (n1 + n2) := 43 Fixpoint app n1 (ls1 : ilist n1) n2 (ls2 : ilist n2) : ilist (n1 + n2) :=
44 match ls1 with 44 match ls1 with
45 | Nil => ls2 45 | Nil => ls2
46 | Cons _ x ls1' => Cons x (app ls1' ls2) 46 | Cons _ x ls1' => Cons x (app ls1' ls2)
47 end. 47 end.
48 48
49 (** In Coq version 8.1 and earlier, this definition leads to an error message: 49 (** Past Coq versions signalled an error for this definition. The code is still invalid within Coq's core language, but current Coq versions automatically add annotations to the original program, producing a valid core program. These are the annotations on [match] discriminees that we began to study in the previous chapter. We can rewrite [app] to give the annotations explicitly. *)
50
51 [[
52 The term "ls2" has type "ilist n2" while it is expected to have type
53 "ilist (?14 + n2)"
54
55 ]]
56
57 In Coq's core language, without explicit annotations, Coq does not enrich our typing assumptions in the branches of a [match] expression. It is clear that the unification variable [?14] should be resolved to 0 in this context, so that we have [0 + n2] reducing to [n2], but Coq does not realize that. We cannot fix the problem using just the simple [return] clauses we applied in the last chapter. We need to combine a [return] clause with a new kind of annotation, an [in] clause. This is exactly what the inference heuristics do in Coq 8.2 and later.
58
59 Specifically, Coq infers the following definition from the simpler one. *)
60
61 (* EX: Implement concatenation *)
62 50
63 (* begin thide *) 51 (* begin thide *)
64 Fixpoint app' n1 (ls1 : ilist n1) n2 (ls2 : ilist n2) : ilist (n1 + n2) := 52 Fixpoint app' n1 (ls1 : ilist n1) n2 (ls2 : ilist n2) : ilist (n1 + n2) :=
65 match ls1 in (ilist n1) return (ilist (n1 + n2)) with 53 match ls1 in (ilist n1) return (ilist (n1 + n2)) with
66 | Nil => ls2 54 | Nil => ls2
67 | Cons _ x ls1' => Cons x (app' ls1' ls2) 55 | Cons _ x ls1' => Cons x (app' ls1' ls2)
68 end. 56 end.
69 (* end thide *) 57 (* end thide *)
70 58
71 (** Using [return] alone allowed us to express a dependency of the [match] result type on the %\textit{%#<i>#value#</i>#%}% of the discriminee. What [in] adds to our arsenal is a way of expressing a dependency on the %\textit{%#<i>#type#</i>#%}% of the discriminee. Specifically, the [n1] in the [in] clause above is a %\textit{%#<i>#binding occurrence#</i>#%}% whose scope is the [return] clause. 59 (** Using [return] alone allowed us to express a dependency of the [match] result type on the %\textit{%#<i>#value#</i>#%}% of the discriminee. What %\index{Gallina terms!in}%[in] adds to our arsenal is a way of expressing a dependency on the %\textit{%#<i>#type#</i>#%}% of the discriminee. Specifically, the [n1] in the [in] clause above is a %\textit{%#<i>#binding occurrence#</i>#%}% whose scope is the [return] clause.
72 60
73 We may use [in] clauses only to bind names for the arguments of an inductive type family. That is, each [in] clause must be an inductive type family name applied to a sequence of underscores and variable names of the proper length. The positions for %\textit{%#<i>#parameters#</i>#%}% to the type family must all be underscores. Parameters are those arguments declared with section variables or with entries to the left of the first colon in an inductive definition. They cannot vary depending on which constructor was used to build the discriminee, so Coq prohibits pointless matches on them. It is those arguments defined in the type to the right of the colon that we may name with [in] clauses. 61 We may use [in] clauses only to bind names for the arguments of an inductive type family. That is, each [in] clause must be an inductive type family name applied to a sequence of underscores and variable names of the proper length. The positions for %\textit{%#<i>#parameters#</i>#%}% to the type family must all be underscores. Parameters are those arguments declared with section variables or with entries to the left of the first colon in an inductive definition. They cannot vary depending on which constructor was used to build the discriminee, so Coq prohibits pointless matches on them. It is those arguments defined in the type to the right of the colon that we may name with [in] clauses.
74 62
75 Our [app] function could be typed in so-called %\textit{%#<i>#stratified#</i>#%}% type systems, which avoid true dependency. That is, we could consider the length indices to lists to live in a separate, compile-time-only universe from the lists themselves. This stratification between a compile-time universe and a run-time universe, with no references to the latter in the former, gives rise to the terminology %``%#"#stratified.#"#%''% Our next example would be harder to implement in a stratified system. We write an injection function from regular lists to length-indexed lists. A stratified implementation would need to duplicate the definition of lists across compile-time and run-time versions, and the run-time versions would need to be indexed by the compile-time versions. *) 63 Our [app] function could be typed in so-called %\index{stratified type systems}\textit{%#<i>#stratified#</i>#%}% type systems, which avoid true dependency. That is, we could consider the length indices to lists to live in a separate, compile-time-only universe from the lists themselves. This stratification between a compile-time universe and a run-time universe, with no references to the latter in the former, gives rise to the terminology %``%#"#stratified.#"#%''% Our next example would be harder to implement in a stratified system. We write an injection function from regular lists to length-indexed lists. A stratified implementation would need to duplicate the definition of lists across compile-time and run-time versions, and the run-time versions would need to be indexed by the compile-time versions. *)
76 64
77 (* EX: Implement injection from normal lists *) 65 (* EX: Implement injection from normal lists *)
78 66
79 (* begin thide *) 67 (* begin thide *)
80 Fixpoint inject (ls : list A) : ilist (length ls) := 68 Fixpoint inject (ls : list A) : ilist (length ls) :=
94 Theorem inject_inverse : forall ls, unject (inject ls) = ls. 82 Theorem inject_inverse : forall ls, unject (inject ls) = ls.
95 induction ls; crush. 83 induction ls; crush.
96 Qed. 84 Qed.
97 (* end thide *) 85 (* end thide *)
98 86
99 (* EX: Implement statically-checked "car"/"hd" *) 87 (* EX: Implement statically checked "car"/"hd" *)
100 88
101 (** Now let us attempt a function that is surprisingly tricky to write. In ML, the list head function raises an exception when passed an empty list. With length-indexed lists, we can rule out such invalid calls statically, and here is a first attempt at doing so. We write [???] as a placeholder for a term that we do not know how to write, not for any real Coq notation like those introduced in the previous chapter. 89 (** Now let us attempt a function that is surprisingly tricky to write. In ML, the list head function raises an exception when passed an empty list. With length-indexed lists, we can rule out such invalid calls statically, and here is a first attempt at doing so. We write [???] as a placeholder for a term that we do not know how to write, not for any real Coq notation like those introduced in the previous chapter.
102 90
103 [[ 91 [[
104 Definition hd n (ls : ilist (S n)) : A := 92 Definition hd n (ls : ilist (S n)) : A :=
114 [[ 102 [[
115 Definition hd n (ls : ilist (S n)) : A := 103 Definition hd n (ls : ilist (S n)) : A :=
116 match ls with 104 match ls with
117 | Cons _ h _ => h 105 | Cons _ h _ => h
118 end. 106 end.
119 107 ]]
108
109 <<
120 Error: Non exhaustive pattern-matching: no clause found for pattern Nil 110 Error: Non exhaustive pattern-matching: no clause found for pattern Nil
121 111 >>
122 ]]
123 112
124 Unlike in ML, we cannot use inexhaustive pattern matching, because there is no conception of a %\texttt{%#<tt>#Match#</tt>#%}% exception to be thrown. In fact, recent versions of Coq %\textit{%#<i>#do#</i>#%}% allow this, by implicit translation to a [match] that considers all constructors. It is educational to discover that encoding ourselves directly. We might try using an [in] clause somehow. 113 Unlike in ML, we cannot use inexhaustive pattern matching, because there is no conception of a %\texttt{%#<tt>#Match#</tt>#%}% exception to be thrown. In fact, recent versions of Coq %\textit{%#<i>#do#</i>#%}% allow this, by implicit translation to a [match] that considers all constructors. It is educational to discover that encoding ourselves directly. We might try using an [in] clause somehow.
125 114
126 [[ 115 [[
127 Definition hd n (ls : ilist (S n)) : A := 116 Definition hd n (ls : ilist (S n)) : A :=
128 match ls in (ilist (S n)) with 117 match ls in (ilist (S n)) with
129 | Cons _ h _ => h 118 | Cons _ h _ => h
130 end. 119 end.
131 120 ]]
121
122 <<
132 Error: The reference n was not found in the current environment 123 Error: The reference n was not found in the current environment
133 124 >>
134 ]] 125
135 126 In this and other cases, we feel like we want [in] clauses with type family arguments that are not variables. Unfortunately, Coq only supports variables in those positions. A completely general mechanism could only be supported with a solution to the problem of higher-order unification%~\cite{HOU}%, which is undecidable. There %\textit{%#<i>#are#</i>#%}% useful heuristics for handling non-variable indices which are gradually making their way into Coq, but we will spend some time in this and the next few chapters on effective pattern matching on dependent types using only the primitive [match] annotations.
136 In this and other cases, we feel like we want [in] clauses with type family arguments that are not variables. Unfortunately, Coq only supports variables in those positions. A completely general mechanism could only be supported with a solution to the problem of higher-order unification, which is undecidable. There %\textit{%#<i>#are#</i>#%}% useful heuristics for handling non-variable indices which are gradually making their way into Coq, but we will spend some time in this and the next few chapters on effective pattern matching on dependent types using only the primitive [match] annotations.
137 127
138 Our final, working attempt at [hd] uses an auxiliary function and a surprising [return] annotation. *) 128 Our final, working attempt at [hd] uses an auxiliary function and a surprising [return] annotation. *)
139 129
140 (* begin thide *) 130 (* begin thide *)
141 Definition hd' n (ls : ilist n) := 131 Definition hd' n (ls : ilist n) :=
156 *) 146 *)
157 147
158 Definition hd n (ls : ilist (S n)) : A := hd' ls. 148 Definition hd n (ls : ilist (S n)) : A := hd' ls.
159 (* end thide *) 149 (* end thide *)
160 150
151 End ilist.
152
161 (** We annotate our main [match] with a type that is itself a [match]. We write that the function [hd'] returns [unit] when the list is empty and returns the carried type [A] in all other cases. In the definition of [hd], we just call [hd']. Because the index of [ls] is known to be nonzero, the type checker reduces the [match] in the type of [hd'] to [A]. *) 153 (** We annotate our main [match] with a type that is itself a [match]. We write that the function [hd'] returns [unit] when the list is empty and returns the carried type [A] in all other cases. In the definition of [hd], we just call [hd']. Because the index of [ls] is known to be nonzero, the type checker reduces the [match] in the type of [hd'] to [A]. *)
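(** To see the same trick in miniature, here is a small standalone definition, a hypothetical illustration that is not part of the original file: a strict predecessor function whose result type is itself a [match] on the argument, returning [unit] for [O] and a [nat] otherwise. *)

Definition pred_sketch (n : nat) : match n with O => unit | S _ => nat end :=
  match n return match n with O => unit | S _ => nat end with
    | O => tt
    | S n' => n'
  end.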
162 154
163 End ilist.
164
165 155
166 (** * A Tagless Interpreter *) 156 (** * A Tagless Interpreter *)
167 157
168 (** A favorite example for motivating the power of functional programming is implementation of a simple expression language interpreter. In ML and Haskell, such interpreters are often implemented using an algebraic datatype of values, where at many points it is checked that a value was built with the right constructor of the value type. With dependent types, we can implement a %\textit{%#<i>#tagless#</i>#%}% interpreter that both removes this source of runtime inefficiency and gives us more confidence that our implementation is correct. *) 158 (** A favorite example for motivating the power of functional programming is implementation of a simple expression language interpreter. In ML and Haskell, such interpreters are often implemented using an algebraic datatype of values, where at many points it is checked that a value was built with the right constructor of the value type. With dependent types, we can implement a %\index{tagless interpreters}\textit{%#<i>#tagless#</i>#%}% interpreter that both removes this source of runtime inefficiency and gives us more confidence that our implementation is correct. *)
169 159
170 Inductive type : Set := 160 Inductive type : Set :=
171 | Nat : type 161 | Nat : type
172 | Bool : type 162 | Bool : type
173 | Prod : type -> type -> type. 163 | Prod : type -> type -> type.
194 | Nat => nat 184 | Nat => nat
195 | Bool => bool 185 | Bool => bool
196 | Prod t1 t2 => typeDenote t1 * typeDenote t2 186 | Prod t1 t2 => typeDenote t1 * typeDenote t2
197 end%type. 187 end%type.
198 188
199 (** [typeDenote] compiles types of our object language into %``%#"#native#"#%''% Coq types. It is deceptively easy to implement. The only new thing we see is the [%type] annotation, which tells Coq to parse the [match] expression using the notations associated with types. Without this annotation, the [*] would be interpreted as multiplication on naturals, rather than as the product type constructor. [type] is one example of an identifier bound to a %\textit{%#<i>#notation scope#</i>#%}%. We will deal more explicitly with notations and notation scopes in later chapters. 189 (** The [typeDenote] function compiles types of our object language into %``%#"#native#"#%''% Coq types. It is deceptively easy to implement. The only new thing we see is the [%][type] annotation, which tells Coq to parse the [match] expression using the notations associated with types. Without this annotation, the [*] would be interpreted as multiplication on naturals, rather than as the product type constructor. The token [type] is one example of an identifier bound to a %\textit{%#<i>#notation scope#</i>#%}%. In this book, we will not go into more detail on notation scopes, but the Coq manual can be consulted for more information.
200 190
201 We can define a function [expDenote] that is typed in terms of [typeDenote]. *) 191 We can define a function [expDenote] that is typed in terms of [typeDenote]. *)
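(** Before moving on to [expDenote], a quick aside: the following [Check] commands are illustrative only and not part of the original file, but they show the effect of the scope annotation directly. With the [%type] key, [*] parses as the product type constructor; without it, [*] is multiplication on naturals. *)

Check (nat * bool)%type.
(** [nat * bool : Set] *)

Check (2 * 3).
(** [2 * 3 : nat] *)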
202 192
203 Fixpoint expDenote t (e : exp t) : typeDenote t := 193 Fixpoint expDenote t (e : exp t) : typeDenote t :=
204 match e with 194 match e with
223 Definition pairOut t1 t2 (e : exp (Prod t1 t2)) : option (exp t1 * exp t2) := 213 Definition pairOut t1 t2 (e : exp (Prod t1 t2)) : option (exp t1 * exp t2) :=
224 match e in (exp (Prod t1 t2)) return option (exp t1 * exp t2) with 214 match e in (exp (Prod t1 t2)) return option (exp t1 * exp t2) with
225 | Pair _ _ e1 e2 => Some (e1, e2) 215 | Pair _ _ e1 e2 => Some (e1, e2)
226 | _ => None 216 | _ => None
227 end. 217 end.
228 218 ]]
219
220 <<
229 Error: The reference t2 was not found in the current environment 221 Error: The reference t2 was not found in the current environment
230 ]] 222 >>
231 223
232 We run again into the problem of not being able to specify non-variable arguments in [in] clauses. The problem would just be hopeless without a use of an [in] clause, though, since the result type of the [match] depends on an argument to [exp]. Our solution will be to use a more general type, as we did for [hd]. First, we define a type-valued function to use in assigning a type to [pairOut]. *) 224 We run again into the problem of not being able to specify non-variable arguments in [in] clauses. The problem would just be hopeless without a use of an [in] clause, though, since the result type of the [match] depends on an argument to [exp]. Our solution will be to use a more general type, as we did for [hd]. First, we define a type-valued function to use in assigning a type to [pairOut]. *)
233 225
234 (* EX: Define a function [pairOut : forall t1 t2, exp (Prod t1 t2) -> option (exp t1 * exp t2)] *) 226 (* EX: Define a function [pairOut : forall t1 t2, exp (Prod t1 t2) -> option (exp t1 * exp t2)] *)
235 227
255 | Pair _ _ e1 e2 => Some (e1, e2) 247 | Pair _ _ e1 e2 => Some (e1, e2)
256 | _ => pairOutDefault _ 248 | _ => pairOutDefault _
257 end. 249 end.
258 (* end thide *) 250 (* end thide *)
259 251
260 (** There is one important subtlety in this definition. Coq allows us to use convenient ML-style pattern matching notation, but, internally and in proofs, we see that patterns are expanded out completely, matching one level of inductive structure at a time. Thus, the default case in the [match] above expands out to one case for each constructor of [exp] besides [Pair], and the underscore in [pairOutDefault _] is resolved differently in each case. From an ML or Haskell programmer's perspective, what we have here is type inference determining which code is run (returning either [None] or [tt]), which goes beyond what is possible with type inference guiding parametric polymorphism in Hindley-Milner languages, but is similar to what goes on with Haskell type classes. 252 (** There is one important subtlety in this definition. Coq allows us to use convenient ML-style pattern matching notation, but, internally and in proofs, we see that patterns are expanded out completely, matching one level of inductive structure at a time. Thus, the default case in the [match] above expands out to one case for each constructor of [exp] besides [Pair], and the underscore in [pairOutDefault _] is resolved differently in each case. From an ML or Haskell programmer's perspective, what we have here is type inference determining which code is run (returning either [None] or [tt]), which goes beyond what is possible with type inference guiding parametric polymorphism in Hindley-Milner languages%\index{Hindley-Milner}%, but is similar to what goes on with Haskell type classes%\index{type classes}%.
261 253
262 With [pairOut] available, we can write [cfold] in a straightforward way. There are really no surprises beyond that Coq verifies that this code has such an expressive type, given the small annotation burden. In some places, we see that Coq's [match] annotation inference is too smart for its own good, and we have to turn that inference off by writing [return _]. *) 254 With [pairOut] available, we can write [cfold] in a straightforward way. There are really no surprises beyond that Coq verifies that this code has such an expressive type, given the small annotation burden. In some places, we see that Coq's [match] annotation inference is too smart for its own good, and we have to turn that inference off by writing [return _]. *)
263 255
264 Fixpoint cfold t (e : exp t) : exp t := 256 Fixpoint cfold t (e : exp t) : exp t :=
265 match e with 257 match e with
348 340
349 We would like to do a case analysis on [cfold e1], and we attempt that in the way that has worked so far. 341 We would like to do a case analysis on [cfold e1], and we attempt that in the way that has worked so far.
350 342
351 [[ 343 [[
352 destruct (cfold e1). 344 destruct (cfold e1).
353 345 ]]
346
347 <<
354 User error: e1 is used in hypothesis e 348 User error: e1 is used in hypothesis e
355 349 >>
356 ]]
357 350
358 Coq gives us another cryptic error message. Like so many others, this one basically means that Coq is not able to build some proof about dependent types. It is hard to generate helpful and specific error messages for problems like this, since that would require some kind of understanding of the dependency structure of a piece of code. We will encounter many examples of case-specific tricks for recovering from errors like this one. 351 Coq gives us another cryptic error message. Like so many others, this one basically means that Coq is not able to build some proof about dependent types. It is hard to generate helpful and specific error messages for problems like this, since that would require some kind of understanding of the dependency structure of a piece of code. We will encounter many examples of case-specific tricks for recovering from errors like this one.
359 352
360 For our current proof, we can use a tactic [dep_destruct] defined in the book [Tactics] module. General elimination/inversion of dependently-typed hypotheses is undecidable, since it must be implemented with [match] expressions that have the restriction on [in] clauses that we have already discussed. [dep_destruct] makes a best effort to handle some common cases, relying upon the more primitive [dependent destruction] tactic that comes with Coq. In a future chapter, we will learn about the explicit manipulation of equality proofs that is behind [dep_destruct]'s implementation in Ltac, but for now, we treat it as a useful black box. *) 353 For our current proof, we can use a tactic [dep_destruct]%\index{tactics!dep\_destruct}% defined in the book [CpdtTactics] module. General elimination/inversion of dependently typed hypotheses is undecidable, since it must be implemented with [match] expressions that have the restriction on [in] clauses that we have already discussed. The tactic [dep_destruct] makes a best effort to handle some common cases, relying upon the more primitive %\index{tactics!dependent destruction}%[dependent destruction] tactic that comes with Coq. In a future chapter, we will learn about the explicit manipulation of equality proofs that is behind [dep_destruct]'s implementation in Ltac, but for now, we treat it as a useful black box. (In Chapter 11, we will also see how [dependent destruction] forces us to make a larger philosophical commitment about our logic than we might like, and we will see some workarounds.) *)
361 354
362 dep_destruct (cfold e1). 355 dep_destruct (cfold e1).
363 356
364 (** This successfully breaks the subgoal into 5 new subgoals, one for each constructor of [exp] that could produce an [exp Nat]. Note that [dep_destruct] is successful in ruling out the other cases automatically, in effect automating some of the work that we have done manually in implementing functions like [hd] and [pairOut]. 357 (** This successfully breaks the subgoal into 5 new subgoals, one for each constructor of [exp] that could produce an [exp Nat]. Note that [dep_destruct] is successful in ruling out the other cases automatically, in effect automating some of the work that we have done manually in implementing functions like [hd] and [pairOut].
365 358
381 end; crush). 374 end; crush).
382 Qed. 375 Qed.
383 (* end thide *) 376 (* end thide *)
384 377
385 378
386 (** * Dependently-Typed Red-Black Trees *) 379 (** * Dependently Typed Red-Black Trees *)
387 380
388 (** Red-black trees are a favorite purely-functional data structure with an interesting invariant. We can use dependent types to enforce that operations on red-black trees preserve the invariant. For simplicity, we specialize our red-black trees to represent sets of [nat]s. *) 381 (** Red-black trees are a favorite purely functional data structure with an interesting invariant. We can use dependent types to enforce that operations on red-black trees preserve the invariant. For simplicity, we specialize our red-black trees to represent sets of [nat]s. *)
389 382
390 Inductive color : Set := Red | Black. 383 Inductive color : Set := Red | Black.
391 384
392 Inductive rbtree : color -> nat -> Set := 385 Inductive rbtree : color -> nat -> Set :=
393 | Leaf : rbtree Black 0 386 | Leaf : rbtree Black 0
412 | RedNode _ t1 _ t2 => S (f (depth t1) (depth t2)) 405 | RedNode _ t1 _ t2 => S (f (depth t1) (depth t2))
413 | BlackNode _ _ _ t1 _ t2 => S (f (depth t1) (depth t2)) 406 | BlackNode _ _ _ t1 _ t2 => S (f (depth t1) (depth t2))
414 end. 407 end.
415 End depth. 408 End depth.
416 409
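(** As a quick sanity check (an illustrative example, not part of the original file), both depth measures compute as expected on a small legal tree of black depth 1, and they fall within the bounds proved below. *)

Example depth_min_example :
  depth min (BlackNode (RedNode Leaf 0 Leaf) 1 (RedNode Leaf 2 Leaf)) = 2.
  reflexivity.
Qed.

Example depth_max_example :
  depth max (BlackNode (RedNode Leaf 0 Leaf) 1 (RedNode Leaf 2 Leaf)) = 2.
  reflexivity.
Qed.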
417 (** Our proof of balanced-ness decomposes naturally into a lower bound and an upper bound. We prove the lower bound first. Unsurprisingly, a tree's black depth provides such a bound on the minimum path length. We use the richly-typed procedure [min_dec] to do case analysis on whether [min X Y] equals [X] or [Y]. *) 410 (** Our proof of balanced-ness decomposes naturally into a lower bound and an upper bound. We prove the lower bound first. Unsurprisingly, a tree's black depth provides such a bound on the minimum path length. We use the richly typed procedure [min_dec] to do case analysis on whether [min X Y] equals [X] or [Y]. *)
418 411
419 Check min_dec. 412 Check min_dec.
420 (** %\vspace{-.15in}% [[ 413 (** %\vspace{-.15in}% [[
421 min_dec 414 min_dec
422 : forall n m : nat, {min n m = n} + {min n m = m} 415 : forall n m : nat, {min n m = n} + {min n m = m}
423
424 ]] 416 ]]
425 *) 417 *)
426 418
427 Theorem depth_min : forall c n (t : rbtree c n), depth min t >= n. 419 Theorem depth_min : forall c n (t : rbtree c n), depth min t >= n.
428 induction t; crush; 420 induction t; crush;
470 | [ H : context[match ?C with Red => _ | Black => _ end] |- _ ] => 462 | [ H : context[match ?C with Red => _ | Black => _ end] |- _ ] =>
471 destruct C 463 destruct C
472 end; crush). 464 end; crush).
473 Qed. 465 Qed.
474 466
475 (** The original theorem follows easily from the lemma. We use the tactic [generalize pf], which, when [pf] proves the proposition [P], changes the goal from [Q] to [P -> Q]. It is useful to do this because it makes the truth of [P] manifest syntactically, so that automation machinery can rely on [P], even if that machinery is not smart enough to establish [P] on its own. *) 467 (** The original theorem follows easily from the lemma. We use the tactic %\index{tactics!generalize}%[generalize pf], which, when [pf] proves the proposition [P], changes the goal from [Q] to [P -> Q]. This transformation is useful because it makes the truth of [P] manifest syntactically, so that automation machinery can rely on [P], even if that machinery is not smart enough to establish [P] on its own. *)
476 468
477 Theorem depth_max : forall c n (t : rbtree c n), depth max t <= 2 * n + 1. 469 Theorem depth_max : forall c n (t : rbtree c n), depth max t <= 2 * n + 1.
478 intros; generalize (depth_max' t); destruct c; crush. 470 intros; generalize (depth_max' t); destruct c; crush.
479 Qed. 471 Qed.
480 472
488 (** Now we are ready to implement an example operation on our trees, insertion. Insertion can be thought of as breaking the tree invariants locally but then rebalancing. In particular, in intermediate states we find red nodes that may have red children. The type [rtree] captures the idea of such a node, continuing to track black depth as a type index. *) 480 (** Now we are ready to implement an example operation on our trees, insertion. Insertion can be thought of as breaking the tree invariants locally but then rebalancing. In particular, in intermediate states we find red nodes that may have red children. The type [rtree] captures the idea of such a node, continuing to track black depth as a type index. *)
489 481
490 Inductive rtree : nat -> Set := 482 Inductive rtree : nat -> Set :=
491 | RedNode' : forall c1 c2 n, rbtree c1 n -> nat -> rbtree c2 n -> rtree n. 483 | RedNode' : forall c1 c2 n, rbtree c1 n -> nat -> rbtree c2 n -> rtree n.
492 484
493 (** Before starting to define [insert], we define predicates capturing when a data value is in the set represented by a normal or possibly-invalid tree. *) 485 (** Before starting to define [insert], we define predicates capturing when a data value is in the set represented by a normal or possibly invalid tree. *)
494 486
495 Section present. 487 Section present.
496 Variable x : nat. 488 Variable x : nat.
497 489
498 Fixpoint present c n (t : rbtree c n) : Prop := 490 Fixpoint present c n (t : rbtree c n) : Prop :=
506 match t with 498 match t with
507 | RedNode' _ _ _ a y b => present a \/ x = y \/ present b 499 | RedNode' _ _ _ a y b => present a \/ x = y \/ present b
508 end. 500 end.
509 End present. 501 End present.
510 502
511 (** Insertion relies on two balancing operations. It will be useful to give types to these operations using a relative of the subset types from last chapter. While subset types let us pair a value with a proof about that value, here we want to pair a value with another non-proof dependently-typed value. The [sigT] type fills this role. *) 503 (** Insertion relies on two balancing operations. It will be useful to give types to these operations using a relative of the subset types from last chapter. While subset types let us pair a value with a proof about that value, here we want to pair a value with another non-proof dependently typed value. The %\index{Gallina terms!sigT}%[sigT] type fills this role. *)
512 504
513 Locate "{ _ : _ & _ }". 505 Locate "{ _ : _ & _ }".
514 (** [[ 506 (** [[
515 Notation Scope 507 Notation Scope
516 "{ x : A & P }" := sigT (fun x : A => P) 508 "{ x : A & P }" := sigT (fun x : A => P)
528 520
529 Notation "{< x >}" := (existT _ _ x). 521 Notation "{< x >}" := (existT _ _ x).
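(** As a quick illustration of the notation (a hypothetical example, not in the original file), we can package the empty tree together with its root color: *)

Example packed_leaf : { c : color & rbtree c 0 } := {< Leaf >}.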
530 522
531 (** Each balance function is used to construct a new tree whose keys include the keys of two input trees, as well as a new key. One of the two input trees may violate the red-black alternation invariant (that is, it has an [rtree] type), while the other tree is known to be valid. Crucially, the two input trees have the same black depth. 523 (** Each balance function is used to construct a new tree whose keys include the keys of two input trees, as well as a new key. One of the two input trees may violate the red-black alternation invariant (that is, it has an [rtree] type), while the other tree is known to be valid. Crucially, the two input trees have the same black depth.
532 524
533 A balance operation may return a tree whose root is of either color. Thus, we use a [sigT] type to package the result tree with the color of its root. Here is the definition of the first balance operation, which applies when the possibly-invalid [rtree] belongs to the left of the valid [rbtree]. *) 525 A balance operation may return a tree whose root is of either color. Thus, we use a [sigT] type to package the result tree with the color of its root. Here is the definition of the first balance operation, which applies when the possibly invalid [rtree] belongs to the left of the valid [rbtree].
526
527 A quick word of encouragement: After writing this code, even I do not understand the precise details of how balancing works! I consulted Chris Okasaki's paper %``%#"#Red-Black Trees in a Functional Setting#"#%''~\cite{Okasaki}% and transcribed the code to use dependent types. Luckily, the details are not so important here; types alone will tell us that insertion preserves balanced-ness, and we will prove that insertion produces trees containing the right keys.*)
534 528
535 Definition balance1 n (a : rtree n) (data : nat) c2 := 529 Definition balance1 n (a : rtree n) (data : nat) c2 :=
536 match a in rtree n return rbtree c2 n 530 match a in rtree n return rbtree c2 n
537 -> { c : color & rbtree c (S n) } with 531 -> { c : color & rbtree c (S n) } with
538 | RedNode' _ _ _ t1 y t2 => 532 | RedNode' _ _ _ t1 y t2 =>
548 | b => fun a t => {<BlackNode (RedNode a y b) data t>} 542 | b => fun a t => {<BlackNode (RedNode a y b) data t>}
549 end t1' 543 end t1'
550 end t2 544 end t2
551 end. 545 end.
552 546
553 (** We apply a trick that I call the %\textit{%#<i>#convoy pattern#</i>#%}%. Recall that [match] annotations only make it possible to describe a dependence of a [match] %\textit{%#<i>#result type#</i>#%}% on the discriminee. There is no automatic refinement of the types of free variables. However, it is possible to effect such a refinement by finding a way to encode free variable type dependencies in the [match] result type, so that a [return] clause can express the connection. 547 (** We apply a trick that I call the %\index{convoy pattern}\textit{%#<i>#convoy pattern#</i>#%}%. Recall that [match] annotations only make it possible to describe a dependence of a [match] %\textit{%#<i>#result type#</i>#%}% on the discriminee. There is no automatic refinement of the types of free variables. However, it is possible to effect such a refinement by finding a way to encode free variable type dependencies in the [match] result type, so that a [return] clause can express the connection.
554 548
555 In particular, we can extend the [match] to return %\textit{%#<i>#functions over the free variables whose types we want to refine#</i>#%}%. In the case of [balance1], we only find ourselves wanting to refine the type of one tree variable at a time. We match on one subtree of a node, and we want the type of the other subtree to be refined based on what we learn. We indicate this with a [return] clause starting like [rbtree _ n -> ...], where [n] is bound in an [in] pattern. Such a [match] expression is applied immediately to the %``%#"#old version#"#%''% of the variable to be refined, and the type checker is happy. 549 In particular, we can extend the [match] to return %\textit{%#<i>#functions over the free variables whose types we want to refine#</i>#%}%. In the case of [balance1], we only find ourselves wanting to refine the type of one tree variable at a time. We match on one subtree of a node, and we want the type of the other subtree to be refined based on what we learn. We indicate this with a [return] clause starting like [rbtree _ n -> ...], where [n] is bound in an [in] pattern. Such a [match] expression is applied immediately to the %``%#"#old version#"#%''% of the variable to be refined, and the type checker is happy.
556 550
557 After writing this code, even I do not understand the precise details of how balancing works. I consulted Chris Okasaki's paper %``%#"#Red-Black Trees in a Functional Setting#"#%''% and transcribed the code to use dependent types. Luckily, the details are not so important here; types alone will tell us that insertion preserves balanced-ness, and we will prove that insertion produces trees containing the right keys. 551 Here is the symmetric function [balance2], for cases where the possibly invalid tree appears on the right rather than on the left. *)
558
559 Here is the symmetric function [balance2], for cases where the possibly-invalid tree appears on the right rather than on the left. *)
560 552
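(** Before that, here is a minimal standalone sketch of the convoy pattern, using a hypothetical helper that is not part of the original development: given two trees of the same black depth, return the right tree when the left is a leaf and the left tree otherwise. Matching on [t1] alone would not refine [t2]'s type, so we pass [t2] through the [match] as a function argument and apply the whole [match] to it. *)

Definition pick_nonleaf c1 c2 n (t1 : rbtree c1 n) (t2 : rbtree c2 n)
  : { c : color & rbtree c n } :=
  match t1 in rbtree _ n' return rbtree c2 n' -> { c : color & rbtree c n' } with
    | Leaf => fun t2' => {< t2' >}
    | RedNode _ a x b => fun _ => {< RedNode a x b >}
    | BlackNode _ _ _ a x b => fun _ => {< BlackNode a x b >}
  end t2.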
561 Definition balance2 n (a : rtree n) (data : nat) c2 := 553 Definition balance2 n (a : rtree n) (data : nat) c2 :=
562 match a in rtree n return rbtree c2 n -> { c : color & rbtree c (S n) } with 554 match a in rtree n return rbtree c2 n -> { c : color & rbtree c (S n) } with
563 | RedNode' _ _ _ t1 z t2 => 555 | RedNode' _ _ _ t1 z t2 =>
564 match t1 in rbtree c n return rbtree _ n -> rbtree c2 n 556 match t1 in rbtree c n return rbtree _ n -> rbtree c2 n
586 match c with 578 match c with
587 | Red => rtree n 579 | Red => rtree n
588 | Black => { c' : color & rbtree c' n } 580 | Black => { c' : color & rbtree c' n }
589 end. 581 end.
590 582
591 (** That is, inserting into a tree with root color [c] and black depth [n], the variety of tree we get out depends on [c]. If we started with a red root, then we get back a possibly-invalid tree of depth [n]. If we started with a black root, we get back a valid tree of depth [n] with a root node of an arbitrary color. 583 (** That is, inserting into a tree with root color [c] and black depth [n], the variety of tree we get out depends on [c]. If we started with a red root, then we get back a possibly invalid tree of depth [n]. If we started with a black root, we get back a valid tree of depth [n] with a root node of an arbitrary color.
592 584
593 Here is the definition of [ins]. Again, we do not want to dwell on the functional details. *) 585 Here is the definition of [ins]. Again, we do not want to dwell on the functional details. *)
594 586
595 Fixpoint ins c n (t : rbtree c n) : insResult c n := 587 Fixpoint ins c n (t : rbtree c n) : insResult c n :=
596 match t with 588 match t with
611 | Red => fun ins_b => balance2 ins_b y a 603 | Red => fun ins_b => balance2 ins_b y a
612 | _ => fun ins_b => {< BlackNode a y (projT2 ins_b) >} 604 | _ => fun ins_b => {< BlackNode a y (projT2 ins_b) >}
613 end (ins b) 605 end (ins b)
614 end. 606 end.
615 607
616 (** The one new trick is a variation of the convoy pattern. In each of the last two pattern matches, we want to take advantage of the typing connection between the trees [a] and [b]. We might naively apply the convoy pattern directly on [a] in the first [match] and on [b] in the second. This satisfies the type checker per se, but it does not satisfy the termination checker. Inside each [match], we would be calling [ins] recursively on a locally-bound variable. The termination checker is not smart enough to trace the dataflow into that variable, so the checker does not know that this recursive argument is smaller than the original argument. We make this fact clearer by applying the convoy pattern on %\textit{%#<i>#the result of a recursive call#</i>#%}%, rather than just on that call's argument. 608 (** The one new trick is a variation of the convoy pattern. In each of the last two pattern matches, we want to take advantage of the typing connection between the trees [a] and [b]. We might naively apply the convoy pattern directly on [a] in the first [match] and on [b] in the second. This satisfies the type checker per se, but it does not satisfy the termination checker. Inside each [match], we would be calling [ins] recursively on a locally bound variable. The termination checker is not smart enough to trace the dataflow into that variable, so the checker does not know that this recursive argument is smaller than the original argument. We make this fact clearer by applying the convoy pattern on %\textit{%#<i>#the result of a recursive call#</i>#%}%, rather than just on that call's argument.
617 609
618 Finally, we are in the home stretch of our effort to define [insert]. We just need a few more definitions of non-recursive functions. First, we need to give the final characterization of [insert]'s return type. Inserting into a red-rooted tree gives a black-rooted tree where black depth has increased, and inserting into a black-rooted tree gives a tree where black depth has stayed the same and where the root is an arbitrary color. *) 610 Finally, we are in the home stretch of our effort to define [insert]. We just need a few more definitions of non-recursive functions. First, we need to give the final characterization of [insert]'s return type. Inserting into a red-rooted tree gives a black-rooted tree where black depth has increased, and inserting into a black-rooted tree gives a tree where black depth has stayed the same and where the root is an arbitrary color. *)
619 611
620 Definition insertResult c n := 612 Definition insertResult c n :=
621 match c with 613 match c with
648 Section present. 640 Section present.
649 Variable z : nat. 641 Variable z : nat.
650 642
651 (** The variable [z] stands for an arbitrary key. We will reason about [z]'s presence in particular trees. As usual, outside the section the theorems we prove will quantify over all possible keys, giving us the facts we wanted. 643 (** The variable [z] stands for an arbitrary key. We will reason about [z]'s presence in particular trees. As usual, outside the section the theorems we prove will quantify over all possible keys, giving us the facts we wanted.
652 644
653 We start by proving the correctness of the balance operations. It is useful to define a custom tactic [present_balance] that encapsulates the reasoning common to the two proofs. We use the keyword [Ltac] to assign a name to a proof script. This particular script just iterates between [crush] and identification of a tree that is being pattern-matched on and should be destructed. *) 645 We start by proving the correctness of the balance operations. It is useful to define a custom tactic [present_balance] that encapsulates the reasoning common to the two proofs. We use the keyword %\index{Vernacular commands!Ltac}%[Ltac] to assign a name to a proof script. This particular script just iterates between [crush] and identification of a tree that is being pattern-matched on and should be destructed. *)
654 646
655 Ltac present_balance := 647 Ltac present_balance :=
656 crush; 648 crush;
657 repeat (match goal with 649 repeat (match goal with
658 | [ H : context[match ?T with 650 | [ _ : context[match ?T with
659 | Leaf => _ 651 | Leaf => _
660 | RedNode _ _ _ _ => _ 652 | RedNode _ _ _ _ => _
661 | BlackNode _ _ _ _ _ _ => _ 653 | BlackNode _ _ _ _ _ _ => _
662 end] |- _ ] => dep_destruct T 654 end] |- _ ] => dep_destruct T
663 | [ |- context[match ?T with 655 | [ |- context[match ?T with
695 687
696 Theorem present_ins : forall c n (t : rbtree c n), 688 Theorem present_ins : forall c n (t : rbtree c n),
697 present_insResult t (ins t). 689 present_insResult t (ins t).
698 induction t; crush; 690 induction t; crush;
699 repeat (match goal with 691 repeat (match goal with
700 | [ H : context[if ?E then _ else _] |- _ ] => destruct E 692 | [ _ : context[if ?E then _ else _] |- _ ] => destruct E
701 | [ |- context[if ?E then _ else _] ] => destruct E 693 | [ |- context[if ?E then _ else _] ] => destruct E
702 | [ H : context[match ?C with Red => _ | Black => _ end] 694 | [ _ : context[match ?C with Red => _ | Black => _ end]
703 |- _ ] => destruct C 695 |- _ ] => destruct C
704 end; crush); 696 end; crush);
705 try match goal with 697 try match goal with
706 | [ H : context[balance1 ?A ?B ?C] |- _ ] => 698 | [ _ : context[balance1 ?A ?B ?C] |- _ ] =>
707 generalize (present_balance1 A B C) 699 generalize (present_balance1 A B C)
708 end; 700 end;
709 try match goal with 701 try match goal with
710 | [ H : context[balance2 ?A ?B ?C] |- _ ] => 702 | [ _ : context[balance2 ?A ?B ?C] |- _ ] =>
711 generalize (present_balance2 A B C) 703 generalize (present_balance2 A B C)
712 end; 704 end;
713 try match goal with 705 try match goal with
714 | [ |- context[balance1 ?A ?B ?C] ] => 706 | [ |- context[balance1 ?A ?B ?C] ] =>
715 generalize (present_balance1 A B C) 707 generalize (present_balance1 A B C)
749 present_insert. 741 present_insert.
750 Qed. 742 Qed.
751 End present. 743 End present.
752 End insert. 744 End insert.
753 745
754 (** We can generate executable OCaml code with the command [Recursive Extraction insert], which also automatically outputs the OCaml versions of all of [insert]'s dependencies. In our previous extractions, we wound up with clean OCaml code. Here, we find uses of %\texttt{%#<tt>#Obj.magic#</tt>#%}%, OCaml's unsafe cast operator for tweaking the apparent type of an expression in an arbitrary way. Casts appear for this example because the return type of [insert] depends on the %\textit{%#<i>#value#</i>#%}% of the function's argument, a pattern which OCaml cannot handle. Since Coq's type system is much more expressive than OCaml's, such casts are unavoidable in general. Since the OCaml type-checker is no longer checking full safety of programs, we must rely on Coq's extractor to use casts only in provably safe ways. *) 746 (** We can generate executable OCaml code with the command %\index{Vernacular commands!Recursive Extraction}%[Recursive Extraction insert], which also automatically outputs the OCaml versions of all of [insert]'s dependencies. In our previous extractions, we wound up with clean OCaml code. Here, we find uses of %\index{Obj.magic}\texttt{%#<tt>#Obj.magic#</tt>#%}%, OCaml's unsafe cast operator for tweaking the apparent type of an expression in an arbitrary way. Casts appear for this example because the return type of [insert] depends on the %\textit{%#<i>#value#</i>#%}% of the function's argument, a pattern which OCaml cannot handle. Since Coq's type system is much more expressive than OCaml's, such casts are unavoidable in general. Since the OCaml type-checker is no longer checking full safety of programs, we must rely on Coq's extractor to use casts only in provably safe ways. *)
747
748 (* begin hide *)
749 Recursive Extraction insert.
750 (* end hide *)
755 751
756 752
757 (** * A Certified Regular Expression Matcher *) 753 (** * A Certified Regular Expression Matcher *)
758 754
759 (** Another interesting example is regular expressions with dependent types that express which predicates over strings particular regexps implement. We can then assign a dependent type to a regular expression matching function, guaranteeing that it always decides the string property that we expect it to decide. 755 (** Another interesting example is regular expressions with dependent types that express which predicates over strings particular regexps implement. We can then assign a dependent type to a regular expression matching function, guaranteeing that it always decides the string property that we expect it to decide.
760 756
761 Before defining the syntax of expressions, it is helpful to define an inductive type capturing the meaning of the Kleene star. That is, a string [s] matches regular expression [star e] if and only if [s] can be decomposed into a sequence of substrings that all match [e]. We use Coq's string support, which comes through a combination of the [Strings] library and some parsing notations built into Coq. Operators like [++] and functions like [length] that we know from lists are defined again for strings. Notation scopes help us control which versions we want to use in particular contexts. *) 757 Before defining the syntax of expressions, it is helpful to define an inductive type capturing the meaning of the Kleene star. That is, a string [s] matches regular expression [star e] if and only if [s] can be decomposed into a sequence of substrings that all match [e]. We use Coq's string support, which comes through a combination of the [Strings] library and some parsing notations built into Coq. Operators like [++] and functions like [length] that we know from lists are defined again for strings. Notation scopes help us control which versions we want to use in particular contexts.%\index{Vernacular commands!Open Scope}% *)
762 758
763 Require Import Ascii String. 759 Require Import Ascii String.
764 Open Scope string_scope. 760 Open Scope string_scope.
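(** As a quick sanity check (an illustrative command, not part of the original file), the string versions of these operators are now in scope: *)

Eval compute in ("Hello, " ++ "world").
(** [= "Hello, world" : string] *)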
765 761
766 Section star. 762 Section star.
773 -> star s2 769 -> star s2
774 -> star (s1 ++ s2). 770 -> star (s1 ++ s2).
775 End star. 771 End star.
776 772
777 (** Now we can make our first attempt at defining a [regexp] type that is indexed by predicates on strings. Here is a reasonable-looking definition that is restricted to constant characters and concatenation. We use the constructor [String], which is the analogue of list cons for the type [string], where [""] is like list nil. 773 (** Now we can make our first attempt at defining a [regexp] type that is indexed by predicates on strings. Here is a reasonable-looking definition that is restricted to constant characters and concatenation. We use the constructor [String], which is the analogue of list cons for the type [string], where [""] is like list nil.
778
779 [[ 774 [[
780 Inductive regexp : (string -> Prop) -> Set := 775 Inductive regexp : (string -> Prop) -> Set :=
781 | Char : forall ch : ascii, 776 | Char : forall ch : ascii,
782 regexp (fun s => s = String ch "") 777 regexp (fun s => s = String ch "")
783 | Concat : forall (P1 P2 : string -> Prop) (r1 : regexp P1) (r2 : regexp P2), 778 | Concat : forall (P1 P2 : string -> Prop) (r1 : regexp P1) (r2 : regexp P2),
784 regexp (fun s => exists s1, exists s2, s = s1 ++ s2 /\ P1 s1 /\ P2 s2). 779 regexp (fun s => exists s1, exists s2, s = s1 ++ s2 /\ P1 s1 /\ P2 s2).
785 780 ]]
781
782 <<
786 User error: Large non-propositional inductive types must be in Type 783 User error: Large non-propositional inductive types must be in Type
787 784 >>
788 ]] 785
789 786 What is a %\index{large inductive types}%large inductive type? In Coq, it is an inductive type that has a constructor which quantifies over some type of type [Type]. We have not worked with [Type] very much to this point. Every term of CIC has a type, including [Set] and [Prop], which are assigned type [Type]. The type [string -> Prop] from the failed definition also has type [Type].
790 What is a large inductive type? In Coq, it is an inductive type that has a constructor which quantifies over some type of type [Type]. We have not worked with [Type] very much to this point. Every term of CIC has a type, including [Set] and [Prop], which are assigned type [Type]. The type [string -> Prop] from the failed definition also has type [Type].
791 787
792 It turns out that allowing large inductive types in [Set] leads to contradictions when combined with certain kinds of classical logic reasoning. Thus, by default, such types are ruled out. There is a simple fix for our [regexp] definition, which is to place our new type in [Type]. While fixing the problem, we also expand the list of constructors to cover the remaining regular expression operators. *) 788 It turns out that allowing large inductive types in [Set] leads to contradictions when combined with certain kinds of classical logic reasoning. Thus, by default, such types are ruled out. There is a simple fix for our [regexp] definition, which is to place our new type in [Type]. While fixing the problem, we also expand the list of constructors to cover the remaining regular expression operators. *)
793 789
794 Inductive regexp : (string -> Prop) -> Type := 790 Inductive regexp : (string -> Prop) -> Type :=
795 | Char : forall ch : ascii, 791 | Char : forall ch : ascii,
875 (** We require a choice of two arbitrary string predicates and functions for deciding them. *) 871 (** We require a choice of two arbitrary string predicates and functions for deciding them. *)
876 872
877 Variable s : string. 873 Variable s : string.
878 (** Our computation will take place relative to a single fixed string, so it is easiest to make it a [Variable], rather than an explicit argument to our functions. *) 874 (** Our computation will take place relative to a single fixed string, so it is easiest to make it a [Variable], rather than an explicit argument to our functions. *)
879 875
880 (** [split'] is the workhorse behind [split]. It searches through the possible ways of splitting [s] into two pieces, checking the two predicates against each such pair. [split'] progresses right-to-left, from splitting all of [s] into the first piece to splitting all of [s] into the second piece. It takes an extra argument, [n], which specifies how far along we are in this search process. *) 876 (** The function [split'] is the workhorse behind [split]. It searches through the possible ways of splitting [s] into two pieces, checking the two predicates against each such pair. The execution of [split'] progresses right-to-left, from splitting all of [s] into the first piece to splitting all of [s] into the second piece. It takes an extra argument, [n], which specifies how far along we are in this search process. *)
881 877
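(** As a purely hypothetical illustration, if [s] were ["abc"] and we invoked [split'] with [n = 3], the candidate splits would be examined in the order [("abc", "")], [("ab", "c")], [("a", "bc")], [("", "abc")], stopping as soon as both deciders accept some pair. *)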
882 Definition split' : forall n : nat, n <= length s 878 Definition split' : forall n : nat, n <= length s
883 -> {exists s1, exists s2, length s1 <= n /\ s1 ++ s2 = s /\ P1 s1 /\ P2 s2} 879 -> {exists s1, exists s2, length s1 <= n /\ s1 ++ s2 = s /\ P1 s1 /\ P2 s2}
884 + {forall s1 s2, length s1 <= n -> s1 ++ s2 = s -> ~ P1 s1 \/ ~ P2 s2}. 880 + {forall s1 s2, length s1 <= n -> s1 ++ s2 = s -> ~ P1 s1 \/ ~ P2 s2}.
885 refine (fix F (n : nat) : n <= length s 881 refine (fix F (n : nat) : n <= length s
891 && P2_dec (substring (S n') (length s - S n') s)) 887 && P2_dec (substring (S n') (length s - S n') s))
892 || F n' _ 888 || F n' _
893 end); clear F; crush; eauto 7; 889 end); clear F; crush; eauto 7;
894 match goal with 890 match goal with
895 | [ _ : length ?S <= 0 |- _ ] => destruct S 891 | [ _ : length ?S <= 0 |- _ ] => destruct S
896 | [ _ : length ?S' <= S ?N |- _ ] => 892 | [ _ : length ?S' <= S ?N |- _ ] => destruct (eq_nat_dec (length S') (S N))
897 generalize (eq_nat_dec (length S') (S N)); destruct 1
898 end; crush. 893 end; crush.
899 Defined. 894 Defined.
900 895
901 (** There is one subtle point in the [split'] code that is worth mentioning. The main body of the function is a [match] on [n]. In the case where [n] is known to be [S n'], we write [S n'] in several places where we might be tempted to write [n]. However, without further work to craft proper [match] annotations, the type-checker does not use the equality between [n] and [S n']. Thus, it is common to see patterns repeated in [match] case bodies in dependently-typed Coq code. We can at least use a [let] expression to avoid copying the pattern more than once, replacing the first case body with: 896 (** There is one subtle point in the [split'] code that is worth mentioning. The main body of the function is a [match] on [n]. In the case where [n] is known to be [S n'], we write [S n'] in several places where we might be tempted to write [n]. However, without further work to craft proper [match] annotations, the type-checker does not use the equality between [n] and [S n']. Thus, it is common to see patterns repeated in [match] case bodies in dependently typed Coq code. We can at least use a [let] expression to avoid copying the pattern more than once, replacing the first case body with:
902
903 [[ 897 [[
904 | S n' => fun _ => let n := S n' in 898 | S n' => fun _ => let n := S n' in
905 (P1_dec (substring 0 n s) 899 (P1_dec (substring 0 n s)
906 && P2_dec (substring n (length s - n) s)) 900 && P2_dec (substring n (length s - n) s))
907 || F n' _ 901 || F n' _
908 902
909 ]] 903 ]]
910 904
911 [split] itself is trivial to implement in terms of [split']. We just ask [split'] to begin its search with [n = length s]. *) 905 The [split] function itself is trivial to implement in terms of [split']. We just ask [split'] to begin its search with [n = length s]. *)
912 906
913 Definition split : {exists s1, exists s2, s = s1 ++ s2 /\ P1 s1 /\ P2 s2} 907 Definition split : {exists s1, exists s2, s = s1 ++ s2 /\ P1 s1 /\ P2 s2}
914 + {forall s1 s2, s = s1 ++ s2 -> ~ P1 s1 \/ ~ P2 s2}. 908 + {forall s1 s2, s = s1 ++ s2 -> ~ P1 s1 \/ ~ P2 s2}.
915 refine (Reduce (split' (n := length s) _)); crush; eauto. 909 refine (Reduce (split' (n := length s) _)); crush; eauto.
916 Defined. 910 Defined.
1016 1010
1017 Section dec_star. 1011 Section dec_star.
1018 Variable P : string -> Prop. 1012 Variable P : string -> Prop.
1019 Variable P_dec : forall s, {P s} + {~ P s}. 1013 Variable P_dec : forall s, {P s} + {~ P s}.
1020 1014
1021 (** Some new lemmas and hints about the [star] type family are useful here. We omit them here; they are included in the book source at this point. *) 1015 (** Some new lemmas and hints about the [star] type family are useful. We omit them here; they are included in the book source at this point. *)
1022 1016
1023 (* begin hide *) 1017 (* begin hide *)
1024 Hint Constructors star. 1018 Hint Constructors star.
1025 1019
1026 Lemma star_empty : forall s, 1020 Lemma star_empty : forall s,
1149 + {~ star P (substring n' (length s - n') s)} := 1143 + {~ star P (substring n' (length s - n') s)} :=
1150 match n with 1144 match n with
1151 | O => fun _ => Yes 1145 | O => fun _ => Yes
1152 | S n'' => fun _ => 1146 | S n'' => fun _ =>
1153 le_gt_dec (length s) n' 1147 le_gt_dec (length s) n'
1154 || dec_star'' (n := n') (star P) (fun n0 _ => Reduce (F n'' n0 _)) (length s - n') 1148 || dec_star'' (n := n') (star P)
1149 (fun n0 _ => Reduce (F n'' n0 _)) (length s - n')
1155 end); clear F; crush; eauto; 1150 end); clear F; crush; eauto;
1156 match goal with 1151 match goal with
1157 | [ H : star _ _ |- _ ] => apply star_substring_inv in H; crush; eauto 1152 | [ H : star _ _ |- _ ] => apply star_substring_inv in H; crush; eauto
1158 end; 1153 end;
1159 match goal with 1154 match goal with
1225 1220
1226 (** * Exercises *) 1221 (** * Exercises *)
1227 1222
1228 (** %\begin{enumerate}%#<ol># 1223 (** %\begin{enumerate}%#<ol>#
1229 1224
1230 %\item%#<li># Define a kind of dependently-typed lists, where a list's type index gives a lower bound on how many of its elements satisfy a particular predicate. In particular, for an arbitrary set [A] and a predicate [P] over it: 1225 %\item%#<li># Define a kind of dependently typed lists, where a list's type index gives a lower bound on how many of its elements satisfy a particular predicate. In particular, for an arbitrary set [A] and a predicate [P] over it:
1231 %\begin{enumerate}%#<ol># 1226 %\begin{enumerate}%#<ol>#
1232 %\item%#<li># Define a type [plist : nat -> Set]. Each [plist n] should be a list of [A]s, where it is guaranteed that at least [n] distinct elements satisfy [P]. There is wide latitude in choosing how to encode this. You should try to avoid using subset types or any other mechanism based on annotating non-dependent types with propositions after the fact.#</li># 1227 %\item%#<li># Define a type [plist : nat -> Set]. Each [plist n] should be a list of [A]s, where it is guaranteed that at least [n] distinct elements satisfy [P]. There is wide latitude in choosing how to encode this. You should try to avoid using subset types or any other mechanism based on annotating non-dependent types with propositions after the fact.#</li>#
1233 %\item%#<li># Define a version of list concatenation that works on [plist]s. The type of this new function should express as much information as possible about the output [plist].#</li># 1228 %\item%#<li># Define a version of list concatenation that works on [plist]s. The type of this new function should express as much information as possible about the output [plist].#</li>#
1234 %\item%#<li># Define a function [plistOut] for translating [plist]s to normal [list]s.#</li># 1229 %\item%#<li># Define a function [plistOut] for translating [plist]s to normal [list]s.#</li>#
1235 %\item%#<li># Define a function [plistIn] for translating [list]s to [plist]s. The type of [plistIn] should make it clear that the best bound on [P]-matching elements is chosen. You may assume that you are given a dependently-typed function for deciding instances of [P].#</li># 1230 %\item%#<li># Define a function [plistIn] for translating [list]s to [plist]s. The type of [plistIn] should make it clear that the best bound on [P]-matching elements is chosen. You may assume that you are given a dependently typed function for deciding instances of [P].#</li>#
1236 %\item%#<li># Prove that, for any list [ls], [plistOut (plistIn ls) = ls]. This should be the only part of the exercise where you use tactic-based proving.#</li># 1231 %\item%#<li># Prove that, for any list [ls], [plistOut (plistIn ls) = ls]. This should be the only part of the exercise where you use tactic-based proving.#</li>#
1237 %\item%#<li># Define a function [grab : forall n (ls : plist (S n)), sig P]. That is, when given a [plist] guaranteed to contain at least one element satisfying [P], [grab] produces such an element. [sig] is the type family of sigma types, and [sig P] is extensionally equivalent to [{x : A | P x}], though the latter form uses an eta-expansion of [P] instead of [P] itself as the predicate.#</li># 1232 %\item%#<li># Define a function [grab : forall n (ls : plist (][S n)), sig P]. That is, when given a [plist] guaranteed to contain at least one element satisfying [P], [grab] produces such an element. The type family [sig] is the one we met earlier for sigma types (i.e., dependent pairs of programs and proofs), and [sig P] is extensionally equivalent to [{][x : A | P x}], though the latter form uses an eta-expansion of [P] instead of [P] itself as the predicate. The standard-library definition of [sig] is recalled in a short note after this exercise list.#</li>#
1238 #</ol>#%\end{enumerate}% #</li># 1233 #</ol>#%\end{enumerate}% #</li>#
1239 1234
1240 #</ol>#%\end{enumerate}% *) 1235 #</ol>#%\end{enumerate}% *)
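(** A reference note for the final part of the last exercise: the Coq standard library (module [Coq.Init.Specif]) already defines [sig], roughly as follows, so it is not something to define yourself.

[[
Inductive sig (A : Type) (P : A -> Prop) : Type :=
  exist : forall x : A, P x -> sig P.

]]

The notation [{][x : A | P x}] then stands for [sig (fun x : A => P x)], which is where the eta-expansion of [P] mentioned above comes from. *)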