(* Copyright (c) 2008, Adam Chlipala
 *
 * This work is licensed under a
 * Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
 * Unported License.
 * The license text is available at:
 *   http://creativecommons.org/licenses/by-nc-nd/3.0/
 *)

(* begin hide *)
Require Import List.

Require Import Tactics.

Set Implicit Arguments.
(* end hide *)


(** %\chapter{Infinite Data and Proofs}% *)

(** In lazy functional programming languages like Haskell, infinite data structures are everywhere. Infinite lists and more exotic datatypes provide convenient abstractions for communication between parts of a program. Achieving similar convenience without infinite lazy structures would, in many cases, require acrobatic inversions of control flow.

Laziness is easy to implement in Haskell, where all the definitions in a program may be thought of as mutually recursive. In such an unconstrained setting, it is easy to implement an infinite loop when you really meant to build an infinite list, where any finite prefix of the list should be forceable in finite time. Haskell programmers learn how to avoid such slip-ups. In Coq, such a laissez-faire policy is not good enough.

We spent some time in the last chapter discussing the Curry-Howard isomorphism, where proofs are identified with functional programs. In such a setting, infinite loops, intended or otherwise, are disastrous. If Coq allowed the full breadth of definitions that Haskell did, we could code up an infinite loop and use it to prove any proposition vacuously. That is, the addition of general recursion would make CIC %\textit{%#<i>#inconsistent#</i>#%}%. For an arbitrary proposition [P], we could write:

[[

Fixpoint bad (u : unit) : P := bad u.

]]

This would leave us with [bad tt] as a proof of [P].

There are also algorithmic considerations that make universal termination very desirable. We have seen how tactics like [reflexivity] compare terms up to equivalence under computational rules. Calls to recursive, pattern-matching functions are simplified automatically, with no need for explicit proof steps. It would be very hard to hold onto that kind of benefit if it became possible to write non-terminating programs; we would be running smack into the halting problem.

One solution is to use types to contain the possibility of non-termination. For instance, we can create a "non-termination monad," inside which we must write all of our general-recursive programs. In later chapters, we will spend some time on this idea and its applications. For now, we will just say that it is a heavyweight solution, and so we would like to avoid it whenever possible.
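
To give a rough flavor of what containing non-termination with types can look like, here is a hypothetical sketch, using the simpler idiom of threading an explicit amount of "fuel" rather than the monad we will eventually develop:

[[

Fixpoint find (p : nat -> bool) (fuel n : nat) : option nat :=
  match fuel with
    | O => None
    | S fuel' => if p n then Some n else find p fuel' (S n)
  end.

]]

Unbounded upward search for a number satisfying [p] is not structurally recursive, but this fueled version is, at the price of pushing the possibility of giving up into the [option] result type.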

Luckily, Coq has special support for a class of lazy data structures that happens to contain most examples found in Haskell. That mechanism, %\textit{%#<i>#co-inductive types#</i>#%}%, is the subject of this chapter. *)


(** * Computing with Infinite Data *)

(** Let us begin with the most basic type of infinite data, %\textit{%#<i>#streams#</i>#%}%, or lazy lists. *)

Section stream.
  Variable A : Set.

  CoInductive stream : Set :=
  | Cons : A -> stream -> stream.
End stream.

(** The definition is surprisingly simple. Starting from the definition of [list], we just need to change the keyword [Inductive] to [CoInductive]. We could have left a [Nil] constructor in our definition, but we will leave it out to force all of our streams to be infinite.

How do we write down a stream constant? Obviously simple application of constructors is not good enough, since we could only denote finite objects that way. Rather, whereas recursive definitions were necessary to %\textit{%#<i>#use#</i>#%}% values of recursive inductive types effectively, here we find that we need %\textit{%#<i>#co-recursive definitions#</i>#%}% to %\textit{%#<i>#build#</i>#%}% values of co-inductive types effectively.

We can define a stream consisting only of zeroes. *)

CoFixpoint zeroes : stream nat := Cons 0 zeroes.

(** We can also define a stream that alternates between [true] and [false]. *)

CoFixpoint trues : stream bool := Cons true falses
with falses : stream bool := Cons false trues.

(** Co-inductive values are fair game as arguments to recursive functions, and we can use that fact to write a function to take a finite approximation of a stream. *)

Fixpoint approx A (s : stream A) (n : nat) {struct n} : list A :=
  match n with
    | O => nil
    | S n' =>
      match s with
        | Cons h t => h :: approx t n'
      end
  end.

Eval simpl in approx zeroes 10.
(** [[

     = 0 :: 0 :: 0 :: 0 :: 0 :: 0 :: 0 :: 0 :: 0 :: 0 :: nil
     : list nat
]] *)
Eval simpl in approx trues 10.
(** [[

     = true
       :: false
          :: true
             :: false
                :: true :: false :: true :: false :: true :: false :: nil
     : list bool
]] *)

(** So far, it looks like co-inductive types might be a magic bullet, allowing us to import all of the Haskeller's usual tricks. However, there are important restrictions that are dual to the restrictions on the use of inductive types. Fixpoints %\textit{%#<i>#consume#</i>#%}% values of inductive types, with restrictions on which %\textit{%#<i>#arguments#</i>#%}% may be passed in recursive calls. Dually, co-fixpoints %\textit{%#<i>#produce#</i>#%}% values of co-inductive types, with restrictions on what may be done with the %\textit{%#<i>#results#</i>#%}% of co-recursive calls.

The restriction for co-inductive types shows up as the %\textit{%#<i>#guardedness condition#</i>#%}%, and it can be broken into two parts. First, consider this stream definition, which would be legal in Haskell.

[[
CoFixpoint looper : stream nat := looper.

]]

[[
Error:
Recursive definition of looper is ill-formed.
In environment
looper : stream nat

unguarded recursive call in "looper"
*)


(** The rule we have run afoul of here is that %\textit{%#<i>#every co-recursive call must be guarded by a constructor#</i>#%}%; that is, every co-recursive call must be a direct argument to a constructor of the co-inductive type we are generating. It is a good thing that this rule is enforced. If the definition of [looper] were accepted, our [approx] function would run forever when passed [looper], and we would have fallen into inconsistency.
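
By contrast, a co-recursive call that sits directly beneath a constructor is fine. For instance (a hypothetical example, not from the original text), the stream of natural numbers counting up from [n] passes the check, because the call to [nats_from] appears immediately as an argument to [Cons]:

[[

CoFixpoint nats_from (n : nat) : stream nat := Cons n (nats_from (S n)).

]]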

The second rule of guardedness is easiest to see by first introducing a more complicated, but legal, co-fixpoint. *)

Section map.
  Variables A B : Set.
  Variable f : A -> B.

  CoFixpoint map (s : stream A) : stream B :=
    match s with
      | Cons h t => Cons (f h) (map t)
    end.
End map.
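
(** As a quick sanity check of [map] (an example added here, not part of the original text), we can approximate the stream it builds from [zeroes]; this should reduce to a five-element list of [1]s. *)

Eval simpl in approx (map S zeroes) 5.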

(** The code for [map] is a literal copy of that for the list [map] function, with the [Nil] case removed and [Fixpoint] changed to [CoFixpoint]. Many other standard functions on lazy data structures can be implemented just as easily. Some, like [filter], cannot be implemented. Since the predicate passed to [filter] may reject every element of the stream, we cannot satisfy even the first guardedness condition.
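
To see the problem concretely, consider this hypothetical attempt (our addition, not from the original text), written as if alongside [map] in the section above, so that [A] is available:

[[

CoFixpoint filter (p : A -> bool) (s : stream A) : stream A :=
  match s with
    | Cons h t => if p h then Cons h (filter p t) else filter p t
  end.

]]

In the [else] branch, the co-recursive call is not a direct argument to any constructor, so the guardedness checker rejects the definition.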

The second condition is subtler. To illustrate it, we start off with another co-recursive function definition that %\textit{%#<i>#is#</i>#%}% legal. The function [interleave] takes two streams and produces a new stream that alternates between their elements. *)

Section interleave.
  Variable A : Set.

  CoFixpoint interleave (s1 s2 : stream A) : stream A :=
    match s1, s2 with
      | Cons h1 t1, Cons h2 t2 => Cons h1 (Cons h2 (interleave t1 t2))
    end.
End interleave.
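
(** As another small illustration that we add here (not part of the original text), interleaving [zeroes] with the stream of ones built by [map S zeroes] should approximate to an alternating list of [0]s and [1]s. *)

Eval simpl in approx (interleave zeroes (map S zeroes)) 6.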

(** Now say we want to write a weird stuttering version of [map] that repeats elements in a particular way, based on interleaving. *)

Section map'.
  Variables A B : Set.
  Variable f : A -> B.

(* begin thide *)
(** [[

CoFixpoint map' (s : stream A) : stream B :=
  match s with
    | Cons h t => interleave (Cons (f h) (map' t)) (Cons (f h) (map' t))
  end.
*)

(** We get another error message about an unguarded recursive call. This is because we are violating the second guardedness condition, which says that, not only must co-recursive calls be arguments to constructors, but additionally %\textit{%#<i>#nothing but [match]es and calls to constructors of the same co-inductive type#</i>#%}% may be wrapped around these immediate uses of co-recursive calls. The actual implemented rule for guardedness is a little more lenient than what we have just stated, but you can count on the illegality of any exception that would enhance the expressive power of co-recursion.

Why enforce a rule like this? Imagine that, instead of [interleave], we had called some other, less well-behaved function on streams. Perhaps this other function might be defined mutually with [map']. It might deconstruct its first argument, retrieving [map' t] from within [Cons (f h) (map' t)]. Next it might try a [match] on this retrieved value, which amounts to deconstructing [map' t]. To figure out how this [match] turns out, we need to know the top-level structure of [map' t], but this is exactly what we started out trying to determine! We run into a loop in the evaluation process, and we have reached a witness of inconsistency if we are evaluating [approx (map' s) 1] for any [s]. *)
(* end thide *)
End map'.


(** * Infinite Proofs *)

(** Let us say we want to give two different definitions of a stream of all ones, and then we want to prove that they are equivalent. *)

CoFixpoint ones : stream nat := Cons 1 ones.
Definition ones' := map S zeroes.

(** The obvious statement of the equality is this: *)

Theorem ones_eq : ones = ones'.

(** However, faced with the initial subgoal, it is not at all clear how this theorem can be proved. In fact, it is unprovable. The [eq] predicate that we use is fundamentally limited to equalities that can be demonstrated by finite, syntactic arguments. To prove this equivalence, we will need to introduce a new relation. *)
(* begin thide *)
Abort.

(** Co-inductive datatypes make sense by analogy from Haskell. What we need now is a %\textit{%#<i>#co-inductive proposition#</i>#%}%. That is, we want to define a proposition whose proofs may be infinite, subject to the guardedness condition. The idea of infinite proofs does not show up in usual mathematics, but it can be very useful (unsurprisingly) for reasoning about infinite data structures. Besides examples from Haskell, infinite data and proofs will also turn out to be useful for modelling inherently infinite mathematical objects, like program executions.

We are ready for our first co-inductive predicate. *)

Section stream_eq.
  Variable A : Set.

  CoInductive stream_eq : stream A -> stream A -> Prop :=
  | Stream_eq : forall h t1 t2,
    stream_eq t1 t2
    -> stream_eq (Cons h t1) (Cons h t2).
End stream_eq.

(** We say that two streams are equal if and only if they have the same heads and their tails are equal. We use the normal finite-syntactic equality for the heads, and we refer to our new equality recursively for the tails.

We can try restating the theorem with [stream_eq]. *)

Theorem ones_eq : stream_eq ones ones'.
(** Coq does not support tactical co-inductive proofs as well as it supports tactical inductive proofs. The usual starting point is the [cofix] tactic, which asks to structure this proof as a co-fixpoint. *)
  cofix.
(** [[

  ones_eq : stream_eq ones ones'
  ============================
   stream_eq ones ones'
]] *)

(** It looks like this proof might be easier than we expected! *)

  assumption.
(** [[

Proof completed. *)

(** Unfortunately, we are due for some disappointment in our victory lap. *)

(** [[
Qed.

Error:
Recursive definition of ones_eq is ill-formed.

In environment
ones_eq : stream_eq ones ones'

unguarded recursive call in "ones_eq" *)

(** Via the Curry-Howard correspondence, the same guardedness condition applies to our co-inductive proofs as to our co-inductive data structures. We should be grateful that this proof is rejected, because, if it were not, the same proof structure could be used to prove any co-inductive theorem vacuously, by direct appeal to itself!

Thinking about how Coq would generate a proof term from the proof script above, we see that the problem is that we are violating the first part of the guardedness condition. During our proofs, Coq can help us check whether we have yet gone wrong in this way. We can run the command [Guarded] in any context to see if it is possible to finish the proof in a way that will yield a properly guarded proof term.

[[
Guarded.

]]

Running [Guarded] here gives us the same error message that we got when we tried to run [Qed]. In larger proofs, [Guarded] can be helpful in detecting problems %\textit{%#<i>#before#</i>#%}% we think we are ready to run [Qed].

We need to start the co-induction by applying one of [stream_eq]'s constructors. To do that, we need to know that both arguments to the predicate are [Cons]es. Informally, this is trivial, but [simpl] is not able to help us. *)

  Undo.
  simpl.
(** [[

  ones_eq : stream_eq ones ones'
  ============================
   stream_eq ones ones'
]] *)

(** It turns out that we are best served by proving an auxiliary lemma. *)
Abort.

(** First, we need to define a function that seems pointless on first glance. *)

Definition frob A (s : stream A) : stream A :=
  match s with
    | Cons h t => Cons h t
  end.

(** Next, we need to prove a theorem that seems equally pointless. *)

Theorem frob_eq : forall A (s : stream A), s = frob s.
  destruct s; reflexivity.
Qed.

(** But, miraculously, this theorem turns out to be just what we needed. *)

Theorem ones_eq : stream_eq ones ones'.
  cofix.

(** We can use the theorem to rewrite the two streams. *)
  rewrite (frob_eq ones).
  rewrite (frob_eq ones').
(** [[

  ones_eq : stream_eq ones ones'
  ============================
   stream_eq (frob ones) (frob ones')
]] *)

(** Now [simpl] is able to reduce the streams. *)

  simpl.
(** [[

  ones_eq : stream_eq ones ones'
  ============================
   stream_eq (Cons 1 ones)
     (Cons 1
        ((cofix map (s : stream nat) : stream nat :=
            match s with
            | Cons h t => Cons (S h) (map t)
            end) zeroes))
]] *)

(** Since we have exposed the [Cons] structure of each stream, we can apply the constructor of [stream_eq]. *)

  constructor.
(** [[

  ones_eq : stream_eq ones ones'
  ============================
   stream_eq ones
     ((cofix map (s : stream nat) : stream nat :=
         match s with
         | Cons h t => Cons (S h) (map t)
         end) zeroes)
]] *)

(** Now, modulo unfolding of the definition of [map], we have matched our assumption. *)
  assumption.
Qed.

(** Why did this silly-looking trick help? The answer has to do with the constraints placed on Coq's evaluation rules by the need for termination. The [cofix]-related restriction that foiled our first attempt at using [simpl] is dual to a restriction for [fix]. In particular, an application of an anonymous [fix] only reduces when the top-level structure of the recursive argument is known. Otherwise, we would be unfolding the recursive definition ad infinitum.

Fixpoints only reduce when enough is known about the %\textit{%#<i>#definitions#</i>#%}% of their arguments. Dually, co-fixpoints only reduce when enough is known about %\textit{%#<i>#how their results will be used#</i>#%}%. In particular, a [cofix] is only expanded when it is the discriminee of a [match]. Rewriting with our superficially silly lemma wrapped new [match]es around the two [cofix]es, triggering reduction.

If [cofix]es reduced haphazardly, it would be easy to run into infinite loops in evaluation, since we are, after all, building infinite objects.
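
We can observe the same contrast outside of a proof (a small illustration we add here, not part of the original text). On its own, [simpl] makes no progress on [ones], but wrapping it in [frob] supplies the [match] that licenses one step of unfolding:

[[

Eval simpl in frob ones.

]]

This should print [Cons 1 ones], exposing exactly one constructor.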

One common source of difficulty with co-inductive proofs is bad interaction with standard Coq automation machinery. If we try to prove [ones_eq'] with automation, like we have in previous inductive proofs, we get an invalid proof. *)

Theorem ones_eq' : stream_eq ones ones'.
  cofix; crush.
(** [[

Guarded. *)
Abort.
(* end thide *)

(** The standard [auto] machinery sees that our goal matches an assumption and so applies that assumption, even though this violates guardedness. One usually starts a proof like this by [destruct]ing some parameter and running a custom tactic to figure out the first proof rule to apply for each case. Alternatively, there are tricks that can be played with "hiding" the co-inductive hypothesis. We will see examples of effective co-inductive proving in later chapters. *)


(** * Simple Modeling of Non-Terminating Programs *)

(** We close the chapter with a quick motivating example for more complex uses of co-inductive types. We will define a co-inductive semantics for a simple assembly language and use that semantics to prove that an optimization function is sound. We start by defining types of registers, program labels, and instructions. *)

Inductive reg : Set := R1 | R2.
Definition label := nat.

Inductive instrs : Set :=
| Const : reg -> nat -> instrs -> instrs
| Add : reg -> reg -> reg -> instrs -> instrs
| Halt : reg -> instrs
| Jeq : reg -> reg -> label -> instrs -> instrs.

(** [Const] stores a constant in a register; [Add] stores in the first register the sum of the values in the second two; [Halt] ends the program, returning the value of its register argument; and [Jeq] jumps to a label if the values in two registers are equal. Each instruction but [Halt] takes an [instrs], which can be read as "list of instructions," as the normal continuation of control flow.

We can define a program as a list of lists of instructions, where labels will be interpreted as indexing into such a list. *)

Definition program := list instrs.

(** We define a polymorphic map type for register keys, with its associated operations. *)
Section regmap.
  Variable A : Set.

  Record regmap : Set := Regmap {
    VR1 : A;
    VR2 : A
  }.

  Definition empty (v : A) : regmap := Regmap v v.
  Definition get (rm : regmap) (r : reg) : A :=
    match r with
      | R1 => VR1 rm
      | R2 => VR2 rm
    end.
  Definition set (rm : regmap) (r : reg) (v : A) : regmap :=
    match r with
      | R1 => Regmap v (VR2 rm)
      | R2 => Regmap (VR1 rm) v
    end.
End regmap.
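
(** As a tiny usage example that we add here (not part of the original text), setting a register and then reading it back should produce the value we stored. *)

Eval simpl in get (set (empty 0) R2 3) R2.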

(** Now comes the interesting part. We define a co-inductive semantics for programs. The definition itself is not surprising. We could change [CoInductive] to [Inductive] and arrive at a valid semantics that only covers terminating program executions. Using [CoInductive] admits infinite derivations for infinite executions. An application [run rm is v] means that, when we run the instructions [is] starting with register map [rm], either execution terminates with result [v] or execution runs safely forever. (That is, the choice of [v] is immaterial for non-terminating executions.) *)

Section run.
  Variable prog : program.

  CoInductive run : regmap nat -> instrs -> nat -> Prop :=
  | RConst : forall rm r n is v,
    run (set rm r n) is v
    -> run rm (Const r n is) v
  | RAdd : forall rm r r1 r2 is v,
    run (set rm r (get rm r1 + get rm r2)) is v
    -> run rm (Add r r1 r2 is) v
  | RHalt : forall rm r,
    run rm (Halt r) (get rm r)
  | RJeq_eq : forall rm r1 r2 l is is' v,
    get rm r1 = get rm r2
    -> nth_error prog l = Some is'
    -> run rm is' v
    -> run rm (Jeq r1 r2 l is) v
  | RJeq_neq : forall rm r1 r2 l is v,
    get rm r1 <> get rm r2
    -> run rm is v
    -> run rm (Jeq r1 r2 l is) v.
End run.
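
(** To exercise the semantics (this example program and theorem are our own additions, not part of the original text), here is a straight-line program and a proof that it halts with value [7], built by repeatedly applying the constructors of [run]. *)

Definition sample : instrs :=
  Const R1 3 (Const R2 4 (Add R1 R1 R2 (Halt R1))).

Theorem sample_run : run nil (empty 0) sample 7.
  repeat constructor.
Qed.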

(** We can write a function which tracks known register values to attempt to constant fold a sequence of instructions. We track register values with a [regmap (option nat)], where a mapping to [None] indicates no information, and a mapping to [Some n] indicates that the corresponding register is known to have value [n]. *)

Fixpoint constFold (rm : regmap (option nat)) (is : instrs) {struct is} : instrs :=
  match is with
    | Const r n is => Const r n (constFold (set rm r (Some n)) is)
    | Add r r1 r2 is =>
      match get rm r1, get rm r2 with
        | Some n1, Some n2 =>
          Const r (n1 + n2) (constFold (set rm r (Some (n1 + n2))) is)
        | _, _ => Add r r1 r2 (constFold (set rm r None) is)
      end
    | Halt _ => is
    | Jeq r1 r2 l is => Jeq r1 r2 l (constFold rm is)
  end.
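
(** Running the optimizer on the [sample] program from above (again our own illustration, not part of the original text) should fold the addition into a direct load of the constant [7]. *)

Eval simpl in constFold (empty None) sample.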

(** We characterize when the two types of register maps we are using agree with each other. *)

Definition regmapCompat (rm : regmap nat) (rm' : regmap (option nat)) :=
  forall r, match get rm' r with
              | None => True
              | Some v => get rm r = v
            end.
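
(** For instance (a trivial example of our own, not in the original text), any map of concrete register values is compatible with the map that carries no information. *)

Lemma regmapCompat_empty : forall rm : regmap nat,
  regmapCompat rm (empty None).
  unfold regmapCompat; intros rm r; destruct r; exact I.
Qed.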

(** We prove two lemmas about how register map modifications affect compatibility. A tactic [compat] abstracts the common structure of the two proofs. *)

(** remove printing * *)
Ltac compat := unfold regmapCompat in *; crush;
  match goal with
    | [ H : _ |- match get _ ?R with Some _ => _ | None => _ end ] => generalize (H R); destruct R; crush
  end.

Lemma regmapCompat_set_None : forall rm rm' r n,
  regmapCompat rm rm'
  -> regmapCompat (set rm r n) (set rm' r None).
  destruct r; compat.
Qed.

Lemma regmapCompat_set_Some : forall rm rm' r n,
  regmapCompat rm rm'
  -> regmapCompat (set rm r n) (set rm' r (Some n)).
  destruct r; compat.
Qed.

(** Finally, we can prove the main theorem. *)

Section constFold_ok.
  Variable prog : program.

  Theorem constFold_ok : forall rm is v,
    run prog rm is v
    -> forall rm', regmapCompat rm rm'
      -> run prog rm (constFold rm' is) v.
    Hint Resolve regmapCompat_set_None regmapCompat_set_Some.
    Hint Constructors run.

    cofix;
      destruct 1; crush; eauto;
        repeat match goal with
                 | [ H : regmapCompat _ _
                     |- run _ _ (match get ?RM ?R with
                                   | Some _ => _
                                   | None => _
                                 end) _ ] =>
                   generalize (H R); destruct (get RM R); crush
               end.
  Qed.
End constFold_ok.

(** If we print the proof term that was generated, we can verify that the proof is structured as a [cofix], with each co-recursive call properly guarded. *)

Print constFold_ok.


(** * Exercises *)

(** %\begin{enumerate}%#<ol>#

%\item%#<li># %\begin{enumerate}%#<ol>#
  %\item%#<li># Define a co-inductive type of infinite trees carrying data of a fixed parameter type. Each node should contain a data value and two child trees.#</li>#
  %\item%#<li># Define a function [everywhere] for building a tree with the same data value at every node.#</li>#
  %\item%#<li># Define a function [map] for building an output tree out of two input trees by traversing them in parallel and applying a two-argument function to their corresponding data values.#</li>#
  %\item%#<li># Define a tree [falses] where every node has the value [false].#</li>#
  %\item%#<li># Define a tree [true_false] where the root node has value [true], its children have value [false], all nodes at the next level have the value [true], and so on, alternating boolean values from level to level.#</li>#
  %\item%#<li># Prove that [true_false] is equal to the result of mapping the boolean "or" function [orb] over [true_false] and [falses]. You can make [orb] available with [Require Import Bool.]. You may find the lemma [orb_false_r] from the same module helpful. Your proof here should not be about the standard equality [=], but rather about some new equality relation that you define.#</li>#
#</ol>#%\end{enumerate}% #</li>#

#</ol>#%\end{enumerate}% *)