## Mercurial repository cpdt

### comparison src/Equality.v @ 479:40a9a36844d6


Batch of changes based on proofreader feedback

| author | Adam Chlipala <adam@chlipala.net> |
|---|---|
| date | Wed, 28 Nov 2012 19:33:21 -0500 |
| parents | 1fd4109f7b31 |
| children | f38a3af9dd17 |


Comparing 478:f02b698aadb1 (old) with 479:40a9a36844d6 (new); intra-line changes are marked `{-old-}{+new+}`:

104 Eval compute in fun x => id' x.
105 (** %\vspace{-.15in}%[[
106 = fun x : nat => (fix id' (n : nat) : nat := n) x
107 ]]
108

109 By running [compute], we ask Coq to run reduction steps until no more apply, so why do we see an application of a known function, where clearly no beta reduction has been performed? The answer has to do with ensuring termination of all Gallina programs. One candidate rule would say that we apply recursive definitions wherever possible. However, this would clearly lead to nonterminating reduction sequences, since the function may appear fully applied within its own definition, and we would {-naively-}{+%\%naive%{}%ly+} "simplify" such applications immediately. Instead, Coq only applies the beta rule for a recursive function when _the top-level structure of the recursive argument is known_. For [id'] above, we have only one argument [n], so clearly it is the recursive argument, and the top-level structure of [n] is known when the function is applied to [O] or to some [S e] term. The variable [x] is neither, so reduction is blocked.
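The blocking behavior is easy to observe directly. A small sketch, assuming the [id'] definition from earlier in the chapter: when the argument's top-level constructor is known, the [fix] unfolds and [compute] finishes the job.

```coq
(* Argument headed by the known constructor [S]: reduction proceeds. *)
Eval compute in fun x => id' (S x).
(* = fun x : nat => S x *)

(* Fully known argument: reduces all the way to a numeral. *)
Eval compute in id' 3.
(* = 3 : nat *)
```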

110
111 What are recursive arguments in general? Every recursive function is compiled by Coq to a %\index{Gallina terms!fix}%[fix] expression, for anonymous definition of recursive functions. Further, every [fix] with multiple arguments has one designated as the recursive argument via a [struct] annotation. The recursive argument is the one that must decrease across recursive calls, to appease Coq's termination checker. Coq will generally infer which argument is recursive, though we may also specify it manually, if we want to tweak reduction behavior. For instance, consider this definition of a function to add two lists of [nat]s elementwise: *)
112
113 Fixpoint addLists (ls1 ls2 : list nat) : list nat :=
114 match ls1, ls2 with
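The comparison view truncates here, eliding the body of [addLists]. For context, a complete definition in the same style might look like the following sketch; the explicit [struct ls1] annotation (added for illustration, not necessarily present in the book's text) pins down which argument the termination checker treats as recursive.

```coq
Fixpoint addLists (ls1 ls2 : list nat) {struct ls1} : list nat :=
  match ls1, ls2 with
    (* Recursive call on [ls1'], a structural subterm of [ls1]. *)
    | n1 :: ls1', n2 :: ls2' => n1 + n2 :: addLists ls1' ls2'
    | _, _ => nil
  end.
```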

246
247 (** For the inductive versions of the [ilist] definitions, we proved a lemma about the interaction of [get] and [imap]. It was a strategic choice not to attempt such a proof for the definitions that we just gave, because that sets us on a collision course with the problems that are the subject of this chapter. *)
248
249 Variable elm : A.
250

251 Theorem {-get_imap-}{+fhget_fhmap+} : forall ls (mem : fmember elm ls) (hls : fhlist B ls),

252 fhget (fhmap hls) mem = f (fhget hls mem).
253 (* begin hide *)
254 induction ls; crush; case a0; reflexivity.
255 (* end hide *)
256 (** %\vspace{-.2in}%[[

501 (* EX: Prove that fhapp is associative. *)
502 (* begin thide *)
503
504 (** We might like to prove that [fhapp] is associative.
505 [[

506 Theorem {-fhapp_ass-}{+fhapp_assoc+} : forall ls1 ls2 ls3

507 (hls1 : fhlist B ls1) (hls2 : fhlist B ls2) (hls3 : fhlist B ls3),
508 fhapp hls1 (fhapp hls2 hls3) = fhapp (fhapp hls1 hls2) hls3.
509 ]]
510
511 <<

515 while it is expected to have type "fhlist B (ls1 ++ ls2 ++ ls3)"
516 >>
517
518 This first cut at the theorem statement does not even type-check. We know that the two [fhlist] types appearing in the error message are always equal, by associativity of normal list append, but this fact is not apparent to the type checker. This stems from the fact that Coq's equality is%\index{intensional type theory}% _intensional_, in the sense that type equality theorems can never be applied after the fact to get a term to type-check. Instead, we need to make use of equality explicitly in the theorem statement. *)
519

520 Theorem {-fhapp_ass-}{+fhapp_assoc+} : forall ls1 ls2 ls3

521 (pf : (ls1 ++ ls2) ++ ls3 = ls1 ++ (ls2 ++ ls3))
522 (hls1 : fhlist B ls1) (hls2 : fhlist B ls2) (hls3 : fhlist B ls3),
523 fhapp hls1 (fhapp hls2 hls3)
524 = match pf in (_ = ls) return fhlist _ ls with
525 | eq_refl => fhapp (fhapp hls1 hls2) hls3

649
650 To an experienced dependent types hacker, the appearance of this goal term calls for a celebration. The formula has a critical property that indicates that our problems are over. To get our proofs into the right form to apply [UIP_refl], we need to use associativity of list append to rewrite their types. We could not do that before because other parts of the goal require the proofs to retain their original types. In particular, the call to [fhapp] that we generalized must have type [(ls1 ++ ls2) ++ ls3], for some values of the list variables. If we rewrite the type of the proof used to type-cast this value to something like [ls1 ++ ls2 ++ ls3 = ls1 ++ ls2 ++ ls3], then the lefthand side of the equality would no longer match the type of the term we are trying to cast.
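For reference, [UIP_refl] is the standard-library lemma (exported by module [Eqdep], where it rests on the [eq_rect_eq] axiom) saying that every proof of a reflexive equality is [eq_refl]:

```coq
Require Import Eqdep.
Check UIP_refl.
(* UIP_refl
     : forall (U : Type) (x : U) (p : x = x), p = eq_refl x *)
```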

651
652 However, now that we have generalized over the [fhapp] call, the type of the term being type-cast appears explicitly in the goal and _may be rewritten as well_. In particular, the final masterstroke is rewriting everywhere in our goal using associativity of list append. *)
653

654 rewrite {-app_ass-}{+app_assoc+}.

655 (** [[
656 ============================
657 forall (pf0 : a :: ls1 ++ ls2 ++ ls3 = a :: ls1 ++ ls2 ++ ls3)
658 (pf'0 : ls1 ++ ls2 ++ ls3 = ls1 ++ ls2 ++ ls3)
659 (f : fhlist B (ls1 ++ ls2 ++ ls3)),

728
729 Section fhapp'.
730 Variable A : Type.
731 Variable B : A -> Type.
732

733 (** This time, the {-naive-}{+%\%naive+} theorem statement type-checks. *)

734
735 (* EX: Prove [fhapp] associativity using [JMeq]. *)
736
737 (* begin thide *)

738 Theorem {-fhapp_ass'-}{+fhapp_assoc'+} : forall ls1 ls2 ls3 (hls1 : fhlist B ls1) (hls2 : fhlist B ls2)

739 (hls3 : fhlist B ls3),
740 fhapp hls1 (fhapp hls2 hls3) == fhapp (fhapp hls1 hls2) hls3.
741 induction ls1; crush.
742
743 (** Even better, [crush] discharges the first subgoal automatically. The second subgoal is:

771 (f0 : fhlist B ((ls1 ++ ls2) ++ ls3)), f == f0 -> (a0, f) == (a0, f0)
772 ]]
773
774 Now we can rewrite with append associativity, as before. *)
775

776 rewrite {-app_ass-}{+app_assoc+}.

777 (** %\vspace{-.15in}%[[
778 ============================
779 forall f f0 : fhlist B (ls1 ++ ls2 ++ ls3), f == f0 -> (a0, f) == (a0, f0)
780 ]]
781

789
790 (** This example illustrates a general pattern: heterogeneous equality often simplifies theorem statements, but we still need to do some work to line up some dependent pattern matches that tactics will generate for us.
791
792 The proof we have found relies on the [JMeq_eq] axiom, which we can verify with a command%\index{Vernacular commands!Print Assumptions}% that we will discuss more in two chapters. *)
793

794 Print Assumptions {-fhapp_ass'-}{+fhapp_assoc'+}.

795 (** %\vspace{-.15in}%[[
796 Axioms:
797 JMeq_eq : forall (A : Type) (x y : A), x == y -> x = y
798 ]]
799

810
811 Section fhapp''.
812 Variable A : Type.
813 Variable B : A -> Type.
814

815 Theorem {-fhapp_ass''-}{+fhapp_assoc''+} : forall ls1 ls2 ls3 (hls1 : fhlist B ls1) (hls2 : fhlist B ls2)

816 (hls3 : fhlist B ls3),
817 fhapp hls1 (fhapp hls2 hls3) == fhapp (fhapp hls1 hls2) hls3.
818 induction ls1; crush.
819 Qed.
820 End fhapp''.
821

822 Print Assumptions {-fhapp_ass''-}{+fhapp_assoc''+}.

823 (** <<
824 Closed under the global context
825 >>
826

827 One might wonder exactly which elements of a proof involving [JMeq] imply that [JMeq_eq] must be used. For instance, above we noticed that [rewrite] had brought [JMeq_eq] into the proof of [{-fhapp_ass'-}{+fhapp_assoc'+}], yet here we have also used [rewrite] with [JMeq] hypotheses while avoiding axioms! One illuminating exercise is comparing the types of the lemmas that [rewrite] uses under the hood to implement the rewrites. Here is the normal lemma for [eq] rewriting:%\index{Gallina terms!eq\_ind\_r}% *)

828
829 Check eq_ind_r.
830 (** %\vspace{-.15in}%[[
831 eq_ind_r
832 : forall (A : Type) (x : A) (P : A -> Prop),

840 internal_JMeq_rew_r
841 : forall (A : Type) (x : A) (B : Type) (b : B)
842 (P : forall B0 : Type, B0 -> Type), P B b -> x == b -> P A x
843 ]]
844

845 The key difference is that, where the [eq] lemma is parameterized on a predicate of type [A -> Prop], the [JMeq] lemma is parameterized on a predicate of type more like [forall A : Type, A -> Prop]. To apply [eq_ind_r] with a proof of [x = y], it is only necessary to rearrange the goal into an application of a [fun] abstraction to [y]. In contrast, to apply [internal_JMeq_rew_r], it is necessary to rearrange the goal to an application of a [fun] abstraction to both [y] and _its type_. In other words, the predicate must be _polymorphic_ in [y]'s type; any type must make sense, from a type-checking standpoint. There may be cases where the former rearrangement is easy to do in a type-correct way, but the second rearrangement done {-naively-}{+%\%naive%{}%ly+} leads to a type error.

846
847 When [rewrite] cannot figure out how to apply [internal_JMeq_rew_r] for [x == y] where [x] and [y] have the same type, the tactic can instead use an alternate theorem, which is easy to prove as a composition of [eq_ind_r] and [JMeq_eq]. *)
848
849 Check JMeq_ind_r.
850 (** %\vspace{-.15in}%[[
851 JMeq_ind_r
852 : forall (A : Type) (x : A) (P : A -> Prop),
853 P x -> forall y : A, y == x -> P y
854 ]]
855
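That composition is short enough to carry out as a sketch; [JMeq_ind_r'] is a hypothetical name, and the proof assumes the chapter's [==] notation for [JMeq] is in scope.

```coq
Lemma JMeq_ind_r' : forall (A : Type) (x : A) (P : A -> Prop),
  P x -> forall y : A, y == x -> P y.
  (* [JMeq_eq] turns the heterogeneous hypothesis into a homogeneous one,
     after which ordinary [eq] rewriting applies. *)
  intros A x P H y Heq; rewrite (JMeq_eq Heq); assumption.
Qed.
```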

856 Ironically, where in the proof of [{-fhapp_ass'-}{+fhapp_assoc'+}] we used [rewrite {-app_ass-}{+app_assoc+}] to make it clear that a use of [JMeq] was actually homogeneously typed, we created a situation where [rewrite] applied the axiom-based [JMeq_ind_r] instead of the axiom-free [internal_JMeq_rew_r]!

857
858 For another simple example, consider this theorem that applies a heterogeneous equality to prove a congruence fact. *)
859
860 Theorem out_of_luck : forall n m : nat,
861 n == m

894
895 In other words, the successor function [S] is insufficiently polymorphic. If we try to generalize over the type of [n], we find that [S] is no longer legal to apply to [n]. *)
896
897 Abort.
898

899 (** Why did we not run into this problem in our proof of [{-fhapp_ass''-}{+fhapp_assoc''+}]? The reason is that the pair constructor is polymorphic in the types of the pair components, while functions like [S] are not polymorphic at all. Use of such non-polymorphic functions with [JMeq] tends to push toward use of axioms. The example with [nat] here is a bit unrealistic; more likely cases would involve functions that have _some_ polymorphism, but not enough to allow abstractions of the sort we attempted above with [pattern]. For instance, we might have an equality between two lists, where the goal only type-checks when the terms involved really are lists, though everything is polymorphic in the types of list data elements. The {{http://www.mpi-sws.org/~gil/Heq/}Heq} library builds up a slightly different foundation to help avoid such problems. *)

900
901
902 (** * Equivalence of Equality Axioms *)
903
904 (* EX: Show that the approaches based on K and JMeq are equivalent logically. *)