unif_rule is unable to bypass the priority of injectivity unification vs rewriting reduction? BUG? #1159
Comments
Hi. The short answer to your first question is: yes, unification rules are applied after decomposition and weak-head normalization. Is it a bug? Not really, given the doc: "Given a unification problem t ≡ u, if the engine cannot find a solution, it will try to match the pattern t ≡ u against the defined rules (modulo commutativity of ≡) and rewrite the problem to the right-hand side of the matched rule."

You can follow what the unification algorithm is doing by writing: debug +u; You can also print implicit arguments by writing: flag "print_implicits" on;

a) When doing refine Id_func before simplify, you get the following unification problem:

@Context_cat X A ≡ @Context_cat (@Context_cat X A) (Terminal_catd (@Context_cat X A))

You declare Context_cat : Π [X : cat], catd X → cat as injective. This means that Context_cat X x ≡ Context_cat Y y should imply X ≡ Y and x ≡ y, where ≡ is the definitional equality. Are you sure of that? (Lambdapi does not check injectivity.) Partial injectivity is a feature request (#270). As Context_cat is declared as injective, the above unification problem is decomposed into:

1) X ≡ @Context_cat X A
2) A ≡ Terminal_catd (@Context_cat X A)
but 2) is not solvable (A is a variable and Terminal_catd is a constant), so the algorithm stops.

b) When doing simplify before refine, you get as goal:

func (@Context_cat X A) (@Context_cat X A)

and refine works as expected.

c) The last assert fails for the same reason as in a).

d) Adding the unification rule that you propose doesn't change anything: you never get a potentially solvable unification problem matching the LHS of your unification rule because, indeed, decomposition is done before the application of unification rules.
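For readers who want to reproduce this trace, here is a minimal, self-contained sketch of the commands mentioned above. The declarations of cat, catd, Context_cat and Terminal_catd are only guessed from the types quoted in this thread, not copied from the actual development:

```
// Hypothetical signatures, reconstructed from the types quoted in this thread.
constant symbol cat : TYPE;
constant symbol catd : cat → TYPE;
injective symbol Context_cat : Π [X : cat], catd X → cat;
constant symbol Terminal_catd : Π X : cat, catd X;

// Trace the unification engine and print implicit arguments, so that
// problems are shown in their fully applied @-prefixed form.
debug +u;
flag "print_implicits" on;
```

With these declarations, any problem of the form Context_cat _ _ ≡ Context_cat _ _ is decomposed argument-wise (because Context_cat is injective) before any unification rule is consulted, which is the behavior described in points a) and d).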
Here is a preliminary heuristic argument. Let's try to find a counterexample to "Context_cat X x ≡ Context_cat Y y should imply X ≡ Y and x ≡ y". There would be no obvious counterexample if the symbol were declared "sequentially injective", i.e. "Context_cat X x ≡ Context_cat Y y should imply first X ≡ Y and only thereafter x ≡ y". Anyway, below is further data for a (possible) bug report, which shows that none of these questions are clear.

// BUG REPORT:
Replacing (1) by (2) instead provokes a problem 1000 lines further down the realistic-application file: the lambdapi compiler gets stuck, running very slowly, at a
Replacing (1) by (3) instead provokes an error "unable to prove type preservation" later in 2 rules, which is fixed by giving the first argument of the symbol Context_cat (the one where injectivity arises) more explicitly. But then the same slow-stuck problem as above happens at the same place.
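For reference, this is what a unification rule looks like in Lambdapi. The sketch below only illustrates the unif_rule syntax on made-up symbols (N, z, add); it is not the rule proposed in this issue, and, per point d) above, a rule whose left-hand side is a Context_cat ≡ Context_cat problem would never be tried anyway, since such problems are decomposed first.

```
// Made-up symbols, only to illustrate the unif_rule syntax.
constant symbol N : TYPE;
constant symbol z : N;
symbol add : N → N → N;

// If the engine gets stuck on a problem matching  add $x $y ≡ z,
// rewrite it to the two subproblems  $x ≡ z  and  $y ≡ z.
unif_rule add $x $y ≡ z ↪ [ $x ≡ z; $y ≡ z ];
```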
MINIMAL EXAMPLE (an even more minimal one is possible, without the type dependency)
Why it shows up and why it matters is explained in the pull request Deducteam/lambdapi-stdlib#25.