jonsterling/JonPRL

Integrate delta equalities into computation system

Opened this issue · 21 comments

Unfolding abstractions/operators should be counted as a computation step.
Shouldn't this "reduce 2" unfold pi2?
Also, why do I need the ";"? I get a parsing error without the ";".

Operator pi2 : (0).
[pi2(x)] =def= [spread(x;a.b.b)].

Theorem foo : [ ceq(<>;lam(x.pi2(x)) pair(<>;<>)) ] {
reduce 2;
}.

I thought in nuprl you had to unfold abstractions to reveal redexes?


The parser bug is a general thing; any tactic accepting a plain number seems to have it… hm.

Ah, I've found the issue with the parser… another whitespace-handling issue :/

In Nuprl, we have an unfold tactic to unfold abstractions. We also have a reduce tactic that, I believe, doesn't unfold abstractions unless you tell it to: you can write additions to Reduce to that effect. And we also have a way to reduce for n computation steps at a specific position in a term (we have a rule that can prove that if t computes in 1 computation step to u, then t ~ u). Because unfolding an abstraction counts as 1 computation step, you can also unfold abstractions like that (that's probably the way the unfold tactic works). Your reduce tactic seems to be more like that. If you intend your tactic to perform n computation steps, you might want to unfold abstractions.


I still think it would be great to add definitions to the computation system.
Right now GeneralRules.Unfolds does not come with any justification, whereas
if unfolding abstractions were part of the computation system, it would
be justified by the CEqRules.Step rule.
What do you guys think?

@vrahli I've been thinking about this in the last few months and I've come around to thinking that we really shouldn't differentiate delta equalities from computational equalities in the way we do. So I think I agree with you!

I'm sorry that development has slowed for the past month or two; I've been working on grad school applications, as well as three papers that need to be written in order to justify the next big push of JonPRL development, which will involve a better syntactic framework & logical framework/refiner.

Here's a proposal for the new operational semantics design.

datatype rule_type = 
    BETA    (* beta rules / computation *)
  | DELTA   (* delta rules / definitional extension *)

(* the Kripke frame for our intensional small step judgment *)
signature FRAME = 
sig
  type world
  val lookupRule : world -> term -> (term * rule_type) option
end

functor SmallStep (W : FRAME) =
struct

  exception Stuck

  fun step (w : W.world) (ruleTypes : rule_type set) (M : term) = 
    (* built-in computation rules go here *)
    handle Stuck => 
      case W.lookupRule w M of
          NONE => raise Stuck
        | SOME (N, ty) => if Set.elem (ty, ruleTypes) then N else raise Stuck
end

Then, users can compute terms, selecting beforehand a set of rule types that they want to apply (as in Coq). This design will also let us compute terms properly at the top-level with Eval, which currently isn't able to unfold definitions.
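As a rough sketch of how the functor might be instantiated and used (everything below — ToyFrame, the association-list world, Set.fromList — is a hypothetical illustration under the proposed signatures, not existing JonPRL code):

```sml
(* Hypothetical instantiation of FRAME: a world is an association list
   from redex patterns to their contracta and rule types. *)
structure ToyFrame : FRAME =
struct
  type world = (term * (term * rule_type)) list
  fun lookupRule w M =
    Option.map #2 (List.find (fn (pat, _) => pat = M) w)
end

structure Step = SmallStep (ToyFrame)

(* Stepping with only BETA leaves user-defined operators folded;
   adding DELTA treats unfolding as one more kind of computation step. *)
fun stepBetaOnly w M  = Step.step w (Set.fromList [BETA]) M
fun stepBetaDelta w M = Step.step w (Set.fromList [BETA, DELTA]) M
```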

One kind of interesting thing is that this approach could pave the way for allowing users to introduce non-uniform computation rules (i.e. β-rules) in their developments; of course, it won't be possible for us to check that these are valid rules, but it could be useful in "trusted modules".

@vrahli @jozefg How does this sound? Can you think of any better options?

That sounds good. Did you already start implementing this?


@vrahli I haven't started implementing it; I'll be able to resume developing JonPRL after I've finished applying to grad school (mid-December).

If anyone wants to take this on in the meantime, though, they're welcome to!

I think I can implement this one. I'm on break for a few days and have a little spare time.

@jozefg Great!!

One possible complication which I forgot about is the fact that you will have to take into account universe levels when doing this. In fact, I have not thought about this too carefully, but it may either scuttle the project or at least make it more difficult...

Really, the way that we are doing level polymorphism is easy to implement, but not ideal, since it means that "simple" rewriting isn't quite possible. If you look at the code for Unfold, wherever that is, you will see that we have to properly account for levels and hoist things about before rewriting.

So I've been thinking about how to implement this. The basic improvement would be to unfold operators defined with =def=; that's pretty straightforward. The bigger issue is the universe question. With unfold we require the user to provide a universe explicitly if they want something other than the current one. This seems fine, because tactics need to rely on user help as well. It seems very wrong, however, that our operational semantics requires the user to explain how to step something :/ Furthermore, if step says that e reduces to v, it really ought to be the case that v is the unique canonical term with this property… I'm not quite sure what to do here, then.

One possible idea would be to introduce a new operator @i and say that a theorem-as-operator is neutral on its own, but becomes reducible when we apply @i to it. If we introduce this @ operator we'll have to specify what Unfold auto should do with it, but that seems like a simpler problem.
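A minimal sketch of what that could look like in the step function (AT, THM, extractOf, and instLevel are all hypothetical names for illustration, not part of the proposal above):

```sml
(* Hypothetical: a theorem name THM name is neutral on its own, but
   AT (i, THM name) -- i.e. @i applied to it -- steps to the theorem's
   extract instantiated at level i. *)
fun stepThm (w : W.world) (M : term) : term =
  case M of
      AT (i, THM name) => instLevel i (extractOf w name)
    | _ => raise Stuck
```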

@jozefg I think the "proper" way to do it is just follow Nuprl and let abstractions have a universe parameter.

EDIT: But the above could get really frustrating to use without display forms.

@jonsterling I don't know so much about Nuprl-the-proof-assistant, is it worth trying to do this on our end? The alternative is just to make delta rules introduced by theorems second class compared to Operators.

@jozefg

Well, what we have currently was sort of a "cheapest possible approach" so I wouldn't have to think about parameters, and so that the concrete syntax would have as little noise as possible. I think that it would be worthwhile to support declaring operators like

Operator welp{i} : (1).
[welp{i}(x.B[x])] =def= [(A : U{i}) * (x:A) B[x]].

or something. In the longer term, we'd want to support this behavior generically for many different kinds of parameters, but for now, having special support for level-parameterized abstractions seems reasonable; in "Red JonPRL" (i.e. JonPRL with the new logical framework), we'll do a better job of supporting this stuff in general.

The above proposal would allow us to put delta equalities directly into the operational semantics.
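One way the stored delta rule for such a level-parameterized operator might be represented and instantiated (the record shape, instLevel, substArgs, and the level type are assumptions for illustration only):

```sml
(* Hypothetical storage for a level-parameterized delta rule: unfolding
   welp{i}(x.B[x]) substitutes the concrete level i into the stored
   definiens before substituting in the operator's arguments. *)
datatype delta_rule =
  DELTA_RULE of {valence : int list, definiens : term}

fun unfold (DELTA_RULE {definiens, ...}) (i : level) (args : term list) : term =
  substArgs args (instLevel i definiens)
```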

What do you mean, btw, about delta rules introduced by theorems? Just want to make sure I understand all the options.

I'm excited to start working on Red JonPRL (PS: why red?).

I was originally thinking about how, when you write Theorem foo : [...] {...}., you create a new operator foo. I originally thought that it would be more challenging to handle those operators, because they'd be the ones potentially with a universe parameter hidden inside them. This wasn't quite right: universes might appear in definitions introduced by =def= as well, so all operators are equally challenging to unfold. At least, I think :)

@jozefg Red JonPRL is "revolutionary".

Yes, exactly :) All operators are equally problematic 😉. We could generate a parameterized operator in this case for the extract of each theorem.

@jonsterling Hah, nice :)

It would be great if operators were indeed parametrized by levels, as Jon suggested. This is how it works in Nuprl. The only somewhat tricky thing then is to get second-order substitution right (see the definitions of sosub_aux and sosub in https://github.com/vrahli/NuprlInCoq/blob/master/terms/sovar.v for example).


As for second order substitution and metavariables, the new syntactic framework will have a proper treatment of this, which I took from Fiore's second order algebraic theories. When I get to my computer, I'll paste in a link to the rules for the new syntactic framework.

Is there any wrinkle in the interaction of second order substitutions and level parameters? My thinking/hope is that they should be orthogonal.


Here: section 3 of Syntax and Semantics of Abstract Binding Trees contains the rules for metavariables; in particular, section 3.5 contains the algorithm for second-order substitution. Do these look OK?