jeffreykegler/Marpa--R2

Calling ambiguous differs from ambiguity_metric

srathbun opened this issue · 62 comments

I have been hacking up a copy of jddurand's sql grammar. I've ended up with a modification of the grammar that parses my input, but gives me some weird results.

If I do not call ambiguous after read, then the value calls work, and I get a tree. This is what the _parse_debug method does inside the grammar, so I did not notice until I switched it off.

The default Marpa parse method does call ambiguous after read, but in my case it returns the empty string. According to the docs, that means the parse is unambiguous, and calling value will work. However, when I do so I get the error value() called when recognizer is not in tree mode The current mode is "forest". If I call ambiguity_metric it returns a value of 2, indicating an ambiguous parse.

Should the value calls work with an ambiguous grammar? Also, I do not think that these two methods should have different results.

To do its work, ambiguous evaluates the tree in 'forest' mode, which means that $recce->value() calls won't work -- they require 'tree' mode. As a workaround, I believe that you can reset the mode with the $recce->series_restart() method before calling value().
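As a sketch of that workaround (hedged: this assumes a Marpa::R2::Scanless::R recognizer $slr that has already finished reading its input; it is not taken from the actual SLR.pm source):

```perl
use strict;
use warnings;

# Sketch: $slr is assumed to be a Marpa::R2::Scanless::R that has
# finished its read() phase.
my $ambiguity_report = $slr->ambiguous();
die "Parse is ambiguous:\n$ambiguity_report" if length $ambiguity_report;

# ambiguous() leaves the recognizer in 'forest' mode; value() requires
# 'tree' mode, so restart the evaluation series first.
$slr->series_restart();
my $value_ref = $slr->value();
die "No parse\n" if not defined $value_ref;
```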

I had thought this was documented somewhere, but looking at the docs, I don't see anything. I'll leave this issue open until I resolve the documentation aspect of the issue.

Re-reading, I am not sure I responded to all the implications. Definitely, the answers provided by ambiguous() and ambiguity_metric() should be consistent. If you don't think they are in some cases, could you provide an example, as small as possible, of the failure?

In my case, they do not provide the same answers. If you use my fork with this gist you should see it fail with the forest error.

But if you use the TEST_DEBUG env option, you'll get a parse tree out.

The "forest error" is Marpa behaving as expected, and series_restart() should be a workaround. This is, admittedly, not the smoothest interface ever designed, but that won't be fixed until versions of Marpa after Marpa::R2.

The "forest error" aside, inconsistent returns from ambiguous() and ambiguity_metric() would be a bug, but it's not clear from the gist where and if that is happening.

@jddurand, any thoughts on this?

At http://irclog.perlgeek.de/marpa/2015-01-23#i_9995239 in the IRC channel, I ask about the "forest error", for the purpose of documenting the workaround.

I do several things, so that I don't get this error any more:

(1) I almost always work with events, so:

    for
    (
        $pos = $self -> recce -> read($string, $pos, $length);
        $pos < $length;
        $pos = $self -> recce -> resume($pos)
    )
    {
        ($start, $span)            = $self -> recce -> pause_span;
        ($event_name, $span, $pos) = $self -> _validate_event($string, $start, $span, $pos);
        $lexeme                    = $self -> recce -> literal($start, $span);
        $original_lexeme           = $lexeme;
        $pos                       = $self -> recce -> lexeme_read($event_name);

        die "lexeme_read($event_name) rejected lexeme |$lexeme|\n" if (! defined $pos);
        ...
    }

_validate_event() chooses one event if several trigger simultaneously.

This logic is used in GraphViz2::Marpa and Text::Balanced::Marpa.

(2) I rank the alternatives for a problematic rule:

    rdn ::= attribute_pair                          rank => 1
          | attribute_pair spacer plus spacer rdn   rank => 2

This tells Marpa to differentiate between choices and hence between trees.

This example comes from X500::DN::Marpa, which uses actions, not events.

(3) By reporting the list of simultaneous events I can study the problem and re-write the part of the grammar leading to this issue. That's the big advantage of _validate_event(). It has the reporting code in it.

Don't forget! rank only works in conjunction with the ranking_method parameter:

    Marpa::R2::Scanless::R -> new
    ({
        exhaustion        => 'event',
        grammar           => $self -> grammar,
        ranking_method    => 'high_rule_only',
        semantics_package => 'X500::DN::Marpa::Actions',
    })

@ronsavage -- OK. All of those avoid ambiguity, which is cool if that is your choice. But Marpa users should be able to embrace and exploit ambiguity so, in the context of this issue, I don't count those as workarounds.

Hi Jeffrey

Fair enough!

But at least users should know about such techniques.

Ron Savage - savage.net.au

Since this ticket is collecting techniques to avoid ambiguity, I should mention the 'latm' option in the grammar. As I read the docs for Marpa::R2::Scanless::DSL, there may be circumstances where using latm => 1 will find fewer tokens than the default, latm => 0. So, in such cases 'latm' will reduce ambiguity, and perhaps eliminate it.
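The 'latm' setting can be applied grammar-wide via the lexeme default statement in the DSL. A minimal fragment (hedged: the rule names here are invented for illustration, not from the SQL grammar under discussion):

```perl
# Hypothetical grammar fragment: with latm => 1 (Longest Acceptable
# Token Match), a lexeme is recognized only where the structural (G1)
# grammar can actually accept it, which can prune token alternatives
# and so reduce, or even eliminate, ambiguity.
my $dsl = <<'END_OF_DSL';
lexeme default = latm => 1
:start     ::= pair
pair       ::= key ('=') value
key        ~ [\w]+
value      ~ [\w]+
:discard   ~ whitespace
whitespace ~ [\s]+
END_OF_DSL
```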

Thanks for the tips @ronsavage but I guess I was not clear enough in my description. I know the grammar is ambiguous, and that's ok. I've been mucking around with it, so having a funky parse at various points is expected. My biggest issue was that during my testing, the parse debug function I used did not call ambiguous. However, the regular parse function did.

But, the call returned empty string, allowing the regular parse function to move on and try to call value, which failed. By adding ambiguous and ambiguity_metric to my debug function, I found that they report different results. The input could be ambiguous, but that's not the problem. The problem is the difference between the two checks.

You won't see these lines directly in the gist, because it's all part of the underlying libraries. Grabbing my fork, you can add both checks to the parse_debug function, and see the problem when you run the input through it.

rns commented

ambiguous() returning the empty string means there is no ambiguity [1]. Hence the call to value() in parse(). On multiple parses, ambiguous() (currently) returns a non-empty string with a report on the ambiguities.

ambiguity_metric(), on the other hand, returns 1 on an unambiguous parse, or >1 on two or more parses.

So, they serve a similar purpose, but in a different way. Hope this helps.

[1]
https://metacpan.org/pod/distribution/Marpa-R2/pod/Scanless/R.pod#ambiguous
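The contrast rns describes can be sketched as two alternative checks (hedged sketch, assuming a finished recognizer $slr; remember that after ambiguous() you need series_restart() before value()):

```perl
# Check 1: ambiguous() -- the empty string means unambiguous;
# otherwise it is a human-readable report on the ambiguities.
my $report = $slr->ambiguous();
print "unambiguous\n" if $report eq q{};

# Check 2: ambiguity_metric() -- 1 means unambiguous; greater than 1
# means two or more parses.
my $metric = $slr->ambiguity_metric();
print "unambiguous\n" if $metric == 1;
```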


@rns and that is exactly the problem. I'm getting an empty string from ambiguous and 2 from ambiguity_metric for the same parse.

rns commented

That could be a bug. Do you have something I can reproduce? Or is it entirely in your fork of Marpa::R2?

Update: s/Marpa::R2/MarpaX-Languages-SQL2003-AST/, so in your fork of https://github.com/RiveraGroup/MarpaX-Languages-SQL2003-AST

I don't have a fork of Marpa::R2, I have a fork of jddurand's sql grammar. Said grammar can be used with the gist I've linked above. If you run the gist input with the TEST_DEBUG environment variable, it will produce an output tree. Without that variable, it hits the ambiguous check, sees everything is ok, and tries to call value, which fails.

Since you need my grammar to see the error, you can add the ambiguous check to the debug function I linked, and have it print the output for both ambiguous and ambiguity_metric to see the problem.

rns commented

The code snippet starting here -- https://github.com/RiveraGroup/MarpaX-Languages-SQL2003-AST/blob/master/lib/MarpaX/Languages/SQL2003/AST.pm#L161 -- does not see ambiguity in my run with TEST_DEBUG set (no AMBIGUITY: in the output). It just does return $value_ref[0]. Do you see the same thing?

Yes, that's what I see. You will note that ambiguous does not get called at all. If you turn off TEST_DEBUG, it will go through the standard parse code, call ambiguous and get an empty string, then fail on the value call.

rns commented

ambiguous() calls ambiguity_metric(), and returns empty string if ambiguity_metric() returns 1, so it works ok for a single parse under TEST_DEBUG, if called after read() in _parse_debug().

return q{} if $slr->ambiguity_metric() == 1;

https://github.com/jeffreykegler/Marpa--R2/blob/master/cpan/lib/Marpa/R2/SLR.pm#L1513

So far, both ambiguous() and ambiguity_metric() work as they should.

However, ambiguity_metric() does return 2 if called alone at the same place after read() under TEST_DEBUG.

That's not what I expected. You are using my fork of the sql grammar, correct? It should bite the dust on the value() call afterwards. Have you tried it without the TEST_DEBUG variable?

Good to know that ambiguous calls ambiguity_metric under the hood.

rns commented

This

value() called when recognizer is not in tree mode
  The current mode is "forest"

is because $slr->series_restart() must be called after $slr->ambiguous() before you can use value().

However, if I call $slr->series_restart() before the value(), it returns

Could not resolve rule action named '_nonTerminalSemantic'
  Rule was UNKNOWN_STUFF ::= Identifier
  Could not fully qualify "_nonTerminalSemantic": no resolve package

This shows a problem with the semantics_package arg to $slr which, strangely, does not show up without $slr->series_restart().

Do you see something like that?

Yeah, that's exactly what I see. But I thought that if ambiguous returned an empty string you did not need to call series_restart? Otherwise, how does the standard parse function work at all?

rns commented

parse() works because ambiguous() calls ambiguity_metric(), and if it returns 1, then ambiguous() doesn't try to build an ASF (abstract syntax forest) to show ambiguities. series_restart() is needed only if the ASF constructor is called [1], which is not the case if there is no ambiguous parse.

[1] https://github.com/jeffreykegler/Marpa--R2/blob/master/cpan/lib/Marpa/R2/SLR.pm#L1514
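The flow rns describes might be condensed like this (hedged sketch of the logic only, not the actual SLR.pm source):

```perl
# Sketch only: why parse() survives the unambiguous case without
# series_restart().
sub ambiguous_sketch {
    my ($slr) = @_;
    # Cheap check first.  On an unambiguous parse no ASF is built,
    # so the recognizer never enters 'forest' mode and a later
    # value() call still works.
    return q{} if $slr->ambiguity_metric() == 1;
    # Ambiguous: build an ASF to describe the ambiguity.  This is
    # what switches the recognizer into 'forest' mode.
    my $asf = Marpa::R2::ASF->new( { slr => $slr } );
    ...;    # walk the forest and return a report
}
```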

@rns: Thanks for picking this one up! Given the oblique relationship between the gist and the actually problematic calls, it might have taken me quite a while to get to this.

@rns: Could you write up a rev to the Marpa::R2 docs that explains the need to call series_restart() when ambiguous() detects an ambiguous parse? With that I'll be able to close this.


rns commented

@jeffreykegler: Ok, I'll look into patching the doc for series_restart(), will keep you informed.

I'm a bit puzzled by ambiguity_metric() returning 2 when called directly and 1 when called in ambiguous() though.

Really? I missed that. I don't think that should happen.

Often, in the attempt to duplicate a bug in a small sample, it goes away, and I misread this as one of those cases.


@jeffreykegler Sorry my example was so oblique. @rns Thanks for working through the thing.

I think that a little documentation clean up would be nice, but this issue was opened specifically for the difference between ambiguity_metric and ambiguous. Until that gets resolved in some way I'd like to keep this issue open.

rns commented

I'll look into that case -- ambiguity_metric() == 2 when called directly, but 1 when called inside ambiguous() -- when I have a bit more time this week.

@rns: If you can reproduce it in a more convenient form, I can pick up from there.


rns commented

@srathbun, @jeffreykegler
I've taken out the grammar and the input and made this standalone test script -- code, output. All tests pass except ambiguity_metric() with high_rule_only ranking; with none ranking, both ambiguous() and ambiguity_metric() work as they should -- so perhaps ranking is somehow involved.

As an aside, with high_rule_only and the actions @jddurand discussed in, IIRC, http://irclog.perlgeek.de/marpa/2014-12-12#i_9797833 -- in this gist, the output shows that while ambiguity_metric() continues returning 2, ambiguous() stops returning the empty string and instead tries building a syntax forest (note the Marpa::R2::ASF::new('Marpa::R2::ASF' line in [2]), but fails due to what seems to be a bug -- calling rule_describe() as an object method.

@rns thanks for putting that together. My understanding of the grammar is that it only works properly with the high_rule_only option. @jddurand thoughts?

Now that there's an example that clearly shows it, I'll take over the problem. It sounds like it's a bug in my code. If it's not, I'll be glad to let you know. :-)


rns commented

Yes, the grammar uses the rank adverb heavily and becomes extremely ambiguous (ambiguity_metric() returns 7, IIRC) without high_rule_only.


Re #233 (comment)

high_rule_only is intentional as you say, as explained in http://blogs.perl.org/users/jean-damien_durand/2014/12/a-marpa-powered-sql-2003-parser.html .

The reason behind this 'parse_debug' routine is only that I wanted the progress() output when parse() fails, and only under the test environment (i.e. t/*.t).
I think I took the original parse() and just added the progress(), keeping its original behaviour... Perhaps I did it wrong and that is what exposes the bug mentioned above -; !

Still caught up with other stuff, but I took a preliminary look. In Libmarpa I have 2 "ambiguity metrics", one pre-ordering, and the other which takes into account the ordering (that is, stuff like "high rule only"). Marpa::R2 uses the post-ordering ambiguity metric. In other words, @rns, yes, ranking is very much involved.

rns commented

@jeffreykegler -- yes, ranking, and, perhaps, semantic actions, too.

This gist [1] shows that ambiguous() and ambiguity_metric() both (wrongly) think that the parse is ambiguous if JD's Unicode escaping actions [2] are preserved in the grammar source (even if actually undefined).

Those actions have been removed from the test script [3], because value() wouldn't run with them being undefined.

[1] https://gist.github.com/rns/c0ac561941fec6848b87
[2] http://irclog.perlgeek.de/marpa/2014-12-12#i_9797833
[3] https://gist.github.com/rns/d9c45866a3219ce93d7b


@rns: Could you open a separate issue for the rule_describe() problem? When multiple problems occur in an issue with a long history, I have a high chance of missing some -- and every time I need to go back to the issue, I have to reread all the not-directly-related stuff from other issues.


I have not entirely followed the thread, but I am quite happy to see that the grammar correctly handles complex SQL stuff -; (modulo the possible ambiguity et al. bug -;)
I am referring to https://gist.github.com/rns/c0ac561941fec6848b87

@jddurand Your grammar has held up quite well, and Marpa has been a joy to work with. My issues have almost entirely been based around trying to hammer just enough TSQL logic into it, and skipping "unknown" things without throwing everything away as unknown. Marpa has made that much, much, much easier than the last time I did something similar with lex&yacc.

I've moved my other projects aside, and this issue is now my top priority. I'm treating it as covering only the problem described in the title: the output of ambiguous() and ambiguity_metric() being inconsistent.

@jeffreykegler Thank you, but I'm in no hurry to have it resolved. The stuff I'm doing can avoid it by just running with TEST_DEBUG=1 and that avoids the check.

Thanks! As reported, it seems to be an outright bug. Outright bugs in Marpa::R2 take priority over new features. That it's so minor that it is not hurting anyone is good, but it is still my top priority.

@rns: Could the issue be summarized this way?

In the presence of "high rule only" ranking, it is possible that a grammar which, after ranking, is unambiguous will be reported by ambiguity_metric() to be ambiguous.

Or is there more to it?

I've duplicated the problem in a small example: https://gist.github.com/jeffreykegler/0ce38abec2fe14e458b4

I'm inclined to treat it as a documentation bug. Here's the problem. When using high rule only, Libmarpa makes an attempt to efficiently discover if all ambiguity has been removed. It does this by seeing if there are any choices not resolved by the "high rule only" criterion. But there's a corner case it can miss, in which case it will think that a parse that "high rule only" has made unambiguous is still ambiguous.

It is possible for there to be a place in the parse where there was a choice, which the "high rule only" criterion does not resolve, but which is "orphaned" by a higher level choice that it does resolve. The above-linked gist illustrates this. Libmarpa in computing the ambiguity metric is not smart enough to spot this.

I could change Libmarpa to be smart about these things, which would make it slightly less efficient. But the idea behind ambiguity_metric() is to be extremely cheap to compute -- after all, ambiguous() is going to go on to find out exactly this same information.

My thought is to add to the documentation something like this language:

Applications which use "high rule only" need to be aware that ambiguity_metric() may report an parse with only one choice to be ambiguous. This reflects the fact that the parse is ambiguous pre-ranking.

A parse that is pre-ranking ambiguous sometimes, but not always, becomes unambiguous after ranking if parsed with "high rule only", because "high rule only" throws away the choices for non-high-ranking rules. ambiguity_metric() makes a highly efficient and usually accurate guess as to whether a ranking has turned an ambiguous parse into an unambiguous one, but this guess is not 100% accurate.

Hi Jeffrey

may report an parse

'an' => 'a'.

Ron Savage - savage.net.au

On Thursday 29 January 2015 at 21:55:04, Jeffrey Kegler wrote:

It is possible for there to be a place in the parse where there was a choice, which the "high rule only" criterion does not resolve, but which is "orphaned" by a higher level choice that it does resolve. The above-linked gist illustrates this. Libmarpa in computing the ambiguity metric is not smart enough to spot this.

Then $slr->parse() cannot use the ambiguous() family. End-users, IMHO, will not understand why we rely on a method that is not reliable.

I could change Libmarpa to be smart about these things, which would make it slightly less efficient. But the idea behind ambiguity_metric() is to be extremely cheap to compute -- after all, ambiguous() is going to go on to find out exactly this same information.

What about making this reliable method available in Libmarpa, but as a third method (or eventually re-using the existing ones with an additional parameter), and calling the latter /only/ in $slr->parse()?
This will affect only applications using $slr->parse(), which is very new, and with which the end user expects it to croak if the parse tree value is truly ambiguous.

And, yes, one should document that the current ambiguity methods are a guess, in order to be cheap, and list the situations in which the guess can be wrong, i.e. at least the situation you describe.

Another comment: what about current packages using ambiguous() to detect ambiguity? These should be listed and their authors warned.
I know that MarpaX::Languages::C::AST is checking for ambiguity, but with the old method, i.e. calling value() twice, so this package at least will not be affected.
But Ron's recent packages might be (?)

rns commented

Perhaps make ambiguity_metric() just throw an exception if 'high_rule_only' is in effect, e.g.:

ambiguity_metric() is not designed for use with high_rule_only ranking method, use ambiguous() instead.

Along 'worse is better' lines. ;)

rns commented

@jddurand ambiguous() uses an ASF, so it is not affected by this issue and works as it should, as shown by Jeffrey's gist; it is only ambiguity_metric() that is affected by high_rule_only.

@rns oh ok - thanks -;

Actually, then, it may be best I fix ambiguity_metric() so that it is accurate in the case of "high rule only" as well. I can do this.

I would prefer that you made it accurate. It breaks the rule of least surprise to have it misidentify a parse. But more importantly, $slr->parse() fails horribly in this edge case, even though it uses ambiguous(), which should work correctly. I would expect it either to throw an exception stating the parse is ambiguous, or to give me the results -- not to fail with a weird error about forest modes.

rns commented

If ambiguity_metric() is accurate across all ranking methods, $slr->parse() will work properly as is.

OK. Making ambiguity_metric() accurate will be my new top priority.

I think the just referenced commit is the fix. @rns, do you run a bleeding edge Marpa::R2? Do you want to check?

I've tested the test suite, @rns's reproduction of the problem, and my small reproduction of the issue, and all look OK.

In making the fix, I changed ambiguity_metric() so that an ambiguous parse always returns 2, and an unambiguous parse always returns 1. So effectively it's not really a "metric" any more, just a (now accurate) indicator. This is backward compatible, because the documentation said that all return values above 2 were to be treated as equivalent.

The documentation says that any return value of 2 or greater indicates ambiguous. I will leave that in the docs, to allow for use of a more complex metric in the future. I don't have any plans for such a metric, but it's backward compatible, so why not.

I found a nice implementation. The ambiguity computation for "high rule only" is an efficient tree traversal that ends as soon as it encounters an ambiguous node. Further, even this cost is only paid if all 3 of the following are true:

  1. You actually call ambiguity_metric()
  2. The pre-ranking parse was ambiguous.
  3. You are using "high rule only" ranking.
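With the fixed semantics, callers can treat the metric as a simple indicator (hedged sketch, assuming a finished recognizer $slr):

```perl
# After the fix: 1 always means unambiguous; 2 (or, for forward
# compatibility, anything greater than 1) means ambiguous.
my $metric = $slr->ambiguity_metric();
print $metric > 1 ? "ambiguous\n" : "unambiguous\n";
```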
rns commented

Yes and yes. :)

Seen the same thing. I did a first cut at a regression test -- https://gist.github.com/rns/f2a87e24b7bdcc695389 -- would you prefer it standalone, or should it be added to sl_ambig.t?

@rns -- thanks for doing that. Unless the regression test is a real fit with the other stuff in sl_ambig.t, I'd prefer it standalone.

@rns -- Changed my mind. :-) Yes, please, make the regression test part of sl_ambig.t.

rns commented

I will. :)


As @rns suggests in #239, this issue seems ready to be closed.