ohmjs/ohm

Semantic action '[...]' has the wrong arity: expected 3, got 0

ericmorand opened this issue · 5 comments

I'm facing a strange issue where Ohm throws a "Semantic action '[...]' has the wrong arity: expected 3, got 0" error when I use rest parameters like this:

And: (...nodes: [TerminalNode, IterationNode, TerminalNode]) => compileAndRule({
    values: nodes[1]
}),

At runtime, it is clear that the method will receive three arguments. Why is Ohm complaining about the arity before it even tries to execute the method?

It's very easy to make mistakes with semantic action arity. These up-front checks have always been part of Ohm and they generally bring a lot of benefit. But at the time they were added, JS didn't support rest parameters, so this case wasn't one we had to worry about.

What's the reason to use a rest parameter here, when the number of arguments is fixed? E.g., could you rewrite your action like this?

And: (nodeA: TerminalNode, nodeB: IterationNode, nodeC: TerminalNode) => compileAndRule({
    values: nodeB
}),
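
For reference, here is roughly what such a fixed-arity action looks like wired into a complete semantics. This is only a sketch: the Demo grammar, its And rule, and the compile operation are invented for illustration; only the ohm-js calls themselves (grammar, createSemantics, addOperation, match) are real API:

import * as ohm from 'ohm-js';

// Invented grammar: the And rule's body has three terms ("[", digit*, "]"),
// so its semantic action must declare exactly three parameters.
const g = ohm.grammar(`
  Demo {
    And = "[" digit* "]"
  }
`);

const semantics = g.createSemantics().addOperation('compile', {
  // Named parameters keep the declared arity at 3, which is what Ohm's
  // up-front check compares against the rule body.
  And(open, digits, close) {
    return digits.children.map((d) => d.sourceString);
  },
});

console.log(semantics(g.match('[123]')).compile()); // ['1', '2', '3']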

The reason this needs to be checked up front is that at runtime, there's no way to know that a function was passed an incorrect number of arguments:

> let func = (a, b) => [a, b]
undefined
> func()
[ undefined, undefined ]
> func(1, 2, 3)
[ 1, 2 ]
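
The flip side is that at definition time, the only arity information available is the function's declared parameter count (Function.prototype.length), and rest parameters don't count toward it; that is presumably where the "got 0" in the error above comes from:

> ((...nodes) => nodes).length
0
> ((a, b, c) => [a, b, c]).length
3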

My question is more about what such a check actually brings.

There are two possible situations:

  • A developer is writing the code of their own library, based on Ohm. At that point, either the tests pass or they don't. If they write a semantic action handler that doesn't declare all of the arguments (because they don't need the last two, for example) but the tests pass, nothing is wrong and the check is just a burden.
  • A developer is using the library, and at that point there is zero chance that the number of arguments is incorrect, since the library has been tested.

So, in what exact situation is this check useful?

Here are a couple scenarios where it's useful:

  • When you are writing new semantic actions for a grammar, it's not always obvious exactly what the arity of the rule body is. For example, here is a rule body from our ES5 grammar: return (#(spacesNoNL ~space) Expression<withIn>)? #sc. If you get the number of arguments wrong, it can often result in confusing behaviour which is not easy to debug. In this case, the check catches the error early and provides a clear indication that you've done something wrong (the sketch after this list shows the early failure).
  • When you're editing a grammar that already has some semantic actions written for it, you might change the arity (deliberately or not). Again, having the check can catch problems early.
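
Here is a small self-contained sketch of that first scenario (the Tiny grammar and the eval operation are invented): an action with the wrong arity is rejected as soon as the operation is added, well before any input is parsed or evaluated.

import * as ohm from 'ohm-js';

const g = ohm.grammar(`
  Tiny {
    Pair = "(" number "," number ")"
    number = digit+
  }
`);

// Pair's body has five terms, but this action declares only three, so
// addOperation throws right away (with a message along the lines of
// "Semantic action 'Pair' has the wrong arity: expected 5, got 3")
// instead of silently misbehaving later during evaluation.
try {
  g.createSemantics().addOperation('eval', {
    Pair(open, a, close) {
      return a.sourceString;
    },
  });
} catch (e) {
  console.log((e as Error).message);
}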

Hope that helps.

So it is useful during development, right? And in that case, wouldn't the tests catch the errors?

I'm not sure I understand how a change to the arity of a rule would not be detected by a failing test if this change is not backward compatible.

Let's say that I use rest parameters, and that my action depends on the arguments at indices 1 and 3. If a change to the rule inserts an argument between the second and the fourth one, my test will fail. In that case I don't need Ohm to fail before the test does.

But if I change the rule in a way that adds a fifth argument, my test will still pass. In that case I don't want Ohm to fail, since my test must pass. The check produces a false positive.
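
To make that concrete with plain TypeScript (no Ohm involved, and the names are made up): a handler that only reads specific indices out of a rest parameter keeps working unchanged when extra trailing arguments start being passed to it.

// The handler only looks at indices 1 and 3; extra trailing arguments
// are simply ignored, so appending a term to the rule wouldn't break it.
const handler = (...nodes: string[]) => `${nodes[1]}:${nodes[3]}`;

console.log(handler('(', 'callee', ',', 'args', ')'));      // "callee:args"
console.log(handler('(', 'callee', ',', 'args', ')', ';')); // still "callee:args"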

Maybe I missed your point but I don't see the benefit of this check in the context of a tested project.