bitjson/bch-vm-limits

add consideration of static vs (static + dynamic) cost accounting

Closed this issue · 3 comments

The combo of static + dynamic cost accounting is powerful, but at first glance it's not obvious, and it can even look like a rookie mistake: "You're going to have to execute this thing (at least partially) just to validate it. Cost accounting should be done statically to avoid incurring VM costs."

I think it would be a good addition to explicitly mention that purely static analysis was considered and discarded as an option: it forces heuristics that are either limiting (overly conservative) or dangerous (overly loose), both of which are problems, and the cost of dynamic validation is very low in any case.

If that would be something you consider adding, I'll try to come up with a minimal way to include it.

Obvious downside of static cost evaluation: you'd have to assign a fixed price to each opcode regardless of the size of its inputs.
Then OP_CAT concatenating 1 byte with 1 byte would cost the same as OP_CAT concatenating 5,000 bytes with 5,000 bytes, and you'd be forced to budget every OP_CAT as if it were always doing the most expensive operation, severely limiting smaller uses for no reason.
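To make the point concrete, here's a minimal sketch of the two pricing models for OP_CAT. The constants and function names are purely illustrative, not the actual values or accounting rules from the VM-limits proposal; the only assumption is that a dynamic model can charge in proportion to the bytes actually processed, while a static model must charge every occurrence at the worst case.

```python
# Hypothetical maximum stack-element size, used only for illustration
# (NOT the real BCH VM-limits constant).
MAX_ELEMENT_BYTES = 10_000

def dynamic_cat_cost(a_len: int, b_len: int) -> int:
    """Dynamic accounting: cost scales with the bytes actually produced."""
    return a_len + b_len

def static_cat_cost() -> int:
    """Static accounting: every OP_CAT must be priced at the worst case,
    i.e. as if it always produced the largest possible result."""
    return MAX_ELEMENT_BYTES

small = dynamic_cat_cost(1, 1)          # tiny concatenation: cost 2
large = dynamic_cat_cost(5_000, 5_000)  # worst-case concatenation: cost 10000
fixed = static_cat_cost()               # static price: always 10000

# Under static pricing, the 2-byte case is charged 5000x its real cost.
print(small, large, fixed)
```

Under this toy model, a script full of small concatenations exhausts a static budget thousands of times faster than its real resource usage warrants, which is exactly the "severely limit smaller uses for no reason" problem above.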

Thank you @emergent-reasons for opening the issue! Note for future readers, there's more discussion on BCR, and a succinct description is now in rationale: https://github.com/bitjson/bch-vm-limits/blob/master/rationale.md#use-of-explicitly-defined-density-limits

Appreciate you adding that.