Performance Benchmarks
@finiteloop mentioned in #43 that macropy slows stuff down a bunch. It would be nice if we had a benchmark suite we could use to see how fast MacroPy is in a few cases:
- Large file with one or two macros inside (e.g. `log`s)
- Large file full of macros (e.g. a test suite with lots of `require`s)
- Large file with one or two macros which contain a whole pile of non-macro code (e.g. a few case classes with a pile of methods inside)
Only then would we be able to properly measure what effect (if any) our changes are having on performance, and therefore make systematic improvements in that direction.
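A minimal harness for the three cases above could be as simple as timing cold imports of modules written to match each case. This is only a sketch: the `bench_*` module names are placeholders, not files that exist in the repo.

```python
import importlib
import sys
import time

import macropy.activate  # installs MacroPy's import hook

def time_cold_import(module_name, repeats=5):
    """Import module_name from scratch `repeats` times; return the best time."""
    best = float("inf")
    for _ in range(repeats):
        sys.modules.pop(module_name, None)  # force a fresh import (and re-expansion)
        start = time.perf_counter()
        importlib.import_module(module_name)
        best = min(best, time.perf_counter() - start)
    return best

if __name__ == "__main__":
    for name in ("bench_logs", "bench_requires", "bench_case_classes"):
        print(name, time_cold_import(name))
```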
We have an internal patch (hard to port to macropy main) that saves the compiled Python files (*.pyc files) after the macro conversion is complete, and we use those if the modified time of the source file is old enough. This was a huge win for us, and it is a technique that you might consider. I will try to port it if I get the time over the next few months, but unfortunately I am swamped with other stuff right now.
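To illustrate the idea, here is a generic sketch of mtime-based caching, not the internal patch itself; `expand_and_compile` is a hypothetical stand-in for MacroPy's expansion-plus-compile step, and the cache filename scheme is made up.

```python
import marshal
import os

def load_or_expand(source_path, expand_and_compile):
    """Reuse a cached code object unless the source is newer than the cache."""
    cache_path = source_path + ".macro.pyc"  # placeholder naming scheme
    try:
        if os.path.getmtime(cache_path) >= os.path.getmtime(source_path):
            with open(cache_path, "rb") as f:
                return marshal.load(f)
    except OSError:
        pass  # no cache yet, or it is unreadable
    code = expand_and_compile(source_path)  # full macro expansion + compile
    with open(cache_path, "wb") as f:
        marshal.dump(code, f)
    return code
```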
Sure no problem, thanks for the feedback =)
I guess there are three things to do here:
- Get a nice benchmark suite set up
- Intrinsically speed up the macro-expansion process (e.g. via heuristic culling of macro-less subtrees; see the sketch after this list)
- Reduce macro-expansions by caching .pyc files
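The culling heuristic could look something like the following; these are hypothetical helpers, not MacroPy's actual expander, and `macro_names` is assumed to come from the `from module import macros, ...` declarations MacroPy already parses.

```python
import ast

def mentions_macro(tree, macro_names):
    """True if any identifier in the subtree matches a known macro name."""
    return any(
        isinstance(node, ast.Name) and node.id in macro_names
        for node in ast.walk(tree)
    )

def expand_body(body, macro_names, expand_stmt):
    """Recurse into only the top-level statements that might contain macros."""
    return [
        expand_stmt(stmt) if mentions_macro(stmt, macro_names) else stmt
        for stmt in body
    ]
```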
I've added .pyc caching as part of the new export hooks. This is nice because it fits in with the other export hooks like the `SaveExporter`, and is entirely optional.
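For anyone wanting to try it, usage presumably looks something like the sketch below. The import path and the no-argument constructor are assumptions, so check `macropy/core/exporters.py` in your checkout before relying on this.

```python
# run.py -- hypothetical entry point
import macropy.activate  # install the MacroPy import hook
from macropy.core.exporters import PycExporter  # assumed location

macropy.exporter = PycExporter()  # cache expanded modules as .pyc files

import my_macro_using_module  # placeholder: expanded on first import, cached after
```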
I have no idea if I'm handling the `.pyc`s right at all (probably not), but as a first approximation it works.
Closing this for now; if people have problems with the .pyc caching or pre-compilation subsystem (very likely), they can open new issues. In general, I believe those should be sufficient to make using macropy not painfully slow in the long run, even if the initial compiles take a bit.