Massive performance loss from 0.3.5 to 0.3.6
Closed this issue · 4 comments
I am seeing a massive performance loss going from `dill==0.3.5` to `dill==0.3.6`.
Profiling the program with which I discovered this problem looks like this with `0.3.6`:
With `0.3.5`, less than 2% of the runtime is spent in this branch.
The fact that `_dill:343 save` is called an enormous number of times looks suspicious...
Thanks for the `gprof2dot` results. Are you seeing this for some corner case, or in general? It's generally helpful to include some test code that would enable us to try to reproduce your results. It seems you might be dumping a `numpy.ufunc` or a `numpy.ndarray`.
I will try to come up with a minimal example.
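A minimal example could be as simple as timing `dill.dumps` on a few representative objects under each release and comparing the numbers (a sketch; the object choices here are illustrative, picked to exercise different save paths in dill):

```python
import timeit

import dill

# A plain list goes through the stdlib pickling path; a lambda exercises
# the function-serialization machinery that is specific to dill.
objects = {
    "list": [1, 2, 3],
    "lambda": lambda x: x * x,
}

print("dill", dill.__version__)
for name, obj in objects.items():
    # Time 1000 dumps of each object so small per-call differences add up.
    t = timeit.timeit(lambda: dill.dumps(obj), number=1000)
    print(f"{name:8s} {t:.4f}s per 1000 dumps")
```

Running the same script in two virtual environments, one with `dill==0.3.5` and one with `dill==0.3.6`, would make any regression directly visible.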
Added a profiling script for `dumps` in bae57e9.
Test suite:

```shell
get_gprof -n 2 -e 2 "import mystic; mystic.monitors.Monitor()"
get_gprof -n 2 -e 2 "import numpy; numpy.array([1,2,3])"
get_gprof -n 2 -e 2 "[1,2,3]"
get_gprof -n 2 -e 2 "import numpy; numpy.sin"
get_gprof -n 2 -e 2 "lambda x: x*x"
```
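The core of such a command can be approximated with the stdlib `cProfile` and `pstats` modules, which produce the stats that `gprof2dot` renders as a call graph (a rough sketch, not the actual `get_gprof` implementation):

```python
import cProfile
import io
import pstats

import dill

# Evaluate the object expression under test, mirroring the string
# argument passed on the command line above.
code = "[1, 2, 3]"
obj = eval(code)

# Profile a single dumps call.
profiler = cProfile.Profile()
profiler.enable()
dill.dumps(obj)
profiler.disable()

# Print the most expensive entries by cumulative time; a hot save
# branch (like the suspicious _dill save above) would dominate here.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(10)
print(stream.getvalue())
```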
Tested `dill-0.3.5` versus `dill-0.3.6`. The speed difference in all cases was negligible.
The call path looks very different, but runtime is pretty much the same in all cases, according to my testing (Python 3.7 on macOS).
Times were nearly identical.
Same for the other objects tested.
I'm interested to see why you are experiencing a difference. I'll try your example once you post it.
With the new get_gprof
profiling script, I've been regularly testing objects and comparing to prior releases. I'm not seeing any real performance differences in the last two releases. I'm going to close this, but please feel free to reopen this if you post an example of what you were seeing that triggered this issue.