Proposal to improve CPU multiprocessing performance
kuffmode opened this issue · 3 comments
Hi all,
I realized the multi-threading performance is good, but it can probably be even better on local machines. I noticed that my CPU cores are not fully engaged, so a simple solution that I usually use and have found helpful here is joblib parallel processing.
At its core, all it needs is something like this:
from joblib import Parallel, delayed

# one analyse_single_target call per target; n_jobs=-1 uses all available cores
results = Parallel(n_jobs=-1)(
    delayed(network_analysis.analyse_single_target)(
        settings=settings, data=data, target=node)
    for node in range(n_nodes))
But of course, it would be more user-friendly if this were wrapped in a function, something like an interface where we just say how many jobs to use, what to do, some kwargs for the function, and potentially some kwargs for the parallel processing backend (see the sketch below). This way, each core is occupied with one single_target analysis, which in my case helps a lot with performance.
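Just to make the idea concrete, a rough sketch of what such a wrapper could look like (the name run_parallel_targets and the func_kwargs/joblib_kwargs arguments are only placeholders I made up, not existing API in this library):

from joblib import Parallel, delayed

def run_parallel_targets(func, targets, n_jobs=-1, func_kwargs=None, joblib_kwargs=None):
    # func_kwargs go to the analysis function, joblib_kwargs to the joblib backend
    func_kwargs = func_kwargs or {}
    joblib_kwargs = joblib_kwargs or {}
    return Parallel(n_jobs=n_jobs, **joblib_kwargs)(
        delayed(func)(target=target, **func_kwargs) for target in targets)

# e.g.:
# results = run_parallel_targets(network_analysis.analyse_single_target,
#                                range(n_nodes),
#                                func_kwargs=dict(settings=settings, data=data))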
About joblib: we use it in our own library, and compared to some fancier options like dask and ray it is actually a lot better and less of a pain! So far, it has never broken anything for us.
Hi Kayson,
Thanks for sharing this, looks like a nice way to make the parallelization over targets more convenient. My proposal would be to add this as a demo script, so people can build on it. If you like, just open a pull request or send me your script and I will test/adapt it and include it.
Regarding Michael's comment, I just merged @daehrlich's implementation for MPI-supported CMI estimation into master (release v1.5). Maybe this is helpful as well.
Best,
Patricia
Awesome, I will do it in early 2024 then. I think the advantage of joblib is that it basically doesn't need anything but the function, so it will be very straightforward for people to use. I'm not sure whether it's any better than MPI, but I think it's a good, simple trick.