Reduce size of dependencies
Thanks for making this package available!
The readme says
mkl_fft [..] is now being released as a stand-alone package
That is true, but it still depends on the mkl package (>200 MB compressed), which is downloaded both when installing with conda and with pip.
The large size of dependencies can be a significant problem: for comparison, pyFFTW wheels are only 2.3 MB. I imagine this package needs only a small part of MKL; is there any chance self-contained wheels could be made (e.g. by statically linking the relevant parts of MKL)?
@rth One can link MKL statically. The Intel MKL Link Line Advisor is helpful in ensuring that it is done right.
For that, the libs variable in setup.py#L39 needs to be changed according to the MKL Link Line Advisor referenced above. The order in which the libraries are listed may matter as well.
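For illustration, here is a minimal sketch of what such a static-link configuration could look like. The library names, the sequential threading layer, and the /opt/intel/mkl install path are assumptions for a 64-bit Linux lp64 build; the authoritative link line for your platform should come from the Link Line Advisor:

```python
# Hypothetical static-link settings for setup.py; names and paths are
# assumptions (64-bit Linux, lp64 interface, sequential threading).
MKL_LIB_DIR = "/opt/intel/mkl/lib/intel64"  # adjust to your MKL install

# Dynamic system libraries that remain even with MKL linked statically.
libs = ["pthread", "m", "dl"]

# With GNU ld, the static MKL archives must be wrapped in
# --start-group/--end-group to resolve their circular symbol dependencies.
extra_link_args = [
    "-Wl,--start-group",
    f"{MKL_LIB_DIR}/libmkl_intel_lp64.a",  # interface layer
    f"{MKL_LIB_DIR}/libmkl_sequential.a",  # threading layer
    f"{MKL_LIB_DIR}/libmkl_core.a",        # computational core
    "-Wl,--end-group",
]
```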
We do not do this automatically because we want MKL to serve the whole computation ecosystem (numpy/scipy/mkl_fft/mkl_random, etc.), and linking each tool statically would significantly increase its memory footprint in real-life workflows.
Static linking also has the disadvantage of not allowing the threading layer to be chosen dynamically, a feature now used by the TBB, SMP and Numba packages.
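To illustrate what dynamic linking enables, here is a sketch that selects the threading layer at runtime via the standard MKL_THREADING_LAYER environment variable. This only works against the dynamic mkl_rt library; a statically linked build bakes one layer in. The variable must be set before MKL is first loaded:

```python
import os

# Must be set before any MKL-backed module is imported; accepted values
# include "INTEL", "GNU", "TBB" and "SEQUENTIAL". Only honored by the
# dynamic single dynamic library (mkl_rt).
os.environ.setdefault("MKL_THREADING_LAYER", "TBB")

import numpy as np
import mkl_fft  # picks up the threading layer chosen above

x = np.random.randn(1024) + 1j * np.random.randn(1024)
y = mkl_fft.fft(x)  # computed with the TBB threading layer
```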
I see, thanks for the explanations.
We do not do this automatically because we want MKL to serve the whole computation ecosystem (numpy/scipy/mkl_fft/mkl_random, etc.), and linking each tool statically would significantly increase its memory footprint in real-life workflows.
I guess there is no easy general solution. MKL is really nice performance-wise. I just wanted to raise awareness that having a large monolithic dependency that keeps growing in size (cf. conda/conda#6756 (comment)), while sometimes only a fraction of the functionality is needed (e.g. FFT), is also an issue in use cases where download size matters.