A pure Python library for benchmarked, scalable numerics, built using numba.
The current state-of-the-art in numerics / algorithmics / machine learning has many big problems, two of which are:
1. The data is getting bigger and more complex, and code is having trouble scaling to these levels.
2. The code is getting bigger and more complex, and developers are having trouble scaling to these levels.

To fix (1) we need better algorithms, code which vectorises to SIMD instructions, and code which parallelises across CPU cores.
To fix (2) we need to focus on simpler code which is easier to debug.
fastats (i.e. fast-stats) tries to help with both of these by using Linear Algebra for performance optimizations in common functions, and by using numba from Anaconda to JIT-compile the optimized Python code to vectorised native code, whilst being trivial to run in pure Python mode for debugging.
Finding the roots of an equation is central to much of data science and machine learning. For monotonic functions we can use a Newton-Raphson solver to find the root:
```python
from fastats import newton_raphson

def my_func(x):
    return x**3 - x - 1

result = newton_raphson(0.025, 1e-6, root=my_func)
```
This uses numba under the hood to JIT-compile the Python code to native code, and uses fastats transforms to call my_func where required.
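As a quick sanity check, the cubic x**3 - x - 1 has a single real root at approximately 1.3247 (the plastic number), so a converged result should be close to that value:

```python
# Sanity check: the only real root of x**3 - x - 1 is ~1.3247,
# so the solver's result should be close to that.
print(result)
```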
However, we usually wish to take a fast function and apply it to a large data set, so fastats allows you to get the optimized function back as a callable:
```python
newton_opt = newton_raphson(0.025, 1e-6, root=my_func, return_callable=True)
result = newton_opt(0.03, 1e-6)
```
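For instance, applying the compiled callable across a large array of starting points is then a plain NumPy loop. This is only a minimal sketch; the array of starting points below is made up for illustration and is not part of the fastats API:

```python
# Minimal sketch: apply the compiled solver to many starting points.
# The starting-point array is purely illustrative.
import numpy as np

x0s = np.linspace(1.0, 2.0, 1_000_000)
roots = np.array([newton_opt(x0, 1e-6) for x0 in x0s])
```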
If you profile this you will find it's extremely fast (timings from a 2015 MacBook Pro):
```
>>> %timeit newton_opt(0.03, 1e-6)
785 ns ± 8.2 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```
compared with SciPy 1.0.1:
```
>>> import scipy
>>> scipy.__version__
'1.0.1'
>>> from scipy.optimize import newton
>>> %timeit newton(my_func, x0=0.03, tol=1e-6)
25.6 µs ± 954 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
Most high-level languages like Python/Lua/Ruby have a formal C API which allows us to 'drop down' to native code easily (as SciPy does above). However, not only is this time-consuming, error-prone and off-putting to many developers, but as you can see from the example above, the specialised C extensions do not automatically scale to larger data.
Through the use of numba to JIT-compile the entire function down to native code, we can quickly scale to much larger data sizes without leaving the simplicity of Python.
The secret is in the handling of the function arguments.
When we write C-extensions to high-level languages, we are usually trying to speed up a certain algorithm which is taking too long. This works well for specialised libraries; however, in this world of big data, the next step is usually "now I want to apply that function to this array of 10 million items". This is where the C-extension / native-library technique falls down.
C-extensions to high-level languages are necessarily limited by the defined API - i.e., you can write a C function that takes 3 floats, or 3 arrays of floats, but it's very difficult to deal with arbitrary inputs.
fastats allows you to pass functions as arguments into numba, and therefore abstract away the specific looping or concurrency constructs, resulting in faster, cleaner development as well as faster execution.
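To illustrate the general pattern with plain numba (this is only a sketch, not the fastats internals; the names my_solver and make_batched are hypothetical):

```python
# Minimal sketch: pass a function as an argument to a generic, JIT-compiled loop.
# Uses plain numba rather than fastats internals; `my_solver` and `make_batched`
# are hypothetical names for illustration only.
import numpy as np
from numba import njit

@njit
def my_solver(x0, tol):
    # Newton's method on f(x) = x**3 - x - 1, written as a scalar function.
    x = x0
    for _ in range(100):
        fx = x**3 - x - 1
        dfx = 3 * x**2 - 1
        x_new = x - fx / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def make_batched(solver):
    # The looping construct is written once; any JIT-compiled scalar solver
    # can be passed in, and numba compiles a specialised loop around it.
    @njit
    def batched(x0s, tol):
        out = np.empty_like(x0s)
        for i in range(x0s.shape[0]):
            out[i] = solver(x0s[i], tol)
        return out
    return batched

batched_solver = make_batched(my_solver)
roots = batched_solver(np.linspace(1.0, 2.0, 1_000_000), 1e-6)
```

Writing the loop once and swapping in different compiled scalar functions is the same design idea, with fastats handling the wiring for you.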
Python >= 3.5 only. Python 3.6 or newer is strongly recommended.
See setup.py - install_requires for installation requirements.
The contribution guide contains information on how to install development requirements.
For test requirements, take a look at .travis.yml or .appveyor.yml.
Please make sure you've read the contribution guide: CONTRIBUTING.md
In short, we use PRs for everything.