`netlib-java` is a wrapper for low-level BLAS, LAPACK and ARPACK that performs as fast as the C / Fortran interfaces.
If you're a developer looking for an easy-to-use linear algebra library on the JVM, we strongly recommend Commons-Math, MTJ and Breeze:

- Apache Commons Math for the most popular mathematics library in Java (not using `netlib-java`).
- Matrix Toolkits for Java for high performance linear algebra in Java (builds on top of `netlib-java`).
- Breeze for high performance linear algebra in Scala (builds on top of `netlib-java`).
In `netlib-java`, implementations of BLAS/LAPACK/ARPACK are provided by:

- delegating builds that use machine optimised system libraries (see below)
- self-contained native builds using the reference Fortran from netlib.org
- F2J to ensure full portability on the JVM
The JNILoader will attempt to load the implementations in this order automatically.
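To see which backend was actually picked on a given machine, one can inspect the class of the returned singleton. The following is a minimal sketch (the printed class names, e.g. `NativeSystemBLAS`, depend entirely on what the loader found):

```java
import com.github.fommil.netlib.BLAS;
import com.github.fommil.netlib.LAPACK;

public class WhichImpl {
    public static void main(String[] args) {
        // getInstance() triggers the loading order described above:
        // machine-optimised system natives, then reference natives, then F2J
        System.out.println("BLAS:   " + BLAS.getInstance().getClass().getName());
        System.out.println("LAPACK: " + LAPACK.getInstance().getClass().getName());
    }
}
```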
All major operating systems are supported out-of-the-box:

- OS X (`x86_64`)
- Linux (`i686`, `x86_64`, Raspberry Pi `armhf`) (must have `libgfortran3` installed)
- Windows (32 and 64 bit)
High performance BLAS / LAPACK are available commercially and open source for specific CPU chipsets. It is worth noting that "optimised" here means a lot more than simply changing the compiler optimisation flags: specialist assembly instructions are combined with compile time profiling and the selection of array alignments for the kernel and CPU combination.
An alternative to optimised libraries is to use the GPU: e.g. cuBLAS or clBLAS. However, GPU implementations have severe performance degradation for small arrays. MultiBLAS is an initiative to work around the limitation of GPU BLAS implementations by selecting the optimal implementation at runtime, based on the array size.
To enable machine optimised natives in `netlib-java`, end-users make their machine-optimised `libblas3` (CBLAS) and `liblapack3` (Fortran) available as shared libraries at runtime.
If it is not possible to provide a shared library, the author may be available to assist with custom builds (and further improvements to `netlib-java`) on a commercial basis. Make contact for availability (budget estimates are appreciated).
Apple OS X requires no further setup because OS X ships with the veclib framework, boasting incredible CPU performance that is difficult to surpass (performance charts below show that it out-performs ATLAS and is on par with the Intel MKL).
On Linux (including the Raspberry Pi), generically-tuned ATLAS and OpenBLAS are available with most distributions (e.g. Debian) and must be enabled explicitly using the package manager. For example, on Debian / Ubuntu one would type
```
sudo apt-get install libatlas3-base libopenblas-base
sudo update-alternatives --config libblas.so.3
sudo update-alternatives --config liblapack.so.3
```
and select the preferred implementation when prompted.
However, these are only generic pre-tuned builds. To get optimal performance for a specific
machine, it is best to compile locally by grabbing the latest ATLAS or the latest OpenBLAS and following the compilation
instructions (don't forget to turn off CPU throttling and power management during the build!).
Install the shared libraries into a folder that is seen by the runtime linker (e.g. add your install folder to `/etc/ld.so.conf` then run `ldconfig`), ensuring that `libblas.so.3` and `liblapack.so.3` exist and point to your optimal builds.
If you have an Intel MKL licence, you could also create symbolic links from `libblas.so.3` and `liblapack.so.3` to `libmkl_rt.so`.
NOTE: some distributions, such as Ubuntu `precise`, do not create the necessary symbolic links `/usr/lib/libblas.so.3` and `/usr/lib/liblapack.so.3` for the system-installed implementations, so they must be created manually.
On Windows, the `native_system` builds expect to find `libblas3.dll` and `liblapack3.dll` on the `%PATH%` (or in the current working directory).
Besides vendor-supplied implementations, OpenBLAS provide generically tuned binaries, and it is possible to build ATLAS. Use Dependency Walker to help resolve any problems, such as `UnsatisfiedLinkError (Can't find dependent libraries)`.
NOTE: OpenBLAS doesn't provide separate libraries, so you will have to customise the build or copy the binary into both `libblas3.dll` and `liblapack3.dll`, whilst also obtaining copies of `libgfortran-1-3.dll`, `libquadmath-0.dll` and `libgcc_s_seh-1.dll` from MinGW.
A specific implementation may be forced like so:
```
-Dcom.github.fommil.netlib.BLAS=com.github.fommil.netlib.NativeRefBLAS
-Dcom.github.fommil.netlib.LAPACK=com.github.fommil.netlib.NativeRefLAPACK
-Dcom.github.fommil.netlib.ARPACK=com.github.fommil.netlib.NativeRefARPACK
```
And a specific (non-standard) native binary may be forced like so:
```
-Dcom.github.fommil.netlib.NativeSystemBLAS.natives=netlib-native_system-myos-myarch.so
```
To turn off natives altogether, add these to the JVM flags:
```
-Dcom.github.fommil.netlib.BLAS=com.github.fommil.netlib.F2jBLAS
-Dcom.github.fommil.netlib.LAPACK=com.github.fommil.netlib.F2jLAPACK
-Dcom.github.fommil.netlib.ARPACK=com.github.fommil.netlib.F2jARPACK
```
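Since these properties are read when the implementation classes are first initialised, an alternative is to set them programmatically before the first call into the library. A minimal sketch, assuming no BLAS class has been touched yet:

```java
import com.github.fommil.netlib.BLAS;

public class ForceF2j {
    public static void main(String[] args) {
        // must happen before BLAS is first initialised, otherwise
        // the default implementation has already been selected
        System.setProperty("com.github.fommil.netlib.BLAS",
                           "com.github.fommil.netlib.F2jBLAS");
        System.out.println(BLAS.getInstance().getClass().getName());
    }
}
```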
Java has a lingering reputation for slowness among older-generation developers because Java applications were slow in the 1990s. Nowadays, the JIT ensures that Java applications keep pace with, or exceed the performance of, C / C++ / Fortran applications.
The following performance charts give an idea of the performance ratios of Java vs the native implementations. Also shown are pure C performance runs, demonstrating that dropping to C at the application layer gives no performance benefit. If anything, the Java version is faster for smaller matrices, and is consistently faster than the "optimised" implementations for some types of operations (e.g. `ddot`).
One can expect machine-optimised natives to out-perform the reference implementation – especially for larger arrays – as demonstrated below by Apple's veclib framework, Intel's MKL and (to a lesser extent) ATLAS.
Of particular note is cuBLAS (NVIDIA's graphics card library), which performs as well as ATLAS on `DGEMM` for arrays of ~20,000+ elements (but as badly as the Raspberry Pi for smaller arrays!) and not so well for `DDOT`.
Included in the CUDA performance results is the time taken to set up the CUDA interface and copy the matrix elements to the GPU device. The `nooh` run is a version that does not include the overhead of transferring arrays to/from the GPU device: to take full advantage of the GPU, developers must re-write their applications with GPU devices in mind. For example, a re-written implementation of LAPACK that took advantage of the GPU BLAS would give a much better performance improvement than dipping in-and-out of GPU address space.
The DGEMM benchmark measures matrix multiplication performance.
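For readers unfamiliar with the BLAS API, this is roughly what a `dgemm` call looks like through `netlib-java` (a toy sketch with arbitrary 2x2 matrices in column-major order, not the benchmark harness itself):

```java
import com.github.fommil.netlib.BLAS;
import java.util.Arrays;

public class DgemmExample {
    public static void main(String[] args) {
        int n = 2; // square matrices, column-major storage
        double[] a = {1, 2, 3, 4}; // A = [1 3; 2 4]
        double[] b = {5, 6, 7, 8}; // B = [5 7; 6 8]
        double[] c = new double[n * n];
        // C := 1.0 * A * B + 0.0 * C (no transposition of A or B)
        BLAS.getInstance().dgemm("N", "N", n, n, n,
                                 1.0, a, n, b, n, 0.0, c, n);
        System.out.println(Arrays.toString(c)); // [23.0, 34.0, 31.0, 46.0]
    }
}
```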
The DGETRI benchmark measures matrix LU factorisation and matrix inversion performance.
The DDOT benchmark measures vector dot product performance.
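The equivalent `ddot` call (again illustrative, not the benchmark code):

```java
import com.github.fommil.netlib.BLAS;

public class DdotExample {
    public static void main(String[] args) {
        double[] x = {1, 2, 3};
        double[] y = {4, 5, 6};
        // dot product of x and y, stride 1 in both vectors
        double dot = BLAS.getInstance().ddot(x.length, x, 1, y, 1);
        System.out.println(dot); // 32.0
    }
}
```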
The DSAUPD benchmark measures the calculation of 10% of the eigenvalues for sparse matrices (`N` rows by `N` columns). Not included in this benchmark is the time taken to perform the matrix multiplication at each iteration (typically `N` iterations).
NOTE: benchmarks with larger arrays were run first, so the JIT has already kicked in for the F2J implementations: on a cold startup the F2J implementations are about 10 times slower, reaching peak performance after about 20 calls of a function (the Raspberry Pi doesn't seem to have a JIT).
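A warm-up loop along these lines (my own illustration of the methodology, not the benchmark code) makes the JIT effect visible:

```java
import com.github.fommil.netlib.BLAS;
import java.util.Arrays;

public class Warmup {
    public static void main(String[] args) {
        double[] x = new double[1000];
        double[] y = new double[1000];
        Arrays.fill(x, 1.0);
        Arrays.fill(y, 2.0);
        // ~20 calls is roughly where F2J reaches peak performance;
        // accumulate into a sink so the JIT cannot eliminate the calls
        double sink = 0;
        for (int i = 0; i < 20; i++) {
            sink += BLAS.getInstance().ddot(x.length, x, 1, y, 1);
        }
        long start = System.nanoTime();
        sink += BLAS.getInstance().ddot(x.length, x, 1, y, 1);
        System.out.println((System.nanoTime() - start)
                + " ns after warm-up (sink=" + sink + ")");
    }
}
```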
Don't download the zip file unless you know what you're doing: use Maven or Ivy to manage your dependencies, as described below.
Releases are distributed on Maven central:
```xml
<dependency>
  <groupId>com.github.fommil.netlib</groupId>
  <artifactId>all</artifactId>
  <version>1.1.2</version>
  <type>pom</type>
</dependency>
```
SBT developers can use
"com.github.fommil.netlib" % "all" % "1.1.2" pomOnly()
Those wanting to preserve the pre-1.0 API can use the legacy package (but note that it will be removed in the next release):
```xml
<dependency>
  <groupId>com.googlecode.netlib-java</groupId>
  <artifactId>netlib</artifactId>
  <version>1.1</version>
</dependency>
```
and developers who feel the native libs are too much bandwidth can depend on a subset of implementations: simply look in the `all` module's `pom.xml`.
Snapshots (preview releases, when new features are in active development) are distributed on Sonatype's Snapshot Repository, e.g.:
```xml
<dependency>
  <groupId>com.github.fommil.netlib</groupId>
  <artifactId>all</artifactId>
  <version>1.2-SNAPSHOT</version>
</dependency>
```
If the above fails, ensure you have the following in your `pom.xml`:
```xml
<repositories>
  <repository>
    <id>sonatype-snapshots</id>
    <url>https://oss.sonatype.org/content/repositories/snapshots/</url>
    <releases>
      <enabled>false</enabled>
    </releases>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
  </repository>
</repositories>
```
Please consider supporting the maintenance of this open source project with a donation:
You can contribute by clicking the star button!
Contributors are encouraged to fork this repository and issue pull requests. Contributors implicitly agree to assign an unrestricted licence to Sam Halliday, but retain the copyright of their code (this means we both have the freedom to update the licence for those contributions).