A Julia interface to Apache Spark™
Spark.jl is a package that allows the execution of Julia programs on the Apache Spark platform. It supports running pure Julia scripts on Julia data structures, while utilising the data and code distribution capabilities of Apache Spark. It supports multiple cluster types (in client mode) and can be considered an analogue of PySpark or RSpark within the Julia ecosystem.
Spark.jl requires at least Java 7 and Maven to be installed and available in PATH. Install the package with:
Pkg.add("Spark.jl")
This will download and build all Julia and Java dependencies. To use Spark.jl, type:
using Spark
Spark.init()
sc = SparkContext(master="local")
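A minimal end-to-end sketch of what a session might look like, assuming the RDD-style API with `parallelize`, `map` and `collect` (exact function names may differ between releases; check the documentation for your version):

```julia
using Spark
Spark.init()
sc = SparkContext(master="local")

# Distribute a plain Julia collection across the workers
# (assumes a `parallelize(sc, coll)` constructor is available)
rdd = parallelize(sc, 1:100)

# Apply a pure Julia function to each element on the executors
# (assumes an RDD `map(rdd, f)` method)
squared = map(rdd, x -> x * x)

# Bring the results back to the driver as a Julia array
collect(squared)
```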
- LATEST: documentation for the in-development version of the package.
The package is tested against Julia 1.0 and 1.4, and against Java 8 and 11. It has also been tested on Amazon EMR and Azure HDInsight. While large cluster modes have primarily been tested on Linux, OS X and Windows work for local development. See the roadmap for the current status.
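For client-mode use against a running cluster, the only change to the sketch above is the master URL passed to `SparkContext`; the host, port and `appname` keyword below are illustrative placeholders, not values from this README:

```julia
using Spark
Spark.init()

# Point the context at a standalone cluster master instead of running locally;
# "spark://master-host:7077" is a placeholder URL and `appname` is an assumed keyword
sc = SparkContext(master="spark://master-host:7077", appname="JuliaOnSpark")
```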
Contributions are very welcome, as are feature requests and suggestions. Please open an issue if you encounter any problems.
Apache®, Apache Spark and Spark are registered trademarks, or trademarks of the Apache Software Foundation in the United States and/or other countries.