Your next API to work with Apache Spark.
This project adds a missing layer of compatibility between Kotlin and Apache Spark. It allows Kotlin developers to use familiar language features such as data classes and lambda expressions as simple expressions in curly braces or method references.
We have opened a Spark Project Improvement Proposal: Kotlin support for Apache Spark to work with the community towards getting Kotlin support as a first-class citizen in Apache Spark. We encourage you to voice your opinions and participate in the discussion.
- Supported versions of Apache Spark
- Releases
- How to configure Kotlin for Apache Spark in your project
- Kotlin for Apache Spark features
- Examples
- Reporting issues/Support
- Code of Conduct
- License
| Apache Spark | Scala | Kotlin for Apache Spark |
|--------------|-------|--------------------------|
| 3.0.0 | 2.12 | kotlin-spark-api-3.0.0_2.12:1.0.0-preview1 |
The list of Kotlin for Apache Spark releases is available here.
The Kotlin for Spark artifacts adhere to the following convention:
[Apache Spark version]_[Scala core version]:[Kotlin for Apache Spark API version]
You can add Kotlin for Apache Spark as a dependency to your project: Maven, Gradle, SBT, and Leiningen are supported.

Here's an example pom.xml:
```xml
<dependency>
  <groupId>org.jetbrains.kotlinx.spark</groupId>
  <artifactId>kotlin-spark-api-3.0.0_2.12</artifactId>
  <version>${kotlin-spark-api.version}</version>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql_2.12</artifactId>
  <version>${spark.version}</version>
</dependency>
```
Note that `core` is compiled against Scala version 2.12.

You can find a complete example with `pom.xml` and `build.gradle` in the Quick Start Guide.
Once you have configured the dependency, you only need to add the following import to your Kotlin file:
```kotlin
import org.jetbrains.kotlinx.spark.api.*
```
Then you can create a SparkSession and turn values into a Dataset:

```kotlin
val spark = SparkSession
    .builder()
    .master("local[2]")
    .appName("Simple Application")
    .orCreate

spark.toDS("a" to 1, "b" to 2)
```

The example above produces `Dataset<Pair<String, Int>>`.
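Since the API supports data classes, as noted in the introduction, you can create a Dataset of your own types the same way. A minimal sketch (the `Person` class here is hypothetical, used only for illustration):

```kotlin
// Person is a hypothetical data class, defined only for this example
data class Person(val name: String, val age: Int)

spark.toDS(Person("Alice", 30), Person("Bob", 25))
    .show()
```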
There are several aliases in the API, like `leftJoin`, `rightJoin`, etc. These are null-safe by design. For example, `leftJoin` is aware of nullability and returns `Dataset<Pair<LEFT, RIGHT?>>`.
Note that we are forcing `RIGHT` to be nullable for you as a developer to be able to handle this situation. `NullPointerException`s are hard to debug in Spark, and we are doing our best to make them as rare as possible.
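As a minimal sketch of how this plays out, assuming `leftJoin` takes the right-hand `Dataset` and a join `Column`, and that the `Pair` encoder names its fields `first` and `second` (adjust the column names to your schema):

```kotlin
val left = spark.toDS(1 to "a", 2 to "b")
val right = spark.toDS(1 to "x", 3 to "z")

// leftJoin returns Dataset<Pair<LEFT, RIGHT?>>: the right side is null
// for unmatched rows, so the compiler forces us to handle it.
left.leftJoin(right, left.col("first").equalTo(right.col("first")))
    .map { it.first.second to (it.second?.second ?: "no match") }
    .show()
```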
We provide you with the useful function `withSpark`, which accepts everything that may be needed to run Spark: properties, name, master location, and so on. It also accepts a block of code to execute inside the Spark context. After the work block ends, `spark.stop()` is called automatically.
```kotlin
withSpark {
    dsOf(1, 2)
        .map { it to it }
        .show()
}
```
`dsOf` is just one more way to create a `Dataset` (here, `Dataset<Int>`) from varargs.
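`withSpark` can also be called with explicit settings. A minimal sketch follows; the parameter names `props`, `master`, and `appName` are assumptions about the preview API and may differ:

```kotlin
withSpark(
    props = mapOf("spark.sql.shuffle.partitions" to 4), // assumed property-map parameter
    master = "local[2]",
    appName = "Custom Session"
) {
    dsOf("a", "b", "c").show()
}
```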
It can easily happen that we need to fork our computation into several paths. To compute things only once, we should call the `cache` method. However, it becomes difficult to control when we're working with a cached `Dataset` and when we're not. It is also easy to forget to unpersist cached data, which can break things unexpectedly or take up more memory than intended. To solve these problems we've added the `withCached` function:
```kotlin
withSpark {
    dsOf(1, 2, 3, 4, 5)
        .map { it to (it + 2) }
        .withCached {
            showDS()
            filter { it.first % 2 == 0 }.showDS()
        }
        .map { c(it.first, it.second, (it.first + it.second) * 2) }
        .show()
}
```
Here we're showing the cached `Dataset` for debugging purposes, and then filtering it. The `filter` method returns the filtered `Dataset`, after which the cached `Dataset` is unpersisted, so we have more memory to call the `map` method and collect the resulting `Dataset`.
For more idiomatic Kotlin code we've added `toList` and `toArray` methods to this API. You can still use the `collect` method as in the Scala API; however, the result should be cast to `Array`. This is because `collect` returns a Scala array, which is not the same as a Java/Kotlin one.
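A minimal sketch of the difference, assuming `toList` and `toArray` wrap `collect` as described above:

```kotlin
withSpark {
    val ds = dsOf(1, 2, 3)

    val asList: List<Int> = ds.toList()    // idiomatic Kotlin List
    val asArray: Array<Int> = ds.toArray() // a Java/Kotlin array, not a Scala one

    println(asList) // [1, 2, 3]
}
```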
For more, check out the examples module. To get up and running quickly, see this tutorial.
Please use GitHub issues for filing feature requests and bug reports. You are also welcome to join the kotlin-spark channel in the Kotlin Slack.
This project and the corresponding community are governed by the JetBrains Open Source and Community Code of Conduct. Please make sure you read it.
Kotlin for Apache Spark is licensed under the Apache 2.0 License.