Machine learning over Twitter's stream, using Apache Spark, a web server and the Lightning graph server.
This project uses sbt, Scala and Java. To build:
$ sbt assembly
First of all, the application depends on the Lightning graph server. You can use an existing server or install one on your machine.
Second, the Spark job (twtml-spark) depends on the web server (twtml-web). Start it with:
$ sbt web/run
or
$ scala web/target/scala-2.11/twtml-web*.jar
It's possible to execute the Spark job (twtml-spark) from the command line, without changing configuration files. There are three ways to run it:
- sbt
$ sbt "spark/run --master <master>"
- standalone jar
$ scala -extdirs "$SPARK_HOME/lib" spark/target/scala-2.10/twtml-spark*.jar --master <master>
- spark-submit
$ spark-submit --master <master> spark/target/scala-2.10/twtml-spark*.jar
Without the master parameter, the default is local[2].
Only the Spark job needs configuration, and all of it is available from the command line. You can see the command options by running:
$ <command> --help
or
$ <command> -h
For example, setting the Twitter OAuth credentials:
$ <command> --consumerKey xxxxxxxxxxxxxxxxxxxxxxxxx \
--consumerSecret xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx \
--accessToken xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx \
--accessTokenSecret xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Or pointing the job at the Lightning and web servers:
$ <command> --lightning http://localhost:3000 \
--twtweb http://localhost:8888
If you prefer, you can use a configuration file to save the same options available on the command line. It's necessary to create an application.conf file. You can copy it from reference.conf:
$ cp spark/src/main/resources/reference.conf \
spark/src/main/resources/application.conf
Now, just edit application.conf:
spark/src/main/resources/application.conf
...
lightning="http://localhost:3000"
twtweb="http://localhost:8888"
consumerKey="xxxxxxxxxxxxxxxxxxxxxxxxx"
consumerSecret="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
accessToken="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
accessTokenSecret="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
...
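The interplay between the file and the flags amounts to a fallback lookup: the command line wins, application.conf fills the gaps. The following Scala snippet is an illustrative sketch only; `parseArgs`, `resolve` and the hard-coded defaults are hypothetical, not twtml's actual code:

```scala
// Illustrative sketch: command-line flags override values from application.conf.
// parseArgs and resolve are hypothetical helpers, not part of twtml.
object ConfigSketch {
  // Turn "--key value --key value ..." into Map("key" -> "value", ...)
  def parseArgs(args: Seq[String]): Map[String, String] =
    args.sliding(2, 2).collect {
      case Seq(k, v) if k.startsWith("--") => k.drop(2) -> v
    }.toMap

  def resolve(cli: Map[String, String]): Map[String, String] = {
    val fileDefaults = Map(           // stands in for application.conf
      "lightning" -> "http://localhost:3000",
      "twtweb"    -> "http://localhost:8888"
    )
    fileDefaults ++ cli               // the command line wins over the file
  }
}
```

Running with only `--lightning` on the command line would then override that one key and leave twtweb at its file default.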
Lightning is a data-visualization server providing API-based access to reproducible, web-based, interactive visualizations.
Simple Build Tool - 0.13.9
sbt is an open-source build tool for Scala and Java projects, similar to Maven or Ant for Java.
Apache Spark - 1.4.1
Apache Spark is an open-source cluster computing framework originally developed in the AMPLab at UC Berkeley. In contrast to Hadoop's two-stage disk-based MapReduce paradigm, Spark's in-memory primitives provide performance up to 100 times faster for certain applications. By allowing user programs to load data into a cluster's memory and query it repeatedly, Spark is well suited to machine learning algorithms.
Apache Hadoop - 2.7.1
Apache Hadoop is an open-source software framework written in Java for distributed storage and distributed processing of very large data sets on computer clusters built from commodity hardware. All the modules in Hadoop are designed with a fundamental assumption that hardware failures are commonplace and thus should be automatically handled in software by the framework.
Scala - 2.11.7
Scala is an object-functional programming language for general software applications. Scala has full support for functional programming and a very strong static type system. This allows programs written in Scala to be very concise and thus smaller in size than other general-purpose programming languages. Many of Scala's design decisions were inspired by criticism of the shortcomings of Java.
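As a quick illustration of that conciseness, a computation that would need an explicit loop and mutable state in older Java fits in a single typed expression:

```scala
// Squares of the even numbers from 1 to 10, in one expression.
// Types are inferred; no mutable state or explicit loop is needed.
val evenSquares = (1 to 10).filter(_ % 2 == 0).map(n => n * n)
// evenSquares: Vector(4, 16, 36, 64, 100)
```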
Java Open JDK - Standard Edition - 1.7+
Java is a general-purpose programming language designed so that compiled programs can run on any system with a Java Virtual Machine.