Deep learning is a state-of-the-art form of machine learning that can learn to recognize patterns in data without supervision.
Unsupervised pattern recognition saves time during data analysis, trend discovery and labeling of certain types of data, such as images, text, sound and time series.
See Deeplearning4j.org for applications, tutorials, definitions and other resources on the discipline.
Features include:

- Distributed deep learning via Akka clustering and distributed coordination of jobs via Hazelcast, with configurations stored in Apache ZooKeeper.
- Various data-preprocessing tools, such as an image loader that allows for binarization, scaling of pixels, and normalization to zero mean and unit variance (see the normalization sketch after this list).
- Deep-belief networks for both continuous and binary data.
- Native matrices via Jblas, a Java wrapper for the Fortran-based BLAS routines used in matrix computations.
- Automatic cluster provisioning for Amazon Web Services' Elastic Compute Cloud (EC2).
- Baseline ability to read from a variety of input providers, including S3 and local file systems.
- Text processing via Word2Vec, as well as a term frequency–inverse document frequency (TF-IDF) vectorizer (see the TF-IDF sketch after this list).
- Special tokenizers/stemmers and a SentenceIterator interface that make handling text input source-agnostic.
- Moving-window operations via a Window encoding, optimized with Viterbi.
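As a rough illustration of the zero-mean, unit-variance normalization mentioned above, the sketch below rescales an array of pixel values. The class and method names are illustrative only, not Deeplearning4j's actual API.

```java
// A minimal sketch of zero-mean, unit-variance normalization; not the dl4j image loader's API.
public final class PixelNormalization {

    /** Rescales an array of pixel values to zero mean and unit variance. */
    public static double[] zeroMeanUnitVariance(double[] pixels) {
        double mean = 0.0;
        for (double p : pixels) mean += p;
        mean /= pixels.length;

        double variance = 0.0;
        for (double p : pixels) variance += (p - mean) * (p - mean);
        double std = Math.sqrt(variance / pixels.length);
        if (std == 0.0) std = 1.0; // guard against constant (all-equal) images

        double[] normalized = new double[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            normalized[i] = (pixels[i] - mean) / std;
        }
        return normalized;
    }
}
```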
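Similarly, the TF-IDF weighting behind the vectorizer can be written out directly. The sketch below is a minimal, smoothed variant for a single term and document; again, the names are illustrative rather than the library's API.

```java
import java.util.List;

// Illustrative TF-IDF weighting for a single term; not the dl4j vectorizer's API.
public final class TfIdfSketch {

    /** tf-idf = (term count in document / document length) * log(corpus size / (1 + docs containing term)). */
    public static double tfIdf(String term, List<String> document, List<List<String>> corpus) {
        long termCount = document.stream().filter(term::equals).count();
        double tf = (double) termCount / document.size();

        long docsWithTerm = corpus.stream().filter(doc -> doc.contains(term)).count();
        double idf = Math.log((double) corpus.size() / (1 + docsWithTerm)); // +1 smooths away divide-by-zero

        return tf * idf;
    }
}
```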
Other features include:

- L2 regularization
- Dropout
- Adagrad (see the update-rule sketch after this list)
- Momentum
- Optimization algorithms for training (conjugate gradient, stochastic gradient descent)
- Different kinds of activation functions: Tanh, Sigmoid, HardTanh, Softmax (see the activation sketch after this list)
- Optional normalization by input rows
- Sparsity (forcing activations of sparse/rare inputs)
- Weight transforms (useful for deep autoencoders)
- Different kinds of loss functions: squared loss, reconstruction cross entropy, negative log likelihood
- Probability-distribution manipulation for initial weight generation
- Recursive neural nets, convolutional neural nets and recursive neural tensor networks
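To make some of the training knobs above concrete, the sketch below applies L2 regularization, Adagrad and momentum to a single-weight SGD update. It is a conceptual illustration under arbitrary example hyperparameters, not Deeplearning4j's internal updater code.

```java
// Conceptual single-weight SGD update combining L2 regularization, Adagrad and momentum.
// Hyperparameter values are arbitrary examples; this is not dl4j's internal updater code.
public final class UpdateSketch {

    double learningRate = 0.01;   // base step size
    double l2 = 1e-4;             // L2 regularization strength (weight decay)
    double momentumFactor = 0.9;  // how much of the previous step to keep

    double velocity = 0.0;        // momentum state
    double gradSquaredSum = 0.0;  // Adagrad state: accumulated squared gradients

    /** Returns the updated weight, given the raw gradient of the loss for that weight. */
    public double update(double weight, double gradient) {
        // L2 regularization adds lambda * w to the gradient.
        double g = gradient + l2 * weight;

        // Adagrad scales the learning rate per parameter by the history of squared gradients.
        gradSquaredSum += g * g;
        double adjustedRate = learningRate / (Math.sqrt(gradSquaredSum) + 1e-8);

        // Momentum blends the previous step into the current one.
        velocity = momentumFactor * velocity - adjustedRate * g;

        return weight + velocity;
    }
}
```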
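The activation functions named in the list can likewise be written out directly; the helper class below is illustrative only, not the library's implementation.

```java
// The activation functions listed above, written out directly; illustrative only.
public final class Activations {

    /** Logistic sigmoid, squashing to (0, 1). */
    public static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    /** Hyperbolic tangent, squashing to (-1, 1). */
    public static double tanh(double x) {
        return Math.tanh(x);
    }

    /** HardTanh clamps to [-1, 1] instead of saturating smoothly. */
    public static double hardTanh(double x) {
        return Math.max(-1.0, Math.min(1.0, x));
    }

    /** Softmax over a vector; subtracting the max is a standard numerical-stability trick. */
    public static double[] softmax(double[] x) {
        double max = Double.NEGATIVE_INFINITY;
        for (double v : x) max = Math.max(max, v);

        double sum = 0.0;
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            out[i] = Math.exp(x[i] - max);
            sum += out[i];
        }
        for (int i = 0; i < x.length; i++) out[i] /= sum;
        return out;
    }
}
```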
Matrix-provider agnostic:
A matrix-abstraction layer sits atop various matrix providers, allowing for distributed GPU deep learning on AMD or NVIDIA hardware, native execution via BLAS, and bindings to Colt for a plain-old-Java option. The same abstraction underlies higher-level tasks such as face detection, named-entity recognition and sentiment analysis. A toy sketch of what such a layer might look like follows.
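The interface and method names below are hypothetical, shown only to illustrate the idea of swapping matrix providers behind one abstraction; they are not Deeplearning4j's actual types.

```java
// Hypothetical sketch of a provider-agnostic matrix layer; the interface and
// method names are illustrative, not dl4j's actual abstraction.
public interface MatrixBackend<M> {

    /** Allocate a rows x cols matrix on whatever device or library the backend wraps. */
    M zeros(int rows, int cols);

    /** Matrix multiply, dispatched to Jblas/BLAS, a GPU kernel, or Colt under the hood. */
    M mmul(M a, M b);

    /** Element-wise addition. */
    M add(M a, M b);
}
```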
It is highly recommended that you use the development snapshots for now. Add the Sonatype snapshots repository and the following dependencies to your Maven pom.xml:
```xml
<repositories>
  <repository>
    <id>snapshots-repo</id>
    <url>https://oss.sonatype.org/content/repositories/snapshots</url>
    <releases><enabled>false</enabled></releases>
    <snapshots><enabled>true</enabled></snapshots>
  </repository>
</repositories>

<dependency>
  <groupId>org.deeplearning4j</groupId>
  <artifactId>deeplearning4j-core</artifactId>
  <version>0.0.3.2-SNAPSHOT</version>
</dependency>
<dependency>
  <groupId>org.deeplearning4j</groupId>
  <artifactId>deeplearning4j-scaleout-akka</artifactId>
  <version>0.0.3.2-SNAPSHOT</version>
</dependency>
<dependency>
  <groupId>org.deeplearning4j</groupId>
  <artifactId>deeplearning4j-scaleout-akka-word2vec</artifactId>
  <version>0.0.3.2-SNAPSHOT</version>
</dependency>
```