tiny-dnn is a C++11 implementation of deep learning. It is suitable for deep learning on limited computational resources, embedded systems, and IoT devices.
- Features
- Comparison with other libraries
- Supported networks
- Dependencies
- Build
- Examples
- Contributing
- References
- License
- Gitter rooms
Check out the documentation for more info.
- 2016/11/30 v1.0.0a3 is released!
- 2016/9/14 tiny-dnn v1.0.0alpha is released!
- 2016/8/7 tiny-dnn has been moved to an organization account and renamed to tiny-dnn :)
- 2016/7/27 tiny-dnn v0.1.1 released!
- reasonably fast, without GPU
  - with TBB threading and SSE/AVX vectorization
  - 98.8% accuracy on MNIST in 13 minutes of training (@Core i7-3520M)
- portable & header-only
  - runs anywhere as long as you have a compiler which supports C++11
  - just include tiny_dnn.h and write your model in C++. There is nothing to install.
- easy to integrate with real applications
  - no output to stdout/stderr
  - a constant throughput (simple parallelization model, no garbage collection)
  - works without throwing exceptions
  - can import Caffe models (see the sketch after this list)
- simply implemented
  - a good library for learning neural networks
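Caffe import goes through an optional converter that depends on protobuf. The sketch below is a minimal illustration, not the definitive API: the header path and the helper names (`create_net_from_caffe_prototxt`, `reload_weight_from_caffe_protobinary`) are assumptions to verify against your tiny-dnn version, and the file names are placeholders.

```cpp
// minimal sketch of Caffe import (assumes the protobuf-based converter
// under tiny_dnn/io/caffe; file names are placeholders)
#include "tiny_dnn/tiny_dnn.h"
#include "tiny_dnn/io/caffe/layer_factory.h"

void import_caffe_model() {
    using namespace tiny_dnn;
    // build the network topology from a Caffe prototxt
    auto net = create_net_from_caffe_prototxt("deploy.prototxt");
    // then load the trained weights from the matching caffemodel
    reload_weight_from_caffe_protobinary("model.caffemodel", net.get());
}
```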
For a comparison with other libraries, please see the wiki page.
layer types:

- core
  - fully-connected
  - dropout
  - linear operation
  - power
- convolution
  - convolutional
  - average pooling
  - max pooling
  - deconvolutional
  - average unpooling
  - max unpooling
- normalization
  - contrast normalization (only forward pass)
  - batch normalization
- split/merge
  - concat
  - slice
  - elementwise-add

activation functions:

- tanh
- sigmoid
- softmax
- rectified linear (relu)
- leaky relu
- identity
- exponential linear units (elu)

loss functions:

- cross-entropy
- mean squared error
- mean absolute error
- mean absolute error with epsilon range

optimization algorithms (see the sketch after this list):

- stochastic gradient descent (with/without L2 normalization and momentum)
- adagrad
- rmsprop
- adam
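These optimizers are plain objects that you construct and pass to `network::train`. A short sketch follows, under the assumption that the learning rate is exposed as a public `alpha` member on the optimizer classes (verify against your tiny-dnn version):

```cpp
#include "tiny_dnn/tiny_dnn.h"

// sketch: choosing an optimizer and tuning its learning rate
// (the public alpha member is an assumption to verify against your version)
void pick_optimizer() {
    tiny_dnn::adam optimizer;
    optimizer.alpha = tiny_dnn::float_t(0.001);  // smaller step than the default
    // then pass it to training, e.g.
    // net.train<tiny_dnn::mse>(optimizer, images, labels, 30, 50);
}
```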
Dependencies: none. All you need is a C++11 compiler.
tiny-dnn is header-only, so there is nothing to build. If you want to execute the sample programs or unit tests, you need to install cmake and type the following command:
cmake .
Then open the generated .sln file in Visual Studio and build (on Windows/MSVC), or type the make command (on Linux/macOS/Windows-MinGW).
Some cmake options are available:
options | description | default | additional requirements to use |
---|---|---|---|
USE_TBB | Use Intel TBB for parallelization | OFF ¹ | Intel TBB |
USE_OMP | Use OpenMP for parallelization | OFF ¹ | OpenMP Compiler |
USE_SSE | Use Intel SSE instruction set | ON | Intel CPU which supports SSE |
USE_AVX | Use Intel AVX instruction set | ON | Intel CPU which supports AVX |
USE_NNPACK | Use NNPACK for convolution operation | OFF | Acceleration package for neural networks on multi-core CPUs |
USE_OPENCL | Enable/Disable OpenCL support (experimental) | OFF | The open standard for parallel programming of heterogeneous systems |
USE_LIBDNN | Use Greentea LibDNN for convolution operations with GPU via OpenCL (experimental) | OFF | A universal convolution implementation supporting CUDA and OpenCL |
USE_SERIALIZER | Enable model serialization | ON ² | - |
BUILD_TESTS | Build unit tests | OFF ³ | - |
BUILD_EXAMPLES | Build example projects | OFF | - |
BUILD_DOCS | Build documentation | OFF | Doxygen |
¹ tiny-dnn uses the C++11 standard library for parallelization by default.
² If you don't use serialization, you can switch it off to speed up compilation.
³ tiny-dnn uses Google Test as the default framework to run unit tests. No pre-installation is required; it is downloaded automatically during CMake configuration.
For example, type the following command if you want to use Intel TBB and build the tests:
cmake -DUSE_TBB=ON -DBUILD_TESTS=ON .
You can edit include/config.h to customize default behavior.
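Numeric precision is one such default. A minimal sketch, assuming the `CNN_USE_DOUBLE` switch from include/config.h that makes `float_t` an alias for `double` (check your version for the exact macro):

```cpp
// sketch: selecting double precision before including tiny-dnn
// (CNN_USE_DOUBLE is assumed from include/config.h; verify in your version)
#define CNN_USE_DOUBLE
#include "tiny_dnn/tiny_dnn.h"

static_assert(sizeof(tiny_dnn::float_t) == sizeof(double),
              "float_t should be double precision here");
```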
construct convolutional neural networks
#include "tiny_dnn/tiny_dnn.h"
using namespace tiny_dnn;
using namespace tiny_dnn::activation;
using namespace tiny_dnn::layers;
void construct_cnn() {
using namespace tiny_dnn;
network<sequential> net;
// add layers
net << conv<tan_h>(32, 32, 5, 1, 6) // in:32x32x1, 5x5conv, 6fmaps
<< ave_pool<tan_h>(28, 28, 6, 2) // in:28x28x6, 2x2pooling
<< fc<tan_h>(14 * 14 * 6, 120) // in:14x14x6, out:120
<< fc<identity>(120, 10); // in:120, out:10
assert(net.in_data_size() == 32 * 32);
assert(net.out_data_size() == 10);
// load MNIST dataset
std::vector<label_t> train_labels;
std::vector<vec_t> train_images;
parse_mnist_labels("train-labels.idx1-ubyte", &train_labels);
parse_mnist_images("train-images.idx3-ubyte", &train_images, -1.0, 1.0, 2, 2);
// declare optimization algorithm
adagrad optimizer;
// train (50-epoch, 30-minibatch)
net.train<mse>(optimizer, train_images, train_labels, 30, 50);
// save
net.save("net");
// load
// network<sequential> net2;
// net2.load("net");
}
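After training you would typically run inference. The following is a minimal sketch using `network::predict`; the all-zero image is just a placeholder for a real input preprocessed to the same [-1, 1] range as above:

```cpp
#include <algorithm>
#include <iostream>

// sketch: classifying a single image with the trained network
void classify_one(network<sequential>& net) {
    vec_t image(32 * 32, 0.0);          // placeholder: use a real, preprocessed input
    vec_t scores = net.predict(image);  // 10 raw class scores
    auto best = std::max_element(scores.begin(), scores.end());
    std::cout << "predicted digit: " << (best - scores.begin()) << std::endl;
}
```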
construct multi-layer perceptron (mlp)
#include "tiny_dnn/tiny_dnn.h"
using namespace tiny_dnn;
using namespace tiny_dnn::activation;
using namespace tiny_dnn::layers;
void construct_mlp() {
network<sequential> net;
net << fc<sigmoid>(32 * 32, 300)
<< fc<identity>(300, 10);
assert(net.in_data_size() == 32 * 32);
assert(net.out_data_size() == 10);
}
another way to construct mlp
#include "tiny_dnn/tiny_dnn.h"
using namespace tiny_dnn;
using namespace tiny_dnn::activation;
void construct_mlp() {
auto mynet = make_mlp<tan_h>({ 32 * 32, 300, 10 });
assert(mynet.in_data_size() == 32 * 32);
assert(mynet.out_data_size() == 10);
}
For more samples, read examples/main.cpp or the MNIST example page.
Since the deep learning community is growing rapidly, we'd love to get contributions from you to accelerate tiny-dnn's development! For a quick guide to contributing, take a look at the Contribution Documents.
[1] Y. Bengio, Practical Recommendations for Gradient-Based Training of Deep Architectures. arXiv:1206.5533v2, 2012.
[2] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE, 86(11), 2278-2324, 1998.
The BSD 3-Clause License
We have Gitter rooms for discussing new features and for Q&A. Feel free to join us!
room | URL |
---|---|
developers | https://gitter.im/tiny-dnn/developers |
users | https://gitter.im/tiny-dnn/users |