MiniDNN

MiniDNN is a C++ library that implements a number of popular deep neural network (DNN) models. It has a mini codebase, but it is fully functional for constructing different types of feed-forward neural networks. MiniDNN is built on top of Eigen.

MiniDNN is a header-only library implemented purely in C++98, and its only dependency, Eigen, is also header-only. These features make it easy to embed MiniDNN into larger projects and give it a broad range of compiler support.

This project was largely inspired by the tiny-dnn library, a header-only C++14 implementation of deep learning models. What makes MiniDNN different is that it is built on the high-performance Eigen library for numerical computing and has broader compiler support.

MiniDNN is still quite experimental. I originally wrote it to study deep learning and to practice model implementation, but I have also found it useful in my own statistical and machine learning research projects.

Features

  • Able to build feed-forward neural networks with a few lines of code (see the sketch after this list)
  • Header-only and highly portable
  • Fast on CPU
  • Modularized and extensible
  • Provides detailed documentation that serves as a learning resource
  • Helps in understanding how DNNs work
  • A wonderful opportunity to learn and practice both the nice and dirty parts of DNNs
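
For instance, a small fully connected network can be assembled in a handful of lines. The sketch below uses only classes that also appear in the Quick Start example; the layer sizes (100 inputs, 32 hidden units, 1 output) are arbitrary and chosen purely for illustration:

#include <MiniDNN.h>

using namespace MiniDNN;

int main()
{
    Network net;
    // Hidden layer: 100 inputs -> 32 units, ReLU activation
    net.add_layer(new FullyConnected<ReLU>(100, 32));
    // Output layer: 32 units -> 1 output, identity activation
    net.add_layer(new FullyConnected<Identity>(32, 1));
    // Mean squared error output layer for regression
    net.set_output(new RegressionMSE());
    // Layer objects are owned and freed by the network object
    return 0;
}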

Quick Start

The self-explanatory code below is a minimal example of fitting a DNN model:

#include <MiniDNN.h>

using namespace MiniDNN;

typedef Eigen::MatrixXd Matrix;
typedef Eigen::VectorXd Vector;

int main()
{
    // Set random seed and generate some data
    std::srand(123);
    // Predictors -- each column is an observation
    // (400 rows = 20 x 20 x 1, matching the input size of the first layer)
    Matrix x = Matrix::Random(400, 100);
    // Response variables -- each column is an observation
    Matrix y = Matrix::Random(2, 100);

    // Construct a network object
    Network net;

    // Create three layers
    // Layer 1 -- convolutional, input size 20x20x1, 3 output channels, filter size 5x5
    // (output size is 16x16x3, since 20 - 5 + 1 = 16)
    Layer* layer1 = new Convolutional<ReLU>(20, 20, 1, 3, 5, 5);
    // Layer 2 -- max pooling, input size 16x16x3, pooling window size 3x3
    // (output size is 5x5x3, since floor(16 / 3) = 5)
    Layer* layer2 = new MaxPooling<ReLU>(16, 16, 3, 3, 3);
    // Layer 3 -- fully connected, input size 5x5x3, output size 2
    Layer* layer3 = new FullyConnected<Identity>(5 * 5 * 3, 2);

    // Add layers to the network object
    net.add_layer(layer1);
    net.add_layer(layer2);
    net.add_layer(layer3);

    // Set output layer
    net.set_output(new RegressionMSE());

    // Create optimizer object
    RMSProp opt;
    opt.m_lrate = 0.001;

    // (Optional) set callback function object
    VerboseCallback callback;
    net.set_callback(callback);

    // Initialize parameters with N(0, 0.01^2) using random seed 123
    net.init(0, 0.01, 123);

    // Fit the model with a batch size of 100, running 10 epochs with random seed 123
    net.fit(opt, x, y, 100, 10, 123);

    // Obtain prediction -- each column is an observation
    Matrix pred = net.predict(x);

    // Layer objects will be freed by the network object,
    // so do not manually delete them

    return 0;
}
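
As a quick sanity check (a sketch, not part of MiniDNN's API), the in-sample mean squared error can be computed from pred with plain Eigen operations. Insert the following before the return statement, after adding #include <iostream> at the top:

    // Mean squared error averaged over all observations (columns)
    double mse = (pred - y).squaredNorm() / double(y.cols());
    std::cout << "In-sample MSE: " << mse << std::endl;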

To compile and run this example, simply download the source code of MiniDNN and Eigen, and point the compiler at their include paths. For example:

g++ -O2 -I/path/to/eigen -I/path/to/MiniDNN/include example.cpp
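
The command above produces a.out by default. A slightly fuller invocation (the output name here is just for illustration) names the binary and defines NDEBUG, which turns off Eigen's runtime assertions in optimized builds:

g++ -O2 -DNDEBUG -I/path/to/eigen -I/path/to/MiniDNN/include example.cpp -o example
./example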

Documentation

The API reference page contains the Doxygen-generated documentation of MiniDNN, including all class APIs.

License

MiniDNN is an open source project licensed under the Mozilla Public License v2.0 (MPL2).