neural-networks

MATLAB implementation of several neural network models

Primary language: MATLAB. License: MIT.


single-layer neural network

Dataset: CIFAR-10

Optimization method: mini-batch gradient descent

Loss function: cross-entropy (softmax classifier) or multi-class SVM (hinge) loss

Regularization: L2

New features: e.g. learning-rate decay, Xavier initialization

Test accuracy (highest): 40.66%
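The single-layer recipe above (softmax cross-entropy loss, L2 regularization, mini-batch gradient descent) can be sketched as follows. The repository itself is MATLAB; this is an equivalent NumPy illustration, not the repo's code, and the hyperparameter values are arbitrary:

```python
import numpy as np

def softmax_ce_grad(W, b, X, y, lam):
    """Loss and gradients for a single-layer softmax classifier with L2.
    X: (n, d) batch, y: (n,) integer labels, W: (d, k), b: (k,)."""
    n = X.shape[0]
    scores = X @ W + b
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(scores)
    p /= p.sum(axis=1, keepdims=True)             # softmax probabilities
    loss = -np.log(p[np.arange(n), y]).mean() + lam * np.sum(W * W)
    dscores = p.copy()
    dscores[np.arange(n), y] -= 1                 # dL/dscores for cross-entropy
    dscores /= n
    dW = X.T @ dscores + 2 * lam * W              # L2 term enters the gradient
    db = dscores.sum(axis=0)
    return loss, dW, db

def train(X, y, k, lr=0.1, lam=1e-4, batch=32, epochs=20, seed=0):
    """Mini-batch gradient descent: shuffle each epoch, step per batch."""
    rng = np.random.default_rng(seed)
    W = 0.01 * rng.standard_normal((X.shape[1], k))
    b = np.zeros(k)
    for _ in range(epochs):
        idx = rng.permutation(len(X))
        for s in range(0, len(X), batch):
            sel = idx[s:s + batch]
            _, dW, db = softmax_ce_grad(W, b, X[sel], y[sel], lam)
            W -= lr * dW
            b -= lr * db
    return W, b
```

On CIFAR-10 this same loop runs over flattened 3072-dimensional images with k = 10 classes.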

double-layer neural network

Dataset: CIFAR-10

Optimization method: mini-batch gradient descent

Loss function: cross-entropy

Regularization: L2

New features: e.g. cyclical learning rate, ensemble learning, dropout

Test accuracy (highest): 54.84%
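A cyclical learning rate oscillates between a lower and an upper bound instead of decaying monotonically. One common form is the triangular schedule; the bounds and step size below are illustrative choices, not values taken from the repository:

```python
def cyclical_lr(t, eta_min=1e-5, eta_max=1e-1, step_size=500):
    """Triangular cyclical learning rate: ramps linearly from eta_min
    to eta_max over step_size updates, then back down, and repeats."""
    cycle = t // (2 * step_size)          # which full cycle we are in
    x = abs(t / step_size - 2 * cycle - 1)  # position within cycle, in [0, 1]
    return eta_min + (eta_max - eta_min) * (1 - x)
```

The rate equals eta_min at the start of each cycle, peaks at eta_max after step_size updates, and returns to eta_min after 2 * step_size updates.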

multi-layer neural network

Dataset: CIFAR-10

Optimization method: mini-batch gradient descent

Loss function: cross-entropy

Regularization: L2

New features: e.g. batch normalization, He initialization, data augmentation

Test accuracy (highest): 58.66%
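Two of the features named above are simple to state precisely: batch normalization standardizes each layer's pre-activations over the batch before a learned scale and shift, and He initialization draws weights with std sqrt(2 / fan_in) to keep ReLU activations well-scaled. A NumPy sketch (again standing in for the MATLAB original):

```python
import numpy as np

def batchnorm_forward(s, gamma, beta, eps=1e-8):
    """Batch-normalize pre-activations s of shape (n, m): subtract the
    batch mean, divide by the batch std, then scale by gamma, shift by beta."""
    mu = s.mean(axis=0)
    var = s.var(axis=0)
    s_hat = (s - mu) / np.sqrt(var + eps)
    return gamma * s_hat + beta

def he_init(fan_in, fan_out, rng):
    """He initialization for ReLU layers: N(0, 2 / fan_in)."""
    return rng.standard_normal((fan_in, fan_out)) * np.sqrt(2.0 / fan_in)
```

At test time, BN would use running averages of mu and var accumulated during training rather than batch statistics; that bookkeeping is omitted here.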

recurrent neural network

Dataset: Text from Harry Potter and the Goblet of Fire

Optimization method: AdaGrad

Loss function: cross-entropy

Goal: synthesize text
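AdaGrad adapts the step size per parameter by accumulating squared gradients, which suits the sparse, uneven gradients of character-level RNN training. A minimal NumPy sketch of one update (the learning rate and epsilon are illustrative defaults, not the repo's settings):

```python
import numpy as np

def adagrad_step(w, g, m, lr=0.01, eps=1e-8):
    """One AdaGrad update: accumulate squared gradients in m, then scale
    each parameter's step by 1 / sqrt(m + eps). Returns updated (w, m)."""
    m = m + g * g
    w = w - lr * g / np.sqrt(m + eps)
    return w, m
```

In an RNN training loop, one accumulator m is kept per weight matrix and updated alongside it every step; parameters that receive large or frequent gradients automatically get smaller effective learning rates.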