Batchwise-Dropout

Run fully connected artificial neural networks with dropout applied (mini)batchwise, rather than samplewise. Given two hidden layers each subject to 50% dropout, the corresponding matrix multiplications for forward- and back-propagation require about 75% less work, as the dropped-out units are not calculated at all.
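To make the saving concrete, here is a minimal sketch (not the repository's actual code) of a forward pass with batchwise dropout: one dropout mask is drawn per minibatch, so the multiplication only runs over the kept rows and columns of the weight matrix. All names here (sampleKeptUnits, forwardBatchwiseDropout) are illustrative assumptions.

```cpp
#include <cstdlib>
#include <vector>

// Hypothetical helper: choose which units survive dropout for this minibatch.
std::vector<int> sampleKeptUnits(int nUnits, float dropProb) {
  std::vector<int> kept;
  for (int i = 0; i < nUnits; ++i)
    if (static_cast<float>(std::rand()) / RAND_MAX >= dropProb)
      kept.push_back(i);
  return kept;
}

// Forward pass for one minibatch: output[b][j] += input[b][i] * W[i][j],
// restricted to the kept input units (inKept) and kept output units (outKept).
// input is batchSize x nIn (row-major), W is nIn x nOut (row-major),
// output is batchSize x nOut and assumed zero-initialised.
// With 50% dropout on both sides, only ~25% of W is touched, i.e. ~75% less work.
void forwardBatchwiseDropout(const std::vector<float>& input,
                             const std::vector<float>& W,
                             std::vector<float>& output,
                             int batchSize, int nIn, int nOut,
                             const std::vector<int>& inKept,
                             const std::vector<int>& outKept) {
  for (int b = 0; b < batchSize; ++b)
    for (int i : inKept)        // skip dropped input units entirely
      for (int j : outKept)     // skip dropped output units entirely
        output[b * nOut + j] += input[b * nIn + i] * W[i * nOut + j];
}
```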


Batchwise Dropout
Benjamin Graham, University of Warwick, 2015
GPLv3

If you use this software, please tell me what you are using it for (b.graham@warwick.ac.uk).

Run "make dataset" for dataset in the list { mnist, cifar10, artificial }