You may need to set up an SSH key for Git.
- Build the project using
  make
- Run the compiled binary using
  ./output
- If needed, remove the object files (*.o) using
  make clean
There are four examples to choose from:
- Very simple integer BP:
example_fc_int_bp_very_simple()
- Fully connected (fc) mnist integer BP:
example_fc_int_bp_mnist()
- Fully connected (fc) mnist integer DFA:
example_fc_int_dfa_mnist()
- Fully connected (fc) fashion mnist integer DFA:
example_fc_int_dfa_fashion_mnist()
To switch examples, comment out the currently active function call in main.cpp and call the one you want, as in the sketch below.
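For example, main.cpp might end up looking like this; the header name pktnn_examples.h is an assumption, so keep whatever includes your main.cpp already has:

  // main.cpp (minimal sketch)
  #include "pktnn_examples.h" // assumed header declaring the example functions

  int main() {
      // example_fc_int_bp_very_simple(); // previous example, now commented out
      example_fc_int_dfa_mnist();         // the example you want to run instead
      return 0;
  }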
- For MNIST dataset training, use the classification loss function batchCrossEntropyLoss() and check the test accuracy (see the sketch below).
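A rough sketch of how the loss call and accuracy check could fit into the training loop; the signature of batchCrossEntropyLoss() and the names lossMat, trainTargetMat, fcLast.mOutput, and getElem() are assumptions here, so verify them against pktnn_loss.h and pktnn_mat.h:

  #include <iostream>

  // hypothetical training-loop fragment; verify all names against the headers
  pktnn::pktmat lossMat; // per-sample losses, assumed to be filled by the loss function
  pktnn::pktloss::batchCrossEntropyLoss(lossMat, trainTargetMat, fcLast.mOutput);

  // test accuracy: fraction of samples whose predicted label matches the target
  int correct = 0;
  for (int r = 0; r < numTestSamples; ++r) {
      if (predictedLabels.getElem(r, 0) == testLabels.getElem(r, 0)) ++correct;
  }
  std::cout << "Test accuracy: " << (100.0 * correct / numTestSamples) << "%\n";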
- Error: output: pktnn_mat.cpp:566: pktnn::pktmat& pktnn::pktmat::matElemDivMat(pktnn::pktmat&, pktnn::pktmat&): Assertion `mat1.dimsEqual(mat2)' failed
  Cause: backward() --> computeDeltas() --> matElemDivMat()
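The assertion enforces that elementwise division only operates on matrices of identical shape, so the usual fix is to make the dimensions of adjacent layers (and hence the matrices reaching computeDeltas()) agree. In effect the check amounts to the following; the rows()/cols() accessor names are assumptions, see pktnn_mat.h for the real ones:

  #include <cassert>

  // what `mat1.dimsEqual(mat2)' verifies, in effect
  assert(mat1.rows() == mat2.rows() && mat1.cols() == mat2.cols());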
- Training accuracy stays low, e.g. 9.8% (roughly chance level for a 10-class problem)
  Cause: useDfa(false)
- Training accuracy of 100% may not be correct either; try modifying the layer settings to match BP_simple_training
  // Modified the fc layers to do BP training; use BP by setting useDfa(false)
  fc1.useDfa(false).initWeightBias().setActv(a).setNextLayer(fc2);
  fc2.useDfa(false).initWeightBias().setActv(a).setPrevLayer(fc1).setNextLayer(fcLast);
  fcLast.useDfa(false).setActv(b).setPrevLayer(fc2);
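For comparison, the DFA configuration would presumably be the same chain with useDfa(true); this mirrors the snippet above rather than code taken from the repository:

  // assumed DFA counterpart: same chaining, only the flag differs
  fc1.useDfa(true).initWeightBias().setActv(a).setNextLayer(fc2);
  fc2.useDfa(true).initWeightBias().setActv(a).setPrevLayer(fc1).setNextLayer(fcLast);
  fcLast.useDfa(true).setActv(b).setPrevLayer(fc2);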
- Neural Networks and Deep Learning, Michael Nielsen: http://neuralnetworksanddeeplearning.com/
- Different activation functions and initialization functions: https://shahaab-co.com/mag/en-articles/weight-initialization-in-deep-learning/