# MNIST_onehiddenlayer
- Project description
- Development Tools
- Results
- Conclusion
- References
## Project description
Comparison of different non-linear activations (Sigmoid, Tanh, ReLU) on the MNIST dataset, using a network with one hidden layer.
## Development Tools
- PyTorch framework
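The model under comparison can be sketched as below. This is a minimal sketch, not the course's exact code: the hidden size of 100 is an assumption (the source does not state it), and the class name `OneHiddenLayerNet` is made up for illustration.

```python
import torch
import torch.nn as nn

class OneHiddenLayerNet(nn.Module):
    """Feedforward network with a single hidden layer and a pluggable activation.

    hidden_dim=100 is an assumed value, not taken from the source.
    """
    def __init__(self, activation, input_dim=28 * 28, hidden_dim=100, num_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.act = activation  # Sigmoid, Tanh, or ReLU
        self.fc2 = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        # x: (batch, 784) flattened images -> (batch, 10) class logits
        return self.fc2(self.act(self.fc1(x)))

# One model per activation under comparison.
models = {
    "Sigmoid": OneHiddenLayerNet(nn.Sigmoid()),
    "Tanh": OneHiddenLayerNet(nn.Tanh()),
    "ReLU": OneHiddenLayerNet(nn.ReLU()),
}

# Sanity check: a batch of 32 flattened 28x28 images maps to 10 logits each.
x = torch.randn(32, 28 * 28)
for name, model in models.items():
    print(name, tuple(model(x).shape))
```

Swapping only the activation module keeps the parameter count and training setup identical across the three runs, so any accuracy gap is attributable to the activation itself.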
## Results
- Sigmoid: iteration 3000, loss 0.7164, accuracy 83%
- Tanh: iteration 3000, loss 0.1972, accuracy 94%
- ReLU: iteration 2500, loss 0.0955, accuracy 95%
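The iteration/loss/accuracy figures above come from a standard training loop. The following is a self-contained sketch, not the course's exact code: it uses random stand-in batches instead of the torchvision MNIST loader, and the batch size, learning rate, and logging interval are assumed values.

```python
import torch
import torch.nn as nn

# Stand-in for the MNIST DataLoader: random flattened 28x28 "images" and labels.
# (The actual project would iterate over torchvision's MNIST dataset.)
def fake_batches(n_batches=50, batch_size=100):
    for _ in range(n_batches):
        yield torch.randn(batch_size, 28 * 28), torch.randint(0, 10, (batch_size,))

# One hidden layer with ReLU; swap in nn.Sigmoid() or nn.Tanh() for the other runs.
model = nn.Sequential(nn.Linear(28 * 28, 100), nn.ReLU(), nn.Linear(100, 10))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # assumed lr

iteration = 0
for images, labels in fake_batches():
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    iteration += 1
    if iteration % 25 == 0:
        # Accuracy on the current batch; the reported results would use the test set.
        preds = model(images).argmax(dim=1)
        accuracy = 100.0 * (preds == labels).float().mean().item()
        print(f"Iteration {iteration}. Loss: {loss.item():.4f}. Accuracy: {accuracy:.0f}")
```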
## Conclusion
ReLU achieved the highest accuracy, though only marginally higher than Tanh; Sigmoid trailed both by a wide margin.
## References
- This project follows the *Practical Deep Learning with PyTorch* course.