PyTorch is a powerful Deep Learning (DL) library that makes it easy to experiment with DL concepts such as CNNs, CUDA, backpropagation, and loss functions.
However, users may find that while abstractions of common DL operations make models easy to implement, they can obscure what is actually happening under the hood.
Hence, this introductory tutorial series teaches the user how to implement standard Neural Network operations and how to integrate the forward/backward pass of any custom operation into PyTorch.
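As a taste of what this looks like, here is a minimal sketch (an illustrative example, not code taken from the tutorials) of a custom ReLU written as a `torch.autograd.Function`, with both the forward and backward pass defined by hand:

```python
import torch

class MyReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)   # stash the input for the backward pass
        return x.clamp(min=0)      # ReLU: max(0, x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[x < 0] = 0      # gradient is 0 wherever the input was negative
        return grad_input

x = torch.randn(4, requires_grad=True)
MyReLU.apply(x).sum().backward()
print(x.grad)                      # matches the gradient of torch.relu
```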
Each tutorial builds one or a few specific concepts at a time. Unless already covered in an earlier tutorial, most operations are implemented manually using PyTorch's primitives.
After a concept has been defined and implemented, we visualize performance by comparing it against similar alternative operations the user could substitute to improve their predictive model.
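A comparison of this kind might look like the following sketch (illustrative, not taken from the tutorials), which checks a manual cross-entropy computation against PyTorch's built-in `F.cross_entropy`:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(8, 3)                                  # batch of 8, 3 classes
targets = torch.randint(0, 3, (8,))

log_probs = logits - logits.logsumexp(dim=1, keepdim=True)  # manual log-softmax
manual = -log_probs[torch.arange(8), targets].mean()        # NLL of the true class
builtin = F.cross_entropy(logits, targets)

print(torch.allclose(manual, builtin))                      # True: the two agree
```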
We strongly encourage reviewing the Linear Layer tutorial first: it introduces fundamental DL concepts that the remaining tutorials assume, and it establishes the standard workflow we use to train and compare different models.
The following operations/concepts have been explored, grouped by tutorial (a short sketch covering a couple of them follows the list):

Linear Layer:
- Linear Layer
- Cross-Entropy-Loss
- Gradient Descent
- Newton's Method

ReLU:
- ReLU
- Tanh
- Leaky ReLU
- Binary-Cross-Entropy
- Sigmoid
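
For instance, the manual implementations for two of the listed items might look like this sketch (an assumption about the style of the tutorials, not code copied from them): a hand-written sigmoid and binary-cross-entropy, each checked against its PyTorch built-in.

```python
import torch
import torch.nn.functional as F

x = torch.randn(16)
targets = torch.randint(0, 2, (16,)).float()

probs = 1 / (1 + torch.exp(-x))                       # manual sigmoid
bce = -(targets * torch.log(probs)
        + (1 - targets) * torch.log(1 - probs)).mean()  # manual binary cross-entropy

print(torch.allclose(probs, torch.sigmoid(x)))                        # True
print(torch.allclose(bce, F.binary_cross_entropy(probs, targets)))    # True
```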