To better understand how forward propagation works in neural networks, we will manually build an Artificial Neural Network in a Jupyter Notebook.
A general Neural Network (NN) takes 𝑛 inputs, has many hidden layers with 𝑚 nodes each, and ends with an output layer. Although the diagram shows only one hidden layer, we will code the network to support many hidden layers. Similarly, although the diagram shows an output layer with a single node, we will code the network to support more than one node in the output layer.
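As a starting point, here is a minimal sketch of how such a configurable network could be initialized in Python. The function name `initialize_network` and its parameters (`num_inputs`, `num_hidden_layers`, `num_nodes_hidden`, `num_nodes_output`) are illustrative assumptions, not the notebook's final implementation; the idea is simply that the architecture is driven by arguments rather than hard-coded to one hidden layer and one output node.

```python
import numpy as np

def initialize_network(num_inputs, num_hidden_layers, num_nodes_hidden, num_nodes_output):
    """Sketch: build a dictionary of randomly initialized weights and biases
    for a fully connected network with the given architecture.
    (Names and structure are assumptions for illustration.)"""
    num_nodes_previous = num_inputs  # number of nodes feeding into the current layer
    network = {}

    # loop over the hidden layers plus the output layer
    for layer in range(num_hidden_layers + 1):
        if layer == num_hidden_layers:
            layer_name = 'output'
            num_nodes = num_nodes_output
        else:
            layer_name = 'layer_{}'.format(layer + 1)
            num_nodes = num_nodes_hidden[layer]

        # each node gets one weight per incoming connection and one bias
        network[layer_name] = {}
        for node in range(num_nodes):
            node_name = 'node_{}'.format(node + 1)
            network[layer_name][node_name] = {
                'weights': np.around(np.random.uniform(size=num_nodes_previous), decimals=2),
                'bias': np.around(np.random.uniform(size=1), decimals=2),
            }

        num_nodes_previous = num_nodes

    return network

# example: 5 inputs, 3 hidden layers with 3, 2, and 3 nodes, and 1 output node
small_network = initialize_network(5, 3, [3, 2, 3], 1)
```

Because the number of hidden layers, the nodes per layer, and the number of output nodes are all parameters, the same code can reproduce the single-hidden-layer, single-output network in the diagram or any larger architecture.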