An attempt to build whitebox and blackbox implementations of an Artificial Neural Network (ANN) to understand how an ANN learns.
y | x₁ | x₂
---|---|---
1 | 0 | 1
1 | 1 | 1
0 | 1 | 0
0 | 0 | 0
relation : y = x₂ (the output copies the second input and ignores the first)
- Input variables: 2 (x₁, x₂)
- Output variable: 1 (y)
- Hidden layers: 1 (2 neurons)
- Hidden layer activation: Sigmoid
- Output layer activation: Sigmoid
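
Below is a minimal sketch of what the whitebox version of this 2-2-1 sigmoid network might look like, trained with hand-written backpropagation on the truth table above. The variable names, learning rate, and epoch count are illustrative assumptions, not the repository's actual code.

```python
# Minimal whitebox sketch: 2 inputs -> 2 hidden sigmoid neurons -> 1 sigmoid output,
# trained with plain backpropagation. Hyperparameters are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Inputs (x1, x2) and target y taken from the table above (y copies x2).
X = np.array([[0, 1], [1, 1], [1, 0], [0, 0]], dtype=float)
y = np.array([[1], [1], [0], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 2))   # input  -> hidden (2 neurons)
b1 = np.zeros((1, 2))
W2 = rng.normal(size=(2, 1))   # hidden -> output (1 neuron)
b2 = np.zeros((1, 1))
lr = 0.5

for epoch in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)       # hidden activations
    out = sigmoid(h @ W2 + b2)     # network output

    # Backward pass (squared error, sigmoid derivatives)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 3))  # predictions should approach [1, 1, 0, 0]
```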
As expected, the blackbox results were more accurate than the whitebox results. The whitebox predictions were biased towards output 1, even though the network appeared to be learning and updating its weights through backpropagation.
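
For comparison, here is a sketch of what the blackbox counterpart could look like: the same 2-2-1 sigmoid architecture, but delegated to scikit-learn's `MLPClassifier` instead of hand-written backpropagation. The solver and iteration count are illustrative choices and may differ from the actual setup.

```python
# Blackbox sketch: same architecture (one hidden layer of 2 neurons, sigmoid
# activations), but training is handled entirely by scikit-learn.
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array([[0, 1], [1, 1], [1, 0], [0, 0]])
y = np.array([1, 1, 0, 0])

clf = MLPClassifier(hidden_layer_sizes=(2,),   # one hidden layer, 2 neurons
                    activation='logistic',     # sigmoid activations
                    solver='lbfgs',            # suits tiny datasets
                    max_iter=2000,
                    random_state=0)
clf.fit(X, y)
print(clf.predict(X))  # expected: [1 1 0 0]
```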