Result . . . . 2 out of 8 correct . . .
What I do not quite understand is that we are not getting enough correct answers to call this much of an accomplishment. Take this last line in the code:
output = neural_network.think(np.array([1, 0, 0]))
This is the resulting output for each corresponding input:
#   Input          Output
0   [0, 0, 0] -->  [0.5]
1   [1, 0, 0] -->  [0.99993704]
2   [0, 1, 0] -->  [0.44822538]
3   [1, 1, 0] -->  [0.9999225]
4   [0, 0, 1] -->  [0.009664]
5   [1, 0, 1] -->  [0.99358931]
6   [0, 1, 1] -->  [0.00786466]
7   [1, 1, 1] -->  [0.99211997]
This is the truth table for the XOR function:

A   B   A XOR B
0   0   0
0   1   1
1   0   1
1   1   0
So we need to consider whether we are XORing all three values or just the first two. When we plug in [1, 0, 0] we get 0.99993704, i.e. 1, and that is correct. But look at all the other values: rows 0, 2, 3, 4, 5, and 6 would be wrong whether we XOR the first two inputs or all three, and I am not sure why we would bother with the third input at all if only the first two digits matter. This is pretty serious, given that only rows 1 and 7 give a correct answer.
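For what it's worth, the tally can be checked mechanically. Here is a small sketch of my own (not code from this repo) that thresholds the listed outputs at 0.5 (counting the ambiguous 0.5 row as 1, which affects the exact counts) and scores them against three candidate rules:

```python
# Hypothetical check of the table above: threshold each output at 0.5
# and count how many rows each candidate rule explains.
rows = [([0, 0, 0], 0.5),        ([1, 0, 0], 0.99993704),
        ([0, 1, 0], 0.44822538), ([1, 1, 0], 0.9999225),
        ([0, 0, 1], 0.009664),   ([1, 0, 1], 0.99358931),
        ([0, 1, 1], 0.00786466), ([1, 1, 1], 0.99211997)]

def score(rule):
    # number of the 8 rows where the rule matches the thresholded output
    return sum(rule(a, b, c) == (1 if out >= 0.5 else 0)
               for (a, b, c), out in rows)

xor2  = score(lambda a, b, c: a ^ b)      # XOR of first two inputs
xor3  = score(lambda a, b, c: a ^ b ^ c)  # XOR of all three inputs
first = score(lambda a, b, c: a)          # output = first input
print(xor2, xor3, first)  # -> 3 3 7
```

Neither XOR rule explains the table well, while "output = first input" explains 7 of the 8 rows, which suggests the network was never trained on XOR in the first place.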
I know this is an old issue, but I do want to comment on it...
I do not see why these answers are incorrect. As described elsewhere on this page, the output should equal the first input value, and that holds for every input here except [0, 0, 0] (which gives 0.5).
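That rule is easy to reproduce. Below is a minimal single-neuron sketch in the spirit of this repo; the training set and the 10,000-iteration loop are my assumptions (borrowed from the common "neural network in a few lines" example), not necessarily this repo's exact code. The target column equals the first input column, not XOR:

```python
import numpy as np

np.random.seed(1)

# Assumed training data: note the output column is just the first input column.
training_inputs = np.array([[0, 0, 1],
                            [1, 1, 1],
                            [1, 0, 1],
                            [0, 1, 1]])
training_outputs = np.array([[0, 1, 1, 0]]).T  # = first inputs, NOT XOR

# Random initial weights in [-1, 1) for a single neuron with 3 inputs.
weights = 2 * np.random.random((3, 1)) - 1

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

for _ in range(10000):
    output = sigmoid(training_inputs @ weights)
    error = training_outputs - output
    # Weight update scaled by the sigmoid gradient, output * (1 - output).
    weights += training_inputs.T @ (error * output * (1 - output))

# The neuron ends up predicting "first input": rows starting with 1 score
# near 1, rows starting with 0 score near 0 (and [0, 0, 0] gives exactly 0.5,
# since the dot product is 0 and sigmoid(0) = 0.5).
for row in [[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]]:
    print(row, sigmoid(np.array(row) @ weights))
```

So the outputs in the table are exactly what this training setup should produce; they only look wrong if you expect XOR.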
skywola, can you share an error-free version of the neuron code? @syedshoebahmed786@gmail.com
@Skywola