I have seen several times people using -1 instead of 0 as input when working with neural networks. How is it better, and does it change any of the math for the implementation?
Edit: I'm using feedforward and backpropagation.
Edit 2: I tried it, but the network stopped learning, so I'm guessing the math has to change somewhere?
Edit 3: Finally found the answer. The math for a binary network is different from the math for a bipolar one. See my answer below.
The bipolar sigmoid:

f(x) = -1 + 2 / (1 + e^-x)

and its derivative:

f'(x) = 0.5 * (1 + f(x)) * (1 - f(x))
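In case it helps, here is a minimal sketch of those two formulas in Python (function names are my own; the derivative is checked against the closed form from above):

```python
import math

def bipolar_sigmoid(x):
    # f(x) = -1 + 2 / (1 + e^-x); output range is (-1, 1)
    return -1.0 + 2.0 / (1.0 + math.exp(-x))

def bipolar_sigmoid_deriv(x):
    # f'(x) = 0.5 * (1 + f(x)) * (1 - f(x)), expressed via f(x)
    fx = bipolar_sigmoid(x)
    return 0.5 * (1.0 + fx) * (1.0 - fx)
```

Note that f(x) is algebraically identical to tanh(x/2), so f(0) = 0 and f'(0) = 0.5.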
One practical issue with binary 0/1 inputs: in backprop, each weight update is proportional to the input feeding that weight, so any weight attached to a 0-valued input gets no update at all for that pattern. With bipolar -1/1 inputs every weight is adjusted on every pattern, which is part of why bipolar encodings tend to train faster.
The network learns faster with -1/1 inputs than with 0/1. Also, if you use -1/1 inputs, 0 can mean "unknown input / noise / don't care." I would use -1/1 as the inputs to my neural network.
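A toy illustration of the learning-speed point (a sketch, not anyone's production code): under the standard delta rule, the weight change is lr * error * input, so a 0 input leaves its weight frozen, while a -1 input still drives learning.

```python
def delta_rule_update(w, x, target, output, lr=0.1):
    # Delta rule: each weight moves by lr * error * its own input.
    error = target - output
    return [wi + lr * error * xi for wi, xi in zip(w, x)]

w = [0.2, 0.2]
# Binary encoding: second input is 0, so w[1] cannot change this step.
w_binary = delta_rule_update(w, [1, 0], target=1, output=0)
# Bipolar encoding of the same pattern: both weights get updated.
w_bipolar = delta_rule_update(w, [1, -1], target=1, output=0)
```

With the binary pattern, w_binary[1] stays at 0.2; with the bipolar pattern, w_bipolar[1] moves, so every presentation of the pattern teaches every weight something.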