Neural network 0 vs -1

I have seen several times that people use -1 instead of 0 for neural network inputs. Why is it better, and does it change any of the math in the implementation?

Edit: Using feedforward and backpropagation.

Edit 2: I switched the inputs over, but the network stopped learning, so I guess the math needs to change somewhere?

Edit 3: Finally found the answer. The math for a binary (0/1) system is different from a bipolar (-1/1) one. See my answer below.

+5
3 answers
The math for a bipolar (-1/1) network differs from the binary (0/1) case. The bipolar sigmoid activation and its derivative are:

Activation: f(x) = -1 + 2 / (1 + e^-x)

Derivative: f'(x) = 0.5 * (1 + f(x)) * (1 - f(x))
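The two formulas above can be sketched directly in Python (function names are mine, not from the original answer); the derivative is computed from f(x) itself, which is convenient during backpropagation since the activation is already available:

```python
import math

def bipolar_sigmoid(x):
    # f(x) = -1 + 2 / (1 + e^-x), output in (-1, 1)
    return -1.0 + 2.0 / (1.0 + math.exp(-x))

def bipolar_sigmoid_deriv(x):
    # f'(x) = 0.5 * (1 + f(x)) * (1 - f(x))
    fx = bipolar_sigmoid(x)
    return 0.5 * (1.0 + fx) * (1.0 - fx)

print(bipolar_sigmoid(0.0))        # 0.0 (the curve is centered on zero)
print(bipolar_sigmoid_deriv(0.0))  # 0.5 * 1 * 1 = 0.5
```

Note that f(x) is just a rescaled logistic sigmoid, 2 * sigmoid(x) - 1, which is why the derivative formula differs from the familiar binary form sigmoid(x) * (1 - sigmoid(x)).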

+7


The network learns more quickly with -1/1 inputs than with 0/1. Also, if you use -1/1 inputs, a 0 can mean "unknown input / noise / doesn't matter." I would use -1/1 as the inputs to my neural network.

-1