ReLU does not learn to handle negative inputs in Keras / TensorFlow

I want my neural network to convert a negative value into a positive value. In theory this can be done with the ReLU function and a single node whose input weight is -1 (the negative input gets multiplied by -1, giving a positive value).
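
As a sanity check, here is the idea worked out by hand (a minimal sketch; the weight w = -1 is the hand-picked solution the network is supposed to find, not something this snippet learns):

    import numpy as np

    def relu(x):
        return np.maximum(0, x)

    w = -1.0            # the hand-picked weight
    x = -1.0            # a negative input
    print(relu(w * x))  # 1.0 -- the negative input comes out positive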

It just keeps outputting 0. The code is below. I used -1 as the only input value to find out whether it can learn even a single example.

I tried adding more layers, but that does not help (see the edits below: it DID help once I added more epochs).

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense, LeakyReLU

    train_input = np.asarray([[-1]] * 10000)  # input array of -1s
    train_output = np.abs(train_input)        # expected outputs: absolute values

    # Define the model
    model = Sequential()
    model.add(Dense(1, input_dim=1, kernel_initializer='normal', activation='linear'))
    model.add(LeakyReLU(alpha=.001))
    model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])

    # Train and evaluate
    model.fit(train_input, train_output, epochs=10, batch_size=10, verbose=0)

    # Test data (not defined in the original snippet; assumed to mirror the training data)
    test_input = np.asarray([[-1]])
    test_output = np.abs(test_input)
    test_model_output = model.predict(test_input)
    print(str(test_input[0][0]) + " " + str(test_output[0][0]) + " " + str(test_model_output[0][0]))

The output I get is below (the first value is the input, the second is the expected output, the third is the model's output):

 -1 1 0.0 

EDIT I tried using a RandomUniform initializer so that negative initial weights are possible, and it works. I understand why this makes things easier for the network, but I do not understand why it should be necessary.

    from keras.initializers import RandomUniform

    model.add(Dense(1, input_dim=1,
                    kernel_initializer=RandomUniform(minval=-0.05, maxval=0.05, seed=None),
                    activation='linear'))

EDIT 2 Someone said that I was not giving it enough time to train. At first I thought that 10x more data with a 10x smaller batch size (i.e. more iterations per epoch) would be equivalent. It was NOT. BUT if I trained for 10x the number of epochs (100 in total), it worked. So the positively initialized weights just need enough updates to cross over into negative values.
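
For reference, a minimal sketch of the change that made it work (same model and data as in the snippet above; the only difference is epochs=100):

    # 10x the epochs (10 -> 100) gives the weight enough gradient updates
    # to travel from its positive initialization to a negative value.
    model.fit(train_input, train_output, epochs=100, batch_size=10, verbose=0)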

artificial-intelligence machine-learning neural-network tensorflow keras
4 answers

The problem was that I did not train for long enough. Although this is a very simple function, the initialized weights have to travel all the way from positive to negative values.

Increase the amount of training (more epochs, smaller batches, more training data); with enough updates, the weight turns from positive to negative.
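
A minimal sketch of how you could watch this happen, assuming the model and data from the question (get_weights() on the Dense layer returns its kernel and bias):

    # Train in short bursts and print the single kernel weight after each burst,
    # to watch it drift from its positive initialization toward a negative value.
    for burst in range(10):
        model.fit(train_input, train_output, epochs=10, batch_size=10, verbose=0)
        kernel, bias = model.layers[0].get_weights()
        print("after %d epochs: w = %f" % ((burst + 1) * 10, kernel[0][0]))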


I would initialize the first weight to be negative using keras.initializers.Constant(value=-1) (https://keras.io/initializers/#constant).

That may help get this first neuron firing.
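
A minimal sketch of what that would look like in the question's model (Constant is a standard Keras initializer; the rest mirrors the original snippet):

    from keras.initializers import Constant

    # Start the single weight at -1, so ReLU already receives a positive
    # pre-activation for negative inputs instead of being stuck at zero.
    model.add(Dense(1, input_dim=1, kernel_initializer=Constant(value=-1),
                    activation='linear'))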


I am not familiar with the libraries you use, but it looks like you need to specify input_shape in the Dense line.

(In the Keras docs, None indicates that any positive integer may be expected.)

So if you do not specify anything like input_shape, it assumes that your input can be any positive number.

Thus, adding input_shape=(-1,1) might solve your problem!

https://keras.io/getting-started/sequential-model-guide/


I think the model's result is correct.

Since the Rectified Linear Unit works as follows:

f(x) = max(0, x), where x is the input to the neuron.

In your example the input value is -1, i.e. x = -1, so

f(x) = max(0, -1) = 0.

This may be the reason for your model's result.
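
A minimal numeric sketch of this explanation (w = 0.05 is an assumed small positive initial weight, standing in for whatever the 'normal' initializer produced):

    import numpy as np

    w = 0.05                      # assumed small positive initial weight
    x = -1.0                      # the input from the question
    print(np.maximum(0, w * x))   # 0.0 -- plain ReLU clips the negative pre-activation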

(or)

Perhaps an error occurred while performing the multiplication.

