I want my neural network to convert a negative value to a positive value. In theory this can be done with the ReLU function and a single node whose input weight is -1 (the negative input is multiplied by -1, giving a positive output).
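As a sanity check (a hand-built sketch, separate from the training code below), fixing the weight at -1 by hand does compute abs() for a negative input:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Hand-set the single weight to -1 (no bias, no training) to confirm
# that one ReLU node can compute abs() for negative inputs.
check = Sequential()
check.add(Dense(1, input_dim=1, activation='relu', use_bias=False))
check.set_weights([np.array([[-1.0]])])
print(check.predict(np.asarray([[-1.0]])))  # -> [[1.]]

So the solution exists; the problem is getting the network to learn it.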
Instead, it just keeps outputting 0. The code is below. I used -1 as the only input to find out whether it can learn even a single example.
I tried adding more layers, but that did not help (see EDIT 2 below: it DID help once I trained for more epochs).
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, LeakyReLU

# Input: 10,000 copies of -1; target: their absolute values
train_input = np.asarray([[-1]] * 10000)
train_output = np.asarray([[abs(x[0])] for x in train_input])

# Define the model
model = Sequential()
model.add(Dense(1, input_dim=1, kernel_initializer='normal', activation='linear'))
model.add(LeakyReLU(alpha=.001))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])

# Train, then evaluate on the same single example
model.fit(train_input, train_output, epochs=10, batch_size=10, verbose=0)
test_input = np.asarray([[-1]])
test_output = np.asarray([[1]])
test_model_output = model.predict(test_input)
print(str(test_input[0][0]) + " " + str(test_output[0][0]) + " " + str(test_model_output[0][0]))
The output I get is below (the first value is the input, the second is the expected output, the third is the model output):
-1 1 0.0
EDIT: I tried using a random uniform initializer so that the weight can start out negative, and it works. I understand why this makes learning easier for the network, but I do not understand why it should be necessary.
from keras.initializers import RandomUniform

# Replaces the Dense line above so the weight can be initialized negative
model.add(Dense(1, input_dim=1,
                kernel_initializer=RandomUniform(minval=-0.05, maxval=0.05, seed=None),
                activation='linear'))
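Here is a small diagnostic (my own sketch, assuming the model defined above) that shows what is going on. If the weight is initialized positive, the input -1 produces a negative pre-activation, which lands on LeakyReLU's nearly flat side (slope alpha=0.001), so the gradient is roughly 1000x smaller:

# Diagnostic sketch: check the sign of the weight before any training.
w, b = model.layers[0].get_weights()
print("initial weight:", w[0][0], "initial bias:", b[0])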
EDIT 2: Someone suggested that I am not training for long enough. At first I thought that using 10x more data with batches 10x smaller (i.e. many more weight updates) would be equivalent. It is not. BUT if I add 10x the number of epochs (100 in total), it works. So it simply takes that long for gradient descent to push the positively initialized weights over to negative values.
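For completeness, here is a sketch (using the model, train_input, and train_output from the code above) that tracks the single weight during those 100 epochs. You can watch it crawl from its small positive initial value, through zero, to the negative value the task needs:

# Sketch: watch the weight drift from positive to negative over 100 epochs.
for epoch in range(100):
    model.fit(train_input, train_output, epochs=1, batch_size=10, verbose=0)
    if epoch % 10 == 0:
        w = model.layers[0].get_weights()[0][0][0]
        print("epoch", epoch, "weight", w)
print(model.predict(np.asarray([[-1.0]])))  # now close to 1.0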