What is the difference between these two ways of adding neural network layers in Keras?

I use Keras with Theano as a backend, and I have a Sequential neural network model.

I wonder if there is a difference between the following?

model.add(Convolution2D(32, 3, 3, activation='relu'))

and

model.add(Convolution2D(32, 3, 3))
model.add(Activation('relu'))
1 answer

They are essentially the same. The advantage of keeping the Activation separate is that you can insert other layers (e.g. BatchNormalization) between the convolution and the activation, as in the sketch below.
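For example, a minimal sketch of that pattern (the input_shape here is an assumed channels-first shape, as with the Theano backend, purely for illustration):

from keras.models import Sequential
from keras.layers import Convolution2D, Activation
from keras.layers.normalization import BatchNormalization

model = Sequential()
# input_shape=(3, 32, 32) is an assumed channels-first input, just for the example
model.add(Convolution2D(32, 3, 3, input_shape=(3, 32, 32)))  # no activation yet
model.add(BatchNormalization(axis=1))  # normalize over the channel axis (axis=1 for channels-first)
model.add(Activation('relu'))          # apply the non-linearity afterwards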

In Keras, if not specified, Convolution2D will use the default 'linear' activation, which is just the identity function

def linear(x):
    '''
    The function returns the variable that is passed in, so all types work.
    '''
    return x 
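In other words, leaving the argument out is the same as passing activation='linear' explicitly; a minimal sketch:

from keras.layers import Convolution2D

Convolution2D(32, 3, 3)                       # activation defaults to 'linear' (identity)
Convolution2D(32, 3, 3, activation='linear')  # explicit form, identical behavior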

The Activation layer, in turn, simply applies the activation function to its input

def call(self, x, mask=None):
    return self.activation(x)

Edit:

Passing Convolution2D(activation='relu') applies relu right after the convolution, which gives the same result as adding Activation('relu') after Convolution2D(32, 3, 3).

The relevant lines at the end of the call method of Convolution2D are:

output = self.activation(output)
return output

Here output is the result of the convolution, so the activation function is applied directly to the output of Convolution2D.
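One way to check the equivalence is to build both variants, share the convolution weights, and compare predictions; a sketch assuming an arbitrary channels-first 3x32x32 input:

import numpy as np
from keras.models import Sequential
from keras.layers import Convolution2D, Activation

# Variant A: activation fused into the convolution layer
model_a = Sequential()
model_a.add(Convolution2D(32, 3, 3, activation='relu', input_shape=(3, 32, 32)))

# Variant B: separate Activation layer
model_b = Sequential()
model_b.add(Convolution2D(32, 3, 3, input_shape=(3, 32, 32)))
model_b.add(Activation('relu'))

# Copy the (randomly initialized) convolution weights so both models match
model_b.layers[0].set_weights(model_a.layers[0].get_weights())

x = np.random.rand(1, 3, 32, 32).astype('float32')
print(np.allclose(model_a.predict(x), model_b.predict(x)))  # expected: True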

Sources:
Convolution2D layer: https://github.com/fchollet/keras/blob/a981a8c42c316831183cac7598266d577a1ea96a/keras/layers/convolutional.py
Activation layer: https://github.com/fchollet/keras/blob/a981a8c42c316831183cac7598266d577a1ea96a/keras/layers/core.py
Activation functions: https://github.com/fchollet/keras/blob/master/keras/activations.py

