They are essentially the same. The advantage of keeping the activation as a separate layer is that you can add other layers (e.g. BatchNormalization) between the convolution and its activation.
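For example, a minimal sketch (assuming the old Keras 1 style API used elsewhere in this answer; newer versions call the layer Conv2D and take the kernel size as a tuple, and the input shape here is arbitrary):

import numpy as np
from keras.models import Sequential
from keras.layers import Convolution2D, Activation, BatchNormalization

# Form 1: activation fused into the convolution layer
model_a = Sequential()
model_a.add(Convolution2D(32, 3, 3, activation='relu', input_shape=(32, 32, 3)))

# Form 2: separate Activation layer, which leaves room to slot
# BatchNormalization between the convolution and the non-linearity
model_b = Sequential()
model_b.add(Convolution2D(32, 3, 3, input_shape=(32, 32, 3)))
model_b.add(BatchNormalization())
model_b.add(Activation('relu'))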
In Keras, if not specified, Convolution2D will use the default "linear" activation, which is simply the identity function
def linear(x):
    '''
    The function returns the variable that is passed in, so all types work.
    '''
    return x
and all that the Activation layer does is apply the activation function to the input
def call(self, x, mask=None):
    return self.activation(x)
Edit:
So basically, Convolution2D(activation='relu') applies the relu activation function after performing the convolution, which is the same as applying Activation('relu') after Convolution2D(32, 3, 3).
The last two lines of the call function of Convolution2D are:
output = self.activation(output)
return output
where output is the result of the convolution. So applying the activation function is the last step of Convolution2D.
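If you want to convince yourself that the two formulations match, here is a quick sketch (a hypothetical check using the same old-style API; shapes are arbitrary):

import numpy as np
from keras.models import Sequential
from keras.layers import Convolution2D, Activation

# one model with the fused activation, one with a separate Activation layer
fused = Sequential([Convolution2D(32, 3, 3, activation='relu', input_shape=(8, 8, 3))])
split = Sequential([Convolution2D(32, 3, 3, input_shape=(8, 8, 3)), Activation('relu')])

# copy the kernel and bias so both models use the same weights
split.set_weights(fused.get_weights())

x = np.random.rand(1, 8, 8, 3)
print(np.allclose(fused.predict(x), split.predict(x)))  # expected: True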
Source code:
Convolution2D layer: https://github.com/fchollet/keras/blob/a981a8c42c316831183cac7598266d577a1ea96a/keras/layers/convolutional.py
Activation layer: https://github.com/fchollet/keras/blob/a981a8c42c316831183cac7598266d577a1ea96a/keras/layers/core.py
Activation functions: https://github.com/fchollet/keras/blob/master/keras/activations.py