In the new version (0.3.0) of Keras, AutoEncoder no longer ties weights, yet the two models still converge differently. This is because the weights are initialized differently.
In the non-AE example, the Dense(32, 16) weights are initialized first, then the Dense(16, 32) weights. In the AE example, the Dense(32, 16) weights are likewise initialized first, then Dense(16, 32), but when the AutoEncoder instance is created, the Dense(32, 16) weights are initialized a second time (self.encoder.set_previous(node) calls build(), which re-initializes the weights). So even with the same seed, the encoder of the AE model starts from different weights than the plain model.
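The effect of that extra build() can be sketched with plain numpy. Here init_weights is a hypothetical Glorot-style initializer standing in for the Keras internal one: because the re-built layer draws a fresh set of samples from the RNG stream, the encoder weights no longer match the plain model's, even under the same seed.

```python
import numpy as np

def init_weights(shape, rng):
    # Glorot-style uniform init, roughly what Keras 0.x uses for Dense.
    limit = np.sqrt(6.0 / (shape[0] + shape[1]))
    return rng.uniform(-limit, limit, size=shape)

# Plain model: encoder then decoder weights drawn in order.
rng = np.random.RandomState(0)
enc_plain = init_weights((32, 16), rng)
dec_plain = init_weights((16, 32), rng)

# AE model: encoder, decoder, then the encoder is re-built,
# drawing a third batch of samples from the same stream.
rng = np.random.RandomState(0)
enc_ae = init_weights((32, 16), rng)
dec_ae = init_weights((16, 32), rng)
enc_ae = init_weights((32, 16), rng)  # set_previous() -> build() re-init

print(np.allclose(enc_plain, enc_ae))  # encoder weights now differ
print(np.allclose(dec_plain, dec_ae))  # decoder weights still match
```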
With the weights synchronized explicitly, the following two networks converge in exactly the same way:
```python
from keras.models import Sequential
from keras.layers import containers
from keras.layers.core import Dense, AutoEncoder
from keras.optimizers import RMSprop
import numpy as np

# Autoencoder model
autoencoder = Sequential()
encoder = containers.Sequential([Dense(32, 16, activation='tanh')])
decoder = containers.Sequential([Dense(16, 32)])
autoencoder.add(AutoEncoder(encoder=encoder, decoder=decoder,
                            output_reconstruction=True))
rms = RMSprop()
autoencoder.compile(loss='mean_squared_error', optimizer=rms)
np.random.seed(0)
autoencoder.fit(trainData, trainData, nb_epoch=20, batch_size=64,
                validation_data=(testData, testData), show_accuracy=False)
```
```python
# Non-autoencoder model: same layers, with the autoencoder's weights copied in
model = Sequential()
model.add(Dense(32, 16, activation='tanh'))
model.add(Dense(16, 32))
model.set_weights(autoencoder.get_weights())
model.compile(loss='mean_squared_error', optimizer=rms)
np.random.seed(0)
model.fit(trainData, trainData, nb_epoch=20, batch_size=64,
          validation_data=(testData, testData), show_accuracy=False)
```
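Why the set_weights call above is the key step can be sketched without Keras. TinyLayer and TinyModel below are hypothetical stand-ins (not Keras classes) that mimic the get_weights/set_weights contract: two models built from different RNG states disagree until one copies the other's parameters.

```python
import numpy as np

class TinyLayer:
    """Hypothetical stand-in for a Dense layer's weight storage."""
    def __init__(self, n_in, n_out, rng):
        limit = np.sqrt(6.0 / (n_in + n_out))  # Glorot-uniform-style init
        self.W = rng.uniform(-limit, limit, (n_in, n_out))
        self.b = np.zeros(n_out)

class TinyModel:
    """Hypothetical model exposing get_weights/set_weights like Keras."""
    def __init__(self, layers):
        self.layers = layers
    def get_weights(self):
        return [p for layer in self.layers for p in (layer.W, layer.b)]
    def set_weights(self, weights):
        it = iter(weights)
        for layer in self.layers:
            layer.W, layer.b = next(it).copy(), next(it).copy()

# Two models built from different RNG states: their weights differ.
m1 = TinyModel([TinyLayer(32, 16, np.random.RandomState(0)),
                TinyLayer(16, 32, np.random.RandomState(0))])
m2 = TinyModel([TinyLayer(32, 16, np.random.RandomState(1)),
                TinyLayer(16, 32, np.random.RandomState(1))])
assert not np.array_equal(m1.layers[0].W, m2.layers[0].W)

# Copying the weights, as model.set_weights(autoencoder.get_weights())
# does above, makes both models start from identical parameters.
m2.set_weights(m1.get_weights())
assert np.array_equal(m1.layers[0].W, m2.layers[0].W)
```

Once the starting weights are identical and both fits see the same data under the same seed, the loss curves match step for step.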