What is called "embedding" here in callbacks.TensorBoard , in the broad sense, means any layer weight. According to Keras Documentation :
embeddings_layer_names: a list of layer names to keep track of. If None or an empty list, all the embedding layers will be watched.
That is, by default it will monitor the Embedding layers, but you do not actually need an Embedding layer to use this visualization tool.
In the MLP example you presented, the embeddings_layer_names argument is missing, so you have to figure out which layers you are going to visualize. Say you want to visualize the weights (or kernel, in Keras terms) of all Dense layers; you can specify embeddings_layer_names like this:
from keras import callbacks
from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(200, activation='relu', input_shape=(784,)))
model.add(Dropout(0.2))
model.add(Dense(100, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(60, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(30, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))

# Dense layers are named 'dense_1', 'dense_2', ... by default
embedding_layer_names = set(layer.name
                            for layer in model.layers
                            if layer.name.startswith('dense_'))

tb = callbacks.TensorBoard(log_dir='temp', histogram_freq=10, batch_size=32,
                           write_graph=True, write_grads=True, write_images=True,
                           embeddings_freq=10, embeddings_metadata=None,
                           embeddings_layer_names=embedding_layer_names)

model.compile(...)
model.fit(...)
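The compile and fit calls are elided above. Note that with histogram_freq > 0 Keras requires validation data, and the callback has to be passed via the callbacks argument; a minimal sketch (the loss, optimizer, epoch count, and the x_train/y_train/x_test/y_test names are placeholders of mine, assuming the usual MNIST MLP setup):

model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

model.fit(x_train, y_train,
          batch_size=32,
          epochs=100,
          validation_data=(x_test, y_test),
          callbacks=[tb])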
Then you can see something like this in TensorBoard: 
You can look at the corresponding lines in the Keras source if you want to find out what exactly happens with embeddings_layer_names.
Edit:
So here is a dirty solution for visualizing layer outputs. Since the original TensorBoard callback does not support this, introducing a new callback seems inevitable.
Rewriting the whole TensorBoard callback here would take a lot of space, so I will just extend the original TensorBoard and write out the parts that differ (which is already quite long). But to avoid duplicated computation and model saving, rewriting the TensorBoard callback would be a better and cleaner way.
import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector
from keras import backend as K
from keras.models import Model
from keras.callbacks import TensorBoard

class TensorResponseBoard(TensorBoard):
    def __init__(self, val_size, img_path, img_size, **kwargs):
        super(TensorResponseBoard, self).__init__(**kwargs)
        self.val_size = val_size   # number of validation samples to project
        self.img_path = img_path   # sprite image file name (relative to log_dir)
        self.img_size = img_size   # [height, width] of a single sprite thumbnail

    def set_model(self, model):
        super(TensorResponseBoard, self).set_model(model)

        if self.embeddings_freq and self.embeddings_layer_names:
            embeddings = {}
            for layer_name in self.embeddings_layer_names:
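The snippet above breaks off inside that loop, so here is a self-contained sketch of how the whole callback could be completed. This is my own reconstruction rather than the original code: it assumes a TF 1.x backend (tf.train.Saver and the contrib projector), assumes embeddings_metadata is passed as a single file name, and the names embeddings_vars, response_model, response_saver and layer_response.ckpt are placeholders I chose. The idea is to allocate one variable per monitored layer, fill it with that layer's responses to the validation inputs every embeddings_freq epochs, and checkpoint those variables so the projector can display them.

import os

import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector
from keras import backend as K
from keras.models import Model
from keras.callbacks import TensorBoard

class TensorResponseBoard(TensorBoard):
    def __init__(self, val_size, img_path, img_size, **kwargs):
        super(TensorResponseBoard, self).__init__(**kwargs)
        self.val_size = val_size   # number of validation samples to project
        self.img_path = img_path   # sprite image file name (relative to log_dir)
        self.img_size = img_size   # [height, width] of a single sprite thumbnail

    def set_model(self, model):
        super(TensorResponseBoard, self).set_model(model)

        if self.embeddings_freq and self.embeddings_layer_names:
            # one variable per monitored layer, holding that layer's response
            # for every validation sample (val_size rows)
            self.embeddings_vars = {}
            for layer_name in self.embeddings_layer_names:
                layer = self.model.get_layer(layer_name)
                output_dim = int(layer.output.shape[-1])
                self.embeddings_vars[layer_name] = tf.Variable(
                    tf.zeros([self.val_size, output_dim]),
                    name=layer_name + '_response')

            self.response_saver = tf.train.Saver(list(self.embeddings_vars.values()))
            self.response_ckpt_path = os.path.join(self.log_dir, 'layer_response.ckpt')

            # an auxiliary model mapping the network input to the monitored layer outputs
            self.response_model = Model(
                self.model.inputs,
                [self.model.get_layer(name).output
                 for name in self.embeddings_layer_names])

            # point the projector at these variables and attach the sprite/metadata
            config = projector.ProjectorConfig()
            for layer_name, var in self.embeddings_vars.items():
                embedding = config.embeddings.add()
                embedding.tensor_name = var.name
                embedding.metadata_path = self.embeddings_metadata  # e.g. 'metadata.tsv'
                embedding.sprite.image_path = self.img_path         # e.g. 'images.jpg'
                embedding.sprite.single_image_dim.extend(self.img_size)
            projector.visualize_embeddings(self.writer, config)

    def on_epoch_end(self, epoch, logs=None):
        super(TensorResponseBoard, self).on_epoch_end(epoch, logs)

        if self.embeddings_freq and epoch % self.embeddings_freq == 0:
            # feed the validation inputs through the auxiliary model, copy each
            # layer's response into its variable, and checkpoint the variables
            # so the projector can display them
            val_inputs = self.validation_data[0][:self.val_size]
            responses = self.response_model.predict(val_inputs)
            if len(self.embeddings_layer_names) == 1:
                responses = [responses]
            K.batch_set_value(
                [(self.embeddings_vars[name], resp)
                 for name, resp in zip(self.embeddings_layer_names, responses)])
            self.response_saver.save(K.get_session(), self.response_ckpt_path, epoch)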
To use it:
tb = TensorResponseBoard(log_dir=log_dir, histogram_freq=10, batch_size=10,
                         write_graph=True, write_grads=True, write_images=True,
                         embeddings_freq=10, embeddings_layer_names=['dense_1'],
                         embeddings_metadata='metadata.tsv',
                         val_size=len(x_test), img_path='images.jpg', img_size=[28, 28])
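Since val_size is set to len(x_test), the validation set is what gets projected, so x_test also has to be passed to fit() as validation data along with the callback. A minimal usage sketch (the epoch count and the x_train/y_train names are placeholders of mine):

model.fit(x_train, y_train,
          batch_size=32,
          epochs=100,
          validation_data=(x_test, y_test),
          callbacks=[tb])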
Before launching TensorBoard, you need to save the labels and images to log_dir for the visualization:
import os
import numpy as np
from PIL import Image

# build a 100x100 sprite grid of 28x28 digits (this assumes x_test holds the
# 10000 MNIST test images scaled to [0, 1])
img_array = x_test.reshape(100, 100, 28, 28)
img_array_flat = np.concatenate([np.concatenate([x for x in row], axis=1)
                                 for row in img_array])
img = Image.fromarray(np.uint8(255 * (1. - img_array_flat)))
img.save(os.path.join(log_dir, 'images.jpg'))

# one class label per row, taken from the one-hot encoded y_test
np.savetxt(os.path.join(log_dir, 'metadata.tsv'), np.where(y_test)[1], fmt='%d')
Here is the result:
