I am working on a class-incremental classifier that uses a CNN as a feature extractor and a fully connected block for classification.
First, I fine-tune a pretrained VGG network for each new task. Whenever the network learns a new task, I keep a few exemplars for each old class so that it does not forget them when new classes become available.
When new classes become available, I first compute the current network's output for every instance in the exemplar set and for the new-class data. Then I pad the outputs of the old-class exemplars with zeros for the new classes, and for the instances of the new classes I append a one-hot label for the corresponding class. These become my new target vectors. For example, if 3 new classes are introduced:

Old-class exemplar: [0.1, 0.05, 0.79, ..., 0, 0, 0]
New-class instance: [0.1, 0.09, 0.3, 0.4, ..., 1, 0, 0]  (the last three entries are the one-hot label of the new class)
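A rough sketch of how I build these targets (the function name and shapes are just mine for illustration, not from any library):

```python
import numpy as np

def build_targets(old_outputs, new_num_classes, new_labels=None):
    """Pad the old-class soft outputs with zeros for the new classes and,
    for instances of a new class, set a 1 in the slot of that class."""
    n, old_num_classes = old_outputs.shape
    targets = np.concatenate([old_outputs, np.zeros((n, new_num_classes))], axis=1)
    if new_labels is not None:
        # new_labels are 0-based indices among the new classes only
        targets[np.arange(n), old_num_classes + np.asarray(new_labels)] = 1.0
    return targets

# Old-class exemplar -> [0.1, 0.05, 0.79, 0, 0, 0]
print(build_targets(np.array([[0.1, 0.05, 0.79]]), 3))
# Instance of the first new class -> [0.1, 0.09, 0.3, 1, 0, 0]
print(build_targets(np.array([[0.1, 0.09, 0.3]]), 3, new_labels=[0]))
```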
My question is: how can I replace the loss function with a custom one that trains on these new targets? The loss I want to implement is of the form:

Loss = distillation loss (old-class outputs) + classification loss (new-class outputs)

where the distillation term is computed on the outputs of the old classes to avoid forgetting, and the classification term is computed on the new classes.
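Something along these lines is what I have in mind (just a sketch; OLD_CLASSES, NEW_CLASSES, the weighting ALPHA and the choice of cross-entropy for both terms are placeholders I picked, and I am not sure they are right):

```python
from tensorflow import keras

OLD_CLASSES = 10   # placeholder: number of classes learned so far
NEW_CLASSES = 3    # placeholder: number of newly added classes
ALPHA = 0.5        # placeholder: weight balancing the two terms

def incremental_loss(y_true, y_pred):
    """Distillation on the old-class block plus classification on the
    new-class block, assuming the target layout described above."""
    old_true, new_true = y_true[:, :OLD_CLASSES], y_true[:, OLD_CLASSES:]
    old_pred, new_pred = y_pred[:, :OLD_CLASSES], y_pred[:, OLD_CLASSES:]

    # Distillation term: cross-entropy against the stored soft outputs
    # of the previous model for the old classes.
    distill = keras.losses.categorical_crossentropy(old_true, old_pred)

    # Classification term: cross-entropy against the one-hot new-class labels;
    # it is zero for old-class exemplars, whose new-class block is all zeros.
    classify = keras.losses.categorical_crossentropy(new_true, new_pred)

    return ALPHA * distill + (1.0 - ALPHA) * classify

# model.compile(optimizer="adam", loss=incremental_loss)
```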
If you could provide sample code showing how to set a custom loss function like this in Keras, that would be great.
Thanks!!!!!
deep-learning computer-vision keras conv-neural-network loss-function
Eric