How to implement weighted binary cross-entropy in Theano?

My convolutional neural network predicts values between 0 and 1 (sigmoid output).

I want to punish my predictions this way:

    Truth = 0  →  weight 0.2
    Truth = 1  →  weight 0.4

Basically, I want to punish MORE when the model predicts 0, but the truth was 1.

Question: How can I create this weighted binary cross-entropy function using Theano and Lasagne?

I tried the code below:

    import theano
    import theano.tensor as T
    import lasagne

    prediction = lasagne.layers.get_output(model)

    def weighted_crossentropy(predictions, targets):
        # Copy the tensor
        tgt = targets.copy("tgt")
        # Make it a vector
        # tgt = tgt.flatten()
        # tgt = tgt.reshape(3000)
        # tgt = tgt.dimshuffle(1, 0)
        newshape = (T.shape(tgt)[0])
        tgt = T.reshape(tgt, newshape)
        # Process it so [index] < 0.5 = 0, and [index] >= 0.5 = 1
        # Make it an integer.
        tgt = T.cast(tgt, 'int32')
        weights_per_label = theano.shared(lasagne.utils.floatX([0.2, 0.4]))
        weights = weights_per_label[tgt]  # returns a targets-shaped weight matrix
        loss = lasagne.objectives.aggregate(
            T.nnet.binary_crossentropy(predictions, tgt), weights=weights)
        return loss

    loss_or_grads = weighted_crossentropy(prediction, self.target_var)

But I get this error below:

    TypeError: The new shape in reshape must be a vector or a list/tuple of scalar.
    Got Subtensor{int64}.0 after conversion to vector.


Link: https://github.com/fchollet/keras/issues/2115

Link: https://groups.google.com/forum/#!topic/theano-users/R_Q4uG9BXp8

python theano keras lasagne
2 answers

Thanks to the developers on the Lasagne users group, I fixed this by building my own loss function:

    loss_or_grads = -(customized_rate * target_var * tensor.log(prediction)
                      + (1.0 - target_var) * tensor.log(1.0 - prediction))
    loss_or_grads = loss_or_grads.mean()
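The effect of that weighting can be checked in plain NumPy (a sketch, not the Theano graph above; `pos_weight` here plays the role of `customized_rate` and is an assumed name):

```python
import numpy as np

def weighted_binary_crossentropy(predictions, targets, pos_weight):
    """Mean binary cross-entropy with the true-positive term scaled by pos_weight."""
    eps = 1e-7  # clip to avoid log(0)
    predictions = np.clip(predictions, eps, 1.0 - eps)
    per_sample = -(pos_weight * targets * np.log(predictions)
                   + (1.0 - targets) * np.log(1.0 - predictions))
    return per_sample.mean()

targets = np.array([1.0, 0.0])
predictions = np.array([0.1, 0.9])  # both predictions are equally wrong

plain = weighted_binary_crossentropy(predictions, targets, pos_weight=1.0)
weighted = weighted_binary_crossentropy(predictions, targets, pos_weight=2.0)
# With pos_weight > 1, the missed positive (truth 1, predicted 0.1)
# contributes more loss, so the weighted mean is larger than the plain one.
```

With `pos_weight=1.0` this reduces to standard binary cross-entropy; raising it above 1 penalizes false negatives more, which is exactly the asymmetry asked for in the question.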

To fix the reshape error:

Edit

    newshape = (T.shape(tgt)[0])
    tgt = T.reshape(tgt, newshape)

to

    newshape = (T.shape(tgt)[0],)
    tgt = T.reshape(tgt, newshape)

T.reshape expects the new shape as a tuple (or list). Without the trailing comma, `(T.shape(tgt)[0])` is just a scalar, hence the error.
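The underlying gotcha is plain Python, not Theano: parentheses alone do not make a tuple; the trailing comma does.

```python
newshape_wrong = (5)    # just the integer 5 -- parentheses only group
newshape_right = (5,)   # a one-element tuple

# T.reshape needs the tuple form; passing the bare scalar is what
# triggers the TypeError shown in the question.
```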

Before punishing false negatives (prediction 0, truth 1) more heavily, make sure the imbalance is not already explained by the statistics of your training data, as @uyaseen suggested.
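For example, a quick NumPy sketch for checking class balance first (toy labels; the inverse-frequency weighting shown is one common heuristic, not something prescribed by the thread):

```python
import numpy as np

labels = np.array([0, 0, 0, 1])  # toy binary target vector; use your real targets

counts = np.bincount(labels, minlength=2)  # occurrences of class 0 and class 1
freqs = counts / counts.sum()              # class frequencies
# Inverse-frequency weights: the rarer class gets the larger weight,
# so weighting only corrects for imbalance actually present in the data.
weights_per_label = 1.0 / (2.0 * freqs)
```

If the positive class is genuinely rare, weights derived this way are a more principled starting point than hand-picked constants like `[0.2, 0.4]`.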

