I just went through the TensorFlow tutorial ( https://www.tensorflow.org/versions/r0.8/tutorials/mnist/pros/index.html#deep-mnist-for-experts ).
I have two questions:
Why is he using a cost function with y_ * log(y)? Shouldn't it be y_ * log(y) + (1-y_) * log(1-y)?
How does TensorFlow know how to calculate the gradient for the cost function I use? Shouldn't we tell TensorFlow somewhere how to calculate the gradient?
Thanks!
For a scalar label (y_ = 1 or 0) you can use y_ * log(y) + (1-y_) * log(1-y), but when the label y_ is one-hot encoded, e.g. [0 1] or [1 0], we use y_ * log(y). In fact, they are the same: summing y_ * log(y) over both components of the one-hot vector already includes the (1-y_) * log(1-y) term.
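To make the equivalence concrete, here is a small numeric check in plain NumPy (the probability values are made up purely for illustration):

    import numpy as np

    # Made-up values, purely for illustration.
    y_ = np.array([0.0, 1.0])   # one-hot label: the example belongs to class 1
    y  = np.array([0.3, 0.7])   # predicted probabilities, summing to 1

    # Form used in the tutorial: -sum(y_ * log(y)) over the one-hot vector
    one_hot_ce = -np.sum(y_ * np.log(y))

    # Binary form with a scalar label t and probability p for class 1
    t, p = 1.0, y[1]
    binary_ce = -(t * np.log(p) + (1 - t) * np.log(1 - p))

    print(one_hot_ce, binary_ce)   # both ~0.3567, because y[0] = 1 - y[1]

Because the two class probabilities sum to 1, the "missing" (1-y_) * log(1-y) term is exactly the contribution of the other component of the one-hot vector.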
You don't have to tell TensorFlow how to calculate the gradient. TensorFlow builds a computation graph, and each node (op) in that graph has a gradient defined for it. TensorFlow then applies backpropagation (automatic differentiation) through the graph, so the gradient of whatever cost function you construct is derived automatically.
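As a minimal sketch of that point, here is the tutorial's softmax model written against the graph-mode API of that TensorFlow era (variable names mirror the tutorial; exact API details differ in later versions, e.g. TF 2.x uses tf.GradientTape instead):

    import tensorflow as tf

    # Sketch of the tutorial's softmax regression model (graph-mode API).
    x  = tf.placeholder(tf.float32, [None, 784])
    y_ = tf.placeholder(tf.float32, [None, 10])
    W  = tf.Variable(tf.zeros([784, 10]))
    b  = tf.Variable(tf.zeros([10]))

    y = tf.nn.softmax(tf.matmul(x, W) + b)
    cross_entropy = -tf.reduce_sum(y_ * tf.log(y))

    # No gradient formula is written anywhere: every op above (matmul, softmax,
    # log, reduce_sum, ...) has a registered gradient, and TensorFlow chains
    # them by backpropagation when gradients are requested.
    grads = tf.gradients(cross_entropy, [W, b])

    # The optimizer's minimize() performs the same gradient computation
    # internally before applying the parameter update.
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)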