We modified the convolutional neural network (CNN) from the CIFAR-10 tutorial to work on a face database for gender classification. We read here that "parameter sharing" rests on the assumption that a feature detector which is useful in one patch of the image is useful regardless of its location in the image, with one exception:
> Note that sometimes the parameter sharing assumption may not make sense. This is especially the case when the input images to a ConvNet have some specific centered structure, where we should expect, for example, that completely different features should be learned on one side of the image than another. One practical example is when the input are faces that have been centered in the image.
Goal: we would therefore like to disable parameter sharing in our CNN.
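To make the goal concrete, here is a small back-of-the-envelope sketch (our own illustration, not tutorial code) of what disabling parameter sharing implies for the number of weights; all shapes here are hypothetical:

```python
# Hypothetical layer shapes, chosen only for illustration.
k, c_in, c_out = 5, 3, 64        # kernel size, input channels, output channels
h_out, w_out = 24, 24            # spatial size of the layer's output

# With parameter sharing (a standard convolution): one kernel reused at
# every image position.
shared_params = k * k * c_in * c_out                     # 4,800

# Without parameter sharing (a "locally connected" layer): a separate
# kernel for each of the h_out * w_out output positions.
unshared_params = h_out * w_out * k * k * c_in * c_out   # 2,764,800

print(shared_params, unshared_params)
```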
The code
We think the CIFAR-10 tutorial uses parameter sharing, and this part of the code in the `inference(images)` function seems to be related to it:
```python
biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0))
bias = tf.nn.bias_add(conv, biases)
```
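For context, those two lines sit inside the first convolution block of cifar10.py, which (quoting roughly from memory, so details may differ) looks like this; if sharing happens anywhere, it would be in `tf.nn.conv2d`, which slides the single kernel variable over every position of the image:

```python
with tf.variable_scope('conv1') as scope:
    # A single 5x5x3 kernel bank with 64 output channels...
    kernel = _variable_with_weight_decay('weights', shape=[5, 5, 3, 64],
                                         stddev=5e-2, wd=0.0)
    # ...reused (shared) at every spatial position by the convolution.
    conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
    biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0))
    bias = tf.nn.bias_add(conv, biases)
    conv1 = tf.nn.relu(bias, name=scope.name)
```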
These lines call the helper `_variable_on_cpu`:
```python
def _variable_on_cpu(name, shape, initializer):
    with tf.device('/cpu:0'):
        var = tf.get_variable(name, shape, initializer=initializer)
    return var
```
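As far as we can tell, this helper only controls device placement (it creates the variable in CPU memory) and says nothing about sharing as such. What we imagine "no parameter sharing" would look like is something like the following locally connected layer, where `tf.get_variable` allocates a separate filter for every output position. This is purely our own untested sketch with hypothetical names; stride 1 and VALID padding assumed:

```python
import tensorflow as tf

def _locally_connected(images, k, c_out):
    """Convolution-like layer WITHOUT parameter sharing: each output
    position (i, j) gets its own k x k x c_in filter bank."""
    _, h, w, c_in = images.get_shape().as_list()
    h_out, w_out = h - k + 1, w - k + 1
    # One filter per output position instead of a single shared kernel.
    weights = tf.get_variable(
        'unshared_weights',
        shape=[h_out, w_out, k * k * c_in, c_out],
        initializer=tf.truncated_normal_initializer(stddev=5e-2))
    # Receptive field at each position: [batch, h_out, w_out, k*k*c_in].
    patches = tf.extract_image_patches(images,
                                       ksizes=[1, k, k, 1],
                                       strides=[1, 1, 1, 1],
                                       rates=[1, 1, 1, 1],
                                       padding='VALID')
    # Position-specific matmul over the patch dimension.
    return tf.einsum('nijp,ijpo->nijo', patches, weights)
```

(Keras has a `LocallyConnected2D` layer based on the same idea.)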
Questions
- Does parameter sharing actually happen in the CIFAR-10 tutorial?
- Are we looking at the right part of the code for disabling parameter sharing, and if not, where else should we look?
- Any other recommendations/suggestions are welcome, because we do not know where to start.
deep-learning tensorflow
NumesSanguis