I created a deep convolutional neural network to classify individual pixels in an image. My training data will always be the same size (32x32x7), but my testing data can be of any size.
Github repository
Currently, my model only works on images of a fixed size. I relied heavily on the TensorFlow MNIST tutorial to build my model, and that tutorial only uses 28x28 images. How can I change the following MNIST model to accept images of any size?
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
x_image = tf.reshape(x, [-1, 28, 28, 1])
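For reference, one common way to handle variable-size inputs is to leave the spatial dimensions of the placeholder as None and keep the network fully convolutional (no dense layers tied to a fixed input size, such as the 784-element flatten above). A minimal sketch of that idea, written against the TF1-style graph API (`tf.compat.v1` is used here only so it also runs under TensorFlow 2):

```python
import tensorflow.compat.v1 as tf  # TF1-style graph API
tf.disable_eager_execution()

# Height and width are left as None, so images of any spatial size are
# accepted; only the channel count is fixed. This replaces the flat
# [None, 784] placeholder plus reshape from the MNIST tutorial.
x_image = tf.placeholder(tf.float32, shape=[None, None, None, 1])

# A convolution layer works for any spatial size, because its weights
# depend only on the kernel size and channel counts, not on H or W.
w_conv = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))
h_conv = tf.nn.relu(tf.nn.conv2d(x_image, w_conv,
                                 strides=[1, 1, 1, 1], padding='SAME'))
```

The dense layers `W = tf.zeros([784, 10])` and `b` would then have to be replaced by 1x1 convolutions (or global pooling) for the network to stay size-agnostic, since a per-pixel classifier like the one described does not need a fixed-size flatten at all.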
To make things a little more complicated, my model uses transposed convolutions, where the output shape must be specified. How can I adjust the following line of code so that the transposed convolution produces an output with the same shape as the input?
DeConnv1 = tf.nn.conv3d_transpose(layer1, filter = w, output_shape = [1,32,32,7,1], strides = [1,2,2,2,1], padding = 'SAME')
python deep-learning tensorflow conv-neural-network deconvolution
Devin haslam