Mini-batch training in TensorFlow

How can I train a network in TensorFlow with mini-batches of data? In the Deep MNIST tutorial, they use:

for i in range(1000):
    batch = mnist.train.next_batch(50)
    train_step.run(feed_dict={x: batch[0], y_: batch[1]})

My question is: are the variables x and y_ sized for a single example, with batch[0] and batch[1] being the lists of inputs and outputs? In that case, will TensorFlow automatically accumulate the gradients for each training example in those lists? Or should I build my model so that x and y_ receive the whole mini-batch?

My problem is that when I try to feed a list to each placeholder, it tries to feed the entire list into a placeholder sized for a single example, so I get a shape mismatch: Cannot feed value of shape (n, m) for Tensor u'ts:0', which has shape '(m,)', where n is the mini-batch size and m is the size of a single input.

Thanks.

1 answer

In the MNIST tutorial, x and y_ are placeholders with a specific shape:

x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])

shape=[None, 784] means this placeholder has 2 dimensions.

So, to answer your first question:

are the variables x and y_ sized for a single example

The first dimension can contain an undefined number of elements (so 1, 2, ..., 50, ...), and the second dimension must contain exactly 784 = 28 * 28 elements (the features of a single MNIST image).

If you feed the graph a Python list with shape [1, 784] or [50, 784], it is exactly the same to TensorFlow: it handles either without any problem.
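For example, here is a minimal sketch (assuming TensorFlow 1.x, or TensorFlow 2.x through the tf.compat.v1 interface with eager execution disabled; the dummy data and the doubling op are only illustrations) that feeds both a single example and a batch of 50 to the same placeholder:

import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

x = tf.placeholder(tf.float32, shape=[None, 784])
doubled = x * 2.0  # any op built on x accepts either feed shape

with tf.Session() as sess:
    one_example = np.zeros((1, 784), dtype=np.float32)
    mini_batch = np.zeros((50, 784), dtype=np.float32)
    print(sess.run(doubled, feed_dict={x: one_example}).shape)  # (1, 784)
    print(sess.run(doubled, feed_dict={x: mini_batch}).shape)   # (50, 784)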

batch[0], batch[1] - are these the lists of such inputs and outputs? In the tutorial, the batch is defined by calling batch = mnist.train.next_batch(50). Thus (a quick check is sketched after this list):

  • batch[0] has shape [50, 784]
  • batch[1] has shape [50, 10]
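A quick way to confirm the shapes yourself, assuming the TF 1.x tutorial's input_data helper (it downloads MNIST into MNIST_data/ on first use):

from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
batch = mnist.train.next_batch(50)
print(batch[0].shape)  # (50, 784) - flattened pixel features
print(batch[1].shape)  # (50, 10)  - one-hot labels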

will TensorFlow automatically accumulate the gradients for each training example in those lists, or should I build my model so that x and y_ receive the whole mini-batch?

TensorFlow handles this for you: define the placeholders with a leading None dimension as above, feed the whole mini-batch, and the gradient over all the examples in the batch is computed in a single step.
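As a sketch following the tutorial's softmax-regression setup (same placeholder shapes as above; the optimizer and learning rate are the tutorial's defaults, not anything specific to your model): because the loss is reduced with tf.reduce_mean over the batch dimension, a single train_step.run() on a fed batch already uses the gradient averaged over all 50 examples, so no manual accumulation is needed.

import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])

W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

# Mean over the batch dimension: the resulting gradient is averaged over the batch.
cross_entropy = tf.reduce_mean(
    -tf.reduce_sum(y_ * tf.log(y), axis=1))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)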

The error you are getting, Cannot feed value of shape (n, m) for Tensor u'ts:0', which has shape '(m,)', is a shape mismatch.

It means you defined your placeholder with the shape of a single example, (m,), but you are feeding it a whole mini-batch of shape (n, m). Define the placeholder with shape [None, m] instead, so its first dimension can hold any number of examples.
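A minimal sketch of the mismatch (the placeholder name ts and the sizes here are illustrative assumptions, not taken from your code):

import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

m = 784                                              # size of one input
batch_inputs = np.zeros((50, m), dtype=np.float32)   # a mini-batch of 50

bad = tf.placeholder(tf.float32, shape=[m], name="ts")         # one example only
good = tf.placeholder(tf.float32, shape=[None, m], name="ok")  # any batch size

with tf.Session() as sess:
    sess.run(good * 1.0, feed_dict={good: batch_inputs})  # works
    # Uncommenting the next line raises:
    # ValueError: Cannot feed value of shape (50, 784) for Tensor 'ts:0',
    # which has shape '(784,)'
    # sess.run(bad * 1.0, feed_dict={bad: batch_inputs})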
