IndexError: failed to coerce slice entry of type TensorVariable to integer

I run "ipython debugf.py" and it gave me the error message below

    IndexError                               Traceback (most recent call last)
    /home/ml/debugf.py in <module>()
          8 fff = theano.function(inputs=[index],
          9                       outputs=cost,
    ---> 10                       givens={x: train_set_x[index: index+1]})

    IndexError: failed to coerce slice entry of type TensorVariable to integer

I searched the forums with no luck. Can anyone help? Thanks!
debugf.py:

    import theano.tensor as T
    import theano
    import numpy
    index = T.lscalar()
    x = T.dmatrix()
    cost = x + index
    train_set_x = numpy.arange(100).reshape([20, 5])
    fff = theano.function(inputs=[index],
                          outputs=cost,
                          givens={x: train_set_x[index: index+1]})  # <--- Error here
2 answers

Change train_set_x to a theano.shared variable and the code works. I don’t know the reason, but it works! Hope this post can help others. Corrected code below:

    import theano.tensor as T
    import theano
    import numpy
    index = T.lscalar()
    x = T.dmatrix()
    cost = x + index
    train_set_x = numpy.arange(100.).reshape([20, 5])  # <--- change to float,
                                                       # because shared must be floatX type
    shared_x = theano.shared(train_set_x)              # <--- change to a shared variable
    fff = theano.function(inputs=[index],
                          outputs=cost,
                          givens={x: shared_x[index: index+1]})  # <--- change to shared_x
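A quick usage sketch (not part of the original answer): with this version, calling the compiled function with an ordinary Python integer works, because the slice is taken symbolically from the shared variable:

    print(fff(2))
    # prints row 2 of train_set_x with the index added, roughly:
    # [[ 12.  13.  14.  15.  16.]]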

The reason for this is that index is a symbolic tensor variable (a long scalar, as you can see on line 4 of your script). So when Python tries to build the dictionary that Theano needs for its "givens" argument, it tries to slice the numpy array with a symbolic variable, which obviously cannot be done, because the symbolic variable has no value yet (it only gets one when you pass values into the compiled function).
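To see the difference concretely, here is a minimal sketch (the names are mine, not from the original post): slicing a plain numpy array with a symbolic scalar fails immediately, whereas slicing a theano.shared variable just builds a symbolic subtensor that is evaluated later, inside a compiled function:

    import theano
    import theano.tensor as T
    import numpy

    idx = T.lscalar()
    arr = numpy.arange(100.).reshape([20, 5])

    # arr[idx: idx+1]                 # raises the IndexError above: numpy needs a
                                      # concrete integer, but idx has no value yet
    shared_arr = theano.shared(arr)
    row = shared_arr[idx: idx+1]      # fine: a symbolic expression, evaluated only when
                                      # a compiled function is given a value for idx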

As you have found, passing the data in via theano.shared is the best approach. It means all the training data can be uploaded to the GPU and then sliced/indexed on the fly to run each example.

However, you may find that you have too much training data to fit in your GPU memory, or for some other reason you don't want to use a shared variable. In that case you can simply change your function definition:

    data = T.dmatrix()                           # match the type of x
    fff = theano.function(inputs=[index, data],  # index stays an input because cost uses it
                          outputs=cost,
                          givens={x: data})

Then instead of writing

 fff(index) 

you write

    fff(index, train_set_x[index: index+1])  # index is now an ordinary Python int

Be warned, though, that transferring data to the GPU is slow, so it is much better to minimize the number of transfers if possible.
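For example (a sketch, assuming the fff defined just above), you can pass several rows per call instead of one row at a time, so the data crosses to the GPU in fewer, larger transfers:

    # one transfer of ten rows instead of ten single-row transfers
    batch = train_set_x[0:10].astype('float64')  # cast so it matches the dmatrix input
    result = fff(0, batch)                       # index is an ordinary Python int here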

