What is the best way to use the GPU with TensorFlow Estimators?

I used TensorFlow (CPU version) for my deep learning model, in particular the DNNRegressor Estimator for training with a given set of parameters (network structure, hidden layers, alpha, etc.). Although I was able to reduce the loss, the model took a very long time to train (about 3 days), at roughly 9 seconds per 100 steps.


I came across this article: https://medium.com/towards-data-science/how-to-traine-tensorflow-models-79426dabd304 and found that training on a GPU could be faster. So I launched a p2.xlarge GPU instance on AWS (single GPU) with 4 vCPUs, 12 ECUs and 61 GiB of memory.

But the training speed is still 9 seconds per 100 steps. I use the same code that I used for the Estimators on the CPU, because I read that Estimators use the GPU on their own. Below is the output of the nvidia-smi command.

  • It shows that GPU memory is being used, but my Volatile GPU-Util is 1%. I can't figure out what I'm missing. Is it intended to work this way, or am I doing something wrong, given that the global steps per second are the same for the CPU and GPU runs of the Estimator?
  • Do I have to explicitly change something in the DNNRegressor code?
python neural-network tensorflow tensorflow-gpu
1 answer

It looks like you are reading a CSV, converting it to a pandas DataFrame, and then using TensorFlow's pandas_input_fn. This is a known issue with the pandas_input_fn implementation; you can track it at https://github.com/tensorflow/tensorflow/issues/13530 .
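For reference, the pattern in question looks roughly like this (the file name, column names and hyperparameters below are illustrative, not taken from your setup):

```python
import pandas as pd
import tensorflow as tf

# Illustrative CSV with a target column named "label" (assumed schema).
df = pd.read_csv("train.csv")
labels = df.pop("label")

feature_columns = [tf.feature_column.numeric_column(c) for c in df.columns]

estimator = tf.estimator.DNNRegressor(
    feature_columns=feature_columns,
    hidden_units=[64, 32])

# pandas_input_fn feeds data through Python-side feeding, which is
# what keeps the GPU starved and GPU-Util near 0%.
train_input_fn = tf.estimator.inputs.pandas_input_fn(
    x=df, y=labels, batch_size=128, num_epochs=None, shuffle=True)

estimator.train(input_fn=train_input_fn, steps=10000)
```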

To work around this, you can use another method for I/O (e.g. reading from TFRecords). If you want to keep using pandas and increase your steps/second, you can reduce your batch_size, although this can have negative consequences for your estimator's ability to learn.
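Here is a minimal sketch of a TFRecord-based input_fn using tf.data (TF 1.x API). The file name, feature keys and shapes are assumptions; adapt the parsing spec to however you serialize your records:

```python
import tensorflow as tf

def parse_example(serialized):
    # Feature spec is an assumption; match it to how the TFRecords were written.
    spec = {
        "features": tf.FixedLenFeature([10], tf.float32),
        "label": tf.FixedLenFeature([], tf.float32),
    }
    parsed = tf.parse_single_example(serialized, spec)
    return {"features": parsed["features"]}, parsed["label"]

def train_input_fn():
    dataset = tf.data.TFRecordDataset(["train.tfrecords"])  # hypothetical file
    dataset = (dataset
               .map(parse_example, num_parallel_calls=4)
               .shuffle(buffer_size=10000)
               .repeat()
               .batch(128)
               .prefetch(1))  # keep the GPU fed while the CPU prepares the next batch
    # Return tensors for compatibility with older Estimator versions.
    return dataset.make_one_shot_iterator().get_next()

feature_columns = [tf.feature_column.numeric_column("features", shape=[10])]
estimator = tf.estimator.DNNRegressor(
    feature_columns=feature_columns,
    hidden_units=[64, 32])

estimator.train(input_fn=train_input_fn, steps=10000)
```

With this kind of pipeline the input work runs in the TensorFlow graph rather than in Python, so the GPU no longer waits on feeding and steps/second should improve noticeably.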
