
I wonder why training RNNs typically doesn't use 100% of the GPU.

For example, if I run this RNN benchmark on a Maxwell Titan X on Ubuntu 14.04.4 LTS x64, the GPU utilization is below 90%:

[screenshot: GPU utilization graph, staying below 90%]

The benchmark was launched using the command:

python rnn.py -n 'fastlstm' -l 1024 -s 30 -b 128

How can I diagnose what the bottleneck is?

Franck Dernoncourt

1 Answer


I get about the same utilization rate when I train models using TensorFlow. The reason is pretty clear in my case: I'm manually choosing a random batch of samples and calling the optimization for each batch separately.

That means each batch of data sits in main memory and is copied into GPU memory, where the rest of the model lives; the forward/backward pass and weight update are then performed on the GPU, after which execution is handed back to my code, where I grab another batch and call optimize on it.
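In other words, the loop looks something like this minimal sketch (TF 1.x graph/session style; the toy model and variable names are illustrative stand-ins, not the benchmark's actual code):

    import numpy as np
    import tensorflow as tf  # TF 1.x graph/session API

    batch_size, num_steps, dim = 128, 1000, 64
    train_x = np.random.rand(10000, dim).astype(np.float32)  # stand-in data in host RAM
    train_y = np.random.rand(10000, 1).astype(np.float32)

    x_ph = tf.placeholder(tf.float32, [None, dim])
    y_ph = tf.placeholder(tf.float32, [None, 1])
    pred = tf.layers.dense(x_ph, 1)  # toy model standing in for the LSTM
    loss = tf.reduce_mean(tf.square(pred - y_ph))
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for step in range(num_steps):
            # Assemble a random batch on the CPU...
            idx = np.random.randint(0, len(train_x), size=batch_size)
            # ...then feed_dict copies it host -> GPU and runs one step.
            # The GPU sits idle while Python builds and copies the next batch.
            sess.run(train_op, feed_dict={x_ph: train_x[idx], y_ph: train_y[idx]})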

There's a faster way to do it if you spend a few hours setting up TensorFlow to load batches in parallel from pre-prepared TFRecord files, as sketched below.
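For illustration only (the feature names, shapes, and file name are made up, and this uses the modern tf.data API rather than the queue runners of that era), a pipeline reading pre-written TFRecords can parse and batch on CPU threads while the GPU is busy training:

    import tensorflow as tf

    def parse_example(serialized):
        # Hypothetical feature spec; it must match how the records were written.
        features = tf.io.parse_single_example(serialized, {
            "sequence": tf.io.FixedLenFeature([128], tf.float32),
            "label": tf.io.FixedLenFeature([], tf.int64),
        })
        return features["sequence"], features["label"]

    dataset = (tf.data.TFRecordDataset(["train.tfrecords"])
               .map(parse_example, num_parallel_calls=4)  # parse on several CPU threads
               .shuffle(buffer_size=10000)
               .batch(128)
               .prefetch(2))  # keep batches queued so the GPU never waits on I/O

With Keras you can pass such a dataset straight to model.fit; in graph-mode TensorFlow you would pull batches from a dataset iterator instead of feeding them through feed_dict.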

I realize you may or may not be using TensorFlow under Keras, but since my experience tends to produce very similar utilization numbers, I'm going out on a limb and suggesting the same cause is likely at play here. If your framework loads each batch from main memory into the GPU without the added efficiency/complexity of asynchronous loading (which the GPU itself can handle), then this would be an expected result.

davidparks21