Questions tagged [gpu]

Within the context of machine learning, Graphics Processing Units (GPUs) usually come up in connection with hardware requirements, design considerations, or the degree of parallelization available when implementing and running various machine learning algorithms. Due to the size of many data sets and the complexity of cutting-edge learning techniques (deep learning, reinforcement learning, neural networks) applied to various use cases (audio, video, signal processing), GPUs are often required to carry out these computations. External GPU processing power can also be accessed through third-party cloud-based platforms such as Google Colab or Amazon Web Services.

173 questions
44 votes · 4 answers

Multi GPU in Keras

How can we program the Keras library (or TensorFlow) to partition training across multiple GPUs? Let's say that you are on an Amazon EC2 instance that has 8 GPUs and you would like to use all of them to train faster, but your code is just for a…
Hector Blandin • 579 • 1 • 7 • 11
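In current TensorFlow 2.x / Keras, one common way to do this is tf.distribute.MirroredStrategy, which replicates the model on every visible GPU and splits each batch across them. A minimal sketch, with a placeholder model and random data standing in for the real training code:

    import numpy as np
    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()        # uses all visible GPUs
    print("replicas in sync:", strategy.num_replicas_in_sync)

    with strategy.scope():
        # Build and compile inside the scope so variables are mirrored on each GPU.
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu", input_shape=(20,)),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")

    # Random data just to make the sketch runnable; replace with the real dataset.
    x = np.random.rand(1024, 20).astype("float32")
    y = np.random.rand(1024, 1).astype("float32")
    model.fit(x, y, batch_size=256, epochs=1)          # each batch is split across the replicas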
41 votes · 3 answers

Choosing between CPU and GPU for training a neural network

I've seen discussions about the 'overhead' of a GPU, and that for 'small' networks, it may actually be faster to train on a CPU (or network of CPUs) than a GPU. What is meant by 'small'? For example, would a single-layer MLP with 100 hidden units…
StatsSorceress • 2,021 • 3 • 16 • 30
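One empirical way to pin down what 'small' means on your own hardware is to time the same work on both devices. The rough sketch below benchmarks dense matrix multiplications of a few arbitrary sizes on CPU and GPU; for small problems the GPU's launch and transfer overhead tends to dominate, which is the effect the question describes:

    import time
    import tensorflow as tf

    def bench(device, n, reps=10):
        """Average time of an n x n matrix multiplication on the given device."""
        with tf.device(device):
            a = tf.random.normal((n, n))
            b = tf.random.normal((n, n))
            _ = tf.matmul(a, b)                 # warm-up (allocation, kernel launch)
            start = time.time()
            for _ in range(reps):
                c = tf.matmul(a, b)
            _ = c.numpy()                       # force the computation to finish
            return (time.time() - start) / reps

    for n in (64, 512, 2048):
        line = f"n={n:5d}   CPU {bench('/CPU:0', n) * 1e3:8.2f} ms"
        if tf.config.list_physical_devices("GPU"):
            line += f"   GPU {bench('/GPU:0', n) * 1e3:8.2f} ms"
        print(line)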
41 votes · 8 answers

Using TensorFlow with Intel GPU

Is there any way now to use TensorFlow with Intel GPUs? If yes, please point me in the right direction. If not, please let me know which framework, if any (Keras, Theano, etc.), I can use for my Intel Corporation Xeon E3-1200 v3/4th Gen Core…
James Bond • 1,265 • 2 • 12 • 13
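Whatever framework ends up being used, a first step is to check which accelerators TensorFlow can actually see. Stock TensorFlow builds only register NVIDIA/CUDA GPUs; an Intel GPU would only show up if a vendor-provided PluggableDevice plugin (such as Intel's TensorFlow extension) is installed, and the exact device-type string depends on that plugin. A small check:

    import tensorflow as tf

    # List every accelerator this TensorFlow build can see. If only the CPU
    # is listed, all computation will run on the CPU.
    for dev in tf.config.list_physical_devices():
        print(dev.device_type, dev.name)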
37 votes · 4 answers

How to disable GPU with TensorFlow?

Using tensorflow-gpu 2.0.0rc0. I want to choose whether it uses the GPU or the CPU.
Florin Andrei • 1,130 • 1 • 9 • 13
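Two commonly used approaches with TensorFlow 2.x are hiding the GPUs from the process via the CUDA_VISIBLE_DEVICES environment variable, or making no GPU visible to TensorFlow at runtime. A minimal sketch showing both (either one alone is enough):

    # Option 1: hide the GPUs from the process (set before TensorFlow is imported).
    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

    import tensorflow as tf

    # Option 2: keep the GPUs installed but make none of them visible to
    # TensorFlow (must run before the GPUs are first used).
    tf.config.set_visible_devices([], "GPU")

    print(tf.config.get_visible_devices())   # should list only the CPU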
25 votes · 3 answers

Should I use GPU or CPU for inference?

I'm running a deep learning neural network that has been trained by a GPU. I now want to deploy this to multiple hosts for inference. The question is: what are the conditions for deciding whether I should use GPUs or CPUs for inference? Adding more…
Dan • 361 • 1 • 3 • 6
16 votes · 5 answers

R: machine learning on GPU

Are there any machine learning packages for R that can make use of the GPU to improve training speed (something like Theano from the Python world)? I see that there is a package called gputools which allows execution of code on the GPU, but I'm…
Simon • 1,071 • 2 • 10 • 28
13 votes · 1 answer

How to make my Neural Network run on GPU instead of CPU

I have installed Anaconda3 and the latest versions of Keras and TensorFlow. Running this command: from tensorflow.python.client import device_lib print(device_lib.list_local_devices()) I find the notebook is running on the CPU: [name:…
Deni Avinash • 133 • 1 • 1 • 5
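Assuming an NVIDIA card with the GPU-enabled TensorFlow build and CUDA drivers installed, a quick sanity check is to list the visible GPUs and let TensorFlow log where it places a test operation; if the list is empty, everything silently falls back to the CPU. A small sketch:

    import tensorflow as tf

    print(tf.config.list_physical_devices("GPU"))   # empty list => no usable GPU

    tf.debugging.set_log_device_placement(True)     # log the device of each op
    a = tf.random.normal((1000, 1000))
    b = tf.random.normal((1000, 1000))
    c = tf.matmul(a, b)                             # should be placed on .../GPU:0
    print(c.device)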
13 votes · 3 answers

CNN memory consumption

I'd like to be able to estimate whether a proposed model is small enough to be trained on a GPU with a given amount of memory. If I have a simple CNN architecture like this: Input: 50x50x3; C1: 32 3x3 kernels, with padding (I guess in reality they're…
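A rough way to reason about this is to count parameters and, separately, per-sample activations for each layer, then multiply the activations by the batch size (they are kept for backpropagation). A back-of-the-envelope sketch for the first layer described in the question, with an assumed batch size of 64 and float32 storage:

    bytes_per_float = 4          # float32
    batch_size = 64              # assumed; the question does not give one

    # Parameters of C1: 3*3*3 weights per kernel plus 1 bias, for 32 kernels.
    params_c1 = (3 * 3 * 3 + 1) * 32                      # = 896 parameters

    # Activations of C1 per sample: 50*50 spatial (same padding) times 32 maps.
    act_c1_per_sample = 50 * 50 * 32                      # = 80,000 floats

    act_c1_batch_bytes = act_c1_per_sample * batch_size * bytes_per_float

    print(f"C1 parameters : {params_c1} "
          f"({params_c1 * bytes_per_float / 1024:.1f} KB)")
    print(f"C1 activations: {act_c1_batch_bytes / 2**20:.1f} MB "
          f"for a batch of {batch_size}")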
11 votes · 2 answers

What is the difference between Pytorch's DataParallel and DistributedDataParallel?

I am going through this ImageNet example, and in line 88, the module DistributedDataParallel is used. When I searched for the same in the docs, I didn't find anything. However, I found the documentation for DataParallel. So, I would like to know…
Dawny33 • 8,476 • 12 • 49 • 106
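Roughly speaking, DataParallel is a single-process wrapper that splits each batch across GPUs inside forward(), while DistributedDataParallel runs one process per GPU and all-reduces gradients, which is what the PyTorch docs recommend for speed. A minimal sketch of both, with a made-up toy model; the DDP part is shown as the skeleton each process would run when launched with torchrun:

    import torch
    import torch.nn as nn

    model = nn.Linear(10, 1)   # toy model, stands in for a real network

    # --- DataParallel: a single process; replicas are created on every forward --
    if torch.cuda.device_count() > 1:
        dp_model = nn.DataParallel(model.cuda())
        out = dp_model(torch.randn(32, 10).cuda())   # the batch is split across GPUs

    # --- DistributedDataParallel: one process per GPU, launched e.g. with -------
    # `torchrun --nproc_per_node=<num_gpus> script.py`; each process would run:
    #
    #   import os
    #   import torch.distributed as dist
    #   from torch.nn.parallel import DistributedDataParallel as DDP
    #
    #   dist.init_process_group(backend="nccl")
    #   local_rank = int(os.environ["LOCAL_RANK"])
    #   torch.cuda.set_device(local_rank)
    #   ddp_model = DDP(model.cuda(), device_ids=[local_rank])
    #   # usual training loop; gradients are all-reduced across processes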
11 votes · 1 answer

GPU Accelerated Data Processing for R in Windows

I'm currently taking a paper on Big Data which has us utilising R heavily for data analysis. I happen to have a GTX 1070 in my PC for gaming reasons. Thus, I thought it would be really cool if I could use that to speed up some of the processing for…
Jesse Maher • 113 • 1 • 5
10 votes · 3 answers

Switching Keras backend Tensorflow to GPU

I use the Keras-TensorFlow combo installed with the CPU option (it was said to be more robust), but now I'd like to try the GPU version. Is there a convenient way to switch? Or shall I fully re-install TensorFlow? Is the GPU version reliable?
Hendrik • 8,767 • 17 • 43 • 55
10 votes · 1 answer

What size language model can you train on a GPU with x GB of memory?

I'm trying to figure out what size language model I will be able to train on a GPU with a certain amount of memory. Let's for simplicity say that 1 GB = 10^9 bytes; that means that, for example, on a GPU with 12 GB memory, I can theoretically fit 6…
HelloGoodbye • 213 • 1 • 2 • 7
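The basic arithmetic can be sketched as memory budget divided by bytes per parameter, keeping in mind that training also needs gradients and optimizer state, and that activations and framework overhead are ignored here. A rough worked example for a 12 GB card, taking 1 GB = 10^9 bytes as in the question:

    # Weight/optimizer memory only; activations, buffers and framework
    # overhead reduce the real numbers further.
    gpu_mem_bytes = 12e9   # a 12 GB card

    scenarios = [
        ("inference, fp32 weights", 4),      # 4 bytes per parameter
        ("inference, fp16 weights", 2),      # 2 bytes per parameter
        ("training, fp32 + Adam", 16),       # weights + grads + 2 Adam moments
    ]
    for label, bytes_per_param in scenarios:
        params = gpu_mem_bytes / bytes_per_param
        print(f"{label:>24}: ~{params / 1e9:.2f} billion parameters")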
9 votes · 1 answer

After the training phase, is it better to run neural networks on a GPU or CPU?

My understanding is that GPUs are more efficient for running neural nets, but someone recently suggested to me that GPUs are only needed for the training phase. Once trained, it's actually more efficient to run them on CPUs. Is this true?
Crashalot • 223 • 2 • 5
8 votes · 1 answer

FP16, FP32 - what is it all about? Or is it just the bit size of float values (Python)?

What is FP16 and FP32 all about in Python? My potential business partner and I are building a deep learning setup for working with time series. He came up with "FP16 and FP32" while looking for a GPU. It looks like he's talking about floating point…
Ishmael89 • 91 • 1 • 1 • 3
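FP16 and FP32 are simply IEEE-754 floating-point formats that are 16 and 32 bits wide; GPUs advertise them because half precision halves memory use and is much faster on recent hardware, at the cost of range and precision. A small NumPy illustration of the difference:

    import numpy as np

    for dtype in (np.float16, np.float32):
        info = np.finfo(dtype)
        print(dtype.__name__, "- bits:", info.bits, " max:", info.max,
              " machine eps:", info.eps)

    # Rounding error shows up at modest magnitudes in fp16:
    print(np.float16(2048) + np.float16(1))   # 2048.0 - the +1 is lost
    print(np.float32(2048) + np.float32(1))   # 2049.0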
8 votes · 3 answers

Why do I get an OOM error although my model is not that large?

I am a newbie in GPU-based training and deep learning models. I am running cDCGAN (conditional DCGAN) in TensorFlow on my 2 Nvidia GTX 1080 GPUs. My data set consists of around 320,000 images of size 64x64 and 2,350 class labels. If I set my batch…
Ammar Ul Hassan • 185 • 1 • 1 • 5
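Out-of-memory errors during training are usually dominated by per-batch activations (and their gradients) rather than by the parameter count, so a model can look small on paper and still exhaust an 8 GB card. A sketch of two things worth trying in TensorFlow 2.x: enabling memory growth, and estimating how activation memory scales with the batch size (the batch size of 128 below is an assumption, not from the question):

    import tensorflow as tf

    # Let TensorFlow allocate GPU memory as needed instead of grabbing it all.
    for gpu in tf.config.list_physical_devices("GPU"):
        tf.config.experimental.set_memory_growth(gpu, True)

    # Rough activation estimate for the question's input: 64x64x3 images, float32.
    batch = 128                                  # assumed batch size
    input_bytes = batch * 64 * 64 * 3 * 4
    print(f"input tensor alone: {input_bytes / 2**20:.1f} MB per batch")
    # Every conv/deconv layer in the generator and discriminator adds feature
    # maps of comparable or larger size, and gradients roughly double that again.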