
As a web developer, I am growing increasingly interested in data science/machine learning, enough that I have decided to build a lab at home.

I have discovered the Quadro RTX 4000, and am wondering how well it would run ML frameworks on Ubuntu Linux. Are the correct drivers available on Linux so that this card can take advantage of ML frameworks?

LINUX X64 (AMD64/EM64T) DISPLAY DRIVER

This is the only driver that I could find, but it is a "Display Driver", so I am not sure whether it enables ML frameworks to use this GPU for acceleration. Will it work with Intel-based processors?
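For context, this is the kind of check I am hoping would pass once everything is set up (a sketch, assuming PyTorch as the framework; any CUDA-enabled framework has an equivalent):

```python
# Sketch, assuming PyTorch installed with CUDA support:
# confirms whether the framework can see the GPU through the driver stack.
import torch

print(torch.cuda.is_available())          # True if the driver/CUDA stack works
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "Quadro RTX 4000"
```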

Any guidance would be greatly appreciated.

crayden

1 Answer


You seem to be looking at the latest Quadro RTX 4000, which has the following compute capability:

(screenshot: Nvidia's compute capability table, showing the Quadro RTX 4000 at 7.5)

You can find the complete list for all Nvidia GPUs on Nvidia's CUDA GPUs page.

While it has an impressive compute capability of 7.5 (the same as the RTX 2080 Ti), the main drawback is its 8 GB of memory. This is definitely enough to get started with ML/DL and will allow you to do many things. However, memory is often the thing that will slow you down and limit your models.
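As a quick sanity check once the card is in the machine, something like the following (a minimal sketch, assuming PyTorch with CUDA support) reports the device's name, compute capability, and total memory:

```python
# Sketch, assuming PyTorch built with CUDA support:
# report the GPU's name, compute capability, and total memory.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(props.name)                                           # e.g. "Quadro RTX 4000"
    print(f"compute capability: {props.major}.{props.minor}")   # expect 7.5
    print(f"memory: {props.total_memory / 1024**3:.1f} GB")     # expect ~8 GB
```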

The reason is that a large model requires a large number of parameters. Take a look at the following table (models included in Keras), where you can see the number of parameters each model requires:

(screenshot: table of the Keras applications models and their parameter counts)
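If you want to reproduce those counts yourself, here is a minimal sketch using the Keras applications module (built with random weights, so nothing is downloaded):

```python
# Sketch: print parameter counts for a few models in tf.keras.applications.
from tensorflow.keras import applications

for build in (applications.MobileNetV2, applications.ResNet50, applications.VGG16):
    model = build(weights=None)  # random init, no weight download
    print(f"{model.name}: {model.count_params():,} parameters")
```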

The issue is that the more parameters you have, the more memory you need, and so the smaller the batch size you are able to use during training. There are many arguments for larger vs. smaller batch sizes, but having less memory will force you to stick to smaller batch sizes when using large models.
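To give a rough sense of the arithmetic (a back-of-envelope sketch, not an exact accounting): the weights alone take parameters × bytes per parameter, and training adds gradients, optimiser state, and activations on top, with the activations scaling with the batch size:

```python
# Back-of-envelope sketch: memory for the weights of VGG16 (~138M parameters).
params = 138_000_000
bytes_per_param = 4  # float32

weights_gb = params * bytes_per_param / 1024**3
print(f"~{weights_gb:.2f} GB for the weights alone")  # ~0.51 GB

# Training roughly adds gradients (same size as the weights) and optimiser
# state (e.g. Adam keeps two extra values per parameter), before activations:
training_gb = weights_gb * (1 + 1 + 2)
print(f"~{training_gb:.2f} GB before activations")    # ~2 GB of an 8 GB card
```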

It seems from Nvidia's marketing that the Quadro product line is aimed more towards creative professionals (film/image editing etc.), whereas the GeForce line is for gaming and AI. This suggests that Quadro cards are not necessarily optimised for fast ML computation.

n1k31t4