Almost all ML software that uses GPUs works best with CUDA, so NVIDIA GPUs are the preferred choice.
Take a look at this discussion. There is also an article about which GPU to get for deep learning (modern neural networks). Relevant quote:
Which graphics processor should I get? NVIDIA or AMD?
NVIDIA's standard libraries made it easy to build the first deep learning libraries in CUDA, while AMD's OpenCL had no such powerful standard libraries. Right now there are no good deep learning libraries for AMD cards, so NVIDIA it is. Even if some OpenCL libraries become available in the future, I would stick with NVIDIA: the GPU computing (GPGPU) community is very large for CUDA and rather small for OpenCL. Thus, good open-source solutions and solid advice for your programming are readily available in the CUDA community.
The reason NVIDIA leads is that it puts a lot of effort into supporting scientific computing (cuDNN, for example), which shows that it recognizes the field and is actively catering to these applications.
So NVIDIA makes a lot of GPUs. Which one should you get?
Short answer, based on the article cited above (I highly recommend reading it!): GTX 980.
In fact, the number of cores is not that important. GPUs do not have a ton of memory, so communication with the host (your RAM) is inevitable. Therefore, the amount of on-board memory (so you can load and process more data at once) and the memory bandwidth (so you do not spend a lot of time waiting) are what matter.
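To make that concrete, here is a minimal CUDA sketch (assuming the CUDA toolkit and nvcc are installed; names and output format are just for illustration) that queries each device's on-board memory and derives its theoretical peak memory bandwidth from the memory clock and bus width, the two numbers argued above to matter most:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // memoryClockRate is reported in kHz and memoryBusWidth in bits;
        // the factor 2 accounts for DDR memory (two transfers per clock).
        double peak_gb_per_s =
            2.0 * prop.memoryClockRate * (prop.memoryBusWidth / 8.0) / 1.0e6;
        printf("Device %d: %s\n", i, prop.name);
        printf("  On-board memory:       %.1f GiB\n",
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        printf("  Theoretical bandwidth: %.1f GB/s\n", peak_gb_per_s);
    }
    return 0;
}
```

Compile it with nvcc and run it on any candidate machine to compare cards on exactly these two metrics rather than on core count alone.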