Ollama extends support to AMD graphics cards for running large language models locally
The open source tool Ollama, known for running large language models such as Llama 2, Mistral, and Gemma locally, has announced preview support for AMD graphics cards on Windows and Linux. This means all of Ollama's features can now be accelerated by AMD GPUs.
Previously, Ollama offered GPU acceleration only on NVIDIA graphics cards via CUDA. With the release of version 0.1.29, the tool extends compatibility to several AMD graphics cards via ROCm.
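In practice, the workflow is the same regardless of GPU vendor. A minimal sketch, assuming Ollama 0.1.29 or later is installed and the AMD ROCm drivers are set up (the model name `llama2` is just an example):

```shell
# Assumes Ollama 0.1.29+ with working AMD ROCm drivers.
# Ollama detects a supported GPU automatically; no AMD-specific flags are needed.

# Download a model (example: Llama 2)
ollama pull llama2

# Run it interactively; inference is offloaded to the GPU when one is detected
ollama run llama2

# Or send a one-off prompt non-interactively
ollama run llama2 "Summarize what ROCm is in one sentence."
```

If the GPU is not being used, the server logs (printed by `ollama serve`) are the usual place to check which accelerator, if any, was detected.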
The list of supported AMD graphics cards is extensive, including a wide range of AMD Radeon RX graphics cards, many AMD Radeon PRO graphics cards, and a selection of AMD Instinct graphics cards, with the full list available in the announcement post.
The 0.1.29 update also brings other enhancements and bug fixes that further improve the tool.