Ollama extends support to AMD graphics cards for running large language models locally

The open-source tool Ollama, known for running large language models such as Llama 2, Mistral, and Gemma locally, has just announced preview support for AMD graphics cards on Windows and Linux. With this change, all Ollama features can be accelerated by AMD GPUs.

Previously, Ollama supported GPU acceleration only on NVIDIA graphics cards via CUDA. With the release of version 0.1.29, the tool extends its compatibility to a range of AMD graphics cards through ROCm.
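For users with a supported AMD card, nothing changes in day-to-day usage: once Ollama 0.1.29 or later is installed, the usual commands pick up the GPU automatically. The session below is an illustrative sketch, not from the announcement; the model name "llama2" and the prompt are examples, and it assumes the `ollama` CLI is on your PATH.

```shell
# Hedged sketch: assumes Ollama v0.1.29+ is installed; model name and
# prompt are illustrative examples.
if command -v ollama >/dev/null 2>&1; then
    ollama_present=1
    ollama --version                           # should report 0.1.29 or later
    ollama run llama2 "Why is the sky blue?"   # uses the AMD GPU via ROCm when supported
else
    ollama_present=0
    echo "ollama CLI not found; install it first"
fi
```

No AMD-specific flags are needed: Ollama detects a supported ROCm device at startup and falls back to CPU inference otherwise.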

The list of supported AMD graphics cards is extensive, including a wide range of AMD Radeon RX graphics cards, many AMD Radeon PRO graphics cards, and a selection of AMD Instinct graphics cards, with the full list available in the announcement post.

The 0.1.29 update of Ollama also introduces enhancements and addresses various bugs, further expanding and improving the tool's functionality.

by Paul

