Ollama debuts 'ollama launch' to run coding tools with local or cloud models

Ollama has released 'ollama launch', a command-line tool that lets users run coding assistants such as Claude Code and Codex with local or cloud-based language models, with no environment variables or configuration files required. The update supports both local models like glm-4.7-flash and qwen3-coder and cloud models such as glm-4.7:cloud and gpt-oss:120b-cloud. Coding sessions can now run for up to five hours, and Ollama recommends a context length of at least 64,000 tokens for optimal performance.
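A minimal sketch of how this might look on the command line. The bare `ollama launch` command is named in the announcement; passing the assistant name as an argument and the `/set parameter num_ctx` step for raising the context length are assumptions based on Ollama's existing CLI conventions, not confirmed syntax.

```
# Open the interactive picker and choose an assistant
# (e.g. Claude Code or Codex) plus a local or cloud model.
ollama launch

# Assumed shorthand: launch a specific assistant directly.
ollama launch claude

# Assumed: raise a local model's context window toward the
# recommended 64K tokens from an 'ollama run' session.
ollama run qwen3-coder
>>> /set parameter num_ctx 65536
```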

by Fla


Ollama facilitates local deployment of Llama 3, Code Llama, and other language models, enabling customization and offline AI development. It is well suited to building personalized AI chatbots and writing tools.
