Ollama debuts 'ollama launch' to run coding tools with local or cloud models
Ollama has released 'ollama launch', a command-line tool for running coding assistants such as Claude Code and Codex against local or cloud-hosted language models, with no environment variables or configuration files to set up by hand. The release supports local models such as glm-4.7-flash and qwen3-coder alongside cloud models such as glm-4.7:cloud and gpt-oss:120b-cloud. Coding sessions can now run for up to five hours, and Ollama recommends a context length of at least 64,000 tokens for best results.
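The announcement does not include a full walkthrough, so the following is a minimal sketch of the workflow. `ollama pull`, `ollama serve`, and the `OLLAMA_CONTEXT_LENGTH` setting are standard Ollama features; passing a tool name directly to `ollama launch` is an assumption about the command's argument form, not something confirmed by the post.

```sh
# Fetch a local coding model first (standard Ollama command).
ollama pull qwen3-coder

# Raise the server's default context window toward the recommended
# 64,000-token minimum before starting the server (65536 = 64K).
OLLAMA_CONTEXT_LENGTH=65536 ollama serve

# Start a coding assistant backed by an Ollama model; no environment
# variables or config files for the assistant itself are needed.
ollama launch

# Naming the tool directly is assumed here, not confirmed by the post:
ollama launch claude
```

Cloud models work the same way: a `-cloud` or `:cloud` tag on the model name, as in gpt-oss:120b-cloud, selects Ollama's hosted inference instead of local weights.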