
Alibaba launches Qwen3.5 with open-weight 397B model and broader language support
Alibaba has officially released Qwen3.5, making the open-weight Qwen3.5-397B-A17B model available for research and development. A native vision-language model, Qwen3.5-397B-A17B posts strong benchmark results across reasoning, coding, agentic abilities, and multimodal understanding.
Building on this, the model introduces a hybrid architecture that combines linear attention via Gated Delta Networks with a sparse mixture-of-experts system. Although the model contains 397 billion total parameters, only 17 billion are activated for any given token, which cuts inference compute and cost without sacrificing model capability.
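To make the total-versus-active distinction concrete, here is a minimal sketch of sparse mixture-of-experts routing. All specifics (expert count, top-k value, layer dimensions, ReLU experts) are illustrative assumptions for the sketch, not Qwen3.5's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 64      # hidden size (hypothetical)
n_experts = 32    # total experts: total parameter count scales with this
top_k = 2         # experts activated per token: active parameters scale with this

# Each expert is a small feed-forward layer (two weight matrices).
experts = [
    (rng.standard_normal((d_model, 4 * d_model)) * 0.02,
     rng.standard_normal((4 * d_model, d_model)) * 0.02)
    for _ in range(n_experts)
]
router = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_forward(x):
    """Route one token vector to its top-k experts and mix their outputs."""
    logits = x @ router
    chosen = np.argsort(logits)[-top_k:]           # indices of the top-k experts
    weights = np.exp(logits[chosen] - logits[chosen].max())
    weights /= weights.sum()                       # softmax over the chosen experts only
    out = np.zeros_like(x)
    for w, i in zip(weights, chosen):
        w1, w2 = experts[i]
        out += w * (np.maximum(x @ w1, 0.0) @ w2)  # ReLU feed-forward expert
    return out

token = rng.standard_normal(d_model)
y = moe_forward(token)

# Only top_k / n_experts of the expert parameters run for this token,
# even though all of them count toward the model's total size.
active_fraction = top_k / n_experts
```

The design point is that routing keeps per-token compute proportional to the few activated experts, while model capacity grows with the full expert pool, which is why a 397B-parameter model can run with only 17B active parameters per token.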
Alongside these technical advances, Qwen3.5 expands language and dialect support from 119 to 201, improving accessibility for a wider global user base. Performance gains over the Qwen3 series stem from substantially scaled-up reinforcement learning across more tasks and supported environments. Qwen3.5 also further advances pretraining, with a focus on capability, efficiency, and versatility.
These improvements are facilitated by a heterogeneous infrastructure that separates parallelism approaches for the vision and language components, helping avoid inefficiencies seen in unified systems. Users can now access Qwen3.5 via Qwen Chat with auto, thinking, and fast modes, and try the flagship Qwen3.5-Plus model through Alibaba Cloud ModelStudio.
