French AI startup Mistral has launched Mistral 3, its new suite of open multimodal models
The France-based AI company Mistral has introduced the Mistral 3 family, a new suite of open-source multilingual and multimodal models. The lineup includes Ministral models with 3, 8, and 14 billion parameters in base, instruct, and reasoning versions, all with image-understanding support. These smaller models target local and edge deployment and offer competitive performance while generating fewer tokens.
The flagship Mistral Large 3 uses a sparse Mixture of Experts architecture and was trained on about 3,000 Nvidia H200 GPUs. It has 41 billion active parameters out of 675 billion total and is released under the Apache 2.0 license, like the rest of the series. Benchmarks place it second among open-source non-reasoning models and sixth for reasoning tasks on the LMArena leaderboard, with performance comparable to Qwen and DeepSeek, although DeepSeek V3.2 has shown improvements in recent tests.
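The gap between active and total parameters is the defining trait of a sparse Mixture of Experts model: a router sends each token to only a few expert sub-networks, so most of the model's weights sit idle on any given forward pass. The sketch below is a generic, illustrative top-k MoE layer, not Mistral's actual implementation; the dimensions, expert count, and top-k value are arbitrary choices for demonstration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Illustrative top-k Mixture-of-Experts layer. Only the k experts the
    router selects run for each token, so active parameters per forward
    pass are far fewer than the total parameter count."""

    def __init__(self, dim=512, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim, num_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (tokens, dim)
        # Pick the top-k experts per token and normalize their routing weights.
        weights, idx = self.router(x).topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e  # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

x = torch.randn(16, 512)
print(SparseMoE()(x).shape)  # torch.Size([16, 512])
```

With 8 experts and top-2 routing, each token touches only a quarter of the expert weights, which is the same mechanism that lets Mistral Large 3 keep 41 billion of 675 billion parameters active.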
All models are available through Mistral AI Studio, Hugging Face, and cloud platforms including Amazon Bedrock, Azure Foundry, IBM WatsonX, and Together AI.
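Since the weights are on Hugging Face under Apache 2.0, the smaller models can be pulled with the standard transformers workflow. The snippet below is a minimal sketch; the repository ID is an assumption for illustration, so check the Mistral organization page on Hugging Face for the exact names.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo ID -- verify the actual name on Hugging Face.
model_id = "mistralai/Ministral-3B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat prompt and generate a short completion.
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Summarize the Mistral 3 release."}],
    return_tensors="pt",
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```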

