Mistral's new open-source AI model Small 3.1 is out and outperforms GPT-4o Mini and Gemma 3
Mistral has unveiled its latest AI model, Mistral Small 3.1, claiming it to be the top performer in its category. Building on the previous Mistral Small 3, this updated model boasts enhanced text performance, improved multimodal understanding, and an expanded context window of up to 128,000 tokens. It surpasses comparable models like Gemma 3 and GPT-4o Mini, with inference speeds reaching 150 tokens per second. Released under the Apache 2.0 license, Mistral Small 3.1 is the first open-source model to outperform leading small proprietary models.
Designed for a wide array of generative AI tasks, including instruction following, conversational assistance, image understanding, and function calling, Mistral Small 3.1 is suitable for both enterprise and consumer applications. It can operate on a single RTX 4090 or a Mac with 32 GB RAM, making it ideal for on-device use. The model is available starting today on Hugging Face, Mistral AI’s La Plateforme, and Google Cloud Vertex AI, with availability on NVIDIA NIM expected in the coming weeks.
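Since the model supports OpenAI-style function calling, a request to it can offer tools the model may choose to invoke. As a minimal sketch (the model alias `mistral-small-latest` and the `get_weather` tool are assumptions for illustration, not confirmed by the article; check Mistral's API docs for the actual identifiers), here is how such a payload could be constructed:

```python
import json


def build_function_call_request(user_message: str) -> str:
    """Build an OpenAI-style chat-completions payload offering one tool.

    The model alias and tool schema are illustrative assumptions;
    adjust them to match the provider's actual API.
    """
    payload = {
        "model": "mistral-small-latest",  # assumed alias for Small 3.1
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical tool
                    "description": "Get the current weather for a city",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }
    return json.dumps(payload)


print(build_function_call_request("What's the weather in Paris?"))
```

The returned JSON string would be POSTed to the provider's chat-completions endpoint; if the model decides the question needs the tool, its reply contains a structured tool call rather than plain text.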


Comments
Also, this model was trained on half as much data as Google's and OpenAI's mini-models, and without reinforcement learning (a costly method for double-checking training), making it very efficient both to train and to query. So much for Google, OpenAI and co. insisting that bigger is necessarily better, and that AGI is coming this quarter!