Introduction
The landscape of artificial intelligence for software development has just received a significant shake-up. Mistral AI has announced the release of Devstral 2, a new family of next-generation coding models, accompanied by Mistral Vibe CLI, a native terminal agent designed for end-to-end automation. This strategic move aims to democratize access to state-of-the-art (SOTA) coding tools while maintaining an open-source and permissive approach.
The news isn't just about raw model power, but also efficiency: Mistral promises high performance with reduced cost and resource consumption compared to proprietary competitors like Claude Sonnet. With options ranging from data center execution to local consumer hardware, Devstral 2 positions itself as a versatile solution for independent developers and large enterprises alike.
Context: The Evolution of Coding Models
Until recently, the gap between proprietary (closed-source) and open models in coding was stark. With this launch, Mistral AI seeks to close it by offering open-weight models that compete directly with the most renowned proprietary solutions.
Devstral 2 is not just a single model, but a family consisting of two main variants designed for different use cases:
- Devstral 2 (123B): The flagship model for complex tasks.
- Devstral Small 2 (24B): A compact version for rapid, local execution.
Devstral 2 Technical Features
The Devstral 2 model is a 123-billion parameter dense transformer with a 256K token context window. It is designed to handle extensive codebases and orchestrate changes across multiple files while maintaining architectural coherence. Released under a modified MIT license, this model achieved a score of 72.2% on SWE-bench Verified, establishing itself as one of the best open-weight models available.
Performance and Efficiency
According to data provided by Mistral, Devstral 2 is up to 7 times more cost-efficient than Claude Sonnet on real-world tasks. Although Claude Sonnet 4.5 still leads in absolute human preference, Devstral 2 outperforms open models such as DeepSeek V3.2, with a 42.8% win rate against a 28.6% loss rate in human evaluations.
"Devstral 2 is at the frontier of open-source coding models. In Cline, it delivers a tool-calling success rate on par with the best closed models; it's a remarkably smooth driver. This is a massive contribution to the open-source ecosystem."
Cline
For deployment, Devstral 2 is optimized for data center GPUs and requires a minimum of 4 H100-class GPUs.
Devstral Small 2: Local Power
For those without enterprise infrastructure, Devstral Small 2 (24B parameters) represents the ideal solution. Released under the Apache 2.0 license, this model can be executed locally on consumer hardware (including GeForce RTX GPUs and CPU-only configurations) or on on-premise servers.
Despite its smaller size, it achieves a respectable 68.0% on SWE-bench Verified. It supports image inputs (multimodal) and is perfect for tasks requiring low latency and maximum data privacy.
Mistral Vibe CLI: The Terminal Agent
In addition to the models, Mistral has launched Mistral Vibe CLI, an open-source command-line coding assistant (Apache 2.0). This tool transforms the terminal into an autonomous agent capable of exploring, modifying, and executing code.
Key Features
- Project-Aware Context: Automatically scans file structure and Git status.
- Smart References: Uses @ for file autocomplete and ! for shell commands.
- Multi-file Orchestration: Reasons over the entire codebase, not just the open file.
- Flexible Configuration: Manageable via a simple config.toml file.
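To give a sense of what such a file might contain, here is a hypothetical sketch; the section and key names below are illustrative assumptions, not documented Vibe CLI options:

```toml
# Hypothetical Vibe CLI config.toml sketch.
# All key names here are illustrative assumptions, not documented options.

[model]
name = "devstral-2"                   # which Devstral variant to drive the agent
endpoint = "https://api.mistral.ai"   # or a local server for Devstral Small 2

[agent]
auto_scan_git = true                  # pick up file structure and Git status automatically
shell_commands = true                 # allow ! shell-command references
```

The real option names would come from the Vibe CLI documentation; the point is that a single TOML file governs both the model backend and the agent's behavior.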
Vibe CLI can also be integrated into IDEs like Zed and supports the Agent Communication Protocol.
"Devstral 2 was one of our most successful stealth launches yet, surpassing 17B tokens in the first 24 hours. Mistral AI is moving at Kilo Speed with a cost-efficient model that truly works at scale."
Kilo Code
Pricing and Availability
Currently, Devstral 2 is available for free via Mistral's API for a limited period. Afterward, pricing will be:
- Devstral 2: $0.40 (input) / $2.00 (output) per million tokens.
- Devstral Small 2: $0.10 (input) / $0.30 (output) per million tokens.
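At those rates, the cost of a typical agentic request is easy to estimate. The short Python sketch below applies the published per-million-token prices; the helper function and the example token counts are illustrative, not part of any official Mistral SDK:

```python
# Estimate per-request cost from the published Devstral 2 rates
# (USD per million tokens). The dictionary keys and token counts
# below are illustrative, not official API model identifiers.

PRICES = {
    "devstral-2": {"input": 0.40, "output": 2.00},
    "devstral-small-2": {"input": 0.10, "output": 0.30},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 50K-token codebase prompt producing a 5K-token patch.
print(f"Devstral 2:       ${request_cost('devstral-2', 50_000, 5_000):.4f}")        # $0.0300
print(f"Devstral Small 2: ${request_cost('devstral-small-2', 50_000, 5_000):.4f}")  # $0.0065
```

Even a large 50K-token prompt costs a few cents on the flagship model, which is the cost profile behind Mistral's efficiency claims.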
You can also try the models through partners like NVIDIA (build.nvidia.com) or integrate them into your workflows using Vibe CLI.
Conclusion
With the release of Devstral 2 and Vibe CLI, Mistral confirms its commitment to providing competitive open-source alternatives to proprietary models. The combination of a powerful model (Devstral 2), an efficient local version (Small 2), and a practical developer tool (Vibe CLI) creates a robust ecosystem that promises to accelerate distributed intelligence in software engineering.
For more technical details and downloads, visit the official announcement: Devstral 2 and Vibe CLI News.
FAQ
What exactly is Devstral 2?
Devstral 2 is a family of AI models from Mistral (123B and 24B) designed specifically for coding, with permissive open-source licenses.
Is Devstral 2 free to use?
Yes, Devstral 2 is currently offered for free via Mistral's API, but it will transition to a paid pay-per-token model in the future.
Does Devstral Small 2 run on my computer?
Yes, Devstral Small 2 (24B) is optimized to run on consumer hardware, including GeForce RTX GPUs and even CPU-only configurations.
What is Mistral Vibe CLI?
It is an open-source command-line assistant that uses Devstral to automate development tasks, modify files, and manage projects directly from the terminal.
How does Devstral 2 compare to Claude Sonnet?
Devstral 2 is much more cost-efficient (up to 7x), but benchmarks indicate that Claude Sonnet 4.5 still holds an advantage in absolute human preference.