Mistral AI SAS today introduced Magistral, a new lineup of reasoning-optimized large language models.
The LLM series includes two models at launch. The first, Magistral Small, is available under an open-source license and features 24 billion parameters. It’s joined by a more capable, proprietary model called Magistral Medium that will be available through Mistral AI’s cloud services.
Mistral AI is a Paris-based OpenAI competitor backed by more than $1 billion in funding. Alongside the newly launched reasoning-optimized models, it offers general-purpose LLMs and neural networks optimized for specialized tasks such as solving math problems. The launch of Magistral comes amid rumors that the company is seeking to raise another $1 billion from investors.
The two models in the Magistral series share several features. Both understand multiple languages and ship with a chain-of-thought feature, which allows them to break down complex tasks into simpler sub-steps. Moreover, they can display the sub-steps involved in generating a prompt response, which enables users to verify its accuracy.
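The chain-of-thought pattern can be sketched in a few lines. The snippet below is purely illustrative, not Mistral's actual API: it assumes a model that is prompted to emit its reasoning inside `<think>` tags, so the sub-steps can be separated from the final answer and inspected.

```python
# Illustrative chain-of-thought sketch (hypothetical format, not Mistral's API):
# the model is asked to expose its reasoning inside <think> tags before the
# final answer, so users can verify each sub-step.

import re

def build_cot_prompt(question: str) -> str:
    # Ask the model to show intermediate reasoning before answering.
    return (
        "Reason step by step inside <think>...</think> tags, "
        "then give the final answer on its own line.\n\n"
        f"Question: {question}"
    )

def split_reasoning(model_output: str) -> tuple[str, str]:
    # Separate the visible reasoning trace from the final answer.
    match = re.search(r"<think>(.*?)</think>", model_output, re.DOTALL)
    reasoning = match.group(1).strip() if match else ""
    answer = re.sub(r"<think>.*?</think>", "", model_output, flags=re.DOTALL).strip()
    return reasoning, answer

# Example with a mocked model response:
reasoning, answer = split_reasoning(
    "<think>17 + 25: 17 + 20 = 37, 37 + 5 = 42.</think>\n42"
)
```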
Magistral Medium, Mistral’s other new reasoning model, generates higher-quality output. The company compared it with Magistral Small by asking the models to solve problems from a qualifying exam for the 2024 U.S. Math Olympiad. Magistral Medium scored 73.6% with default settings and 90% with a configuration designed to boost output quality. Magistral Small scored 70.7% and 83.3%, respectively.
Magistral Medium also includes speed optimizations not available in its open-source namesake. When users access the former model through Le Chat, Mistral’s chatbot service, they can activate two settings called Think mode and Flash Answers. According to Mistral, the settings allow Magistral Medium to answer prompts nearly 10 times faster than competing models.
In a paper accompanying the launch of Magistral, Mistral detailed how the LLM series was developed. The company used a popular AI training method known as reinforcement learning, or RL. “Instead of relying on existing implementations and RL traces distilled from prior models, we follow a ground up approach, relying solely on our own models and infrastructure,” Mistral researchers wrote in the paper.
The typical RL project involves two models: the LLM being trained and a so-called critic model that guides the training process by providing the LLM with feedback. According to Mistral, Magistral was trained using an RL method that removes the need for a critic model. This arrangement can improve the quality of LLM prompt responses.
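Mistral's article doesn't name the method here, but one well-known critic-free approach, GRPO (Group Relative Policy Optimization), replaces the critic's value estimate by sampling several answers to the same prompt and scoring each one against the group's average reward. A minimal sketch of that advantage calculation, under the assumption of a GRPO-style setup:

```python
# Sketch of a critic-free, GRPO-style advantage estimate (an assumption
# about the method; the source only says no critic model is needed).
# Each sampled answer's advantage is its reward relative to the mean
# and standard deviation of the rewards in its sampling group.

from statistics import mean, pstdev

def group_relative_advantages(rewards: list[float]) -> list[float]:
    # Normalize rewards within the group; no learned critic is involved.
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # avoid division by zero if all rewards match
    return [(r - mu) / sigma for r in rewards]

# Four answers sampled for one prompt, scored 1.0 if correct, else 0.0.
advs = group_relative_advantages([1.0, 0.0, 1.0, 0.0])
```

Correct answers end up with positive advantages and incorrect ones with negative advantages, which is the signal the policy update consumes.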
Mistral developed programs called generators and verifiers to manage the training process. Magistral used the generators to answer the practice questions in its training dataset. The verifiers, in turn, checked the accuracy of the model’s answers. The generators and verifiers spread the calculations involved in the workflow across a cluster of graphics cards.
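The generator/verifier loop described above can be sketched as follows. Both components here are toy stand-ins (a hard-coded answerer and an arithmetic checker), not Mistral's implementation; the point is the division of labor, with the verifier turning each answer into a reward for the RL trainer.

```python
# Toy sketch of the generator/verifier pattern: a generator proposes
# answers to training questions, a verifier checks them and emits a
# reward. Both are hypothetical stand-ins for illustration only.

def toy_generator(question: str) -> str:
    # Stand-in for the model being trained; the second answer is wrong
    # on purpose so the reward signal is visible.
    canned_answers = {"2 + 2": "4", "3 * 7": "20"}
    return canned_answers[question]

def toy_verifier(question: str, answer: str) -> float:
    # Reward 1.0 for a verifiably correct answer, 0.0 otherwise.
    return 1.0 if answer == str(eval(question)) else 0.0

batch = ["2 + 2", "3 * 7"]
rewards = [toy_verifier(q, toy_generator(q)) for q in batch]
```

In a real setup each side would run on its own slice of the GPU cluster, with generations streamed to the verifiers as they complete.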
During the project, Mistral created several versions of the training workflow with which it trained Magistral and compared them. The company says that the test produced several new discoveries about RL. “We contribute insights that add to, or contradict, existing RLVR literature, for example on whether RL can improve upon the distillation SFT baseline for small models,” the company’s researchers wrote.
Mistral’s first discovery was that a version of Magistral trained solely on a coding dataset proved surprisingly adept at solving math problems. The opposite was true as well, the company determined. “The model demonstrates strong performance to out-of-domain tasks, showcasing the generalization ability of RL,” Mistral’s researchers wrote. The ability to apply knowledge from one field to another is important for many reasoning tasks.
An earlier research paper observed that small models trained solely with RL can’t match counterparts trained by distilling the output of larger models. According to Mistral, its AI training tests showed that’s not always the case. “We achieved strong results even with pure RL,” the company’s researchers detailed.
Mistral has released the code for Magistral Small on Hugging Face. Magistral Medium, in turn, is available through Le Chat and the company’s application programming interface for developers.
Image: Mistral