The Allen Institute for AI (Ai2) recently launched what it calls its strongest family of models yet, Olmo 3. But the company kept iterating on the models, expanding its reinforcement learning (RL) runs, to create Olmo 3.1.
The new Olmo 3.1 models focus on efficiency, transparency, and control for enterprises.
Ai2 updated two of the three variants of Olmo 3: Olmo 3.1 Think 32B, the flagship model optimized for advanced research, and Olmo 3.1 Instruct 32B, designed for instruction-following, multi-turn dialogue, and tool use.
Olmo 3 has a third variant, Olmo 3-Base, for programming, comprehension, and math. It also works well for continued fine-tuning.
Ai2 said that to upgrade Olmo 3 Think 32B to Olmo 3.1, its researchers extended its best RL run with a longer training schedule.
“After the original Olmo 3 release, we resumed our RL training run for Olmo 3 32B Think, training for an additional 21 days on 224 GPUs with extra epochs over our Dolci-Think-RL dataset,” Ai2 said in a blog post. “This yielded Olmo 3.1 32B Think, which brings substantial gains across math, reasoning, and instruction-following benchmarks: improvements of 5+ points on AIME, 4+ points on ZebraLogic, 4+ points on IFEval, and 20+ points on IFBench, alongside stronger performance on coding and complex multi-step tasks.”
To get to Olmo 3.1 Instruct, Ai2 said its researchers applied the recipe behind the smaller Instruct size, 7B, to the larger model.
Olmo 3.1 Instruct 32B is “optimized for chat, tool use, & multi-turn dialogue—making it a much more performant sibling of Olmo 3 Instruct 7B and ready for real-world applications,” Ai2 said in a post on X.
For now, the new checkpoints are available on the Ai2 Playground and Hugging Face, with API access coming soon.
Better performance on benchmarks
The Olmo 3.1 models performed well on benchmark tests, predictably beating the Olmo 3 models.
Olmo 3.1 Think outperformed the Qwen 3 32B models on the AIME 2025 benchmark and performed close to Gemma 27B.
Olmo 3.1 Instruct performed strongly against its open-source peers, even beating models like Gemma 3 on the Math benchmark.
“As for Olmo 3.1 32B Instruct, it’s a larger-scale instruction-tuned model built for chat, tool use, and multi-turn dialogue. Olmo 3.1 32B Instruct is our most capable fully open chat model so far and — in our evaluations — the strongest fully open 32B-scale instruct model,” the company said.
Ai2 also upgraded its RL-Zero 7B models for math and coding. The company said on X that both models benefited from longer and more stable training runs.
Commitment to transparency and open source
Ai2 previously told VentureBeat that it designed the Olmo 3 family of models to give enterprises and research labs more control over, and understanding of, the data and training that went into the model.
Organizations can add to the model’s data mix and retrain it so it also learns from what’s been added.
This has long been a commitment for Ai2, which also offers a tool called OlmoTrace that tracks how LLM outputs match the model’s training data.
“Together, Olmo 3.1 Think 32B and Olmo 3.1 Instruct 32B show that openness and performance can advance together. By extending the same model flow, we continue to improve capabilities while retaining end-to-end transparency over data, code, and training decisions,” Ai2 said.
