Researchers at Nvidia have developed a new technique that flips the script on how large language models (LLMs) learn to reason.
The method, called reinforcement learning pre-training (RLP), integrates RL into the initial training phase rather than saving it for the end.
This approach encourages the model to “think for itself before predicting what comes next, thus teaching an independent thinking behavior earlier in the pretraining,” the researchers state in their paper.
By learning to reason on plain text without needing external verifiers, models trained with RLP show significant improvements in learning complex reasoning tasks downstream, hinting at a future of more capable and adaptable AI for real-world tasks.
The standard LLM training cycle
Typically, large language models are first pre-trained on vast amounts of text using a "next-token prediction" objective, where they are given a string of text and asked to continuously guess what the next word (or token) will be. In this phase, they learn grammar, facts, and basic associations.
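In code, that objective is just cross-entropy between the model's predicted distribution and the actual next token at every position. The following minimal PyTorch sketch illustrates the idea; the `model` callable and function name are illustrative assumptions, not taken from any particular training stack:

```python
import torch
import torch.nn.functional as F

def next_token_loss(model, token_ids):
    """Cross-entropy loss for predicting each token from the tokens before it."""
    # Shift by one position: each token is the target for the prefix before it.
    inputs, targets = token_ids[:, :-1], token_ids[:, 1:]
    logits = model(inputs)                    # (batch, seq, vocab_size)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),  # (batch * seq, vocab_size)
        targets.reshape(-1),                  # (batch * seq,)
    )
```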
In the later post-training phase, models usually learn complex reasoning abilities such as chain-of-thought (CoT), where a model lays out its reasoning step by step. This stage typically involves supervised fine-tuning (SFT) or reinforcement learning from human feedback (RLHF), which require specialized, curated datasets.
The paper’s authors argue this sequential process does not match human comprehension, which is “not a linear token-by-token process, but rather a parallel integration of input with prior knowledge.” Current pre-training methods lack this mechanism, hindering a model’s ability to develop deep reasoning from the start.
How reinforcement learning pre-training works
RLP reframes this process by treating CoT generation as an action the model takes before predicting the next token. At each step, the model first generates an internal "thought" or reasoning chain. It then predicts the next word in the text, using the original context augmented with its new thought.
The model receives a reward based on how much its thought improved the accuracy of its prediction compared to a baseline that did not generate a thought (pure next-token prediction). This reward signal is calculated automatically based on the change in probability, eliminating the need for external verifiers or human-labeled data.
The reward is positive only when the generated thought helps the model better predict the next token. By rewarding thoughts based on their predictive benefit, RLP effectively teaches the model how to think usefully on the same vast, unstructured datasets used for standard pre-training.
This continuous feedback loop allows the model to learn when a simple predictive guess is sufficient and when it needs to engage in deeper reasoning. As the researchers put it, “RLP is designed to shape thinking in base models by rewarding only those thoughts that measurably help next-token prediction.”
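Based on that description, the reward can be pictured as the change in log-probability of the observed next token with and without the sampled thought. The sketch below is one plausible reading of that mechanism, again in PyTorch; the function names, the separate no-think baseline model, and the use of log-probabilities are assumptions for illustration rather than the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def token_log_prob(model, context_ids, next_token_id):
    """Log-probability the model assigns to next_token_id after context_ids."""
    logits = model(context_ids.unsqueeze(0))          # (1, seq, vocab_size)
    log_probs = F.log_softmax(logits[0, -1], dim=-1)  # distribution over next token
    return log_probs[next_token_id]

def rlp_reward(model, no_think_baseline, context_ids, thought_ids, next_token_id):
    """Score a sampled thought by how much it improves next-token prediction."""
    # Prediction conditioned on the context plus the model's own sampled thought.
    with_thought = token_log_prob(
        model, torch.cat([context_ids, thought_ids]), next_token_id
    )
    # Baseline: plain next-token prediction from the same context, no thought.
    without_thought = token_log_prob(no_think_baseline, context_ids, next_token_id)
    # Reward is the change in (log-)probability: positive only when the
    # thought measurably helped the model predict the observed next token.
    return (with_thought - without_thought).item()
```

In a full training loop, this scalar would then drive a standard RL update (such as a policy-gradient step) on the tokens of the generated thought.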
This foundational approach, however, does not make later fine-tuning stages obsolete. According to Bryan Catanzaro, VP of applied deep learning research at Nvidia and a co-author of the paper, RLP is designed to complement, not replace, these crucial steps. "RLP isn’t meant to replace the later post-training stages like supervised fine-tuning or reinforcement learning from human feedback," Catanzaro told VentureBeat. "Those stages remain crucial for refining model behavior… It’s really designed to amplify the effectiveness of those later stages by giving the model a head start."
RLP in action
In experiments with Qwen3-1.7B and Nemotron-Nano-12B, Nvidia’s team tested RLP across a suite of math and science reasoning benchmarks. The results show that models enhanced with RLP consistently outperformed their conventionally trained counterparts, with particularly strong gains in reasoning-heavy tasks.
For an enterprise, this improved reasoning could translate to more reliable outputs in multi-step workflows like financial analysis or legal document summarization.
"RLP encourages the model during pretraining to think before it predicts, helping the model internalize a more coherent reasoning style," said Catanzaro. "This could help reduce subtle logical errors, especially in longer workflows."
While stressing that RLP-trained models will still need the usual guardrails such as verification layers, human oversight, and consistency checks, Catanzaro said that "RLP gives you a stronger baseline."
Importantly, the benefits of RLP compound instead of disappearing during subsequent fine-tuning stages (catastrophic forgetting is a common problem in LLM training, where later training stages cause the model to forget its previously learned skills and knowledge). The RLP-trained model achieved an overall score that was 7-8% higher than baselines after an identical post-training regimen. The researchers conclude that RLP “establishes strong reasoning foundations that are not washed out by downstream alignment but instead compound with post-training.”
The efficiency of the technique is a key finding. On the Qwen3-1.7B model, RLP improved performance by 17% over standard continuous pretraining and also beat a similar technique called Reinforcement Pretraining via prefix-matching rewards (RPT). This advantage held even when the baseline model was trained with 35 times more data to match the computational cost, confirming the gains come from the method itself, not just more processing.
Furthermore, RLP demonstrates impressive scalability and versatility, successfully extracting a reasoning signal from general-purpose web data, not just curated datasets. When applied to the hybrid Mamba-Transformer model Nemotron-Nano-12B, RLP achieved a 35% relative improvement over a heavily trained baseline while using just a tiny fraction of the data.
While these results point toward a more efficient path for building powerful models, Catanzaro frames the innovation as a fundamental shift in the learning process itself, rather than an immediate solution to high training costs.
"This research is exciting because it offers a shift in how models absorb information during pretraining, leading to a smarter learning process," he explained. "It wouldn’t replace large-scale pretraining, but offer another creative method in building the best models."
A new foundation for AI training
Ultimately, RLP points toward a future where pre-training is no longer a monolithic process of next-token prediction. Instead, the next generation of models could be built on a hybrid of objectives, creating AI that learns to think more robustly from day one. Catanzaro offers a powerful analogy to frame this shift:
"Next-token prediction teaches a model what the world looks like; reinforcement-style objectives like RLP can teach it how to think about what it’s seeing," he said. "The combination of these two objectives could help models develop deeper, more structured thinking much earlier in training… Tools like RLP can build on top of that foundation, making learning more active, curious, and even more efficient."
There is still plenty to learn about the dynamics of reinforcement learning in the pre-training phase, but what seems clear is that “introducing exploration earlier in training opens a new axis for scaling: not just in size, but in how models learn to reason,” Catanzaro said.