An analysis by Epoch AI, a nonprofit AI research institute, suggests that the AI industry may not be able to extract massive performance gains from reasoning AI models for much longer. Progress from reasoning models could slow as soon as within a year, according to the report.
Reasoning models such as OpenAI's o3 have driven substantial gains on AI benchmarks in recent months, particularly benchmarks measuring math and programming skills. These models can apply more computing to problems, which can improve their performance, with the downside that they take longer than conventional models to complete tasks.
Reasoning models are developed by first training a conventional model on a massive amount of data, then applying a technique called reinforcement learning, which effectively gives the model "feedback" on its solutions to difficult problems.
So far, frontier AI labs like OpenAI haven't applied an enormous amount of computing power to the reinforcement learning stage of reasoning model training, according to Epoch.
That's changing. OpenAI has said that it applied around 10x more computing to train o3 than its predecessor, o1, and Epoch speculates that most of this computing was devoted to reinforcement learning. And OpenAI researcher Dan Roberts recently revealed that the company's future plans call for prioritizing reinforcement learning with far more computing power, even more than for initial model training.
But there's still an upper bound on how much computing can be applied to reinforcement learning, per Epoch.

Josh You, an Epoch analyst and the author of the analysis, explains that performance gains from standard model training currently quadruple every year, while performance gains from reinforcement learning grow tenfold every 3-5 months. Reasoning training progress "will probably converge with the overall frontier by 2026," he continues.
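The convergence claim follows from the difference in growth rates: reinforcement learning compute scaling (roughly 10x every 3-5 months) vastly outpaces the overall frontier's growth, so even a small starting share catches up quickly. A minimal back-of-the-envelope sketch, using hypothetical numbers (a 1% starting RL share, 10x RL growth every 4 months, and 4x annual growth for total frontier training compute, none of which are figures from Epoch's report):

```python
# Illustrative projection (assumed, not Epoch's actual model): how many
# months until RL compute catches up with total frontier training compute,
# given the faster RL growth rate?

total_compute = 1.0   # total frontier training compute, normalized (assumption)
rl_compute = 0.01     # RL starts at ~1% of the total (assumption)

months = 0
while rl_compute < total_compute:
    months += 1
    rl_compute *= 10 ** (1 / 4)     # 10x every 4 months
    total_compute *= 4 ** (1 / 12)  # 4x every 12 months

print(f"RL compute reaches parity after about {months} months")  # ~11 months
```

Under these assumed rates, parity arrives in under a year, which is the shape of the argument: once the RL stage consumes compute comparable to the rest of training, its scaling can no longer outrun the frontier as a whole.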
Epoch's analysis makes a number of assumptions and draws in part on public comments from AI company executives. But it also argues that scaling reasoning models may prove challenging for reasons beyond computing, including high overhead costs for research.
"If there's a persistent overhead cost required for research, reasoning models might not scale as far as expected," he writes. "Rapid compute scaling is potentially a very important ingredient in reasoning model progress, so it's worth tracking this closely."
Any indication that reasoning models may reach some kind of limit in the near future is likely to worry the AI industry, which has invested enormous resources in developing these types of models. Studies have already shown that reasoning models, which can be incredibly expensive to run, have serious flaws, such as a tendency to hallucinate more than certain conventional models.