China’s DeepSeek has claimed its flagship AI system, known as R1, was trained for just $294,000, a fraction of the sums believed to be spent by US rivals. The details were published in a peer-reviewed paper in Nature this week, and the disclosure is likely to fuel further debate over Beijing’s ambitions in the global artificial intelligence race.

The Hangzhou-based company said the reasoning-focused model was trained using 512 Nvidia H800 chips. That hardware was designed specifically for China after the US prohibited sales of the more powerful H100 and A100 processors. The paper, which was co-authored by founder Liang Wenfeng, marks the first time the firm has disclosed such figures.

R1 uses a fraction of US models’ cost

In January, the release of DeepSeek’s cheaper AI tools destabilized global markets, triggering a sell-off in tech shares on fears they could undercut established giants such as Nvidia and OpenAI.
However, Liang and his team have kept a low profile, surfacing only for sporadic product updates ever since. The reported $294,000 price tag stands in stark contrast to estimates from American rivals. OpenAI’s chief executive, Sam Altman, said in 2023: “Training foundational models cost much more than $100 million.” However, he did not give any specific figures.

Training large language models involves running banks of powerful chips for extended periods, consuming enormous amounts of electricity while processing text and code. Industry observers have long assumed the bill for such projects runs into the tens or even hundreds of millions of dollars. That assumption is now being challenged. In a supplementary document, DeepSeek acknowledged that it owns A100 chips and had used them in early development, before moving the full-scale training onto its H800 cluster. According to the tech firm, the model ran for 80 hours during its final training run. Even though Nvidia has insisted that the Chinese startup has access only to its H800 processors, American officials remain sceptical.
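The reported numbers allow a quick back-of-envelope check: 512 H800 chips running for the stated 80-hour final run come to roughly 41,000 GPU-hours, an implied rate of about $7 per GPU-hour. The sketch below is illustrative only; the $294,000 figure may well cover more than that final run.

```python
# Back-of-envelope check of the figures reported above. Illustrative only:
# the $294,000 total may include work beyond the final 80-hour run.
gpus = 512
hours = 80
reported_cost_usd = 294_000

gpu_hours = gpus * hours                      # 512 * 80 = 40,960 GPU-hours
implied_rate = reported_cost_usd / gpu_hours  # ~ $7.18 per GPU-hour
print(gpu_hours, round(implied_rate, 2))      # 40960 7.18
```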
A few months back, US sources told Reuters that DeepSeek illegally owns large volumes of H100 chips that are under export bans to China.

DeepSeek’s innovation under the microscope

R1 has drawn attention not only for its low training costs but also because it may be the first major model to undergo formal peer review. “This is a very welcome precedent, and if we don’t have this norm of sharing, it becomes very hard to evaluate risks,” said Lewis Tunstall, a machine-learning engineer at Hugging Face who reviewed the Nature paper. The review process prompted DeepSeek to clarify technical details, including how its model was trained and what safeguards were in place.
“Going through a rigorous peer-review process certainly helps verify the validity and usefulness of the model,” said Huan Sun, an AI researcher at Ohio State University.

DeepSeek’s key breakthrough was using a pure reinforcement learning approach instead of relying on human-curated reasoning examples, according to the paper. The model was rewarded for solving problems correctly and gradually developed its own problem-solving strategies. The firm says this trial-and-error system allowed R1 to verify its workings without copying human tactics. “This model has been quite influential,” Sun added. “Almost all reinforcement learning work in 2025 may have been inspired by R1 one way or another.”

DeepSeek denies copying claims

Soon after R1’s release, speculation swirled that DeepSeek had leaned on rival outputs, particularly from OpenAI, to accelerate training; however, the company has now flatly denied that claim. In correspondence with referees, DeepSeek insisted that R1 did not copy reasoning examples generated by OpenAI.
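The outcome-only reward scheme described earlier, where the model is scored solely on whether its final answer is correct rather than on imitating human-written reasoning, can be illustrated with a toy sketch. Everything below is hypothetical and vastly simplified; it is not DeepSeek’s training code, just the trial-and-error principle in miniature.

```python
import math
import random

random.seed(0)

def reward(answer: int, correct: int) -> float:
    """Verifiable reward: 1.0 if the final answer is right, else 0.0."""
    return 1.0 if answer == correct else 0.0

def train(correct: int, candidates: list[int],
          steps: int = 2000, lr: float = 0.1) -> dict[int, float]:
    """Learn a preference over candidate answers purely by trial and error."""
    prefs = {a: 0.0 for a in candidates}
    for _ in range(steps):
        # Softmax policy over current preferences.
        exps = {a: math.exp(p) for a, p in prefs.items()}
        z = sum(exps.values())
        probs = {a: e / z for a, e in exps.items()}
        answer = random.choices(candidates,
                                weights=[probs[a] for a in candidates])[0]
        r = reward(answer, correct)
        # REINFORCE-style update: only rewarded samples shift the policy,
        # so correct answers are gradually reinforced without any
        # human-curated examples.
        for a in candidates:
            indicator = 1.0 if a == answer else 0.0
            prefs[a] += lr * r * (indicator - probs[a])
    return prefs

prefs = train(correct=42, candidates=[7, 13, 42])
best = max(prefs, key=prefs.get)  # the policy converges on the rewarded answer
```

The design point the paper makes is visible even at this scale: the learner never sees a worked solution, only a verifiable signal about its own outputs.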
However, like most large language models, it was trained on internet data, which means some AI-produced content was inevitably included, and the explanation has convinced some reviewers. “I cannot be 100% sure R1 was not trained on OpenAI examples. However, replication attempts by other labs suggest reinforcement learning is good enough on its own,” Tunstall said.

DeepSeek says R1 is built to excel at reasoning-heavy tasks such as coding and mathematics. Unlike most closed systems developed by U.S. firms, it was released as an open-weight model, freely downloadable by anyone. On the AI community site Hugging Face, it has already been downloaded more than 10 million times. The firm spent around $6 million developing the base model that R1 is built upon, but even with that added, its costs fall well short of the sums associated with rivals. For many in the field, that makes R1 stand out. Sun and colleagues recently tested the system on scientific data tasks and found it was not the most accurate, but among the best in terms of cost-to-performance.