![demis hassabis sitting in a chair in front of a dark backdrop with his hands together](https://i.kinja-img.com/image/upload/c_fit,q_60,w_645/f22e295a151f2bc68d0f179b1ba11101.jpg)
Despite the startup rattling markets and Silicon Valley, Google’s (GOOGL) artificial intelligence head isn’t worried about Chinese AI company DeepSeek.
Google DeepMind chief executive Demis Hassabis told the tech giant’s employees that when looking “into the details” of DeepSeek’s AI models, the startup’s claims are “exaggerated,” CNBC (CMCSA) reported, citing audio of an all-hands meeting on Wednesday.
Hassabis was reportedly asked, via an AI summary of employees’ questions, “what lessons and implications” the company could learn from DeepSeek’s success. He told employees that DeepSeek’s reported low cost of training competitive AI models is possibly “only a tiny fraction” of what it actually spent to build its AI systems, and that the startup likely used more hardware than it said it did. He also reportedly told employees that DeepSeek probably depended on advanced models from AI companies in the West.
“We actually have more efficient, more performant models than DeepSeek,” Hassabis reportedly told employees. “So we’re very calm and confident in our strategy and we have all the ingredients to maintain our leadership into this year.”
Neither Google nor DeepMind immediately responded to a request for comment.
Earlier this week, Hassabis said the Hangzhou-based startup’s AI model “is probably the best work” from China, and is “impressive,” during a Google event at the AI Action Summit in Paris, CNBC reported. Hassabis said DeepSeek has demonstrated “extremely good engineering,” and that its AI models have deeper geopolitical implications.
However, he also said DeepSeek doesn’t show “actual new scientific advance” and is “using known techniques” in the AI industry, according to CNBC.
Last month, DeepSeek released results for its latest open-source reasoning model, DeepSeek-R1, which performed comparably to OpenAI’s reasoning models, o1-mini and o1, on several industry benchmarks. In December, the startup launched its DeepSeek-V3 models, which it said cost just $5.6 million to train and develop on Nvidia’s (NVDA) H800 chips — a reduced-capability version of the H100 chips used by U.S. firms.
DeepSeek’s cheaper-yet-competitive models have raised questions about Big Tech’s heavy spending on AI infrastructure, as well as the effectiveness of U.S. chip export controls.