Google is now chasing 1,000× more AI compute capacity by 2029. The company told employees it must double its serving power every six months to keep up with how fast AI demand is growing.
The figure came directly from Amin Vahdat, a vice president at Google Cloud, during an all-hands meeting on November 6, according to CNBC.
In his presentation, Amin showed a slide that didn't waste any words: "Now we must double every 6 months… the next 1000x in 4-5 years."
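The arithmetic behind that slide is easy to check: ten six-month doublings compound to 2^10 = 1,024, which is roughly 1,000× in about five years. A minimal back-of-the-envelope sketch (the doubling cadence and target are from the slide; the code itself is purely illustrative, not anything Google presented):

```python
# Back-of-the-envelope check: doubling serving capacity every 6 months
# compounds to roughly 1,000x in about 5 years.
months_per_doubling = 6
target_multiple = 1_000

doublings = 0
capacity = 1.0
while capacity < target_multiple:
    capacity *= 2
    doublings += 1

years = doublings * months_per_doubling / 12
print(f"{doublings} doublings -> {capacity:.0f}x capacity in about {years:.0f} years")
# Output: 10 doublings -> 1024x capacity in about 5 years
```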
He told the room, “The competition in AI infrastructure is the most critical and also the most expensive part of the AI race.”
The meeting was attended by Alphabet CEO Sundar Pichai and CFO Anat Ashkenazi, who both took questions from employees already worried about whether the company can sustain this aggressive push.
The meeting came just one week after Alphabet's Q3 results beat Wall Street's expectations. Sundar and Anat then raised the capital expenditures forecast again, this time to $91–$93 billion for the year, with a further "significant increase" expected in 2026.
Google's three biggest rivals in the hyperscale space (Microsoft, Amazon, and Meta) have all hiked their spending targets as well. Between the four companies, total capex this year is now projected to cross $380 billion.
Google focuses on scaling without outspending rivals
Amin was clear that Google doesn't plan to spend blindly. "Our job is of course to build this infrastructure but it's not to outspend the competition, necessarily," he said.
"We're going to spend a lot," Amin added, but stressed that the goal is to build systems that are "more reliable, more performant and more scalable than what's available anywhere else."
To hit that level of efficiency, Amin said the company is relying not just on bigger data centers, but on smarter architecture, custom silicon, and better AI models.
One major piece is the newly launched Ironwood TPU, the seventh generation of Google's Tensor Processing Unit. He said Ironwood is nearly 30× more power efficient than the first-generation TPU from 2018.
He also pointed to DeepMind as a long-term advantage, saying its research into future AI model designs gives Google insights that others don't have. But the infrastructure must catch up.
"We need to deliver 1,000 times more capability, compute, storage, networking for essentially the same cost and increasingly, the same power, the same energy level," Amin said. "It won't be easy but through collaboration and co-design, we're going to get there."
Sundar later warned that 2026 will be intense, pointing to the growing demand for cloud and compute capacity across industries. He also tackled employee concerns around a possible AI bubble, which many investors and analysts have been debating this year.
One staffer asked, "Amid significant AI investments and market talk of a potential AI bubble burst, how are we thinking about ensuring long-term sustainability and profitability if the AI market doesn't mature as expected?"
Sundar warns against underinvestment despite market fears
Sundar didn't dismiss the concern. "It's a great question. It's been definitely in the zeitgeist, people are talking about it," he said.
But he warned that underinvesting would carry bigger risks. He pointed to Google Cloud's growth, which jumped 34% year-on-year to $15 billion in Q3, with a backlog of $155 billion. "Those numbers would have been much better if we had more compute," Sundar added.
He said the company has built flexibility into its balance sheet and is ready for market swings. “We are better positioned to withstand, you know, misses, than other companies,” he said.
Anat was also asked a tough question about the pace of capex growth: "Capex is accelerating at a rate significantly faster than our operating income growth. What's the company's strategy for healthy free cash flow over the next 18 to 24 months?"
She said the business has real opportunities to expand, especially by helping companies move from traditional physical data centers into Google Cloud. "The opportunity in front of us is significant and we can't miss that momentum," Anat said.
Gemini 3 launch reveals compute strain
Google launched Gemini 3, its newest AI model, earlier this week. The company claims it can handle more complex questions than any of its earlier versions.
But the celebration was short-lived. Sundar said the real problem now is distribution, not development. He brought up Veo, the video generation tool the company upgraded last month, as an example.
"When Veo launched, how exciting it was," Sundar said. "If we could've given it to more people in the Gemini app, I think we would have gotten more users but we just couldn't because we are at a compute constraint."
He told employees to brace for turbulence in 2026 and beyond. "There will be no doubt ups and downs," he said. "It's a very competitive moment, so you can't rest on your laurels. We have a lot of hard work ahead but again, I think we are well positioned through this moment."
Talk about a possible bubble intensified ahead of Nvidia's Q3 earnings this week. Shares of AI-heavy names like CoreWeave and Oracle have dropped over the past month.
Sundar told the BBC that market behavior shows “elements of irrationality” and warned, “If a bubble were to burst, no company is going to be immune, including us.”
Nvidia CEO Jensen Huang rejected that idea on Wednesday's call, saying, "We see something very different."
Nvidia, which counts Google as a major customer, reported 62% revenue growth and delivered better-than-expected Q4 guidance.
Still, the market didn't reward the results. Nvidia fell 3.2%, dragging the Nasdaq down 2.2%. Google's parent Alphabet dropped 1.2% on the same day.