The Bubble
The Great Deflation: Inside the Pop of the 2025 AI Bubble
The "pop" heard ’round the financial world wasn't a single, deafening explosion like the Lehman Brothers collapse, nor was it a sudden black Monday. Instead, it was a series of hisses—the sound of air slowly, agonizingly leaving a very expensive balloon.
By late 2025, the AI landscape had morphed into something resembling the comical chaos of a 1920s political cartoon. You can almost picture it: fat cats in top hats (and fleece Patagonias) scrambling as the balloon they’d been pumping full of cash began to wobble precariously. The narrative had shifted. It was no longer just about who had the smartest chatbot or the most creative image generator; it was about who could survive the "circular economy" they had constructed to keep the party going. The euphoria of 2023 and 2024—fueled by breathless demos and infinite promises—had been replaced by a gnawing anxiety, a realization that the financial gravity they had defied for so long was finally starting to pull.
The Five-Trillion-Dollar Elephant
Let’s look at the numbers, because they are staggering. By November 2025, Nvidia officially crossed the $5 trillion market cap threshold. To put that in perspective, a single chipmaker became worth more than the GDP of almost every nation on Earth, eclipsing economic powerhouses like Germany and Japan. It was a triumph of engineering and hype, a monument to the belief that silicon was the new oil.
Jensen Huang, the company’s leather-jacketed CEO, sat atop a backlog of bookings that essentially amounted to selling pickaxes during a gold rush where everyone is convinced there’s gold, even if they haven't found much yet. "The world is voting with real capex," Huang told investors, dismissing the skeptics with the confidence of a man holding a royal flush. His argument was simple: the old way of computing (CPUs) is dead, and the only way forward is through his chips (GPUs). He spoke of the "end of Moore's Law" and the necessity of accelerated computing, framing the trillions in spending not as a gamble, but as a mandatory infrastructure upgrade for the species—a tax on the future of intelligence.
However, historical parallels began to haunt the trading floors. Veterans of the dot-com era pointed to Cisco Systems, the "plumbing" of the internet, which briefly became the world's most valuable company in 2000 before losing 80% of its value when the demand for routers saturated. The question on everyone's mind was: How many chips does the world actually need if the software isn't profitable?
The "Circular Economy" of AI
But beneath that shiny valuation lies a mechanism that economists and cynical analysts have started calling "round-tripping" or the "circular economy of AI." It is a financial perpetual motion machine that would make a Ponzi schemer blush.
Here is how the loop works: Big Tech companies (Microsoft, Amazon, Google) spend billions buying Nvidia chips to build massive data centers. They then invest billions into AI startups (like OpenAI, Anthropic, or xAI). Those startups, flush with cash but lacking their own hardware, turn around and spend that investor money on... cloud credits and Nvidia chips.
It gets even more incestuous. Nvidia itself began investing directly in the startups that were its biggest customers. The pattern runs like this: Nvidia invests in a cloud provider like CoreWeave or a model builder like Mistral. Those companies then use that capital—often leveraged against the very chips they are buying—to purchase Nvidia GPUs. Nvidia books the revenue, beating quarterly expectations. Its stock price soars. It uses that inflated stock currency to invest in more startups.
As one analyst noted, "It looks like growth, but it is often just the same dollars changing hands, inflating valuations at every stop. It is the financial equivalent of two people in a lifeboat selling the same life jacket back and forth to each other for higher and higher prices."
The risk here is counterparty contagion. If one link in this chain snaps—say, a startup fails to raise its next Series F round—the revenue for the cloud provider dries up, the demand for chips vanishes, and the investment portfolio of the tech giant takes a hit. The entire ecosystem is leveraged on the assumption that the next round of funding is guaranteed.
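If the loop sounds abstract, a deliberately toy sketch makes it concrete. Every dollar figure, percentage, and company placeholder below is hypothetical; the point is only to show how a single injection of outside capital gets booked as revenue at several stops along the chain, and what happens when one link fails to raise its next round.

```python
# Toy model of the "circular economy" of AI capital. All dollar amounts,
# percentages, and company placeholders are hypothetical. Each hop books
# the same money as fresh revenue, so the summed "revenue" across the
# chain can dwarf the net new capital that actually entered it.

def run_loop(startup_raises_next_round: bool) -> None:
    booked_revenue = {"CloudCo": 0.0, "ChipCo": 0.0}

    # Step 1: a tech giant puts $10B of outside capital into a startup.
    startup_cash = 10.0  # billions, hypothetical

    # Step 2: the startup spends most of it on cloud credits.
    cloud_spend = startup_cash * 0.9
    booked_revenue["CloudCo"] += cloud_spend

    # Step 3: the cloud provider spends most of that on GPUs.
    chip_spend = cloud_spend * 0.9
    booked_revenue["ChipCo"] += chip_spend

    # Step 4: the chipmaker reinvests part of its take in the startup,
    # which promptly recycles it into more cloud credits and more GPUs.
    if startup_raises_next_round:
        reinvested = chip_spend * 0.5
        booked_revenue["CloudCo"] += reinvested * 0.9
        booked_revenue["ChipCo"] += reinvested * 0.9 * 0.9
    # If the next round never closes, the chain simply stops here.

    total = sum(booked_revenue.values())
    print(f"next round raised: {startup_raises_next_round}")
    print(f"  net new outside capital:           $10.0B")
    print(f"  revenue booked by CloudCo:         ${booked_revenue['CloudCo']:.1f}B")
    print(f"  revenue booked by ChipCo:          ${booked_revenue['ChipCo']:.1f}B")
    print(f"  total revenue booked on the chain: ${total:.1f}B")

run_loop(startup_raises_next_round=True)
run_loop(startup_raises_next_round=False)
```

In both cases the chain reports more revenue than the outside money that entered it; the only question is how many times each dollar gets counted before it stops moving.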
The Tycoons: Pivots and "Exhausted" Knowledge
The personalities driving this bubble are as colorful as the market is volatile, and their recent moves tell you everything you need to know about the fear lurking behind the scenes.
Elon Musk, never one to shy away from drama, dropped a bombshell early in the year regarding the limits of the tech itself. In a moment of candor that sent shivers through the research community, he declared that the well had run dry.
"The cumulative sum of human knowledge has been exhausted in AI training," Musk said. "That happened basically last year."
His implication was terrifying for the industry: the models might have run out of things to learn. If the internet has already been scraped clean—every book, every article, every Reddit thread—how do you make the models smarter?
His solution is synthetic data—AI teaching AI. It is a risky bet that the models won’t cannibalize themselves. Researchers warn of "model collapse," a degenerative process where an AI trained on AI-generated data eventually starts outputting gibberish. It is the digital equivalent of inbreeding; errors compound, nuances are lost, and the output becomes a homogenized sludge. It’s like making a photocopy of a photocopy of a photocopy; eventually, the image is just noise. Yet, Musk’s company xAI continued to raise billions to build the "Colossus" cluster, betting that raw compute could overcome the lack of fresh human insight.
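The photocopy analogy can be caricatured in a few lines of code. The sketch below is exactly that, a caricature and nothing more: each "generation" fits a simple Gaussian to a finite sample produced by the previous generation's fit, a standard toy demonstration of how diversity drains out of a self-trained distribution. It is not a claim about how xAI or anyone else actually trains models.

```python
import numpy as np

# Toy illustration of "model collapse": each generation is fit only to a
# finite sample generated by the previous generation's model. Averaged
# over many independent runs, the fitted spread shrinks generation after
# generation -- the distribution slowly forgets its own tails.
# A statistical caricature, not a simulation of real LLM training.

rng = np.random.default_rng(0)
n_chains, n_samples, n_generations = 500, 50, 20

mu = np.zeros(n_chains)      # generation 0: "human" data, mean 0
sigma = np.ones(n_chains)    # generation 0: "human" data, std 1

for generation in range(1, n_generations + 1):
    # Each chain's current "model" generates a finite synthetic dataset...
    data = rng.normal(mu[:, None], sigma[:, None], size=(n_chains, n_samples))
    # ...and the next "model" is fit to that synthetic data alone.
    mu, sigma = data.mean(axis=1), data.std(axis=1)
    if generation % 5 == 0:
        print(f"generation {generation:2d}: average fitted std = {sigma.mean():.3f}")
```

The printed spread drifts steadily downward: the rare cases go first, and eventually the average case is all that is left.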
Then there is Mark Zuckerberg. Remember the Metaverse? Neither does Wall Street. In a ruthless pivot reported in December 2025, Zuckerberg slashed the budget for his Reality Labs (the Metaverse division) by nearly 30% to feed the AI beast. The billions that were once destined for virtual reality headsets and legless avatars were re-routed to buy GPUs.
He is effectively cutting one dream to finance another, betting the farm that AI "agents" will be the personal superintelligence for billions of users. The strategy shifted from immersing users in a virtual world to overlaying intelligence on the real world via smart glasses.
"We are really more bullish on the importance of wearables," Zuckerberg admitted, signaling that his AI ambitions now outweigh his virtual reality ones.
Sam Altman of OpenAI remains the enigmatic central figure. Facing a burn rate that saw his company lose $13.5 billion against just $4.3 billion in revenue in the first half of 2025, he appears unfazed. The gap between what OpenAI spends on compute and what it makes from $20 monthly subscriptions is a canyon that is being filled with venture capital.
He has admitted that the current craze has "bubble" characteristics but insists the long-term payoff is huge. His ask? Trillions of dollars for infrastructure—a project codenamed "Stargate." It is a wager that requires the entire global economy to reorganize itself around his product to justify the cost. Altman is essentially asking the world to build a nuclear power plant to run a toaster, promising that the toaster will eventually be able to solve cold fusion.
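How big is that canyon? Using only the figures already quoted, the $13.5 billion first-half loss and the $20-a-month subscription price, plus the generous (and false) assumption that additional subscribers cost nothing to serve, a back-of-envelope calculation shows the scale of the hole:

```python
# Back-of-envelope only. Uses the loss and revenue figures quoted above and
# the $20/month subscription price; assumes (unrealistically) that extra
# subscribers are free to serve.

h1_loss = 13.5e9        # reported H1 2025 loss, in dollars
h1_revenue = 4.3e9      # reported H1 2025 revenue
sub_price = 20.0        # dollars per subscriber per month
months = 6              # the first half of the year

implied_spend = h1_revenue + h1_loss          # money out the door in six months
monthly_shortfall = h1_loss / months          # the gap to fill each month
extra_subs = monthly_shortfall / sub_price    # subscribers needed just to break even

print(f"implied H1 spend:             ${implied_spend / 1e9:.1f}B")
print(f"monthly shortfall:            ${monthly_shortfall / 1e9:.2f}B")
print(f"extra $20 subs to break even: {extra_subs / 1e6:.0f} million")
```

On those assumptions, closing the gap would take more than a hundred million additional paying subscribers before a single one of them ran a query.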
The Quiet Arrival of Reality
Google’s release of Gemini 3 in November 2025 was telling. Unlike the splashy, celebrity-filled demos of the past—where models wrote poetry or created surreal videos of astronauts riding horses—this rollout was "silent" and enterprise-focused. It wasn't about magic; it was about utility. The marketing focused on "factually accurate" legal analysis, employee onboarding, and supply chain logistics.
It was boring, and that is exactly what the market needed to see. The industry was trying to pivot from "creative" AI (which is fun but hard to monetize) to "reliable" AI (which is boring but profitable).
However, utility is hard. An MIT study from late 2025 cast a long shadow over the industry, claiming that 95% of enterprise AI pilots were failing to deliver value. The report highlighted a "learning gap." Companies were finding that while AI could write a decent email, trusting it to handle a supply chain or manage a bank account was a different beast entirely.
The issues were practical, stubborn, and expensive:
Trust: Executives were terrified of "hallucinations," in which an AI might invent a legal precedent or a financial figure. A 1% error rate is acceptable for a poem; it is catastrophic for a ledger.
Integration: Plugging a stochastic, unpredictable model into a rigid, legacy corporate database proved to be a nightmare. It was like trying to teach a poet to use Excel—possible, but painful.
Cost: The cost per query for these massive models meant that for many tasks, it was still cheaper to hire a human than to run the GPU; the crude arithmetic is sketched just after this list. The "productivity gains" were being eaten alive by the "compute costs."
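The crude arithmetic mentioned above looks something like this. Every number is a hypothetical placeholder rather than a measured price; what matters is the shape of the comparison: an "agent" that makes hundreds of model calls per task against a human paid an hourly wage.

```python
# Crude unit-economics comparison. Every number below is a hypothetical
# placeholder chosen for illustration, not a measured price.

cost_per_model_call = 0.05     # hypothetical: dollars per call to a frontier model
calls_per_task = 400           # hypothetical: an "agent" plans, retries, verifies
human_hourly_wage = 18.0       # hypothetical: fully loaded low-wage labor
human_hours_per_task = 0.75    # hypothetical: 45 minutes of human effort

ai_cost_per_task = cost_per_model_call * calls_per_task
human_cost_per_task = human_hourly_wage * human_hours_per_task

print(f"AI agent cost per task: ${ai_cost_per_task:.2f}")
print(f"Human cost per task:    ${human_cost_per_task:.2f}")
print("automation pays" if ai_cost_per_task < human_cost_per_task
      else "the human is still cheaper")
```

Nudge the placeholders and the verdict flips, which is precisely the uncomfortable spot many 2025 pilots found themselves in: the business case lived or died on the price per call and the number of retries.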
The Verdict
We are left with a market that is essentially fighting a war between "financial gravity" and "technological belief."
On one side, you have the Believers (Jensen Huang, Satya Nadella) who argue this is a platform shift as significant as the internet or electricity. Nadella, in his 2025 letter to shareholders, urged them to "think in decades," ignoring the messy quarterly results. They argue that the infrastructure must be built before the killer apps arrive, just as fiber optic cables were laid before Netflix existed. They believe we are at the bottom of the "S-curve" and that exponential growth is just beginning.
On the other side, you have the Skeptics, led by voices like Goldman Sachs’ Jim Covello, who look at the "circular" revenue streams and ask the uncomfortable question: "What $1 trillion problem is AI hoping to solve?" They argue that replacing low-wage tasks with incredibly expensive technology is bad economics. They see a market propped up by vendor financing, where the only people making money are the ones selling the shovels, and the people buying the shovels are paying with borrowed money.
