The promise of Artificial (General) Intelligence is the greatest hype bubble this side of the new millennium.
Huge checks are being cashed on the promise of AI’s profitability. The chip manufacturer Nvidia currently has a market cap of 3 trillion dollars, making it the second-most valuable company in the world. Its bloated valuation stems in large part from the ongoing AI revolution, which has sent demand for graphics-processing chips like those made by Nvidia soaring. Meanwhile, OpenAI, the maker of ChatGPT, was valued at an astonishing $340 billion in early 2025. And there’s little sign that investments in the technology are letting up. Four tech titans—Meta, Alphabet, Amazon, and Microsoft—plan on pouring more than $300 billion into AI in 2025 alone. Clearly, there’s lots of loose capital floating around for those willing and able to get aboard the AI hype train.
But beyond the bloated market caps and overinflated investment drives—no different in essence from past speculative bubbles, from Dutch tulips to dotcom startups in the Y2K era, the “irrational exuberance” of stock overvaluation, in Alan Greenspan’s memorable phrase—a growing chorus is asking: Where is the value in AI, both in the narrow economic and wider social sense?
So far at least, the AI revolution has been characterized by two countervailing tendencies: on the one hand, an enormous willingness to invest in the booming AI industry (fueled in part by the fear of being “left behind”), and, on the other hand, extremely meager returns in any substantive, meaningful sense of that term. As the Boston Consulting Group noted last October, addressing the narrower, financial sense of value: “After all the hype over artificial intelligence (AI), the value is hard to find.”
Is AI making us more productive? Is it resulting in better-quality outputs? Is it solving real-world problems at a scale and with a degree of accuracy and quality commensurate with its significant energy usage and fiscal investments?
The answer to all of those questions, in my opinion, is no, and is likely to remain so for the foreseeable future. As the economist Daron Acemoglu has argued, AI’s productivity contribution will likely be no more than 0.5 percent in total over the next decade. “I don’t think we should belittle 0.5 percent in 10 years,” Acemoglu has said. “That’s better than zero. But it’s just disappointing relative to the promises that people in the industry and in tech journalism are making.” And it’s particularly disappointing given the trillions of dollars and hundreds of terawatt-hours AI-driven industries aim to absorb in the years ahead. As ever, we need to ask about opportunity costs: What might humanity have accomplished were these resources used differently?
By now, the Internet is chock-full of stories of how AI has failed to deliver in a social, rather than narrowly economic, sense. One basic issue is generative AI’s proneness to error. LLMs struggle with truth, with the correspondence between text and reality. Hallucinations aren’t incidental to LLMs, and they aren’t contingent bugs to be ironed out in some future iteration, given “better” data (but from where?): They are an inherent, ontological feature of the technology itself.
Now, humans left to their own devices will always commit errors, and the tolerance for errors is itself variable: The range of acceptable error differs significantly for an AI (or a human) when writing a college term paper, say, or helping land a passenger jet, or detecting cancerous growths. Still, we may soon inhabit a Brazil-like world of half-broken technologies always in need of (impossible) repairs: In Terry Gilliam’s absurdist 1985 movie Brazil, we confront a dystopian future in which nothing works the way it ought to, its futuristic infrastructure held together by little more than duct tape, anarchic subterfuge, and a semi-resigned willingness to accept catastrophic error as a built-in feature of daily life.
Catastrophic errors, confidently pronounced, are likely to become our future as well. Last year, a Purdue study found that ChatGPT provided erroneous answers to programming questions in 52% of cases. What happens once those errors find their way into our society’s basic infrastructure?
More comically, I was recently scolded by ChatGPT for claiming that Donald Trump was president of the United States: “Trump is no longer in office,” the large language model cheerily pronounced. But hallucinations are no laughing matter: They’re already having real-world consequences. A Norwegian man recently filed a complaint demanding that OpenAI be fined after ChatGPT erroneously claimed that he had murdered his own children. What if someone had acted on those mistaken claims? An Australian passenger traveling to Chile was recently told by ChatGPT that he would not need a visa to enter the country (“You can enter visa-free”), advice that was simply wrong.
In academia—a world that, at least in theory, revolves around the distinction between truth and falsehood—the effects are becoming particularly noticeable. University libraries are being overrun by students in search of fictitious sources—books and articles that simply do not exist—that have been recommended to them by chatbots. Worryingly, Los Alamos National Laboratory, which conducts research on sensitive technologies, had to warn its users against the threat of fake citations, including the “higher chance” of encountering “‘ghost’ or ‘hallucinated’ references” in published works. Back in January 2023, I called this AI’s propensity to produce “credible nonsense”—that is, plausible-sounding outputs with little or no connection to really-existing reality.
More and more of what we read, including scientific publications, is shot through with AI-generated content. Here, for instance, is a book chapter published by Springer containing three instances of the ChatGPT-derived phrase “Certainly! Here is the translated text,” likely pasted directly from the chatbot’s response—just a tiny example of how LLM-speak filters its way into the intellectual sphere. Princeton University academics last year tried to assess what proportion of Wikipedia articles is AI-generated and arrived at an estimate of around 5 percent. Estimates of this kind will always be uncertain, given how well LLM output camouflages itself, but the share of AI-written content seems likely only to grow. More worrying still, a study affiliated with the Columbia Journalism Review found that AI platforms provided erroneous sources in more than 60 percent of the researchers’ queries. The CJR piece was titled, simply, “AI Search Has a Citation Problem.” This error-proneness is also a danger to AI itself: As LLMs begin to “ingest” synthetic but mistake-riddled outputs as part of their training data, the result may be an “unintentional feedback loop” of ever-worsening outputs, as one researcher wrote in the New York Times last year.
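To make the feedback-loop worry concrete, here is a minimal toy sketch (my own illustration, not drawn from the research cited above) of what can happen when a model is trained, generation after generation, only on its own synthetic output. It assumes nothing beyond NumPy and a deliberately crude stand-in for a "model," a fitted Gaussian; with a small sample each generation, the fitted parameters drift and the spread tends to shrink.

```python
# Toy sketch of an "unintentional feedback loop": a crude model (a fitted
# Gaussian) is retrained each generation only on samples drawn from the
# previous generation's fit. With small samples, the fitted parameters
# drift and the spread tends to shrink over successive generations.
import numpy as np

rng = np.random.default_rng(42)

data = rng.normal(loc=0.0, scale=1.0, size=50)    # generation 0: "real" data

for generation in range(1, 31):
    mu, sigma = data.mean(), data.std()            # "train" the model on current data
    data = rng.normal(mu, sigma, size=50)          # next generation sees only synthetic data
    print(f"gen {generation:2d}: mean = {mu:+.3f}, std = {sigma:.3f}")
```

Real LLM training pipelines are vastly more complicated, but the structural point is the same: once a system's own errors become its inputs, there is no obvious mechanism for correcting them.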
More and more evidence suggests that AI will have ruinous effects on already whittled-away powers of concentration, reading, writing, and thinking. The “loss of decision-making” as younger generations increasingly come to rely on AI could plausibly cause a reduction in overall human intelligence.
The key issue is that we hone our intelligence by engaging in intelligence-demanding activities; AI reduces our need to do so (so-called cognitive offloading), and with it the chance to develop basic skills like wading through lengthy writings, summarizing lectures or reading materials, and writing unaided by technology. AI could be useful for older generations who have already developed the requisite skills, but it will likely wreak havoc on younger people’s cognitive capacities—and the world they will ultimately create and inhabit.
But in the near term, the real risks from the AI hype bubble are financial, which is to say structural to the world economy. As David Cahn, an analyst at the venture capital firm Sequoia Capital, asked again last summer: “Where is all the revenue?” Cahn had calculated that, as of late 2023, the AI industry would need to generate $200 billion in revenues to pay back its investments; by the end of 2024, that $200 billion question had become “AI’s $600B question.” While Cahn ultimately believed “it will be worthwhile” and that “speculative frenzies are part of technology, and so they are not something to be afraid of,” others are not so bullish.
Alibaba’s Joe Tsai recently warned of an AI data center construction bubble: Centers are being built en masse with an unclear customer base. Similarly, the billionaire investor Ray Dalio likened “investor exuberance over artificial intelligence” to the “build-up to the dotcom bust at the turn of the millennium,” the FT reported earlier this year. As an essay in The American Prospect on “bubble trouble” notes, “If the AI bubble bursts, it not only threatens to wipe out VC firms in the Valley but also blow a gaping hole in the public markets and cause an economy-wide meltdown.”
That’s a big “if,” of course. But the question is worth asking more forcefully than it has been so far. With hundreds of billions of dollars in AI investments slated for the next few years, there will have to be significant returns lest the hype bubble burst, leaving governments and the public to foot the bill for the inevitable post-implosion cleanup. Sleepwalking into a hype-driven meltdown just doesn’t seem very intelligent.