Are AI Valuations a Huge Bubble?
More specifically, will AI Eat Its Own? A Financial Savant, John Talbott, Weighs In
Nvidia is the world’s most valuable company, with a market cap of close to $5 trillion. Yet its revenues are peanuts, only $130 billion, and its trailing earnings roughly $86 billion. Hence, Nvidia is selling at 37 times revenues and 56 times earnings. The company’s price-earnings ratio is roughly four times the S&P’s historic average. Nvidia is an outlier, but the S&P’s P/E ratio is itself roughly 75 percent above its mean.
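The multiples above are simple division. Here is the back-of-envelope check in Python, using the article's round numbers (note that a 56× earnings multiple against a ~$4.8 trillion market cap implies trailing earnings near $86 billion, which is the figure assumed below):

```python
# Back-of-envelope check of the valuation multiples quoted above.
# All inputs are the article's approximate round numbers, not exact market data.
market_cap = 4.8e12   # close to $5 trillion
revenue    = 130e9    # ~$130 billion in annual revenue
earnings   = 86e9     # trailing earnings implied by the 56x multiple

price_to_sales = market_cap / revenue        # ~37x
price_to_earnings = market_cap / earnings    # ~56x

sp500_historic_pe = 15.0  # rough long-run S&P 500 average P/E

print(f"Price/Sales: {round(price_to_sales)}x")
print(f"P/E: {round(price_to_earnings)}x")
print(f"Multiple of S&P historic average: {price_to_earnings / sp500_historic_pe:.1f}x")
```

The last line reproduces the "roughly four times the S&P's historic average" claim.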
As you likely already know, the reason the market’s been booming is that the valuations of the 10 largest AI companies, including Nvidia, have skyrocketed to the point that they collectively account for over one third of the S&P’s market cap. Meanwhile a recent MIT study reports,
“Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return.”
Listen to John, But First, Ask Yourself If You Are a Downside Investor
My friend John Talbott has an extraordinary background in finance. As his bio below indicates, he’s written nine books on finance and economics, worked as a top investment banker for Goldman Sachs, advised a host of top US companies and foreign countries, taught finance at universities in four countries, and authored op-eds for, or appeared on, essentially every leading financial media outlet.
John and I routinely discuss the economy, markets, and geopolitics. Of late we’ve been discussing whether it’s 1929 all over again, in particular whether AI company valuations are a massive bubble that will shortly burst, bringing down the entire stock market with inevitable financial and economic knock-on effects.
I’m going to quote John’s views below. But let me first point out something that is obvious once it’s pointed out. If a) you are happy as a clam with your living standard and wouldn’t raise it if you could, b) your living standard is affordable (to check, run MaxiFi) assuming you invest safely, e.g., in Treasury Inflation-Protected Securities, c) you have no kids or heirs, and d) you are invested in the stock market, the following is true.
You are investing solely for the downside and should get the hell out of the market.
Why the downside? Because if the market does well, you aren’t going to raise your spending. But if it drops, and does so by a lot, you will need to reduce your spending. Hence, you only have downside living-standard risk.
If you do have kids and are spending less than you could, when measured on a safe-real-return basis, the money you don’t intend to spend is, effectively, their money. Then the first question you need to ask is whether you want to give them gifts now or leave them more assets when you pass. If you prefer to leave them a bequest, the second question is whether you want to put that bequest, if not your own future living standard, at risk. If the answer is no, then
You should get the hell out of the market.
Most of Us Don’t Invest for the Downside
If, like most people, you are investing for your own or your kids’ possible upsides, you still may want to exit the market, at least for a while, given the potential of AI stocks to tank.
But wait, how can I call out this potential if I’m an economist? According to economics, all financial assets are priced taking into account all known information. If AI stocks were about to tank, they would have tanked already. That is, only the arrival of new information can impact the value of financial securities.
Fair enough, but the proposition that markets are fully rational seems rather hard to swallow here, just as the valuations of dot-com stocks in 2000 seemed hard to swallow.
Personally, as a very risk-averse investor, I’d steer clear of AI stocks unless someone can clearly respond to John’s concerns.
John’s Concerns
Industry rarely needs big, cutting-edge LLM AI models because businesses just don’t have many huge databases that need analyzing. And even if they did, they certainly wouldn’t need to analyze them more than once a month.
Running a business is not as complicated as learning how to speak English or how to assemble amino acids into proteins.
Companies will develop their own small AI programs, trained on their very small databases. Then, if they want to create a written report in Swahili, they can plug into a big AI program for half a second and let it write it.
If true, then why is there so much pent-up demand for data center time?
It’s not business clients that are maxing out the current data centers. It is the AI companies themselves who can monopolize a huge data center for a month just to develop and train the next iteration of their very big LLM models.
This tells me a number of things.
1 - AI models have either slowed or peaked in their quality or ability to avoid hallucinations. Unlike computer chips, where smaller, more powerful chips mean faster processing times, there may be a limit to a Moore’s-law analogue for LLMs. Bigger is not necessarily better.
2 - If you haven’t drunk the AGI Kool-Aid being served all over Silicon Valley, it is pretty obvious that an LLM that just predicts what the next word should be is not thinking or reasoning; it is just pretending to. If creativity and innovation were that easy, Einstein would not be our greatest scientist. Instead, it would be the champion contestant on Wheel of Fortune.
3 - Yes, it takes a huge data center a month to train an LLM, but the output is a trillion-entry mathematical matrix that assigns all the probability weightings used to predict the next word. And here is the killer: the matrix fits on a USB stick. There is no chance that won’t get pirated quickly.
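The storage claim is easy to sanity-check with arithmetic. The parameter counts and byte widths below are illustrative assumptions, not figures for any particular model:

```python
# Rough storage math behind "the matrix fits on a USB stick."
# Parameter counts and bytes-per-weight are illustrative assumptions.
def model_size_gb(n_params: float, bytes_per_weight: float) -> float:
    """Disk size of a model's weight matrix, in gigabytes."""
    return n_params * bytes_per_weight / 1e9

# A 70-billion-parameter model quantized to 1 byte per weight: ~70 GB,
# small enough for an ordinary flash drive.
print(model_size_gb(70e9, 1))   # 70.0

# Even a 1-trillion-parameter model at 2 bytes per weight is ~2,000 GB (2 TB),
# a single consumer SSD rather than a data center.
print(model_size_gb(1e12, 2))   # 2000.0
```

The point being sketched: the trained artifact is many orders of magnitude smaller than the infrastructure needed to produce it.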
4 - I don’t see any network or first-mover advantages in AI, unlike Google’s Search, Facebook’s global web of friends, or the universal acceptance of Microsoft’s Word and Excel. So there are no barriers to entry and no monopolistic margins.
5 - If the finished matrix is that small, there is no reason a couple of guys in a garage who have access to the matrix can’t price LLMs at low-margin commodity prices solely reflecting the cost of the massive amount of electricity needed per output. They won’t have to recoup $5 trillion in capital spending on AI models because they never spent a dime.
6 - So, I have concluded that this is a massive bubble. If my reasoning above is right, then the most overvalued companies are those needed to build thousands more data centers. Nvidia jumps out as the one in most trouble, but there are other companies, like CoreWeave, where building data centers is 99% of their business.
7 - We have to look at the remaining Big 7 on a case-by-case basis.
Tesla’s car business is collapsing, so the question comes down to whether you want to bet $1.5 trillion on a drug-addicted, aging inventor who wants to build robotaxis and humanoid robots and relocate human civilization to the barren planet of Mars. Its LLM AI business is worth essentially nothing.
Cloud-computing companies like Amazon, Microsoft, and Google should do fine, as every industry will be impacted by AI and renting cloud space.
Facebook and Apple will fall back on their existing businesses, and their P/Es will lose the secret-sauce premium of AI. But there is a silver lining: Meta and Apple will quit wasting all their free cash flow (FCF) on AI development, so maybe they can instead distribute some of that FCF to shareholders.
While not one of the Big 7, OpenAI disappears under a morass of leverage, partnerships, side-hustles, and simple mismanagement.
And the rest of American companies should see some cost savings as they replace cubicle-dwelling white-collar workers who have not had a value-adding creative thought in 30 years with similarly non-thinking AI bots and agents. This might justify a slight P/E premium for corporate America from the expected future pop to earnings, but whatever value is created may be overrun by a recession caused by the unemployment of 10 to 20 million lost souls and a lack of job opportunities for recent college grads.
Here is what I said in Jan 2025.
AI is great at analyzing huge databases. LLMs have trained on all the words available on the internet.
And what it can do with huge databases is amazing: witness what it has done examining protein structures.
But, there just aren’t that many huge databases that need analyzing. And once AI has analyzed all protein structures it certainly doesn’t have to reanalyze them again hourly, daily or even yearly. These huge databases are rather static.
Maybe predicting the weather or scheduling commercial aircraft has to be continuously updated, but these are the exceptions to the rule.
Most companies and industries have relatively tiny databases that need analysis, which can be accomplished without huge Nvidia chips. Each industry will rely on much smaller database analysis and then “rent” an LLM to report the summarized results in written or spoken English.
This is the mother of all bubbles. The increase in market value will not accrue to the AI providers; it will be distributed across all industries as costs are lowered and humans are replaced with machines.
Let me add some additional concerns.
AI may be running out of fresh data on which to train.
AI has no clear way of telling true from false data. Hence, one can expect it to generate at least partially wrong answers.
AI may be reading its own answers. Talk about only listening to yourself.
AI may end up poaching. That is, each LLM may end up trying to learn by querying other LLMs and then repeating back those answers. In the limit, one LLM will have no way to show it’s smarter than any other. This is just a restatement of John’s no-monopoly-edge point.
My own experience asking LLMs to solve lifetime financial problems makes me very suspicious of their value. They produce answers that are wildly different from the correct answers provided by my company’s MaxiFi Planner software. And financial firms are among the top users of AI.
John R. Talbott
John most recently was an associate professor of finance at SP Jain School of Global Management, with campuses in Dubai, Mumbai, Singapore, and Sydney. He is also a best-selling author whose nine books on economics and finance predicted the entire global financial crisis, starting with The Coming Crash in the Housing Market, published in 2003, and followed in February 2006 by Sell Now! The End of the Housing Bubble. His most recent finance book, Survival Investing, provides investment advice to those concerned that new financial crises may reappear in the future.
Previously, John was an investment banker for Goldman Sachs where he specialized in corporate finance, M&A and leveraged buyouts. At Goldman, he structured, financed and closed four of the ten largest transactions in the history of the firm. John has acted as a financial advisor to some twenty Fortune 500 companies and numerous countries including Russia, Qatar and Jordan where he emphasized democratic reforms as a needed engine for growth.
John’s seventy-plus articles on economics and finance have appeared in the Wall Street Journal, the Financial Times, the San Francisco Chronicle, the Herald Tribune, the Boston Globe, the New Republic, the Huffington Post, and Salon.com, and he has appeared on television as a finance expert on CNBC, Fox News, CBS, MSNBC, Fox Business News, and C-SPAN, and with Dylan Ratigan, Joe Scarborough, Neil Cavuto, Maria Bartiromo, and Larry Kudlow.
He was named a Visiting Scholar at UCLA’s Anderson School of Business and has written and published peer-reviewed academic research articles on economic growth, emerging economies, inequality, democracy, and AIDS prevention. John holds a degree in structural engineering from Cornell University and an MBA from UCLA’s Anderson School.


Very helpful post. Reminded me of this classic from Cliff Asness at the tail end of the dot com bubble:
Bubble Logic: Or, How to Learn to Stop Worrying and Love the Bull by Clifford S. Asness
https://share.google/ntmwIUP0xoaUCu6rK
I'll pick up that gauntlet!
I think this post misses the point about where AI supporters think the profits will come from.
It's not from analyzing large databases. Anthropic's Economic Index indicates the following as common uses of Claude:
>Helping with and automating coding and math (36.9%)
>Education (12.7%)
>Office and administrative support (8.4%)
https://assets.anthropic.com/m/218c82b858610fac/original/Economic-Index.pdf (figure 1.1)
People are using AI to help themselves with concrete work tasks, not merely analyzing large datasets.
If you try to get at the economic value of the tasks getting sped up or automated, you quickly get numbers in the range of trillions of dollars a year. See for example OpenAI's GDPval paper, which I discussed on your podcast: https://openai.com/index/gdpval/. Specifically, if you just look at tasks where AI models do 50-50 or better (or choose a higher threshold) in a head-to-head matchup with human experts, as judged by human experts, you can multiply by implied wage bills to get big numbers very fast.
Now this calculation is dubious -- just because the AI is 50-50ish vs. humans (or 70-30 vs. humans) in a contest doesn't mean you'll automatically defer to it. And just because it is that good doesn't mean that AI companies will make those profits -- they might be competed away or concentrated in the hands of whatever scarce factor remains (energy? chips? top tier human innovators?). But it does suggest massive productivity gains -- and therefore revenue -- are possible. Even with this generation of technology which is the worst the AI will ever be (see the Scaling Law -- which implies AI will only get better and better).
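The wage-bill multiplication described above can be sketched in a few lines. Every number here is an illustrative assumption I'm making for the sketch, not a figure from the GDPval paper:

```python
# Illustrative version of the GDPval-style estimate described above.
# All inputs are made-up round numbers, not figures from the paper.
knowledge_work_wage_bill = 5e12   # assume ~$5T/yr in potentially automatable knowledge-work wages
share_of_tasks_ai_wins   = 0.4    # assumed share of tasks where AI matches or beats human experts
realized_fraction        = 0.5    # discount: winning a graded contest != deployed automation

implied_value = knowledge_work_wage_bill * share_of_tasks_ai_wins * realized_fraction
print(f"Implied annual value: ${implied_value / 1e12:.1f} trillion")  # $1.0 trillion
```

Even with a heavy discount for the gap between benchmark wins and real deployment, the product lands in the trillions, which is the point the comment is making.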
So there are at least two multi-trillion dollar annual revenue streams in play:
>Trillions in productivity gains from automating the knowledge work that can be automated
>Trillions in potential ad revenue once OpenAI finally starts placing ads in its service, which over 700 million weekly active users rely on for purchasing advice.
I can't guarantee this isn't a bubble -- of course current companies could drop the ball, or profits could end up accruing somewhere unexpected. But it's naive to think the productivity gains won't make it possible just because 'there just aren’t that many huge databases that need analyzing'.