A Taxonomy of Heavens

Last week in The Financial Times, John Thornhill published an article titled “AI’s Double Bubble Trouble.” It focused on the possibility that both good and bad bubbles could emerge from the current surge in AI investment. Thornhill began by recounting a conversation with former Google CEO Eric Schmidt, whose remarks are worth quoting at length:

One of the most articulate advocates of the good bubble theory is Google’s former boss Eric Schmidt. “Bubbles are great. May the bubbles continue,” he told me at the Sifted summit last week. Their historical function has been to redirect masses of capital into frontier technology and infrastructure, which is good for the world. But the latest technological transformation comes with a novel twist: AI will one day far exceed the cognitive capabilities of humans. “I think it’s underhyped, not overhyped. And I look forward to being proven correct in five or ten years.” 

Valuations might appear overblown. But what, Schmidt asked as a thought experiment, would happen if a tech company attained artificial general intelligence (AGI) and then superintelligence? Such technology would exceed the sum of human knowledge and then solve the world’s hardest problems.

“What’s the value of that company? It’s a very, very large number. Much larger than any other company in history, forever, probably.”

For sure, some companies over-invest in infrastructure in bubbly times and go bust, as was the case with Global Crossing, which built out telecoms infrastructure in the 1990s, and the Channel Tunnel, which needed to be recapitalised twice. But Schmidt dismissed the idea that the same would happen to financially robust AI companies today. “The people who are investing hard-earned dollars believe the economic return over a long period of time is enormous. Why else would they take the risk?” he said. “These people are not stupid.” 

I agree with Schmidt that the people investing in AI today believe the economic return will be enormous and that, for them, the risk seems worth taking. But I take issue with his breezy claim that they are not stupid. Bubbles famously possess a compelling internal logic in which short-term participation can be rational even when medium- to long-term participation is foolish and ruinous. Yes, these investors are probably non-stupid in their narrow, highly rewarded domain, which is making asymmetric bets on the future (even if past success is no guarantee of future skill). But I doubt how much further their non-stupidity extends, and whether it translates into wisdom about humanity’s future in a world of superintelligence.

Schmidt’s own words suggest why. They reveal a lack of self-consistent imagination about the very power he is describing. And in human endeavors, a lack of imagination often leads to behaviors that yield functionally stupid outcomes, even when the actors don’t see themselves that way. To explore this, let’s continue the thought experiment Schmidt started.

If Schmidt is right and a tech company creates something that exceeds the sum of human knowledge and can solve the world’s hardest problems, what comes next? To hear him tell it: a “very, very large number.” An extremely valuable company “much larger than any other company in history, forever, probably.”

Is that really it? If someone built a black box that could solve humanity’s hardest problems, essentially a god-like entity with powers vastly beyond the total sum of human minds, would the ultimate prize truly be just an extremely high market valuation? It feels almost comically unimaginative to suppose that true superintelligence would concern itself with something as banal as maximizing shareholder value. And I say this as an investor who loves his chosen vocation, attends or watches the Berkshire Hathaway Annual Meeting every year, and sleeps with a financial calculator by his bedside for Midnight Math.

It’s possible that Schmidt is making a category error common among technologists when they talk about superintelligence: reasoning in capitalist terms (e.g., valuation, ownership, profit) about something that, by their own definition, could obliterate the very premise of capitalism. Still, it is a possible end-state, because the people deploying money clearly see it as attractive. It is one type of “heaven,” if you will, that this god-like superintelligence in its ultimate state could provide.

But that is just one perspective and one possible “heaven.” Let’s briefly imagine others for human fulfillment in this new world. To think about this more clearly, let’s create a hierarchy of heavens: possible worlds graded by how fully humans (and AIs) achieve their potential.

Level 1: Human dominance managed by AI-powered humans. Superintelligence as a corporate tool; ultimate capitalism and concentrated power. This is Schmidt’s heaven.

Level 2: Benevolent automation. Material abundance, but with today’s social hierarchies intact. A machine-driven welfare state.

Level 3: Egalitarian optimization. Equal opportunity and self-actualization for all. A world of universal flourishing.

Level 4: Transcendence. No distinction between human and machine agency. Some sort of collective divinity.

If a superintelligence can indeed “solve the world’s hardest problems,” then presumably it can solve resource allocation, governance, and inequality, the very systems that make one company more valuable than another. It is hard to imagine that a superintelligent system genuinely optimizing for human flourishing would let one firm or founder capture all the surplus, unless it were programmed to do so.

Schmidt’s heaven, Level 1, more than the others, assumes that scarcity still exists (so valuation matters), that property law still applies (so someone owns the superintelligence), and that markets still mediate outcomes (so value can accrue to the winner). But if intelligence surpasses humanity collectively, those assumptions might collapse (Levels 3 and 4). Either the superintelligence dismantles inequality, redistributing or optimizing opportunity globally, or it entrenches inequality infinitely, if it inherits its creators’ incentives.

We could probably imagine heavens beyond these four. I don’t know which is most likely or best. But I doubt that the “non-stupid” people in Schmidt’s framing are the right ones to manage the transition. Society would be wise to ensure that those at the helm are not only non-stupid in the art of blitzscaling, but also wise in the broader, human sense of the word.

Godfrey M. Bakuli is the Founder of Pioneer Strategy Group (PSG), which offers expert strategic advice and actionable execution plans to R&D and marketing leaders looking to identify, de-risk, and launch innovative business ventures. He is also the Founder and Managing Partner of The Mutoro Group, an investment firm employing a patient, disciplined, and rational approach to fundamental value investing. If you’d like to learn more about Pioneer Strategy Group, please email us at info@pioneerstrategy.co or through the link below.