Chris Zombik

Against "Superintelligence": Responding to the "AI Futures Model: Dec 2025 Update"


Lmao I am so mad about this:

Christians: "We've significantly improved our model(s) of the end times and the Second Coming of Christ!"

This post is a sequel to the infamous AI 2027, which, if you weren't aware, seeks to simulate how AI will either bring about a utopia or destroy the world by 2027. I am not here to comment on whether AI is Bad. But I am lowkey furious about this kind of messaging around AI, and will tell you definitively that It is Bad.

To wit:


Simple idea: extrapolate AI revenue until it’s the majority of world GDP. Of course, there’s something silly about this; every previous fast-growing tech sector has eventually plateaued… That said, AI seems like it could be the exception, because in principle AI can do everything
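
To make concrete what "extrapolate AI revenue until it's the majority of world GDP" amounts to, here is a minimal sketch of the exercise. Every number in it is a hypothetical placeholder of my own, not a figure from the post: a starting annual AI revenue, a fixed year-over-year growth multiplier, and world GDP held flat.

```python
# A sketch of the quoted "simple idea": compound AI revenue at a fixed
# rate until it exceeds half of world GDP. All figures are hypothetical
# placeholders, not from the post being discussed.
ai_revenue = 50e9    # hypothetical starting annual AI revenue, USD
world_gdp = 100e12   # hypothetical world GDP, held constant
growth = 3.0         # hypothetical year-over-year revenue multiplier

year = 2025
while ai_revenue <= world_gdp / 2:
    ai_revenue *= growth
    year += 1

print(year)  # the year this extrapolation crosses "majority of world GDP"
```

Note that nothing in the loop knows anything about what AI can or cannot actually do; the crossover year is entirely an artifact of the growth rate you pick.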


I will put aside the nonsense of "revenue extrapolation" as a method for divining the future to focus on a more immediate nonsense: There are all sorts of things AI can't do, in principle or otherwise! AI can't change a diaper, make you dinner, or fold your laundry (I don't want to hear about humanoid robots that do not currently and may not ever exist). Moreover, AI cannot create new art (visual, literary, musical, etc.).[1] Creating art requires taste, which AI inherently does not possess (something something the grounding problem). Consider the fact that AI can generate images (even good ones!) but it cannot tell you which of the images it generates are good.[2] And that's not even getting into the "economically useless" stuff AI can't do, like falling in love, or being a parent, or being an integral member of a friend group. 

The rest of the post deals with defining plausible milestones for AI takeoff, with substantial emphasis placed upon simulating how and when AI can become better than human experts at various tasks necessary for runaway automation. The word "superintelligence" is invoked numerous times, much to my dismay. This is the heart of my objection. As I see it, AI is not now and cannot ever be "superintelligent." We're asked to believe that AI is (somehow) going to make the leap from simply reflecting the data it is trained on, to suddenly producing original, groundbreaking insights; that AI will be a better AI researcher than the best human AI researchers, a better mathematician than the best human mathematicians, a better physicist than the best human physicists, a better novelist than the best human novelists. There is no question that AI is already a better mathematician, physicist, and novelist than most normal people (and, saliently, than all AI investors). But is that "superintelligence"? Or just a fairly good reflection of existing human intelligence?

Compressing human intelligence into a language model is a cool and interesting product. But I don't see any route from there to superintelligence. Consider: a calculator is not "more intelligent" than humans at math. The advantage of a calculator is that it's much faster than a human and cannot make mistakes. It isn't better qualitatively—it can't do any math that humans haven't already thought of. AI is much the same. The code it writes is all code you could have written yourself, given infinite time; the computer just does it faster because computers don't get tired or bored. Same thing for the emails it writes, or the research analyses it produces. Same thing for the D&D encounter ideas it generates for me. It's not better, just faster. Not quality, just quantity.

In essence, I've become completely blackpilled on the "superintelligence" view of AI progress. "Everything" isn't like chess, where a naive ML engine can discover newer and deeper winning lines from first principles based on straightforward reinforcement learning; or TikTok, where the naive engagement algorithm can do the same thing, only for holding your attention instead of beating you at chess. I'm even fairly sure that if you had enough manpower and time, humans could manually fold proteins just as well as AlphaFold. The (real, undeniable!) advantage of these AI systems is speed: by doing these tasks so quickly, they make things that are in principle doable by humans, but uneconomical, suddenly economical.

Hence, I believe any benefit we may or may not get from AI will come not from it being much more intelligent than us, but from it being much faster than us. Claims to the contrary, that AI-driven world destruction or utopia is imminent, are just tech industry propaganda designed to confuse investors and scare policymakers so they keep the money flowing.

Footnotes

  1. See also https://newaesthetics.art/. On the topic of using AI to break new ground in architecture and visual aesthetics, they state: "If jazz didn't exist, could you prompt Suno to create it? This seems like an open problem." I think the authors are being polite because AI is so trendy within their Silicon Valley milieu. In fact, the answer is self-evidently "No."
  2. h/t ranprieur.com for this observation.