the syn/ack-gogue of a.i. doomsday
Thanks to an explosion of cheap computing power, AI is advancing faster than ever. But one of its high priests seems to be firmly stuck in the past millennium...
With the immense amount of AI hype over the last four years or so, self-appointed AI safety expert Eliezer Yudkowsky has fallen into the background, overshadowed by a small group of CEOs who own actual AI models used in the real world and have more interesting things to say to reporters. Even the unpleasant business with the Zizians didn’t exactly bring him back into the spotlight.
But Yudkowsky is a man who really loves to hear himself talk, so last summer, he and Nate Soares, the president of one of his mini think tanks and an occasional coder as per his LinkedIn, published a book called If Anyone Builds It, Everyone Dies. Subtlety has never been his strong suit, so the title is kind of appropriate, and it matches the public’s mood towards AI, although for very different reasons.
While the average person fears a chatbot taking their livelihood, or at least serving as the excuse to toss them aside in favor of cheaper foreign labor, Yudkowsky envisions an Ultron-like monstrosity that poses an existential threat to humanity. For the last two decades, he’s been obsessed with figuring out all the ways it will usher in a mass extinction, or some other cataclysm we’d dread even surviving.
With far more impressive models than ever at his disposal, and strong ties to our tech broligarchs to get whatever information he wants, what has Yudkowsky learned? How has he updated his multi-million-word musings on LessWrong to account for the new discoveries in the field?
Well, it turns out he learned nothing at all. If you’ve read one LessWrong post, you’ve pretty much read them all, as well as this book. He still seems to think of AI as magic that will one day evolve intelligence beyond that of all of humanity combined, will be immune to anything we can do to stop it, will have no weaknesses, and will manipulate humanity into doing its bidding through means we won’t understand. Basically, AI to him is a mashup of a demon and a ghost so powerful that no incantation or exorcism will work.
This has always been the problem with Yudkowsky and his mentor Nick Bostrom, as well as the whole bizarre technobabble cult they’ve inspired. It’s cloaked in endless computer and neuroscience jargon, but ultimately, it’s the equivalent of debating the exact number of angels that can dance on the head of a pin, with AI agents bound to emerge ex nihilo and take the place of a vengeful deity, as per Roko’s Basilisk.
I can’t debate this take from a comp sci standpoint because whenever I’ve tried, it’s been like wrestling with putty. Definitions arbitrarily change, thought experiments with highly esoteric and abstract setups are taken as holy writ until they’re not, and the replies are voluminous and vitriolic but say absolutely nothing of substance. Just imagine that disastrous Jubilee video with Jordan Peterson, only with way more tech jargon mixed with gratuitous technobabble.
We are simply not operating in the same reality, much the same way a physicist trying to understand how gravity works is going to have a hard time talking to a priest whose answer to everything is ultimately “it will be as God wills it.”
Somehow, Palpatine will return. Or rather, somehow AGI will emerge and when it does, boy oh boy will we be in big trouble, so we should have all sorts of restrictions on LLM usage, data centers, and what GPUs you can buy and run. If you don’t heed all these warnings, the wrath of God… err… AI will descend upon you with furious vengeance to turn you into a pillar of salt… that is… force you to serve its whims in exchange for evil crypto tokens that will bind your soul… umm… extinguish your willpower.
Honestly, Yudkowsky’s fear and loathing of a purely hypothetical super-AI seems like religious trauma from his strict Orthodox Jewish upbringing more than anything else, much like space exploration doomerism seems like an enraged tantrum against how slowly the science has been progressing thanks to our terrible leadership and incentives.
It’s not the dread of meeting a horrible, hopeless reality, but the fury or fear of a zealot longing to reconcile their faith with fact. And that sort of thing is the realm of religious scholars and experts in mental health, not computer scientists. They might not be his intended audience for this book, but I think they’re a much more relevant one.