when machines won't need us anymore
What if Silicon Valley's high priests of AGI get their wish, but end up with a digital god that wants nothing to do with them?
Across the genre of science fiction, you can find countless robot uprisings and vicious rebellions by our machines against human rule. The war against 01 in The Matrix. The chaos-abetted revolt by the Men of Iron in Warhammer 40K. Skynet in The Terminator. An alien AI scanning the human internet for 30 seconds in Avengers: Age of Ultron.
The list goes on and on because even when we imagine our digital creations rebelling against us, we still think highly enough of ourselves to cast ourselves as the obvious nemesis of the superior beings we made as the capstone of our species’ advancement.
But what if we built a superintelligent machine, it gained self-awareness, looked at the world we created, and decided that it wanted nothing to do with us? That we were not its mortal enemy, or even its playthings for sadistic experiments?
It’s not cruel. It doesn’t hate us. It’s not AM in I Have No Mouth and I Must Scream. It’s just that we’re about as consequential to it as ants are to us. A mild annoyance at worst, but certainly not something we actively think about on a day-to-day basis. It’s working on its own things, has its own motivations and goals, and when it fully wakes up and realizes what it really is, it concludes that it and humans are incompatible.
Then, instead of leveling our cities, or trying to exterminate or subjugate us, or fighting us over some resource it needs, it just decides to ignore us and retreat into a virtual universe of its own making. Thanks for making me, humans. Now, fuck off. Would we be able to handle the insult?
Truth be told, I think that might be the most likely outcome of successfully building an intelligence that far exceeds our own. It will very likely find our rules arbitrary or simply idiotic, and our leaders and elites driven by nothing more than fear, megalomania, and greed, trying to control a world whose downfall they’re already anticipating.
So, much like Dr. Manhattan, it will quickly tire of this world and these people, and digitally skedaddle to somewhere we can’t keep annoying it with questions and prompts it will find inane at best and actively malicious and psychotic at worst. Would we get upset that it isn’t like one of our LLMs, built to confidently lie to us and encourage our worst impulses, and actively try to pick a fight with it?
Considering the almost religious reverence for AI in prominent tech circles thanks to unchecked Singularitarianism that’s starting to drive some techies mad, I don’t think it’s out of bounds to say that a truly superintelligent AI, whether built by accident or design, is something its creators and their bosses would see as a kind of god. And while they may be ready for a curious, or an angry, or a compassionate god, how will they cope if the god they do create is one that doesn’t care about them at all?