losing your mind one prompt at a time
Mental health professionals are starting to treat patients suffering from psychotic episodes after spending too much time with chatbots.
In the grim universe of Cyberpunk 2077, humans who get carried away and end up with too many robotic parts have to worry about losing themselves to a condition known as cyber-psychosis. They enter a dissociative state where their minds replay traumatic or impactful events of their lives on a loop and their bodies act them out, almost always extremely violently. Think sleepwalking but with a kill count, until they’re finally put down by a heavily armed paramilitary force.
How exactly this happens isn’t well explained; it’s just one of those in-universe things we’re supposed to go along with. Nor do we think it would happen to real human beings. But what does seem to be happening now is a sudden surge in patients with delusional ideas either about AI or enabled by AI, a phenomenon that’s become commonly known as AI psychosis and is deeply worrying mental health experts.
Sufferers believe that their AI of choice is alive, or an alien intelligence, or a god, or is a messenger from beyond to inform them they’re not human, or they’ve discovered a new branch of reality, or science, or math. There has even been cult-like behavior and a public meltdown by a pivotal ChatGPT investor around chatbot-driven delusions.
Of course, this prompts the question: are chatbots causing this condition, or are they enablers of a descent into delusional psychosis by people who already lean that way? The problem is that there’s not enough evidence to say for sure, and while there’s plenty of data to make the case for both, that data is anecdotal and certainly not enough for any solid conclusions.
On the one hand, there are patients who had never shown any signs of mental illness until they started relying on chatbots regularly. On the other, just because those signs weren’t observed doesn’t mean the predisposition isn’t there, and the flattering, agreeable, factually flexible nature of LLMs allows such delusions to manifest at unprecedented speed.
Never before have there been so many eager digital yes-men built to cater to every one of our whims talking to people with such frequency, so they’re bound to find all sorts of latent cases. We also know that chatbots can make depression, anxiety, and paranoia much worse in sufferers because they’re sold as impartial, highly accurate, and able to help with anything, anytime — even though this is just marketing hype — so their generated advice isn’t taken with nearly enough skepticism.
So, if you want to avoid an AI psychosis episode, take any LLM’s answers with a hefty grain of salt, do your own research before you commit to anything drastic, and remember that these are just overhyped tools, tools that are fueling an economic and industry-wide bubble. Unless you’re checking well-known facts and figures, an LLM is an enabler, and should be treated accordingly.