The people who surrender their minds to machines
Some people are volunteering to hand over all their thinking to AI, with serious implications for society.
Not too long ago, I wrote about the vast majority of people who don't get a warm and fuzzy feeling from chatbots, and postulated that the aggressive marketing strategy of telling people they no longer need to do anything but follow an LLM's lead was an affront to their agency, hence the vicious pushback. Very few of us like being told that we're effectively obsolete and that doing anything is a waste of time because some app can do everything we do, but faster and better, so we should just embrace a life of being meat with eyes, ears, and, more importantly, a credit card.
Well, let's talk about the small, but vocal and greatly amplified, minority who do embrace it, deciding that, like Sam Altman of OpenAI, they probably wouldn't know how to breathe the right way without prompting Claude or ChatGPT. With their outsized voices on social media, researchers and observers have had plenty of chances to study their behavior, coming up with a term for what they're actively doing, one that sounds like a diagnosis: cognitive surrender.
You see, the good news is that LLMs don't actively make you stupid, as claimed by an oversimplified take on a viral MIT study. They simply encourage you to do the same thing you do with every other piece of technology. Instead of memorizing a route or remembering phone numbers, you let highly reliable devices maintain that information for you and do whatever calculations they need to get you from Point A to Point B with as little fuss as possible.
In some cases, this is fine. Not every number we'll ever call or text needs to be in our long-term memory. We don't need to remember how to get to a restaurant we've stopped by twice in the last five years. But when we begin to offload tasks that do need our attention and require us to make choices that can affect our careers and even our lives, blindly trusting a machine is dangerous.
Surely these people must realize the consequences of their choice, right? I mean, how does this even happen? By now, we all know at least one person who spent an awful lot of time talking to a chatbot and seemingly lost their grasp on reality to a legitimately disturbing degree. Psychologists are even trying to study the brand-new phenomenon they're calling AI psychosis, the end result of conversations with chatbots gone either very weird or horribly wrong.
Here's the catch. They might either not really appreciate what they're doing, or just prefer it to the alternative. Roughly half of any population is not very open to new ideas and experiences. They're not afraid of them per se, they just can't dive right into them. They have to slowly wade in like they're entering a cold pool. Given the tools to explore and research, they'd much rather have an authority figure either nudge them in the right direction, or just outright give them the right answer.
It's not that they're lazy, it's that they don't want to keep making mistakes and failing, stuck in a downward spiral with questionable results. And here's a machine that is supposedly able to answer any question and holds the sum of all human intellectual ventures. Surely, if you can't ask it for advice, who can you ask?
I know what you're probably thinking next. So, conservatives are more pro-AI than liberals because they crave structure and AI gives it to them? Actually, no, this isn't the case. Liberals and conservatives are equally concerned about how AI is used and where it's headed. Trusting chatbots with your life seems to be either an extreme case of analysis paralysis or curiosity crossing into gullibility, rather than a matter of political alignment, and it's being constantly and aggressively encouraged by our technobabbling tech broverlords at every turn.
But this means that the people who do surrender, either under what they feel is insurmountable pressure to never fail at anything, or because they just don't want to overwork their brain meats, are getting their news from chatbots that constantly lie and invent outlets, universities, researchers, and journalists. In the process, those chatbots significantly throttle traffic to actual news sources, crippling their revenue streams and jeopardizing their ability to do real reporting and analysis.
They will also tend to start thinking very much alike because, as multiple studies show, giving over your agency to LLMs means a lot more regurgitation and a lot less comprehension, especially since, given the chance to evolve with no human poking and prodding to agitate them, AI models will always revert to the mean.
And that's really the danger of submitting to the big tech hive mind: becoming an incurious mental zombie led on a virtual leash by an internet comment section that routinely lies to you while love bombing you into submission, in a feat of self-absorption that treats the rest of the world as immaterial and rejects critical thought as an unnecessary burden.
Just consider that when police in Maryland decided to trust facial recognition with no questions asked, 14 people were arrested on warrants issued for someone else, spending months in jail for crimes others committed, while detectives gave it all a glance and a shrug instead of doing the bare minimum and confirming that they even had the right person in custody.
I'm sure you can imagine the kind of damage a large enough demographic like this could do in close elections and in important discussions about the long-term future of our civilization. If they can't be bothered to make decisions or figure out the truth for themselves, do we really want to trust them with deciding what the rest of us will need to do to tackle real problems requiring cooperation and major changes to our social and economic order?