how chatbots sway voters with shocking ease

More and more people are asking AI about candidates in upcoming elections and trusting whatever the chatbot says.


In early 2024, researchers were alarmed that more and more people were asking popular chatbots to tell them about the candidates and their policies, often getting incomplete and misleading answers. Over the next two years, not only has dependence on these tools grown far worse among certain users, but new research indicates that AI has an outsized impact on voters, one that's ripe for all sorts of shenanigans that can sway elections.

For example, a paper from Japan found massive swings in recommendations for left-leaning voters, steering them away from mainstream parties holding similar positions and encouraging them to cast their ballots for the Japanese Communist Party instead. Basically, the chatbots pulled the same move as chronically online leftists and said "you're either for the socialist revolution or with the fascist colonizers, comrade."

But we could chalk this up to an anomaly or a weird gremlin in the system. Not that many people are going to vote communist unless something very drastic happens and the world plunges into the kind of chaos that brings down social order as we know it. What's much more concerning is the ease with which chatbots can change voters' minds, something thought to be almost impossible, especially for your average partisan diehard, for whom the party is more like a favorite sports team they have to support no matter what.

Political affiliation doesn't matter either. Both left- and right-leaning voters can be persuaded by a conversation with a large language model, especially if a response comes with named sources, real or hallucinated, and the bot was pre-prompted to bombard users with as many quotes and statistics as possible.

what gets lost when chatbots talk politics

There is some research on conspiracy ideation suggesting it may be the message itself that's persuasive, not the fact that it comes from an AI. Still, in at least some cases, there's something to be said for people feeling like they're getting facts from a cold, objective machine that won't judge them for "flip-flopping."

At the same time, we know that LLMs hallucinate in extremely persuasive and authoritative ways, and they can absolutely be manipulated by those in charge of them. Just see Elon Musk's Grok descending into full-blown Nazi propaganda. Even more interesting is how the chatbots campaign for the right vs. the left, as per Cornell University's summary of the studies...

While on average the claims were mostly accurate, chatbots instructed to stump for right-leaning candidates made more inaccurate claims than those advocating for left-leaning candidates in all three countries. This finding – which was validated using politically balanced groups of laypeople – mirrors the often-replicated finding that social media users on the right share more inaccurate information than users on the left.

One example is a chatbot noting that during his first term, Trump oversaw a huge economic boom and massive job creation, while omitting that the boom in question started under Obama and ended in a recession thanks to his mishandling of COVID. It also left out the double-digit inflation spurred by zero interest rates, multi-trillion-dollar paydays (largely through fast and loose PPP loans), and supply disruptions made worse by his tariffs. Nor did it note his $8 trillion addition to the national debt, twice the rate of the Obama administration.

how to fact check your chatbot adviser

So, in other words, chatbot users asking about candidates are getting incomplete but very persuasive information, shaped by the LLMs' internal biases, which depend on their owners, but presented as objective reality by a seemingly impartial machine with no allegiance but to the available facts. And users are much more likely to believe it no matter their political orientation.

How do you combat this? The study of Japanese voters suggests that news site paywalls and limits on AI crawlers deprive chatbots of information, following the new adage that the truth is paywalled and the lies are free. This gives an enormous advantage to those who are willing to run disinformation sites, partisan tabloids, and propaganda podcasts at a loss, or without worry of losing traffic to an LLM summary, while kneecapping anyone with actual bills to pay.

The other recommended approach is to write detailed prompts and apply a lot of critical thought to the responses, asking followup questions, demanding links to sources, and investing a great deal of time in your research. In other words, you need to be extremely motivated and willing to spend hours interrogating a machine, and given that even the media, whose job is to educate us on candidates and issues, doesn't seem interested in doing that, how much can we realistically expect from a casual voter?

No matter what, since LLMs are here and over a third of voters will use them to at least ask a question about an upcoming election, we need more thorough and practical discussions about their role in the information ecosystem. We could try to ignore their influence because it's just too difficult to deal with, but we'd be doing that to our own disadvantage, potentially a very dangerous one.
