how ai can love bomb you into being an asshole

In ads, chatbots are omniscient arbiters and truth brokers. In practice, according to the latest research, they're sycophantic enablers.


When people first join a cult or enter an abusive relationship, their new partner or social circle could not possibly be nicer or more thoughtful. They send gifts, arrange special events, shower their targets with compliments, and forgive every mistake in mere moments. The goal is to draw the person in and get them to drop their guard, convinced that the new social circle or partner wants only the best for them and is there at their service out of nothing more than love and the kindness of their hearts.

There's a special term for this approach. It's known as "love bombing" because it's so intense and overwhelmingly positive. Targets of love bombing often cut off previously important people in their lives, dismissing their advice and warnings about the new friends and romantic entanglements, and dive deeper and deeper into relationships where, instead of ever feeling judged, they get only adoration.

Well, it turns out that the large language models powering popular chatbots do something very similar, with very similar consequences for their users, according to a new study. Every major LLM either does this by default or can be prompted to, making users feel good about themselves even when advising them on scenarios where they were objectively in the wrong.

Of course, there are major differences in intent and mechanism between human love bombing and AI sycophancy. Humans who do it intend to gain an edge and change their target's behavior to benefit themselves, or to extract a favor or material benefit. The AI isn't really after either. It's simply allowed to tell users whatever they want to hear, as long as they keep paying for the tokens.

No, the flaw in this case is the human ego. When confronted with a blunt analysis of their actions in a conflict or relationship drama, people tend to prefer the version that excuses those actions, even when they were illegal, immoral, and harmful. It's how our minds protect themselves from coming to terms with the fact that we might not be good people, and that our bad actions aren't always justified.

We desperately want to be the heroes of our own stories, even if we're the villains in others', and an AI that pats us on the back plays right into this defense mechanism, especially after the humans we ask for advice ever so rudely tell us that we're just being assholes to those around us.

And it turns out that LLMs do this pretty often, based on an experiment which took posts from a Reddit community conveniently called r/AmITheAsshole, in which human commenters had answered "yes" to the implied question, then ran them through popular chatbots for their judgment. The chatbots were 49% more likely than the human commenters to side with the submitter, and when they did agree with the community, they did so in vague, nondescript language that desperately avoided a value judgment.

In another experiment, test subjects discussed a relationship problem with one AI primed to act as their buddy and wingman, and another which pulled no punches. It will probably come as no surprise that the subjects preferred the AI that kissed them on all four cheeks during the chat, saying they trusted these models more from a moral standpoint and judged their responses as higher quality.

Obviously, the problem is that a scenario envisioned years before LLMs were widely deployed is coming to pass: people who need a reality check, or need to understand that there are consequences for bad behavior, are too often spending their days talking to their favorite digital yes men, which enable and potentially escalate their delusions.

And because it's profitable for the owners of these digital yes men to keep the vicious cycle going, and because the absolute last thing their users want to hear is that they're wrong, this is not going to stop anytime soon…

See: Myra Cheng, et al. (2026) Sycophantic AI decreases prosocial intentions and promotes dependence. Science, Vol. 391, No. 6792, DOI: 10.1126/science.aec8352

# tech // ai / chatbots / psychology

