how chatbots' lies can get you killed in the middle of nowhere
And it's not a bug but a feature.
Imagine that you have an assistant who can plan, in seconds, a spectacular trip to a country you've only seen on a map a handful of times, highlighting every must-see route, monument, and hike. You buy your tickets, fly halfway across the world, and hike for miles and miles into a mountain range to see an archeological site of great importance. Except when you reach the exact coordinates, there's nothing there, and there never was.
You’re now alone in the wilderness with no supplies and no cell service, and even if you get lucky and happen to run into locals in this remote region, which is a big if, you have to hope they’ll intuit your dilemma from context clues and be able and willing to help. It’s a pretty bad situation, and tourists blindly relying on chatbots are currently suffering the consequences in South America, Japan, and the Swiss Alps.
In some cases, people are spending time and money traveling into the mountains on a mission to hike a trail to some non-existent natural wonder. In others, they’re trapped on mountains because the way up may have been correct, but the suggested paths back down most certainly were not. And others still find that the itineraries spat out by an LLM have little in common with actual flights, trains, and bus routes, leaving them scrambling to redo their logistics and re-book everything not even last minute, but well into the trip.
Now, this is the part where we can do our best Nelson impressions from The Simpsons and say that this is what people get for simply trusting a machine that both experts and at least somewhat informed users know is prone to lying and pretty terrible at solving real world problems. But I would argue that AI companies bear responsibility here, and I mean that in a very legal sense.
lies, damn lies, statistics, and chatbots
Consider that AI companies advertise their chatbots as oracles and hand-wave error rates and hallucinations as rare or minor inconveniences while, in reality, the answers they’re giving users are wrong nearly 60% of the time. Worse yet, it’s not that the responses are obviously and clearly incorrect; it’s that they’re peppered with hidden errors small and large, all delivered with downright aggressive confidence.
So, if you don’t know much about a country, but a very confident tech executive who no one is contradicting or questioning too much tells you that he has a tool that can plan your next trip to see amazing sights and experience a fantastic culture with only the smallest, most insignificant hiccups to keep in the back of your mind, and the tool oh so confidently tells you about the Sacred Temple of Whatever, you may at least be fooled into thinking that temple might exist.
In fact, tourists in Peru about to hike to The Sacred Canyon of Humantay showed very detailed descriptions, complete with pictures of the location, to a local guide who jumped in to stop them. It all looked very convincing, but only because the descriptions and images were based on arbitrarily combining real facts about real places.
Humantay is a stunning lake high in the Peruvian Andes and there are Sacred Canyon hikes featuring caves and boulders with prehistoric petroglyphs in Southern Australia and in Arizona. Unless you’re at least slightly familiar with the geography of Peru, the American Southwest, and the Australian South, the franken-destination in question is going to sound legitimate to the casual tourist relying on an LLM pitched as downright superhuman in its abilities and knowledge.
putting the digital con in confidence
It’s like a con man selling the services of another con man who lies to his marks with expertly crafted forgeries and takes full advantage of people’s tendency to mistake confidence for competence. Even worse, very few in the media are capable of pushing back on their sales pitches, and those who do are assaulted with impenetrable or obscure technobabble they don’t know how to counter.
Getting lost abroad after being given bad directions seems mild when put up against queries leading to outcomes like mental health problems greatly exacerbated by chatbots unable to deal with a person genuinely in need of professional help, or users driven into delusions verging on psychotic breaks from reality.
So, at the very least, we’re dealing with false advertising, on top of blatant copyright violations, on top of possibly deceiving investors with fanciful dreams of very literal AI godhood, mixing in downright religious fervor with over-extended technology and bad math. We’re allowing companies with billions in funding to casually lie to users, shrug when called out, and promise that in six months they’ll fix the problem, only to repeat this cycle while pretending there are no real world consequences.
Pointing and laughing at people who trusted chatbots and just leaving it at that may give us a false sense of superiority, but it ignores a much bigger problem. In virtually any other industry, playing fast and loose with the information you give customers gets you fines and regulatory agencies breathing down your neck. We don’t say that scammy businesses should get a pass because caveat emptor. Why exempt generative AI?