when artificial intelligence tries to go to court

In a legal and technical first, a robot lawyer is about to help a human defendant. Is it a good idea? And what happens next?

sci fi court

One of the dirty secrets of many legal systems around the world, including countries we often think of as free, democratic, and fair, is that virtually all cases are settled by plea deals. Forget bold courtroom dramas where passionate lawyers whose lives are consumed by the weight of their cases shout "objection" at each other for an hour. Pretty much none of what you see on TV about the legal system is true because those shows exist to tell a story or, in the case of the weekly procedural, reassure the audience that the system is indeed fair and meticulous.

But news junkies and devotees of true crime shows and podcasts know that the reality is very, very different. If even a third of criminal cases ever went to trial, the existing systems would simply collapse under the strain, and trial dates already set many months, if not over a year, in advance would become utterly absurd. On top of that, a lot of people can't actually go to trial because they can't afford appropriate legal representation and make do with attorneys who advise them to cut a plea deal and move on, often with devastating consequences.

Enter DoNotPay, a chatbot that claims to know enough about the law to help you fight for your rights quickly, easily, and affordably. After claiming to have helped with over 2 million disputes ranging from demands for refunds to creating advance directives for medical care, its makers are set to send it to traffic court to argue on behalf of a client. The hope is that access to what is effectively a sophisticated chatbot would cost far less than a dedicated legal team and could do a better job of helping people in need of competent advice but without the means to get it.

"if you cannot afford an attorney, an ai will be provided for you"

On its face, it sounds like a noble idea. How many stories have we heard and read about lazy or incompetent lawyers whose clients landed in jail after they failed to stand up to an overzealous prosecutor? How many more stories of those lawyers failing to educate clients about their legal rights, or to fight back against violations of those rights, have been published in notable outlets? An AI on your side from the moment the police show up seems like a fantastic way to get yourself out of a bad situation, or at least not make one worse.

However, all AI has limitations, and many legal cases turn on nuances that even human juries struggle to process fairly. Disputing traffic tickets, offering quick primers on what you should and shouldn't say when being questioned or detained, and spotting sneaky attempts at a warrantless search all sound like tasks it could handle. But more complicated cases, especially ones with unsympathetic defendants, require a lot of creative problem solving and curiosity, traits machines aren't exactly known to possess in spades.

There's also the question of how it would handle biased judges playing favorites, an irrational jury that refuses to convict or acquit certain defendants, lying witnesses, and prosecutors or defense attorneys who do unethical, if not downright illegal, things to win a case at any cost. If humans struggle with this tremendously, the only way an AI could tackle more complex cases without these compounding factors is against an AI opponent, in a virtual court presided over by an AI judge. Sure, a case could be heard and decided in seconds, but how well and how fairly?

when cold logic meets messy, fallible people

All that said, however, DoNotPay does have the right idea overall. It would be very helpful for anyone to get basic advice on demand when interacting with law enforcement under less than friendly circumstances, advice that would keep them from making a split-second decision with long term consequences. Steering them through very basic cases like parking tickets, expired tags, or jaywalking would also help dramatically lower, if not completely avoid, fines that tend to linger and turn into warrants.

But anything more complex than that doesn't seem ready to move out of the human realm, a realm of gray areas, blurry lines, empathy, and emotional intelligence. In fact, machines like DoNotPay's legal AI are currently banned from most courtrooms, and lawyers eager to protect their jobs could very well make sure it stays that way, so we may never get to find out what robots with legal training can do beyond disputing small fines and drafting legal templates for a dispute with a company.

Yet that may be enough to move the needle. Let's remember that many legal issues don't come from splashy, scandalous, high-profile cases that last for years. They come from bad decisions during police encounters, and from ignorance of the law snowballing into escalating punishments for petty crimes. Tools like this won't solve the systemic problems that leave many people in poverty or struggling with mental health issues on law enforcement's radar and incarcerated more often. But they can help those people understand their options and the consequences of their next decision.

  archived from wowt
# tech // artificial intelligence / law / lawyers