Could you replace your lawyer with AI?


The case is this: An Australian driver is accused of using a mobile phone while driving, a violation of Road Rules 2014 (NSW) Reg 300. Their defence: It was not a phone in their hand, but a misidentified juice box. Acting for them is Jeanette Merjane, a senior associate at law firm Lander & Rogers.

Also acting for them is an AI trained on legal documents.

In a bright lecture hall at the University of Technology Sydney, the SXSW Sydney session “Can AI Win a Court Case?” pits a human lawyer against NexLaw’s Legal AI Trial Copilot by having both argue the same case. While Merjane has prepared her arguments the traditional way, Copilot (not to be confused with Microsoft’s generative AI chatbot) will be prompted to generate a defence live, to be read by a volunteer as though they were representing themselves in court.

From a show of hands before the showdown, around two-thirds of the audience believe Merjane will make the more convincing argument. Still, a few think the legal AI tool might surprise us.

AI is already changing the practice of law

An illustration of a person with a computer monitor for a head (symbolising AI) sitting next to someone else, writing something down.


Credit: J. Hazelwood / Mashable Composite; gorodenkoff, iStock / Getty

On the face of it, the legal profession seems like an area where AI should be enthusiastically embraced.

Legal work is infamous for involving long hours, extensive research, and complicated jargon. Having an AI algorithm automate some of this arduous work would theoretically lower costs and make the legal system more accessible, as well as save lawyers a lot of pain. What’s more, legal arguments typically make extensive references to legislation and past cases, all of which could be used to train an AI algorithm.

As such, legal AI may appear to be a promising field. In fact, AI technology is already changing the practice of law across the globe. In November 2023, AI company Luminance automated a contract negotiation “without human intervention” in a demonstration of its legal large language model Autopilot. One month later, a Brazilian lawmaker revealed he had used OpenAI’s ChatGPT to write tax legislation which had since passed. Massachusetts State Sen. Barry Finegold even used ChatGPT to help write a bill regulating generative AI, while the American Bar Association has noted that AI can be useful for predicting outcomes and informing legal strategy.

Even so, such application of AI is not without issues. Perhaps one of the most high-profile instances of AI meeting law is DoNotPay, a U.S. company which offers online legal services and chatbots, and has claimed to be “the world’s first robot lawyer.” In 2023, DoNotPay announced plans to use its AI to argue a speeding case, having the chatbot listen to the proceedings via a smartphone and instruct the defendant through an earpiece. The stunt was cancelled after state bar prosecutors warned that CEO Joshua Browder could potentially be charged with unauthorised practice of law were it to go ahead. 

Despite the experiment’s cancellation, DoNotPay still found itself in hot water amid the Federal Trade Commission’s (FTC) crackdown on AI technology last September. According to the FTC, DoNotPay claimed it would “replace the $200-billion-dollar legal industry with artificial intelligence,” yet the agency found that its services failed to deliver what they promised and that its outputs could not substitute for the work of a human lawyer.

“[I]f a client were to interact directly with a generative AI tool that ‘gave legal advice,’ then the legal entity behind that tool would be purporting to give legal advice,” Brenda Tronson told Mashable, speaking generally on the issue of AI and the law. A senior lecturer in Law and Justice at the University of New South Wales as well as a barrister at Level 22 Chambers, Sydney, Tronson specialises in legal ethics and public law.

“If that legal entity was not qualified to give advice, then, in my view, they would be engaging in unqualified legal practice and would be liable for that conduct.”

Generative AI chatbots are trying to answer legal questions

LawConnect CEO Christian Beck hadn’t heard of DoNotPay when Mashable spoke to him in October. Even so, he didn’t seem concerned that LawConnect’s legal AI chatbot for laypeople would run into the same issues.

“Obviously there’s laws that stop non-lawyers claiming to be lawyers giving legal advice,” Beck told Mashable. “But if you look at something like ChatGPT, it’s answering all the legal questions, right? And they’re not bound by that. So what we’re doing is we’re combining the AI answers with verifications from lawyers that are qualified.”

Unveiled last October, LawConnect’s AI chatbot aims to answer users’ legal questions. Though the AI provides immediate responses, users can choose to send their inquiries to real human lawyers for verification and potential further action. The chatbot uses OpenAI’s API and is trained on publicly available information from the internet, but Beck stressed that lawyers’ verified answers are fed back into the AI to make it more likely to give correct responses to similar questions in the future.

“Just describe your legal issue, and you’ll receive a personalised report created by AI with the option to have it reviewed and verified,” states LawConnect’s website.
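Neither Beck nor LawConnect has published how that verification loop is built, so the following is a minimal sketch only, assuming a generic OpenAI chat completion call; the in-memory store, function names, prompt, and model choice are invented for illustration and don’t reflect LawConnect’s actual system.

```python
# Hypothetical sketch of the "AI answer, lawyer verification" loop described
# above. Nothing here reflects LawConnect's actual code: the in-memory store,
# prompt, and model choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Stand-in for a database of lawyer-reviewed questions and answers.
verified_answers: list[dict] = []

def draft_answer(question: str) -> str:
    """Generate an immediate AI response, grounded in prior verified answers."""
    context = "\n\n".join(
        f"Q: {item['question']}\nVerified answer: {item['answer']}"
        for item in verified_answers
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "You provide general legal information, not legal "
                           "advice. Where relevant, rely on these "
                           "lawyer-verified answers:\n" + context,
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

def record_verification(question: str, lawyer_answer: str) -> None:
    """Feed a lawyer-verified answer back in so future drafts can draw on it."""
    verified_answers.append({"question": question, "answer": lawyer_answer})
```

The design point Beck describes lives in the second function: each lawyer-verified answer becomes context for future drafts, so the tool’s first responses should improve over time.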

Beck noted that LawConnect is being made available globally across all areas of law, using OpenAI’s models for translation when necessary, though the company is still “working through all of the issues” surrounding this. Even so, he didn’t seem daunted by the massive and complicated undertaking.

“We’re certainly not out there telling [people] we’re lawyers when we’re not,” said Beck. “We are telling them that these are AI answers like they could get from another AI source, but what we are saying is that we’re verifying them with lawyers, and we always use qualified lawyers to verify the questions.”

A disclaimer at the bottom of LawConnect’s website states that its content “is for informational purposes only and should not be relied upon as a substitute for legal advice.” Even so, the tool is a glimpse at what an AI-assisted legal system could look like as companies continue to explore the area.

Hallucinating AI lawyers

While AI chatbots’ instant answers appear to offer convenience, problems such as hallucinations currently limit such tools’ usefulness in making the legal system more accessible. A hallucination is false AI-generated content which the algorithm presents as true — a common issue considering that these tools do not actually understand what they generate.

“If a person who is seeking legal assistance uses those tools and does not assess or verify the output, then they might end up in a worse position than if they did not use those tools,” Tronson told Mashable.


Yet even seasoned lawyers who should perform such verification have fallen victim to false AI-generated information. There have already been multiple well-publicised cases where lawyers have inappropriately applied generative AI after failing to understand the technology.

An illustration of a person with a computer monitor for a head (symbolising AI) looking confused.


Credit: J. Hazelwood / Mashable Composite; gorodenkoff, iStock / Getty

In June 2023, two attorneys were handed $5,000 fines after filing submissions which cited non-existent legal cases. The lawyers admitted to using ChatGPT to do their research, relying on sources that had been completely invented by the AI tool. Judge P. Kevin Castel criticised the pair for continuing to stand by the fabricated cases even after their veracity had been called into question, accusing the lawyers of acting in bad faith.

“[W]e made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth,” their law firm Levidow, Levidow & Oberman said in a statement refuting Castel’s characterisation at the time.

Such statements demonstrate a clear misunderstanding of the nature of generative AI, a tool which is specifically designed to create content and is incapable of effectively fact-checking itself.



Despite examples such as this, lawyers continue to over-rely on AI to their own detriment. Later in 2023, another lawyer reportedly cited fake cases that his client, disbarred former Trump attorney Michael Cohen, had generated using Google Bard. This February, U.S. law firm Morgan & Morgan cautioned its employees against blindly trusting AI after one of its lead attorneys also appeared to cite cases invented by ChatGPT.

“Some legal practitioners are very knowledgeable and are using [AI tools] well, while others still have very limited understanding or awareness of the tools, with most falling somewhere in between,” Tronson told Mashable. 

While Tronson had not tried out LawConnect or NexLaw’s Copilot herself, she did note that such specialised AI systems may already be of more use than tools like ChatGPT.

“The publishers’ tools that I have seen demonstrated are trained on a more confined set of information and they do provide sources and links,” Tronson told Mashable. “Any tool where those two features apply is generally more useful than ChatGPT, as this limits hallucinations and makes it easier to verify the information. At that point, the tool effectively becomes a search engine which provides text about the results (where that text might not be correct) rather than just a list of results.”
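To make that concrete, a toy sketch of the search-engine-like design Tronson describes might look like the following: answers are drawn only from a vetted corpus, and each carries a citation the reader can check. The two-document corpus and word-overlap scoring here are invented for illustration; real legal research tools use proper indexes, embeddings, and ranking.

```python
# Toy illustration of a confined-corpus tool that cites its sources.
# The corpus and the word-overlap scoring are invented for illustration.
CORPUS = [
    {
        "source": "Road Rules 2014 (NSW) Reg 300",
        "text": "A driver must not use a mobile phone held in the hand "
                "while the vehicle is moving.",
    },
    {
        "source": "Hypothetical commentary",
        "text": "The prosecution must establish that the object held was "
                "in fact a mobile phone.",
    },
]

def retrieve(question: str) -> dict:
    """Return the corpus document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(CORPUS, key=lambda doc: len(q_words & set(doc["text"].lower().split())))

def answer_with_source(question: str) -> str:
    doc = retrieve(question)
    # The text may still be wrong, but the citation lets a reader verify it:
    # the property Tronson says makes these tools more useful than ChatGPT.
    return f"{doc['text']} [Source: {doc['source']}]"

print(answer_with_source("Was the driver holding a mobile phone in the hand?"))
```

Even in this toy, the output could misstate the law, but the attached source makes the claim checkable, which is exactly the property Tronson highlights.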

This limited benefit calls into question the usefulness of legal AI tools, especially considering the technology’s prohibitive environmental cost and the potentially dire consequences of errors in law. However, Tronson did acknowledge that such tools may eventually improve to a point where they offer more utility.

“It is possible that we will see an improvement in the tools, or in the reliability or quality of output from the current tools,” said Tronson. “If that occurs, and subject to the questions of liability…, then they might contribute to better accessibility. Similarly, if generative AI tools are developed to assist organisations such as Legal Aid and community legal centres, it is possible that those organisations can help a larger number of people, which would also assist with accessibility.”

AI as a tool for legal professionals

SXSW Sydney’s battle between NexLaw’s Copilot and Merjane made no effort to hide who had authored the arguments. Even without that knowledge, it would have been plainly obvious which defence against the allegation of driving while using a mobile phone had been crafted by a human, and which by an AI.

Beyond its stiff language, Copilot made obvious stumbles, citing incorrect legislation and even referencing laws from the wrong state. Its defence also focused on the testimony of the defendant’s spouse and the type of car they drove, alleging that their Mercedes-Benz’s Bluetooth and Apple CarPlay capabilities meant they’d have no need to handle their phone manually.

In contrast, Merjane presented a photograph of the alleged offence, emphasising the inability to positively identify the item in the driver’s hand. She also pulled up the defendant’s phone records to show that no calls were active at the time the photo was taken, and cited their clean driving record. Merjane was also significantly quicker to answer the judge’s questions.



Fortunately, NexLaw’s Legal AI Trial Copilot isn’t intended to replace lawyers. As the company’s website states, “Copilot is designed to complement and augment the work of human legal professionals, not replace them.”

“I think it’s clear that, given the costs of legal representation, there’s great potential for AI to assist with improving access to justice,” said Professor David Lindsay from UTS’ Faculty of Law, who acted as judge in the exercise. 

“But at this stage, and in some respects, this afternoon’s presentation presents a false dichotomy. The immediate future will involve trained lawyers working alongside AI systems. So as in almost all contexts, to frame the question as ‘humans versus AI’ is a distraction from the more important issues involving people working alongside AI systems, and the legal and ethical implications of that.”

The ethical implications of legal AI and dehumanising law

Aside from the quality of information legal AI algorithms might dispense, such tools also raise ethical issues. Liability and confidentiality are significant concerns surrounding the integration of AI into legal practice.

There are two primary confidentiality concerns with legal AI, according to Tronson. The first is whether the AI system retains the information entered into it (and which legal jurisdiction its servers fall under). The second is the extent to which such inputs are used to train the AI algorithm, particularly where confidential information may be inadvertently disclosed.

“The first concern can be controlled,” Tronson stated, noting that the AI tools’ contractual terms are key. “The likelihood of the latter concern arising should be lower, but without knowledge of how a particular system works, this can be difficult or impossible to assess.”
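As an illustration of how that first concern can be controlled in practice, a firm might scrub identifying details locally before any text reaches an external AI service. This is a hedged sketch, not any vendor’s actual practice, and the regex patterns are illustrative rather than a complete redaction scheme.

```python
# Minimal local-redaction sketch: scrub identifying details before any text
# reaches an external AI service. The patterns are illustrative only and
# nowhere near a complete redaction scheme.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\+?61|0)4\d{2}[ -]?\d{3}[ -]?\d{3}\b"), "[AU_MOBILE]"),
]

def redact(text: str) -> str:
    """Replace identifying details so they never leave the firm's systems."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "My client (jane.doe@example.com, 0412 345 678) was photographed driving."
print(redact(prompt))
# -> "My client ([EMAIL], [AU_MOBILE]) was photographed driving."
```

Names, addresses, and case-specific facts are far harder to catch than emails and phone numbers, which is one reason the contractual terms Tronson mentions remain key.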

The leadership of the courts and professional bodies will be vital in building legal practitioners’ understanding of AI tools, Tronson noted. Even so, she believes there are some situations where using AI is likely to be unethical in every circumstance, such as in writing witness statements.



Last October, a New York judge reprimanded an expert witness who used Microsoft’s Copilot to generate an assessment of damages in a real estate case.

Understanding of nuance and of AI’s limitations is vital to the technology’s effective, fair application. Similarly, understanding of nuance in human behaviour and law is vital to the effective, fair application of the legal system. Though AI has the potential to “democratise” the law, the technology carries an equally enormous risk of dehumanising it.

“For those who cannot afford a lawyer, AI can help,” U.S. Chief Justice John G. Roberts, Jr. acknowledged in the U.S. Supreme Court’s 2023 Year-End Report on the Federal Judiciary. “It drives new, highly accessible tools that provide answers to basic questions, including where to find templates and court forms, how to fill them out, and where to bring them for presentation to the judge… 

“But any use of AI requires caution and humility,” he continued. “[L]egal determinations often involve gray areas that still require application of human judgment.”

Could an AI chatbot replace your lawyer?

An illustration of a person with a computer monitor for a head (symbolising AI) standing next to a person they're representing before a judge.


Credit: J. Hazelwood / Mashable Composite; gorodenkoff, iStock / Getty

The experiment at SXSW Sydney clearly demonstrated that legal AI chatbots still have some way to go before they can compete with human lawyers. As NexLaw asserts, these tools are currently intended to assist human legal professionals rather than supplant them. Yet even as AI advances, completely replacing lawyers will remain a dangerous prospect.

A widely circulated quote attributed to a 1979 IBM presentation declared: “A computer can never be held accountable, therefore a computer must never make a management decision.” Similarly, replacing lawyers with AI raises issues of who might be accountable when things go wrong. Considering the state of generative AI as well as the widespread misunderstanding of the technology, things are bound to go wrong.

“From my point of view, the most important thing is for lawyers to remember that the tools do not ‘think,’ and that a practitioner must always exercise their own judgment and critical thinking in relation to how they use any output,” said Tronson. “As long as a practitioner applies critical thinking and their own judgment, there are appropriate uses for generative AI.”

Fewer people are likely to mourn lawyers than creatives such as artists, writers, and musicians should the profession fall to automation. Even so, such a death would fundamentally change the legal system, impacting not only those who work within it, but anyone who has cause to interact with it, which is everyone.




