21 Jul 2023

AI Tools for Legal Research: Lessons Learned from a Federal Case

By Megan Bramhall 

The New York Times broke a story over Memorial Day weekend that many of us working in the legal technology industry knew was inevitable.

A lawyer had used ChatGPT to assist him with some legal research, copied and pasted the case citations surfaced by the AI chatbot into a brief prepared for a client, and filed that brief in New York federal court. The problem? He never checked those citations, and ChatGPT had fabricated them entirely.

Legaltech News published a summary of what happened in a case that will surely go down as one of the defining moments in the early use of open-web AI tools by lawyers:

  • A plaintiff sued Avianca Airlines for alleged injuries suffered when he was struck by a beverage cart during a flight;
  • Avianca moved to dismiss and the plaintiff’s lawyer — joined by New York co-counsel — filed an opposition to the motion, including case citations, on March 1st;
  • After defense counsel filed a reply memorandum stating that it could not find many of the cases cited by the plaintiff’s lawyers, the court ordered the attorneys to provide copies of those case decisions;
  • When plaintiff’s lawyers were unable to surface six of the cases, defense counsel submitted a letter to the court asserting that these cases were likely fabricated;
  • The plaintiff’s lawyer submitted an affidavit on May 4th, acknowledging that he used ChatGPT in his legal research and that six specific cases included in the brief were nonexistent.

At least six of the cases submitted as research for a brief “appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” said Judge Kevin Castel of the Southern District of New York. “The court is presented with an unprecedented circumstance.”

The plaintiff’s lawyer appeared before Judge Castel on June 8th for a sanctions hearing, and Law360 reported that the judge would take the matter under advisement before issuing a written decision.

This unfortunate case touched off a nationwide buzz among legal professionals, but in truth it was a predictable incident to anyone who understands a critical flaw in the currently available versions of generative AI tools: they are prone to “hallucinate” believable answers that are patently false.

In fact, in recent months other lawyers have shared their own eye-opening experiences with open-web AI tools. One attorney wrote in a column that ChatGPT surfaced a list of 15 law review articles, complete with full citations and page numbers, none of which actually existed. Another attorney recently blogged about ChatGPT directing him to a case that sounded directly on point but in fact “did not exist anywhere except in the imagination of ChatGPT.”

The problem is not the use of AI-powered tools, but a lack of understanding of the purpose for which these tools were developed and — more importantly — the way they should be used by lawyers.
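
The Avianca episode points to one concrete habit that the hypothetical sketch below illustrates: treat every citation an open-web chatbot produces as unverified until it has been confirmed in an authoritative source. Everything in this snippet (the `VERIFIED_DB` set, the helper names, the sample citations) is an illustrative assumption for demonstration purposes, not a real product or API.

```python
# Hypothetical workflow sketch: an AI-surfaced citation stays "unverified"
# until it is confirmed against an authoritative source. The "database"
# here is a stand-in set; a real firm would query a trusted citator.

VERIFIED_DB = {
    "Example v. Sample, 123 F.3d 456 (2d Cir. 1999)",  # placeholder entry
}

def citation_is_verified(citation: str) -> bool:
    """Hypothetical lookup against a trusted legal research database."""
    return citation in VERIFIED_DB

def vet_citations(ai_citations: list[str]) -> tuple[list[str], list[str]]:
    """Split chatbot output into confirmed cites and ones a human must pull and read."""
    confirmed: list[str] = []
    unverified: list[str] = []
    for cite in ai_citations:
        (confirmed if citation_is_verified(cite) else unverified).append(cite)
    return confirmed, unverified

if __name__ == "__main__":
    confirmed, unverified = vet_citations([
        "Example v. Sample, 123 F.3d 456 (2d Cir. 1999)",
        "Fictional v. Hallucinated, 999 F.4th 1 (2d Cir. 2021)",  # plausible-looking but invented
    ])
    print("Confirmed in a trusted source:", confirmed)
    print("Do NOT cite until independently verified:", unverified)
```

The point of the sketch is the division of labor: software can flag which citations failed a lookup, but a lawyer still has to pull and read anything that clears the filter before it goes into a filing.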

“Generative AI can be reliable for summarization of a particular document, while it can be unreliable for legal research,” said David Cunningham, chief innovation officer at Reed Smith, in Law.com. “Also, the answer is very dependent on whether the lawyer is using a public system versus a private, commercial, legal-specific system, preloaded and trained with trustworthy legal content.”

These considerations are not new to LexisNexis. In fact, LexisNexis has been leading the way in the development of legal AI tools for years, working to provide lawyers with products that leverage the power of AI technology to support key legal tasks. And with the rollout of Lexis+ AI, we’re now pioneering the use of generative AI for legal research, analysis and the presentation of results, with a focus on how these tools can enable legal professionals to achieve better outcomes.

It is important to understand that we are bringing a very deliberate approach to the development of these products:

  • Deep customer understanding — We spend significant R&D time acquiring an in-depth knowledge of how lawyers work and what they need. This drives every aspect of our development of AI tools.
  • Honoring professional responsibilities — The legal profession carries a unique level of ethical obligations to courts, clients and business associates that must be met regardless of the tools used. As the Avianca Airlines case demonstrates, using AI does not let a lawyer off the hook with regard to professional responsibility, and lawyers should not risk using off-the-shelf AI tools available on the open web.
  • Building AI models with legal expertise — Our team of AI product developers includes lawyers and paralegals from around the world. These legal subject matter experts work directly with our engineers and product managers to ensure we are building AI tools based on testing and evaluation conducted by lawyers.
  • Grounding in LexisNexis content — We are integrating our unsurpassed LexisNexis content database into Lexis+ AI to put precise parameters around what the AI engine draws upon when providing legal information (see the sketch after this list).
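
LexisNexis has not published the internals of Lexis+ AI here, but the grounding described in the last bullet is commonly implemented with a retrieval-augmented generation (RAG) pattern: retrieve passages from a trusted corpus first, then instruct the model to answer only from those passages. The following is a minimal sketch of that general pattern, assuming a toy in-memory corpus and hypothetical helper names; it is not Lexis+ AI code.

```python
# Minimal sketch of retrieval-augmented generation (RAG) style grounding.
# Hypothetical illustration only; the corpus and helpers are invented.

TRUSTED_CORPUS = [
    {"cite": "Example v. Sample, 123 F.3d 456 (2d Cir. 1999)",
     "text": "A carrier may be liable for injuries caused by onboard equipment."},
    {"cite": "Demo v. Placeholder, 789 F. Supp. 2d 12 (S.D.N.Y. 2011)",
     "text": "Claims arising from international flights are governed by treaty."},
]

def search_trusted_corpus(query: str, top_k: int = 2) -> list[dict]:
    """Naive keyword-overlap retrieval over a trusted document set."""
    q_terms = set(query.lower().split())
    scored = [
        (len(q_terms & set(doc["text"].lower().split())), doc)
        for doc in TRUSTED_CORPUS
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_grounded_prompt(question: str, passages: list[dict]) -> str:
    """Constrain the model to retrieved passages so every citation traces to a real source."""
    sources = "\n".join(f"[{d['cite']}] {d['text']}" for d in passages)
    return (
        "Answer using ONLY the sources below and cite them by name. "
        "If the sources do not answer the question, say so.\n\n"
        f"SOURCES:\n{sources}\n\nQUESTION: {question}"
    )

if __name__ == "__main__":
    passages = search_trusted_corpus("carrier liability for onboard injuries")
    print(build_grounded_prompt("Is the airline liable for a beverage-cart injury?", passages))
```

Because the prompt confines the model to passages retrieved from a curated corpus, any citation in the answer traces back to a document that actually exists, which is precisely the failure mode the Avianca brief exposed.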

This legal domain expertise, spanning research, professional ethics, developer expertise and content grounding, is what will set our AI tools apart from those currently available on the open web.

“I think (generative AI) is useful for lawyers as long as they’re using it properly, obviously, and as long as they realize that ChatGPT is not a legal research tool,” said Judge Scott U. Schlegel of the 24th Judicial District Court in Louisiana, in an interview with Legaltech News.

We invite you to join us on this journey by following our Lexis+ AI web page, where we will share more information about these AI-powered solutions and how they can responsibly support the practice of law.