
Developing AI Responsibly: The LexisNexis Commitment

May 15, 2024 (3 min read)

By Geoffrey D. Ivnik, Esq. | Director of Large Markets, LexisNexis

The emergence of generative artificial intelligence (Gen AI) tools is an exciting breakthrough in our industry, but it is also appropriate to approach this new technology with caution. We are all aware by now of the risks of relying on Gen AI outputs without proper oversight by a lawyer, just as a lawyer would oversee the work product of a summer associate or a paralegal.

One way to mitigate these risks is to make sure you are using a Legal AI tool — a Gen AI solution trained for the legal profession — that is grounded in authoritative legal content and developed with “Responsible AI” principles. This is a critical foundation, ensuring that the AI system producing outputs in response to your prompts has been developed, deployed and governed in compliance with all relevant ethical standards and laws.

“Responsible AI is an approach to developing and deploying artificial intelligence from both an ethical and legal standpoint,” explains the International Organization for Standardization. “The goal is to employ AI in a safe, trustworthy and ethical way. Using AI responsibly should increase transparency while helping to reduce issues such as AI bias.”

Responsible AI is about more than doing the right thing; it is an important risk management guardrail that increases confidence that the Legal AI tool you are using will not expose your law firm to reputational and financial damage in the future.

Here are five keys to the responsible development of a Legal AI tool, drawn from the Responsible AI Principles at RELX, the parent company of LexisNexis:

  • Be able to explain how the solution works

The AI tool should have an appropriate level of transparency for each application and use case to ensure that different users can understand and trust the output.

  • Prevent the creation or reinforcement of unfair bias

Mathematical accuracy doesn’t guarantee freedom from bias, so the AI should be developed with bias-mitigation procedures, extensive review and documentation processes, and automated bias-detection tools.

  • Create accountability through human oversight

Humans must have ownership and accountability over the development, use and outcomes of AI systems. This requires an appropriate level of human oversight throughout the lifecycle of development and deployment, including ongoing quality assurance of machine outputs to pre-empt unintended use.

  • Respect privacy and champion data governance

The AI tool should be developed with robust data management and security policies and procedures, ensuring that personal information is handled in accordance with all applicable privacy laws and regulations — as well as privacy principles that require the developer to always act as a responsible data steward. For example, LexisNexis has made data security and privacy for customers a priority by opting out of certain Microsoft AI monitoring features to ensure OpenAI cannot access or retain confidential customer data.

  • Consider the real-world impact on people

The technology should be built only after reflecting on the sphere of influence of a new product, mapping the stakeholders beyond its direct users, and considering the domain to which the tool applies, including any potential impact on an individual’s health, career or rights.

LexisNexis has been an industry leader in the development of responsible and trustworthy AI tools for several years. This began with our extractive AI models, built on various machine learning techniques, and has now advanced to the transformational technology of Gen AI with the breakthrough Lexis+ AI platform.

Lexis+ AI supports legal professionals with the ethical and responsible adoption of Gen AI tools. The platform is built according to a framework of pre-defined principles, ethics and rules that guide everything we do. All answers are grounded in the world’s largest repository of accurate and exclusive legal content from LexisNexis, including case law, statutes, treatises and more. In fact, Lexis+ AI is the only Legal AI solution that provides linked citations in responses.

This innovative platform enables conversational search, insightful summarization, intelligent legal drafting, and document upload and analysis capabilities — all in a seamless user experience. It incorporates multiple large language models to match the best model for each research task.

We’re now taking the industry to the next level with the launch of our second-generation legal generative AI assistant on Lexis+ AI. The new AI Assistant delivers an even more personalized experience that will support legal professionals in making informed decisions faster, generating outstanding work, and freeing up time to focus on other efforts that drive value. All existing Lexis+ AI customers have access to the enhanced AI Assistant.

If you want to learn more about how Lexis+ AI can help legal professionals achieve better outcomes, or to sign up for the Lexis+ AI Insider program that provides the latest in Legal AI educational content, visit www.lexisnexis.com/ai.