Human Oversight: The Key to Ethical AI Adoption

By: LexisNexis Canada

Human oversight is critical to ensure generative AI benefits legal services in an ethical and responsible manner. With diligent governance, professionals can use AI to improve efficiency, insights, and justice while proactively managing risks and upholding professional duties.

This article examines strategies for developing ethical and accountable AI systems through continuous human supervision. We discuss how human oversight reduces risk, amplifies benefits, and boosts overall accountability in legal practice, and we look at the best ways to increase human oversight of AI systems.

AI Ethics

AI ethics is the study and application of ethical principles and values in the development, deployment, and use of artificial intelligence systems. It aims to ensure that AI technologies are designed, implemented, and utilized in a responsible and ethical manner, addressing potential risks, challenges, and unintended consequences.

How Generative AI Works

Generative AI relies on machine learning algorithms that detect statistical patterns in data; absent human guidance, the models have no grasp of ethics, logic, or common sense. As a result, AI systems can reach conclusions that are biased, misleading, or simply incorrect. To develop ethical and effective AI, human supervision is required from initial data inputs through final outputs.

Human oversight also mitigates other prominent AI risks. It can reduce the introduction of bias through input data checks and output validation, keep inputs carefully curated, and support effective data governance. In short, human oversight markedly improves AI results and boosts user trust.
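To make output validation concrete, here is a minimal Python sketch of one common bias check: comparing favourable-outcome rates across groups in a reviewed sample. The data, column names, and threshold are illustrative assumptions, not features of any particular system.

```python
import pandas as pd

# Hypothetical review sample: model outcomes alongside a demographic
# attribute. Values and column names are illustrative only.
results = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "outcome": [1, 1, 0, 1, 0, 0, 1, 0],
})

# Compare favourable-outcome rates across groups. A large gap is a
# signal to investigate the inputs, not proof of bias by itself.
rates = results.groupby("group")["outcome"].mean()
gap = rates.max() - rates.min()
print(rates)
if gap > 0.2:  # threshold is an arbitrary example value
    print(f"Warning: {gap:.0%} outcome gap across groups; review input data")
```

Checks like this sit alongside, not in place of, human review: a flagged gap still needs an expert to judge whether it reflects genuine bias or a legitimate difference in the underlying cases.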

AI Ethics and Governance: Applying Human Guidance in AI Development

For AI systems built for specific purposes, it is vital that providers apply extensive expert oversight throughout the development process. In the legal sector, for example, people with an in-depth understanding of the law should guide:

  • data preparation
  • model training
  • testing
  • auditing
  • monitoring

All of this helps to continuously improve the performance of AI solutions.

Assessing Data Input

Human oversight should start with the inputting of data. Experts should carefully curate the data used to train systems and then continually evaluate it, looking for outliers or anomalies and correcting sources to maintain high quality. Combined with bias-reducing procedures and algorithmic detection tools, this curation minimizes inaccuracies and bias.

Regular Data Assessments

Regular data assessments identify problems like missing values, outliers, and unrepresentative data. This allows providers to address issues through cleaning and preprocessing. Perhaps most importantly, data assessments ensure that the data is representative and reflects the real-world scenario in which the AI operates.
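As a rough illustration of these checks, the Python sketch below uses pandas to surface missing values, outliers, and unrepresentative categories in a small made-up sample; the column names and values are hypothetical, not taken from any real training set.

```python
import pandas as pd

# Hypothetical slice of a training corpus; columns are illustrative.
df = pd.DataFrame({
    "practice_area": ["tax", "tax", "family", "criminal", None, "tax"],
    "doc_length":    [1200, 1350, 980, 150000, 1100, 1240],
    "year":          [2021, 2020, 2019, 2021, 2022, 2018],
})

# 1. Missing values: share of gaps per column, worst first.
print(df.isna().mean().sort_values(ascending=False))

# 2. Outliers: flag numeric values more than two standard deviations
# from the column mean (a loose rule of thumb; the 150000 value is
# flagged here).
numeric = df.select_dtypes("number")
z = (numeric - numeric.mean()) / numeric.std()
print((z.abs() > 2).sum())

# 3. Representativeness: do category proportions reflect the
# real-world caseload the AI will actually face?
print(df["practice_area"].value_counts(normalize=True))
```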

AI Monitoring and Audits

Organizations bear responsibility for monitoring AI systems and should always consider their real-world impact. That’s why providers should also apply human oversight through auditing. At the highest level, auditors can perform model performance evaluation: defining a series of metrics (accuracy, precision, speed, relevance, and so on) and judging outputs against them. Rigorous supervision ensures AI evolves responsibly before deployment.
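As a sketch of what such an evaluation might look like, the Python example below scores a small expert-labelled audit set with scikit-learn and compares each metric against an agreed threshold; the labels and threshold values are illustrative assumptions, not recommended standards.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Hypothetical audit set: expert judgments (y_true) vs. model outputs
# (y_pred), where 1 = relevant answer and 0 = not relevant.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]

metrics = {
    "accuracy":  accuracy_score(y_true, y_pred),
    "precision": precision_score(y_true, y_pred),
    "recall":    recall_score(y_true, y_pred),
}

# Judge outputs against agreed thresholds; anything that falls short
# is routed to human review before the model version is deployed.
thresholds = {"accuracy": 0.75, "precision": 0.90, "recall": 0.80}  # examples
for name, value in metrics.items():
    status = "PASS" if value >= thresholds[name] else "REVIEW"
    print(f"{name}: {value:.2f} ({status})")
```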

Soliciting Feedback from Experts

Providers should also judge outputs in real-world scenarios, inviting feedback from real-world users to help identify problems. Lawyers are especially well placed to provide that feedback: they are the most likely to notice inaccuracies and to understand context, making their perspective practical and attuned to nuance.

Effective oversight depends, then, on the knowledge of the expert and the irreplaceable shared wisdom of the community.

Becoming an AI Ethicist: Applying Oversight As Legal Professionals

Lawyers and law firms should also apply oversight when using AI tools in practice. They can perform their own audits to confirm that AI systems meet the standards outlined above, and those who want to dig deeper can adopt a more technical approach to auditing.

Auditing AI Systems at Law Firms

Tools like Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) explore the interpretability of AI systems, for example. LIME approximates a black-box machine learning model around a single prediction with a simple, interpretable surrogate model, while SHAP attributes each prediction to individual input features; both reveal what is happening within systems and help identify potential issues.
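As a hedged sketch of how a LIME audit might run, the example below applies the open-source lime package to a stand-in scikit-learn classifier trained on a public dataset; a legal AI system would be probed the same way through whatever prediction function it exposes. It assumes the lime and scikit-learn packages are installed.

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Stand-in "black box": a random forest on a public dataset, used here
# only to demonstrate the auditing mechanics.
data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Fit a local interpretable surrogate around one prediction and list
# the features that most influenced it.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
print(explanation.as_list())
```

If the features driving a prediction look irrelevant or troubling, that is exactly the kind of issue an audit should surface for human review.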

LIME and SHAP should rarely prove necessary: in the pursuit of minimal risk, lawyers should favour AI systems that already offer strong transparency and accountability. But AI systems are not static; they evolve, improve, and sometimes deteriorate, so a once-transparent system can become increasingly opaque. That’s why regular auditing is necessary.

Training People to Spot AI Errors

Training and guidance also empower lawyers to practice effective human oversight. Law firms should train staff on the best ways to apply human oversight to outputs. That means lawyers should be able to scrutinize outputs, validate accuracy, spot errors, identify data misuse, and so on. Responsible AI adoption requires developing an organizational culture focused on oversight, ethics, and continuous improvement.

Learn more about Lexis+ AI to witness the benefits of AI-driven legal research.