The emergence of generative AI (Gen AI) tools represents an exciting breakthrough for the legal industry. However, it's crucial to approach this new technology with appropriate caution. Just as lawyers scrutinize the work of junior colleagues before relying on it, they should scrutinize the outputs of AI tools.
Human oversight is critical to ensure generative AI benefits legal services in an ethical and responsible manner. With diligent governance, professionals can use AI to improve efficiency, insights, and justice while proactively managing risks and upholding professional duties.
This article examines strategies for developing ethical and accountable AI systems through continuous human supervision. We discuss how human oversight reduces risk, amplifies benefits, and boosts accountability in legal practice, and we look at the best ways to increase human oversight of AI systems.
AI ethics is the study and application of ethical principles and values in the development, deployment, and use of artificial intelligence systems. It aims to ensure that AI technologies are designed, implemented, and utilized in a responsible and ethical manner, addressing potential risks, challenges, and unintended consequences.
Generative AI relies solely on machine learning algorithms that detect patterns, with no inherent grasp of ethics, logic, or common sense. As a result, AI systems can reach conclusions that are biased, misleading, or simply incorrect. To develop ethical and effective AI, human supervision is required from initial data inputs through final outputs.
Human oversight also mitigates other prominent AI risks. It can reduce the introduction of bias through input data checks and output validations, and it helps keep inputs carefully curated while supporting effective data governance. In short, human oversight drastically improves AI results and boosts user trust.
For AI systems built for specific purposes, it is vital that providers apply extensive expert oversight throughout the process. In the legal sector, for example, people with an in-depth understanding of the law should guide every stage, from curating the training data to evaluating the outputs. All of this helps to continuously improve the performance of AI solutions.
Human oversight should start with the inputting of data. Experts should carefully curate the data used to train systems and then continually evaluate it, looking for outliers or anomalies and correcting sources to maintain high quality. Bias-reducing procedures and algorithmic detection tools can further minimize inaccuracies and bias.
Regular data assessments identify problems like missing values, outliers, and unrepresentative data. This allows providers to address issues through cleaning and preprocessing. Perhaps most importantly, data assessments ensure that the data is representative and reflects the real-world scenario in which the AI operates.
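As an illustration, a recurring data assessment might look like the following Python sketch. It assumes training examples arrive as a pandas DataFrame; the column names, the synthetic sample, and the z-score threshold are placeholders rather than a prescribed standard.

```python
# A minimal data-assessment sketch; assumes training examples arrive as
# a pandas DataFrame. Column names and thresholds are illustrative.
import numpy as np
import pandas as pd

def assess_data(df: pd.DataFrame) -> dict:
    """Flag missing values, numeric outliers, and skewed class mixes."""
    report = {"missing": df.isna().sum().to_dict()}

    # Numeric outliers via z-score; |z| > 3 is a common heuristic.
    numeric = df.select_dtypes(include=np.number)
    z = (numeric - numeric.mean()) / numeric.std(ddof=0)
    report["outliers"] = (z.abs() > 3).sum().to_dict()

    # Representativeness: does the jurisdiction mix reflect real-world use?
    if "jurisdiction" in df.columns:
        report["jurisdiction_mix"] = (
            df["jurisdiction"].value_counts(normalize=True).round(2).to_dict()
        )
    return report

# Synthetic sample with one missing value and one extreme outlier planted.
rng = np.random.default_rng(0)
lengths = rng.normal(1000, 200, size=50).tolist() + [300000.0]
lengths[5] = None
sample = pd.DataFrame({
    "doc_length": lengths,
    "jurisdiction": ["ON"] * 30 + ["BC"] * 15 + ["QC"] * 6,
})
print(assess_data(sample))
```

A report like this surfaces the planted missing value and outlier, giving the expert reviewers described above concrete items to correct before training.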
Organizations bear responsibility for monitoring AI systems and should always consider the real-world impact. That's why providers should also apply human oversight through auditing. At the highest level, this means model performance evaluation: defining a series of metrics (accuracy, precision, speed, relevance, and so on) and judging outputs against them. Rigorous supervision ensures AI evolves responsibly before deployment.
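A pre-deployment evaluation step of this kind could be sketched as follows; the labels, predictions, and 0.80 thresholds are assumed for illustration only.

```python
# A minimal sketch of pre-deployment evaluation against agreed thresholds.
# Ground truth, predictions, and targets here are placeholders.
from sklearn.metrics import accuracy_score, precision_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # expert-reviewed ground truth
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # model outputs

metrics = {
    "accuracy": accuracy_score(y_true, y_pred),
    "precision": precision_score(y_true, y_pred),
}
thresholds = {"accuracy": 0.80, "precision": 0.80}  # assumed targets

for name, value in metrics.items():
    status = "PASS" if value >= thresholds[name] else "NEEDS REVIEW"
    print(f"{name}: {value:.2f} ({status})")
```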
Providers should also judge outputs in real-world scenarios, inviting feedback from real users to help identify problems. Lawyers should provide this feedback: they are the most likely to notice inaccuracies and understand context, making their perspective practical and attuned to nuance.
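One lightweight way to capture such feedback in a structured, reviewable form is sketched below; the record fields are hypothetical, not a real product schema.

```python
# A hypothetical structure for user feedback on AI outputs; field names
# are assumptions for illustration, not an actual product schema.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class OutputFeedback:
    session_id: str
    reviewer_role: str         # e.g., "lawyer"
    accurate: bool             # did the output match the authorities cited?
    context_appropriate: bool  # did it fit the matter's jurisdiction and facts?
    notes: str
    reviewed_at: datetime

fb = OutputFeedback(
    session_id="abc-123",
    reviewer_role="lawyer",
    accurate=False,
    context_appropriate=True,
    notes="Cited case was overturned on appeal.",
    reviewed_at=datetime.now(timezone.utc),
)
print(fb)
```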
Effective oversight depends, then, on the knowledge of the expert and the irreplaceable shared wisdom of the community.
Lawyers and law firms should also apply oversight when using AI tools in practice. They can perform their own audits to confirm that AI systems meet the expected standards, and those who want to dig deeper can adopt a more technical approach to auditing.
Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), for example, are techniques for exploring the interpretability of AI systems. LIME approximates a black-box machine learning model with a local, interpretable surrogate, while SHAP attributes each prediction to the contributions of individual input features; both help reveal what is happening within systems and identify potential issues.
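As a rough illustration, the Python sketch below applies SHAP to a stand-in scikit-learn classifier; the synthetic dataset and model are placeholders for whatever system is actually being audited.

```python
# A rough SHAP sketch; the synthetic dataset and scikit-learn classifier
# stand in for the model under audit.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# shap estimates each feature's contribution to each prediction.
explainer = shap.Explainer(model.predict, X)
explanation = explainer(X[:50])

# Rank features by mean absolute contribution; surprising rankings are
# candidates for closer human review.
importance = np.abs(explanation.values).mean(axis=0)
for i in np.argsort(importance)[::-1]:
    print(f"feature_{i}: mean |SHAP| = {importance[i]:.4f}")
```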
LIME and SHAP should not often prove necessary: in the pursuit of minimal risk, lawyers should use AI systems that already offer greater transparency and accountability. But AI systems are not static. They evolve, they improve, and they also deteriorate, so a once-transparent system can become increasingly opaque. That's why regular auditing is necessary.
Training and guidance also empower lawyers to practice effective human oversight. Law firms should train staff on the best ways to apply human oversight to outputs. That means lawyers should be able to scrutinize outputs, validate accuracy, spot errors, identify data misuse, and so on. Responsible AI adoption requires developing an organizational culture focused on oversight, ethics, and continuous improvement.
Learn more about Lexis+ AI to see the benefits of AI-driven legal research.