How are LLMs and Generative AI Changing the Future of Finding Insights from Data?

January 23, 2025 (5 min read)
LLMs and Generative AI are revolutionizing data analysis.

Large Language Models and generative AI tools have transformed the way organizations bring order to the vast amount of online and offline data available. AI can narrow down this data universe to a concise summary of only the most relevant results, which allows organizations to generate insights that would not have been possible through manual searches.

In our latest blog, we explain how a Responsible AI approach can help organizations get the most out of the technology.

AI launches a new era for summarization and insight generation

A McKinsey report called 2023 generative AI’s “breakout year”, and its 2024 survey revealed that the percentage of organizations using the technology had since nearly doubled. Its rise has been observed across multiple sectors – for example, 78% of banks have implemented generative AI for at least one use case, according to IBM’s 2024 Global Outlook for Banking and Financial Markets.

A key reason for the proliferation of generative AI and LLMs is their transformative effect on what organizations can do with the vast amount of data that is available to them:

  • Generative tools are trained on high volumes of data to instantly create (or ‘generate’) new content such as text, imagery or videos in response to a user’s prompt.
  • LLMs use Natural Language Processing to ingest data and generate new text; analyze and classify text; find patterns in data; and provide concise and relevant summaries.

These tools offer two main benefits for organizations:

  • Finding new insights from data: AI tools can surface new insights from high volumes of data in a way that would be heavily resource-intensive, or even impossible, for humans to do manually. LLMs can detect trends and patterns in data and analyze the tone and sentiment of different sources. Insights range from risks that should be investigated to opportunities for new products or markets. LLMs and generative AI should get better at these tasks over time as they learn from new data and repeated interactions with users.
  • Summarizing high volumes of data: Even when AI tools surface insights from data, the smaller subset of relevant results can still absorb significant analyst time to process and act on. LLMs and generative AI tools can analyze this subset to understand its meaning, extract the key points and provide a concise summary for the analyst or user (see the sketch after this list). This makes it easier and quicker to identify risks and opportunities in the data and distribute findings across the company.
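To make the summarization pattern concrete, here is a minimal Python sketch of the common “map-reduce” approach: summarize each chunk of each document, then summarize the summaries. The `llm_generate` function is a hypothetical placeholder for whichever LLM API your organization uses, and the chunk size and prompts are illustrative assumptions rather than any vendor’s implementation.

```python
# Minimal sketch of LLM-based summarization over a large document set.
# `llm_generate` is a hypothetical placeholder for any LLM completion API.

def llm_generate(prompt: str) -> str:
    """Placeholder: call your LLM provider here and return its text response."""
    raise NotImplementedError("Wire this up to your chosen LLM API.")

def chunk_text(text: str, max_chars: int = 4000) -> list[str]:
    """Split a long document into chunks small enough for the model's context window."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize_documents(documents: list[str]) -> str:
    """Map-reduce summarization: summarize each chunk, then summarize the summaries."""
    # Map step: extract key points from every chunk of every document.
    partial_summaries = []
    for doc in documents:
        for chunk in chunk_text(doc):
            partial_summaries.append(llm_generate(
                "Extract the key points from the following text as short bullets:\n\n" + chunk
            ))
    # Reduce step: condense the per-chunk bullets into one concise summary.
    return llm_generate(
        "Combine these bullet points into a single concise summary, "
        "highlighting risks and opportunities:\n\n" + "\n".join(partial_summaries)
    )
```

The same two-stage pattern extends naturally to trend or sentiment analysis by swapping the prompts.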

The greater accuracy and efficiency AI brings to insight generation and summarization have prompted many organizations to invest in the technology. For example:

  • Canada’s Bank of Nova Scotia uses LLMs to summarize conversations between a customer and the bank’s chatbot; if a query is referred to a human agent, the summary saves up to 70% of the time it would have taken to read the conversation.
  • Barclays is exploring the use of generative AI to improve its detection of fraud and money laundering by recognizing patterns in data that could predict illicit activity.
  • Morgan Stanley uses natural language processing tools to improve the services it offers to companies, including providing account information and offering personalized financial advice.

MORE: Top 5 ways professional services teams are using generative AI

Responsible AI: Improving the accuracy and credibility of AI’s summaries and insights

Generative AI tools and LLMs have built-in problems that could undermine the summaries and insights they provide to organizations. Many of these issues stem from the ‘black box’ nature of AI: humans cannot always see or understand why and how a model came up with a particular response, insight or summary. This brings several risks:

  • Algorithmic bias: If we do not know the rationale behind an AI’s insights, we cannot identify biases introduced by its developers or by the data it was trained on.
  • Hallucinations: Generative AI tools and LLMs sometimes produce responses to a user’s prompt that are erroneous and not based on accurate data. The New York Times reported that up to 27% of responses from some of the best-known generative AI tools may be hallucinations.
  • Data risks: Data used to power AI technologies sometimes fails to comply with regulatory standards around security and privacy, or to respect the intellectual property of its originators or owners. Yet many LLMs and generative AI tools produce content without citing their sources. If insights or summaries are based on data used without express permission from publishers, the organization acting on those insights is exposed to legal risks.

Overcoming these risks to leverage AI’s potential is a priority for organizations in every sector. The most promising path is a Responsible Business approach to AI: AI, and the data powering it, should be developed and deployed in a legally compliant and ethical way. This introduces a framework that measures AI initiatives not only by their potential for innovation and profit, but also by how well they further the company’s core values and ethics.

While Responsible AI starts from a set of principles about the ethical use of data and technology, organizations then need to implement those principles in practical ways. A common method is to set up a committee that assesses every potential AI initiative against a Responsible Business for AI framework.

Another is to set out guardrails which dictate how staff can and should use LLMs and generative AI tools. One guardrail that can reduce the risk of AI hallucinations is to adopt a Retrieval-Augmented Generation (RAG) technique for generative AI tools and LLMs. This approach grounds every response in authoritative, original data sources retrieved at query time, which take precedence over what the model learned from its training data and from previous prompts and responses. Each response should then cite the sources used to compile it, which allows the organization to verify the information and establish that it is not a hallucination.
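As an illustration, here is a minimal RAG sketch in Python. It uses a simple TF-IDF retriever from scikit-learn in place of a production vector store, and `llm_generate` is again a hypothetical placeholder for whichever LLM API you use; the prompts and the retriever choice are illustrative assumptions, not a specific vendor’s implementation.

```python
# Minimal RAG sketch: retrieve the most relevant passages, then ask the model
# to answer strictly from those passages and cite them. TF-IDF stands in for
# a production vector store; `llm_generate` is a hypothetical placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def llm_generate(prompt: str) -> str:
    """Placeholder: call your LLM provider here and return its text response."""
    raise NotImplementedError("Wire this up to your chosen LLM API.")

def retrieve(query: str, sources: dict[str, str], k: int = 3) -> list[tuple[str, str]]:
    """Return the k source documents most similar to the query."""
    names = list(sources)
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform([sources[n] for n in names])
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    top = sorted(range(len(names)), key=lambda i: scores[i], reverse=True)[:k]
    return [(names[i], sources[names[i]]) for i in top]

def answer_with_citations(query: str, sources: dict[str, str]) -> str:
    """Ground the model's answer in retrieved sources and require citations."""
    passages = retrieve(query, sources)
    context = "\n\n".join(f"[{name}]\n{text}" for name, text in passages)
    return llm_generate(
        "Answer the question using ONLY the sources below. Cite the source name "
        "in brackets after each claim. If the sources do not contain the answer, "
        f"say so.\n\nSources:\n{context}\n\nQuestion: {query}"
    )
```

Because the prompt confines the model to the retrieved passages and demands bracketed citations, an analyst can trace each claim back to its source – the property that makes RAG effective against hallucinations.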

MORE: The AI Checklist: 10 best practices to ensure AI meets your company’s objectives

Power your Responsible AI approach with data and technology from LexisNexis®

LexisNexis offers a powerful combination of credible, licensed content and sophisticated technology that can power the effective implementation of Responsible AI. Its advantages include:

  • Credible data tailored for AI: As an established data provider for over 50 years, LexisNexis has extensive, long-standing – and in some cases, exclusive – content licensing agreements with publishers worldwide. We supply data to enable you to advance your goals while recognizing and respecting the intellectual property rights of our licensed partners.
  • A trustworthy provider committed to Responsible AI: We consider the real-world impact of our technology and data solutions on people by placing the advancement of the Rule of Law at the core of our business strategy and following the RELX Responsible AI Principles.

Download our Responsible AI toolkit to learn more about how your company can exploit AI’s opportunities and manage its risks with high-quality data.