Artificial Intelligence (AI) has already had a significant impact in academia and educational environments. It has introduced many advantages such as personalized learning experiences and chatbots to assist students and faculty with quick access to information. However, as AI tools continue to become widely available, they bring forward challenges and concerns that must urgently be addressed.
Imagine a student submits a research paper that appears well-written and well-researched. Upon closer inspection, however, the citations are fabricated, the arguments are surface-level, and the student cannot explain their own conclusions. This is the reality educators and librarians face as AI tools become widely available.
The challenge isn’t whether students will use AI—they already are. According to a global survey by the Digital Education Council, 86% of students already incorporate AI into their studies. The real question is: how can academic institutions lead the way in teaching responsible, high-quality AI use as a skill students will need in their future professions? Alternatively, do we try to prevent the use of AI altogether, likely leading students to covertly employ unreliable tools and unvetted sources that could impact research and academic integrity?
Looking beyond their academic careers, students who learn to use AI responsibly and critically will be better prepared for professional environments that increasingly integrate AI-driven technologies. Such tools are becoming the norm: 79% of professionals surveyed in the 2025 LexisNexis Future of Work Report said they’ve used genAI tools in recent months. It’s up to institutions to equip students with the skills to navigate our new AI-driven world.
Without proper oversight, AI can undermine the very foundations of academic integrity. Misinformation and hallucinations mislead students and compromise the credibility and quality of their research. Over-reliance on AI-generated writing threatens students’ unique voices, making their work indistinguishable from machine-generated content. Many educators also worry that students using AI as a shortcut will fail to develop the critical thinking and research literacy needed for academic and professional success.
There are also significant ethical concerns, as some AI tools collect and store user data to refine their models without transparency. This raises questions about data security and privacy, particularly in academic settings that hold large amounts of sensitive data, including student information and research findings. If institutions fail to act, they risk allowing AI to dictate the terms of academic research rather than setting responsible guidelines themselves. Additionally, the use of unlicensed sources in AI tools, especially those trained on copyrighted material, can expose institutions to legal risk and reputational damage.
Academic institutions must take the lead in shaping responsible AI use, starting with clear policies that define when and how students may use AI tools. Faculty can encourage responsible use by setting clear guidelines and communicating the do's and don'ts of acceptable scholarship. It may also help to build in structured steps that support quality work, such as requiring students to verify AI-generated citations and assigning activities that push them to engage with the source material itself rather than relying on summaries.
Educators can also require students to engage with primary, vetted sources, cross-check AI-generated information, and demonstrate original analysis in all assignments. In doing so, they help students come to view AI as a research aid that supports their academic work, rather than a replacement for critical thinking.
Additionally, universities that offer training programs and toolkits to help both faculty and students understand AI’s strengths and limitations are actively laying quality foundations for the future of research. Workshops and classroom discussions that explore how to fact-check AI-generated content, recognize misinformation, and critically engage with AI-assisted research build skills students need before entering the workforce.
AI should be a tool for enhancing learning, not a substitute for independent thought. While AI can help with academic tasks like streamlining research, extracting information from long documents, and summarizing complex topics, fundamentally it cannot replace the critical thinking that is essential to academic and professional success.
Students learning quality AI-integrated research skills are learning to apply a critical lens, evaluating the fit and form of evidence produced by LLMs. They become active participants in judging the validity, veracity, and value of information and evidence, while maneuvering through the opportunities and challenges AI presents. To help them on this journey, academic institutions can offer courses covering both traditional and AI-assisted research methods. Educators and librarians can underscore the value of AI-powered research as a starting point, while teaching the skills that keep it from dictating conclusions. By learning how to effectively integrate AI into their research process, students gain a critical skillset for their future professions, empowered to maximize both efficiency and quality.
Not all AI tools are created equal, but institutions have options that support the professional skills students need while protecting the integrity of their education and scholarship more broadly. The best of these tools provide transparent, verifiable citations, rely on credible academic publishers, and prioritize data privacy.
To align AI use with academic values, institutions can invest in research tools that support their broader objectives. One such solution is Nexis+ AI™, which combines transparent generative AI with the industry’s largest collection of news sources approved by publishers for Gen AI use*. With Nexis+ AI, students can kickstart their research by generating cited, multi-source answers to their research questions, extracting relevant insights from lengthy documents, and, if desired, producing first-draft outlines. By integrating tools like Nexis+ AI, universities can equip students with research skills that harness the benefits of AI while underscoring the importance of research integrity and transparency.
AI is here to stay, and academia has a choice: shape responsible AI use through skill development or struggle with its consequences. Institutions taking proactive steps—by defining policies, educating faculty and students, and adopting transparent AI solutions—will not only preserve academic integrity, but also produce graduates prepared to enter the workforce in an AI-powered world.
To learn more about Nexis+ AI, visit www.LexisNexis.com/NexisAI today.
*Based on reports December 2024