27 Feb 2023

Why You Need to Fact-Check AI-Generated Content for Misinformation

Content created by Artificial Intelligence proliferates at a remarkably rapid pace, and AI’s power to transform our informational landscape is immense. AI analysis is more than twice as fast as human discovery and gives us a wellspring of optimized data that lets us expedite decision making. However, if the dataset behind an AI is flawed, it can drown out fact with misinformation and disinformation, because its output spreads exponentially.

With the increased prevalence of disinformation on topics ranging from politics to finance and beyond, it is easy to feel overwhelmed. So how do you confidently separate fact from fiction when you are saturated with information?

In this article, we will dive deeper into the role of AI in the disinformation landscape, look at examples of misinformation spread by AI-generated articles and explain why those articles need to be fact-checked. Let's get started.

The role of AI in the disinformation landscape

Artificial Intelligence plays an outsized role in the disinformation landscape of our daily lives, affecting everything from personal beliefs to economic decisions. AI is incredible at quickly and deftly analyzing immense data sets and learning language, but we are dependent on those who create AI and teach it to do so in good faith.

Challenges in combatting AI-created misinformation

The race to get information out quickly can cause problems, especially when AI-generated content is published for speed rather than fact-checked. When the information proliferates fast enough to seem valid, articles are picked up by local news and radio. Even if broadcasters claim they are still verifying the information, the false version may be all an individual hears and shares. The damage can be done on a mass scale in minutes and is rarely reversible.

More problematic still, criminals and criminal enterprises can intentionally embed malicious code into articles to use in cybercrimes. AI can also create fake profiles presenting false information that can shift financial markets, influence foreign affairs and introduce false social movements. These operations sow confusion by mimicking the language of major news sources and by feeding tainted data into the data sets that shape machine-learned algorithms. The mimicry makes it hard to distinguish the source of the misinformation from the people who unwittingly shared it.

MORE: How misinformation spreads on social media--and how to combat it

Examples of misinformation spread by AI-generated articles

We have all done an internet search that yielded pages and pages of results, some from well-known entities and many from unknown sites and authors. Whether you’re searching for information on finances or world news, the validity of the articles you engage with is paramount.

When AI gets things wrong

Recently, many financial publications have used artificial intelligence to write articles about how financial services work. For example, a CNET AI-generated article explaining interest rates for high-yield savings accounts and car loans incorrectly implied that you would earn double your principal investment, rather than stating that your year-end account total would be your principal plus the accrued interest. The article lacked the nuance and specificity of language needed to properly illustrate how investment returns and loan interest accrue.
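To make the distinction concrete, here is a minimal sketch of how a year-end savings balance actually accrues. The figures (a $10,000 deposit at a 3% APY) are illustrative assumptions, not numbers taken from the CNET article:

```python
# Illustrative sketch only: the principal and APY below are assumptions,
# not figures from the CNET article.

def year_end_balance(principal: float, apy: float) -> float:
    """Year-end balance for a savings account quoted by APY:
    the principal plus one year of accrued interest."""
    return principal * (1 + apy)

principal = 10_000.00   # assumed deposit
apy = 0.03              # assumed 3% annual percentage yield

balance = year_end_balance(principal, apy)
interest_earned = balance - principal

print(f"Year-end balance: ${balance:,.2f}")          # principal plus interest
print(f"Interest earned:  ${interest_earned:,.2f}")  # a few hundred dollars, not another $10,000
```

At a typical savings rate, the interest earned is a small fraction of the deposit; the balance does not double in a year.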

The misinformation in the article might lead readers to base their loan decisions on false information, or to decline to work with the financial institution altogether. If articles containing misinformation are posted on a company’s website, clients may conclude that the institution is engaging in bad business practices, driving them away.

When AI is purposefully misleading

On the other end of the spectrum, AI-generated articles and photographs can be used to promote misinformation and publicize false public opinions. In 2019, The Associated Press reported on fake LinkedIn profiles of seemingly legitimate journalists, analysts and consultants. The profiles included AI-generated photographs and bios that added to their apparent authenticity and credibility, despite being fake personas operated by individuals with malicious intent.

These deceptive practices have an incredible power to influence public opinion and decision making. Someone may unknowingly form an opinion based on frequent falsehoods flooding their feeds, on subjects ranging from people and policy to finance and science. Articles published in bulk and rife with misinformation or disinformation can override fact if they are shared widely, causing people to make decisions that are not aligned with their best interests and needs.

MORE: The consequences of sharing misinformation

The need for fact-checking AI-generated articles

AI is not able to make complex judgments about the truthfulness of the statements and articles it creates. When the training data used to teach it is noisy (i.e., labeled incorrectly, whether intentionally or not) or the data set is too small, accuracy suffers.

AI needs a large enough set of data for the system to properly learn. When unverified and false information ends up in these data sets, the problematic material grows at an exponential rate.
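As a rough illustration of why noisy labels hurt, here is a minimal sketch, assuming scikit-learn and NumPy, that flips a fraction of the training labels on a synthetic dataset and checks how test accuracy suffers. The dataset, model and noise rates are arbitrary choices for demonstration, not a description of how any particular AI system is trained:

```python
# Minimal sketch (assumed setup, not from the article): measure how mislabeled
# training data degrades a simple classifier's accuracy on clean test data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# A synthetic, correctly labeled dataset stands in for "good" training data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_label_noise(noise_rate: float) -> float:
    """Flip a fraction of training labels, train, then score on clean test data."""
    rng = np.random.default_rng(0)
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise_rate
    y_noisy[flip] = 1 - y_noisy[flip]  # mislabeled ("noisy") examples
    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    return accuracy_score(y_test, model.predict(X_test))

for rate in (0.0, 0.2, 0.45):
    print(f"label noise {rate:.0%}: test accuracy {accuracy_with_label_noise(rate):.2f}")
```

Accuracy generally drops as more of the training labels are wrong, which is the same dynamic, at a much larger scale, that lets false information degrade the systems trained on it.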

AI doesn’t create the problem, but it does magnify existing issues and biases. L. Song Richardson of the UC Irvine School of Law writes that the issues we find in today’s algorithms mirror issues in the real world, for example, the lack of accountability for existing racial and gender biases in employment.

Challenges in fact-checking AI-generated articles

Through the implementation of intensive and multilayered fact-checking, we can change the disinformation landscape for the better. However, creating fact-checking on the scale of AI-generated articles is a massive challenge because:

  • the quantity of information to sift through is vast
  • the quantity of articles to fact-check is ever increasing
  • the propagation of these articles, including those with misinformation, is fast-flowing
  • large language models can produce natural-sounding articles at the click of a button, essentially automating the production of misinformation
  • deepfakes can make misinformation hard to detect

MORE: 4 causes of misinformation that block business success

Tools for fact-checking AI-generated articles

The race for fact-checkers to keep pace with the mounting information is already difficult and is only going to get harder, which is why you need the right research tools. Open web searches take time, and they aren't always verifiable. By contrast, a specialized research platform that includes global news data from a variety of sources allows you to cross-check your facts and make sure that the content you share is accurate.

Furthermore, with dedicated research platforms, you can set up alerts to keep track of popular topics and see visualizations of trending content, related topics and top sources. This removes the need to manually research every topic, allowing you to spend more time investigating new stories while feeling confident in the accuracy of the information you're sharing.