Citations and their formatting are an important part of providing reliable information to an audience. Writing gets its credibility from sourcing the claims it makes. Readers need to be able to fact-check a writer's sources and trace where the claims in a piece come from.
If you cannot take AI-cited sources at face value, and neither you nor even the AI's developers can determine where the information comes from, how can you assess the validity of what the AI is telling you? Here you should use the most important method of analysis available to you: lateral reading.
Lateral reading means leaving the AI output and consulting other sources to fact-check what the AI has provided in response to your prompt. You can think of this as "tabbed reading": moving laterally away from the AI's response to sources in other browser tabs, rather than proceeding "vertically" down the page and relying on the AI output alone.
This video from the University of Louisville Library tells you how to sort fact from fiction when online.
What does this process look like specifically with AI-based tools? Learn more in the sections below.
Although many responses produced by AI text generators are accurate, AI also frequently generates misinformation; often, the answers it produces are a mixture of truth and fiction. If you are using AI-generated text for research, you will need to verify its outputs. You can apply many of the skills you'd already use to fact-check and think critically about human-written sources, but some of them will have to change. For instance, we can't evaluate the credibility of the source or the author, as we usually would, because there isn't one. We have to use other methods instead, like lateral reading, which we'll explain below.
Remember, the AI is producing what it calculates to be the most likely series of words to answer your prompt. This does not mean it's giving you the definitive answer! When you choose to use AI, it's smart to treat it as a beginning, not an end. Being able to critically analyze the outputs AI gives you will be an increasingly crucial skill throughout your studies and your life after graduation.
Adapted from Wayne State University
As of summer 2024, a typical AI model does not assess whether the information it provides is correct. Its goal, when it receives a prompt, is to generate the most likely string of words to answer that prompt. Sometimes this results in a correct answer, and sometimes it doesn't – and the AI cannot tell the difference. It's up to you to make the distinction.
AI can be wrong in multiple ways:
University of Wisconsin-Parkside Library