AI and information literacy

Welcome

Generative artificial intelligence (AI) is a rapidly developing technology. AI tools like ChatGPT, Microsoft Bing, and others are neither good nor bad when it comes to finding and using information; they present a new way of interacting with information.

This guide is intended to help you critically engage with generative AI tools, with a focus on how they intersect with information literacy.

Students, make sure to check with your instructor before using AI tools for any course assignments, and be sure to cite any content they produce.

AI information sources

Where does the information come from?

One of the challenges with generative AI is that many companies and products do not clearly specify what text, data, images, and other material these tools use to generate their responses.

ChatGPT, for example, is "trained" on a large body of text, which allows it to generate text in response to a prompt. Some partial lists of the training dataset exist, and ChatGPT itself will provide a partial list when queried.

While some AI tools do not provide traceable references for the sources they use to produce their responses, others, such as Perplexity, do.

Much of the data used to "train" AI is harvested from the Internet using tools that search engines also deploy. This October 2023 conversation in Scientific American adds some clarity to this topic and raises questions about personal data and privacy. Those questions, in turn, raise concerns about bias, racism, sexism, and other "isms" that AI tools can replicate and exacerbate. See the AI, Bias and Equity page on this guide to explore these concerns further.

AI and "hallucinations"

One well-known quirk of AI tools is the "hallucination." Hallucinations happen when the AI departs from the data it has been trained on and... just makes stuff up.

AI hallucinations are not infrequent, and false information generated by AI tools can be difficult to spot. Several recent hallucinations have been especially well documented: Microsoft Bing's chatbot declared its love for a New York Times reporter in early 2023, and a few months later, a lawyer filing a motion in court cited non-existent cases fabricated by ChatGPT. Likely due to their relatable subject matter, these two examples were widely discussed in the media, and they point to the problematic relationship between AI and credibility.

Below are a few helpful resources for understanding these hallucinations:

What are AI hallucinations? IBM (no date) 

When chatbots hallucinate, New York Times, May 2023

We have to stop ignoring AI's hallucination problem, The Verge, May 2024

---

Like most things related to AI, this information can and will change. Different AI tools use different sources and datasets for "training," and tools like ChatGPT will update those datasets over time. When you select a tool, it is worth investigating where it is getting its information. And, like all information you use, it is crucial to check AI-generated information for credibility and accuracy. See the Using Generative AI page on this guide for more information.


Creative Commons License

Parts of this guide are adapted from:

Changes include rewriting and combining some passages and adding original material.