AI Literacy at PCC
- AI Literacy
- Prompt Writing & AI Tools
- Evaluating AI Outputs
- AI, Bias, and Equity
- More AI Ethical Issues
- Citing Generative AI
- Sample Syllabus Statements
Questions?
Creative Commons License
Parts of this guide are adapted from:
- AI, ChatGPT, and the Library Libguide by Amy Scheelke for Salt Lake Community College, licensed under CC BY-NC-SA 4.0
- Decoding Deception by Diana Daly and Kainan Jarrette, licensed under CC BY-NC-SA 4.0
- Generative Artificial Intelligence by The UC San Diego Library, licensed under CC BY 4.0
Changes include rewriting and combining some passages and adding original material.
Welcome
Generative AI develops and changes quickly! AI tools like ChatGPT and Gemini are neither good nor bad on their own: they present new ways to interact with information, and they offer new opportunities and challenges when it comes to learning more broadly.
This guide is intended to help you critically engage with generative AI tools and to be intentional about your choices: Are AI outputs credible and reliable? Does my AI use support my learning? Is my use in line with my ethics?
Students: check your class syllabus and communicate with your instructor before using AI tools for any course assignment. ALWAYS acknowledge your AI use in your work, and cite all AI-produced content.
AI information sources
Where does AI information come from?
The first thing to understand about generative AI is that it isn't thinking and answering your questions: it is guessing what a likely answer would be based on language patterns. One of the challenges with generative AI is that many companies and products do not clearly specify what text, data, images, etc., these tools use to generate their responses.
Much of the data used to "train" AI is harvested from the Internet, using tools that search engines also deploy. This October 2023 conversation in Scientific American adds some clarity to this topic, as well as raising questions about personal data and privacy. These questions also raise concerns about bias, racism, sexism, and other "isms" replicated and exacerbated by AI tools. See the AI, Bias, and Equity page in this guide to explore these concerns further.
AI and "hallucinations"

In particular, LLMs such as ChatGPT or Google Gemini are:
- Designed to predict patterns, not create a repository of truth
- Designed to generate text based on what sounds right, not what is right
- Designed to “fill in the blanks” rather than say they don’t know
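The bullet points above can be illustrated with a toy sketch. This is not how a real LLM works internally (real models use neural networks trained on vast datasets), but a simple frequency-based "next word" predictor shows the core idea: the output is whatever pattern appeared most often in the training data, with no notion of truth.

```python
# Toy illustration only -- NOT a real LLM. A bigram model that "predicts"
# the next word purely from frequency patterns in its training text.
# The training sentence and all names here are invented for this sketch.
from collections import Counter, defaultdict

training_text = "the sky is blue the sky is blue the sky is green"

# Count which word follows each word in the training data.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower seen in training, or None."""
    if word not in follows:
        return None  # a real LLM would still guess rather than say "I don't know"
    return follows[word].most_common(1)[0][0]

print(predict_next("is"))  # "blue" -- the most common pattern, not a fact-check
```

The model answers "blue" because that pattern is most frequent, even though "green" also appeared; it is pattern-matching, not verifying the color of the sky.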
Importantly, not only will AI fabricate information, but it will do so confidently and (often) convincingly. So much so that AI hallucinations have sometimes even made their way into mainstream publications.
Below are a couple of helpful resources for understanding these hallucinations:
- What are AI hallucinations? IBM (no date)
- AI hallucinations can’t be stopped — but these techniques can limit their damage, Nature, January 2025
AI Myths
Before we go further, let’s clear up three common myths about AI:

Myth 01: AI is Conscious.
It’s not. AI replicates patterns, not inner experience. Which isn’t to dismiss that the concept of AI raises some incredibly interesting philosophical questions. But it is to say that, at least for now, nobody has to worry about “sentient AI.”
Myth 02: AI is Objective and Infallible.
It’s not. As we’ll discuss more in this guide, AI absorbs all the biases and flaws of the data it’s trained on, and it prioritizes giving any answer over giving a correct answer.
Myth 03: AI is monolithic.
It’s not. AI is often talked about as though it’s one central force or system, but in reality it’s a broad concept used in tons of different and unique systems. This also means that not all AI provides the same benefits, has the same flaws, or poses the same risks.
Skepticism vs Panic
There are a lot of misconceptions and misinformation about AI, but we also want to acknowledge that the application and integration of AI raises some very serious and valid concerns. It’s healthy to be skeptical of how AI is used.
The issue arises when that skepticism turns to panic and fear. Fearing a technology makes us want to disengage from it, and we need skeptics to stay engaged. Otherwise, only the blindly optimistic are left to guide where this technology goes, and that usually doesn’t end well.
- Last Updated: Dec 12, 2025 2:13 PM
- URL: https://guides.pcc.edu/ai