AI and information literacy
Recommended reading
- AI Ethics by Mark Coeckelbergh. Call Number: Rock Creek Library 170 C64a 2020. This book, written by a philosopher of technology engaged in research and policy on the topic, moves away from science fiction fantasies and instead focuses on concrete ethical issues raised by AI and data science.
- How to Think about AI: A Guide for the Perplexed by Richard Susskind. For Susskind, balancing the benefits and threats of artificial intelligence -- saving humanity with and from AI -- is the defining challenge of our age.
- Is Artificial Intelligence Racist?: The Ethics of AI and the Future of Humanity by Arshin Adib-Moghaddam. Call Number: Sylvania Library 174 A35i 2023. This volume examines what data feeds into AI technology and how this data will shape the future of humanity.
AI Ethics
Powerful technologies always create ethical challenges. Educators, environmentalists, ethicists, and others are highlighting the challenges created by this technology, even as it evolves at breakneck speed (and with little transparency). Philosopher Muhammad Tuhin synthesizes some common concerns in the article 10 Ethical Issues in AI Everyone Should Know:
1. Bias and Discrimination: When Algorithms Reflect Our Flaws
2. Privacy: The Vanishing Wall Between Public and Private
3. Job Displacement: When Machines Take Our Work
4. Autonomous Weapons: When Machines Decide to Kill
5. Deepfakes and Misinformation: Trust in a Post-Truth Era
6. Accountability: Who’s to Blame When AI Goes Wrong?
7. Transparency and Explainability: Understanding the Black Box
8. Human Enhancement and AI: Redefining What It Means to Be Human
9. AI and Environmental Impact: The Hidden Cost of Intelligence
10. Existential Risks: Could AI Outthink Us All?
Source: 10 Ethical Issues in AI Everyone Should Know, April 29, 2025
Technology professor Richard Susskind summarizes the challenges as follows:
We cannot look away or plunge our heads into the sand. There is too much to win and lose; too much is at stake. We are creating massively capable systems that could bring untold benefits. However, those same technologies constitute, in a number of dimensions, a credible set of threats to our way of life and even to humanity and civilization. That is why I believe that balancing the benefits and threats of artificial intelligence – saving humanity with and from AI – is the defining challenge of our age. -- How to Think About AI, p. 101
Learn more from these articles and the sections below:
What is AI ethics? IBM (no date)
5 ethical questions about artificial intelligence: There will be consequences. Britannica Money, October 2025.
Ethics of Artificial Intelligence and Robotics, Stanford Encyclopedia of Philosophy, 2020
Cognitive Offloading
Using ChatGPT and other GenAI tools to minimize or avoid challenging tasks that require mental effort can lead to what researchers call cognitive offloading. Some results from a recent MIT study:
“EEG scans showed that those who leaned on AI exhibited significantly lower brain activity than their peers who worked independently. While their essays were polished and well-structured, they struggled when asked to recall what they had written due to less neural engagement.”
"[S]tudents who consistently used ChatGPT underperformed across neural, linguistic, and behavioral levels. When asked to write without assistance, their performance remained flat, suggesting a residual effect from earlier reliance."
Source: The Risks of Cognitive Offloading, Techopedia, 2025
Learn more from the following articles:
Cognitive Offloading: How AI is Quietly Eroding Our Critical Thinking, IEEE Computer Society, July 2025
AI Weakens Critical Thinking. This Is How to Rebuild It, Psychology Today, May 2025
The Risks of Cognitive Offloading: Are AI Models Making Us Dumb?, Techopedia, July 2025
Environmental Impacts
The AI revolution will have an impact on the environment and climate, although the net effect is difficult to determine. Enormous new data centers are being built worldwide to meet the computational demands of AI products, and major tech companies are backing away from their previous climate commitments. At the same time, some technologists are confident that AI will bring efficiencies and powerful new tools to efforts to combat climate change. Learn more from the following articles.
Explained: Generative AI’s Environmental Impact, Part 1, MIT News, January 2025
Responding to the Climate Impact of Generative AI, Part 2, MIT News, September 2025
A Data-Driven Look at AI’s Growing Environmental Footprint, UNU (United Nations University), August 2025
How AI Use Impacts the Environment and What You Can Do About It, World Economic Forum, June 2025
9 Ways AI Is Helping Tackle Climate Change, World Economic Forum, February 2024
Privacy
Tech giants (Google, Microsoft, Amazon, Meta, etc.) and countless lesser-known companies offer users access to their “free” GenAI tools. What do these companies do with the data users provide in their queries? Does this personal information become part of the training set for the tool? Given the power and expansive reach of GenAI tools, concerns about privacy and control of personal information are more relevant than ever. Learn more from the following articles.
What is Data Privacy and Why Should You Care?, National Cybersecurity Alliance, January 2025
Privacy in an AI Era: How Do We Protect Our Personal Information? Stanford University Human Centered Artificial Intelligence, March 2024
AI Assistants Privacy and Security Comparisons, Cybernews, September 2025
Copyright and Intellectual Property
GenAI tools are trained on enormous sets of data, including content that is protected by copyright law and may not be legally usable without the creator's consent. Who owns the copyright on a work created partly or entirely by GenAI? Should a creator be able to allow humans to use their work but forbid its use in AI training? How can creators know whether their content has been used without permission for GenAI training, and if so, what recourse do they have? Learn more about the controversies and complexities surrounding the protection of intellectual property in an age of AI from the following articles.
Copyright and Artificial Intelligence (links to three reports), U.S. Copyright Office, 2024-25
How Tech Giants Cut Corners to Harvest Data for A.I., New York Times, April 2024
What’s Yours Isn’t Mine: AI and Intellectual Property. Johns Hopkins Carey Business School, June 2024
A Global Phenomenon: The Creative Community’s Viral Outrage Against AI Theft, Copyright Alliance, March 2025
Misinformation, Scams, and Deepfakes
GenAI tools have turbocharged the creation and spread of false information and are being used effectively in scams, political manipulation, and other disinformation campaigns. At the same time, other AI tools are proving useful for detecting fake news and misinformation. Learn more from the following articles.
AI Chatbots Unable to Accurately Summarise News, BBC Finds, BBC, February 2025
AI Is Making Scams Smarter ... and More Dangerous, Consumer Affairs, May 2025
What Are Deepfakes? Everything to Know About These AI Image and Video Forgeries, CNET, May 2025
Digital Literacy for the Age of Deepfakes: Recognizing Misinformation in AI-Generated Media, NC Cooperative Extension, March 2025
AI-Powered Fact-Checking: Combating Misinformation in the Digital Age, Frontiers in Humanities and Social Sciences, April 2025