AI and information literacy

Artificial Intelligence and Bias

All AI contains bias. AI mirrors society, and society is biased. The datasets used to train AI contain bias because they were built by humans. AI is increasingly used in decision making, but how can those decisions be equitable if the AI's training comes from biased data? How can we move toward more ethical AI?

The articles and resources below offer various perspectives on the problem of bias embedded in today's AI tools, as well as some suggestions for reducing it.

Humans absorb bias from AI—and keep it after they stop using the algorithm, Scientific American, October 2023

Mitigating bias in artificial intelligence, Center for Equity, Gender and Leadership at the Haas School of Business, University of California, Berkeley, 2020

What do we do about the biases in AI? Harvard Business Review, October 2019

Eliminating bias in AI may be impossible – a computer scientist explains how to tame it instead, The Conversation, July 2023

Coded Bias, 2020 film, available on Netflix

AI was asked to create images of Black African docs treating white kids. How'd it go? NPR, October 2023

Videos

This short video gives a simple explanation of how machine learning acquires bias. It is from Google, so it is a bit of the fox guarding the henhouse, but it clearly defines the concept and the problem, as well as the measures Google says it has taken to alleviate bias.

This video from the London Interdisciplinary School, a British university, looks at images generated by AI tools and the types of bias that crop up. The video uses the concept of "representational bias" to show the harm done when gender, racial, and economic biases from the real world are replicated and amplified by AI tools, and it also points to possible regulatory solutions.

In this eye-opening TED talk, computer scientist Joy Buolamwini explores alarming racial gaps in facial recognition software. Buolamwini is also the founder of the Algorithmic Justice League.