AI Literacy at PCC

Artificial Intelligence and Bias

All AI contains bias. AI mirrors society, and society is biased. The datasets used to train AI contain bias because they were built by humans. AI is increasingly used in decision making, but how can those decisions be equitable if AI's training comes from biased data? How can we move toward more ethical AI? 

The articles and resources below offer various perspectives about the problems of bias embedded in today's AI tools as well as some suggestions for reducing bias. 

Videos

This short video gives a simple explanation for how machine learning acquires bias. It is from Google, so it is a bit of the fox guarding the henhouse, but it clearly defines the concept and the problem, as well as the measures Google feels it has taken to alleviate bias. 

 

This video from London Interdisciplinary School, a British university, looks at images generated by AI tools and the types of bias that crop up. The video uses the concept of "representational bias" to show the harm done when gender, racial, and economic biases from the real world are replicated and amplified by AI tools, and points to possible regulatory solutions. 

 

In this eye-opening TED talk, computer scientist Joy Buolamwini explores alarming racial gaps in facial recognition software. Buolamwini is also the founder of the Algorithmic Justice League.