What about AI that is biased?

Bias and discrimination[1] in current and future AI systems is one concern among a number of real issues, each of which deserves attention.

Bias in AI refers to systematic errors and distortions in the data and algorithms used to train AI systems that cause those systems to treat people inequitably. Note that this use of the term differs from the statistical sense of “bias”, which refers to a systematic failure to represent reality accurately, but not necessarily one that disadvantages particular groups.
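To make “treating people inequitably” concrete, fairness audits often compare a model’s outcomes across demographic groups. The sketch below computes a simple disparate impact ratio on invented loan-approval decisions; the data and the 0.8 threshold (the informal “four-fifths rule”) are illustrative, not drawn from any real system.

```python
# A minimal sketch of one common fairness check: comparing a model's
# positive-outcome rates across groups. All data here is invented.

def selection_rate(decisions: list[bool]) -> float:
    """Fraction of cases that received a positive decision."""
    return sum(decisions) / len(decisions)

# Hypothetical loan-approval decisions for two demographic groups.
group_a = [True, True, True, False, True, True, False, True]    # 75% approved
group_b = [True, False, False, True, False, False, False, True]  # 37.5% approved

ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50

# A ratio well below 1.0 (the informal "four-fifths rule" uses 0.8 as a
# threshold) suggests the system treats the groups inequitably, even if
# each individual decision looks reasonable in isolation.
```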

The forms of bias in AIs today that are most discussed by the media are the ones that lead to racism[2][3][4][5] and sexism[6][7]. Other biases[8][9][10][11][12] have also been identified. These biases in AI are often a reflection of which societies are most heavily represented in the training data (such as English-speaking communities), as well as the biases within those societies.
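One simple way to see this mechanism is to audit how groups are represented in a training corpus. The sketch below counts language representation in an invented corpus with a hypothetical `lang` field; real web-scale datasets show a similar skew toward English.

```python
from collections import Counter

# A toy audit of which languages dominate a training corpus.
# The corpus and its "lang" labels are invented for illustration.
corpus = [
    {"text": "...", "lang": "en"}, {"text": "...", "lang": "en"},
    {"text": "...", "lang": "en"}, {"text": "...", "lang": "hi"},
    {"text": "...", "lang": "en"}, {"text": "...", "lang": "sw"},
]

counts = Counter(doc["lang"] for doc in corpus)
for lang, n in counts.most_common():
    print(f"{lang}: {n / len(corpus):.0%}")
# en: 67%, hi: 17%, sw: 17% -- groups underrepresented in the data tend to
# get worse model performance, one mechanism behind the biases listed above.
```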

Work to reduce existential risk is sometimes presented as opposed to work addressing bias in current systems, but the AI safety community’s focus on existential risk doesn’t mean it’s unsympathetic to concerns about bias. Yoshua Bengio, who has worked on AI ethics for many years, asks rhetorically: “should we ignore future sea level rises from climate change because climate change is already causing droughts?” Humanity can address both classes of problems if it decides to prioritize them both. Furthermore, some research areas, such as interpretability, are useful toward both goals. On the governance side, there is some overlap between the techniques and institutions needed to make AI fair and those needed to make AI safe.

That being said, we choose to concentrate on existential risk because we perceive the dangers of superintelligence to be both imminent and of the greatest importance.


  1. These fit within the larger concepts of AI Ethics and FATE (fairness, accountability, transparency, ethics).

  2. In 2015, Google Photos tagged Black people as gorillas, and the problem proved fundamentally hard to fix.

  3. Racial discrimination appears in face recognition technology.

  4. AI-assisted predictive policing and bail setting exhibit racial biases (among others).

  5. Facial recognition is more prone to false positives on Black faces, leading to wrongful arrests.

  6. Most of the data used to train AIs comes from men’s lives.

  7. Women with credit scores as good as men’s have been offered lower credit card limits.

  8. AI-assisted hiring practices might discriminate in problematic ways.

  9. Data on older adults is sometimes excluded from training datasets for health-related AI applications.

  10. AI correctly tags Western bridal dresses as such, but fails to recognize brides from other cultures.

  11. AI-powered speech and movement tracking software used in hiring could be prejudiced against disabled people.

  12. AI-powered gender and sexual orientation recognition might discriminate against people who don’t fit the gender binary, or might be used to actively discriminate against LGBTQ people.


