What about AI that is biased?
One harmful effect of AI is that it can lead to bias and discrimination.1
Bias in AI2
The forms of bias in AIs today that are most discussed by the media are the ones that lead to racism,3 4 5 6 sexism,7 8 9 ageism,10 as well as discrimination based on culture,11 disability,12 and gender identity or sexual orientation.13
Work to reduce existential risk is sometimes presented as opposed to work addressing bias in current systems, but the AI safety community’s focus on existential risk doesn’t mean it’s unsympathetic to concerns about bias. Yoshua Bengio, an AI researcher who won the Turing Award in 2018 and is the scientific director at MILA, has voiced concern about both. Some research directions, such as interpretability, a research area that aims to make machine learning systems easier for humans to understand by ‘looking inside’ a model to see how it works and why it produces a specific output, could help with both sets of concerns.
That being said, we choose to concentrate on existential risk because we perceive the dangers of superintelligence, an AI with cognitive abilities far greater than those of humans in a wide range of important domains, to be especially severe.
These fit within the larger concepts of AI Ethics and FATE (fairness, accountability, transparency, ethics). ↩︎
This use of the term bias is different from the one used in statistics, which refers to a systematic failure to correctly represent reality, but not necessarily in a way that disadvantages particular groups; the statistical definition is written out after these notes. ↩︎
In 2015, Google Photos tagged Black people as gorillas, and the problem proved fundamentally hard to fix. ↩︎
Racial discrimination appears in face recognition technology. ↩︎
AI-assisted predictive policing and bail setting exhibit racial biases (among others). ↩︎
Facial recognition is more prone to false positives with Black faces, leading to wrongful arrests. ↩︎
Most of the data used to train AIs comes from men's lives. ↩︎
Women can be given lower credit card limits than men with comparable credit scores. ↩︎
AI-assisted hiring practices might discriminate in problematic ways. ↩︎
Data on older adults is sometimes excluded from training datasets for health-related AI applications. ↩︎
AI correctly tags Western bridal dresses as such, but not those of brides from other cultures. ↩︎
AI-powered speech and movement tracking software used in hiring could be prejudiced against disabled people. ↩︎
AI-powered gender and sexual orientation recognition might discriminate against people who don’t fit well into the gender binary, or might be used to actively discriminate against LGBTQ people. ↩︎
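For contrast with the social sense of bias discussed above, here is the statistical definition that note 2 alludes to, written out as a standard textbook formula (this is general background, not a formula taken from the article itself):

```latex
\documentclass{article}
\usepackage{amsmath}  % for \operatorname
\usepackage{amssymb}  % for \mathbb
\begin{document}
% Statistical bias of an estimator \hat{\theta} of a true parameter \theta:
% the expected systematic deviation from the true value.
\[
  \operatorname{Bias}\bigl(\hat{\theta}\bigr)
    = \mathbb{E}\bigl[\hat{\theta}\bigr] - \theta
\]
% An estimator is called unbiased when $\mathbb{E}[\hat{\theta}] = \theta$.
% Note that this definition says nothing about whether errors fall
% disproportionately on particular groups of people, which is the
% sense of ``bias'' used in the article above.
\end{document}
```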