What about AI that is biased?
Bias and discrimination1
Bias in AI refers to systematic errors and distortions in the data and algorithms used to train AI systems that cause those systems to treat people inequitably. Note that this use of the term bias is different from the one used in statistics, which refers to a failure to correctly represent reality but not necessarily in a way that particularly affects certain groups.
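One common way this inequitable treatment shows up in practice is as unequal error rates across demographic groups, for example a higher false-positive rate for one group than another. The sketch below, using invented group names and toy audit data, shows how such a disparity can be measured; it is an illustration of the concept, not a tool used by any system discussed here.

```python
# A minimal sketch (hypothetical data) of auditing a classifier for one
# common form of bias: unequal false-positive rates across groups.
from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: iterable of (group, true_label, predicted_label), labels 0/1.
    Returns {group: FPR}, where FPR = FP / (FP + TN) over true negatives."""
    fp = defaultdict(int)
    tn = defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 0:          # only true negatives count toward FPR
            if y_pred == 1:
                fp[group] += 1
            else:
                tn[group] += 1
    return {g: fp[g] / (fp[g] + tn[g]) for g in set(fp) | set(tn)}

# Toy audit data: (group, ground truth, model prediction)
audit = [
    ("A", 0, 0), ("A", 0, 0), ("A", 0, 1), ("A", 0, 0),  # group A: 1 FP / 4
    ("B", 0, 1), ("B", 0, 1), ("B", 0, 0), ("B", 0, 1),  # group B: 3 FP / 4
]

rates = false_positive_rate_by_group(audit)
print(rates)  # group B is wrongly flagged three times as often as group A
```

A model can be accurate on average while still producing this kind of disparity, which is why fairness audits compare metrics per group rather than overall.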
The forms of bias in AIs today that are most discussed by the media are the ones that lead to racism2 3 4 5, sexism6 7, discriminatory hiring8, ageism9, cultural bias10, and discrimination against disabled people11 and LGBTQ people12.
Work to reduce existential risk is sometimes presented as opposed to work addressing bias in current systems, but the AI safety field (research on how to prevent risks from advanced artificial intelligence) overlaps with it in places. For example, progress in interpretability (research that aims to make machine learning systems easier for humans to understand) could help both with identifying bias in current models and with keeping future systems safe.
That being said, we choose to concentrate on existential risk because we perceive the dangers of superintelligence (an AI with cognitive abilities far greater than those of humans in a wide range of important domains) as especially severe.
These fit within the larger concepts of AI Ethics and FATE (fairness, accountability, transparency, ethics). ↩︎
In 2015, Google Photos tagged Black people as gorillas, and the problem proved fundamentally hard to fix. ↩︎
Racial discrimination appears in facial recognition technology. ↩︎
AI-assisted predictive policing and bail setting exhibit racial biases (among others). ↩︎
Facial recognition is more prone to false positives on Black faces, leading to wrongful arrests. ↩︎
Most of the data used to train AIs comes from men's lives. ↩︎
Women can be given lower credit card limits than men with similar or better credit scores. ↩︎
AI-assisted hiring practices might discriminate in problematic ways. ↩︎
Data on older adults is sometimes excluded in training datasets for health-related AI applications. ↩︎
AI correctly tags Western bridal dresses as such, but fails to recognize brides from other cultures. ↩︎
AI-powered speech and movement tracking software used in hiring could be prejudiced against disabled people. ↩︎
AI-powered gender and sexual orientation recognition might discriminate against people who don't fit well in the gender binary, or might be used to actively discriminate against LGBTQ people. ↩︎