Are Google, OpenAI, etc. aware of the risk?

The major AI companies are thinking about existential risk from AI. OpenAI CEO Sam Altman has said: “And the bad case — and I think this is important to say — is like lights out for all of us.” Anthropic was founded by former OpenAI employees specifically to focus on safety. Google DeepMind has a Safety Research division that focuses on misalignment. These organizations have also collaborated on safety research.1

Leaders and employees at these companies have signed a statement saying: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

However, as of 2024, these organizations still devote most of their effort to capabilities research rather than safety.

  1. The paper Concrete Problems in AI Safety was a collaboration between researchers at Google Brain (now Google DeepMind), Stanford, Berkeley, and OpenAI. ↩︎


