Are Google, OpenAI, etc. aware of the risk?
The major AI companies are thinking about existential risk from AI. OpenAI CEO Sam Altman has said: “And the bad case — and I think this is important to say — is like lights out for all of us.” Anthropic was founded by former OpenAI employees specifically to focus on safety. Google DeepMind has a Safety Research division that concentrates on misalignment. These organizations have also collaborated on safety research.[^1]
However, as of 2024, the majority of these organizations’ effort goes toward capabilities research rather than safety.
Further reading:
- AI Lab Watch ranks labs on various criteria related to AI safety.
- SaferAI produces a similar ranking.
[^1]: The paper *Concrete Problems in AI Safety* was a collaboration between researchers at Google Brain (now Google DeepMind), Stanford, Berkeley, and OpenAI.