How might AGI kill people?

The general problem with predicting the exact ways an AGI could outmaneuver humans is that we can predict that an extremely smart and capable agent would be able to achieve its goals (since that's what smart and capable means), but we can't predict exactly how it will achieve those goals. This is because it is likely to come up with plans, and exploit factors, that we didn't even consider. This Vingean uncertainty is worth keeping in mind when discussing ways an AGI could win in a confrontation with humans: you might think you have a counter-move for everything, and still reliably end up losing.

With this in mind, an AGI that wanted to harm humans could:

  • Hack our weapons systems and turn them against us

  • Create bioweapons and either deploy them autonomously or distribute them to rival nations or terrorist organizations

  • Socially manipulate humans in ways that incite conflicts that escalate into war

  • Hack into internet-connected infrastructure and cause car, plane, or ship crashes

This list is of course not exhaustive. For many of the scenarios listed above, the AGI would need to be connected to the internet, but keeping it disconnected, i.e. "boxing an AGI", might be harder than it seems.

Further reading: