Why can’t we just “put the AI in a box” so that it can’t influence the outside world?
One strategy for ensuring the safety of a powerful AI is to contain it in a secure, isolated software environment, i.e., to "box" it. A boxed AI system would be safer than one allowed to interact freely with the outside world, but even a boxed AI might not be safe enough.
One reason for doubt: in informal experiments, humans playing the role of a boxed AI have convinced other humans to let them out of the box. If a human can talk their way out, a far more capable AI could presumably do at least as well. Spooky.