Damaged's Answer to Ryan Paton's question on Intro to AI Safety

From Stampy's Wiki

Answer text

You would run into the whack-a-mole problem: whenever you make a hard "don't ever do X" rule, you will wind up having to make dozens of exceptions as the AI repeatedly works around that rule. For example:

* Make a medical research AI and program it to "not harm living creatures."
* The AI halts, since any action it takes will harm at least one single-celled organism.
* You make an exception for anything under a few hundred cells.
* The AI creates a new medication with a side effect of killing gut flora; anyone who takes it dies of malnutrition.
* You make an exception to the exception for things living inside humans.
* The AI halts when trying to make a de-worming drug, because it can no longer harm things living inside humans.
* Etc.
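The exception-stacking above can be sketched as a toy rule-checker. This is a minimal illustration, not anything from the original answer: the function name, the cell-count cutoff, and the flags are all hypothetical, chosen only to show how each patch fixes one failure while creating the next.

```python
def violates_rule(organism_cells, lives_inside_human):
    """Hard rule 'don't harm living creatures', with patches bolted on.

    organism_cells: cell count of the organism the action would harm
    lives_inside_human: whether that organism lives inside a human
    Returns True if harming it is forbidden under the patched rule.
    """
    if organism_cells < 300:       # patch 1: ignore tiny organisms,
                                   # so the AI can act at all
        if lives_inside_human:     # patch 2: ...unless they live inside
                                   # a human (blocks the gut-flora drug)
            return True
        return False
    return True                    # the original hard rule


# The original rule blocks every action; patch 1 unblocks the AI but lets
# the gut-flora-killing drug through; patch 2 blocks that drug but also
# blocks the de-worming drug. Each patch spawns the next exception.
assert violates_rule(10**13, False)    # harming a human: blocked (good)
assert not violates_rule(100, False)   # a soil bacterium: allowed
assert violates_rule(100, True)        # gut flora blocked (good), but so
                                       # are the worms a de-wormer targets
```

Each `if` branch corresponds to one round of whack-a-mole; a real system would accumulate many more such branches, each with its own unforeseen edge case.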