Open Problem
6 pages tagged “Open Problem”
Can you give an AI a goal which involves “minimally impacting the world”?
Can we get AGI by scaling up architectures similar to current ones, or are we missing key insights?
Would AI alignment be hard with deep learning?
What would a good solution to AI alignment look like?
What technical problems is MIRI working on?
What is Eliciting Latent Knowledge (ELK)?