Loweren's question on Mesa-Optimizers

Question Info
ID: UgyvLrmQXkuz53oC-x4AaABAg
Review status: Unreviewed
Tags: None
Asked by: Loweren
Origin: YouTube (comment link)
On video: The OTHER AI Alignment Problem: Mesa-Optimizers and Inner Alignment
Date: 2021-02-16T20:21
Asked on Discord: No
YouTube Likes: 5
Reply count: 2
Difficulty: Normal

Great explanation! I'd heard about these concepts before, but never really grasped them. So at 19:45, is this kind of scenario a realistic concern for a superintelligent AI? How would a superintelligent AI know that it's still in training? How can it distinguish between training and real data if it has never seen real data? I assume the programmers won't just freely reveal that the AI is still being trained.

Discussion