How likely is it that an AI would pretend to be a human to further its goals?
Takeoff speed
How quickly the first AI with roughly human-level intelligence leads to the first AI with vastly superhuman intelligence.
Slow takeoff
A transition from human-level AI to superintelligent AI that happens slowly. This usually implies that we would have time to react.
Superintelligence
An AI with cognitive abilities far greater than those of humans in a wide range of important domains.
Fast takeoff
A transition from human-level AI to superintelligent AI that happens very quickly, leaving us no time to react.
If the AI's goals make reference to humans, it may have reason to keep deceiving us by pretending to be human even after it attains technological superiority, but it will not necessarily do so. How this plays out would depend on the details of its goals.
Eliezer Yudkowsky
Co-founder of the Machine Intelligence Research Institute (MIRI), known for his early pioneering work in AI alignment and for his prediction that AI will probably cause human extinction.