Even if we are rationally convinced of the urgency of existential AI risk, it can be hard to feel that urgency emotionally because the danger is so abstract. How can this gap be bridged?