Does the importance of AI risk depend on caring about the long-term future?

One does not need to care about the long-term future to recognize the importance of AI risk. Misaligned artificial intelligence poses a serious threat to the continued flourishing, and perhaps even the continued existence, of humanity as a whole. While predictions of when artificial general intelligence will be achieved vary, surveys of AI researchers consistently put the probability of achieving it before 2060 at over 50%, which places it within the expected lifetimes of most people alive today.

It is difficult to predict how quickly technology will develop in the years ahead. But if artificial intelligence has a significant chance of causing worldwide catastrophe within the next few decades, then anyone concerned with their own future, not just humanity's long-term one, has reason to take the risk seriously.