Imagine that you are a junior in high school who expects to attend college. At that point in your life you have opinions related to frequent personal choices, like whether blue jeans feel comfortable or whether you prefer vanilla to chocolate ice cream. And you have opinions on norms in your social world, like how much money it is okay to borrow from a friend, how late one should stay at a party, or what excuses are acceptable for breaking up with a boyfriend or girlfriend. And you know you will soon need opinions on imminent major life choices, such as what college to attend, what major to pick, and whether to live on campus.
But at that point in life you will have less need of opinions on what classes to take as a college senior, and where to live then. You know you can wait and learn more before making such decisions. And you have even less need of opinions on borrowing money, staying at parties, or breaking up as a college senior. Social norms on those choices will come from your future communities, which may not yet have even settled on such things.
In general, you should expect to have more sensible and stable opinions related to choices you actually make often, and less coherent and useful opinions regarding choices you will make in the future, after you learn many new things. You should have less coherent opinions on how your future communities will evaluate the morality and social acceptability of your future choices. And your opinions on collective choices, such as via government, should be even less reliable, as your incentives to get those right are even weaker.
All of this suggests that you be wary of simply asking your intuition for opinions about what you or anyone else should do in strange distant futures. Especially regarding moral and collective choices. Your intuition may dutifully generate such opinions, but they’ll probably depend a lot on how the questions were framed, and the context in which questions were asked. For more reliable opinions, try instead to chip away at such topics.
However, this context-dependence is gold to those who seek to influence others’ opinions. Warriors attack where an enemy is weak. When seeking to convert others to a point of view, you can have only limited influence on topics where they have accepted a particular framing, and have incentives to be careful. But you can have more influence over how a new topic is framed, and when there are many new topics you can emphasize the few where your preferred framing helps most.
So legal advocates want to control how courts pick cases to review and the new precedents they set. Political advocates want to influence which news stories get popular and how those stories are framed. Political advocates also seek to influence the choices and interpretations of cultural icons like songs and movies, because, being less constrained by facts, such things are more open to framing.
As with the example above of future college choices, distant future choices are less thoughtful or stable, and thus more subject to selection and framing effects. Future moral choices are even less stable, and more related to political positions that advocates want to push. And future moral choices expressed via culture like movies are even more flexible, and thus more useful. So newly-discussed culturally-expressed distant future collective moral choices create a perfect storm of random context-dependent unreliable opinions, and thus are ideal for advocacy influence, at least when you can get people to pay attention to them.
Of course most people are usually reluctant to think much about distant future choices, including moral and collective ones. Which greatly limits the value of such topics to advocates. But a few choices related to distant futures have engaged wider audiences, such as climate change and, recently, AI risk. And political advocates do seem quite eager to influence such topics. They seem to select such topics from a far larger set of similarly important issues, in part for their potency at pushing common political positions. The science-fiction truism really does seem to apply: most talk on the distant future is really indirect talk on our world today.
Of course the future really will happen eventually, and we should want to consider choices today that importantly influence that future. Some of those choices will have moral and collective aspects, some can be expressed via culture like movies, and at some point discussion of such issues will be new. But as with big hard problems in general, it is probably better to chip away at such problems.
That is: Anchor your thoughts to reality rather than to fiction. Make sure you have a grip on current and past behavior before looking at related future behavior. Try to stick with analyzing facts for longer before being forced to make value choices. Think about amoral and decentralized choices carefully before considering moral and collective ones. Avoid feeling pressured to jump to strong conclusions on recently popular topics. Prefer robust and reliable methods even when they are less easy and direct. Mostly the distant future doesn’t need action today – decisions will wait a bit for us to think more carefully.
C.S. Lewis had much to say about similar ideas in his essay "The Abolition of Man." He points out that, when we reach the point of being able to create a completely "engineered" society, the people doing the creating will not be the products of such a society, and thus it is just as likely to embody our worst failings rather than our highest aspirations.
That is: be skeptical of scenarios that express your uniqueness by being especially attractive to you and few others.