7 Comments

C.S. Lewis had much to say about similar ideas in his essay "The Abolition of Man." He points out that, by the time we are able to create a completely "engineered" society, the people doing the creating will not themselves be products of such a society, and so it is just as likely to embody our worst failings as our highest aspirations.

That is... be skeptical of scenarios that express your uniqueness by being especially attractive to you and few others.

"And of course if you're worried about AI runaway, making irreversible decisions now seems extremely dangerous, since we don't know much yet."

I agree with this. In general I am disturbed by the willingness of many futurist types to pursue actions likely to have enormous consequences, either hugely positive or hugely negative, when which of the two obtains depends on questions we can't yet answer with any confidence. In such a situation, acting prematurely on weak, tentative conclusions is exactly the wrong move. If you absolutely must choose now, choose on your best current understanding; but for God's sake seek more information first if it's even slightly possible to do so.

I bet that many people who tell stories about the future don't set out to make allegorical or moralistic tales about the present. It just comes out that way because of patterns in how we think. Our future-directed cognitive mechanisms evolved for one reason only: they helped us make better present-day decisions. Imagine it were physically possible for meatbags like us to make genuinely accurate predictions about the future, if only we spent a few generations selecting for the trait. But suppose this mechanism would only teach us truths with no direct bearing on the choices we have to make that day. I can't see how a selection pressure for such a trait would ever develop. Maybe that's why we don't have it. All our future-directed thought machinery is about hopes, fears, hedges, rewards, and so on. It evolved to make us better at deferring gratification, and generally at acting in the interest of our future *selves* and direct kin. It's no wonder that most human efforts at thinking about the future eventually collapse into "the future could get this bad, or the future could be this good, and we must steer *now* toward the good." That's the only kind of future-directed thinking we were built to do.

It appears to show great distrust of future people.

Given that moral standards tend to rise over time (see Pinker's _The Better Angels of Our Nature_), distrusting the morals of future people may be the exact opposite of what someone worried about future morals ought to be doing.

And of course if you're worried about AI runaway, making irreversible decisions now seems extremely dangerous, since we don't know much yet.

Unfortunately, there is too often willful ignorance whose objective is to avoid facts that may be unpleasant or have unpleasant implications for other beliefs. Too often avoidance is not about reserving judgment but about not having to revise erroneous prior judgments: ignorance is invoked so we can assume defaults favorable to our biases. This commonly worsens as we age and become so entrenched in our habits and opinions that we treat them as facts. Decisions about the future are uncertain and difficult, but they are often the most important in the long term, and decisions not to decide are decisions too.

What do you think the recent talk of wanting to create a universe-wide superintelligent AI to lock in current human values indirectly says about the world today?
