
Perhaps we should consider the option of a "lock-in" for dynamic instability? (Assuming one exists, can be verified to some degree, and can also be verified to have vulnerabilities allowing for its dissolution.)


This probably bears on the topic:

Quantifying Future Time Orientation

http://akarlin.com/2012/08/...

http://akarlin.com/wp-conte...


I did not mean "competition to control" to imply analysis via a causal model of reality.


You say "So far history can be seen as a fierce competition by various kinds of units (including organisms, genes, and cultures) to control the distant future". I find it hard to agree with this statement. Natural evolution is a peculiar optimization process - a culling of designs generated by random mutations, which optimizes inclusive fitness of the individual replicators. I think the phrase "competition to control" implies the existence of a goal embedded in competing replicators. Goals in turn imply the existence of causative models of reality. Even simple organisms embody some models of aspects of reality, and with increasing sophistication the models become more comprehensive and far reaching. But evolution itself does not have an independent model of reality, and most organisms have only models that deal with short-term future, where "short-term" means a time-frame relevant to the organism's inclusive fitness. Evolution does not usually generate minds possessing reality models that reach far beyond this time-frame, for the reason that modeling the far future is complicated. Complicated biological processes require precisely selected genetic traits. The local environment that performs culling of organisms usually does not contain the information about the future that would be needed to precisely select traits relevant to modeling the distant future. In other words, evolution is myopic and by extension most evolved organisms do not target the distant future either. (In case you wonder - sequoias exist but they still embody only information relevant to their inclusive fitness, however many thousands of years may be involved here)

There are some additional caveats to the above discussion, relevant to the evolution of lifespan limitations under some plausible conditions, but these are beyond the scope of this post.

Humans are exceptional - we are fundamentally different from other evolved mechanisms and from evolution itself in that we build world-models that extend vastly beyond the scope of our inclusive fitness. It is only with the coming of scientific humanity that truly controlling the distant future is even in principle possible. But the ancient adaptations that evolved to optimize inclusive fitness are hard to cast off. That some humans have the ability to look into the far future does not mean that a lot of future-oriented action actually takes place.

So this is why I disagree that competition to control the distant future has been fierce. I think it in fact never happened.

This said, I agree that the time for long views is coming. Humans, and soon AIs, will have comprehensive causative models of reality spanning billions of years. Those who choose the long-term view and have the superior model of reality will be able to out-maneuver the mayflies every step of the way, at least in habitats where short-term breeding is not the absolutely dominant strategy for survival. In those favorable habitats sequoias will thrive and weeds will fail to take root.

My guess is that the long-term entities will be produced by psychological self-modification of humans or by accidental release of self-modifying AI. Humans who choose to optimize very long term goals will either modify themselves to be long-lived or will exert strong control of the goal systems of their offspring. They will have to compete for survival against mayflies (unmodified humans), and weeds (humans who modify themselves to optimize short-term acquisition of resources for replication or other uses). In some habitats weeds will be impossible to displace. In other habitats enormous superhuman organisms will be possible. Mayflies will flitter here and there.

Maybe the future belongs to immortal sleepers, awaiting the coldest future where the amount of computations doable with finite energy rises asymptotically. In the intervening eons these old gods will rise only every few hundred million years, survey their demesne and eradicate mayfly infestations, ever bubbling up from primordial oozes.
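The physics behind that "coldest future" intuition is Landauer's bound: erasing one bit of information costs at least kT ln 2 of free energy, so a fixed energy budget buys more irreversible computation the colder the environment gets. A back-of-the-envelope Python sketch, with the scenario temperatures purely illustrative:

```python
# Landauer's bound: erasing one bit costs at least k*T*ln(2) joules, so the
# bits erasable per joule grow without limit as temperature falls.
# The temperatures below are illustrative scenarios, not predictions.
import math

k = 1.380649e-23  # Boltzmann constant, J/K

def bits_per_joule(temperature_kelvin):
    return 1.0 / (k * temperature_kelvin * math.log(2))

for label, T in [("room temperature", 300.0),
                 ("cosmic microwave background today", 2.7),
                 ("hypothetical far-future cold universe", 1e-20)]:
    print(f"{label} ({T:g} K): {bits_per_joule(T):.3e} bits erasable per joule")
```

The bound diverges as T approaches zero, which is the sense in which the computation doable with finite energy "rises asymptotically" in a sufficiently cold future (setting aside reversible computing and other complications).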

These will be very exciting trillions of years.


"Some will object to the creation of powerful entities whose preferences disagree with those of familiar humans alive at the time." Because of the near/far dichotomy, each human being has conflicting preferences. The artificial beings we would be creating would be far-only creatures, lacking our near-type motivations. This would make them different from existing human beings, but in a way that we, when in far mode, desire.


If (1) you devise an improvement in the operation of long-term prediction markets, so that they work well, and if (2) then people generally appreciate your accomplishment, you will be a great benefactor of mankind. Unfortunately, (1) is probably unlikely and (2) is probably even less likely (given (1)).
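For concreteness, one standard design for such markets is a subsidized automated market maker such as the logarithmic market scoring rule (LMSR). Here is a minimal Python sketch; the liquidity parameter b and the example trade are purely illustrative, and none of this touches the genuinely hard part of (1), which is keeping incentives and participation intact over decades-long horizons:

```python
# Minimal logarithmic market scoring rule (LMSR) market maker, a standard
# mechanism for subsidized prediction markets. The liquidity parameter b
# and the example trade below are purely illustrative.
import math

class LMSRMarket:
    def __init__(self, n_outcomes, b=100.0):
        self.b = b                   # liquidity: larger b means prices move less per trade
        self.q = [0.0] * n_outcomes  # net shares sold per outcome

    def _cost(self, q):
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, i):
        # Instantaneous price of outcome i; readable as its market probability.
        denom = sum(math.exp(x / self.b) for x in self.q)
        return math.exp(self.q[i] / self.b) / denom

    def buy(self, i, shares):
        # The trader pays the increase in the market maker's cost function.
        before = self._cost(self.q)
        self.q[i] += shares
        return self._cost(self.q) - before

market = LMSRMarket(n_outcomes=2)
print([round(market.price(i), 3) for i in range(2)])  # [0.5, 0.5]
paid = market.buy(0, 50)                              # bet 50 shares on outcome 0
print(round(paid, 2), [round(market.price(i), 3) for i in range(2)])
```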


You should see the world as already full of mindsets contending to influence future minds. It isn't enough just to add a new one; you have to ask what competitive advantages yours will have.


One way to affect the future is to launch intellectual vectors that shape future minds. For example, if we create a rationality mindset package that anyone can adopt and that has a chance of increasing the rationality of future humans, it would probably leave a net positive effect on the future. What is unique about these vectors is that they can spread from mind to mind, making them self-sustaining, and others can improve them over time (they share some features of ideologies).
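Whether such a vector is self-sustaining reduces to a familiar threshold condition: it persists only if each adopter passes it on to more than one new mind on average. A toy branching-process sketch in Python, with all transmission numbers invented for illustration:

```python
# Toy branching process for mind-to-mind spread (all numbers invented for
# illustration): a vector persists only when the mean number of new
# adopters per adopter, R, exceeds 1.
import random

def transmissions(R):
    # Two Bernoulli(R/2) draws, so the mean number of transmissions is R.
    return sum(1 for _ in range(2) if random.random() < R / 2)

def simulate(R, generations=20, seed_adopters=10):
    adopters = seed_adopters
    for _ in range(generations):
        adopters = sum(transmissions(R) for _ in range(adopters))
        if adopters == 0:
            break
    return adopters

random.seed(0)
print("R=0.8:", simulate(0.8))  # subcritical: the vector typically dies out
print("R=1.5:", simulate(1.5))  # supercritical: it becomes self-sustaining
```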


This thought came to me as I read your post: One way to incentivize long-term thinking is to reduce short-term risk. Then, only long-term risk would constrain planning in a binding way.

Counterintuitively, solving all or most short-range problems may be a prerequisite for dedicated long-range problem-solving.

And I might be biased, but it seems to me that economic growth is the best way to solve most short-range problems.


I don't see evidence that non-capitalism takes a longer view effectively.


Robin, any thoughts on the putative tendency of capitalism to focus on the short term, to the long-term detriment of all?


Everything you just said can be true, but this implies nothing about the degree to which plans would be "locked in". The plans would change when the plan-makers want them changed.


He begins by talking about "various kinds of units ... to control the distant future"; look at the way he uses the word "control" in his hypothetical slow-ems scenario. Notice also how he admits that long-term plan implementers would be "powerful entities" with priorities that don't align with those of the people over whom they have power. It sure looks to me like he's rooting for plan-makers with the power to coerce future people to stay on task.


lump, why do you think that Robin advocated lock-in? If I understand him, he *is* considering ways for our plans to be made with more consideration for the long-term future, but that wouldn't require them to be locked in (i.e. it wouldn't prevent them from being revised at any time in light of new information).


Better human simulacra may help significantly with individual incentives by communicating more vividly the impact of modeled scenarios. Imagine the third family dinner in a row including the same impoverished Yemeni, or dinner with a random family from a century into the simulated future of one's own region, or a sequence of discussions with one's now-aged granddaughter about her grandchildren's prospects under various Monte Carlo scenarios.


I think we are already there. Long-term visions and goals exist, and you have given many examples. Figuring out how to achieve them is a strictly evolutionary process that would be hard to improve upon. Perhaps incentives can bring more attention to these goals and provide an ecosystem where the evolutionary process can flourish. Knowing that you need to store grain for the winter doesn't tell you how to do so; it just might give you a few ideas.
