Some futurist philosophers have recently become enthused by what seems to me a spectacularly bad idea. Here is their idea:
Some effective altruists … have argued that, if humanity succeeds in eliminating existential risk or reducing it to acceptable levels, it should not immediately embark on an ambitious and potentially irreversible project (such as space colonization) of arranging the universe’s resources in accordance with its values, but ought instead to spend considerable time— “centuries (or more)” (Ord 2020), “perhaps tens of thousands of years” (Greaves et al. 2019), “thousands or millions of years” (Dai 2019), “[p]erhaps… a million years” (MacAskill, in Perry 2018)—figuring out what is in fact of value. The long reflection may thus be seen as an intermediate stage in a rational long-term human developmental trajectory, following an initial stage of existential security when existential risk is drastically reduced and followed by a final stage when humanity’s potential is fully realized (Ord 2020). (More)
The long reflection. Perhaps it’s a period of a million years or something. We’ve got a lot of time on our hands. It’s really not the kind of scarce commodity, so there are various stages to get into that state. The first is to reduce extinction risks down basically to zero, to put us in a position of kind of existential security. The second then is to start developing a society where we can reflect as much as possible and keep as many options open as possible. William MacAskill
It seems that first comes computer science and global governance and coordination and strategy issues, and then comes a long time of philosophy. Lucas Perry (more)
And here is Toby Ord, from his book The Precipice, quoted at length so we can all be very clear about what this idea is:
I find it useful to consider our predicament from humanity’s point of view: casting humanity as a coherent agent, … what all humans would do if we were sufficiently coordinated and had humanity’s long term interest at heart. … We should [proceed]… in three phases: 1. Reaching existential security 2. The long reflection 3. Achieving our potential … A place where existential risk is low and stays low. I call this existential security. …
This will involve major changes to our norms and institutions (giving humanity the prudence and patience we need), as well as ways of increasing our general resilience to catastrophe. … Take our time to reflect upon what we truly desire, … call this the Long Reflection. … What is essential is to be sufficiently confident in the broad shape of what we are aiming at before taking each bold and potentially irreversible action – each action that could plausibly lock in substantial aspects of our future trajectory. … For example, … genetically improving our biology … or giving people the freedom to adopt a stunning diversity of new biological forms.
We could think of these first two steps of existential security and the Long Reflection as designing a constitution for humanity. … We can’t rely on our current institutions that have evolved to deal with small- or medium-scale risks. … Humanity typically manages risk via a heavy reliance on trial and error. … But this reactive trial and error approach doesn’t work at all when it comes to existential risk. … This will require institutions with access to cutting edge information about the coming risks, capable of taking decisive actions, and with the will to actually do so. For many risks, this action may require swift coordination between many or all of the world’s nations.
There would be benefits to centralizing some of this international work on safeguarding humanity. … Our options range from incremental improvements to minor agencies through to major changes to key bodies such as the UN Security Council, all the way up to entirely new institutions for governing the most important world affairs. …
Some important early thinkers on existential risk suggested that the growing possibility of existential catastrophe required moving toward a form of world government. … But the term [world government] is also used to refer to a politically homogenized world with a single point of control (roughly, the world as one big country). This is much more contentious and could increase overall existential risk via global totalitarianism, or by permanently locking in bad values. Instead, my guess is that existential security could be better achieved with the bare minimum of internationally binding constraints needed to prevent actors in one or two countries from jeopardizing humanity’s future.
Okay, they want to first greatly cut our risk of extinction, and then somehow stop irreversible change and have us talk and think for a very long time, after which we would then act again once we had reached a sufficiently strong consensus. But that’s kinda crazy, as discussed here by Felix Stocker:
Is there any way humanity could reach a ‘Long Reflection’ period? Could we sustain it? Could it really discover the way to the ‘optimal’ future? … Can we actually eliminate x-risks without taking any momentous and irreversible decisions, … we would have to have radically different political and governmental structures – perhaps a global government, or a global hegemon … it seems really hard to achieve and sustain. … a significant number of individuals and groups would be forced to sacrifice short term gains … authoritarian political institutions would have to be developed which could prevent individuals and groups from acting in their own rational self-interest. … We couldn’t expect to be able to ‘solve moral philosophy’ just by doing it in a vacuum. … I’m struggling to see the Long Reflection as anything other than impossible and pointless. … If we genuinely could engage in a collective philosophy project for 10,000 years, why would we ever want to stop?
In our world today, many small local choices are often correlated, across both people and time, and across actions, expectations, and desires. Within a few decades, such correlated changes often add up to changes which are so broad and deep that they could only be reversed at an enormous cost, even if they are in principle reversible. Such irreversible change is quite common, not at all unusual. To instead prevent this sort of change over timescales of centuries or longer would require a global coordination that is vastly stronger and more intrusive than that required to merely prevent a few exceptional and localized existential risks, such as nuclear war, asteroids, or pandemics. Such a global coordination really would deserve the name “world government”.
Furthermore, the effect of preventing all such changes over a long period, allowing only the changes required to support philosophical discussions, would be to have changed society enormously, including changing common attitudes and values regarding change. People would get very used to a static world of value discussion, and many would come to see such a world as proper and even ideal. If any small group could then veto proposals to end this regime, because a strong consensus was required to end it, then there’s a very real possibility that this regime could continue forever.
While it might be possible to slow change in a few limited areas for limited times in order to allow a bit more time to consider especially important future actions, wholesale prevention of practically irreversible change over many centuries seems simply inconsistent with anything like our familiar world.
So how did all these people get so stuck on such a crazy bad idea? My guess is that they don’t talk enough to social scientists. But that’s just my guess.
A long reflection may also be needed before we reach irreversible immortality. Maybe 10,000 years of life extension is a good starting goal, one which would not induce fear of a “cold boring immortality”, and during those 10,000 years we could decide whether we want to live the next million years, then the next billion, and so on?
However, a long reflection carries the risk that the value space will be dominated by the most effectively replicating, parasitic memes.
The doers would have Mannschenn Drive.