Long reflection may also be needed before we reach irreversible immortality. Maybe 10,000 years of life extension is a good starting point as a life extension goal, one which will not induce fear of "cold boring immortality", and during those 10,000 years we can decide whether we want to live the next million years, then the next billion, and so on?

However, the long reflection carries the risk that the value space will be dominated by the most effectively replicating, parasitising memes.

The doers would have Mannschenn Drive.

(You started with "Suppose there is nothing...remotely human...in space." I assume you meant other than us humans.) I agree with the general "something would" idea. But don't forget that space is three-dimensional, so we have a two-dimensional choice of where to go. Combining those two points: in a wide space, some clumps of activity will slowly become less human, some will quickly turn to goo, and some will become more human, but as long as the more human ones have some direction to grow in, they are the more durable ones. I think it misses the point to think of humanness as a kind of fragility.

Ok, let's steelman the long reflection.

Let's imagine there is some subset of technologies and actions that are really, really problematic. One slight mistake can cause X-risk or S-risk in all sorts of subtle ways.

Other technologies are basically fine. Work continues on the latest in portable fusion reactors, because fusion reactors are really not that dangerous.

If we don't find some way to stop one idiot from accidentally destroying the world, some idiot will sooner or later. We had better find some way of stopping idiots from destroying the world with advanced tech without turning the world into an Orwellian dictatorship.

If we assume that most humans don't want to destroy the world, a majority vote might work. Maybe prediction markets. Maybe advanced AI.

So suppose we find some pretty nice state: the most dangerous 10% of technologies are labelled off limits, only to be unlocked by a 2/3 majority vote of humanity (which we think will only happen if we can find something safe and very useful to do with those techs). Maybe some of those techs will stay locked for a million years, or forever. Humanity is having a nice time and will continue to do so.

Ya, those are risks. Why does that mean no Amish will remain to try again?

I did say "commit to leave them alone".

Suppose that there is nothing that is remotely human and remotely near optimal at replicating in space. Evolution and competitive pressure push constantly towards some non-sentient replicator goo. We could keep our humanity and spread across the universe 10% slower, but unless humanity as a whole were coordinated, there would always be pressure to sacrifice a little more of your humanity to get ahead. And in a vast civilization, someone would.

If the resulting x-risk kills us, there's a chance the Amish will still be around to try again.

Nope.

And the concern is less about an X-risk that kills everyone. It's more: imagine we had failed to agree on which computations count as sentient, so some people have video game NPCs that others think are suffering.

Or maybe a small number of humans self-modify to be more rapacious replicators. So we get "humans" that have lost all art, love, etc. in favour of whatever it takes to grab resources quickly.

A 90-light-year-radius bubble is hardly a galactic empire.

Death is irreversible, so let's get rid of it and then do the long reflection.

Making a conscious effort to slow down and have a bit of civilisational reflection before we switch on a utility-maximising self-replicating AI does seem like a good idea.

But there's also something hilarious about the idea of two moral philosophers, Toby Ord and Will Macaskill, saying, without a hint of irony, "You know what we need to sort out the future of humanity? Several millennia of moral philosophy. Just put us in charge and we'll work out what humanity needs to do."

It's like Prof. Hanson saying that the future of humanity depends on handing several millennia's worth of global power to contrarian economists.

Not pausing to reflect does not equate to being aggressive. I would expect that whether alien civilizations see each other as a threat depends largely on the power imbalance between them. Based on that assumption, not pausing to reflect will result in more power faster, which is likely to increase the odds of other civilizations seeing you as a threat. That seems like a likely side effect of power. The main alternative is being weak, puny and not threatening. Given a choice, I think most would prefer to adopt the more powerful position.

You may decrease the likelihood of your descendants being "eaten" by aliens, but you may just as well increase it, because they are probably more likely to perceive you as a threat if you perceive them as a threat.

At the same time, if you grow like crazy without reflection, your descendants are probably more likely to wipe each other out. The "long reflection" is probably a crazy bad idea, but that doesn't mean it's always bad to reflect.

"We need to do all the things, choose every choice, go willy-nilly across the universe and spread our seed to the furthest limits."

According to which principle do we need to, though? I for one don't see any particular need for that. I'm not saying that humanity does not deserve to survive, but I am not sure whether it needs to either. Perhaps we don't need to reflect collectively on this for even a minute, but perhaps it wouldn't be a bad idea if everyone who has the time and some say in the matter gave it a good amount of thought?

"We are just men and women, trying to get by. "

Well, I suppose that depends. Some people seem to do a lot more than just get by.

You seem to do a fair amount of philosophizing for someone who doesn't seem particularly enamored with the idea?

First of all, a misanthropic ruler wouldn't leverage the police state to bring about human extinction, because it would require the enforcers to commit suicide too. If they don't kill off everybody, it's not an existential threat. A much better strategy for a misanthropic ruler to bring about human extinction would be via nuclear weapons or bioweapons.

And if we're talking about the existential risks of misanthropic political factions getting their hands on dangerous weaponry, we might just as well be talking about the risk of terrorism. A set of self-interested rulers or governing authorities would probably be a useful, coordinated defense against such political factions. A more dispersed governing authority would presumably put up a less coordinated defense against nihilists.

Not to mention, we have game-theoretic reasons to believe that dispersed governing authorities are susceptible to more coordination problems in general, and I think almost all of the biggest existential risks are coordination problems; or at least, failures to solve most existential risks are due to coordination problems.

I disagree. I can imagine a number of circumstances in which a credible threat from a sovereign constitutes an existential risk. Here’s one:

Consider, for example, a democratic republic. If this state possesses a police power sufficient to pose a credible threat of violence toward all members of the populace, it is only spared the realization of that risk by a political process.

Should an anti-human nihilist faction rise to political prominence and win a governing majority—let’s say an extremist Malthusian faction that campaigns on the notion that civilization must self-eradicate for the sake of the biosphere—the police power itself would by definition become an existential threat.

That any governmentality could become a similar threat to its own polity is an existential threat that would need to be addressed before a “long contemplation” could rationally be initiated.

Without such a credible threat(ener), how else could a long contemplation be enforced?

Sitting around and talking to each other to figure out what to do for 10 thousand years, huh? I think the author displays overconfidence in the idea that philosophy produces universal truth. I am skeptical.

Our ethics, morals, and values are ultimately just software that we run because it renders a competitive advantage. But this competitive advantage is conditional upon our environment. So if you are a static society, whose software is attuned to surviving in a static society, and you try to leverage this software to produce answers about how to live as an expansive society, then you're going to get the wrong answers.

It's the equivalent of a fish debating his odds of survival venturing out of the oceans, given his physical faculties. Of course he's going to conclude that staying in the ocean is the correct answer, because that's the environment his body is attuned to.
