12 Comments

Yes, firms do rot.

I don't have stats handy, but surely firms exhibit senescence too. Famously, the “ABCs of corporate senescence” are Arrogance, Bureaucracy, and Complacency. Founders leave, secret sauces get lost or diluted. Why would organizations not exhibit senescence? Most of the same natural laws that make other complex systems age and die apply to them too. Here are some aging mechanisms that would seem to apply to corporations:

* The reliability theory of aging;
* The antagonistic pleiotropy theory of aging;
* The pathogen theory of aging.
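Of these, the reliability theory is the easiest to make concrete. Here's a minimal Monte Carlo sketch of the idea (all parameter values are my own illustrative assumptions, not estimates from any source): a firm is a few critical blocks in series, each with redundant elements that fail at a constant rate, and the firm's overall failure rate still climbs with age as redundancy is depleted.

```python
import random

# Minimal sketch of the reliability theory of aging (after Gavrilov &
# Gavrilova). A "firm" is a few critical blocks in series; each block
# holds redundant elements that fail at a constant per-step rate. No
# element ages, yet the system's failure rate rises with age as spare
# capacity is used up. All parameters below are illustrative assumptions.

N_SYSTEMS = 10_000      # simulated firms
N_BLOCKS = 5            # critical subsystems, connected in series
REDUNDANCY = 3          # redundant elements per block
ELEMENT_HAZARD = 0.02   # per-step failure chance of each element

def lifespan(rng: random.Random) -> int:
    """Steps until some block loses all of its working elements."""
    blocks = [REDUNDANCY] * N_BLOCKS
    t = 0
    while all(b > 0 for b in blocks):
        t += 1
        for i in range(N_BLOCKS):
            # each still-working element fails independently this step
            blocks[i] -= sum(rng.random() < ELEMENT_HAZARD
                             for _ in range(blocks[i]))
    return t

rng = random.Random(0)
deaths = [lifespan(rng) for _ in range(N_SYSTEMS)]

# Empirical hazard in 10-step age bins: deaths in bin / survivors at
# the start of the bin. The printed hazard climbs with age, even
# though every individual element has a constant hazard.
for start in range(0, 120, 10):
    at_risk = sum(1 for d in deaths if d >= start)
    died = sum(1 for d in deaths if start <= d < start + 10)
    if at_risk > 200:
        print(f"age {start:3d}-{start + 9:3d}: hazard ~ {died / at_risk:.3f}")
```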

No wonder singletons are considered a solution to prevent AI takeover.

You're off-topic.

It seems most questionable whether a central power aligned towards deliberation can be expected to stay so aligned.

All democrats are pedophiles

What's your P(foom) approximately?

There's the possibility that one can reason oneself to a different set of values than one had originally. For instance, perhaps there is something inconsistent or unjustified about one's original set of values. If one values consistency in one's thoughts, like any intelligent being would, then one would try to resolve the inconsistency. And if one values justifying one's views, then one would try to find good reasons to hold those specific values, in a way that doesn't just seem like motivated reasoning.

For instance, if someone thinks murder is wrong, but being a soldier and killing for your country is right, then there is, at least on the surface, a contradiction between the two views. If the person who holds these views thinks long enough about this, they would be inclined either to reject one view or the other, or to build a deeper framework that explains, for a non-arbitrary and intellectually honest reason, why killing in one scenario is wrong and killing in the other is right.

We would expect that a global hegemony might go through a similar process. It would feel the need to justify its own values to itself, in a way that makes sense, and this could lead to change in its values over time. Eventually it might settle on a set of values that seem completely consistent and justified to itself. And would that be a bad thing?

Yes, fair, agree. This is beyond MacAskill's frame.

My reaction was to your broader statement of immortality risk. Plausibly, the high-risk point for a stagnant world government with enough teeth to stop innovation and lock in stasis would be shortly after the invention of immortality (via ems, or biology, or however it happened), even if that risk never gets all that high. And even then the leaders would have to change to avoid rot, even if copies of their younger selves are their replacements. I think we disagree about the extent of that risk, but to me this seems one of the likelier ways it might happen (even if the overall chance of success is low).

I agree that immortality may cut innovation and growth rates, but not because anything will last "forever". MacAskill described immortals with unchanging values, and that's what I was disagreeing with him about.

Seems like C.S. Lewis did it better half a century ago in _The Abolition of Man_. He made many of the same predictions/warnings, though in his case the threat wasn't immortal AI but human manipulation of human minds. And in fact I still think that's more dangerous than machine minds.

You say "Yet the only scenario he points to that seems remotely up to this task is Bostrom’s foom scenario."

and earlier "Now mere immortality seems far from sufficient to create either value stability or a takeover."

I agree on skepticism about foom.

But I do think our existing norms and institutions are not suited for immortality, and this *by itself* could lead to stagnation. There are plenty of historical leaders, both good and bad, who, if immortal in power, would corrupt the system to keep themselves there forever: Hitler, Stalin, Alexander the Great, even perhaps Churchill and FDR. Trump of course. This would lead to global government, and inevitably to stagnation, because progress and change could push that leader out of power. Perhaps MacAskill didn't pursue this in his book as a negative, since he thinks global government would be good. But it's a plausible risk and concern, and it's not really tied to AI or foom. Just having immortality by itself introduces the risk.

Now of course, immortals will rot. But the Foundation TV show based on Asimov's books came up with a clever solution to that: the emperor has a young clone to groom, a middle-aged clone to rule, and an elderly clone to provide sage wisdom. For software AIs, of course, it'd be even easier to keep the line of succession within the family of copies.

Anyway, mostly I'm agreeing with your post. But the one thing I think you get wrong is claiming that immortality doesn't pose a threat of creating a stagnant society. Even then, the AI robot probes that leave the solar system will of course take over eventually, if they are allowed to get out before grabby aliens come.
