Governments are overthrown when a sufficiently large fraction of local potential military power coordinates to organize a rebellion. To prevent this, a totalitarian government can try to stop people from talking to each other about their inclinations toward, or plans for, rebellion. Governments can do this by controlling schools and news media, and by hiding many spies among the population. And by packing people densely enough that all that is said is heard by many, with authority close at hand to punish violators. And also by recursively doing all this even more among the government officials who manage it.
However, if this society isn’t completely isolated from other societies, then the many extra costs produced by this system can make it lose to outside competitors. (This seems to have happened to many totalitarian governments so far in history.) And the more contact there is across borders, the more that insiders may be able to escape, to coordinate with each other via outsiders, or to learn that insiders are worse off.
How has this situation changed in the last few centuries? On the one hand, today’s world grows faster, it talks, travels, and trades more widely, and it is more inter-dependent, all of which increases these problems of contact with and competition with outsiders. On the other hand, we have gotten a lot better at managing large organizations, which allows for big complex governments, and with today’s tech it is easier to spread approved news and schooling to everyone. Also, totalitarians could put microphones and detectors on each person to record what they say, and thus wouldn’t have to pack people so close.
What about in the future? You might think that AI also helps to automatically listen and report suspicious talk, but I suspect that for below-human-level AI this is relatively easy to evade by just talking more indirectly. You might also be able to directly put “kill switches” on people, in effect putting bombs on them, but I also don’t see this offering that much advantage over the usual easy ways governments have to kill disorganized locals.
As I discuss in my book Age of Em, those with direct access to the computers running brain emulations should be able to read the surface of em minds. (And also to directly end local copies.) However, I don’t see this offering that much advantage over being able to hear and read everything said, and to control their sources of news and education. Rebels could talk indirectly in ways missed by shallow mind reading, and might be helped by lazy, corrupt, or rebellious enforcers. A bigger concern is that most of the em world would be crammed into one or a few big cities, which makes a world government more feasible and likely. (More on that below.)
In two posts of July 13 & 27, Holden Karnofsky says there’s a substantial chance that widespread totalitarian governments controlling the virtual environments of digital people (e.g., ems) could lock themselves in power for tens of billions of years.
The 21st century could … determine the entire future of the galaxy for tens of billions of years, or more. … a chance of “value lock-in” here: whoever is running the process of space expansion might be able to determine what sorts of people are in charge of the settlements and what sorts of societal values they have, in a way that is stable for many billions of years. … [via putting mind uploads] in virtual environments that automatically reset, or otherwise “correct” the environment, whenever the society would otherwise change in certain ways (for example, if a certain religion became dominant or lost dominance). … “lock in” particular religions, rulers, etc. (July 13)
Much more stable. … because digital people need not die or physically age, and their environment need not deteriorate or run out of anything. As long as they could keep their server running, everything in their virtual environment would be physically capable of staying as it is. …
Government turns authoritarian … [makes] virtual environment … so that certain things … can never be changed – such as who’s in power. If such a thing were about to change, the virtual environment could simply prohibit the action or reset to an earlier state. …
server might be way out in outer space, light-years from anyone who’d be interested in [altering that server]. (July 27)
Like in the TV series Upload or the science fiction novel Fall; or, Dodge in Hell, Karnofsky seems to imagine the em world as an afterlife heaven or hell, where activity in that world has no relation to the activity needed to build or sustain that world. Thus ems there could live in any virtual videogame story whatsoever, which could enforce any arbitrary rules whatsoever. Such as erasing the whole world and starting over whenever the totalitarian government is threatened.
However, as I describe in detail in my book Age of Em, in a competitive world dominated by ems it is the ems who do most all work, including transportation, building and maintaining computers, supplying them with energy and cooling, writing all the code, etc. And in fact, most ems must work most of the time just to earn enough to survive.
In this context, it would be crazy expensive to build a big world of ems who do no useful work, but instead just play out some virtual game. Whoever paid for that would be outcompeted by other worlds of ems who work. Yes, it would be much cheaper to make changes that don’t much affect the productivity of ems, like changing the colors of their virtual wallpaper. But the more that your totalitarian em world has to pay for spies to listen for possible rebel talk, and to limit useful news and interactions in order to limit rebel coordination, the more that your totalitarian world would lose to other worlds that don’t pay these costs.
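The competitive logic here is just compound growth. As a toy sketch with entirely made-up numbers (the growth rate `g` and overhead `c` are illustrative assumptions, not estimates from the post or from Age of Em), suppose surveillance and control costs shave a couple of percentage points off a totalitarian em economy’s growth rate:

```python
# Toy model with hypothetical numbers: two em economies start equal in size.
# The "free" economy grows at rate g per period; the totalitarian one pays
# a surveillance/control overhead c that cuts its growth to g - c.
# Even a small overhead compounds into a vanishing share of total output.

def totalitarian_share(periods, g=0.10, c=0.02, start=1.0):
    """Totalitarian economy's share of combined output after `periods`."""
    free = start * (1 + g) ** periods
    controlled = start * (1 + g - c) ** periods
    return controlled / (free + controlled)

if __name__ == "__main__":
    for t in (0, 50, 200):
        print(t, round(totalitarian_share(t), 3))
```

With these assumed parameters the controlled economy starts at half of total output, falls below a third of it after 50 periods, and becomes economically negligible after a few hundred; the exact numbers don’t matter, only that any persistent overhead compounds away the payer’s share.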
Yes, crazy expensive payments are sometimes made, but they’d be far from the usual case. Even today, some billionaire might pay people to come live on his island where, instead of working, they play out some fantasy game. But as far as I know no billionaire wants this enough to make even tiny versions of it today.
Over many orders of magnitude of change, the em economy should grow fast, causing big fast changes to the supporting infrastructure, and to the work habits and relations of most jobs. In this situation, if most of your ems are doing useful work, work matched to their changing equipment, infrastructure, organizations, and relations to outsiders, then erasing them all and replacing them with copies from generations ago is crazy expensive; those previous ems only know how to work productively in their previous world, not in this new changed world. Yes, eventually tech change should nearly stop, but even then many practices, arrangements, organizations, and relations are likely to keep changing.
Note also that even though individual ems could live forever, they’d likely have a limited length career, after which they’d have to retire and be replaced by younger more flexible workers.
Totalitarian concerns make more sense regarding civilization-wide “world” governments, which face no outside competition. Or perhaps where the cost of defense is so much less than the cost of offense that an isolated local “world” government needn’t worry much about competition from outsiders. Such places might be able to sustainably pay crazy costs to enforce local totalitarian regimes. I do worry about such scenarios.
Finally, let me note that Karnofsky does mention Age of Em:
Age of Em, an unusual and fascinating book. It tries to describe a hypothetical world of digital people (specifically mind uploads) in a lot of detail, but (unlike science fiction) it also aims for predictive accuracy rather than entertainment. In many places I find it overly specific, and overall, I don’t expect that the world it describes will end up having much in common with a real digital-people-filled world. (July 27)
So he suggests my book is mostly wrong, but doesn’t mention any specific way he thinks it is wrong? Other than this key point of his ignoring cost/competition issues, I don’t see how any of his descriptions of em worlds conflict with my descriptions from Age of Em.
I am not following the claim that virtual environments are irrelevant here.
If we had any sufficient combination of a world government, a small set of closely coordinating or mutually influencing governments, and a situation where defense *of space settlements against incoming space probes* is easy relative to offense (this is distinct from the defense-offense balance on Earth), then I would be much more worried about lock-in conditional on having the technology to create digital resetting space settlements than I am about lock-in conditional on today's technology. Does that seem unreasonable to you?
I also want to acknowledge that the "full reset" is an extreme case and an intuition pump (although I think it could be a real issue under conditions laid out above). Virtual environments could also be set up to e.g. reset all of the minds, while preserving specific kinds of info (e.g., results of R&D). This could further lower the competitiveness cost. I haven't exhaustively gone through the ways virtual environments could be used to lock in particular properties of a community, but it seems to me that they provide a lot more tools than exist today for turning momentary power into lock-in.
Let's say a landlord treats me well because I always pay on time and don't make trouble, and it's costly to replace tenants like me. In the em world there is no such cost, because new, instantly productive people can be created on demand (by copying the most productive ems). If we had a photocopier for dairy cows, we would treat them even worse than we do now. Imagine such a cow negotiating for... anything! For ems, who would be constantly milked for their cognitive labor, we would literally have such a copier. That copier, which would make them insta-replaceable with a probably more productive and less fussy model, would completely undercut any negotiating power.