28 Comments

Tech gets harder to master. Newer, more difficult tech cannot become distributed across teams as fast as earlier, simpler tech. Can any team become so superior that no other team can replicate its research? I'd argue that grows more probable by the day.

There will come a point when no usable information will seep outside the walls of firms or small teams. Some techs will evolve into black boxes. You really can't tell exactly what goes into a CPU chip these days.

There's a high probability that there will be, if there already aren't, "untouchable" tech firms, whose in-house theoretical knowledge, research and production methods, and equipment are so intricate that no matter how many resources are thrown at the problem, competitors can't catch up.

And if these people are smart enough to stay quiet and out of sight, which they will be, the competition won't even know what to look for, until it's way way too late. The spy organizations of the world know this.


The US was the first to develop nuclear weapons. We promptly used two, then stopped.

IIRC, in 1945 the US only had two available to use. And it used both of them.

And I don't think it had many for some years afterwards. A handful of kiloton-range bombs do not convey worldwide omnipotence.

For purposes of comparison, the RAF and USAAF dropped ~ one million tons of bombs on Germany in the last year of the war.


Can you prove that the vassals would have been better off if Cortes had been defeated?

Given a choice between overlords who wanted a lot of gold and silver, and overlords who wanted to cut my heart out, I think I would be better off with the former option.

I might point out that the vast majority of deaths were due to epidemics, and the bugs presumably did not care which human won the war.


I stand corrected.


That 1% number was for a similar but not identical scenario: http://www.overcomingbias.c...


Better off materially perhaps, but there's a lot of painful hate to suppress before you reach Realpolitik.


Hmm, I was sure that you said that. Oh well, I take it back. But if you say "the probability is high enough to justify a large effort to avoid that bad scenario" I certainly agree, though it seems more important to focus on the more easily winnable em scenario as far as "activism" goes.


Cortés was able to trick many of the Aztecs' vassals into supporting him, even though the vassals would have been better off if Cortés had been defeated.


It was Cortés who instigated the rebellions, so that's an example of his application of social intelligence ('a bit of clever oratory here and there').


I should have linked to this interview with John Mueller: "Nukes? No Big Whoop."


Robert, somewhat nitpicking but I think the Allied powers are actually more vulnerable to the charge of [pinky&thebrain]TRYING TO TAKE OVER THE WORLD[/pinky&thebrain] than the Axis, who were never nearly as coordinated. The nukes we did drop (which, combined with tests, used up most of our supply) also killed fewer people than our conventional firebombing did.

mjgeddes: I think it noteworthy that the Aztecs had many defeated vassals underneath them who revolted when the opportunity came along, so it wasn't just Cortez and his merry men.


I seem to recall Hernán Cortés:

"Cortés' contingent consisted of 11 ships carrying about 100 sailors, 530 soldiers (including 30 crossbowmen and 12 arquebusiers), a doctor, several carpenters, at least eight women, a few hundred Cuban Natives and some Africans, both freedmen and slaves."

This was apparently enough to 'do over' the entire Aztec Empire! There are plenty of other examples from history of small forces with a small technology advantage completely 'doing over' a large number of opponents.

--

Actually, I'm sure that 'overwhelming the rest of the world' would be ridiculously easy. As you say, "we have become more dependent on one another via a more elaborate international division of labor". That's a big exploitable weakness: just get control of a few key pieces of infrastructure in a surprise attack and the whole thing falls over long before any opponents can cooperate. Your faith in 'the market' is quite misplaced.

In fact a takeover may not even be obvious. 'The greatest trick the devil ever pulled was convincing the world he didn't exist.' Dramatic displays of power are a human status display, after all. A super-intelligence could simply manipulate things from the shadows, coordinating events and pulling strings to achieve desired results - a bit of clever oratory here, getting a few key people into power there, accumulating a bit of wealth to do this and that - make it all look like chance - and run the show in secret, like 'The Hitchhiker's Guide to the Galaxy', where the colorful 'President of the Galaxy' Zaphod Beeblebrox was actually a front for an anonymous guy in a hut somewhere who was the only one who really knew what was going on.
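To make the earlier point about 'getting control of a few key pieces of infrastructure' a bit more concrete, here is a minimal toy sketch in Python. The hub-and-spoke network shape and all the sizes are purely illustrative assumptions, not a model of any real economy, so read it as a cartoon of the fragility argument rather than evidence for it.

# Toy sketch: how removing a few hub nodes can fragment a dependency network.
# The network shape and all sizes are illustrative assumptions, not data about
# the real division of labor.
from collections import deque

def largest_component(nodes, edges):
    # Size of the largest connected component, via breadth-first search.
    adj = {n: set() for n in nodes}
    for a, b in edges:
        if a in adj and b in adj:
            adj[a].add(b)
            adj[b].add(a)
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        queue = deque([start])
        seen.add(start)
        size = 0
        while queue:
            node = queue.popleft()
            size += 1
            for nxt in adj[node] - seen:
                seen.add(nxt)
                queue.append(nxt)
        best = max(best, size)
    return best

# Illustrative network: 3 hub nodes (the 'key infrastructure') that 60 other
# nodes depend on, with the hubs also linked to each other.
hubs = ["hub0", "hub1", "hub2"]
spokes = ["node%d" % i for i in range(60)]
edges = [(hubs[i % 3], s) for i, s in enumerate(spokes)]
edges += [("hub0", "hub1"), ("hub1", "hub2")]

everyone = hubs + spokes
print("intact, largest connected component:", largest_component(everyone, edges))
survivors = [n for n in everyone if n not in hubs]  # surprise attack removes the 3 hubs
print("hubs removed, largest connected component:", largest_component(survivors, edges))
# Intact, all 63 nodes hang together; with just the 3 hubs gone, the largest
# surviving piece is a single isolated node.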


Robin, I don't think we really disagree much here. I mainly want to establish that there is at least one plausible scenario where a takeover of the world is possible, in which case it has to be a significant component of one's expected utility computation (cf. Pascal's wager), especially for those who consider the ultra-competitive Malthusian outcome to be of little utility.
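For concreteness, here is a minimal sketch of that expected utility point in Python. The probability and utility numbers are purely illustrative assumptions of mine, not figures anyone in this thread has endorsed; the only point is how a low-probability branch can dominate the computation when the alternative outcome is assigned very little utility.

# Toy expected-utility calculation in the spirit of the comment above.
# All numbers are illustrative assumptions, not anyone's published estimates.
p_takeover = 0.01       # assumed probability that the takeover scenario is feasible
u_takeover = 1.0        # assumed utility if that scenario is realized
u_malthusian = 0.001    # assumed utility of the ultra-competitive Malthusian outcome

expected_utility = p_takeover * u_takeover + (1 - p_takeover) * u_malthusian
takeover_share = p_takeover * u_takeover / expected_utility

print("expected utility: %.5f" % expected_utility)
print("share contributed by the takeover branch: %.0f%%" % (100 * takeover_share))
# With these assumptions the 1%-probability branch contributes roughly 91% of
# the expected utility, which is why it has to figure in the computation.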

The division of control between investors and the core team obviously depends on the relative value of capital vs. insight, which I don't have much to say about. But I imagine if the first upload has already been created, and the only thing needed to take over the world is more capital to purchase large quantities of general purpose, off-the-shelf computing hardware, I can quickly obtain additional capital at a very low cost (i.e., billions of dollars of investment for a tiny share of control over the venture).


The only part that matters: AI reaches a critical point, gets smarter, uses the extra smarts to get even smarter, repeat. Think nuclear fission, not the agricultural revolution. Taking an outside view of a scenario doesn't work when you don't use the right reference class. The creation of a really powerful optimization process, like nuclear fission, is a phenomenon outside the realm of economics. This is not 'technology': technology is a tool we use to enhance our smarts. Creating smarts themselves is a whole 'nother realm.

Or, in the esoteric and nearly dead language of 'Laugh out loud: feline!': TEH ONLY PART DAT MATTERS: AI REACHEZ CRITICAL POINT, GETS SMARTR, USEZ EXTRA SMARTS 2 GIT EVEN SMARTR, REPEAT. FINKZ NUCLEAR FISHUN, NOT AGRICULTURAL REVOLUSHUN. TAKIN AN OUTSIDE VIEW OV SCENARIO DOESNT WERK WHEN U DOAN USE TEH RITE REFERENCE CLAS. TEH CREASHUN OV RLY POWERFUL OPTIMIZASHUN PROCES, LIEK NUCLEAR FISHUN, IZ FENOMENON OUTSIDE TEH REALM OV ECONOMICS. DIS AR TEH NOT TECHNOLOGY: TECHNOLOGY IZ TOOL WE USE 2 ENHANCE R SMARTS. CREATIN SMARTS THEMSELVEZ IZ WHOLE NOTHR REALM.
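Here is a minimal toy sketch, in Python, of why the fission analogy (rather than the agricultural-revolution one) matters. The 'returns' factor k is an entirely made-up illustrative parameter playing the role of the neutron multiplication factor; the only point is the qualitative difference between k below, at, and above 1.

# Toy model of "gets smarter, uses the extra smarts to get even smarter, repeat".
# k is analogous to the neutron multiplication factor in a fission chain reaction:
# k < 1 fizzles, k = 1 merely plods along, k > 1 runs away.
# All numbers are illustrative assumptions.

def self_improvement(k, steps=30, initial_gain=1.0):
    capability, gain = 0.0, initial_gain
    for _ in range(steps):
        capability += gain   # apply this round of improvements
        gain *= k            # the extra smarts change what the next round buys
    return capability

for k in (0.8, 1.0, 1.3):
    print("k = %.1f -> capability after 30 rounds: %8.1f" % (k, self_improvement(k)))
# k = 0.8 plateaus near 5, k = 1.0 grows linearly to 30, k = 1.3 explodes past 8000 --
# the qualitative gap the comment is pointing at, whatever the real value of k is.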


I don't recall publishing such a low probability estimate. I certainly agree the probability is high enough to justify a large effort to avoid that bad scenario. (Even a 1% chance is enough.)


Wei, you'll need to secure access to the resources you will need to run all those ems, and do so against attempts by others to coordinate to deny you access to those resources. The more that your development is a surprise, and the more prepared you are, the better a chance you have to achieve a surprise takeover.

On investors, if an em transition is anticipated enough, we should expect very large amounts of capital to be collected behind teams that attempt to achieve the first em. These collections will of course be very concerned to maintain control of those teams. Even if one team wins and overwhelms the others, the size of the community behind that team would not be small.
