I think in the discussion above there is a lot of conflation among the causes of what is termed "total tech war." You can find yourself in a total tech war merely by believing that the other agents see it as such. Or you can independently analyze the situation and determine that the best way to maximize your own payoff is to treat it as a total tech war regardless of what the other agents think about it. If the space of advantages has upward cliffs, as Eliezer suggests, then it is not unreasonable to believe an agent with a time-sensitive but utterly dominating advantage will rationally decide the most payoff comes from acting in accordance with a total tech war plan of action. This is especially true if part of the cliff-advantage is the ability to analyze a situation more deeply and rapidly than competitors. I don't see any reason why extra, special arguments are needed to justify this as a realistic scenario within AI FOOM.

if you want to say that a particular tech is more winner-take-all than usual, you need an argument based on more than just this effect. And if you want to argue it is far more so than any other tech humans have ever seen, you need a damn good additional argument.

IT is the bleeding edge of technology - and is more effective than most tech at creating inequalities - e.g. look at the list of top billionaires.

Machine intelligence is at the bleeding edge of IT. It is IT's "killer application". Whether its inventors will exploit its potential to provide wealth will be a matter of historical contingency - but the potential certainly looks as though it will be there. In particular, it looks as though it is likely to be mostly a server-side technology - and those are the easiest for the owners to hang on to - by preventing others from reverse-engineering the technology.

Ian: What benefits could slaves or free men provide an AI (or a group of ems) that it could not do for itself 100 times better with nanobots and new processor cores? Foxes do not enslave rabbits. Even if it were just deciding whether to enslave us or let us be 'free', it would know that it could always free us (or future generations) later.

GenericThinker: A simulated human can be put in a simulated environment that is sped up right along with them. They do a year's worth of cognitive work, and play, and sleep, in hours. They experience the full year.
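A quick back-of-the-envelope sketch of what that means in wall-clock terms; the 100x and 1000x factors are just the ones mentioned elsewhere in this thread, not a claim about what is achievable.

```python
# Back-of-the-envelope only: how much real (wall-clock) time a subjective year takes
# at the speed-up factors discussed in this thread.
HOURS_PER_YEAR = 365.25 * 24  # roughly 8,766 hours in a year
for speedup in (100, 1000):
    real_hours = HOURS_PER_YEAR / speedup
    print(f"{speedup}x: a subjective year passes in about {real_hours:.1f} real hours")
# 100x  -> about 88 hours (under four days); 1000x -> about 8.8 hours
```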

@GenericThinker

Unless you plan to build an AI and immediately hook it up to all your nuclear weapons, or give it free run of any and all factories for making weapons or nanotech (whatever that is, "nano" being applied to all manner of things and now being essentially a buzzword), there is no reason to assume an AI, friendly or not, would destroy the world. That entire scenario is fictional; one can build an AI in an isolated computer system with little to no danger. . . . As long as one keeps a human in the loop with the AI and properly designs the hardware and what the hardware is connected to, one can make a safe trip into "mind design space".

Eliezer has made a very persuasive case that a "human in the loop" would not be adequate to contain a super-intelligent AI. See also his post That Alien Message. That post also addresses the point you raise when you write, "A final note: there is this continued talk of hundreds or thousands of times human speed. What does that even mean? What would thinking at a thousand times our current speed mean? How would that even work? Our thoughts, senses, etc. are tied closely to our sense of time. If you speed that up, why would one expect that would be better? It seems to me that this would merely be like listening to music on fast-forward."

Neither of these addresses the question of whether an artificial super-intelligence is likely to be built in the near future. But they might make you reconsider your expectation that it could be easily controlled if it was built.

Eliezer, if everything is at stake then "winner-take-all" is "total war"; it doesn't really matter if they shoot you or just starve you to death. The whole point of this post is to note that anything can be seen as "winner-take-all" just by expecting others to see it that way. So if you want to say that a particular tech is more winner-take-all than usual, you need an argument based on more than just this effect. And if you want to argue it is far more so than any other tech humans have ever seen, you need a damn good additional argument. It is possible that you could make such an argument work based on the "tech landscape" considerations you mention, but I haven't seen that yet. So consider this post to be yet another reminder that I await hearing your core argument; until then I set the stage with posts like this.

To answer your direct questions, I am not suggesting forbidding speaking of anything, and if "unfriendly AI" is defined as an AI who sees itself in a total war, then sure, it would take a total war strategy of fighting rather than trading. But you haven't actually defined "unfriendly" yet.

Carl, I replied at your blog post, but will repeat my point here this time. You say total war doesn't happen now because leaders are "comfortable" and humans are risk-averse with complex preferences, but that there would be a total war over the solar system later because evolved machines' preferences would be linear in raw materials. But evolutionary arguments don't say we evolve to only care about raw materials, and they only suggest risk-neutrality with respect to fluctuations that are largely independent across copies that share "genes." With respect to correlated fluctuations, evolutionary arguments suggest risk-averse log utility.
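To unpack that last distinction with a toy example of my own, with made-up numbers rather than anything from Robin's or Carl's posts: suppose a lineage can take a "safe" 1.5x growth factor each generation, or a "risky" gamble of 3.0x or 0.3x with equal odds; the risky option has the higher expected value but the lower expected log.

```python
# Toy model (my own, with assumed numbers) of the two kinds of fluctuation:
# "safe" multiplies a lineage by 1.5 each generation; "risky" multiplies it by
# 3.0 or 0.3 with equal probability (higher arithmetic mean, lower geometric mean).
import math
import random

random.seed(0)
GENERATIONS = 10_000
SAFE = [1.5]
RISKY = [3.0, 0.3]

def correlated_growth(mults):
    """One shared draw per generation: long-run growth tracks E[log m]."""
    total_log = 0.0
    for _ in range(GENERATIONS):
        total_log += math.log(random.choice(mults))
    return total_log / GENERATIONS  # average log growth per generation

def independent_growth(mults):
    """Many copies each draw their own shock, so the lineage grows at E[m]."""
    return math.log(sum(mults) / len(mults))

for name, m in (("safe", SAFE), ("risky", RISKY)):
    print(f"{name:5s} correlated: {correlated_growth(m):+.3f}/gen "
          f"independent: {independent_growth(m):+.3f}/gen")
# The risky (expected-value-maximizing) strategy wins when shocks are independent,
# but loses (negative average log growth) when they are correlated, where the safe
# strategy with the higher E[log] wins; that is the log-utility point above.
```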

A winner-take-all scenario doesn't imply destructive combat or any sort of military conflict.

Just so. For example, in biology, genes can come to dominate completely by differential reproductive success - not just by killing off their competitors. Warfare is different from winning.

Will underestimates the competitive potential of the Triffid.

There may be, as Robin notes, a bias towards seeing new tech through the lens of total war. There is also a bias for optimism, based on millennia of shot messengers, and a bias to see the future through the lens of the present.

I see in this post a warning against mentioning potentially disastrous consequences of AI. This scares me. When even rationalists ought to feel ashamed of considering such possibilities, we are in trouble. Life is a powerful, dangerous thing. When we are considering creating new, potentially more powerful forms of life, we need to consider how to do so without rendering ourselves extinct. The challenge is not to prove that we will not end up with a stable, economically competitive, non-combative equilibrium. The challenge is to look for every possibility that could lead to total war and make damn sure we've considered them before we touch the on switch.

Carl's 'Reflective Disequilibria' post is a good one. We cannot assume conflict with or between AIs will follow the trend of human wars. The competitive superiority of 'free men' makes for some damn good movies and has a strong impact on human history. However, that is an idiosyncrasy that a third-generation em will probably not possess.

Carl, it's fantastic that you're finally blogging. You should link to it in your posting name, like the rest of us.

To increase your chances of survival you want to have as little compute power as you can get away with, so that you don't waste resources on things that don't intrinsically get you more resources. Look at plants: fantastically successful, but with no use for a brain, and they would be worse off with one.

Greater computational ability doesn't automatically lead to winning. Only if there is something worth discovering or predicting with the intelligence (another energy source, or a cheaper way of doing something you need to do to survive) does it make sense to invest in compute power.

Thanks, Carl.

I have made a post at my blog on the differences between conflicts among humans and conflicts among optimizers whose utility is linear in resources.

"And so far as I can tell, as a matter of simple fact, an AI project run at that level of competence will destroy the world."

How is that even logical? Unless you plan to build an AI and immediately hook it up to all your nuclear weapons, or give it free run of any and all factories for making weapons or nanotech (whatever that is, "nano" being applied to all manner of things and now being essentially a buzzword), there is no reason to assume an AI, friendly or not, would destroy the world. That entire scenario is fictional; one can build an AI in an isolated computer system with little to no danger. Sure, it's true that one can destroy the world through AI, but an unfriendly AI does not necessitate the conclusion that the world will be destroyed. The issue here is that there seems to be a lack of understanding of what is real in the field of AI and what is purely fictional.

If we are totally honest, we are so far away from AGI at the moment that debating friendliness at this point is like debating nuclear war before the discovery of hard radiation. As long as one keeps a human in the loop with the AI and properly designs the hardware and what the hardware is connected to, one can make a safe trip into "mind design space".

A final note: there is this continued talk of hundreds or thousands of times human speed. What does that even mean? What would thinking at a thousand times our current speed mean? How would that even work? Our thoughts, senses, etc. are tied closely to our sense of time. If you speed that up, why would one expect that would be better? It seems to me that this would merely be like listening to music on fast-forward.

Share Tech, Trade: 10 utilons
Take Over Universe: 1000 utilons

But you would ultimately get less through taking over the world than trading with it. Temporarily, you would get more, as you simply grab everything in sight, but in the long term you would get only the output of a bunch of slaves, versus the higher output of free men.

And surely an AI, which could potentially live forever, would think long term and not construct such a table. But then I guess a dabbler-designed one might not.
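A rough way to make that long-term point concrete, with entirely made-up numbers rather than anything from the thread: compare the one-time haul from a takeover plus a perpetual stream of coerced output against a perpetual stream of larger trade gains, discounted at whatever rate a very long-lived agent uses.

```python
# Entirely assumed numbers, just to show the shape of the comparison: a one-time
# grab plus a stream of coerced output, versus a larger ongoing stream of trade gains.
def present_value(flow_per_period, discount_rate):
    """Present value of a constant perpetual flow."""
    return flow_per_period / discount_rate

GRAB = 1000          # one-time haul from seizing everything in sight
SLAVE_OUTPUT = 10    # per-period output extracted from a coerced economy
TRADE_SURPLUS = 25   # per-period gains from trading with a more productive free economy
DISCOUNT = 0.01      # a patient, very long-lived agent discounts the future only slightly

takeover = GRAB + present_value(SLAVE_OUTPUT, DISCOUNT)   # 1000 + 1000 = 2000
trade = present_value(TRADE_SURPLUS, DISCOUNT)            # 2500
print("takeover:", takeover, "trade:", trade)
# With a low discount rate the larger ongoing flow beats the one-time grab, which is
# the commenter's argument; with a high discount rate (or a big enough grab) it can flip.
```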

We could be in any of the three following domains:

1) The tech landscape is naturally smooth enough that, even if participants don't share technology, there is no winner-take-all.

2) The tech landscape is somewhat steep. If participants don't share technology, one participant will pull ahead and dominate all others via compound interest. If they share technology, the foremost participant will only control a small fraction of the progress and will not be able to dominate all other participants.

3) The tech landscape contains upward cliffs, and/or progress is naturally hard to share. Even if participants make efforts to trade progress up to time T, one participant will, after making an additional discovery at time T+1, be faced with at least the option of taking over the world. Or it is plausible for a single participant to withdraw from the trade compact and either (a) accumulate private advantages while monitoring open progress or (b) do its own research, and still take over the world.

(2) is the only regime where you can have self-fulfilling prophecies. I think nanotech is probably in (2) but contend that AI lies naturally in (3).
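To make the compound-interest point in (2) concrete, here is a toy numerical sketch of my own; the growth law and the starting 10% lead are assumptions for illustration, not anything taken from the comment above.

```python
# Toy sketch of regime (2), my own construction: progress per step is proportional to
# the square of current capability ("compound interest"), and sharing lifts everyone
# to the frontier each step.
STEPS = 40
K = 0.02                      # assumed returns parameter
START = [1.00, 1.00, 1.10]    # one participant begins with a 10% lead

def final_lead_ratio(share: bool) -> float:
    caps = list(START)
    for _ in range(STEPS):
        caps = [c * (1 + K * c) for c in caps]    # better tech means faster further progress
        if share:
            caps = [max(caps)] * len(caps)        # trading keeps every participant at the frontier
    return max(caps) / min(caps)                  # leader-to-laggard capability ratio

print("no sharing:", round(final_lead_ratio(False), 2))  # the 1.10 ratio grows every step
print("sharing:   ", round(final_lead_ratio(True), 2))   # pinned at 1.0
# Regime (1) would keep the ratio near its starting value even without sharing;
# regime (3) would add a discrete jump that no amount of prior sharing smooths over.
```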

I generally refer to this scenario as "winner take all" and had planned a future post with that title.

I'd never have dreamed of calling it a "total tech war" because that sounds much too combative, a phrase that might spark violence even in the near term. It also doesn't sound accurate, because a winner-take-all scenario doesn't imply destructive combat or any sort of military conflict.

I moreover defy you to look over my writings, and find any case where I ever used a phrase as inflammatory as "total tech war".

I think that in this conversation, and in the debate as you have just now framed it, "tu quoque!" is actually justified.

Anyway - as best I can tell, the natural landscape of these technologies, which introduces disruptions much larger than farming or the Internet, is winner-take-all without special effort. It's not a question of ending up in that scenario by making special errors; we're just there. Getting out of it would imply special difficulty, not getting into it, and I'm not sure that's possible. Such, at least, would be the stance I would try to support.

Also:

If you try to look at it from my perspective, then you can see that I've gone to tremendous lengths to defuse both the reality and the appearance of conflict between altruistic humans over which AI should be built. "Coherent Extrapolated Volition" is extremely meta; if all competent and altruistic Friendly AI projects think this meta, they are far more likely to find themselves able to cooperate than if one project says "Libertarianism!" and another says "Social democracy!"

On the other hand, the AGI projects run by the meddling dabblers do just say "Libertarianism!" or "Social democracy!" or whatever strikes their founder's fancy. And so far as I can tell, as a matter of simple fact, an AI project run at that level of competence will destroy the world. (It wouldn't be a good idea even if it worked as intended, but that's a separate issue.)

As a matter of simple decision theory, it seems to me that an unFriendly AI which has just acquired a decisive first-mover advantage is faced with the following payoff matrix:

Share Tech, Trade: 10 utilons
Take Over Universe: 1000 utilons

As a matter of simple decision theory, I expect an unFriendly AI to take the second option.

Do you agree that if an unFriendly AI gets nanotech and no one else has nanotech, it will take over the world rather than trade with it?

Or is this statement something that is true but forbidden to speak?
