Total Tech Wars

Eliezer Thursday:

Suppose … the first state to develop working researchers-on-a-chip, only has a one-day lead time. …  If there’s already full-scale nanotechnology around when this happens … in an hour … the ems may be able to upgrade themselves to a hundred thousand times human speed, … and in another hour, …  get the factor up to a million times human speed, and start working on intelligence enhancement. … One could, of course, voluntarily publish the improved-upload protocols to the world, and give everyone else a chance to join in.  But you’d have to trust that not a single one of your partners were holding back a trick that lets them run uploads at ten times your own maximum speed.

Carl Shulman Saturday and Monday:

I very much doubt that any U.S. or Chinese President who understood the issues would fail to nationalize a for-profit firm under those circumstances. … It’s also how a bunch of social democrats, or libertarians, or utilitarians, might run a project, knowing that a very likely alternative is the crack of a future dawn and burning the cosmic commons, with a lot of inequality in access to the future, and perhaps worse. Any state with a lead on bot development that can ensure the bot population is made up of nationalists or ideologues (who could monitor each other) could disarm the world’s dictatorships, solve collective action problems … [For] biological humans [to] retain their wealth as capital-holders in his scenario, ems must be obedient and controllable enough … But if such control is feasible, then a controlled em population being used to aggressively create a global singleton is also feasible.

Every new technology brings social disruption. While new techs (broadly conceived) tend to increase the total pie, some folks gain more than others, and some even lose overall.  The tech’s inventors may gain intellectual property, it may fit better with some forms of capital than others, and those who first foresee its implications may profit from compatible investments.  So any new tech can be framed as a conflict, between opponents in a race or war.

Every conflict can be framed as a total war. If you believe the other side is totally committed to total victory, that surrender is unacceptable, and that all interactions are zero-sum, you may conclude your side must never cooperate with them, nor tolerate much internal dissent or luxury.  All resources must be devoted to growing more resources and to fighting them in every possible way.

A total war is a self-fulfilling prophecy; a total war exists exactly when any substantial group believes it exists.  And total wars need not be “hot.”  Sometimes your best war strategy is to grow internally, or wait for other forces to wear opponents down, and only at the end convert your resources into military power for a final blow.

These two views can be combined in total tech wars.  The pursuit of some particular tech can be framed as a crucial battle in our war with them; we must not share any of this tech with them, nor tolerate much internal conflict about how to proceed. We must race to get the tech first and retain dominance.

Tech transitions produce variance in who wins more.  If you are ahead in a conflict, added variance reduces your chance of winning, but if you are behind, variance increases your chances.  So the prospect of a tech transition gives hope to underdogs, and fear to overdogs.  The bigger the tech, the bigger the hopes and fears.
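
A minimal sketch of the variance point, with made-up numbers: model each side's outcome as a noisy draw around its current strength, and compare the underdog's chance of coming out ahead under a low-variance versus a high-variance tech.

    import random

    def underdog_win_prob(underdog=40, overdog=60, spread=5, trials=100_000):
        """Estimate how often the weaker side ends up ahead when both sides'
        outcomes are noisy draws around their current strength."""
        wins = 0
        for _ in range(trials):
            if random.gauss(underdog, spread) > random.gauss(overdog, spread):
                wins += 1
        return wins / trials

    print(underdog_win_prob(spread=5))   # low-variance tech: underdog almost never wins (~0.2%)
    print(underdog_win_prob(spread=50))  # high-variance tech: underdog wins far more often (~39%)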

In 1994 I said that while our future vision usually fades into a vast fog of possibilities,  brain emulation “excites me because it seems an exception to this general rule — more like a crack of dawn than a fog, like a sharp transition with sharp implications regardless of the night that went before.”  In fact, brain emulation is the largest tech disruption I can foresee (as more likely than not to occur).  So yes, one might frame  brain emulation as a total tech war, bringing hope to some and fear to others.

And yes, the size of that disruption is uncertain.  For example, an em transition could go relatively smoothly if scanning and cell modeling techs were good enough well before computers were cheap enough.  In this case em workers would gradually displace human workers as computer costs fell.  If, however, one group suddenly had the last key modeling breakthrough when em computer costs were far below human wages, that group could gain enormous wealth, to use as they saw fit.
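
A toy illustration of that contrast, with illustrative numbers of my own: compare the cost of running an em worker to the human wage at the moment the last modeling breakthrough arrives.

    def breakthrough_outcome(human_wage, em_cost):
        """Toy comparison: if ems are still costly when the last modeling breakthrough
        lands, displacement is gradual as computer costs keep falling; if ems are
        already far cheaper than human labor, the breakthrough group reaps a windfall."""
        ratio = human_wage / em_cost
        if ratio <= 1:
            return f"wage/cost = {ratio:.2f}: ems not yet competitive, gradual displacement"
        return f"wage/cost = {ratio:.0f}: breakthrough group captures an enormous surplus"

    print(breakthrough_outcome(human_wage=50_000, em_cost=200_000))  # smooth transition
    print(breakthrough_outcome(human_wage=50_000, em_cost=50))       # sudden, concentrated gains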

Yes, if such a winning group saw itself in a total war, it might refuse to cooperate with others, and devote itself to translating its breakthrough into an overwhelming military advantage.  And yes, if you had enough reason to think powerful others saw this as a total tech war, you might be forced to treat it that way yourself.

Tech transitions that create whole new populations of beings can also be framed as total wars between the new beings and everyone else.  If you framed a new-being tech this way, you might want to prevent or delay its arrival, or try to make the new beings “friendly” slaves with no inclination or ability to war.

But note: this em tech has no intrinsic connection to a total war other than that it is a big transition whereby some could win big!  Unless you claim that all big techs produce total wars, you need to say why this one is different.

Yes, you can frame big techs as total tech wars, but surely it is far better that tech transitions not be framed as total wars. The vast majority of conflicts in our society take place within systems of peace and property, where local winners only rarely hurt others much by spending their gains.  It would be far better if new em tech firms sought profits for their shareholders, and allowed themselves to become interdependent because they expected other firms to act similarly.

Yes, we must be open to evidence that other powerful groups will treat new techs as total wars.  But we must avoid creating a total war by sloppy discussion of it as a possibility.  We should not take others’ discussions of this possibility as strong evidence that they will treat a tech as total war, nor should we discuss a tech in ways that others could reasonably take as strong evidence we will treat it as total war.  Please, “give peace a chance.”

Finally, note our many biases to overtreat techs as wars.  There is a vast graveyard of wasteful government projects created on the rationale that a certain region must win a certain tech race/war.  Not only do governments do a lousy job of guessing which races they could win, they also overestimate both first-mover advantages and the disadvantages of others dominating a tech.  Furthermore, as I posted Wednesday:

We seem primed to confidently see history as an inevitable march toward a theory-predicted global conflict with an alien united them determined to oppose our core symbolic values, making infeasible overly-risky overconfident plans to oppose them.

  • Carl Shulman

    Robin is saying that the chance of a friendliness-requiring event (from a total preference utilitarian point of view such as his own, not from the point of view of relevant decisionmakers) is below 1%, that we should seriously fear self-fulfilling prophecies of such events, and that we should frame things to lower estimates of the likelihood of friendliness-requiring events among our audiences. There are obvious difficulties in taking a speaker’s presentation of that conjunction at face value.

    The cost-benefit of talking honestly about potential vulnerabilities depends on the ability of different actors to eventually do the analysis themselves. If price-fixing was legal, it would be silly to avoid explaining the mechanics of cartels (when arguing for a legal prohibition of outright price-fixing, perhaps) for fear of giving ideas to large corporations. Institutions with abundant resources and access to a chance to vastly improve their position through fairly obvious schemes like price-fixing can generally be expected to figure it out themselves, whereas consumers, voters, and regulators will have much weaker incentives to identify the possibility.

    On the other hand, FDR was convinced of the high importance of nuclear weapons by a concerted effort by forward-thinking and high-status academics, and other countries either were less focused on their programs or were strongly influenced by espionage reports from the Manhattan Project. The National Nanotechnology Initiative also comes to mind.

  • http://hanson.gmu.edu Robin Hanson

    Carl, I can both think a chance is low and warn against increasing that chance. Sure people can do an analysis later, but their cost-benefit estimates should depend on their estimates of others’ behavior.

  • http://jamesdmiller.blogspot.com/ James D. Miller

    Full scale nanotechnology might give an aggressor nation the capacity to instigate a decapitating first strike attack without itself suffering significant losses. The potential of such technology might lead to a total tech war.

  • http://computeprofit.com Vic

    This technology should be governed by international law, perhaps international commerce and technology law. If a conflict arises, it is possible that not only countries will battle in this tech war, but also companies within a given country.

  • Carl Shulman

    “Carl, I can both think a chance is low and warn against increasing that chance”

    Of course you can, but you just presented a rationale that seems to justify (given your stated values) erring on the side of presenting inaccurately low estimates. This is so even in light of the later statement:

    “Yes, we must be open to evidence that other powerful groups will treat new techs as total wars.”

    Wouldn’t the claims in this post, combined with your preference utilitarianism, justify listening to new evidence but provisionally claiming that it was weak when presenting probability estimates? I don’t think this is very likely in your case, but it’s a hazard that arises around the discussion of many types of ‘dangerous knowledge.’

  • http://hanson.gmu.edu Robin Hanson

    James, some techs, like guns, can be more useful in war than in peace, and give a first-strike advantage, and so reasonably lead to more fears of war than would a generic tech. But brain emulations do not seem to be such a tech.

    Carl, yes sometimes lies can give advantages, and this might be such a time, but I am not lying.

  • Carl Shulman

    I’ll accept that.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    I generally refer to this scenario as “winner take all” and had planned a future post with that title.

    I’d never have dreamed of calling it a “total tech war” because that sounds much too combative, a phrase that might spark violence even in the near term. It also doesn’t sound accurate, because a winner-take-all scenario doesn’t imply destructive combat or any sort of military conflict.

    I moreover defy you to look over my writings, and find any case where I ever used a phrase as inflammatory as “total tech war”.

    I think that in this conversation, and in the debate as you have just now framed it, “tu quoque!” is actually justified here.

    Anyway – as best I can tell, the natural landscape of these technologies, which introduce disruptions much larger than farming or the Internet, is winner-take-all without special effort. It’s not a question of ending up in that scenario by making special errors. We’re just there. Getting out of it would imply special difficulty, not getting into it, and I’m not sure that’s possible – such would be the stance I would try to support.

    Also:

    If you try to look at it from my perspective, then you can see that I’ve gone to tremendous lengths to defuse both the reality and the appearance of conflict between altruistic humans over which AI should be built. “Coherent Extrapolated Volition” is extremely meta; if all competent and altruistic Friendly AI projects think this meta, they are far more likely to find themselves able to cooperate than if one project says “Libertarianism!” and another says “Social democracy!”

    On the other hand, the AGI projects run by the meddling dabblers do just say “Libertarianism!” or “Social democracy!” or whatever strikes their founder’s fancy. And so far as I can tell, as a matter of simple fact, an AI project run at that level of competence will destroy the world. (It wouldn’t be a good idea even if it worked as intended, but that’s a separate issue.)

    As a matter of simple decision theory, it seems to me that an unFriendly AI which has just acquired a decisive first-mover advantage is faced with the following payoff matrix:

    Share Tech, Trade: 10 utilons
    Take Over Universe: 1000 utilons

    As a matter of simple decision theory, I expect an unFriendly AI to take the second option.
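
    (As a trivial sketch of that comparison, with only the two payoff numbers taken from above and the rest illustrative:)

    # Stipulated payoffs from the comment above, in utilons.
    options = {"share tech, trade": 10, "take over universe": 1000}

    # A plain expected-utility maximizer simply picks the argmax.
    print(max(options, key=options.get))  # -> "take over universe"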

    Do you agree that if an unFriendly AI gets nanotech and no one else has nanotech, it will take over the world rather than trade with it?

    Or is this statement something that is true but forbidden to speak?

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    We could be in any of the three following domains:

    1) The tech landscape is naturally smooth enough that, even if participants don’t share technology, there is no winner-take-all.

    2) The tech landscape is somewhat steep. If participants don’t share technology, one participant will pull ahead and dominate all others via compound interest. If they share technology, the foremost participant will only control a small fraction of the progress and will not be able to dominate all other participants.

    3) The tech landscape contains upward cliffs, and/or progress is naturally hard to share. Even if participants make efforts to trade progress up to time T, one participant will, after making an additional discovery at time T+1, be faced with at least the option of taking over the world. Or it is plausible for a single participant to withdraw from the trade compact and either (a) accumulate private advantages while monitoring open progress or (b) do its own research, and still take over the world.

    (2) is the only regime where you can have self-fulfilling prophecies. I think nanotech is probably in (2) but contend that AI lies naturally in (3).
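
    (A rough simulation of regime (2), as a sketch with made-up growth rates: under compound growth without sharing, a small initial lead keeps widening, while sharing keeps everyone at the frontier.)

    def lead_ratio(leader_rate=0.05, follower_rate=0.04, share=False, steps=100):
        """Toy model of regime (2): capability compounds each step; with sharing,
        published progress pulls the follower back up to the leader."""
        leader, follower = 1.1, 1.0   # small initial lead
        for _ in range(steps):
            leader *= 1 + leader_rate
            follower *= 1 + follower_rate
            if share:
                follower = max(follower, leader)
        return leader / follower

    print(lead_ratio(share=False))  # about 2.9 after 100 steps, and still growing
    print(lead_ratio(share=True))   # stays at 1.0: no one pulls ahead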

  • Ian C.

    Share Tech, Trade: 10 utilons
    Take Over Universe: 1000 utilons

    But you would ultimately get less through taking over the world than trading with it. Temporarily, you would get more, as you simply grab everything in sight, but long term you would get only the output of a bunch of slaves, vs. the higher output of free men.

    And surely an AI, who would potentially live forever, would think long term, and not create such a table. But then I guess a dabbler-designed one might not.

  • GenericThinker

    “And so far as I can tell, as a matter of simple fact, an AI project run at that level of competence will destroy the world.”

    How is that even logical? Unless you plan to build an AI and immediately hook it up to all your nuclear weapons, or give it free run of any and all factories for making weapons or nanotech (whatever that is, “nano” being applied to all manner of things and now being essentially a buzzword), there is no reason to assume an AI, friendly or not, would destroy the world. That entire scenario is fictional; one can build an AI in an isolated computer system with little to no danger. Sure, it’s true that one can destroy the world through AI, but an unfriendly AI does not necessitate the conclusion that the world will be destroyed. The issue here is that there seems to be a lack of understanding of what is real in the field of AI and what is purely fictional.

    If we are totally honest, we are so far away from AGI at the moment that debating friendliness at this point is like debating nuclear war before the discovery of hard radiation. As long as one keeps a human in the loop with the AI, and properly designs the hardware and what the hardware is connected to, one can make a safe trip into “mind design space”.

    A final note: there is this continued talk of 100s or 1000s of times human speed; what does that even mean? What would thinking at a thousand times our current speed mean? How would that even work? Our thoughts, senses, etc. are tied closely to our sense of time. If you speed that up, why would one expect that to be better? Seems to me that this would merely be like listening to music on fast-forward.

  • Carl Shulman

    I have made a post at my blog on the differences between conflicts among humans and conflicts among optimizers with utility linear in resources.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Thanks, Carl.

  • Will Pearson

    To increase your chances of survival you want to have as little compute power as you can get away with, so that you don’t waste resources on things that don’t intrinsically get you more resources. Look at plants: fantastically successful, but with no use for a brain, and they would be worse off with one.

    Greater computational ability doesn’t automatically lead to winning. Only if there is something worth discovering or predicting with the intelligence, another energy source or a cheaper way of doing something you need to do to survive, does it make sense to invest in compute power.

  • http://www.hopeanon.typepad.com Hopefully Anonymous

    Carl, it’s fantastic that you’re finally blogging. You should link to it in your posting name, like the rest of us.

  • Cameron Taylor

    There may be, as Robin notes, a bias towards seeing new tech through the lens of total war. There is also a bias toward optimism, based on millennia of shot messengers, and a bias to see the future through the lens of the present.

    I see in this post a warning against mentioning potentially disastrous consequences of AI. This scares me. When even rationalists ought to feel ashamed of considering such possibilities, we are in trouble. Life is a powerful, dangerous thing. When we are considering creating new, potentially more powerful forms of life, we need to consider how to do so without rendering ourselves extinct. The challenge is not to prove that we will not end up with a stable, economically competitive, non-combative equilibrium. The challenge is to look for every possibility that could lead to total war and make damn sure we’ve considered them before we touch the on switch.

    Carl’s ‘Reflective Disequilibria’ post is a good one. We cannot assume conflict with or between AIs will follow the trend of human wars. The competitive superiority of ‘free men’ makes for some damn good movies and has had a strong impact on human history. However, that is an idiosyncrasy that a third-generation em will probably not possess.

  • Cameron Taylor

    Will underestimates the competitive potential of the Triffids.

  • Tim Tyler

    A winner-take-all scenario doesn’t imply destructive combat or any sort of military conflict.

    Just so. For example, in biology, genes can come to dominate completely through differential reproductive success – not just by killing all their competitors. Warfare is different from winning.

  • http://hanson.gmu.edu Robin Hanson

    Eliezer, if everything is at stake then “winner take all” is “total war”; it doesn’t really matter if they shoot you or just starve you to death. The whole point of this post is to note that anything can be seen as “winner-take-all” just by expecting others to see it that way. So if you want to say that a particular tech is more winner take all than usual, you need an argument based on more than just this effect. And if you want to argue that it is far more so than any other tech humans have ever seen, you need a damn good additional argument. It is possible that you could make such an argument work based on the “tech landscape” considerations you mention, but I haven’t seen that yet. So consider this post to be yet another reminder that I await hearing your core argument; until then I set the stage with posts like this.

    To answer your direct questions, I am not suggesting forbidding speaking of anything, and if “unfriendly AI” is defined as an AI who sees itself in a total war, then sure it would take a total war strategy of fighting not trading. But you haven’t actually defined “unfriendly” yet.

    Carl, I replied at your blog post, but will repeat my point here this time. You say total war doesn’t happen now because leaders are “comfortable” and humans are risk-averse with complex preferences, but there would be a total war over the solar system later because evolved machines’ preferences would be linear in raw materials. But evolutionary arguments don’t say we evolve to care only about raw materials, and they only suggest risk-neutrality with respect to fluctuations that are largely independent across copies that share “genes.” With respect to correlated fluctuations, evolutionary arguments suggest risk-averse log utility.
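
    (A quick numerical illustration of that last point, with made-up gamble parameters: when multiplicative shocks hit all copies together, long-run growth is set by the expected log of the wealth multiplier, which is exactly what a risk-averse log-utility agent maximizes.)

    import math, random

    def avg_log_growth(stake, win_mult=3.0, lose_mult=0.1, p_win=0.4, steps=100_000):
        """Correlated multiplicative shocks: all copies win or lose together, so what
        matters in the long run is the average log of the wealth multiplier."""
        total = 0.0
        for _ in range(steps):
            mult = win_mult if random.random() < p_win else lose_mult
            total += math.log(1 - stake + stake * mult)
        return total / steps

    # The gamble's expected multiplier is 0.4*3.0 + 0.6*0.1 = 1.26 > 1, so an agent with
    # utility linear in resources stakes everything, yet its lineage still shrinks long run.
    print(avg_log_growth(stake=1.0))   # about -0.94 per step: the all-in lineage dwindles
    print(avg_log_growth(stake=0.15))  # about +0.02 per step: a cautious, log-utility-like stake grows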

  • Tyrrell McAllister

    @GenericThinker

    Unless you plan to build an AI and immediately hook it up to all your nuclear weapons, or give it free run of any and all factories for making weapons or nanotech (whatever that is, “nano” being applied to all manner of things and now being essentially a buzzword), there is no reason to assume an AI, friendly or not, would destroy the world. That entire scenario is fictional; one can build an AI in an isolated computer system with little to no danger.
    . . .
    As long as one keeps a human in the loop with the AI, and properly designs the hardware and what the hardware is connected to, one can make a safe trip into “mind design space”.

    Eliezer has made a very persuasive case that a “human in the loop” would not be adequate to contain a super-intelligent AI. See also his post That Alien Message. That post also addresses the point you raise when you write, “A final note: there is this continued talk of 100s or 1000s of times human speed; what does that even mean? What would thinking at a thousand times our current speed mean? How would that even work? Our thoughts, senses, etc. are tied closely to our sense of time. If you speed that up, why would one expect that to be better? Seems to me that this would merely be like listening to music on fast-forward.”

    Neither of these addresses the question of whether an artificial super-intelligence is likely to be built in the near future. But they might make you reconsider your expectation that it could be easily controlled if it was built.

  • James Andrix

    Ian:
    What benefits could slaves or free men provide an AI (or a group of ems) that it could not do itself 100 times better with nanobots and new processor cores? Foxes do not enslave rabbits.
    Even if it were just deciding to enslave us or let us be ‘free’, it would know that it could always free us later. (us or future generations)

    General Thinker:
    A simulated human can be put in a simulated environment that is sped up right along with them. They do a year’s worth of cognitive work, and play, and sleep, in hours. They experience the full year.
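
    (The arithmetic, for concreteness, using the thousand-fold figure mentioned earlier in the thread:)

    speedup = 1_000                       # subjective time per unit of wall-clock time
    subjective_year_hours = 365.25 * 24   # hours of experienced time in one subjective year
    print(subjective_year_hours / speedup)  # about 8.8 wall-clock hours per subjective year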

  • Tim Tyler

    if you want to say that a particular tech is more winner take all than usual, you need an argument based on more than just this effect. And if you want to argue that it is far more so than any other tech humans have ever seen, you need a damn good additional argument.

    IT is the bleeding edge of technology – and is more effective than most tech at creating inequalities – e.g. look at the list of top billionaires.

    Machine intelligence is at the bleeding edge of IT. It is IT’s “killer application”. Whether its inventors will exploit its potential to provide wealth will be a matter of historical contingency – but the potential certainly looks as though it will be there. In particular, it looks as though it is likely to be mostly a server-side technology – and those are the easiest for the owners to hang on to – by preventing others from reverse-engineering the technology.

  • Ely Spears

    I think that in the discussion above there is a lot of conflation between causes of what is termed a “total tech war.” You can find yourself in a total tech war merely by believing that the other agents see it as such. Or you can independently analyze the situation and determine that the best way to maximize your own payoff is to treat it as a total tech war regardless of what the other agents think about it. If the space of advantages has upward cliffs, as Eliezer suggests, then it is not unreasonable to believe an agent with a time-sensitive but utterly dominating advantage will rationally decide that the most payoff comes from acting in accordance with a total tech war plan of action. This is especially true if part of the cliff-advantage is the ability to analyze a situation more deeply and rapidly than competitors. I don’t see any reason why extra, special arguments are needed to justify this as a realistic scenario within AI FOOM.
