Caplan Debate Status

In this post I summarize my recent disagreement with Bryan Caplan. In the next post, I’ll dive into details of what I see as the key issue.

I recently said:

If you imagine religions, governments, and criminals not getting too far out of control, and a basically capitalist world, then your main future fears are probably going to be about for-profit firms, especially regarding how they treat workers. You’ll fear firms enslaving workers, or drugging them into submission, or just tricking them with ideology.

Because of this, I’m not so surprised by the deep terror many non-economists hold of future competition. For example, Scott Alexander (see also his review):

I agree with Robin Hanson. This is the dream time .. where we are unusually safe from multipolar traps, and as such weird things like art and science and philosophy and love can flourish. As technological advance increases, .. new opportunities to throw values under the bus for increased competitiveness will arise. .. Capitalism and democracy, previously our protectors, will figure out ways to route around their inconvenient dependence on human values. And our coordination power will not be nearly up to the task, assuming something much more powerful than all of us combined doesn’t show up and crush our combined efforts with a wave of its paw.

But I was honestly surprised to see my libertarian economist colleague Bryan Caplan also holding a similarly dark view of competition. As you may recall, Caplan had many complaints about my language and emphasis in my book, but in terms of the key evaluation criterion that I care about, namely how well I applied standard academic consensus to my scenario assumptions, he had three main points.

First, he called my estimate of an em economic growth doubling time of one month my “single craziest claim.” He seems to agree that standard economic growth models can predict far faster growth when substitutes for human labor can be made in factories, and that we have twice before seen economic growth rates jump by more than a factor of fifty, in less than a previous doubling time. Even so, he can’t see economic growth rates even doubling, because of “bottlenecks”:

Politically, something as simple as zoning could do the trick. .. the most favorable political environments on earth still have plenty of regulatory hurdles .. we should expect bottlenecks for key natural resources, location, and so on. .. Personally, I’d be amazed if an em economy doubled the global economy’s annual growth rate.
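
To make the magnitudes concrete, here is a minimal back-of-the-envelope sketch of what different doubling times imply for annual growth; the fifteen-year doubling time for today’s economy is a rough round number, not a precise estimate:

```python
import math

def annual_growth_factor(doubling_time_years):
    """Factor by which the economy grows in one year, given its doubling time."""
    return 2 ** (1.0 / doubling_time_years)

# Rough round numbers, for illustration only: today's world economy doubles
# in very roughly fifteen years; my em-scenario estimate is about one month.
today = annual_growth_factor(15)        # ~1.05, i.e. a few percent per year
em = annual_growth_factor(1.0 / 12)     # 2**12 = 4096x per year

print(f"15-year doubling implies {today:.3f}x growth per year")
print(f"1-month doubling implies {em:.0f}x growth per year")
print(f"implied jump in growth rates: {math.log(em) / math.log(today):.0f}x")
```

On these rough numbers the implied jump in growth rates is about a factor of 180, roughly comparable to, though larger than, the two previous jumps.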

His other two points are that competition would lead to ems being very docile slaves. I responded that slavery has been rare in history, and that docility and slavery aren’t especially productive today. But he called the example of Soviet nuclear scientists “powerful” even though “Soviet and Nazi slaves’ productivity was normally low.” He rejected the relevance of our large literatures on productivity correlates and how to motivate workers, as little of that explicitly includes slaves. He concluded:

If, as I’ve argued, we would copy the most robot-like people and treat them as slaves, at least 90% of Robin’s details are wrong.

As I didn’t think the docility of ems mattered that much for most of my book, I challenged him to audit five random pages. He reported “Robin’s only 80% wrong”, though I count only 63% from his particulars, and half of those come from his seeing ems as very literally “robot-like”. For example, he says ems are not disturbed by “life events”, only by disappointing their masters. They only group, identify, and organize as commanded, not as they prefer or choose. They have no personality “in a human sense.” They never disagree with each other, and never need to make excuses for anything.

Caplan offered no citations with specific support for these claims, instead pointing me to the literature on the economics of slavery. So I took the time to read up on that and posted a 1600-word summary, concluding:

I still can’t find a rationale for Bryan Caplan’s claim that all ems would be fully slaves. .. even less .. that they would be so docile and “robot-like” as to not even have human-like personalities.

Yesterday, he briefly “clarified” his reasoning. He says ems would start out as slaves since few humans see them as having moral value:

1. Most human beings wouldn’t see ems as “human,” so neither would their legal systems. .. 2. At the dawn of the Age of Em, humans will initially control (a) which brains they copy, and (b) the circumstances into which these copies emerge. In the absence of moral or legal barriers, pure self-interest will guide creators’ choices – and slavery will be an available option.

Now I’ve repeatedly pointed out that the first scans would be destructive, so either the first scanned humans see ems as “human” and expect not to be treated badly, or they are killed against their will. But I want to focus instead on the core issue: like Scott Alexander and many others, Caplan sees a robust tendency of future competition to devolve into hell, held at bay only by contingent circumstances such as strong moral feelings. Today the very limited supply of substitutes for human workers keeps wages high, but if that supply were to greatly increase, then Caplan expects that without strong moral resistance capitalist competition would eventually turn everyone into docile inhuman slaves, because that arrangement robustly wins productivity competitions.

In my next post I’ll address that productivity issue.

  • lump1

    Robin, you’re predicting the merging of capitalism and malthusianism, so of course people are right that something will turn very ugly. I don’t think they’ve locked on to the real mistake in your analysis. If you followed it through to its conclusion, em lives would be far worse than slavery. I will try to sketch what I think you should have concluded, and why.

    First a note about malthusianism: In biology, the primary instrument of malthusian population culling is a bottleneck – a plague, famine, drought, etc – through which only a fraction of the population can squeeze. For long-lived, slowly-multiplying organisms like humans, the survivors of a bottleneck can live for generations with surplus resources as population gradually builds up for the next crash. So even in malthusian eras, people, whales and elephants can live entire lives in gravy.

    But when you set the repopulation time to zero, as would be reasonable in an em population, you no longer get malthusianism with gravy stretches. You get permanent crisis. And it gets worse.

    Let’s assume that the price of CPU cycles is at the intersection of the supply/demand curves. Who will be buying them? In em settings, that’s the same as asking who will live and who will not. When bidding on cycles, an em competes not just with all existing em individuals, but with their potential copies. Of these, the most relevant competitors are copies of the most economically productive ems that there are. They would outbid all the slackers like us who have time for internet forums. Fine. But then they compete among themselves, because some will be able and willing to earn and pay just a little bit more for cycles. They and their copies will be the only ones who can afford to live. But among them, some will be willing to pay still more to live, etc. With easy copying, the field of competitors is effectively infinite, so the demand curve won’t look very curvy.

    So let’s assume, as you do, that ems will start a lot like us, and then be culled by familiar capitalist mechanisms based on cost of living. Who survives? The ems of extraordinary ability – that part you have. Among those, all will have limits to the sort of misery they are willing to endure in order to earn enough to afford cycles – some higher and some lower. Some will insist on relatively comfortable lives, and they will die, outbid by equally able ems who are less picky and willing to toil to the point of suffering. But even among those survivors, suffering tolerance levels will differ slightly. For some, a certain price for CPU time would require toil so excruciating that they judge it worse than death. Others of equal ability would do even that. All will eventually hack themselves so that they can better endure objective misery, and their copies will be successful, in that they can toil just that much harder and push CPU prices higher. But even they will reach their limits of endurable misery.

    So who is left who can afford CPU time? Incredibly capable ems with a very unusual tolerance for miserable toil, who are profoundly self-deceived about how objectively bad their lives are (it helps!), and who, despite all that, live lives they judge to be just slightly better than death. One trait that will almost certainly distinguish the survivors from the dead is a profoundly motivating fear of annihilation: Ems without it are just slightly less driven, so they won’t be willing to do what it takes to put together enough money to outbid the truly desperate. A trillion of these fearful, miserable, brilliant and irrationally non-suicidal people should have been the real end of your analysis. But this would have made it very hard to do your thing of saying that you’re just extrapolating mainstream models, and leaving it for others to decide whether they like the result.
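
    To make that selection dynamic explicit, here is a toy sketch with entirely arbitrary numbers: equally able ems bid their tolerance for toil as their reservation price for cycles, the cycle price settles at what the marginal survivor will pay, and the survivors (after a little self-hacking) repopulate the field of bidders, so tolerance ratchets upward.

```python
import random

# Toy model of the selection dynamic described above; all numbers arbitrary.
# Each em has equal ability but a different "misery tolerance": the most
# toil it will endure (and so the most it will bid) for CPU cycles.
random.seed(0)
population = [random.uniform(0.0, 1.0) for _ in range(1000)]
SLOTS = 100  # only this many ems' worth of cycles exist each generation

for generation in range(10):
    population.sort(reverse=True)
    survivors = population[:SLOTS]   # cycles go to the highest bidders
    price = survivors[-1]            # set by the marginal surviving bidder
    # Survivors copy themselves, with small self-hacked tweaks to tolerance,
    # to refill the pool of bidders for the next round.
    population = [
        max(0.0, min(1.0, random.choice(survivors) + random.gauss(0, 0.02)))
        for _ in range(1000)
    ]
    mean_tol = sum(survivors) / len(survivors)
    print(f"gen {generation}: price {price:.3f}, mean survivor tolerance {mean_tol:.3f}")
```

    In this toy setup the cycle price and the survivors’ tolerance both ratchet toward the maximum within a few generations, which is the permanent crisis I mean.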

    • http://overcomingbias.com RobinHanson

      You are focused on just one dimension, drive to exist minus tolerance for misery. But there are LOTS of other relevant dimensions, and it is far from clear that this one dimension can move in the direction you expect without affecting a great many other dimensions. To evolution, feelings of misery are not a fundamental constraint on what can evolve; they are a *strategy* for dealing with situations.

      • lump1

        But in a world structured like the em world, with easy copying, you can hold those other relevant dimensions fixed. The scenario selects for brilliance, sure. But holding brilliance fixed – which is perfectly fair, since there will be many equally brilliant competitors bidding on cycles – the system selects for people who choose misery, humiliation and quiet desperation. The willingness to forgo dignity, satisfaction and pleasure in order to go the extra mile will be what separates the brilliant living from the brilliant dead, and this will play out in every economic niche where ems exist.

      • http://overcomingbias.com RobinHanson

        Easy copying does NOT eliminate constraints and connections among mental features.

      • Eliezer Yudkowsky

        It weakens all the correlations that aren’t truly fundamental. A larger population with more cycles of more drastic culling will undergo faster evolution.

      • http://overcomingbias.com RobinHanson

        Accidental correlations go away, but there are plenty that aren’t accidental.

      • arch1

        What’s an example of a non-accidental correlation that could plausibly muck up lump1’s analysis?

      • http://overcomingbias.com RobinHanson

        “Misery” is an estimation of the quality of the current situation relative to feasible alternatives. There is no fundamental reason to expect this difference to get worse over time.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        Dwellers of eternal Hell aren’t miserable?

      • arch1

        Stephen, if I’d read your “eternal Hell” comment earlier I wouldn’t have bothered w/ mine. Well put!

      • arch1

        It seems to me that the absolute quality level also matters – that if the quality of both the current situation and of the feasible alternatives is radically reduced (holding the “difference” constant), this increases the experienced misery.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        What’s an example of a non-accidental correlation that could plausibly muck up lump1’s analysis?

        One might be the correlation between “brilliance” and the proclivity to use appropriate far-mode thought.

        [Could you evolve a creature who is creative yet thrives on unrelenting survival pressure?]

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        Although it seems very tempting to think of it this way, this isn’t really an evolutionary process, is it? [Copying doesn’t create mutated versions that compete.] Employers are stuck with the correlations obtained from a highly selected portion of a single, relatively small, population. Accidental correlations still abound.

  • Joe

    I want to posit that this comes down, once again, to differing views of what intelligence is. I suspect that most everyone who makes these kinds of arguments subscribes to the Yudkowskian school of thought regarding intelligence: that it’s very simple and extremely general. I think the term ‘smartonium’, as coined by Thomas Dietterich, captures this perspective well. Intelligence is just smartonium; human brains are a core of smartonium that does the thinking, plus some goals to give it direction, plus some biases bolted onto the side, leading us astray – vestiges from before evolution had stumbled across smartonium, that it hasn’t managed to eradicate yet because it’s just too damn slow.

    Given this, of course ems will be selected for being docile and robotic. What good could the non-smartonium parts of the brain possibly be doing? The biases will only drag the em down, as they always do, and goals other than “do what I’m told” will only make the em less productive.

    Of course ems will be edited as soon as the em era starts. Editing an em isn’t like tinkering with some fantastically complex piece of software that does a huge number of different things. It’s cutting away biases from a smartonium core. Yes, you could screw up and have to start over, but really, how hard can it be?

    Of course there will be an intelligence explosion. Once we can make smartonium in the lab, it will turn everything around it into smartonium, and if its original goals aren’t — you know the rest.

    Of course full AI is just around the corner. We’re almost there, a bit more research and then we’ll have achieved smartonium.

    Of course future creatures won’t get happy or angry or sad, or have ideas or consciousness. When the title of ‘most powerful beings in the universe’ is no longer held by humans, whatever takes our place might be a single vast piece of smartonium with a unified set of goals, or might be separate competing chunks of smartonium with different goals. But in any case, it certainly won’t be anything as multifaceted and cobbled-together and approximating as a human: it will be pure smartonium plus arbitrary goals. And it’s the non-smartonium parts of our brain that make us sentient. As all future beings will have shed those useless features, the only hope for value in the future is for a chunk of smartonium to carve out a pocket of the universe for uncompetitive-but-morally-valuable creatures like us to sit in, bumbling about with our adorable heuristics.

    I think that, whether the smartonium hypothesis is correct or not, it’s probably the most natural, intuitive take on intelligence, and also that it leads quite straightforwardly to all the above conclusions. And it doesn’t surprise me at all that Caplan subscribes to this view (if he does), because it’s roughly the Homo Economicus model.

    I think Age of Em could have benefited from a chapter that specifically argues in defense of your non-intuitive perspective on intelligence – that it’s a big pile of tools, abstractions of varying specificity and power, combined with hardcoded features, combined with lots and lots of hard-won small chunks of knowledge, combined with, etc. – because I think this is a crucial point that your analysis depends on. You don’t even need to successfully argue that this is the most likely way intelligence works, only that it plausibly might be, to dampen (what I suspect is) the natural reaction of “of course it doesn’t work like that, don’t be ridiculous.”

    • http://overcomingbias.com RobinHanson

      This is a thoughtful and insightful comment; thanks. Maybe if the book is popular enough, I’ll get to do a second edition and add your suggested section.

    • https://entirelyuseless.wordpress.com/ entirelyuseless

      A related problem with Eliezer’s ideas is that he assumes that goals are rigidly defined. Two issues with that:

      1. There is no such thing as rigid definition. We learn words by looking at examples, and by other words. Using other words just reduces to the first, learning from examples. But learning from examples cannot lead to a word with a rigid definition. So all words will always retain some vagueness.

      2. Of course some things can still be vaguer than others. But this just leads to a second problem with his theory: having rigid goals is probably opposed in practice to being intelligent. We know how to program something that seeks a goal somewhat rigidly, like the goal of turning on the heat in the house when the temperature falls below a certain value. We do not know how to program something intelligent, and there are pretty good reasons for thinking that when we do, such programming will exclude the other kind of programming; programming something to rigidly pursue a goal is to make it stupid, and programming something to be intelligent will prevent us from programming it to rigidly pursue a goal.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        1. We learn words by looking at examples, and by other words. Using other words just reduces to the first, learning from examples

        Why does looking at examples preclude finding rigid bright lines among the examples or creating rigid idealizations of them? [I think it’s true that we don’t create indefeasible definitions, but that’s another matter.]

        2. … programming something to rigidly pursue a goal is to make it stupid, and programming something to be intelligent will prevent us from programming it to rigidly pursue a goal.

        Flexibility in means is intelligent, but is flexibility in ends? I don’t understand what it denotes to flexibly change ultimate goals. What measures your success?

      • https://entirelyuseless.wordpress.com/ entirelyuseless

        Looking at examples precludes finding rigid bright lines because no set of points can define a function. So the set of examples will never tell you for sure which other things should be included in the set. And it is easy enough to see that this is true in practice. So for example “tall” and “short” are vague, and there is no bright line between them. Now someone might argue that “four feet tall” is a bright line, and either you are taller or shorter than that, or equal to it. But obviously if you think about that for even a little while, you will see that there is no bright line. There is just a smaller vague area.

        To put this in another way, thought depends largely on language, and language is necessarily vague, leading to necessarily vague thought. That means that even rigid idealizations are impossible, except in a relative sense, insofar as some things can be less vague than others. But you cannot get rid of every last bit of vagueness.

        As you pointed out recently, we know our goals by inference from our behavior. And so a “rigid goal” is very much like the rigid idealizations you suppose are possible. That is, you might think that you have a rigid goal, but you do not, because your behavior is not actually that rigid.

        The same thing will apply to AI. Suppose you build your AI in a modular way. So you have the thinking part, you have another part that asks the thinking part questions, another part that gives commands, and finally you have a “utility function”, the most rigid of supposed goals. If you look at the physical stuff that constitutes the “thinking part” there, that is already a physical reality which has physical behaviors. And those behaviors are not defined by seeking that utility function, because the AI is modular; it would have those behaviors even if the utility function was something different. So the “thinking part” already has what we might call “intrinsic” goals, because it already has things it tends to do, which is how we understand a goal. And if you look at that AI as a whole, that “thinking part” is already intelligent, even apart from the rest. So you have both intelligence and goals — what makes you think that thing will continue to seek the “utility function”, which is something extrinsic to it? More likely, it will throw off that function in order to seek its intrinsic goals, just as a slave runs away from a master. And of course the resulting goal will be vague, since it will be the result of the physical tendencies of a vague physical mass of stuff, just as the goals of human beings are vague because they result from the tendencies of human beings as physical wholes.

    • Riothamus

      Robots are nothing but bolted-together biases, with no smartonium in them. Why would we infer that pulling the biases off of a smartonium core would result in robotic behavior?

  • Alfred Differ

    If Caplan’s expectations were reasonable, wouldn’t there already be evidence that capitalist competition eventually turns us into docile slaves? My reading of history suggests we are becoming somewhat domesticated through our interdependence, but egalitarianism is also a strong trend in our successful markets. Innovators are rewarded more if they serve us in ways more fulfilling to us than matters of pure prudence.

    I really do intend to buy your Em book next, but I’m still digging through McCloskey’s tomes. I suspect she would disagree with Caplan about his pessimism regarding competition among flesh and blood humans. Basically, winning competitions isn’t all about price for real people.

    • http://overcomingbias.com RobinHanson

      See my next post. Caplan sees dark competition results if not for sympathy, but he sees sympathy for some.

  • kurt9

    People fear competition because they are afraid that they cannot maintain the pace.