Roberts On Robots

Russ Roberts and I talked for over 90 minutes on the possibility of a future robot-induced “singularity”, i.e., a sudden drastic social change, including a much faster growth rate. As this was his longest podcast in over two years, Russ seems to have been quite engaged by the subject.

Russ said “This may all sound crazy but Robin makes it actually sound plausible.” His two main points of skepticism were:

  1. I say our brains are big piles of brain cells that send signals to each other. We know what parts the brain is made of, and they are the same familiar parts that everything else around us is made of. We know well how these parts interact locally, though figuring out what this implies on large scales is usually beyond our calculation abilities. So a good enough model of the parts and how they are connected must reproduce the same overall input-output behavior. Russ says “there is a reductionist element to this which says–and this is controversial–all there is to our brain is its physicality. Nothing else there. That’s not universally accepted, correct? … Being a religious person I’m capable of imagining something that is not observable.”
  2. I say prices usually fall when a very elastic supply curve rapidly gets cheaper. Russ would probably agree for something like computer memory, but is reluctant to agree for wages – he doesn’t think cheap plentiful immigrants lower wages. I say that if trillions of immigrants willing to work for a dollar an hour were waiting just offshore, letting in as many as wanted to come would lower wages to that level. So I say cheap robots getting cheaper fast should rapidly lower wages for tasks they do. Russ objects “You can’t just say your wage will be driven down, because if there are complementary types of labor they’ll increase the wage rate of some people. … There’s all these complicated secondary effects.” I say all things considered, the likely effect is falling wages.
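To make the second point concrete, here is a toy supply-and-demand sketch (the demand curve and every number below are invented for illustration, not estimates): if robot labor is available in effectively unlimited quantity at some wage, the market wage for the tasks robots can do cannot stay above that wage.

```python
# Toy wage model with a perfectly elastic robot labor supply.
# The linear demand curve and all numbers are illustrative assumptions.

def demand_wage(quantity):
    """Inverse labor demand: the wage paid to the marginal worker
    when `quantity` (billions of workers) is employed."""
    return max(0.0, 20.0 - 2.0 * quantity)

def equilibrium_wage(human_supply, robot_wage):
    """With unlimited robots available at robot_wage, the market wage
    is capped at robot_wage; otherwise humans alone set the wage."""
    return min(demand_wage(human_supply), robot_wage)

print(equilibrium_wage(human_supply=3.0, robot_wage=1.0))   # cheap robots cap the wage: 1.0
print(equilibrium_wage(human_supply=3.0, robot_wage=50.0))  # costly robots don't bind: 14.0
```

Letting the robot wage fall (cheaper robots) drags the equilibrium wage down with it, which is the falling-wages claim in miniature.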

The first point reminds me of my disagreements with Tyler:

The three items on which Tyler most clearly identifies a disagreement [with me] are all in hard science and technology … Tyler doesn’t know that much about hard science and technology. … And yet Tyler feels confident enough in his perception of expert consensus on such topics to base his disagreements with me on them, even though I’ve spent years in such areas.

The second point seems easier to settle, as it is just an application of standard econ theory. Any other economists care to weigh in?

Added 5Jan: Karl Smith, Nick Rowe, and Steven Hsu weigh in.

  • Russ says “there is a reductionist element to this which says–and this is controversial–all there is to our brain is its physicality.”

    More precisely, all there is to the physical functioning of our brain is its physicality. And who could doubt that? (That’s all you need for your robot singularity, right? It doesn’t matter whether the bots are genuinely conscious, so long as they have the same physical effects.)

    • Finch

      Even accepting that there’s nothing mystical going on, there may very well be physics we don’t understand. Roger Penrose has long argued this, although it’s hardly mainstream. That physics may introduce constraints on the speed or size of brains that are not obvious.

      I don’t buy that argument, but I can’t prove it wrong.

      • michael vassar

        You don’t need to prove it wrong to say that it’s not motivated by any significant evidence and that there are strong reasons for expecting people to be biased towards making such claims regardless of their truth.

      • JenniferRM

        OK… I’ll take the bait 🙂

        The problem is that in his book, Penrose was just *wrong*.

        Either you can be charitable and imagine that he was confused by something difficult to understand outside his area of specialty that confuses most people (i.e., “consciousness”), or you can be uncharitable and imagine that he was willing to pander to people without the patience to understand the meat of complex arguments, people who really want to believe that humans are magically sparkly beings of total potential and complete unpredictable freedom, with a healthy dose of quantum-esque powers of believing-makes-it-so.

        If you read his book, he gives a fantastic pop-science explanation of all kinds of subjects around computing, coding, quantum mechanics, and so on, up to a crowning moment of awesome when he gives an actual universal Turing machine, bit for bit, of his own design, as far as I remember.

        After hundreds of pages of this he gives about two pages of hand-waving argument nominally related to Goedel’s Incompleteness Theorem that *completely* drops the ball and is just gibberish when it comes to proving that human consciousness is uncomputable. He argues that since mathematicians can all agree about Goedel’s Incompleteness Theorem, they must be doing something more than merely mechanically formal, and thus their consciousness must be something outside the powers of a Turing machine. The pages and pages of quantum backstory are ignored; I think it’s just there as an “argument by putting impressively difficult material next to your actual claims”.

        In the meantime, our internal mental experiences, our ability to speak and negotiate with each other in good faith, and our moral worth have almost nothing to do with our ability to recognize that a Goedel number’s content is not provable within a given reasoning system, and they certainly do not have the connection Penrose suggests they have.

        “Roger Penrose cannot honestly, fully, and consistently believe this statement.”

        Roger Penrose’s inability to honestly, fully, and consistently believe that statement makes it true and marks him as incapable of authentically recognizing something that is, in fact, true. But this state of affairs doesn’t mean he isn’t conscious and it doesn’t even prove that he is computable or not, because Goedel’s Incompleteness Theorem and Turing machines don’t actually have much to do with each other. It just means that his mental contents, considered as a formal proof system, are necessarily either incomplete or inconsistent — he is as subject to Goedel’s theorem as anything.

        So either Penrose’s argument was just a mass of confusion and the champion of uncomputably quantum consciousness was incapable of constructing a coherent argument against conscious reductionism, or he set up an argument with a gigantic glaring error that shows that he is, in fact, computable, because he has a Goedel number. “Like a robot”. Like all of us. Like God, if God exists and does not “magically precede reason and stuff“.
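        (To see how mechanically cheap this kind of self-reference is, here is the classic two-line Python quine, a program whose output is exactly its own source; the construction is a toy cousin of Goedel's diagonal trick, included here purely as an illustration.)

```python
# A quine: the two code lines below print exactly their own source text.
# Illustration only: self-reference of the Goedel/diagonal kind is a
# purely mechanical trick that any computable process can pull off.
s = 's = %r\nprint(s %% s)'
print(s % s)
```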

        There’s always the possibility that consciousness might be demonstrated to be uncomputable in the future by some *other* argument that was actually *good*. But until someone assembles a stronger zombie from the remnants of the corpse of Penrose’s argument, it would be silly to bet money on his conclusion.

        And if you deny that a robotic singularity could happen based on Penrose’s reasoning (and thereby do nothing to make sure it isn’t inimical to your family and all your FB friends) you’ll have bet substantially more than a bit of cash on the subject… and won’t you feel silly if you and your grandchildren are ground up into robotic fuel paste in 30 years because you were paralyzed by confusion about consciousness right now?

      • mjgeddes

        Penrose is an Oxford man, you know, JenniferRM; he could be reading this blog. Do you really think he (IQ 180, one of the world’s greatest mathematical physicists) would make an argument based on such a stupid misunderstanding as the one you mention?

        “Roger Penrose cannot honestly, fully, and consistently believe this statement.”

        He would just roll his eyes at this. He deals with this simple misunderstanding in the later book ‘Shadows of the Mind’. His puzzle is the more general one:

        “The mathematical community (taken as a single entity) cannot…..” not “Roger Penrose cannot…”

        In fact, there’s a genuine puzzle; he just got the solution wrong. He’s actually right that no deductive or inductive reasoning system is powerful enough to perform Godelian reflection. So Penrose should be applauded for pointing this puzzle out.

        The solution is not quantum gravity, though (I agree his theory is nonsense); as I realized later on, the solution is categorization/analogical inference. Categorization is more powerful than Bayes. Only categorization enables full reflection, while Bayesian inference only allows for a limited level of categorization.

        It’s all so clear. ‘Probability’ should be replaced with the more general notion of ‘similarity’. ‘Priors’ are simply reference classes based on ‘similarities’ (far mode), which are then converted into the lower level notion of ‘probabilities’ via Bayes (near mode).

      • Do you really think [Penrose] would make an argument based on such a stupid misunderstanding as the one you mention?

        That book *was* heavily based on serious misunderstandings.

      • Finch

        If it wasn’t obvious, I’m not arguing for Penrose’s specific approach which I agree is almost certainly wrong.

        The general idea that the physics of consciousness is weird and not yet understood, though, is worth taking seriously.

      • mjgeddes

        Let’s ignore Penrose’s proposed physics solutions to consciousness (quantum gravity, non-computability), for which there’s no evidence. I think we can rightly dismiss these theories. Instead let’s focus on the part I think he got right, the Godel puzzle:

        He’s saying: take the mathematical community as a whole (which includes every single intelligent entity working on mathematics). Treat all the individual algorithms of the brains of those searching for mathematical truth as a single combined algorithm. Now take the Godel number of that single gigantic algorithm. That’s a mechanical procedure. But Penrose is pointing out that there’s no non-sentient (knowably mechanical) procedure that can possibly understand why the Godel statement of the algorithm is true. I think he’s absolutely right about this.

        The clear conclusion is that Bayesian inference (non-sentient probability shuffling) is incapable of assessing the truth or falsity of mathematical axioms, and therefore cannot fully capture the intuitive components of general intelligence. The only way to escape this conclusion is to claim that mathematical intuition is random (the axioms are just selected at random), but that is nonsense.

        To sum up: sentient reasoning systems must be more powerful than any Bayesian reasoning system.

      • Essentially, Penrose claimed that mathematicians can do what no machine can do – by using their mysterious and infallible mathematical intuition. Of course, no evidence that humans could reliably perform this feat either was provided. His whole argument was without merit: “Penrose seems to make a fairly elementary error right at the beginning” – Dennett.

  • rapscallion

    Isn’t the great filter (or filters) a good reason to think that the singularity will never happen?

    • Not if life is rare – and we are locally first.

  • James D. Miller

    Very little time might pass between when we have trillions of robots and a technological singularity with machine intelligences 10^40 or so times faster than the human brain. Conditional on the singularity not killing us, we can’t have much confidence in any prediction of post-singularity economic conditions.

    Nanotechnology and virtual reality could increase capital equipment at a fast enough rate so that the worker/capital ratio doesn’t significantly rise.

    We could experience a Kurzweilian merger in which human abilities increase at a fast enough rate so that wages don’t fall even if we have lots of robots.

    A slight tax on ems used to subsidize human wages would cause after-subsidy human wages to rise even with a trillion robots. This would basically be a continuation of the earned income tax credit.

    Governments currently keep wages high in many professions by restricting entry. A post-trillion-robots expansion of this policy would cause human wages to rise in many professions even if trillions of ems entered the general workforce.

    We might develop technology to create ultra-intelligent but also ultra-expensive machines before we develop cheap robots.

    Taking an outsider view of the situation indicates that better technology usually boosts wages.

    Many experts in artificial intelligence don’t believe we will have artificial general intelligence for centuries if ever.

    • michael vassar

      Evaluating experts is difficult without a close look, but I can honestly say that I have never seen an expert who had obviously thought carefully about the subject conclude that we won’t have AGI for centuries. The closest is Hofstadter, who says centuries but explicitly says that he’s basing his conclusion on wishful thinking; earlier in the argument, before resorting to wishful thinking, he seems to think a century is plausible.

  • JAMayes

    “So I say cheap robots getting cheaper fast should rapidly lower wages for tasks they do.”

    They might without government intervention. I doubt governments would refrain from intervening.


  • Michael Rosefield

    Russ says “there is a reductionist element to this which says–and this is controversial–all there is to our brain is its physicality. Nothing else there. That’s not universally accepted, correct? … Being a religious person I’m capable of imagining something that is not observable.”

    I would contend that, as a religious person, they are capable of imagining that they are capable of imagining something that is not observable. That is, they can’t imagine it at all, realistically speaking; the actual concept eludes them, but the words they use to describe it give them the impression, the window-dressing of an actual concept. It’s basic mental reification.

    • Well said. This is a good description of “p-zombies”, too. You can’t imagine them, but you can imagine you can.

    • This is exactly what I thought.

      I don’t know if I’ll be able to listen to the whole podcast, but, Robin, did you ask him to describe what he was imagining? And why you need to be religious to be able to imagine it?

  • Speaking for what it is worth as a guy from the internet with a B.A. in Economics, I agree with you on point number 2.

    Russ Roberts’ objection is well taken. But again, it is like you say: on the whole, businesses will be paying less for the labor that can be done by ever-cheaper and ever-more-productive machines.

    I welcome any such development. I think we are already at the point where economics and capitalism have made tens, hundreds, or thousands of millions of people “useless”, in the sense that there are so many more people living than there are paying jobs out there for them.

    I welcome further exacerbation of this imbalance, because then we will have greater incentive and pressure to redefine our economic paradigms (to something more equitable, free, and enlightened than we have now).

    As more and more people (educated or uneducated) find themselves in the absurd position of being willing and able to work in a world that is already saturated with willing and able workers, there will be more incentive and pressure to come up with an arrangement that doesn’t impoverish the un-needed souls just because they are un-needed.

    Perhaps Russ Roberts assumes that there is no workable world order that doesn’t include people working jobs that suit the needs of businesses, getting paid wages and salaries based on the supply and demand for workers of these jobs. Perhaps it makes him uneasy to think about a future where there are fewer paying jobs available for the human labor pool to compete for. Perhaps that’s why he doesn’t want to admit that robot laborers will further drive down wages. I speculate this with all the authority of a guy from the internet.
    “God Bless You, Mr. Rosewater / Pearls Before Swine”. Good book.

    • Other economic systems are likely to be less productive (in terms of total or even per capita output) and so will not win the evolutionary battle between different forms of social organisation.

  • The population discussion reminded me of a question I have asked many times and never gotten a straight answer to: are above-subsistence incomes since the industrial revolution the result of a disequilibrium, with technology growth outpacing population growth, or the result of new technology meaning there is no longer a trade-off between population density and per capita incomes (at least at prevailing population levels)?

    Even if the latter, it’s possible that at a much higher population level we would face such a trade-off.

    • James Oswald

      Technology is outpacing population growth, but it may not be disequilibrium. Birth rates are also decreasing dramatically. As long as population growth is lower than technological growth, humanity will remain above subsistence. Additionally, technology is not exogenous – the more people there are, the more scientists and engineers there are and the more technology will be discovered.

  • Once someone gives the sort of claim that Roberts is giving in what you label claim 1, I’m not sure rational dialogue about the Singularity is helpful. You disagree on much more fundamental issues that would need to be resolved well before any discussion of these issues can occur. But, maybe pull a page from Eliezer’s book and ask him if that means he’ll give up his religion if we do get computers that can effectively duplicate human intelligence?

  • Robert S


    Your point about dramatically falling wages is echoed pretty strongly by Martin Ford in his book The Lights in the Tunnel and on his blog.

    However, I seem to recall that you did a review of his book a while back and you didn’t like it at all…. So if you agree that robots would kill jobs and result in lower wages, why don’t you agree with his premise? Just curious…I found the book pretty compelling myself.

  • Richard, agreed.

    James, with robots techs that lower the cost of ordinary capital also lower the cost of “labor.”

    Robert W, subsistence wages result from equilibrium with competition, free entry, an elastic supply curve, and rapid production. I don’t understand what you are saying about density.

    Robert S, why not read my post on that?

    • Why declining marginal returns to more labour? Presumably you need some other input to be limited and have declining returns when worked harder. Before 1800, that was land (why I talked about population density). What will it be in the future do you think? I assumed that Roberts was arguing that there was no other factor we couldn’t just produce more of.

      • Diminishing returns are pretty generic, and require only scarce, not limited, resources.

      • Robert Wiblin

        One factor has diminishing returns when it is used along with something else that isn’t growing as quickly. A second whole earth orbiting the sun wouldn’t be any the poorer for this earth’s existence. For that matter, doubling the population of earth at this point wouldn’t necessarily make us poorer (the gain in the public good of knowledge might outweigh less space and capital per person). If you want to persuade Roberts, you could describe the complementary factor of production that we will have less of per capita in the future.

  • vaniver

    Talking about subsistence and Malthusian traps seems odd, since isn’t it a necessary condition for a Malthusian trap that you oscillate around the income level where birth rate = death rate?

    Because as soon as we’ve got robot generic unskilled laborers, I guarantee you we will have robot kids and/or robot catgirls, and so even with moderate life extension it seems unlikely the birth rate will ever exceed the death rate. It seems to me much more likely that we’ll hit post-scarcity than that we’ll hit Malthus again.

  • James Oswald

    2.) Robin, I mostly agree with your conclusion, but not your presentation. There is no demand curve for labor, only for that which labor produces. Wages in equilibrium are equal to the prices times the quantities of what is produced. Productivity will increase dramatically as technology advances, so for wages to decrease, there needs to be a shift in the relative prices of what workers produce. Relative prices cannot all fall, so something must increase in relative price.

    Ultimately there are two things required for production – natural resources and human effort. If human effort becomes non-scarce, natural resources will increase in relative price. Societies that are able to establish property rights to natural resources will be able to live a pre-singularity existence if they so choose, just as communities of the Amish are able to today.
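    A toy cost accounting can make the relative-price claim concrete (all the input coefficients below are invented for illustration): as the wage falls toward zero, competitive prices collapse toward resource content, so resource-intensive goods become relatively more expensive.

```python
# Toy unit-cost accounting: prices of goods as labor becomes nearly free.
# Input coefficients and prices are illustrative assumptions.

def unit_cost(labor_hours, resource_units, wage, resource_price):
    """Competitive unit cost of a good from its labor and resource inputs."""
    return labor_hours * wage + resource_units * resource_price

# A labor-intensive service versus a resource-intensive good,
# before and after robot labor drives the wage toward zero.
for wage in (10.0, 0.01):
    service = unit_cost(labor_hours=5, resource_units=0.1, wage=wage, resource_price=2.0)
    good = unit_cost(labor_hours=1, resource_units=3.0, wage=wage, resource_price=2.0)
    print(f"wage={wage}: service/good price ratio = {service / good:.2f}")
```

    At a $10 wage the labor-intensive service costs several times the resource-intensive good; at a near-zero wage the ratio inverts, i.e. resources have risen in relative price.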

    Given the current population, the raw materials available in the universe do not limit our consumption, only the effort required to harvest them. I envision a society where the currency is status and respect rather than material wealth.

  • Lord

    Complementarity is mostly myth compared with substitution. His argument would be that they are creative and will create new opportunities and technology, making larger markets with more specialization. The problem is that this will not happen until wages are driven to near marginal cost, because it will always be cheaper to substitute than to innovate, and innovation mostly occurs when wages or other resources are expensive enough to be worth substituting for through innovation. They would put pressure on the resources they use, and this pressure would induce some investment in material-saving innovation, but not until such pressure was anticipated, and this innovation would further reduce wages. They may have no more desire to reproduce than us, but we have finite lives and offer some possibility of change through offspring. People with work always believe they are irreplaceable until they are replaced, and then they are just stunned.

  • John 4

    I think your dismissive attitude toward Tyler and Russ is a little ridiculous, since we have pretty much no idea how the mind works at any level of specificity. (i.e., we have no idea how, say, mental representation works; we have no idea how to “realize” thought processes that are semantic, and many of ours apparently are; we have no idea how to fit “qualia” into a physicalistic conception of the mind, etc.) I agree with you that the mind = the brain, but we have only philosophical arguments for that claim, which aren’t any more convincing than most other philosophical arguments. Which is to say, they’re far from rationally compelling.

  • Chris T

    a sudden drastic social change

    Could you define this? What time frame are we talking about? There is likely a lower limit to how quickly human society can take up and adapt to a new technology. E.g., adoption and use of the internet has been very rapid by historical standards, but its full effects are still taking decades to play out.

  • James Oswald

    @Lord – Complementarity vs. Substitution is a distraction. Capital doesn’t substitute for labor, it is another form of roundabout labor. Labor is required to produce the capital that in turn makes labor more productive. There is a shift in the required skills to produce, but never any substitution of labor generally. Individuals get substituted for, since they don’t have the required skills to participate in the new production process. However, in a future where EMs could produce anything humans could produce, it would be irrelevant what skills you had. The greatest scientist or doctor in the world could be substituted just as easily as a manual laborer.


  • Matt

    Premise 1 seems mostly based on what we care about. Because identity and consciousness are things that only exist because we define them, we seem to have wiggle room to make them be whatever we want them to be. In other words, I don’t think anyone knows what these things are; they just pick a definition that works into their metaphysics or just the things they care about. If you only care about the results of small interactions between particles then you can just define that as legitimate consciousness and be perfectly satisfied. Russ cares about other things (things you probably don’t believe exist) and therefore is not willing to define identity or consciousness the way you are. He sees it as needing a soul.

  • I think your discussion about reductionism could perhaps have been more generous to the other side. There is still serious debate in philosophical circles about the possibility of dualism. Current thinkers on the issue, for instance, should at least be aware of Chalmers’s zombie argument for dualism, even if they don’t agree with it.

    In general the current discussion in philosophy of mind is very rich and I think non-philosophers would benefit from some dabbling. There are lots of different theories that involve distinctions of greater subtlety that might allow you guys to occupy common ground (non-reductive physicalism, property-dualism). In fact, when listening to your discussion I wasn’t entirely sure if you were arguing for a monist, physicalistic reductionism. If that’s your view, then you’ll find there are lots and lots of philosophers that will disagree with you.

    But often you defer to a functionalist view – which says that mental states are multiply realisable. This is distinct from the ontological claim about what stuff exists – and is presumably compatible with modified versions of both reductionism and dualism.

    In any case, the argument overall, as I understood it, did not depend on anything other than some kind of functional view. I’d go somewhat further, though, and suggest that it’s not entirely clear that we’d have to micro-analyse the brain and build something roughly similar. However we do it, a functional view suggests that all we have to do is get the outputs, relative to inputs, roughly right. Specialised research is being carried out in just about every field imaginable to automate various isolated processes. To get the kind of machine we want may simply involve a kind of unification of all these specialised domains. The end result may be structurally quite unlike a human brain. As long as it outputs like one, that’s all we really need.

  • Finch

    @Michael Vasser

    You don’t need to prove it wrong to say that it’s not motivated by any significant evidence and that there are strong reasons for expecting people to be biased towards making such claims regardless of their truth.

    It seems pretty clear that there’s something weird about consciousness. If it’s not physics, it’s ghosts and goblins, which is even less appealing.

    Strong-AI arguments don’t explain well why my consciousness appears where it does and has the borders it does in a tick-tock universe.

    Penrose is not a crank, but I certainly yield that there’s very little evidence for any particular physical explanation of consciousness at this point. Something is going on, though, and we don’t know what it is. It doesn’t seem like a big leap to say we don’t know whether or not that knowledge might impose limits.

    • Finch

      Vassar. Sorry!

  • The two biggest problems I see on Robin’s end are the ideas that uploads will get anywhere in the face of engineered machine intelligence – and that intelligences will stay small, and be poor. IMHO, it is much more realistic to think about engineered intelligences – and about planetary-scale super-rich creatures.


  • Given that there is currently no AGI and no EMs, nor any extant research program showing promise of leading to either one, it is interesting to read all the frantic emotionalism on this topic from commentators here and on LW. “won’t you feel silly if you and your grandchildren are ground up into robotic fuel paste in 30 years” definitely deserves some sort of prize for true-believer thinking, right up there with “rapture” fairy-tale movies and the like.

    A nice demonstration that, in the end, religious modes of thinking are alive and well even among those who purport to have abandoned them.

  • Abelard Lindsey

    Tyler has made clear in previous arguments with Hanson on issues such as cryonics that he knows diddly squat about science and technology. Tyler may be a fine economist. But he is completely ignorant about science and technology.


    Penrose’s arguments about quantum consciousness are a complete red herring with regards to the creation of A.I. All Penrose is saying is that the human brain is a room-temperature quantum computer. His arguments are nothing more than a proof of principle for the creation of room-temperature quantum computers.

    Of course we can still make real A.I. if Penrose is correct. The only difference is that they would be based on quantum computers rather than digital computers. The argument that Penrose’s theory precludes the creation of real A.I., even based on quantum computers, is complete hogwash.

  • Unilateral Disarmament

    Russ Roberts’ positions are constrained by his autistic support of unilateral free trade and open borders. In the absence of actual evidence, he has written novels and fables to support those positions, which signal his status and loyalty to the old high priesthood of academic economics, but also, since a writer’s books are his babies, make it very difficult for him to support any argument that would betray them.

  • Regarding the economic question, we have some experience with this, but I’m not sure what it tells us. Around 1900, when machines took over from human muscles, did labor costs go down? In the 1980s, as computers took over clerical work, did clerical pay go down? It doesn’t seem in either case that wages dropped particularly. Meanwhile, there is presumably a long-term trend that all the production gets spread among all the humans. If production goes up because of robots, then the increased production has to be split between
    1) Labor
    2) Capital
    3) ? Robots ?

    I suspect it is politically unstable for the excess production from robots to go to capital without some going to labor. Capital formation and possession is a function of the laws, and the laws can change. Obviously our legal systems have always allowed some share of production to capital, enough to allow capital to be accumulated, but the broad sweep is that capital tends not to be accumulated by families for more than a few generations. If robots are THAT productive and are still not people, I believe the political solution will be to allocate the excess production between their owners and the rest of the population. Heck, maybe they will be publicly owned, like the beaches in California, or the monopoly on lotteries, or the road system.

    But it doesn’t seem the public record shows that productive innovations which displace labor permanently reduce the wages to labor. Rather the opposite: the wages to labor have continued to rise even as productive innovations have, in a micro sense, competed labor away.