AI Boom Bet Offers

A month ago I mentioned that lots of folks are now saying “this time is different” – we’ll soon see a big increase in jobs lost to automation, even though we’ve heard such warnings every few decades for centuries. Recently Elon Musk joined in:

The risk of something seriously dangerous happening is in the five year timeframe … 10 years at most.

If new software will soon let computers take over many more jobs, that should greatly increase the demand for such software. And it should greatly increase the demand for computer hardware, which is a strong complement to software. So we should see a big increase in the quantity of computer hardware purchased. The US BEA has been tracking the fraction of the US economy devoted to computer and electronics hardware. That fraction was 2.3% in 1997, 1.7% in 2003, 1.58% in 2008, and 1.56% in 2012. I offer to bet that this number won’t rise above 5% by 2025. And I’ll give 20-1 odds! So far, I have no takers.

The US BLS tracks the US labor share of income, which has fallen from 64% to 58% in the last decade, a clear deviation from prior trends. I don’t think this fall is mainly due to automation, and I think it may continue to fall for those other reasons. Even so, I think this figure is rather unlikely to fall below 40% by 2025. So I bet Chris Hallquist at 12-1 odds against this (my $1200 to his $100).
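
For concreteness, here is the arithmetic these odds imply, as a quick Python sketch (a minimal illustration using the stakes stated above; "break-even" is the probability at which taking the bet becomes fair):

```python
def implied_prob(odds_against):
    # Break-even probability for whoever takes the long side of an
    # N-to-1 bet: they need the event to be more likely than 1 in N+1.
    return 1.0 / (odds_against + 1)

# Hardware-share bet: my $20 against your $1 that the BEA fraction
# won't rise above 5% by 2025.
print(f"20-1 bet break-even: {implied_prob(20):.1%}")  # ~4.8%

# Labor-share bet with Chris Hallquist: my $1200 to his $100.
print(f"12-1 bet break-even: {implied_prob(12):.1%}")  # ~7.7%
```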

Yes, it would be better to bet on software demand directly, and on world stats, not just US stats. But these stats seem hard to find.

Added 3p: US CS/Eng college majors were: 6.5% in ’70, 9.7% in ’80, 9.6% in ’90, 9.4% in ’00, 7.9% in ’10. I’ll give 8-1 odds against > 15% by 2025. US CS majors were: 2.4K in ’70, 15K in ’80, 25K in ’90, 44K in ’00, 59K in ’03, 43K in ’10 (out of 1716K total grads). I’ll give 10-1 against > 200K by 2025.
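
As a rough check on how big a break from trend that last threshold would be, here is the compound growth it requires (a back-of-the-envelope Python sketch using the figures above):

```python
# From 43K CS grads in 2010 to the 200K threshold by 2025.
start, target = 43_000, 200_000
years = 2025 - 2010
cagr = (target / start) ** (1 / years) - 1
print(f"required sustained growth: {cagr:.1%} per year")  # ~10.8%/yr for 15 years
```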

Added 9Dec: On Twitter @harryh accepted my 20-1 bet for $50. And Sam beats my offer:

  • lump1

    You could win that bet and still have a world in which a vast number of people are replaced by automation. Even now, when a shop (or industry) that once employed humans goes robotic, I expect that a very small portion of the transition costs goes to the computer hardware.

    • http://overcomingbias.com RobinHanson

      If you don’t like my proposals, make a counter-proposal.

      • Mark Bahner

        Hi,

        I’ve predicted that the number of tractor-trailer drivers, the number of cashiers, and the number of material movers (e.g., loading dock workers) will all be down by about 90 percent from their 2012 values by 2044.

        And I’ve predicted that the number of jobs in what were the top 15 job categories in 2012 will be down by more than 50 percent by 2044.

        http://markbahner.typepad.com/random_thoughts/2014/11/jobs-vulnerable-to-artificial-intelligence-part-2.html

        I’d be willing to bet on those predictions.

      • http://don.geddis.org/ Don Geddis

        Technology certainly changes what jobs people do. “Beginning in 1840 at roughly 70 percent of the labor force, agricultural employment fell to about 40 percent in 1900, 10 percent in 1950, and remains at about 2 percent today.” (From: http://www.minnpost.com/macro-micro-minnesota/2012/02/history-lessons-understanding-decline-manufacturing )

        The question under debate is whether demand for labor as a whole will change, whether people will be unable to find jobs at all. It’s not especially interesting to predict that the particular mix of human jobs will change as productivity increases. Everybody expects that.

      • http://overcomingbias.com RobinHanson

        Is there any evidence that this would represent more job churn and change than we’ve seen over the last century? I’m looking to bet on indicators of change from past rates of change.

      • Mark Bahner

        OK, I’ve also predicted that gross world product will average increases of more than 7% per year in the ten years ending January 1, 2030, and more than 10% per year in the ten years ending January 1, 2040. Considering that (according to Angus Maddison) GWP has only increased by more than 7% in one year (7.2% in 1964) in all of human history, that’s a pretty big change from past rates of change.
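
        For scale, here is what that compounding would mean for total world output (a quick Python sketch; the rates are my predictions, not data):

        ```python
        # Cumulative effect of the predicted decade-long growth rates.
        for rate in (0.07, 0.10):
            factor = (1 + rate) ** 10
            print(f"{rate:.0%}/yr for ten years -> GWP multiplied by {factor:.2f}")
        # 7%/yr  -> x1.97 (world output roughly doubles in a decade)
        # 10%/yr -> x2.59
        ```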

      • MarkBahner

        “Is there any evidence that this would represent more job churn and change than we’ve seen over the last century?”
        I’m not aware of any time in the last century where three of the fifteen most common jobs declined by 90 percent within 30 years, or where the fifteen most common jobs went down by 50%. And I’m positive it hasn’t happened as a result of computers.

      • http://overcomingbias.com RobinHanson

        I can’t take your word for it. For the bet offers I made, I cited historical stats to help folks see prior trends.

      • Mark Bahner

        I wrote that I’m not aware of any time in the last century where three of the fifteen most common jobs declined by 90 percent within 30 years, or that the fifteen most common jobs went down by 50%. And I wrote that I’m positive it hasn’t happened as a result of computers.

        You respond, “I can’t take your word for it. For the bet offers I made, I cited historical stats…”

        You want me to provide evidence of something of which I’m not aware? That’s quite a trick.

        But as far as me being positive that computers haven’t caused a 90% decline in three of the fifteen most common jobs in any 30-year period, or a 50% decline in the fifteen most common jobs…don’t you already know that to be obviously true? (Thirty years ago, the Apple Macintosh was the latest thing!)

        If you don’t, I’ll give you some incentive to find out: If you can identify three of the fifteen most common jobs in the U.S. that have suffered a 90% decline in 30 years as a result of computers, or a 50% decline in the fifteen most common jobs, I’ll give you $1000. And if you can’t find such jobs…no charge. (The final judge on this would be any person on whom we could agree. And I’ll give her/him $40 for her/his time.)

  • Blackwater

    Distributing wealth from versions of yourself that are extraordinarily rich in absolute terms to worlds in which you aren’t? I like it.

  • http://don.geddis.org/ Don Geddis

    I totally agree with Robin. The frustrating thing is that the technology doomsayers may be right … but on the scale of centuries and millennia, not years or decades. They’re all vastly overoptimistic about the speed of change in society (and about how close AI is to real success).

    And in the case of jobs lost to automation, those failures are combined with a failure to understand basic economics, how a rise in productivity (the ability to produce the same output with less labor) does NOT cause a rise in unemployment (an absolute loss of jobs for humans). Centuries of evidence that the proposed causality is false, yet people’s uneducated intuitions lead them to make the same claims over and over again.

    • http://juridicalcoherence.blogspot.com/ Stephen Diamond

      “rise in productivity (the ability to produce the same output with less labor) does NOT cause a rise in unemployment (an absolute loss of jobs for humans)”

      It doesn’t always cause a rise in unemployment, but doesn’t the logical possibility of complete AI show that under some circumstances it will necessarily cause such a rise?

      • http://don.geddis.org/ Don Geddis

        Perhaps I should say: in all of previous economic history to date, productivity improvements did not increase (long-term) unemployment.

        You wish to posit that AI success will be “different”. You can try to make that argument, but it isn’t self-evidently true. As a hypothetical example, if there were millions of human-level — or even post-human — “slaves” for each real human, that might simply make all remaining humans tremendously rich. Perhaps they would no longer have “jobs”, only in the sense that you no longer need to find your own berries in the field and hunt your own game. It’s a burden that you voluntarily give up. Recall that unemployment only counts people who are actively seeking work; those who have withdrawn from the labor force are not “unemployed”.

      • IMASBA

        Such solutions would require high-level planning and new legislation; for that to happen, people do need to be slightly worried. The message should not be “don’t worry, everything will work out on its own”, but “yes, you should be worried, but there is a way to solve these problems if we keep our heads cool and introduce certain reforms”.

      • VV

        In principle we can hypothesize a dystopia where AIs replace most human labor, but all AIs are owned by a fraction of the population who get all the utility produced by the AIs, while the majority of the population are left to starve to death or are actively exterminated, as the AI-owning elites don’t need them anymore as employees or even as customers.

      • IMASBA

        Or a dystopia where the masses are stripped of their political power and work as entertainment/servants for wages that are pathetically low compared to GDP/capita, with a small dynastic elite owning the world. There are plenty of ways the whole thing could go wrong if policy isn’t adjusted to a reality where AI can compete with humans in pretty much every field.

    • Mark Bahner

      “The frustrating thing is that the technology doomsayers may be right … but on the scale of centuries and millennia, not years or decades.”

      I completely disagree. The scale is 1-4 decades. Circa the year 2020, the cost of a computer capable of 1 quadrillion operations per second (~1 petaflop) will be $1000. And by 2030, $1000 worth of computer will buy 1 quintillion operations per second (1 exaflop).
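
      For reference, going from a petaflop to an exaflop per $1000 in ten years implies a doubling roughly every year, faster than the classic 18-24 month Moore’s-law cadence (a quick Python sketch of the arithmetic; the endpoints are my predictions, not measurements):

      ```python
      import math

      # $1000 buys 1 petaflop in 2020 and 1 exaflop in 2030: a 1000x gain in 10 years.
      factor, years = 1000, 10
      doubling_time = years * math.log(2) / math.log(factor)
      print(f"implied doubling time: {doubling_time:.2f} years")  # ~1.0 year
      ```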

      The number of Walmart stores in the U.S. is currently 4200. I doubt even 400 will be open 30 years from now. They’ll all be replaced by e-commerce. And the e-commerce warehouses will likely have no human employees, and there will definitely not be any human delivery drivers.

      Ray Kurzweil is (generally) right. And Elon Musk is (generally) right. Things are going to be changing very, very fast very soon. Before the middle of the century, homo sapiens sapiens will no longer be the smartest beings on the planet.

      • http://don.geddis.org/ Don Geddis

        It’s obvious that you disagree. So put your money where your mouth is, and make a public bet with Robin.

        I think you’re wildly overoptimistic about near-term progress in AI. And especially whether “the smartest beings” simply comes down to “more operations per second”.

        BTW: 30 years ago, Walmart had fewer than 400 stores. The growth of Walmart over the last three decades hasn’t been particularly overwhelming to human society at large.

      • Mark Bahner

        “And especially whether “the smartest beings” simply comes down to “more operations per second”.”

        That wasn’t what I meant (although I can understand how what I wrote could have been misunderstood in that manner).

        I’m saying that there will come a time within the next 30 years when a computer could be in a room with the smartest human beings on the planet, and it will say, “I’m the smartest @#$% in this room.”

        And the humans will know it is right. It will be able to score higher than any human on the planet on any IQ test. And it will know as much as, or more than, any human in that room about any subject (given ~5 minutes of access to the Internet).

        You’re also misunderstanding my point about Walmart (in part because it’s applicable to Target, Kroger, Lowes, etc. etc.). Within 30 years, it is likely that essentially all the positions in all those stores will be gone.

        It’s easy to say, “Oh, well, they’ll find something else to do.” And then there are all the people who build all those stores, and provide and service the equipment. They’ll also be out of jobs. But it’s easy to say, “Oh, well, they’ll also find something else to do.”

      • http://don.geddis.org/ Don Geddis

        “there will come a time within the next 30 years” I understand your claim. I don’t believe you. I’ve watched AI “progress” for the last 30 years. Progress is extremely slow, and the gap with humans remains tremendously large.

        “It’s easy to say, Oh, well, they’ll find something else to do.” To turn it around, for centuries many people have worried, “I can’t imagine what all those people will do when those jobs go away, so I’m going to predict that there will be nothing for them to do.” And every time, when the future comes, unemployment doesn’t actually rise. Agriculture drops from 70% of the population to 2% in 150 years. You, in 1850, would not be able to imagine what all those people with lost agricultural jobs would end up doing instead. But the failure of your imagination is not evidence that productivity improvements result in unemployment, especially given the strong historical evidence against it.

      • MarkBahner

        “I understand your claim. I don’t believe you.”

        I’m predicting that sometime in the next 30 years, a computer will score equal to or higher than any human being on earth on all common IQ tests, such as the Wechsler Adult Intelligence Scale (WAIS), the Stanford-Binet Intelligence Scale, the Woodcock-Johnson Test of Cognitive Abilities, etc.

        Do you disagree? (And Robin, do you disagree?)

        “To turn it around, for centuries many people have worried, ‘I can’t imagine what all those people will do when those jobs go away, so I’m going to predict that there will be nothing for them to do.'”

        Where did I ever “predict that there will be nothing for them to do?”

        The top 10 private employers in the U.S. at present include:

        #1 -> Walmart
        #5 -> UPS
        #6 -> Target
        #7 -> Kroger
        #8 -> Home Depot

        I predict that the average reduction in employment in those five companies will be more than 80 percent within 30 years, due to computers. (In particular, as a result of computer-driven vehicles and automated warehouses.) Again, it’s easy for you to say that it will be no problem for all those people to find other jobs. I’ll bet you’re not going to be one of them.

        “But the failure of your imagination is not evidence that productivity improvements result in unemployment,”

        Your lack of knowledge–and your complete lack of empathy–does not negate the undisputed truth that changes in technology can lead to significant unemployment and tremendous hardship.

        Have you ever been laid off? As part of a significant layoff in your town’s major employer? When you had a family for which you needed to provide? Ever taken a cut in pay as a result of a lost job? Lost health insurance as a result of a lost job? Been forced to move out of state to find new work?

      • Avi Eisenberg

        If I had a copy of those tests and the answers, I could easily write a program that spat out the correct answers. And a computer can’t score higher than any human on Earth unless no one gets them all right. Besides, we can’t measure the top end of the IQ spectrum accurately anyway. Standard tests only go up to 140.

      • MarkBahner

        OK, what test result would convince you a computer is intelligent?

      • Avi Eisenberg

        It’s hard to point to a specific thing that humans can do, as that can be reverse-engineered. People pointed to chess as the thing a computer had to do better than humans to be “smarter” than us; that happened >15 years ago, and no one thinks we got AI then. I believe that the Turing test can be beaten if you have enough time and patience; just keep feeding it whatever a human said in response to the other human, and eventually you have a lookup list that can fool a human for a specific amount of time. (It would take millennia, but then the work becomes speeding it up, not developing intelligence.) Anything with well-defined input-output can be cheated on in a similar way.
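
        To make the lookup-table idea concrete, here is a toy sketch (Python, with hypothetical canned strings; the point is that the obstacle is the astronomical size of the table, not any cleverness in the code):

        ```python
        # Toy "lookup table" conversationalist: replies are keyed on the entire
        # conversation so far, as if copied from recorded human-vs-human transcripts.
        canned_replies = {
            (): "Hi there.",
            ("Hi there.", "What do you do for fun?"): "Mostly reading. You?",
        }

        def reply(history):
            # Deflect when this exact history was never recorded -- covering every
            # possible conversation is what would take millennia.
            return canned_replies.get(tuple(history), "Interesting -- tell me more.")

        print(reply([]))                                        # "Hi there."
        print(reply(["Hi there.", "What do you do for fun?"]))  # "Mostly reading. You?"
        ```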

        That said, the way that real intelligence could convince me it is smarter than me is by solving something that humans haven’t been able to solve. For example, cures to cancer or solutions to the Millennium problems (or any other big open problem in science).
        Of course, I’d also be convinced if it took over the world …
        If a computer can’t convince me that it is intelligent, then it can’t be too smart. That is, even if I didn’t give any tests, I’d still expect to recognize an AI when I saw one.

      • MarkBahner

        “That said, the way that real intelligence could convince me it is smarter than me is by solving something that humans haven’t been able to solve. For example, cures to cancer…”

        Or a computer pointing out to experts that it is the cells around a cancer that are more important than the cancer cells themselves…something that the cancer experts didn’t know?

        See this video by Jeremy Howard, in which he explains why Robin Hanson and Don Geddis are wrong, and why this time *is* different:

        https://www.youtube.com/watch?v=xx310zM3tLs

      • http://don.geddis.org/ Don Geddis

        “video by Jeremy Howard, in which he explains … why this time *is* different” You’ve been misled by his marketing propaganda, and you fail to appreciate the difference between mere machine learning, vs. full AI. As for his singularity predictions at the end, it’s easy to find lots of people making wild predictions (e.g. Kurzweil). The whole point of Robin’s post is that he doesn’t find such unsupported predictions to be very plausible.

      • MarkBahner

        Re: Jeremy Howard, “You’ve been misled by his marketing propaganda, and you fail to appreciate the difference between mere machine learning, vs. full AI.”

        OK, pick some test you think demonstrates “full AI.” You like the Turing Test? Fine. I will be happy to bet you that a computer will pass the Turing Test within the next 30 years. (Since you apparently think it will take “centuries or millennia,” you should be willing to give me pretty steep odds. ;-))

        “As for his singularity predictions at the end, it’s easy to find lots of people making wild predictions (e.g. Kurzweil). The whole point of Robin’s post is that he doesn’t find such unsupported predictions to be very plausible.”

        Here’s a partial list of predictions I’ve made so far on this thread, with brief supporting information in parentheses (where I feel like it):

        1) One thousand dollars’ worth of microprocessor will perform at about 1 petaflop by 2020, and 1 exaflop by 2030. (http://en.wikipedia.org/wiki/FLOPS.)

        2) From 2012 to 2044, there will be approximately a 90 percent reduction in the number of cashiers, long distance truckers, and material movers (e.g., dock workers) in the U.S.

        3) Gross world product will increase by an average of more than 7% per year in the ten years ending in 2030, and by more than 10% per year in the ten years ending in 2040.

        4) A computer will pass the Turing Test before 2044.

        5) The number of stores for Walmart, Target, Kroger, and Home Depot in the U.S. will decrease by 90% by 2044.

        I’d appreciate you both identifying for each one: 1) whether you think the prediction is plausible, 2) whether you think the prediction reflects “this time is different” with regards to past predictions of the impact of computers on society, and 3) whether you are willing to bet against the prediction.

      • http://don.geddis.org/ Don Geddis

        Strongly disagree about #4 (“Turing Test”), at least if we’re talking about a “real” test, with trained judges and hours of time. (Otherwise, you could argue that Eliza already passed “the test” decades ago.)

        #3 (“10% annual world GDP growth”) seems highly unlikely.

        The others I have no strong opinion about (nor do I think they’re particularly important, for the status of overall human civilization).

      • MarkBahner

        “Strongly disagree about #4 (“Turing Test”), at least if we’re talking about a “real” test, with trained judges and hours of time.”

        Well, I think it’s a crappy test anyway, particularly exactly as Alan Turing originally described it in 1950.

        My much more fundamental and important prediction is that computers will have more general intelligence than an unassisted human brain before the middle of the century. (And not just a little more, either!) But I’m not sure we would ever be able to agree on what “general intelligence” is. My earlier proposal, that a computer in a room with the smartest human beings could say it is smarter than all of them and the humans would know it was true, is the essential gist.

        “#3 (‘10% annual world GDP growth’) seems highly unlikely.”

        So if it happened, would you agree that your assessment of the impacts of computers on human civilization in the near term (the next 1-4 decades) was wrong?

        “The others I have no strong opinion about (nor do I think they’re particularly important, for the status of overall human civilization).”

        The fact that $1000 worth of microprocessor is likely to be capable of 1 petaflop in 2020, and 1 exaflop in 2030 (and 1 zettaflop before 2040) is not *directly* important to civilization. But it’s *indirectly* the most important thing in the history of civilization. It’s why humans will no longer be the smartest species on the planet before the middle of the 21st century. And it’s why Elon Musk (and many others) recognize computers as a potential existential threat.

      • http://don.geddis.org/ Don Geddis

        “General intelligence” isn’t so hard to detect, and Turing’s fundamental insight is that the best probe is a wide-ranging natural language conversation. I think we agree on what it would be like, if AI succeeded. We just disagree on the timing. I think “a few decades from now” is wildly overoptimistic.

        “It’s why humans will no longer be the smartest species.” No, Moore’s law and flop growth, while certainly helpful, are not at all the primary key for AI. We’ve already had Moore’s law for many decades. Think of the gap between Samuel’s checkers program in the 1950’s, and human intelligence. How much has Moore’s law improved computers in all those decades since? How much has the gap between AI and humans closed? Or, to think of it another way: I want to have a 5-minute intelligent conversation with a computer. I’ll give you a year of supercomputer computation today, to accomplish it. Can you write the AI program? No? If you can’t do it today, with a year of supercomputer time, then what does it matter that a few decades from now, you’ll be able to get that much computation in 5 minutes for $1000?

        The problem with AI today is NOT that it’s too slow. It’s that it doesn’t work. Making it faster doesn’t solve that problem.

      • MarkBahner

        Hi Don,

        You write, “‘General intelligence’ isn’t so hard to detect, and Turing’s fundamental insight is that the best probe is a wide-ranging natural language conversation.”

        Yes, but he substituted the idea of being human for the idea of being intelligent. That was a fundamental problem; it set the path for artificial intelligence research in the wrong direction. Intelligence is about solving problems, not being human. As I, others, and you yourself realize, if a computer said, “I’m a computer; I know nothing about feelings”…it would flunk the Turing Test. That’s very bad.

        “The problem with AI today is NOT that it’s too slow. It’s that it doesn’t work. Making it faster doesn’t solve that problem.”

        Here’s a graph of microprocessor speed with a bunch of animals on the right side of the y-axis:

        http://www.frc.ri.cmu.edu/~hpm/talks/revo.slides/2030.html

        Now, I completely understand and accept that there is not any sort of one-to-one correspondence such that 1 MIPS gives you a worm, 1000 MIPS gives you a lizard, and 1 billion MIPS (1 petaflop) gives you a human. All those need the pixie dust of software. But it’s indisputable that there is some sort of general correlation between processing speed and the intelligence of life on earth. So it’s inconceivable to me that something that’s operating at a million times more instructions per second than a human brain–and the trend is pretty clear that this will happen circa 2040–will not somehow be comparable to a human brain in capabilities.

        That’s *overall* capabilities. For example, in translation of human speech…we won’t even be close. A computer operating at that speed will be able to translate in real time all the languages on earth…even multiple languages at a time. But at things like…I have no idea…understanding that a piece of paper could cut soft butter but not hard butter…we might still be better.

        Your statement that “AI doesn’t work” is, I think, a very sad outcome of the path Alan Turing set way back in 1950. Intelligence is about solving problems, not about being human. And the universe of problems that AI *can* solve is expanding exponentially. It’s not necessary that a computer that can spot trading patterns better than any human on earth can also carry on a conversation, in order that it be “intelligent.”

        Best wishes,
        Mark

      • IMASBA

        An AI that can fool a human into thinking it’s human is very useful for the economy: most of our jobs revolve around social interactions, not solving science’s great questions.

        But there is a much deeper meaning behind the Turing Test. It forces us to acknowledge the possibility of artificial intelligence. Especially in the 1950s, but still today, many people imagine some sacred divide between thinking and feeling humans (or aliens, or dolphins) and cold, calculating machines. Turing’s idea was that if we take human intelligence at face value, which we do through conversations, then it would be bigoted to require a higher standard from artificial intelligence. There will undoubtedly be false negatives because AI does not need to have human emotions, but that’s entirely beside the point. It’s not a practical intelligence detector (such a thing may not even be possible); it’s a device to make us think.

      • Mark Bahner

        “An AI that can fool a human into thinking it’s human is very useful for the economy: most of our jobs revolve around social interactions”

        Can you name some occupations where you think it would be great to have a computer that can fake being human?

        I’ll give you an example where it’s not. (Obviously, just an anecdote.) I was calling to renew a bunch of prescriptions. One or two had expired. So the computer couldn’t handle it.

        I got a human on the line, and the woman helped me by saying that they’d contact my physician and get the prescription renewed, then send me the drugs and charge my card.

        That could have been equally well solved by a computer that was simply smarter than their existing computers. And by smarter, I don’t mean more able to fake like it was human, I mean more able to *solve my problem*. The woman was very nice, and very helpful. But I don’t expect her job to be around in 10-30 years. It’s too easy to get an automated system that says, “That prescription has expired. Would you like to renew?” “OK, I will contact your doctor for a renewal. Would you like me to charge your existing card for this medicine?” “Anything else?” “Goodbye.”

        “But there is a much deeper meaning behind the Turing Test. It forces us to acknowledge the possibility of artificial intelligence, especially in the 1950s, but still true today, many people imagine some sacred divide between thinking and feeling humans (or aliens, or dolphins) and cold,
        calculating machines.”

        Yes, that was a very useful thing to do in 1950. But in 2014, any person who has such notions should simply get a phone with Siri, Cortana, or Google Now.


      • http://don.geddis.org/ Don Geddis

        Again, did you read Turing’s paper? He wasn’t trying to make computers “be human”. He consciously eliminated many aspects of “being human” that were irrelevant to intelligence. (E.g. using typewritten words instead of face-to-face communication.)

        Yes, the mimic game rules out certain entities that most of us would agree are intelligent also. But you seem to think that somehow Turing distracted the whole field of academic AI. The truth is, very very few people attempted to work directly on “pretend to be a human”. The multi-decade failure of (full) AI, the “pixie dust”, as you put it, isn’t due to lack of effort, or working on the wrong goal.

        You realize that just MIPS isn’t enough … but you seem to not know very much about what else besides MIPS might be required. That’s where the failure is. The MIPS will be “easy”. (And, yes, sufficient computational power is necessary.) But the “pixie dust” is hard, and has stubbornly resisted decades of attempts by an entire academic field, to narrow the gap with human performance.

        “It’s inconceivable to me that something that’s operating at a million times more instructions per second than a human brain … will not somehow be comparable to a human brain in capabilities.” Are you unfamiliar with software? If you make an arithmetic calculator a million times faster, do you magically get public-key encryption? Is there any difference between cryptography, vs. some (very fast!) multiplication tables?

        “Intelligence is about solving problems, not about being human.” I agree.

        “It’s not necessary that a computer that can spot trading patterns better than any human on earth can also carry on a conversation” Spotting trading patterns is domain-specific expertise. That’s “easy”. Samuel’s checkers in the 1950’s, medical diagnosis in the 1970’s (e.g. Mycin), computer chess in the 1990’s (e.g. Deep Blue). Wall St. already has automated program traders.

        The failure of AI is in general intelligence, a problem-solving approach that works on any field. AI systems are “brittle”, and break down quickly when outside their area of narrow expertise.

        Recall the original post here. It’s about machines replacing humans in ALL jobs. You won’t get that, without domain-independent, general intelligence. And the multi-decade progress in AI, on the “pixie dust” software of general intelligence, has been very very slow.

      • MarkBahner

        “Recall the original post here. It’s about machines replacing humans in ALL jobs. You won’t get that, without domain-independent, general intelligence.”

        The original post was Robin claiming that he had tests to see whether “This time was different” (with respect to computers taking over human jobs).

        His tests were flawed. I proposed alternative tests, such as betting that three of the 15 most common jobs in the U.S. would decline by 90% within the next 30 years, due to computers…because computers have certainly not caused that sort of employment change before.

        I also proposed betting on gross world product annual growth exceeding 7 percent per year in the decade ending in 2030, and 10 percent per year in the decade ending in 2040.

        Both bets would provide dramatic evidence of whether “this time is different”. (Note: His original “time” was the claims made about computers in the 1980s.)

      • http://don.geddis.org/ Don Geddis

        We all agree that better computers should increase productivity. Which should cause job churn and greater economic growth. Just like the introduction of electricity, motors, railroads, etc. did in the late 1800’s.

        “This time is different” implies some NEW effect on human civilization, compared to past productivity enhancements. In particular, one of the strongest points of debate is whether overall human unemployment will dramatically increase because of better computers. (Especially because the evidence from the introduction of the loom, container ships, etc. is that jobs change, but unemployment does not increase.)

        Your proposed tests don’t get at the key disagreement. We all expect job churn. And we all expect increased growth. What some of us don’t expect (but others do) is that (most) humans won’t have anything to do, because computers will replace them everywhere. (The technological singularity is likely eventually … but whether in the next few decades, that’s the disagreement.)

      • MarkBahner

        “Your proposed tests don’t get at the key disagreement.”

        My predictions do get at the key disagreement.

        Robin wrote that, “For example, David Brooks recently parroted Kevin Kelly saying this time is different because now we have cheaper hardware, better algorithms, and more data. But those facts were also true in most of the previous booms; nothing has fundamentally changed!”

        Well, if nothing has changed, then there is absolutely no reason to expect the world economy to grow by more than 7 percent per year in the decade ending in 2030, or more than 10 percent per year in the decade ending in 2040. As I wrote before, the GWP has only increased by >7% in *one year* in all of human history (7.2% in 1964). So he should be very happy to bet against me. (In fact, he should give me pretty great odds, since virtually no one in his entire profession thinks what I’m predicting will happen.)

        And if nothing has changed, then there’s no reason to expect computers to cause a 90% decrease in 30 years in 3 of the 15 most common occupations in the U.S. That has also clearly never happened before. (For example, computers have caused a 90% decrease in the employment of typewriter manufacturers and photographic film manufacturers, but they were never in the top 15 job categories in the U.S.)

        So if either or both of those predictions come to pass, “this time” most certainly is different from the time in 1983 when Robin thought computers/AI were going to make a big impact.

      • MarkBahner

        Oh, other predictions I made definitely get at the key disagreement. I predicted that the number of stores for Walmart, Target, Kroger, and Home Depot in the U.S. will decrease by 90% by 2044, and the number of UPS drivers would decrease by 90%. Computers haven’t caused that before.

      • http://don.geddis.org/ Don Geddis

        Note that your proposed bets aren’t restricted to (the unverifiable) “computers caused that”. Perhaps world GDP grows because the poorest billion (Africa, India) industrialize and reach middle incomes.

        It’s also not interesting enough to predict “change will happen”. Blockbuster was a huge company, and Netflix killed it in a decade. Kodak was important for a century, and digital cameras killed it in a decade.

        Productivity improvements regularly cause change, as the historical economic record demonstrates. To claim “this time is different”, you need something more. What’s going to cause this to eventually be viewed as not just another disruptive innovation, just like all the previous ones?

      • IMASBA

        “If a computer can’t convince me that it is intelligent, then it can’t be too smart.”

        Or you are just “bigoted” against computers… If it walks, talks, and solves problems like an intelligent being, you have to assume it is an intelligent being, or at least be just as skeptical about human intelligence. Also, solving a problem humans have not solved yet is a) an unreasonable standard (you don’t expect every human you meet to do that, so why should you expect it of every AI?), and b) it may not prove much, since the problem may not have been “unsolvable” for humans; people may just have needed a bit more time, resources, or luck.

      • Avi Eisenberg

        If a computer did all that, I would be convinced. I’m saying that a smart computer would be able to convince me that it is smart. If asked to give specific tests, I’m giving ones that humans can’t do, to prevent cheating. That doesn’t mean I wouldn’t be convinced by things even humans can do, just that I can’t pre-commit to accepting anything that can be gamed.

      • IMASBA

        Everything can be gamed. It’s up to you to figure out if the AI could brute-force the answer or had to be smart. Whether it truly “understands” the problem is something you can never know for sure, just as you can never know that of a human.

      • Avi Eisenberg

        I would say “understanding”, as you put it, is not well-defined. When you say I can never know something, that usually means that it doesn’t actually mean anything.

      • http://don.geddis.org/ Don Geddis

        “OK, what test result would convince you a computer is intelligent?” Funny, Alan Turing asked and answered this very question, in 1950: http://en.wikipedia.org/wiki/Computing_Machinery_and_Intelligence

      • IMASBA

        You say that now, but a lot of people would retreat to the “Chinese Room” argument or keep moving the goalposts (harder questions, longer test time) if push came to shove. The Turing Test is a philosophical tool, not a universal intelligence detector.

      • http://don.geddis.org/ Don Geddis

        Yes, of course the Turing Test is not perfect as a real, practical bright-line test. It’s both too easy (untrained judges, only 5 minutes) and too hard (dolphins or aliens could easily be highly intelligent, without being able to mimic a human). But it’s clearly in the right direction, and a far far better start for a real test than something like an IQ test.

      • MarkBahner

        “Funny, Alan Turing asked and answered this very question, in 1950:…”

        I think Alan Turing’s answer was a very bad answer. Just because a computer can’t fake being human doesn’t mean it’s stupid.

        I could ask the computer what its wife or girlfriend smelled like. But the computer doesn’t have a wife or a girlfriend, and it doesn’t have a sense of smell. That doesn’t make it stupid.

        Intelligence is the ability to solve problems, not the ability to pretend to be something one is not.

        “It’s both too easy (untrained judges, only 5 minutes) and too hard (dolphins or aliens could easily be highly intelligent, without being able to mimic a human).”

        Exactly. A parrot being able to mimic a human voice is not necessarily a sign of extraordinary intelligence. But a bird being shown a bunch of numbers and being told to pick out the number “6” and being able to do it is a smart bird. Or being shown triangles of different sizes and being told to point out which one is “bigger” and being able to do that is a smart bird.

        http://www.nytimes.com/2007/09/10/science/10cnd-parrot.html

        And if you asked Alex the parrot where the nearest restaurant was, and he told you that a Denny’s was only 0.8 miles away, on Broad Street, you would say he was a @#%& genius. But if you did the same thing with your cell phone, and it told you the exact same thing, you would say it was not intelligent, right?

      • http://don.geddis.org/ Don Geddis

        “I think Alan Turing’s answer was a very bad answer.” Did you even bother to read Turing’s paper, before criticizing it?
        http://mind.oxfordjournals.org/content/LIX/236/433

        You and I agree that Turing’s test might rule out some intelligent entities, so it would need to be improved to be practical. But you’ve talked about a parrot mimicking a human voice, and you’ve recommended IQ tests, both of which suggest that you haven’t even yet caught up to Turing’s brilliant insights of 1950. It’s like you’re trying to start from scratch to solve this problem, using only your own intuition, without realizing that a lot of really smart people have already worked on this very question for many decades.

        Turing’s proposal was “bad”, but it was a whole lot better than yours. The correct answer is surely something involving a free-range text conversation across a huge variety of topics. (Assuming that the entity can speak English; it’s a harder problem to evaluate non-verbal entities.) I agree that the actual “pretend to be a human” part is a distraction, but if you read Turing’s paper, you should think about that as an attempt to get your intuition to consider what “really matters” in intelligence (e.g. the contrast with “a male pretending to be female”), in order to discard numerous other irrelevant details which shouldn’t affect any accurate judgment of intelligence.

    • IMASBA

      “I totally agree with Robin. The frustrating thing is that the technology doomsayers may be right … but on the scale of centuries and millennia, not years or decades.”

      And I think that’s underestimating it… “10 years” is definitely too short but “centuries” is too long.

      “And in the case of jobs lost to automation, those failures are combined with a failure to understand basic economics, how a rise in productivity does NOT cause a rise in unemployment.”

      “Basic economics” can be very misleading. If, say, in 2050 a country is losing 10 percentage points of its jobs per year, there will undoubtedly be potential for new jobs, just like “basic economics” predicts, but it may prove very hard to retrain 10 percentage points of your workforce per year, and they will not be productive while undergoing training. Retraining takes time and skill, and the people being retrained, plus their dependents, still need to eat in the meantime. That requires enormous societal flexibility and planning; it won’t just solve itself. In the real world you cannot forgo eating for a year even if you get to eat double the next year.

      Real-world systems need time to adapt to new circumstances, as well as safety nets that ensure individuals can keep taking the risks that are necessary to make progress possible for the group.

  • Jared

    Wouldn’t unemployment be the obvious indicator to look at? My prediction would be that we won’t see the unemployment rate go higher than 15% in the next 10-20 years. Now some people might complain about people dropping out of the labor force, so maybe I could say 20% for the U6 unemployment rate. Of course, this might not be a fair bet because I still have an out. There could be a particularly bad recession and/or the US could have really bad policies that raise unemployment to Spain levels. But if it doesn’t rise, that would seem like pretty clear proof that technological unemployment isn’t very concerning right now.

    • http://overcomingbias.com RobinHanson

      If you want to attract bets, then in addition to stating a claim, you should also give odds.

  • Carl

    I would state this: if Japan, Spain, and the USA do go bankrupt, then all bets are off. China has been waging a war of economics; this is why China gives out free shipping on most online stores. We all want the “deal”, killing manufacturers in North America.
    On top of this, China has taken aim at the “petrodollar”, the US dollar. China has made agreements with over 30 countries to drop the US dollar and trade on the gold standard for just about everything, like oil and gas. Japan is over 10 trillion in debt; the US is way over 17 trillion. This is outrageous on so many levels. There will be a day of judgment, sooner rather than later, like it or not.

  • IMASBA

    Robin, in your bet with Chris Hallquist what are the constraints on policy changes? For example a shorter workweek or a basic income could change the labor share of income significantly. I don’t think that’ll happen before 2025, especially not in the US, but in general people could use such policy changes as excuses to protest when they lose a bet.

    • http://overcomingbias.com RobinHanson

      Our bet has no escape clauses. So we are folding in all such possibilities into our overall estimates.

      • IMASBA

        In that case there’s almost no way you could lose if you ask me.

      • Avi Eisenberg

        I may be interested in taking your 20-1 bet, but I have some questions first. Who wins if the BLS stops tracking that stat or no longer exists for whatever reason? What does “computer hardware” include? If Google comes out with their car system, for example, and charges $2,000 as an addon to a regular car, and achieves high adoption rates, does that get counted as “hardware” or under automotive? For that matter, do today’s cars’ computer systems count towards computer hardware in those stats?

      • http://overcomingbias.com RobinHanson

        If BLS stopped such stats, I guess I’d think the bet called off unless someone could point to a good close substitute stat. If BLS continues, it decides what are computers and electronics.

      • Avi Eisenberg

        Do you know where embedded computer systems like the ones in cars are included in the statistics?

      • http://overcomingbias.com RobinHanson

        I don’t happen to know; you can look up stuff on the BLS website as easily as I.

  • Tige Gibson

    The implication of this post is that people have to tolerate even the idea that people won’t be working to earn their living, which starkly contrasts with the recent libertarian hack posts on this site.

  • Silent Cal

    Glad to see someone actually considering what the onset of technological unemployment would look like. The historical record doesn’t mean it will never happen, but it does say loud and clear that subjective assessment of how impressive technology has gotten is completely useless in making this judgment. Does anyone else have any thoughts on what statistical evidence we should expect to see if this really were the beginning of the end for the value of human labor?

  • Geoff Brown

    Robin, could you link to your explanations for labor’s declining share? I searched the site but didn’t come up with anything obvious. Sorry if I missed it. Thanks.

    • http://overcomingbias.com RobinHanson

      I haven’t blogged on that topic.

      • Geoff Brown

        I’d be very interested in your thoughts and I think others would too. Future blog topic?? 😉
