I'd be very interested in your thoughts and I think others would too. Future blog topic?? ;)


I haven't blogged on that topic.


Robin, could you link your explanations for labor's declining share? I searched the site but didn't come up with anything obvious. Sorry if I missed it. Thanks.


Note that your proposed bets aren't restricted to (the unverifiable) "computers caused that". Perhaps world GDP grows because the poorest billion (Africa, India) industrialize and reach middle incomes.

It's also not interesting enough to predict "change will happen". Blockbuster was a huge company, and Netflix killed it in a decade. Kodak was important for a century, and digital cameras killed it in a decade.

Productivity improvements regularly cause change, as the historical economic record demonstrates. To claim "this time is different", you need something more. What's going to cause this to eventually be viewed as not just another disruptive innovation, just like all the previous ones?


Oh, other predictions I made definitely get at the key disagreement. I predicted that the number of stores for Walmart, Target, Kroger, and Home Depot in the U.S. would decrease by 90% by 2044, and that the number of UPS drivers would decrease by 90%. Computers haven't caused that before.


"Your proposed tests don't get at the key disagreement."

My predictions do get at the key disagreement.

Robin wrote that, "For example, David Brooks recently parroted Kevin Kelly saying this time is different because now we have cheaper hardware, better algorithms, and more data. But those facts were also true in most of the previous booms; nothing has fundamentally changed!"

Well, if nothing has changed, then there is absolutely no reason to expect the world economy to grow by more than 7 percent per year in the decade ending in 2030, or more than 10 percent per year in the decade ending in 2040. As I wrote before, the GWP has only increased by >7% in *one year* in all of human history (7.2% in 1964). So he should be very happy to bet against me. (In fact, he should give me pretty great odds, since virtually no one in his entire profession thinks what I'm predicting will happen.)
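To put numbers on how extreme those rates are, here's a quick compound-growth check (a sketch in Python; the 7% and 10% figures are the ones proposed above):

```python
# Compound growth over a decade at the proposed rates:
# 7%/yr roughly doubles gross world product in ten years,
# and 10%/yr multiplies it by about 2.6x.
for rate in (0.07, 0.10):
    factor = (1 + rate) ** 10
    print(f"{rate:.0%}/yr for a decade -> {factor:.2f}x GWP")
# 7%/yr for a decade -> 1.97x GWP
# 10%/yr for a decade -> 2.59x GWP
```

Either outcome would be far outside the historical record noted above.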

And if nothing has changed, then there's no reason to expect computers to cause a 90% decrease in 30 years in 3 of the 15 most common occupations in the U.S. That has also clearly never happened before. (For example, computers have caused a 90% decrease in the employment of typewriter manufacturers and photographic film manufacturers, but they were never in the top 15 job categories in the U.S.)

So if either or both of those predictions come to pass, "this time" most certainly is different from the time in 1983 when Robin thought computers/AI were going to make a big impact.


We all agree that better computers should increase productivity. Which should cause job churn and greater economic growth. Just like the introduction of electricity, motors, railroads, etc. did in the late 1800's.

"This time is different" implies some NEW effect on human civilization, compared to past productivity enhancements. In particular, one of the strongest points of debate is whether overall human unemployment will dramatically increase because of better computers. (Especially because the evidence from the introduction of the loom, container ships, etc. is that jobs change, but unemployment does not increase.)

Your proposed tests don't get at the key disagreement. We all expect job churn. And we all expect increased growth. What some of us don't expect (but others do) is that (most) humans will have nothing to do, because computers will replace them everywhere. (The technological singularity is likely eventually ... but whether it comes in the next few decades, that's the disagreement.)


"Recall the original post here. It's about machines replacing humans in ALL jobs. You won't get that, without domain-independent, general intelligence."The original post was Robin claiming that he had tests to see whether "This time was different" (with respect to computers taking over human jobs). His tests were flawed. I proposed alternative tests, such as betting that three of the 15 most common jobs in the U.S. would decline by 90% within the next 30 years, due to computers...because computers have certainly not caused that sort of employment change before.I also proposed betting on gross world product annual growth exceeding 7 percent per year in the decade ending in 2030, and 10 percent per year in the decade ending in 2040. Both bets would provide dramatic evidence whether "this time is different". (Note: His original "time" was the claims made about computers in the 1980s.)


"An AI that can fool a human into thinking it's human is very useful for the economy: most of our jobs revolve around social interactions"

Can you name some occupations where you think it would be great to have a computer that can fake being human?

I'll give you an example where it's not. (Obviously, just an anecdote.) I was calling to renew a bunch of prescriptions. One or two had expired, so the computer couldn't handle it.

I got a human on the line, and the woman helped me by saying that they'd contact my physician and get the prescription renewed, then send me the drugs and charge my card.

That could have been equally well solved by a computer that was simply smarter than their existing computers. And by smarter, I don't mean more able to fake being human, I mean more able to *solve my problem*. The woman was very nice, and very helpful. But I don't expect her job to be around in 10-30 years. It's too easy to get an automated system that says: "That prescription has expired. Would you like to renew?" <"Yes."> "OK, I will contact your doctor for a renewal. Would you like me to charge your existing card for this medicine?" <"Yes."> "Anything else?" <"No."> "Goodbye."
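The whole exchange is a shallow decision tree. Here's a minimal sketch of that scripted flow (the prompts and the ask() helper are hypothetical illustrations, not any real pharmacy system's API):

```python
# A minimal sketch of the scripted renewal flow described above.
# All prompts and helpers here are hypothetical illustrations.
def ask(prompt: str) -> bool:
    """Play a prompt and return True if the caller answers yes."""
    return input(prompt + " ").strip().lower().startswith("y")

def handle_expired_prescription() -> None:
    if not ask("That prescription has expired. Would you like to renew?"):
        return
    print("OK, I will contact your doctor for a renewal.")
    if ask("Would you like me to charge your existing card for this medicine?"):
        print("Your card will be charged when the renewal is approved.")
    if not ask("Anything else?"):
        print("Goodbye.")

handle_expired_prescription()
```

The intelligence being asked for lives behind these branches (contacting the doctor, handling billing), not in sounding human.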

"But there is a much deeper meaning behind the Turing Test. It forces us to acknowledge the possibility of artificial intelligence, especially in the 1950s, but still true today, many people imagine some sacred divide between thinking and feeling humans (or aliens, or dolphins) and cold, calculating machines."

Yes, that was a very useful thing to do in 1950. But in 2014, any person who has such notions should simply get a phone with Siri, Cortana, or Google Now.



Again, did you read Turing's paper? He wasn't trying to make computers "be human". He consciously eliminated many aspects of "being human" that were irrelevant to intelligence. (E.g. using typewritten words instead of face-to-face communication.)

Yes, the imitation game rules out certain entities that most of us would agree are intelligent. But you seem to think that somehow Turing distracted the whole field of academic AI. The truth is, very, very few people attempted to work directly on "pretend to be a human". The multi-decade failure of (full) AI, the "pixie dust", as you put it, isn't due to lack of effort, or to working on the wrong goal.

You realize that just MIPS isn't enough ... but you seem not to know very much about what else besides MIPS might be required. That's where the failure is. The MIPS will be "easy". (And, yes, sufficient computational power is necessary.) But the "pixie dust" is hard, and has stubbornly resisted decades of attempts by an entire academic field to narrow the gap with human performance.

"t's inconceivable to me that something that's operating at a million times more instructions per second than a human brain ... will not somehow be comparable to a human brain in capabilities." Are you unfamiliar with software? If you make an arithmetic calculator a million times faster, do you magically get public-key encryption? Is there any difference between cryptography, vs. some (very fast!) multiplication tables?

"Intelligence is about solving problems, not about being human." I agree.

"It's not necessary that a computer that can spot trading patterns better than any human on earth can also carry on a conversation" Spotting trading patterns is domain-specific expertise. That's "easy". Samuel's checkers in the 1950's, medical diagnosis in the 1970's (e.g. Mycin), computer chess in the 1990's (e.g. Deep Blue). Wall St. already has automated program traders.

The failure of AI, is in general intelligence, a problem-solving approach that works on any field. AI systems are "brittle", and break down quickly when outside their area of narrow expertise.

Recall the original post here. It's about machines replacing humans in ALL jobs. You won't get that, without domain-independent, general intelligence. And the multi-decade progress in AI, on the "pixie dust" software of general intelligence, has been very very slow.


An AI that can fool a human into thinking it's human is very useful for the economy: most of our jobs revolve around social interactions, not solving science's great questions.

But there is a much deeper meaning behind the Turing Test. It forces us to acknowledge the possibility of artificial intelligence. Especially in the 1950s, but still true today, many people imagine some sacred divide between thinking and feeling humans (or aliens, or dolphins) and cold, calculating machines. Turing's idea was that if we take human intelligence at face value, which we do through conversations, then it would be bigoted to require a higher standard from artificial intelligence. There will undoubtedly be false negatives, because AI does not need to have human emotions, but that's entirely beside the point. It's not a practical intelligence detector (such a thing may not even be possible); it's a device to make us think.


Hi Don,

You write, "'General intelligence' isn't so hard to detect, and Turing's fundamental insight is that the best probe is a wide-ranging natural language conversation."

Yes, but he substituted the idea of being human for the idea of being intelligent. That was a fundamental problem; it set the path for artificial intelligence research in the wrong direction. Intelligence is about solving problems, not being human. As I, others, and you yourself realize, if a computer said, "I'm a computer; I know nothing about feelings"...it would flunk the Turing Test. That's very bad.

"The problem with AI today is NOT that it's too slow. It's that it doesn't work. Making it faster doesn't solve that problem."

Here's a graph of microprocessor speed with a bunch of animals on the right side of the y-axis:

http://www.frc.ri.cmu.edu/~...

Now, I completely understand and accept that there is not any sort of one-to-one correspondence such that 1 MIPS gives you a worm, 1000 MIPS gives you a lizard, and 1 billion MIPS (1 petaflop) gives you a human. All of those need the pixie dust of software. But it's indisputable that there is some sort of general correlation between processing speed and the intelligence of life on earth. So it's inconceivable to me that something that's operating at a million times more instructions per second than a human brain--and the trend is pretty clear that this will happen circa 2040--will not somehow be comparable to a human brain in capabilities.

That's *overall* capabilities. For example, in translation of human speech...we won't even be close. A computer operating at that speed will be able to translate in real time any of the languages on earth...even multiple languages at a time. But things like...I have no idea...understanding that a piece of paper could cut soft butter but not hard butter? We might still be better.

Your statement that "AI doesn't work" is, I think, a very sad outcome of the path Alan Turing set way back in 1950. Intelligence is about solving problems, not about being human. And the universe of problems that AI *can* solve is expanding exponentially. It's not necessary that a computer that can spot trading patterns better than any human on earth can also carry on a conversation, in order that it be "intelligent."

Best wishes,
Mark


"General intelligence" isn't so hard to detect, and Turing's fundamental insight is that the best probe is a wide-ranging natural language conversation. I think we agree on what it would be like, if AI succeeded. We just disagree on the timing. I think "a few decades from now" is wildly overoptimistic.

"It's why humans will no longer be the smartest species." No, Moore's law and flop growth, while certainly helpful, is not at all the primary key for AI. We've already had Moore's law for many decades. Think of the gap between Samuel's checkers program in the 1950's, and human intelligence. How much has Moore's law improved computers in all those decades since? How much has the gap between AI and humans closed? Or, to think of it another way: I want to have a 5-minute intelligent conversation with a computer. I'll give you a year of supercomputer computation today, to accomplish it. Can you write the AI program? No? If you can't do it today, with a year of supercomputer time, then what does it matter that a few decades from now, you'll be able to get that much computation in 5 minutes for $1000?

The problem with AI today is NOT that it's too slow. It's that it doesn't work. Making it faster doesn't solve that problem.


"Strongly disagree about #4 ("Turing Test"), at least if we're talking about a "real" test, with trained judges and hours of time. "

Well, I think it's a crappy test anyway, particularly exactly as Alan Turing originally described it in 1950.

My much more fundamental and important prediction is that computers will have more general intelligence than an unassisted human brain before the middle of the century. (And not just a little more, either!) But I'm not sure we would ever be able to agree on what "general intelligence" is. The essential gist of my proposal: a computer in a room with the smartest human beings could say it is smarter than all of them, and the humans would know it was true.

"#3 ('10% annual world GDP growth') seems highly unlikely."

So if it happened, would you agree that your assessment of the impacts of computers on human civilization in the near term (the next 1-4 decades) was wrong?

"The others I have no strong opinion about (nor do I think they're particularly important, for the status of overall human civilization)."The fact that $1000 worth of microprocessor is likely to be capable of 1 petaflop in 2020, and 1 exaflop in 2030 (and 1 zettaflop before 2040) is not *directly* important to civilization. But it's *indirectly* the most important thing in the history of civilization. It's why humans will no longer be the smartest species on the planet before the middle of the 21st century. And it's why Elon Musk (and many others) recognize computers as a potential existential threat.


Strongly disagree about #4 ("Turing Test"), at least if we're talking about a "real" test, with trained judges and hours of time. (Otherwise, you could argue that Eliza already passed "the test" decades ago.)

#3 ("10% annual world GDP growth") seems highly unlikely.

The others I have no strong opinion about (nor do I think they're particularly important, for the status of overall human civilization).


I wrote that I'm not aware of any time in the last century when three of the fifteen most common jobs declined by 90 percent within 30 years, or when the fifteen most common jobs went down by 50%. And I wrote that I'm positive it hasn't happened as a result of computers.

You respond, "I can't take your word for it. For the bet offers I made, I cited historical stats..."

You want me to provide evidence of something of which I'm not aware? That's quite a trick.

But as far as my being positive that computers haven't caused a 90% decline in three of the fifteen most common jobs in any 30-year period, or a 50% decline in the fifteen most common jobs...don't you already know that to be obviously true? (Thirty years ago, the Apple Macintosh was the latest thing!)

If you don't, I'll give you some incentive to find out: if you can identify three of the fifteen most common jobs in the U.S. that have suffered a 90% decline in 30 years as a result of computers, or a 50% decline in the fifteen most common jobs, I'll give you $1000. And if you can't find such jobs...no charge. (The final judge on this would be any person on whom we could agree. And I'll give her/him $40 for her/his time.)
