Discussion about this post

Overcoming Bias Commenter:

Dan Ariely has a TED talk out that led me to reconsider. Unless he really botched up his numbers, current inequality is more undesirable than I had recognized.

Mark Bahner:

"What exactly is wrong with that explanation?"

The main problem is that I think he's wrong. ;-) But another (less important ;-)) problem is that virtually everyone inside--and outside--the AI research community also thinks he's wrong.

Robin is saying that the best guess for when human-level AI--of the type able to perform the majority of human jobs at approximately human levels--will arrive is 2-4 centuries from now. The vast majority of knowledgeable people, both inside and outside the AI research community, think he's wrong, and that the actual number is more like 2-4 decades. Whether Robin is right, or almost everyone else who is knowledgeable about the subject is right, is a hugely important question. In fact, as Elon Musk and others have noted, it's quite possibly an existential question. (Most estimates of the time from human-level AI to super-intelligent AI are only 3-30 years.)

"Is this more than an accusation of hubris?"

Yes, it's also an accusation of bias...which a man who runs a blog titled "Overcoming Bias" ought to be striving to avoid.

Robin is saying "the crowd" is wrong. That is an extraordinary claim, and requires extraordinary evidence. Robin hasn't come close to providing such evidence.

For example, Robin appears to totally neglect hardware: 1) if flash-memory prices continue to fall as they have over the past 3 decades, by 2024 $1 will buy 1 terabyte of storage, and by 2036 $1 will buy 1 petabyte; 2) similarly, circa 2050-2060, $1,000 worth of computing power will be able to perform as many calculations per second as all the human brains on earth combined. It's simply not credible to analyze likely future progress in AI while ignoring those trends. (It would be slightly different if Robin claimed that progress in memory per dollar and in computations per second per dollar were somehow likely to freeze at present values for the next 200 years. But only slightly different, because a claim that technological progress will freeze isn't really credible, absent Terminators, nuclear war, or some other monumental disaster.)
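
For what it's worth, those storage numbers follow from a simple exponential extrapolation. A minimal sketch in Python, assuming roughly $1 per gigabyte in 2012 and a price halving time of about 1.2 years (my ballpark inputs chosen to match the stated trend, not figures from the comment itself):

    # Back-of-the-envelope check of the flash-storage trend above.
    # Assumed inputs (not from the comment): ~$1 per GB in 2012, with
    # the price per GB halving roughly every 1.2 years.

    def dollars_per_gb(year, base_year=2012, base_price=1.00, halving_years=1.2):
        """Projected flash price in dollars per gigabyte in a given year."""
        return base_price * 0.5 ** ((year - base_year) / halving_years)

    print(f"2024: ${dollars_per_gb(2024) * 1e3:.2f} per terabyte")  # ~$0.98 per TB
    print(f"2036: ${dollars_per_gb(2036) * 1e6:.2f} per petabyte")  # ~$0.95 per PB

Under those assumptions, twelve years buys about ten halvings--roughly a 1000x price drop--which is exactly the terabyte-to-petabyte jump between 2024 and 2036.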

http://www.singularity.com/...

If Robin really thinks he's right, and human-level AI capable of performing most jobs that humans currently do is 200-400 years out, he should be making his case in various academic and public forums, because it's an important question. But the crowd thinks the number is more like 20-40 years out. And there's little doubt in my mind*** that the crowd is much more likely to be correct.

***Having read both members of the crowd who promote 20-40-year timelines, such as Andrew McAfee and Ray Kurzweil, and Robin's separate analysis.

P.S. Where Robin could and should have taken issue with Martin Ford's claims is whether computers taking jobs causes the economy to collapse...or to expand.

