76 Comments

Dan Ariely has a TED talk out that led me to reconsider. Unless he really botched up his numbers, current inequality is more undesirable than I had recognized.

"What exactly is wrong with that explanation?"

The main problem is that I think he's wrong. ;-) But another (less important ;-)) problem is that virtually everyone inside--and outside--the AI research community also thinks he's wrong.

Robin is saying that the best guess for when human-level AI arrives--of the type able to perform the majority of human jobs at approximately human levels--is 2-4 centuries. The vast majority of knowledgeable people both inside and outside the AI research community think he's wrong, and that the actual number is more like 2-4 decades. Whether Robin is right, or almost everyone else who is knowledgeable about the subject is right, is a hugely important question. In fact, as Elon Musk and others have noted, it's quite possibly an existential question. (Most estimates of the time from human-level AI to super-intelligent AI are only 3-30 years.)

"Is this more than an accusation of hubris?"

Yes, it's also an accusation of bias...which a man who runs a blog titled "Overcoming Bias" ought to be struggling to avoid.

Robin is saying "the crowd" is wrong. That is an extraordinary claim, and requires extraordinary evidence. Robin hasn't come close to providing such evidence.

For example, Robin appears to totally neglect hardware: 1) If flash drive memory prices continue to come down as they have over the past 3 decades, by 2024, $1 will buy 1 terabyte of storage, and by 2036, $1 will buy 1 petabyte of storage; 2) Similarly, circa 2050-2060, $1000 worth of computing power will be able to perform as many calculations per second as all the human brains on earth, combined. It's simply not credible to do an analysis of likely future progress in AI that ignores those trends. (It would be slightly different if Robin claimed that progress in memory per dollar, and computations per second per dollar were somehow likely to freeze at present values for the next 200 years. But only slightly different, because such a position that technological progress will freeze isn't really credible, absent Terminators, nuclear war, or some other monumental disaster.)
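The storage extrapolation above is just compound halving of price per byte; the implied rate can be sanity-checked in a few lines (the 2024 and 2036 figures are the comment's own claims, not data I'm adding):

```python
import math

# Going from $1/terabyte in 2024 to $1/petabyte in 2036 means capacity
# per dollar grows 1024x over 12 years.
factor = 1024          # terabyte -> petabyte
years = 2036 - 2024    # span claimed above

# Doublings needed = log2(1024) = 10, so one doubling every 1.2 years.
doubling_time = years / math.log2(factor)
print(f"implied doubling time: {doubling_time:.1f} years")  # 1.2 years
```

That implied doubling time is in the same ballpark as the historical flash-memory trend the comment appeals to, which is why the extrapolation is at least internally consistent.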

http://www.singularity.com/...

If Robin really thinks he's right, and human-level AI capable of performing most jobs that humans currently do is 200-400 years out, he should be making his case in various academic and public forums, because it's an important question. But the crowd thinks the number is more like 20-40 years out. And there's little doubt in my mind*** that the crowd is much more likely to be correct.

***Having read both members of the crowd who promote timelines similar to the crowd's, such as Andrew McAfee and Ray Kurzweil, and Robin's separate analysis.

P.S. Where Robin could and should have taken issue with Martin Ford's claims is whether computers taking jobs causes the economy to collapse...or to expand.

Because they don't know your method for accurately predicting the rate of progress in AI?

What exactly is wrong with that explanation? Is this more than an accusation of hubris?

It's hard to deny that it's discouraging that leading researchers in AI think they're competent to make futurological predictions without understanding something as basic as the biased nature of the inside view. This apparent blindness might lend credence to the Yudkowsky view that folks need training in general rationality--except that E.Y. (and even Robin's disciple, Grace) use the inside view without compunction.

[I wonder how these experts respond when confronted with the methodology and results of Hanson's survey.]

[But it could be that I'm underestimating the cleverness of Hanson's survey methodology.]

So the bottom line is that you're right, and human-level artificial intelligence of the type that can perform the majority of jobs that exist in the U.S. today will not come for 200-400 years? And virtually the entire AI research community is wrong? Because they don't know your method for accurately predicting the rate of progress in AI?

I’m curious about whether you read the rest of the Reason issue in which your book review was located. Did you read the interview of Andrew McAfee? Here’s a question and a response from him:

http://reason.com/archives/...

reason: In the next, say, five to 10 years, what are the first jobs to go?

McAfee: One of the quickest ones to me looks like different flavors of customer service reps, where they're using their language skills. They're using their pattern-matching skills. Our technologies are really, really good at both of those right now. They're going to get worlds better over the next five to 10 years, so people doing that kind of knowledge work, I think, are going to face some unemployment headwinds.

Depending on the regulatory environment, I think a highly functional, autonomous vehicle is easily in that timeframe, so we have a lot of people who drive for a living now who are going to be confronted by automation.

I think if a piece of technology is not already the world's best medical diagnostician, it easily will be in five or 10 years. Now, I don't know if, again, there are going to be regulatory policy changes that would allow that technology to diffuse. But if that happens, we've got a lot of people who diagnose us for a living who are going to be confronted by technology that does it better.

How was the McAfee interview compatible with your assessment of the likely rate of progress in AI (i.e., that human-level AI won’t come for 200-400 years)?

Or this from the same issue:

http://reason.com/archives/...

“Speaking at the conservative American Enterprise Institute in March, Bill Gates hinted that a little freaking out might be in order: ‘Software substitution, whether it's for drivers or waiters or nurses, [is] progressing... Twenty years from now, labor demand for lots of skill sets will be substantially lower. I don't think people have that in their mental model.’"

How was that compatible with your assessment of the likely rate of progress in AI?

And if Andrew McAfee’s and Bill Gates’ views are not compatible with yours, why are they wrong? Because they haven’t done your informal surveys?

"Inside view" isn't the "view of insiders."

[Added.] See "Beware of inside view" ( http://www.overcomingbias.c... )

As I've explained to you before, experts are more trustworthy when asked to describe past rates of progress in their narrow subfield, than when asked to forecast future rates of progress in very large areas, most of which they don't know much about.

I advised Robin to ask himself: "Why do experts in AI have such dramatically different estimates for the likely arrival date of human-level AI? Are virtually all of them wrong, or am I?"

Stephen Diamond responds: "He answers that question: the inside view biases their estimates. The bias created by the inside view is to underestimate the time because of overparticularization."

But he arrives at his estimate by his interpretation of the answers of people who are very much AI "insiders". And he touts his own time in AI research.

Again, he should explain why virtually the entire AI community is wrong, and he is right. Just to take an example, Ray Kurzweil thinks that a computer will pass the Turing Test by 2029. Robin should explain why Ray Kurzweil is going to be very, very wrong, and it will be more like 200-400 years before a computer passes the Turing Test.

He answers that question: the inside view biases their estimates. The bias created by the inside view is to underestimate the time because of overparticularization.

[I think engineers may be particularly inclined to take an inside view; and Robin might not stress the issue enough because he (like them) tends to be a near-mode supremacist, while the outside view is far mode.]

I think your AI progress estimates use a very flawed methodology. And you ought to be able to see that in part by asking the question, "Why do experts in AI have such dramatically different estimates for the likely arrival date of human-level AI? Are virtually all of them wrong, or am I?"

"So? That doesn't mean it's not income from labor."

It's relevant to the claim that the very rich don't pay a high effective tax rate (usually lower than someone who receives an actual salary of say $100k). This is true regardless of whether or not you regard the increase in value of shares and stock options you own as income you worked for.

"Piketty isn't exactly a reliable source. He doesn't understand basic supply and demand"

Unlike Deirdre, I have read every word of the book, and she quotes him out of context, so to speak. Piketty uses the example that if housing prices in a city were really high because of scarcity, then eventually more and more people would find ways to live outside the city (or, when oil becomes more expensive, investments in green energy will increase and eventually demand for oil will fall), a process that can take decades, but it will happen. (And if it doesn't, that only makes Piketty's r > g argument stronger: Piketty was actually anticipating a criticism of his own conclusions here; he doesn't believe any eventual lapse in demand would play a significant role!) It is striking that Deirdre tells her story in an Econ 101 class, because only there would (almost) all of the students think of highly simplified rules without knowing the many hidden assumptions behind them (applying the rule of supply and demand to the long term requires the assumption of a static world).
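The r > g argument itself is just compounding: if the return on capital (r) persistently exceeds economy-wide growth (g), the wealth-to-income ratio drifts upward. A minimal sketch, with rates chosen purely for illustration (not Piketty's own figures):

```python
# Illustrative compounding of r > g: wealth compounds at r while
# national income compounds at g, so wealth/income keeps rising.
r, g = 0.05, 0.015          # assumed return on capital vs. growth rate
wealth, income = 4.0, 1.0   # start with wealth at 4x annual income

for year in range(50):
    wealth *= 1 + r
    income *= 1 + g

print(f"wealth/income after 50 years: {wealth / income:.1f}x")
```

Under these assumed rates the ratio roughly quintuples over 50 years, which is the mechanical core of the claim that even a slow, decades-long demand adjustment doesn't blunt the argument.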

Second, Deirdre keeps attacking Piketty as a Marxist and as being too mathematical, both of which she attributes to his French economics background. If she had actually read the book she'd have known Piketty distances himself from Marx, gives several examples of where he thinks Marx was completely wrong, and criticizes Marx for not using or compiling statistics to make his points. Piketty was himself never a Marxist; Deirdre was, and is now a conservative Christian libertarian. It seems to me she is projecting her own inability to relativize her view of reality, and her own need to seek out extremist crowds to belong to, onto Piketty and Piketty's readers. Piketty did in fact work in both the UK and the US (does Deirdre know this?), and in the book he actually criticizes American economists for being preoccupied with complicated mathematical models that try to answer questions that are not that important. His book reads quite smoothly; there's not a single derivative or integral in it.

Finally, Deirdre says that there was an increase in inequality in only 3 countries (the US, the UK, and Australia). This is patently untrue: Piketty found an increase in inequality, for both wealth and income, in all the countries he had data for (which even included Sweden). For the majority of the countries in the world there was not enough data available, but these are mostly small developing countries.

Oh, wait, I'm not done yet. Deirdre seems to think Piketty wants to tax the rich to give to the poor. Near the end of the book he actually explains that the taxes he proposes would only initially provide significant revenue. That revenue is not the goal of those taxes. The actual goals are a) record keeping of all income and capital and b) stopping huge fortunes from accruing in the first place (they are essentially sin taxes). Where does this misunderstanding come from? Well, she didn't read the book, but most of all Deirdre does not see economic inequality, no matter how high, as a problem in itself. Piketty (like myself) sees it as a threat to democracy, and also to the civil and economic freedoms of the non-rich. Piketty articulated this very clearly in an interview: he said Bill Gates once called him and said he would prefer taxation of consumption (which is environmentally friendly and could be progressive, if you exclude the very rich, the 0.1%). Piketty then asked him what "consumption" means for someone like Bill Gates. Piketty said something like, "For normal people consumption means buying groceries; for you it means buying political influence."

Amazingly Deirdre claims pro-rich legislation in the UK caused an increase in inequality there, but doesn't make the connection that perhaps pro-rich legislation is positively correlated with political donations and other forms of influence by the rich, which are at least proportional to economic inequality.

That is why I asked people if they'd seen progress accelerate. They haven't on average.

Click on the link and find out.

What's the other 10%?

Entrepreneurial income is not taxed as income from labor

So? That doesn't mean it's not income from labor. In fact, that's exactly what it is. When Bill Gates became a billionaire at a young age, that wealth came from his labor working to build Microsoft.

Piketty isn't exactly a reliable source. He doesn't understand basic supply and demand.

Billionaires receive their income virtually entirely in forms that are not taxed as income from labor

Care to supply a source, or are you just arguing by assertion? I have supplied a source. As has Stephen Diamond. Both sources contradict your assertion.

which in most countries means being taxed lower than income from labor

Since all investments are made with post-tax money, all capital gains are a double tax. Thus those who pay capital gains pay a higher tax rate than those who have only wage incomes. In other words, your claim is false.
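Whether the combined burden comes out above or below a pure wage earner's depends entirely on how you count and on the rates assumed. A toy calculation (all rates and the growth figure are illustrative assumptions, not any real country's schedule) makes the moving parts of the "double tax" arithmetic explicit:

```python
# Toy comparison: a wage earner who invests post-tax wages and later
# pays capital gains tax on the growth. All numbers are illustrative.
wage = 100_000
income_tax_rate = 0.30   # assumed flat tax on wages
cap_gains_rate = 0.15    # assumed rate on realized gains

after_tax = wage * (1 - income_tax_rate)   # 70,000 left to invest
growth = after_tax * 0.50                  # suppose holdings grow 50%
cap_gains_tax = growth * cap_gains_rate    # tax paid on the gain

total_tax = wage * income_tax_rate + cap_gains_tax
total_earnings = wage + growth
effective_rate = total_tax / total_earnings
print(f"combined effective rate: {effective_rate:.1%}")
```

Note that both sides of the thread can point at this arithmetic: the investor is taxed twice on the same original dollars, yet the combined effective rate relative to total earnings can still land below or above the flat wage rate depending on the rates plugged in.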

Entrepreneurial income is not taxed as income from labor. According to Piketty, the so-called "wages" of the richest wage earners (who are not the richest people in the world but can fall within the 0.1%) mostly consist of stock options (taxed as capital gains when cashed). Billionaires receive their income virtually entirely in forms that are not taxed as income from labor (which in most countries means being taxed lower than income from labor, even without the use of tax avoidance constructions or tax havens). This isn't really surprising: not even the largest corporations in the world pay their executives enough to become billionaires.

It doesn't look like there's any significant difference between the two graphs. "Business" is clearly "entrepreneurial income," and a large part of that "investment" breakdown likely is as well.
