AI Progress Estimate

From ’85 to ’93 I was an AI researcher, first at the Lockheed AI Center, then in the NASA Ames AI group. In ’91 I presented at IJCAI, the main international AI conference, on a probability-related paper. Back then this was radical – one questioner at my talk asked “How can this be AI, since it uses math?” Probability specialists had created their own AI conference, UAI, to have a place to publish.

Today probability math is well accepted in AI. The long AI battle between the neats and the scruffs was won handily by the neats – math and theory are widely accepted today. UAI is still around, though, and a week ago I presented another probability-related paper there (slides, audio), on our combo prediction market algorithm. Listening to all the other talks at the conference let me reflect on the state of the field, and its progress over the last 21 years.

Overall I can’t complain much about emphasis. I saw roughly the right mix of theory vs. application, of general vs. specific results, etc. I doubt the field would progress more than a factor of two faster if such parameters were exactly optimized. The most impressive demo I saw was Video In Sentences Out, an end-to-end integrated system for writing text summaries of simple videos. Their final test stats:

Human judges rated each video-sentence pair to assess whether the sentence was true of the video and whether it described a salient event depicted in that video. 26.7% (601/2247) of the video-sentence pairs were deemed to be true and 7.9% (178/2247) of the video-sentence pairs were deemed to be salient.

This is actually pretty impressive, once you understand just how hard the problem is. Yes, we have a long way to go, but are making steady progress.

So how far have we come in the last twenty years, compared to how far we have to go to reach human-level abilities? I’d guess that relative to the starting point of our abilities of twenty years ago, we’ve come about 5-10% of the distance toward human-level abilities, at least in the probability-related areas I know best. I’d also say there hasn’t been noticeable acceleration over that time. Over a thirty-year period, it is even fair to say there has been deceleration, since Pearl’s classic ’88 book was such a big advance.

I asked a few other folks at UAI who had been in the field for twenty years to estimate the same things, and they roughly agreed – about 5-10% of the distance has been covered in that time, without noticeable acceleration. It would be useful to survey senior experts in other areas of AI, to get related estimates for their areas. If this 5-10% estimate is typical, as I suspect it is, then an outside view calculation suggests we probably have at least a century to go, and maybe a great many centuries, at current rates of progress.
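As a minimal sketch of the outside-view arithmetic behind that “at least a century” figure (assuming, as the post does, steady linear progress at the rate of the last twenty years):

```python
# Outside-view extrapolation: if a fraction `p` of the distance to
# human-level abilities was covered in `elapsed` years at a steady rate,
# the remaining distance takes (1 - p) / (p / elapsed) years.
def years_remaining(p, elapsed=20):
    rate = p / elapsed        # fraction of the distance covered per year
    return (1 - p) / rate     # years left at that rate

remaining_optimistic = years_remaining(0.10)   # 10% in 20 years -> ~180 more years
remaining_pessimistic = years_remaining(0.05)  # 5% in 20 years  -> ~380 more years
```

Even the optimistic end of the 5-10% range implies well over a century more; the pessimistic end implies several.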

Added 21Oct: At the recent Singularity Summit, I asked speaker Melanie Mitchell to estimate how far we’ve come in her field of analogical reasoning in the last twenty years. She estimated 5% of the way to human-level abilities, with no noticeable acceleration.

Added 11Dec: At the Artificial General Intelligence conference, Murray Shanahan said that, looking at his twenty years’ experience in the knowledge representation field, he estimates we have come 10% of the way, with no noticeable acceleration.

Added 4Oct’13: At an NSF workshop on social computing, Wendy Hall said that in her twenty years in computer-assisted training, we’ve moved less than 1% of the way to human-level abilities. Claire Cardie said that in her twenty years in natural language processing, we’ve come 20% of the way. Boi Faltings said that in his field of solving constraint satisfaction problems, they were past human-level abilities twenty years ago, and are even further past that today.

Let me clarify that I mean to ask people about progress in a field of AI as it was conceived twenty years ago. Looking backward one can define areas in which we’ve made great progress. But to avoid selection biases, I want my survey to focus on areas as they were defined back then.

Added 21May’14: At a private event, after Aaron Dollar talked on robotics, he told me that in twenty years we’ve come less than 1% of the distance to human level abilities in his subfield of robotic grasping manipulation. But he has seen noticeable acceleration over that time.

Added 28Aug’14: After coming to a talk of mine, Peter Norvig told me that he agrees with both Claire Cardie and Boi Faltings: on speech recognition and machine translation we’ve gone from not usable to usable in twenty years, though we still have far to go on deeper question answering; and for retrieving a fact or page relevant to a search query, we’ve far surpassed human ability in recall and do pretty well on precision.

Added 14Sep’14: At a closed academic workshop, Timothy Meese, who researches early vision processing in humans, told me he estimates about 5% progress in his field in the last 20 years, with a noticeable deceleration.

Added 4Jan’15: At a closed meeting, Francesca Rossi, an expert in constraint reasoning, gave an estimate of 10%, with deceleration. Margaret Boden, author of Artificial Intelligence and Natural Man (1977), estimated 5%, but for no particular subfield.

Added 6July’15: David Kelley, an expert in big data analysis, says 5% in the last twenty years, and sees acceleration only in the last 2-3 years, not before that.

Added 18Apr’16: Henry Kautz says that in constraint satisfaction we were at human level 20 years ago and have moved to superhuman levels now. In language, he says we’ve moved 10% of the way, with noticeable acceleration in the last five years.

Added 13July2016: Jeff Legault says that in robotics we’ve come 5% of the way in the last 20 years, with acceleration only in the last five years.

Added 08Sept2017: Thore Husfeldt says that in the field of human understandable explanation, we have come less than 0.5% of the distance.

  • HM

    I’m an econ and stats student who came across Pearl’s book on causality, which I found fascinating. As someone who has spent a lot of time in both AI and economics, how much do you think the latter could gain from using Pearl’s formalized approach to discussing causality?

    • Robin Hanson

      At the conference, Pearl was furious at economists for not using his approach to causality. He’s probably right that economists could gain a lot, though those gains would come at the cost of doing things a bit differently.

      • Will

        How do you propose this would work? What sort of data would you need, and what sort of analysis would you conduct on that data? The instrumental variables approach seems to me to be an application of Pearlean causality.

    • Ilya Shpitser

      Some economists know and understand notions of causality that are either identical to the one defended by Pearl or comparable with Pearl’s in various ways.

      Econometricians apparently (this is second hand knowledge, I am not an expert on econometrics) understand “the instrumental variable model” very well.

      At the conference, Pearl seemed to think the problem is poor textbooks. I distinctly remember Pearl railing at statisticians — he doesn’t do that anymore (he points out there are now many causality talks at the Joint Statistical Meetings).

  • MileyCyrus

    How did you quantify something like the distance toward human AI levels?

    • Robin Hanson

      How do you quantify something? By attaching a number to it.

      • praxtime

        Wondering about the linear progress for the past 30 years. That seems surprising, especially if you go back to the 1970s, when hype and progress were even further out of line. If you go back that far, you could argue it’s more upward/exponential than linear. Do you think linear progress is more likely to continue, or could the linear progress be a temporary slowdown on a longer exponential trend (at least for the next 50 years)?

      • Look back even further. AI progress seems very bumpy: a useful new technique is developed, hyped, and thoroughly exploited. The new technique doesn’t live up to its hype, but a new tool is added to the toolbox; then slow progress until the next useful technique is found. Overall, looking at it from the earliest AI, in the 1950s, Robin’s projections look more reasonable.

      • V V

         Linear progress seems a reasonable historical trend.

        Considering that hardware resources have grown exponentially so far, this is intuitively plausible, since typical AI problems are at least NP-hard, and such problems are conjectured to require superpolynomial (roughly exponential) time.

        Of course AI progress didn’t happen just by throwing more clock cycles at the problems; algorithms also got much better, and lots of domain-specific heuristics have been developed. But it seems to me that Moore’s law was still the main driving force behind these improvements.

        Many sophisticated modern algorithms, like Monte Carlo methods and machine learning over large data sets, would have been completely impractical on hardware from ten years ago.
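V V’s argument can be made concrete with a toy model (my own illustration, not from the thread): if solving a problem of size n costs on the order of 2^n operations, and available operations double every two years per Moore’s law, then the solvable problem size grows only linearly in time.

```python
import math

def solvable_size(years, base_ops=1e9, doubling_years=2.0):
    """Largest problem size n with 2**n <= available operations,
    when operations start at base_ops and double every doubling_years."""
    ops = base_ops * 2 ** (years / doubling_years)
    return math.log2(ops)

# Exponential hardware growth buys only linear growth in n:
# each decade adds the same 10 / 2 = 5 units to the solvable size.
decade_gains = [solvable_size(t + 10) - solvable_size(t) for t in (0, 10, 20)]
```

An exponentially growing resource applied to an exponentially costly problem yields steady linear headway, which fits the constant-rate progress estimates reported in the post.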

      • We use numbers to refer to something. Ordinary “distance” comes in units like feet or meters, but the units are much less clear for scientific research. You have sometimes spoken in terms of “insights”: Eliezer believes there are a few “laws” of intelligence to be figured out, while you believe there are a vast number of tricks learned by our evolved brains. If there are a large number of roughly equally important insights, each similarly difficult (in time and/or resources) to discover, it makes sense to say we’ve come a certain percent of the way, and so to expect a certain number of man-hours until completion. But under a view like Eliezer’s, much less can be estimated.

  • Ilya Shpitser

    I agree that Video In Sentences Out was impressive.  I especially liked that while it used computer vision technology, the paper itself wasn’t quite a vision paper — it was building abstractions on top of vision algorithms, with those algorithms as a primitive.  To me, that’s a real sign of progress (people often complain that “mathy AI” has fragmented into a lot of pieces with very little synthesis work that builds impressive systems out of those pieces).

    I do feel, however, that impressiveness of vision demos may lead one to overstate progress towards general AI.

    Robin, have you asked statisticians how far away they feel general AI is? Statistics is a field that’s been around longer than computer science itself, and is concerned with “learning” (though without the founders’ bias towards building a general “learner”). Perhaps they have a perspective to offer on your question that AI is missing.

    It was a pleasure meeting you in person.

  • adrianratnapala

    I’m not an AI researcher, but I know people who are. And when I talk to them about their work, I get the impression that they end up stuck building what is essentially a better optimiser for a domain-specific fitness value. I complain that real intelligence seems more open-ended than that.

    The usual reaction is something along the lines of “Well, yes, that’s true, but what the hell else can we do? Do you have any actual positive suggestions?” And of course I don’t.

    Is there any sign of breaking out of this kind of game? Also, is the world stuck in it for some fundamental reason, or just because the “neats” have grown over-mighty?

    • V V

        they end up stuck building what is essentially a better optimiser for domain specific fitness value. I complain that real intelligence seems more open-ended than that.

      The human brain appears to be essentially a large collection of domain-specific modules, with some degree of flexibility.
      There are also theoretical arguments (the no free lunch theorems, for instance) that rule out overly general optimizers.

      • adrianratnapala

        I think your point underlines my unease.

        These theoretical arguments are probably why my friend is right to say “…but what the hell else can we do.” But if the brain is a collection of domain-specific modules, even if those modules could be thought of as optimisers, that doesn’t mean the system as a whole is also an optimiser.

        The whole system is just something that was plonked into existence by evolution, and the problem of propagation in the real world is a very open ended one — and not really an optimisation problem.  

      • V V

        the problem of propagation in the real world is a very open ended one

        What do you mean exactly?

  • Doug

    If you consider the AI sub-field of machine learning then results have drastically accelerated in the past 20 years. Extraordinarily powerful concepts like bagging, boosting, cross-validation, L1 regularization, ensemble methods, stacking, and the kernel trick are all barely more than a decade old. Even in the past 5 years you have the extraordinarily promising field of deep learning, stochastic gradient methods, deep insights into hyper-parameter selection and much better understanding of feature selection and engineering.

    Compare to the state of machine learning in 1991, when the state of the art was low-quality backpropagation neural networks, naive Bayes, and single trees. I would say that in 1991, machine learning would outperform baseline linear/logistic regressions only 90% of the time.

    So why is this relevant? Machine learning is clearly advancing much faster than the rest of AI. (The video-to-text result you cite is heavily dependent on machine learning.) It was a tiny fraction of AI as a whole, but is increasing its share because of its effectiveness at solving increasingly difficult problems.

    So whereas AI as a whole may only be covering 5% of the distance every 20 years, the machine learning subfield is progressing at a much faster rate. Eventually it will consume a larger and larger proportion of problems in AI, until it has a significant effect on the rate of AI development. It’s not unreasonable to expect AI to start accelerating significantly from here on out, due to machine learning consuming the field.

    • Robin Hanson

      One model would be that at any one time there is a particular subfield of AI with especially rapid progress, and that perhaps half of total progress is due to such bursts. But if each decade it is a different field that is bursting, it will still take a long time to reach the goal. Is there evidence that machine learning will continue to have rapid progress in the coming decade? Or is its burst done?

      • Carl Shulman

        “One model would be that at any one time there is a particular subfield of AI with especially rapid progress, and that perhaps half of total progress is due to such bursts”

        Do you claim this with respect to the past record of AI? Which subfields would you assign to which periods?

      • Robin Hanson

        I posted here about the subfield I know best. I’m not making claims about other subfields, but would like to encourage experts in those subfields to report comparable evaluations.

  • kurt9

    “If this 5-10% estimate is typical, as I suspect it is, then an outside view calculation suggests we probably have at least a century to go, and maybe a great many centuries, at current rates of progress.”

    This certainly puts paid to the fantasies of the singularity advocates.

    Based on what I have read in this field, I think such a time estimate is correct. The foreseeable future is biological (meaning bio-engineering and radical life extension, not AI or uploading).

    • “This certainly puts paid to the fantasies of the singularity advocates.”
      Um… it’s not like other people in the AGI field are children, and Robin is an adult adjudicating. There are many serious researchers in the field of AGI (Voss, Arel, etc.) who believe the goal will be achieved much, much sooner.

      • Pablo

        How many of these optimistic predictions are the result of outside-view calculations?

      • V V

         Historically, AGI predictions even by serious researchers tended to be wrong, and there is no evidence that we are at some specific point in time that allows AI researchers to make better predictions than before.

        Of course, Hanson’s prediction might be also wrong, so I think it’s better to just admit our ignorance and say that we have no idea about when and if AGI will be created.

        The only thing we can say with relative confidence is that human-level intelligence is physically possible, and probably computable, just because humans are physical systems and the laws of physics appear to be computable.

      • Carl Shulman

        “There are many serious researchers in the field of AGI (Voss, Arel, etc.) who believe the goal will be achieved much, much sooner.”

        That’s the left tail of expert opinion, not the median.

  • kurt9

    Human-level AI is not necessary for the things we want to do. What is necessary is developments in manufacturing capabilities (3D printing/additive manufacturing, manufacturing robotics, etc.) that break the capital-cost escalation, so that small self-interested groups can do what only big companies and governments can do today.

    • Carl Shulman

      “Human-level AI is not necessary for the things we want to do.”

      Which “things” and which “we” do you mean?

  • arch1

    I can’t tell from my quick look whether it would have been possible for the “Video in Sentences Out” judges to blindly rate a mix of sentences from the AI system and from humans.  If so, I think that would have produced more objective, informative and interesting results.


  • I’ve just added to this post.


  • WS Warthog

    “we’ve come less than 1% of the distance to human level abilities in his subfield of robotic grasping manipulation”

    Does he mean this sort of thing:

    Seems to me 1% is an implausibly low estimate.


  • Neat-seeking Missile

    Any new updates on analogical reasoning, based on recent progress in natural language understanding? Vector arithmetic in NLP and generative adversarial networks seem like advances in that direction, though I’d put it at less than a 15% advance.
