31 Comments
Tim Tyler:

This was 10 years ago. Now, if we like, we can review the situation with the wisdom of hindsight.

Neat-seeking Missile:

Any updates on analogical reasoning based on recent progress in natural language understanding? Vector arithmetic in NLP and generative adversarial networks seem like advances in that direction, though I'd put it at less than a 15% advance.
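For what it's worth, here is a toy sketch of the vector-arithmetic idea, with made-up 3-d embeddings standing in for the vectors a real system would learn from a large corpus (every number below is invented for illustration):

```python
import numpy as np

# Made-up 3-d "embeddings"; real systems learn these from large corpora.
emb = {
    "king":  np.array([0.8, 0.7, 0.1]),
    "man":   np.array([0.6, 0.1, 0.1]),
    "woman": np.array([0.6, 0.1, 0.9]),
    "queen": np.array([0.8, 0.7, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The classic analogy: king - man + woman should land nearest to queen.
target = emb["king"] - emb["man"] + emb["woman"]
candidates = [w for w in emb if w not in ("king", "man", "woman")]
print(max(candidates, key=lambda w: cosine(emb[w], target)))  # queen
```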

WS Warthog:

"we’ve come less than 1% of the distance to human level abilities in his subfield of robotic grasping manipulation"

Does he mean this sort of thing: https://www.youtube.com/wat...

Seems to me 1% is an implausibly low estimate.

RobinHanson:

I've just added to this post.

Overcoming Bias Commenter:

"There are many serious researchers in the field of AGI (Voss, Arel, etc.) who believe the goal will be achieved much, much sooner."

That's the left tail of expert opinion, not the median.

Overcoming Bias Commenter:

"Human-level AI is not necessary for the things we want to do."

Which "things" and which "we" do you mean?

TGGP:

We use numbers to refer to something. Ordinary "distance" comes in units like feet or meters, but it's much less clear what the units are in scientific research. You have sometimes spoken in terms of "insights", where Eliezer believes there are a few "laws" of intelligence to be figured out, but you believe there are a vast number of tricks learned by our evolved brain. If there are a large number of insights that are roughly equally important and equally difficult (in time and/or resources) to discover, it might be intuitive to say we've come a certain percent of the way, and so to expect a certain number of man-hours until completion. But under a view like Eliezer's, much less can be estimated.
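To make the contrast concrete, here is a minimal sketch of the extrapolation that the "many equally hard insights" view licenses; all the numbers are invented for illustration:

```python
# All numbers below are invented for illustration.
insights_needed = 1000   # assume many roughly equal insights are required
insights_found  = 10     # assume we are ~1% of the way there
years_elapsed   = 60     # years of AI research so far

rate = insights_found / years_elapsed                  # insights per year
years_left = (insights_needed - insights_found) / rate
print(f"~{years_left:.0f} more years at the historical rate")  # ~5940
```

On the few-deep-laws view, no such linear extrapolation is available: a single missing law could be found tomorrow or in a century.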

arch1:

I can't tell from my quick look whether it would have been possible for the "Video in Sentences Out" judges to blindly rate a mix of sentences from the AI system and from humans. If so, I think that would have produced more objective, informative, and interesting results.
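A blinded protocol along those lines is easy to sketch; the clips and sentences below are hypothetical placeholders:

```python
import random

# Hypothetical sentence pools for the same set of video clips.
ai_output = [("clip1", "A person picks up an object."),
             ("clip2", "A dog moves across a yard.")]
human_written = [("clip1", "A man lifts a red ball off the table."),
                 ("clip2", "A small dog sprints across the lawn.")]

# Pool the sentences, remembering the source only for later scoring.
pool = [(clip, s, "ai") for clip, s in ai_output] + \
       [(clip, s, "human") for clip, s in human_written]
random.shuffle(pool)  # judges see the items in random order, unlabeled

for clip, sentence, _hidden_source in pool:
    print(clip, sentence)  # judge rates each (clip, sentence) pair blind
# Afterward, unblind and compare mean ratings for "ai" vs "human".
```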

Overcoming Bias Commenter:

"Linear progress seems a reasonable historical trend."

Considering that hardware resources have grown exponentially so far, this is intuitively plausible, since typical AI problems are at least NP-hard, and NP-hard problems are conjectured to require superpolynomial (roughly exponential) time.

Of course, AI progress didn't happen just by throwing more clock cycles at the problems: algorithms also got much better, and lots of domain-specific heuristics have been developed. But it seems to me that Moore's law was still the main driving force behind these improvements.

Many sophisticated modern algorithms, like Monte Carlo methods and machine learning over large data sets, would have been completely impractical on hardware from ten years ago.
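To make the exponential-hardware-yields-linear-progress point concrete: if compute doubles every d years and a problem of size n costs about 2^n operations, then the largest feasible n grows only linearly with calendar time. A minimal sketch with illustrative constants:

```python
import math

# Illustrative constants, not measurements.
doubling_years = 2.0     # assumed compute doubling time (Moore's law)
ops_per_sec_t0 = 1e9     # assumed compute available in year 0
budget_seconds = 3600.0  # assumed time budget per problem instance

def feasible_size(year):
    """Largest n with 2**n total operations solvable within the budget."""
    ops = ops_per_sec_t0 * 2 ** (year / doubling_years)
    return math.floor(math.log2(ops * budget_seconds))

for year in (0, 10, 20, 30):
    print(year, feasible_size(year))
# Prints 41, 46, 51, 56: exponential hardware buys only linear growth in n.
```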

Robin Hanson:

I posted here about the subfield I know best. I'm not making claims about other subfields, but would like to encourage experts in those subfields to report comparable evaluations.

Overcoming Bias Commenter:

Historically, AGI predictions even by serious researchers have tended to be wrong, and there is no evidence that we are at some specific point in time that allows AI researchers to make better predictions than before.

Of course, Hanson's prediction might also be wrong, so I think it's better to just admit our ignorance and say that we have no idea when, or whether, AGI will be created.

The only thing we can say with relative confidence is that human-level intelligence is physically possible, and probably computable, just because humans are physical systems and the laws of physics appear to be computable.

Overcoming Bias Commenter:

"the problem of propagation in the real world is a very open-ended one"

What do you mean exactly?

Expand full comment
Overcoming Bias Commenter's avatar

"One model would be that at any one time there is a particular subfield of AI with especially rapid progress, and that perhaps half of total progress is due to such bursts"

Do you claim this with respect to the past record of AI? Which subfields would you assign to which periods?

adrianratnapala:

I think your point underlines my unease.

These theoretical arguments are probably why my friend is right to say "...but what the hell else can we do." But if the brain is a collection of domain-specific modules, and even if those modules could be thought of as optimisers, that doesn't mean the system as a whole is also an optimiser.

The whole system is just something that was plonked into existence by evolution, and the problem of propagation in the real world is a very open-ended one, not really an optimisation problem.

Pablo Stafforini:

How many of these optimistic predictions are the result of outside-view calculations?

William Swift:

Look back even further. AI progress seems very bumpy: a useful new technique is developed, hyped, and thoroughly exploited. The technique doesn't live up to its hype, but a new tool is added to the toolbox; then progress is slow until the next useful technique is found. Overall, looking at it from the earliest AI in the 1950s, Robin's projections look more reasonable.
