16 Comments

Compare this to the first nuclear bomb. There were many small improvements, each of which didn't by itself result in a working bomb. The effect of each on the maximum yield of a nuclear device was zero. But put together, they suddenly took us from zero to 22 kilotons of TNT.
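A toy sketch of that conjunctive-threshold idea (component names here are illustrative placeholders, not a real design): yield stays at zero until every required piece works, then jumps all at once.

```python
# Toy model of a conjunctive threshold: each piece is a small improvement
# that is individually useless, but yield jumps once all of them work.
components = {
    "component_a": True,
    "component_b": True,
    "component_c": True,
}

def yield_kilotons(parts):
    """Yield stays at 0 until every component works, then jumps to full yield."""
    return 22.0 if all(parts.values()) else 0.0

print(yield_kilotons(components))                            # 22.0
print(yield_kilotons({**components, "component_b": False}))  # 0.0
```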


It hasn't been refuted by building a full artificial intelligence/consciousness.


Why am I an imbecile? Your answer better not include any assumptions about signalling.


For all intents and purposes you're an imbecile.


So Dreyfus was right about the (hitherto empirically confirmed) "excessive optimism", but wrong about the (hitherto empirically vacuous) metaphysics? So, for all intents and purposes, Dreyfus was right, and AI is precious little more than a trendy, signalling moshpit at today's version of AI Woodstock?


Maybe look in the mirror? These "sentiments" are motivated by the view that human beings are special entities that cannot be replicated by "mere dead matter", a view that has been oft refuted. Dreyfus was right about excessive optimism about how long it would take and how hard it would be to achieve human-level AI, but he was not right about the metaphysics.


So if, for instance, Dreyfus' sentiments about AI turned out to be right (in any run), would it follow that all of this AI hype/speculation/vacuity/representational chest-beating, in the continued absence of agency, etc., turns out to be little more than 'signalling on steroids'? Has it ever crossed your mind, Robin, that your "work" on AI itself constitutes the very thing you go after with signalling theory?


Advances that allow more self-improvement are larger lumps. The claim is that there are some unusually large lumps.


The major difference between progress in AI and any other form of progress is that AI is expected to be self-reinforcing. Under that assumption, there will be some small lump that snowballs into the largest lump ever.

So far we've seen only a few examples of self-reinforcing improvements, and always with small amplification factors. I think the main question is: how many self-reinforcing improvements do we have to see, with what total amplification factor, before we should believe an improvement with a diverging amplification factor is possible? Or are there reasons to expect amplification to always be bounded?
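One way to make that question concrete, a minimal sketch under assumptions of my own choosing: treat each self-reinforcing improvement as multiplying overall capability by an amplification factor. If the factors decay toward 1, the total stays bounded no matter how many improvements accumulate; if they stay above 1, it diverges.

```python
def total_amplification(factors):
    """Multiply out a sequence of self-reinforcing amplification factors."""
    total = 1.0
    for a in factors:
        total *= a
    return total

# Bounded case: per-improvement factors decay toward 1 (1 + 0.5**i),
# so the running product converges no matter how many rounds we add.
bounded = total_amplification(1 + 0.5**i for i in range(1, 51))

# Diverging case: a constant factor above 1 compounds without limit.
diverging = total_amplification(1.1 for _ in range(50))

print(f"decaying factors, 50 rounds: {bounded:.2f}")    # ~2.38, plateaus
print(f"constant 1.1, 50 rounds:     {diverging:.1f}")  # ~117.4, keeps growing
```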


The recent burst of interest in ML is mainly tied to published results. So we can look at the distribution of lumpiness in those published results.


People arguing that ML progress lately has been rapid, or lumpy, have been pointing to published results. So clearly there are enough published results to which to apply my proposed test.
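As a hypothetical illustration of what such a test might look like (a crude metric of my own, not necessarily the exact proposal): measure how concentrated citations are in the top fraction of papers, and compare that share across fields or time windows.

```python
def top_share(citation_counts, frac=0.01):
    """Share of all citations captured by the top `frac` of papers:
    a crude lumpiness measure, where higher means more concentrated."""
    ranked = sorted(citation_counts, reverse=True)
    k = max(1, int(len(ranked) * frac))
    return sum(ranked[:k]) / sum(ranked)

# Made-up citation counts for two hypothetical fields.
smooth_field = [10] * 100         # credit spread evenly across papers
lumpy_field = [900] + [1] * 99    # one dominant result

print(top_share(smooth_field))   # 0.01 -> not lumpy
print(top_share(lumpy_field))    # ~0.90 -> very lumpy
```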


I know that computer science is an academic field, but unlike most academic fields, somewhere close to 0% of the progress in the field happens at universities. And just as you might not expect to read journal articles about recent progress in nuclear warhead miniaturization, you might also find that the potentially scariest leaps forward in software are kept under wraps, for many reasons.

One of these is that all the scary software is probably harvesting more data than its makers are eager to admit. Another is that if you're onto something big, you don't want to tip off your competitors, so that you can more fully leverage your competitive advantage.

I guess I ultimately agree with your conclusion, but computer science is so unlike other academic disciplines that I find this argument by analogy doesn't work.


This causes me to move my estimate of AI developments slightly further out.

It occurs to me that a simpler explanation for why the curve is so consistent is that the overwhelming majority of citations are produced by the same kind of institution, namely academia.

Skimming the paper, I was not able to find any mention of corporate or military research. By way of comparison with another field, I would expect that the lumpiest, most advanced research in marketing consists of trade secrets at places like Amazon and Google. As another example, algorithmic trading is a notoriously opaque branch of computer science. It seems there are strong (monetary) incentives to keep important progress secret, which causes me to doubt the accuracy of citations as a measure.


The claim that AlphaGo Zero is evidence of the new lumpiness is a claim that some relevant threshold has in fact been reached.


I forgot to say that I'm glad you're sounding this note of caution. Also, if I could edit my comment above, I'd change "presumably" to "perhaps".


I thought the idea behind the dominant-system/lumpiness hypothesis was that AI's potential for reflexivity creates a threshold of self-improvement capability beyond which things kind of explode, to a degree unprecedented in other disciplines and eras.

If so, presumably the lack of citation lumpiness to date only implies that this threshold hasn't been reached yet.
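To illustrate that point with a toy model (entirely my own construction): a world with a not-yet-reached reflexivity threshold produces exactly the same progress record as a world with no threshold at all, so smooth citation data can't tell the two apart.

```python
def progress(years, threshold_year=None, growth=1.5):
    """Yearly progress increments: flat steps, with an explosive
    self-improvement regime only after the threshold (if any) is crossed."""
    steps, step = [], 1.0
    for year in range(years):
        if threshold_year is not None and year >= threshold_year:
            step *= growth   # post-threshold: gains compound
        steps.append(step)
    return steps

no_threshold = progress(30)                       # world with no threshold
late_threshold = progress(30, threshold_year=40)  # threshold exists, not yet hit

# The two histories are identical so far, so pre-threshold data
# (e.g., smooth citation records) can't distinguish the hypotheses.
print(no_threshold == late_threshold)  # True
```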
