How Deviant Is Recent AI Progress Lumpiness?

I seem to disagree with most people working on artificial intelligence (AI) risk. While, like them, I expect rapid change once AI is powerful enough to replace most or all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar ability levels would require strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single-system drift suggests that they expect a single main AI system.

The main reason I know of to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., if it arrives in unusually few large packages rather than in the usual many small packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also seen this so far in computer science (CS) and AI, even though there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers whom they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered AlphaGo Zero as evidence of AI lumpiness:

I emphasize how all the mighty human edifice of Go knowledge … was entirely discarded by AlphaGo Zero with a subsequent performance improvement. … Sheer speed of capability gain should also be highlighted here. … you don’t even need self-improvement to get things that look like FOOM. … the situation with AlphaGo Zero looks nothing like the Hansonian hypothesis and a heck of a lot more like the Yudkowskian one.

I replied that, just as seeing an unusually large terror attack like 9/11 shouldn’t much change your estimate of the overall distribution of terror attacks, nor should seeing one big earthquake much change your estimate of the overall distribution of earthquakes, seeing one big AI research gain like AlphaGo Zero shouldn’t much change your estimate of the overall distribution of AI progress. (Seeing two big lumps in a row, however, would be stronger evidence.) In his recent podcast with Sam Harris, Eliezer said:

Y: I have claimed recently on facebook that now that we have seen Alpha Zero, Alpha Zero seems like strong evidence against Hanson’s thesis for how these things necessarily go very slow because they have to duplicate all the work done by human civilization and that’s hard. …

H: What’s the best version of his argument, and then why is he wrong?

Y: Nothing can prepare you for Robin Hanson! Ha ha ha. Well, the argument that Robin Hanson has given is that these systems are still immature and narrow, and things will change when they get general. And my reply has been something like, okay, what changes your mind short of the world actually ending? If your theory is wrong, do we get to find out about that at all before the world does?

(Sam didn’t raise the subject in his recent podcast with me.)

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

The citation distributions of papers published in the same discipline and year lie on the same curve for most disciplines, if the raw number of citations c of each paper is divided by the average number of citations c0 over all papers in that discipline and year. The dashed line is a lognormal fit. …

The probability of citing a paper grows with the number of citations that it has already collected. Such a model can be augmented with … decreasing the citation probability with the age of the paper, and a fitness parameter, unique to each paper, capturing the appeal of the work to the scientific community. Only a tiny fraction of papers deviate from the pattern described by such a model.
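For intuition, here is a toy simulation in the spirit of the kind of model the quoted passage describes: citation probability grows with past citations, decays with paper age, and is scaled by a per-paper fitness. All parameter values, sizes, and decay rates below are invented for illustration; this is not the fitted model from the Science paper.

```python
# Toy citation-growth model: preferential attachment + aging + fitness.
# All parameters are invented; an illustrative sketch only.
import numpy as np

rng = np.random.default_rng(2)
n_papers, n_citation_events = 1000, 20000

fitness = rng.lognormal(mean=0.0, sigma=1.0, size=n_papers)   # per-paper appeal
birth = rng.integers(0, n_citation_events, size=n_papers)     # "publication time"
cites = np.ones(n_papers)                                      # seed count for attachment

for t in range(n_citation_events):
    alive = birth <= t                                         # only published papers can be cited
    age = np.where(alive, t - birth, np.inf)
    # weight ~ (citations so far) * fitness * exp(-age / decay)
    weights = np.where(alive, cites * fitness * np.exp(-age / 5000.0), 0.0)
    total = weights.sum()
    if total == 0:
        continue
    chosen = rng.choice(n_papers, p=weights / total)
    cites[chosen] += 1

c = cites - 1                                                  # remove the seed counts
print("mean citations:", c.mean(), " max/mean:", c.max() / max(c.mean(), 1e-9))
# Even this toy version produces a heavy-tailed citation distribution; the
# quoted paper's claim is that, once divided by the mean, real fields'
# distributions look much alike.
```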

It seems to me quite reasonable to expect that fields where real research progress is lumpier would also display a lumpier distribution of citations. So if CS, AI, or ML research were much lumpier than research in other areas, we should expect to see that in citation data. Even if your hypothesis is that only ML research is lumpier, and only in the last 5 years, we should still have enough citation data to see that. My expectation, of course, is that recent ML citation lumpiness is not much bigger than in most research fields through history.
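For concreteness, here is a minimal sketch of one way to run such a comparison, assuming one has per-paper citation counts grouped by field and year. The Gini and top-1% measures are just illustrative choices of lumpiness statistics, and the field names and citation counts are made-up placeholders, not real data.

```python
# Sketch of the proposed test: normalize each field's per-paper citation
# counts by that field's mean, then compare how concentrated ("lumpy")
# the normalized distributions are. The data below are fake placeholders.
import numpy as np

def gini(x):
    """Gini coefficient: 0 = perfectly even, near 1 = extremely concentrated."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    cum = np.cumsum(x)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

def lumpiness(citations):
    c = np.asarray(citations, dtype=float)
    c = c / c.mean()                                   # per-field normalization, as in the paper
    top = np.sort(c)[::-1][: max(1, len(c) // 100)]    # top 1% of papers
    return {"gini": gini(c), "top1%_share": top.sum() / c.sum()}

# Fake per-paper citation counts for two "fields"; real data would come
# from a citation database.
rng = np.random.default_rng(0)
fields = {
    "recent_ML": rng.lognormal(mean=1.0, sigma=1.3, size=5000),
    "chemistry": rng.lognormal(mean=1.0, sigma=1.2, size=5000),
}
for name, cites in fields.items():
    print(name, lumpiness(cites))
# If recent ML were exceptionally lumpy, its Gini and top-1% share would
# stand out after normalization; my expectation is that they would not.
```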

Added 24Mar: You might save the hypothesis that research areas vary greatly in lumpiness by postulating that the number of citations of each research advance goes as the rank of the “size” of that advance, relative to its research area. The distribution of ranks is always the same, after all. But this would be a surprising outcome, and hence seems unlikely; I’d want to see clear evidence that the distribution of lumpiness of advances varies greatly across fields.
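To see why that rank-based story would mask differences in lumpiness, here is a small illustrative simulation; the two “size” distributions and the citations-by-rank rule are invented for the example.

```python
# If citations depend only on an advance's within-field rank, two fields with
# very different "size" (lumpiness) distributions yield identical citation
# distributions. All numbers here are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

sizes_mild = rng.exponential(scale=1.0, size=n)        # thin-tailed advances
sizes_lumpy = rng.pareto(a=1.1, size=n) + 1.0          # very heavy-tailed advances

def citations_by_rank(sizes):
    """Assign citations purely by within-field rank of 'size'."""
    ranks = np.argsort(np.argsort(-sizes)) + 1         # 1 = largest advance
    return 1000.0 / ranks                               # same rank -> same citation count

cites_mild = citations_by_rank(sizes_mild)
cites_lumpy = citations_by_rank(sizes_lumpy)

# The underlying size distributions differ hugely in lumpiness...
print(sizes_mild.max() / sizes_mild.mean(), sizes_lumpy.max() / sizes_lumpy.mean())
# ...yet the resulting citation distributions are exactly the same multiset.
print(bool(np.allclose(np.sort(cites_mild), np.sort(cites_lumpy))))   # True
```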

Added 27Mar: More directly relevant might be data on distributions of patent value and citations. Do these distributions vary by topic? Are CS/AI/ML distributed more unequally?

  • arch1

    I thought the idea behind the dominant system/lumpiness hypothesis is that AI’s potential for reflexivity creates a threshold of system self-improvement capability beyond which things kind of explode to a degree unprecedented in other disciplines and eras.

    If so, presumably the lack of citation lumpiness to date only implies that this threshold hasn’t been reached yet.

    • arch1

      I forgot to say that I’m glad you’re sounding this note of caution. Also, if I could edit my comment above, I’d change “presumably” to “perhaps”.

    • The claim that AlphaGo Zero is evidence of the new lumpiness is a claim that some relevant threshold has in fact been reached.

  • Riothamus

    This causes me to move my estimation of AI developments slightly further out.

    It occurs to me that a simpler explanation for why the curve is so consistent is that the overwhelming majority of citations are produced by the same kind of institution, namely academia.

    Skimming the paper, I was not able to find any mention of things like corporate or military research. By way of comparison with another field, I would expect that the lumpiest, most advanced research in marketing probably is trade secrets in places like Amazon and Google. In another example, algorithmic trading is a notoriously opaque branch of computer science. It seems like there are strong (monetary) incentives to keep important progress secret, which causes me to doubt the accuracy of citations as a measure.

    • The recent burst of interest in ML is mainly tied to published results. So we can look at the distribution of lumpiness in those published results.

  • lump1

    I know that computer science is an academic field, but unlike most academic fields, somewhere close to 0% of the progress in the field happens at universities. And just as you might not expect to read journal articles about recent progress in nuclear warhead miniaturization, you might also find that the potentially scariest leaps forward in software are kept under wraps – for many reasons.

    One of these is that all the scary software is probably harvesting more data than the makers are eager to admit. Another is that if you’re on to something big, you don’t want to tip off your competitors so you could more fully leverage your competitive advantage.

    I guess I ultimately agree with your conclusion, but I think computer science is so unlike other academic disciplines that this argument by analogy doesn’t work.

    • People arguing that ML progress lately has been rapid, or lumpy, have been pointing to published results. So clearly there are enough published results to which to apply my proposed test.

  • The major difference between progress in AI and any other form of progress is that AI is expected to be self-reinforcing. Under that assumption there will be some small lump that snowballs into the largest lump ever.

    So far we’ve seen only a few examples of self-reinforcing improvements, and always with small amplification factors. I think the main question is: how many self-reinforcing improvements do we have to see, and with what total amplification factor, before we believe an improvement with a diverging amplification factor is possible? Or are there reasons to always expect amplifications to be bounded?

    • Advances that allow more self-improvement are larger lumps. The claim is of some unusually large lumps.

      • Compare things to the first nuclear bomb. There were many small improvements that each didn’t result in a working bomb. The effect on the maximum yield of a nuclear device was 0 for each of them individually. But when put together, we suddenly went from 0 to 22 kilotons of TNT.


  • jimpliciter

    So if (for example) Dreyfus’ sentiments about AI turn out to be right, will it follow that all of this AI hype/speculation just turns out to be signalling on steroids? Good thing you have your own back covered, Robin.

    • Maybe look in the mirror? These “sentiments” are motivated by the view that human beings are special entities that cannot be replicated by “mere dead matter”, a view that has been oft refuted. Dreyfus was right about excessive optimism about how long it would take and how hard it would be to achieve human-level AI, but he was not right about the metaphysics.

      • jimpliciter

        So Dreyfus was right about the (hitherto empirically reaffirmed) “excessive optimism”, but he was wrong about the (hitherto empirically vacuous) metaphysics? So, for all intents and purposes, Dreyfus was right and AI is precious little more than a trendy, signalling moshpit at today’s version of AI Woodstock?

      • For all intents and purposes you’re an imbecile.

      • jimpliciter

        Why?

      • Peter David Jones

        It hasn’t been refuted by building a full artificial intelligence/consciousness.