
Why Do Algorithms Gain Like Chips?

Computer hardware has famously improved much faster than most other kinds of hardware, and most other useful things. Computer hardware is about a million times cheaper than it was four decades ago; what other widely useful thing has grown remotely as fast? Oddly, computer algorithms, the abstract strategies by which computer hardware solves real problems, seem to have typically improved at a roughly comparable rate. (Algorithm growth rates seem well within a factor of two of hardware rates; quotes below.) This coincidence cries out for explanation.

On the surface the processes that produce faster hardware and faster algorithms seem quite different. Hardware is made by huge companies that achieve massive scale economies via high levels of coordination, relying largely on internal R&D. Algorithms instead seem to be made more by many small artisans who watch and copy each other, and mostly focus on their special problem area. How is it that these two very different processes, with very different outputs, both grow at roughly the same remarkably fast rate? The obvious hypothesis is that they share some important common cause. But what? Some possibilities:

  • Digital – Both computer hardware and algorithms are digital technologies, which allow for an unusually high degree of formal reasoning to aid their development. So maybe digital techs just intrinsically grow faster. But aren’t there lots of digital techs that aren’t growing nearly as fast?
  • Software – Maybe software development is really key to the rapid growth of both techs. After all, both hardware and algorithm experts use software to aid their work. But the usual descriptions of both fields don’t put a huge weight on gains from being able to use better productivity software.
  • Algorithms – Maybe progress in hardware is really driven behind the scenes by progress in algorithms; new algorithms are what really enables each new generation of computer hardware. But that sure isn’t the story I’ve heard.
  • Hardware – Maybe there are always lots of decent ideas for better algorithms, but most are hard to explore because of limited computer hardware. As hardware gets better, more new ideas can be explored, and some of them turn out to improve on the prior best algorithms. This story seems to at least roughly fit what I’ve heard about the process of algorithm design.
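
This last story invites a toy model (entirely made up, just to illustrate the mechanism, not drawn from any data): suppose that with H units of hardware one can test about H candidate algorithm ideas per year, each of random quality. Then doubling hardware doubles the ideas explored each year, and the best idea found so far improves at a roughly steady rate in log terms, so algorithm gains track hardware gains:

```python
import random

def hardware_as_key_sim(years=20, seed=0):
    """Toy model of the 'hardware as key' story: with H units of
    hardware you can test about H candidate algorithm ideas per year,
    each with a random quality (log2 of its speedup).  Returns the
    best quality found by the end of each year."""
    rng = random.Random(seed)
    best, trace = 0.0, []
    for year in range(years):
        hardware = 2 ** year                 # hardware doubles each year
        for _ in range(hardware):            # more hardware -> more ideas tried
            quality = rng.expovariate(1.0)   # log2 speedup of this idea
            best = max(best, quality)
        trace.append(best)
    return trace

trace = hardware_as_key_sim()
```

With exponentially distributed idea quality, the maximum over N tries grows roughly like log N, so exponential hardware growth yields roughly linear growth in log-speedup — i.e., algorithm gains at a rate comparable to hardware gains, as the coincidence above suggests.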

This last story of hardware as key has some testable predictions. It suggests that since gains in serial hardware have slowed down lately, while gains in parallel hardware have not, parallel algorithms will continue to improve as fast as before, but serial algorithm gains will slow down. It also suggests that when even parallel hardware gains slow substantially in the future, because reversible computing is required to limit power use, algorithm gains will also slow down a comparable amount.

If true, this hardware as key theory also has policy implications. It suggests that it is much better to subsidize hardware research, relative to algorithm research; even with less research funding algorithm gains will happen anyway, if just a bit later. This theory also suggests that there is less prospect for self-improving algorithms making huge gains.

So what other explanations can we come up with, and what predictions might they make?

Added 5June: There are actually several possible ways that software progress might be determined by hardware progress. In the post I mentioned better hardware letting one explore more possible ideas, but it could also be that people already knew of better algorithms that couldn’t work on smaller hardware. Algorithms vary both in their asymptotic efficiency and in their initial overhead, and we might slowly be switching to bigger overhead algorithms.
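
That overhead-vs-asymptotics tradeoff is easy to illustrate with made-up cost functions (the constants below are arbitrary, chosen only to produce a visible crossover): an algorithm with a big constant-factor overhead but better asymptotics only beats a low-overhead rival once problem sizes, and hence hardware, get big enough:

```python
import math

# Illustrative (invented) cost models for two algorithms solving the
# same problem: A has low overhead but worse asymptotics, B has a big
# constant-factor overhead but better asymptotics.
def cost_a(n):            # e.g. a simple O(n^2) method
    return 2.0 * n * n

def cost_b(n):            # e.g. a clever O(n log n) method, heavy setup
    return 1000.0 * n * math.log2(n)

def crossover(lo=2, hi=10**7):
    """Smallest problem size at which B beats A (binary search,
    assuming a single crossover point in [lo, hi))."""
    while lo < hi:
        mid = (lo + hi) // 2
        if cost_b(mid) < cost_a(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

n_star = crossover()
# Below n_star the "worse" algorithm A wins; only once hardware lets
# us run problems bigger than n_star does switching to B pay off.
```

On this toy numbers view, "algorithm progress" can partly consist of hardware growth carrying us past such crossover points, so that already-known high-overhead algorithms become the best choice.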

Those promised quotes:


What Predicts Growth?

I just heard a fascinating talk by Enrico Spolaore on this paper about what predicts local growth rates over the very long run. He considers three periods: before the farming revolution, from farming to 1500, and from 1500 to today. The results:

  1. The first regions to adopt farming tended to be equatorial non-tropic coastal (but not island) regions with lots of domesticable animals (table 2, column 1).
  2. The regions that had the most people in 1500 were those that first adopted farming, and also tended to be tropical inland regions (table 2, column 4).
  3. The regions that were richest per person in 2005 had no overall relation to populous 1500 regions (table 3, column 1), yet tended to be populated by folks whose ancestors came from places where farming and big states first started. Rich places also tend to be cool (i.e., toward the poles) coasts or islands (table 5) filled with people who are more related culturally and genetically to the industry-era leaders of the US and Europe (tables 6,7).

These results tend to support the idea that innovation sharing was central. The first farming innovations were shared along coasts in mild environments, i.e., not too cold or tropical. During the farming era, sharing happened more via inland invasions of peoples, which tropics aided. Industry first thrived on islands better insulated from invasion; industry-era travel and trade were more sea-based, and sharing of industry happened more via people who could relate to each other.

Changing technologies of travel seem to have made a huge difference. When travel was very hard, it happened first along coasts in mild climates. As domesticated animals made long-distance land travel easier, inland invasions dominated. Then when sea travel became far easier, and invasions got harder, cultural barriers mattered most.


Foom Debate, Again

My ex-co-blogger Eliezer Yudkowsky last June:

I worry about conversations that go into “But X is like Y, which does Z, so X should do reinterpreted-Z”. Usually, in my experience, that goes into what I call “reference class tennis” or “I’m taking my reference class and going home”. The trouble is that there’s an unlimited number of possible analogies and reference classes, and everyone has a different one. I was just browsing old LW posts today (to find a URL of a quick summary of why group-selection arguments don’t work in mammals) and ran across a quotation from Perry Metzger to the effect that so long as the laws of physics apply, there will always be evolution, hence nature red in tooth and claw will continue into the future – to him, the obvious analogy for the advent of AI was “nature red in tooth and claw”, and people who see things this way tend to want to cling to that analogy even if you delve into some basic evolutionary biology with math to show how much it isn’t like intelligent design. For Robin Hanson, the one true analogy is to the industrial revolution and farming revolutions, meaning that there will be lots of AIs in a highly competitive economic situation with standards of living tending toward the bare minimum, and this is so absolutely inevitable and consonant with The Way Things Should Be as to not be worth fighting at all. That’s his one true analogy and I’ve never been able to persuade him otherwise. For Kurzweil, the fact that many different things proceed at a Moore’s Law rate to the benefit of humanity means that all these things are destined to continue and converge into the future, also to the benefit of humanity. For him, “things that go by Moore’s Law” is his favorite reference class.

I can have a back-and-forth conversation with Nick Bostrom, who looks much more favorably on Oracle AI in general than I do, because we’re not playing reference class tennis with “But surely that will be just like all the previous X-in-my-favorite-reference-class”, nor saying, “But surely this is the inevitable trend of technology”; instead we lay out particular, “Suppose we do this?” and try to discuss how it will work, not with any added language about how surely anyone will do it that way, or how it’s got to be like Z because all previous Y were like Z, etcetera. (more)

When we shared this blog, Eliezer and I had a long debate here on his “AI foom” claims. Later, we debated in person once. (See also slides 34,35 of this 3yr-old talk.) I don’t accept the above as characterizing my position well. I’ve written up summaries before, but let me try again, this time trying to more directly address the above critique.

Eliezer basically claims that the ability of an AI to change its own mental architecture is such a potent advantage as to make it likely that a cheap unnoticed and initially low ability AI (a mere “small project machine in a basement”) could without warning over a short time (e.g., a weekend) become so powerful as to be able to take over the world.

As this would be a sudden big sustainable increase in the overall growth rate of the broad capacity of the world economy, I do find it useful to compare this hypothesized future event to the past events that produced similar outcomes, namely big sudden sustainable increases in broad global capacity growth rates. The last three were the transitions to humans, farming, and industry.

I don’t claim there is some hidden natural law requiring such events to have the same causal factors or structure, or to appear at particular times. But I do think these events suggest a useful if weak data-driven prior on the kinds of factors likely to induce such events, on the rate at which they occur, and on their accompanying inequality in gains. In particular, they tell us that such events are very rare, that over the last three events gains have been spread increasingly equally, and that these three events seem mainly due to better ways to share innovations.

Eliezer sees the essence of his scenario as being a change in the “basic” architecture of the world’s best optimization process, and he sees the main prior examples of this as the origin of natural selection and the arrival of humans. He also sees his scenario as differing enough from the other studied growth scenarios as to make analogies to them of little use.

However, since most global bio or econ growth processes can be thought of as optimization processes, this comes down to his judgement on what counts as a “basic” structure change, and on how different such scenarios are from other scenarios. And in my judgement the right place to get and hone our intuitions about such things is our academic literature on global growth processes.

Economists have a big literature on processes by which large economies grow, increasing our overall capacities to achieve all the things we value. There are of course many other growth literatures, and some of these deal in growths of capacities, but these usually deal with far more limited systems. Of these many growth literatures it is the economic growth literature that is closest to dealing with the broad capability growth posited in a fast growing AI scenario.

It is this rich literature that seems to me the right place to find and hone our categories for thinking about growing broadly capable systems. One should review many formal theoretical models, and many less formal applications of such models to particular empirical contexts, collecting “data” points of what is thought to increase or decrease growth of what in what contexts, and collecting useful categories for organizing such data points.

With such useful categories in hand one can then go into a new scenario such as AI foom and have a reasonable basis for saying how similar that new scenario seems to old scenarios, which old scenarios it seems most like if any, and which parts of that new scenario are central vs. peripheral. Yes of course if this new area became mature it could also influence how we think about other scenarios.

But until we actually see substantial AI self-growth, most of the conceptual influence should go the other way. Relying instead primarily on newly made up categories and similarity maps between them, concepts and maps which have not been vetted or honed in dealing with real problems, seems to me a mistake. Yes of course a new problem may require one to introduce some new concepts to describe it, but that is hardly the same as largely ignoring old concepts.

So, I fully grant that the ability of AIs to intentionally change mind designs would be a new factor in the world, and it could make a difference for AI ability to self-improve. But while the history of growth over the last few million years has seen many dozens of factors come and go, or increase and decrease in importance, it has only seen three events in which overall growth rates greatly increased suddenly and sustainably. So the mere addition of one more factor seems unlikely to generate foom, unless our relevant categories for growth causing factors suggest that this factor is unusually likely to have such an effect.

This is the sense in which I long ago warned against over-reliance on “unvetted” abstractions. I wasn’t at all trying to claim there is one true analogy and all others are false. Instead, I argue for preferring to rely on abstractions, including categories and similarity maps, that have been found useful by a substantial intellectual community working on related problems. On the subject of an AI growth foom, most of those abstractions should come from the field of economic growth.
