Some preliminary comments. First, to be clear, my estimate of future growth rates based on past trends is intended to be unconditional: I do not claim future rates are independent of which meta innovation comes next, though I am rather uncertain about which next innovations would produce which rates.
Second, my claim to estimate the impact of the next big innovation and Eliezer’s claim to estimate a much larger impact from "full AGI" are not yet obviously in conflict. To my knowledge, neither Eliezer nor I claims full AGI will be the next big innovation, nor does Eliezer argue for a full AGI time estimate that conflicts with my estimated timing of the next big innovation.
Third, it seems the basis for Eliezer’s claim that my analysis is untrustworthy "surface analogies" vs. his reliable "deep causes" is that while I use long-vetted general social science understandings of factors influencing innovation, he uses his own new untested meta-level determinism theory. So it seems he could accept that those not yet willing to accept his new theory might instead reasonably rely on my analysis.
Fourth, while Eliezer outlines his new theory and its implications for overall growth rates, he has as yet said nothing about what his theory implies for transition inequality, and how those implications might differ from my estimates.
OK, now for the meat. My story of everything was told (at least for recent eras) in terms of realized capability, i.e., population and resource use, and was largely agnostic about the specific innovations underlying the key changes. Eliezer’s story is that key changes are largely driven by structural changes in optimization processes and their protected meta-levels:
> The history of Earth up until now has been a history of optimizers … generating a constant optimization pressure. And creating optimized products, not at a constant rate, but at an accelerating rate, because of how object-level innovations open up the pathway to other object-level innovations. … Occasionally, a few tiny little changes manage to hit back to the meta level, like sex or science, and then the history of optimization enters a new epoch and everything proceeds faster from there. …
>
> Natural selection selects on genes, but generally speaking, the genes do not turn around and optimize natural selection. The invention of sexual recombination is an exception to this rule, and so is the invention of cells and DNA. … this tiny handful of meta-level improvements feeding back in from the replicators … structure the evolutionary epochs of life on Earth. …
>
> Very recently, certain animal brains have begun to exhibit both generality of optimization power … and cumulative optimization power (… as a result of skills passed on through language and writing). … We have meta-level inventions like science, that try to instruct humans in how to think. … Our significant innovations in the art of thinking, like writing and science, are so powerful that they structure the course of human history; but they do not rival the brain itself in complexity, and their effect upon the brain is comparatively shallow. …
>
> Now … some of us want to intelligently design an intelligence that would be capable of intelligently redesigning itself, right down to the level of machine code. … That … breaks the idiom of a protected meta-level. … Then even if the graph of "optimization power in" and "optimized product out" looks essentially the same, the graph of optimization over time is going to look completely different from Earth’s history so far.
OK, so Eliezer’s "meta is max" view seems to be a meta-level determinism view, i.e., that capability growth rates are largely determined, in order of decreasing importance, by innovations at three distinct levels:
- The dominant optimization process, natural selection, flesh brains with culture, or full AGI
- Improvements behind the protected meta-level of such a process, i.e., cells, sex, writing, science
- Key "object-level" innovations that open the path for other such innovations
Eliezer offers no theoretical argument supporting this ranking for us to evaluate. But his view does seem to make testable predictions about history. It suggests that the introductions of natural selection and of human culture coincided with the very largest capability growth rate increases. It suggests that the next largest increases were much smaller, coinciding in biology with the introduction of cells and sex, and in humans with the introduction of writing and science. And it suggests other rate increases were substantially smaller.
The main dramatic events in the traditional fossil record are, according to one source: Any Cells, Filamentous Prokaryotes, Unicellular Eukaryotes, Sexual(?) Eukaryotes, and Metazoans, at 3.8, 3.5, 1.8, 1.1, and 0.6 billion years ago, respectively. Perhaps two of these five events are at Eliezer’s level two, and none at level one. Relative to these events, the first introduction of human culture isn’t remotely as noticeable. While the poor fossil record means we shouldn’t expect a strong correspondence between the biggest innovations and dramatic fossil events, we can at least say this data doesn’t strongly support Eliezer’s ranking.
Our more recent data is better, allowing clearer tests. The last three strong transitions were humans, farming, and industry, and in terms of growth rate changes these seem to be of similar magnitude. Eliezer seems to predict we will discover the first of these was much stronger than the other two. And while the key causes of these transitions have long been hotly disputed, with many theories in play, Eliezer seems to pick specific winners for these disputes: intergenerational culture, writing, and scientific thinking.
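To make the "similar magnitude" claim concrete, here is a rough arithmetic check using the approximate world-economy doubling times from my paper "Long-Term Growth As A Sequence of Exponential Modes" (roughly 224,000 years for the hunting era, 909 years for farming, and 15 years for industry). Treat these as illustrative round figures, not precise estimates:

```python
# Rough check of whether the growth-rate jumps at the farming and industry
# transitions were of similar magnitude. Doubling times are approximate
# figures from Hanson's "Long-Term Growth As A Sequence of Exponential
# Modes"; treat them as illustrative, not exact.
doubling_time_years = {
    "hunting": 224_000,
    "farming": 909,
    "industry": 15,
}

# Growth rate is proportional to 1 / doubling time, so the factor by which
# growth sped up at each transition is the ratio of successive doubling times.
eras = list(doubling_time_years)
for before, after in zip(eras, eras[1:]):
    factor = doubling_time_years[before] / doubling_time_years[after]
    print(f"{before} -> {after}: growth sped up roughly {factor:.0f}x")
```

Both transitions show a speedup on the order of a hundredfold (roughly 250x for farming and 60x for industry), i.e., within about a factor of four of each other on a log scale, which is what "similar magnitude" means here.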
I don’t know enough about the first humans to comment, but I know enough about farming and industry to say Eliezer seems wrong there. Yes, the introduction of writing did roughly coincide in time with farming, but it just doesn’t seem plausible that writing caused farming, rather than vice versa. Few could write, and what they wrote didn’t help farming much. Farming more plausibly resulted from a scale effect in the accumulation of innovations in the ability to manage plants and animals: we finally knew enough to live off the plants near one place, instead of having to constantly wander to new places.
For industry as well, the key innovation does not seem to have been a scientific way of thinking. That popped up periodically in many times and places, and by itself wasn’t particularly useful. My guess is that the key was the formation of networks of science-like specialists, which wasn’t possible until the previous economy had reached a critical scale and density.
No doubt innovations can be classified according to Eliezer’s scheme, and yes, all else equal, relatively-meta innovations are probably stronger. But if, as the data above suggest, this correlation is much weaker than Eliezer expects, that has important implications for how "full AGI" would play out. Merely having the full ability to change its own meta-level need not give such a system anything like the wisdom to usefully make such changes, and so an innovation producing that mere ability might not be among the most dramatic transitions.