
The examples you gave are of two different types: systems where there is a constant inherent limit on predictability, and self-adaptive "Red Queen" systems in which the development of a new theory or model changes the system so as to make it as unpredictable as before.


Thanks, that makes sense. I don't have anything like the expertise to quantify how much optimization separates humans from chimps in, say, bits, or equivalent-programmer-hours, or any other currency useful for present purposes, which puts me in the uncomfortable position of judging word against word based on authority. Maybe you'd like to get into the numbers?


There would be limits to FOOM without new physics, but a sufficiently intelligent agent with access to all present human knowledge will very likely be able to derive new physics from existing experimental results.

Even if it can't, the limits a lack of new physics imposes on FOOM might be sufficiently distant that, from the perspective of our present human civilization, they might as well not exist. If the agent can FOOM to world domination without new physics, how much do we care that it might have to pause expansion to run some experiments some time later?


The first versions of self-replicators might be very slow, but the way I pictured it, they could be out of sight for a long time, doing their thing. Before they came close to hitting any resource ceilings (the way self-replicating rabbits sometimes do), they could produce a mighty industrial force that could catch everyone else by surprise.

That's because this could all happen while everybody else is stuck in a model where they think they should produce only things for which there are customers. We're not that interested in automated hyperproduction because we can meet customer demands with more or less traditional means. And sure, machines are gradually replacing workers, and newer machines will replace the older machines, but all this is just to make stuff for someone to buy. It can all keep going without there being much effort put into Von Neumann machines in the asteroid belt. So whoever sends out the first one might have a long run of uncontested exponential growth, and it doesn't matter much how slowly it starts. The growth line gets pretty steep soon enough.
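
A back-of-the-envelope sketch of that last point, in Python. The doubling time and the size of the incumbent industrial base below are assumed, illustrative figures, not numbers from the comment:

```python
# Toy model: an uncontested, exponentially self-replicating industrial seed
# versus a large fixed incumbent base. All figures are assumptions chosen
# purely for illustration.

def doublings_to_overtake(seed_units: float, incumbent_units: float) -> int:
    """Return how many doublings the seed needs to exceed the incumbent."""
    doublings = 0
    units = seed_units
    while units <= incumbent_units:
        units *= 2
        doublings += 1
    return doublings

# One seed replicator vs. a 10-billion-unit incumbent, doubling once a year:
n = doublings_to_overtake(1, 10_000_000_000)
print(f"Overtakes the incumbent after {n} doublings (~{n} years at one per year).")
# Prints 34 -- the slow start barely matters once growth is uncontested.
```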


I'm saying that unanswered physics questions, and science questions in general, ARE the point. Without being able to advance science by experimentation, you are left with imagined hypotheses in a box, unverified by evidence. There is thus no way to gain a practical advantage in the real world. Of course, an entity with all the knowledge of the world would be a very interesting (and perhaps charismatic) conversation partner, but it would hardly be able to take over the world without interacting with it, and it certainly couldn't FOOM without doing real science in the real world.


My 3-level model of cognition in terms of modelling capabilities explains past facts and predicts future FOOM:

Cognitive capability:

Level 1: Models of the External World
Level 2: Models of the Self
Level 3: Models of Models

The big leap between animals and humans (Level 1 >> Level 2) was the ability to form self-models - that is what led to language and all the social coordination and communication involved in modern society.

The leap to Level 3 will be a general-purpose 'language of thought' functioning as a universal ontology or standard ('a theory of everything'), capable of integrating many separate cognitive modules into a single general-purpose system. That's a FOOM.
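
A toy rendering of the three levels, purely to make the distinction concrete; the class names and methods below are invented assumptions, since the comment specifies no implementation:

```python
# Illustrative only: every name here is an assumption, not part of the model above.

class WorldModel:
    """Level 1: a model of the external world."""
    def predict(self, observation):
        return f"expected consequence of {observation}"

class SelfModel(WorldModel):
    """Level 2: the agent also models itself as an object in that world."""
    def predict_own_behavior(self, situation):
        return self.predict(f"my own action in {situation}")

class MetaModel:
    """Level 3: a model of models -- one shared ontology integrating many modules."""
    def __init__(self, modules):
        self.modules = modules
    def integrate(self, observation):
        return {type(m).__name__: m.predict(observation) for m in self.modules}

system = MetaModel([WorldModel(), SelfModel()])
print(system.integrate("a falling rock"))
```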


1) Agree that evolutionary biology puts sharp limits on how much marginal cognitive innovation can have been added to the human innate toolbox compared to the chimpanzee innate toolbox.

Ever hear of arms races and sexual selection?


I think I should change "for free" to "cheap for some, relative to the value of the prize" :-). Also maybe a sketch will make this scenario easier to criticize and (hopefully) rip to shreds:

1) [TBD amount of business-as-usual time elapses]
2) at some point, scalable "AI-researcher-equivalent AGI" technology is attained
3) one or more projects scale up such technology radically with the goal of reaching "crossover"
4) [TBD number (millions? billions?) of researcher-year-equivalents] later, crossover is achieved
5) FOOM


I'm with you up until #5. First, it is likely possible to make a human-equivalent mind far faster and cheaper than humans are. Second, it may well be possible to eventually have a thousand times as many useful modules as human minds contain, and to make each module a hundred times more effective. With more and better modules, a mind might be vastly better at creating cognitive content.


I dispute 2.

[Moreover, if you're right about RH's view, I would also disagree, in that I think language resulted in a deep rewiring of the brain (but also that language evolved gradually).

That communication has no analog in AGI seems a dubious argument, because it arbitrarily draws the boundary around suitable analogs too narrowly.]

Back to disputing 2--there were apparently intense selection pressures over a prolonged time for human intelligence to develop. (There are arguments that human intelligence must have developed suddenly; I've thought EY is endorsing such arguments.) I haven't elsewhere seen the claim (or the warrant for it) that there's not enough evolutionary space between chimps and humans--in just the realm where humans are most different from chimpanzees.


I attempt to interpret your position as saying that you:

1) Agree that evolutionary biology puts sharp limits on how much marginal cognitive innovation can have been added to the human innate toolbox compared to the chimpanzee innate toolbox.

2) Agree that humans seem to have produced a much greater volume of productive cognitive content than chimps.

3) Disagree that this points to compact 'architectural' innovations that decrease the cost of cognitive content.

4) Believe 1+2 is best explained by pointing to human communication and greater human population sizes only.

5) Believe that humans are near the limit of efficiency in creating cognitive content. Now that we have language, humans are efficient enough that further costs are irreducible, or require large up-front payments for new tools in order to achieve small marginal cost decreases in particular content domains.

Therefore you see no reason to believe that large cognitive productivity differentials could apply between an AI and a human as a result of the AI containing relatively few and compact cognitive innovations.

Sound fair/accurate?


For the record: I deny that I said anything about saltation, certainly not if that means only a few mutations. And I certainly know enough evolutionary biology, math included, to know better than to consider it teleological, though we may look back at history and see trends reflecting sustained selection pressures. "Stephen Diamond" is strawmanning me, probably willfully so.


The argument I'm seeing from EY goes like this:

1. Premise: Humans are much more capable than chimpanzees in important ways.
2. Premise: Evolutionary theory puts fairly low limits on how much adaptation separates humans from chimpanzees.
3. Conclusion: A small amount of adaptation can, at least sometimes, cause a large increase in capability.

There are vague terms in the above, but I hope it can at least be a starting point.

You've taken issue with his imprecise phrasing of 1), but you seem to agree with my phrasing, which I think is the important one. But you also don't seem to be disputing 2); if anything you're arguing that less adaptation could have taken place.

I'm not sure how the argument relies on evolution being directed; if it's a random walk or whatever, doesn't that make premise 2 stronger? Premise 1 isn't drawn from evolutionary theory at all but from observation of humans and chimpanzees today.

(Incidentally, Robin seems to accept the argument but dispute its relevance, attributing the outsized effect to a threshold relating to communication that has no analog in the AGI situation.)


It seems to me that this would come for free given a scalable AGI above a certain capability threshold (perhaps roughly that of a typical human AI researcher), and that absent global decline it's just a matter of time before *that* exists. (I hope I'm mistaken. The foom variant in which the lead project is secretive seems a stretch but alas not out of the question, and nightmarish in its likely consequences; but *any* plausible foom scenario seems to have great potential for good or evil.)


A sufficiently large and diverse set of tools can give you a general toolkit, but that is different from having a single general tool. I see humans and software as mostly becoming general by having large toolkits, and much less by having particular very general tools. Communication tools can create thresholds below which you can't talk, and above which you can talk a lot. But I'm otherwise skeptical about there being critical architectures, that make a huge difference in the value gained from a set of tools.


EY believes something like a saltation (or two) separates us from chimps; consequently, we may have a saltational development of AI. I deny the premise.

I probably wasn't clear on the relevance of the issue about humans getting more utility. The relevance is only indirect. (I was struck by the claim because it seems absurd at several levels.) If EY (tacitly) sees evolution as a directed process (or at least sees intellectual evolution as one), then it becomes easy to see the development of AI as something like physics rather than like engineering (because 'intelligence' then has a nature).
