15 Comments

> like double

I think what this sort of reasoning suggests, then, is that this 'like double' acts as a collector for 'unknown unknowns' - that being more realistic means making this fudge factor much higher and looking for ways of breaking it apart - see http://www.overcomingbias.c...

It also suggests that there's a higher-order analog here, since you're part of a large class of firms, architects, and developers working on similar problems (or components of similar problems) whose experience you can actually draw on. The further from the crowd you travel, the bigger this fudge factor can be, too.


> But instead of plain-speaking, Eliezer offers a long Jesus-like parable

It sounds as though you disapprove. However, I hope Eliezer is not discouraged from producing dialogs, because as a reader I find these more enjoyable and digestible than his non-dialogs.


Eliezer, our systems have always had a great many dimensions on which they could potentially self-improve, and progress has always consisted, in substantial part, of accumulating better processes and inputs for self-improvement on more and more dimensions. Similarly, part of progress today is our slowly gaining more molecular precision and parallel scale in manufacturing. You talk as if some improvement in self-improvement or molecular manufacturing were suddenly going to throw out the old system, but improving those things has long been part of the old system.


Eliezer wrote: "1) Fully wraparound recursive self-improvement."

Could you define that somehow? I'm really not sure what you mean by it. Could it possibly be defined in the language of folk psychology?

For example: a FWRSI-capable entity is an entity that can make explicit all possible beliefs that other agents, adopting the intentional stance towards it, could attribute to it, and can alter each of them.


Humans represent a rather weak example of a modern self-improving system. You are better off considering the man-machine civilisation. The improvements there are so far on a much larger scale - despite the fact that an important component of the system is currently exhibiting considerable inertia.


The other thing is that I am juggling multiple balls right now and can't really respond at length separately to the Singularity arc; if I do do posts, I want them to be generalized and build up the sequence, so that it does eventually get to the point where I can talk about AI. I'm not being cryptic, I'm being modular.


Yep, key phrase is "fully wraparound".

If you wanted four things, any one of which would suffice to break any attempted analogy between the Intelligence Explosion and the human economy that was strong enough to predict the doubling time of one from the other, they would be:

1) Fully wraparound recursive self-improvement.

2) Thought at transistor speeds (see the rough arithmetic after this list).

3) Molecular manufacturing.

4) One mind synthesizing another mind with specifiable motivational structure.
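A rough back-of-the-envelope for item 2 (my own order-of-magnitude numbers, not from the comment): transistors switch at roughly $10^9$ Hz, while neurons spike at roughly $10^2$ Hz, so serial thought at transistor speeds would run something like

$$\frac{10^{9}\ \text{Hz (transistor)}}{10^{2}\ \text{Hz (neuron)}} \approx 10^{7}$$

times faster than biological thought.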


Unknown, I think the key phrase is "fully wraparound." The difference between reading this blog and rewriting your own source code is probably not trivial. Also, what Nick said.


> So there are already recursive self improvers. Nonetheless, nothing extraordinary seems to have happened on account of this.

Human civilization.


IRL I am a software architect for a large multinational. We usually estimate by breaking a task down into knowable pieces that can be estimated from past experience, and then adding up the total.

So in the language of these posts, I guess we apply the inside view until we find components similar enough to past work to estimate using the outside view. Note that the point at which to stop using the inside view is objective: it is as soon as we encounter a thing familiar enough that there's no point breaking it down further. If we encounter a sui generis component that can't be broken down, we just go on the "instincts" of the most experienced people, and put a big time buffer in there (like double).
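A minimal sketch of that procedure in Python (the Task structure, field names, and sample numbers are my own illustration of the commenter's description, not anything they specified):

```python
from dataclasses import dataclass, field
from typing import Optional

FUDGE_FACTOR = 2.0  # the "like double" buffer for unknown unknowns

@dataclass
class Task:
    name: str
    # Outside view: hours observed on similar enough past work.
    past_hours: Optional[float] = None
    # Inside view: decomposition into knowable pieces.
    subtasks: list["Task"] = field(default_factory=list)
    # Fallback: gut feel of the most experienced people.
    instinct_hours: Optional[float] = None

def estimate(task: Task) -> float:
    # Stop decomposing as soon as the piece is familiar enough to
    # estimate from past experience (outside view).
    if task.past_hours is not None:
        return task.past_hours
    # Otherwise break it down and add up the pieces (inside view).
    if task.subtasks:
        return sum(estimate(t) for t in task.subtasks)
    # Sui generis and indivisible: instinct plus a big buffer.
    return task.instinct_hours * FUDGE_FACTOR

# Hypothetical example: a familiar schema change plus a sui generis protocol.
job = Task("new feature", subtasks=[
    Task("database schema", past_hours=16),
    Task("novel sync protocol", instinct_hours=40),
])
print(estimate(job))  # 16 + 40 * 2.0 = 96.0 hours
```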


"I never compare the dawn of fully wraparound recursive self-improvement to anything except the transition from a universe of stable patterns to a universe of replicators."

This seems like a strange comparison to me, since we already have recursive self-improvement, while you didn't have replicators before you had them.

We already have recursive self-improvement because there are many respects in which we can make efforts to improve ourselves, with some success, and the more we improve, the more capable we become. For example, it is possible to overcome bias to some extent; and the more you overcome bias, the more capable you will be of distinguishing truth from error, and therefore of distinguishing better ways to overcome bias from inferior ones. And this is but one example. Because we are already general intelligences, even if not artificial, we can strive to improve ourselves in whatever respect we please, even with respect to our ability to improve ourselves.

So there are already recursive self improvers. Nonetheless, nothing extraordinary seems to have happened on account of this.


So to summarize: hard-to-predict things are hard to predict. Am I understanding this correctly?


Eliezer, I did not mean to suggest that outside views trump inside ones, just that they deserve as much attention and tend to be neglected. Apologies if I was not clear enough. I am also not very taken with accelerating change, not because it fails somehow as a possible pattern we could see and understand, but because economic data don't support it actually happening recently. And your unwillingness to consider such a pattern is telling.

I agree that the more two things differ, the harder it is to make useful comparisons between them, but I don't see a sharp edge where usefulness falls to zero, as you seem to. You keep talking about "surface analogies," "structurally different phenomena," and "processes with greatly different internal causal structures," apparently as ways to denote where the boundary is, but I think this just reflects your inexperience with and under-appreciation of social science. To usefully compare two things you just need some useful common abstraction of which they are both examples. Yes, many of our useful abstractions are in terms of detailed causal structures, but many other useful abstractions are not. Many of those latter abstractions are in social science, and I have been drawing on many of them in the outside view I have outlined.

Just because when you look at something you find nothing usefully like it does not mean that others cannot find such things - what you can make useful comparisons with depends on the set of abstractions you understand.


Futurism involves choosing the targets of your predictions wisely.

Transistor densities are relatively predictable, but stock-market fluctuations are not.

In the case of the first movers of the AI revolution, it seems that many of the details of what will happen depend on who develops AI, and what they tell it to do.

If AI is kept on servers - and only deployed in robots in tamper-proof hardware - there is considerable potential for it making its designers very rich.

OTOH, if AI is run as an open source project under the control of some kind of communist collective, we may see a very different outcome.

AI depends to some extent on investment capital - a fact which would seem to favour the first scenario - but I wouldn't bet much money against either result.

IMHO, the advantage given to the developers is the type of historical detail which we should not put too much effort into predicting.

In the longer term, probably most instances of the human genome will wind up in museums. Any alleles that go the distance seem reasonably likely to be drawn from many different humans - but probably only a small fraction of those currently alive. Bacterial genomes probably have vastly more to contribute to the future than our own do.

Re: When I write about the Singularity myself, I never compare the dawn of fully wraparound recursive self-improvement to anything except the transition from a universe of stable patterns to a universe of replicators.

Comparisons of what is happening now with the origin of life are overblown, IMHO. The previous genetic takeovers are one relevant point of comparison. The next-nearest thing was the evolution of sex - though that was (probably) not quite so dramatic.


'Twas more the part about "Of course, you will try to dismiss this in favor of an Inside View" that annoyed me enough to write a parable.

From your post yesterday:

> Excess inside viewing usually continues even after folks are warned that outside viewing works better; after all, inside viewing better shows off inside knowledge and abilities. People usually justify this via reasons why the current case is exceptional. (Remember how all the old rules didn't apply to the new dotcom economy?) So expect to hear excuses why the next singularity is also an exception where outside view estimates are misleading.

That did seem like a pretty strong claim that your particular Outside View, naming four particular past things "singularities," would indeed trump any attempt at Inside reasoning; moreover, it seemed to impugn the motives of anyone who tried to suggest otherwise.

I also thought I was pretty explicit about not having aerospace-like access to Friendly AI issues as yet: the domains of usefulness go:

1) Inside View (precise)

2) Outside View

3) Inside View (imprecise)

The lower you go, the more uncertainty you have to deal with:

1) IVP: Very strong; effortful but straightforward; precise laws.

2) OV: Strong; easy; statistical with many samples.

3) IVI: Weak; confusing and rationality-challenging; unknown unknowns.

But just like you can't use an aerospace-precise Inside View on software project management, you can't use a statistically reliable Outside View on strange new phenomena with different internal causal structures.

I have distrusted attempts to reason by surface analogies my whole life. I am not singling you out for criticism; I have the same problem with Kurzweil and all other theories of Predictably Accelerating Change: they try to extrapolate trends over changes in the deep causal structures that appear to be responsible for those trends.

When I write about the Singularity myself, I never compare the dawn of fully wraparound recursive self-improvement to anything except the transition from a universe of stable patterns to a universe of replicators. So if I were to call these two things both a "singularity" - which I never would - does it follow that, just as we formerly transitioned from logarithmic rates of pattern-production to exponential rates of pattern-production, we are now to transition from exponential to something else? Should I criticize you for failing to take the Outside View with respect to this new classification, which is the only intuitively obvious one?

No; it is just an analogy; nothing quantitative follows from it; the Outside View does not stretch over differences that large. I use the comparison for illustration, that is all.

At most, the replicator transition helps you to understand the concept of a break with history; it doesn't mean that there's going to be any detailed correspondence between the second break and the first one.
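As a purely illustrative formalization of that contrast (my own gloss, not the commenter's math): pre-replicator pattern-production accumulating logarithmically versus replicators compounding exponentially would look like

$$N_{\text{before}}(t) \propto \log t, \qquad \frac{dN}{dt} = rN \;\Rightarrow\; N_{\text{after}}(t) = N_0 e^{rt},$$

and the point of the analogy is only that the growth law itself changed, not that any doubling time or rate constant carries over from one transition to the next.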
