Parsing The Parable

The timing of Eliezer’s post on outside views, directly following mine on an outside view of singularity, suggests his is a reply to mine.  But instead of plain speaking, Eliezer offers a long Jesus-like parable, wherein Plato insists that outside views always trump inside views, that it is obvious death is just like sleep, and therefore that "our souls exist in the house of Hades." 

I did not suggest mine was the only or best outside view, or that it trumps any inside view of singularity. Reasonable people should agree inside and outside views are both valuable, and typically of roughly comparable value.  So if Eliezer thought my outside analysis was new and ably done, with a value typical of outside analyses, he might say "good work old boy, you’ve made a substantial contribution to my field of Singularity studies." 

Instead we must interpret his parable.  Some possibilities:

  • His use of Plato’s analogy suggests he thinks my comparison of a future AI revolution to the four previous sudden growth rate jumps is no better motivated than Plato’s (to Eliezer, poorly motivated) analogy.
  • His offering no other outside view to prefer suggests he thinks nothing that has ever happened is similar enough to a future AI revolution to make an outside view at all useful.
  • His contrasting aerospace engineers’ success with schedulers’ failures in inside views suggests he thinks he has access to inside views of future AIs whose power is more like aerospace engineering than project scheduling. 

Look, in general, to do a multivariate statistical analysis of a set of related cases, one must judge which cases to include, which variables to describe them with, and what kind of model of multivariate relations to apply.  So yes, when there is more uncertainty there can be more disagreement about the best approach, and the outside view becomes less useful.
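
To make those three judgment calls concrete, here is a minimal sketch of such an outside-view analysis. The case list, the describing variables, the numbers, and the model form are all placeholder choices of my own, picked only to show where the judgments enter; they are not estimates from this post.

```python
# Minimal sketch of an "outside view" multivariate analysis of past growth-mode jumps.
# The cases, variables, numbers, and model form are placeholder choices made for
# illustration; picking them IS the judgment described above.
import numpy as np

# Judgment 1: which past cases to include (four hypothetical growth-rate jumps).
cases = ["animals", "humans", "farming", "industry"]

# Judgment 2: which variables describe each case.
# Column 1: log of the pre-jump doubling time (years); column 2: a crude "suddenness" score.
X = np.array([
    [np.log(500e6), 1.0],   # placeholder values only
    [np.log(2e6),   1.5],
    [np.log(900),   2.0],
    [np.log(60),    2.5],
])
# Outcome: log of the post-jump doubling time (years), again placeholder values.
y = np.log([2e6, 900, 60, 15])

# Judgment 3: what model of the multivariate relation to apply (here, OLS on the log scale).
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Extrapolate to a hypothetical fifth jump starting from a ~15-year doubling time.
next_case = np.array([1.0, np.log(15), 3.0])
print("predicted post-jump doubling time (years):", np.exp(next_case @ coef))
```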

But more uncertainty also makes inside views less useful.  When many parameter value combinations are possible, one must choose a distribution with which to sample over them.  And tractable analyses must focus on a few factors considered the most important.  More uncertainty makes for more disagreements here as well.  So I don’t yet see a general rule saying inside views tend to be more valuable when there is more uncertainty.  
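
For comparison, here is a similarly hypothetical sketch of the inside-view side: which few factors to model, what distribution to sample their uncertain values from, and how to combine them are all assumptions made for illustration, and different analysts could reasonably choose differently at each step.

```python
# Minimal sketch of an "inside view" under uncertainty: many parameter combinations are
# possible, so one must pick a sampling distribution over them, and a tractable model can
# only include a few factors. Every name, range, and formula below is an illustrative choice.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Chosen factors (others are deliberately left out for tractability), with chosen distributions.
hardware_gain  = np.exp(rng.uniform(np.log(1.1), np.log(10.0), n))  # yearly improvement factor
software_gain  = np.exp(rng.uniform(np.log(1.0), np.log(5.0),  n))  # yearly improvement factor
deployment_lag = rng.uniform(1.0, 20.0, n)                          # years until wide use

# Chosen toy model combining the factors into a transition-time estimate.
transition_years = deployment_lag + 10.0 / np.log(hardware_gain * software_gain)

print("median:", np.median(transition_years))
print("10th-90th percentile:", np.percentile(transition_years, [10, 90]))
```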

Future AI is so important and hard to study I’d think interested folks would grab at any concrete guides they could find, including careful outside views.  I look forward to hearing clear reviewable results from an inside analysis, particularly on the crucial question I addressed of transition-induced inequality.  So far all I’ve seen is folks noting that we don’t know enough to exclude the possibility of huge inequality, which by itself seems a pretty weak argument. 

  • http://yudkowsky.net/ Eliezer Yudkowsky

    ‘Twas more the part about “Of course, you will try to dismiss this in favor of an Inside View” that annoyed me enough to write a parable.

    From your post yesterday:

    Excess inside viewing usually continues even after folks are warned that outside viewing works better; after all, inside viewing better shows off inside knowledge and abilities. People usually justify this via reasons why the current case is exceptional. (Remember how all the old rules didn’t apply to the new dotcom economy?) So expect to hear excuses why the next singularity is also an exception where outside view estimates are misleading.

    That did seem like a pretty strong claim that your particular Outside View, naming four particular past things “singularities”, would indeed trump any attempt at Inside reasoning; moreover, it seemed to undermine the motives of anyone who tried to suggest otherwise.

    I also thought I was pretty explicit about not having aerospace-like access to Friendly AI issues as yet: the domains of usefulness go:

    Inside View (precise)
    Outside View
    Inside View (imprecise)

    The lower you go, the more uncertainty you have to deal with:

    IVP: Very strong; effortful but straightforward; precise laws.
    OV: Strong; easy; statistical with many samples.
    IVI: Weak; confusing and rationality-challenging; unknown unknowns.

    But just like you can’t use an aerospace-precise Inside View on software project management, you can’t use a statistically reliable Outside View on strange new phenomena with different internal causal structures.

    I have distrusted attempts to reason by surface analogies my whole life. I am not singling you out for criticism; I have the same problem with Kurzweil and all other theories of Predictably Accelerating Change, that they try to extrapolate trends over changes in the deep causal structures that appear to be responsible for the trends.

    When I write about the Singularity myself, I never compare the dawn of fully wraparound recursive self-improvement to anything except the transition from a universe of stable patterns to a universe of replicators. So if I were to call these two things both a “singularity” – which I never would – does it follow that, just as we formerly transitioned from logarithmic rates of pattern-production to exponential rates of pattern-production, we are now to transition from exponential to something else? Should I criticize you for failing to take the Outside View with respect to this new classification, which is the only intuitively obvious one?

    No; it is just an analogy; nothing quantitative follows from it; the Outside View does not stretch over differences that large. I use the comparison for illustration, that is all.

    At most, the replicator transition helps you to understand the concept of a break with history; it doesn’t mean that there’s going to be any detailed correspondence between the second break and the first one.

  • Tim Tyler

    Futurism involves choosing the targets of your predictions wisely.

    Transistor densities are relatively predictable, but stock market fluctuations are not.

    In the case of the first movers of the AI revolution, it seems that many of the details of what will happen depend on who develops AI, and what they tell it to do.

    If AI is kept on servers – and only deployed in robots in tamper-proof hardware – there is considerable potential for it making its designers very rich.

    OTOH, if AI is run as an open source project under the control of some kind of communist collective, we may see a very different outcome.

    AI depends to some extent on investment capital – a fact which would seem to favour the first scenario – but I wouldn’t bet much money against either result.

    IMHO, the advantage given to the developers is the type of historical detail which we should not put too much effort into predicting.

    In the longer term, probably most instances of the human genome will wind up in museums. Any alleles that go the distance seem reasonably likely to be drawn from many different humans – but probably only a small fraction of those currently alive. Bacterial genomes probably have vastly more to contribute to the future than our own do.

    Re: When I write about the Singularity myself, I never compare the dawn of fully wraparound recursive self-improvement to anything except the transition from a universe of stable patterns to a universe of replicators.

    Comparisons of what is happening now with the origin of life are overblown, IMHO. The previous genetic takeovers are one relevant point of comparison. The next-nearest thing was the evolution of sex – though that was (probably) not quite so dramatic.

  • http://hanson.gmu.edu Robin Hanson

    Eliezer, I did not mean to suggest outside views trump inside views, just that they deserve as much attention and tend to be neglected. Apologies if I was not clear enough. I am also not very taken with accelerating change, not because it fails somehow as a possible pattern we could see and understand, but because economic data don’t support it actually happening recently. And your unwillingness to consider such a pattern is telling.

    I agree that the more things differ the harder it is to make useful comparisons, but I don’t see a sharp edge where usefulness falls to zero, as you seem to. You keep talking about “surface analogies,” “structurally different phenomena,” and “processes with greatly different internal causal structures” apparently as ways to denote where the boundary is, but I think this just reflects your inexperience with and under-appreciation of social science. To usefully compare two things you just need some useful common abstraction of which they are both examples. Yes, many of our useful abstractions are in terms of detailed causal structures, but many other useful abstractions are not. Many of those latter abstractions are in social science, and I have been drawing on many of those in the outside view I have outlined.

    Just because when you look at something you find nothing usefully like it does not mean that others cannot find such things – what you can make useful comparisons with depends on the set of abstractions you understand.

  • PK

    So to summarize, hard-to-predict things are hard to predict. Am I understanding this correctly?

  • Unknown

    “I never compare the dawn of fully wraparound recursive self-improvement to anything except the transition from a universe of stable patterns to a universe of replicators.”

    This seems like a strange comparison to me, since we already have recursive self-improvement, while you didn’t have replicators before you had them.

    We already have recursive self-improvement because there are many respects in which we can make efforts to improve ourselves, with some success, and the more we improve, the more capable we become as well. For example, it is possible to overcome bias to some extent; and the more you overcome bias, the more capable you will be of distinguishing truth from error, and therefore of distinguishing better ways to overcome bias from inferior ways. And this is but an example. Because we are already general intelligences, even if not artificial, we can strive to improve ourselves in whatever respect we please, even with respect to our ability to improve ourselves.

    So there are already recursive self improvers. Nonetheless, nothing extraordinary seems to have happened on account of this.

  • Ian C.

    IRL I am a software architect for a large multinational. We usually estimate by breaking a task down into knowable pieces that can be estimated using past experience, and then adding up the total.

    So in the language of these posts, I guess we apply the inside view until we find components similar enough to past work to estimate using the outside view. Note that when to stop using the inside view is objective: it is as soon as we encounter a thing familiar enough that there’s no point breaking it down further. If we encounter a sui generis component that can’t be broken down, we just go on the “instincts” of the most experienced people, and put a big time buffer in there (like double), as in the sketch below.
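
    A rough sketch of that procedure as I read it (the task names, historical figures, and the doubling rule below are stand-ins of my own, not details from the comment):

    ```python
    # Bottom-up estimation sketch: decompose with the inside view until a component matches
    # past experience, then use the historical figure (outside view); for a sui generis
    # component, take the expert guess and double it as a buffer. All data are illustrative.
    historical_days = {"login form": 3, "report screen": 5, "db migration": 2}

    def estimate(task):
        """task = (name, subtasks, expert_guess_days)"""
        name, subtasks, guess = task
        if name in historical_days:       # familiar enough: stop decomposing, use past data
            return historical_days[name]
        if subtasks:                      # unfamiliar but decomposable: break down and sum
            return sum(estimate(t) for t in subtasks)
        return 2 * guess                  # sui generis: expert instinct plus a big buffer

    total = estimate(("new billing module", [
        ("login form", [], None),
        ("report screen", [], None),
        ("fraud heuristics", [], 8),      # nothing similar in past work
    ], None))
    print("estimated days:", total)       # 3 + 5 + 16 = 24
    ```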

  • Nick Tarleton

    So there are already recursive self improvers. Nonetheless, nothing extraordinary seems to have happened on account of this.

    Human civilization.

  • Z. M. Davis

    Unknown, I think the key phrase is that “fully wraparound.” The difference between reading this blog and rewriting your own source code is probably not trivial. Also, what Nick said.

  • http://yudkowsky.net/ Eliezer Yudkowsky

    Yep, key phrase is “fully wraparound”.

    If you wanted four things, any one of which would suffice to break any attempted analogy between the Intelligence Explosion and the human economy strong enough to predict the doubling time of one from the other, they would be:

    1) Fully wraparound recursive self-improvement.

    2) Thought at transistor speeds.

    3) Molecular manufacturing.

    4) One mind synthesizing another mind with specifiable motivational structure.

  • http://yudkowsky.net/ Eliezer Yudkowsky

    The other thing is that I am juggling multiple balls right now and can’t really respond at length separately to the Singularity arc; if I do do posts, I want them to be generalized and build up the sequence, so that it does eventually get to the point where I can talk about AI. I’m not being cryptic, I’m being modular.

  • Tim Tyler

    Humans represent a rather weak example of a modern self-improving system. You are better off considering the man-machine civilisation. The improvements so far there are on a much larger scale – despite the fact that an important component of the system is currently exhibiting considerable inertia.

  • Will Pearson

    Eliezer wrote: “1) Fully wraparound recursive self-improvement.”

    Could you define that somehow? I’m really not sure what you mean by it. Could it possibly be defined in the language of folk psychology?

    For example: A FWRSI-capable entity is an entity that can make explicit all possible beliefs that other agents adopting the intentional stance towards it could attribute to it, and can alter each of them.

  • http://hanson.gmu.edu Robin Hanson

    Eliezer, our systems have always had a great many dimensions on which they could potentially self-improve, and progress has always consisted in substantial part of accumulating better processes and inputs for self-improvement on more and more dimensions. Similarly part of progress today is our getting slowly more molecular precision and parallel scale in manufacturing. You talk as if some improvement on self-improvement or molecular manufacturing was suddenly going to throw out the old system, but improving those things has long been a part of the old system.

  • Constant

    But instead of plain-speaking, Eliezer offers a long Jesus-like parable

    It sounds as though you disapprove. However, I hope Eliezer is not discouraged from producing dialogs, because as a reader I find these more enjoyable and digestible than his non-dialogs.