Em Econ, London Style

Together with the provocative (Skype super-developer) Jaan Tallinn, I’ll speak on em econ next Saturday 2-5pm in London:

In this extended (3 hour) session, Robin Hanson and Jaan Tallinn will revisit and expand the material from their ground-breaking presentations from the Singularity Summit 2012 – presentations that Vernor Vinge, commenting shortly afterwards, described as refutations of the saying that “there is nothing new under the sun”. (more)

Jaan will talk on:

The incredible coincidence that we were born just decades before an imminent technological singularity that threatens to break our model of the evolution of the entire universe.

Added 19Dec: Here are slides and (bad) audio from the talk. Here are slides and audio from my talk at the Oxford AGI-Impacts conference a few days before.

Added May ’13: Here is video of my Oxford conference talk.

  • John

    “The incredible coincidence that we were born just decades before an imminent technological singularity”
    Is Jaan Tallinn so philosophically illiterate and so illogical as to really claim that or is it just a marketing trick to attract attention to his new “centre” in Cambridge?

    There is NOTHING inevitable about the future. If tomorrow a nuclear war erupts, the singularity he (supposedly) craves will not come about. If a totalitarian anti-technological world government is established (unlikely, but with a subjectively estimated probability of ~7-8%), the singularity will not come about. If quantum computing turns out to be impossible, the singularity he envisions will not be realized (though if we define a singularity as the development of a species of biological/mechanical beings with greater-than-human intelligence, in this case some sort of singularity is very likely).

    There are at least a hundred things with non-zero probability that can happen and that would prevent a singularity from occurring. And yet, here they are, supposedly ‘respectable’ men claiming to be prophets…

    • http://juridicalcoherence.blogspot.com/ srdiamond

      I’m not sure this is anything worse than a poor choice of words. I would have said “Inexorable.”

      But what of this “incredible  coincidence”? Shouldn’t noting this high degree of coincidence impel us to adjust our priors downward? 

    • Carl Shulman

      Jaan does not deny the possibility of show-stoppers, slow timelines, etc., and assigns probability mass to them. This looks to me like an instance of the problem of short blurbs and the limited room for qualifying statements in them.

    • VV

      A few days ago I was browsing a book on doomsday predictions from ancient times to the present day. In *every* historical time there is always somebody who believes that humanity, or the universe as a whole, is on the verge of some abrupt massive change, for the better or worse.

      I see no reason to believe that these Singularity prophets know any better, especially since when you actually test their technological development forecasts, they turn out to be quite poor.

      • Carl Shulman

        Were there any interesting non-supernatural ones before 1800?

      • VV

         What do you mean by supernatural?

        Doomsday predictions before recent times usually involved some sort of divine intervention, hence you could consider them supernatural; but to the people making them, the existence of a deity who intervened in the world was as certain as the existence of the earth is to us.

        The cognitive processes that underlie these predictions are likely the same in all eras, while the specific content of the predictions changes to be consistent with the system of beliefs of those who make the prediction (and their intended audience).

        Take the typical crackpot 21 December 2012 cataclysm predictions, for instance. Various versions exist, some involve clearly spiritual or supernatural content, others are pseudoscientific but purely naturalistic (they usually involve an encounter with a ‘Nibiru planet’ or something like that). Obviously, the naturalistic versions are not significantly more credible than the spiritual versions: they are both based on the same memes and cognitive biases, not proper inferences from observable evidence.

        Technological singularity predictions may not be making trivial scientific errors like the 2012 cataclysm predictions, but still they fail to provide scientifically strong arguments, therefore the best explanation for the existence of these predictions is that they are the way the doomsday meme manifests itself in people with a basic scientific education who reject traditional religions.

        Note that singularity advocates are often engineers/inventors or philosophers, but only very rarely actual scientists. That pattern has been noted before for other religious and religious-like beliefs.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        they are the way the doomsday meme manifests itself in people with a basic scientific education who reject traditional religions.

        I became convinced that the Singularity was a deity substitute upon realizing the reverence with which Yudkowsky’s followers hold the “Sequences.” Like the Christian Bible, it’s a long, poorly written text that for those reasons becomes sacred.

        But there’s one salient difference, which I mention to signal fairness. The Yudkowskyites don’t expect moral perfection of their Prophet.

      • dmytryl

        srdiamond: Yea. Observe how ‘rationality’ is so successful at filling religion-shaped holes (afterlife, destiny, purpose, even an Old Testament) but so ineffective at forming correct testable beliefs or even correct logical chains (see the speed-of-evolution stuff).

      • VV

        Yudkowsky can be considered an amateurish philosopher; Chalmers and Bostrom are professional philosophers.

        Engineers may be less likely than the technically uneducated population to buy into “nutty” beliefs, but more likely than scientists. This has been noted anecdotally for Creationism (the so-called Salem hypothesis) and, IIRC, it has been observed with actual evidence for Islamic fundamentalism.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        Yudkowsky can be considered an amateurish philosopher, Chalmers and Bostrom are professional philosophers.

        Strange bedfellows. Chalmers occupies the extreme anti-materialist niche in the philosophy of mind.

        Yudkowsky has few kind words for philosophy, but his subject matter is the same. Yet his method is in one respect opposite. Whereas professional philosophers are overconcerned with responding to other professional philosophers, Yudkowsky prides himself on total ignorance of what anybody else had to say on these subjects. The result is that his positions are an eclectic combination of personal prejudices.

      • dmytryl

        VV:

        Engineers may be less likely than the technically uneducated population to buy into “nutty” beliefs, but more likely than scientists. This has been noted anecdotally for Creationism (the so-called Salem hypothesis) and, IIRC, it has been observed with actual evidence for Islamic fundamentalism.

        You need actual rates, though. There are probably a lot more engineers than scientists, so among the educated who believe in whatever, engineers dominate.

        The philosophers have become a very self-selected bunch nowadays – most of what was once philosophy is now science, and what remains is an incredibly narrow field limited to questions whose answers we can’t check, since such checks have shown philosophy’s complete inefficacy everywhere else. Trying to answer hard, grand questions with methods that cannot correctly answer any question whose answer you can check is a very odd quest. If you are curious about qualia, study mathematics – with some luck you may make a small step toward understanding how they arise; if the question of qualia merely bugs you, do philosophy – you’ll have a fake answer, or a feeling of knowing more, and you’ll have it now, not maybe in 100 years, maybe in 1000 years.

  • Dave Lindbergh

    Will it be streamed?

  • Drewfus

    “The incredible coincidence that we were born just decades before an imminent technological singularity…”

    So even though science can’t explain the interaction of the 302 neurons of C. elegans in 2012, we are right on track for reverse engineering the ~100,000,000,000 neurons of the human brain by 2029!

    Detailed maps of the brain and super-fast computers will fail to achieve this. Real insight is required now, and just as much as in the 1950s. I guess the idea of the Singularity helps keep morale and funding up, if nothing else.

    • Carl Shulman

      Last I heard, Jaan’s median estimate for broadly human-level AI was around 8 decades, not 2. And he certainly has never endorsed Kurzweil’s 2029 prediction as you suggest.

      • Drewfus

        That’s worse. On what conceivable basis can anyone make serious predictions of major scientific breakthroughs or milestones* 8 decades into the future?

        * Is the Singularity a breakthrough or a milestone? It is of course portrayed as milestone technologically, but also as a breakthrough era, socio-economically. That is the key to its appeal – merging breakthroughs into milestones to the point that exponential development appears to be the only requirement for the miracle to occur – no genius Eureka moments required. Now, simply observe historical and current technology growth, and it all seems so reasonable.

      • dmytryl

        Well, mind uploading might happen by 2092 without any major breakthrough, but it is virtually guaranteed not to happen by 2030.

        One claims the unlikely to be likely, the other claims the impossible to be likely – which is worse?

      • Drewfus

        “Well, mind uploading might happen by 2092 without any major breakthrough…”

        That is magical thinking. Imaging the brain and replicating that image does not give you another brain – it gives you noise. It certainly does not provide a model of the brain, and a model is what you want, not some useless simulation of an existing brain – unless you’re running an exhibition.

        “One claims unlikely to be likely, other claims impossible to be likely, what’s worse?”

        I made no claim, explicit or implicit. General AI might be achieved by 2029, or by Christmas, but there is no reason to suppose it is likely by any particular date. The present hints at the future – it does not point to it.

        Singularity thinking is bad thinking. It provides us with an excuse to avoid the necessity of searching for the underlying mechanisms of mind and brain, the difficulty of which cannot be more obvious after decades, if not centuries of trying.

        Some people say that Singularity is a religious concept. I disagree, but whatever is the underlying motivation for it, consider that the Western world outside of the United States is now only mildly religious. Reaction to The Great Stagnation is probably a better explanation (and definitely a more ironic one). Singularity is self-medication for those suffering the disappointment of retarded technological and scientific progress. It maintains our self-belief … for a while.

      • dmytryl

        That’s not magical thinking: in principle there’s no need for any giant breakthroughs in our understanding, just gradual progress, with technological advancements like the serial block-face scanning electron microscope (I’m actually working on software for processing data from it).

        The point is that mind uploading and running the simulation by 2092 is a possibility, while mind uploading and running the simulation by 2030 is not even a possibility – so Tallinn argues for the inevitability of what is merely unlikely, while Kurzweil argues for the inevitability of the impossible.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        Some people say that Singularity is a religious concept. I disagree, but whatever is the underlying motivation for it, consider that the Western world outside of the United States is now only mildly religious. Reaction to The Great Stagnation is probably a better explanation (and definitely a more ironic one). Singularity is self-medication for those suffering the disappointment of retarded technological and scientific progress. It maintains our self-belief … for a while.

        Singularity thinking is fueled by first-generation atheists, a peculiarly American phenomenon exactly because of the country’s religiosity.

        But I think, like you, that it’s (also) fueled by economic stagnation (although our diagnoses of its cause differ), though that doesn’t distinguish it from religions: “Religion is the opium of the people.” Religion has always arisen as compensation for worldly deprivation.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        On what conceivable basis can anyone make serious predictions of major scientific breakthroughs or milestones* 8 decades into the future?

        At least taken literally, that’s a poor argument. There’s a Latin name for the fallacy that I won’t try to retrieve, but it amounts to inferring that something isn’t so from your own present inability to conceive it. Hanson has given reasons in this blog.

        What you’re saying brings the whole “discipline” of “futurology” into question if there’s an in-principle reason why predictions can’t be made far into the future. If there is one, you should state it.

        Maybe an empirical approach is appropriate. How have the best futurological predictions fared? I know science-fiction writers used to generally assume that by now our automobiles would fly. Surely they would drive themselves without human assistance. Since I’m not a futurological enthusiast, someone would have to show that the field itself has a track record before I’d be beguiled into considering specific arguments (unless the arguments themselves prove to be interesting). 

      • http://juridicalcoherence.blogspot.com/ srdiamond

        That is the key to its appeal – merging breakthroughs into milestones to the point that exponential development appears to be the only requirement for the miracle to occur – no genius Eureka moments required.

        I think their outlook is that it’s a milestone as far as AI happening, a breakthrough that it be “safe.”

        Perhaps you accord excessive importance to acts of individual genius. I think those who have studied the question find that simultaneous or near-simultaneous invention is common; the question of the extent to which inventions are the result of the history of prior invention, which will “inevitably” occur in the mind of some genius or another, is somewhat an open question today.

        What I find most amazing is that the Singulatarians, the vanguard of the next scientific and social revolution, embrace the most backward philosophical ideas: moral realism [“morality” being part of the “utility function” programmed into an AI]; compatibilist free will; and the actual existence of phenomenological experience, these falsehoods constituting absolute roadblocks precluding success in their venture. What makes me nearly certain the leaders aren’t serious [that there’s a bit of a scam element] is that they don’t seem to care. (In a recent discussion, Yudkowsky said Muehlhauser never understood the former’s interminable screed on morality; Muehlhauser said perhaps Yudkowsky will explain it to him one afternoon.)

      • Drewfus

        @srdiamond

        “Perhaps you accord excessive importance to acts of individual genius.”

        That is an attitude (of mine), more than a prediction based on historical patterns of invention. It is the ‘use-value’ of assuming moments of genius will be required, rather than assuming any inevitability, let alone time-table, that i’m interested in. While you might be fairly harsh on the specific philosophies of the Singulatarians, my only philosophic comment on this is that we should be focused on the journey, not the destination.

        “At least taken literally, that’s a poor argument.”

        Perhaps because it was a question?

        “…the fallacy that … amounts to inferring that something isn’t so from your own present inability to conceive it.”

        How could i, or anyone, conceive of the ability to make accurate long-term science/tech predictions, if no one has ever demonstrated that ability? Sure i can imagine someone with this ability, but that’s where it stops – i’m not a religious type.

        “How have the best futurological predictions fared?”

        I’m with you on this, in that i’m not particularly interested in the whole subject. Futurology should be limited to looking at Intel’s processor roadmap for the next few years, and similar short-range projections.

        On the other hand, i think yourself and a few other regular commenters here are too keen on the psycho-analysis of those you disagree with philosophically and/or politically.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        “At least taken literally, that’s a poor argument.”

        “Perhaps because it was a question?”

        “…the fallacy that … amounts to inferring that something isn’t so from your own present inability to conceive it.”

        “How could i, or anyone, conceive of the ability to make accurate long-term science/tech predictions, if no one has ever demonstrated that ability? Sure i can imagine someone with this ability, but that’s where it stops – i’m not a religious type.”

        A rhetorical question! I mean, come on! You say it’s a “question” and then go on to defend the position it reflects. One can’t have a discussion that way. Hanson presents arguments; hence he “conceives” the possibility. Many things have been conceived before they were accomplished. The argument is almost too silly to address.

      • Drewfus

        @srdiamond: “You say it’s a ‘question’ and then go on to defend the position it reflects. One can’t have a discussion that way.”

        Completely disagree. You’re imputing to me a position i do not hold – that it is impossible in principle to predict the future several decades out. That’s not my claim. However, i’m still within my rights to defend another position – i have no reason to believe that the ability to predict the future decades hence is currently within anyone’s power.

        If i had been around in the mid-1950s and someone in that era had said to me they predicted a superpower would send men to the Moon and return them to Earth by the end of the next decade, responding “how could you possibly predict that” and denying that anyone has the ability to make that prediction would not mean i denied that such a prediction could in principle be made successfully; it would simply mean i saw no evidence that the ability to make a successful prediction of that nature exists.

        “Many things have been conceived before they are accomplished.”

        Vastly fewer than the number of things conceived that never see the light of day. This reminds me of Jacque Fresco, a man who designs lots of cool, futuristic-looking stuff and then labels what he does ‘the future’, as if calling something the future would make it come true.

  • dmytryl

    One thing that is pretty damn stupid, in my opinion, is the transition from exponential growth to some faster-than-exponential growth. As it is, computers are already heavily used to speed up the development of computers and to increase the number of people working on that development. Okay, suppose there’s a supercomputer running an uploaded human brain. That’s one more human, who may or may not be able to contribute usefully. On a planet of 7 billion. Where the resources required for that uploading and running would have supported several biological humans.

    Of course, when you really want a singularity/rapture/end days, you can handwave in something like uploads becoming hyperintelligent, or superhuman AI, or whatever, but at that point you’re just rationalizing the doomsday meme, not trying to predict anything.

    • Paul Christiano

      > On a planet of 7 billions. Where the resources required for that uploading and running would have supported several biological humans. That diversion of resources is not a speed up.

      Indeed, if uploads are much more expensive than humans then their economic relevance is limited. We are specifically thinking about changes that occur as ems replace humans. This may well happen continuously (though probably fairly quickly) as the cost of running emulations falls and eventually reaches parity with humans (as has been discussed many times here and elsewhere).

      The time required to build a computer and start running something on it is much faster than the time required to raise a human.

      > One thing that is pretty damn stupid, in my opinion, is the transition from exponential growth to some faster than exponential growth in some dramatic fashion that didn’t already happen.

      In simple models, you may get superexponential growth when machines become good enough substitutes for knowledge workers, or when population growth is proportional to economic output for some other reason.  Of course this only works if returns to technology diminish slowly enough, and eventually it’s got to stop as you run up against physical limits. But I don’t see why a period of superexponential growth is stupid. Even very simple models wouldn’t predict superexponential growth so far, unless they have increasing returns to capital (which is an orthogonal issue). I don’t know much economics, but this seems to be fairly straightforward and simple. Computers being used to speed up the development of computers is very obviously not the important criterion.

      (Incidentally, Robin is talking about faster exponential growth, which I think is uncontroversial if machines substitute for humans, and I don’t think the distinction makes much difference to Jaan’s point. Maybe you are talking about something else.)

      • dmytryl

        Well, the Kurzweil kind of singularitarians take the existing exponential growth – the speed of computers doubling every 1.5 years – and then argue that in the glorious future the rate of doubling will itself double, which indeed hits a singularity in the mathematical sense. This ignores the fact that the existing rapid exponential growth is already a result of computers speeding up their own development (and recruiting more humans).

        With regard to the singularity as a limit of prediction, that’s the pipe dream of futurists: to ingrain the assumption that their limit of prediction is not ‘yesterday’ but some interesting timeframe in the future. There are no non-trivial predictions that can be made even without any speed-up of progress; it’s just that speeding up progress makes this abundantly clear over shorter timespans.

      • VV

         Anyway, the serial speed of computers has pretty much plateaued, and improvements due to parallelization are limited by Amdahl’s law.
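VV's point about Amdahl's law can be made concrete with a short sketch (the function name and the sample numbers are mine, chosen for illustration): with a fixed serial fraction, adding processors yields sharply diminishing returns.

```python
def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Amdahl's law: overall speedup when only part of a task parallelizes.

    speedup = 1 / ((1 - p) + p / n), where p is the parallelizable fraction.
    """
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_processors)

# A task that is 95% parallelizable: the 5% serial part caps the attainable
# speedup at 1 / 0.05 = 20x, no matter how many processors you add.
print(amdahl_speedup(0.95, 8))      # roughly 5.9x on 8 processors
print(amdahl_speedup(0.95, 10**6))  # approaches the 20x ceiling
```

The serial fraction, not the processor count, dominates once n is large.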

      • dmytryl

         VV:

        Amdahl’s law, fortunately, does not affect brain simulation as the brain is also a parallel system.

        Moore’s law is about to plateau, though, likely in fewer than 4 doublings. The past improvements also came from decreased cost: photolithography is immensely cheaper per component than discrete elements, discrete transistors are much cheaper than vacuum tubes, vacuum tubes are cheaper than electromechanical relays, etc. But there is no replacement in sight that would be cheaper than photolithography, and that sort of thing doesn’t come around overnight.

      • VV

         

        In simple models, you may get superexponential growth when machines become good enough substitutes for knowledge workers, or when population growth is proportional to economic output for some other reason.

        Growth of what? GDP? Computer processing power? Intelligence?

        I would think that super-exponential models are extremely controversial. What model do you have in mind?

      • Paul Christiano

        GDP or processing power, or *any* other thing that people talk about when they talk about “growth.”

        I’m not an economist, so I don’t know anything non-obvious about modeling growth. The first Google result for “growth model” gives Solow–Swan, which seems to work fine and be popular.

        In this model, if capital can substitute completely for labor, then there are constant returns to capital. So the capital stock will begin growing exponentially, independent of population growth, at a rate which depends directly on multifactor productivity. (Though now multifactor productivity reduces to average capital productivity, or whatever.)

        If multifactor productivity is increasing, then the rate of exponential growth would increase. Multifactor productivity has been increasing exponentially for practically all of history as far as I can tell, so you would minimally expect an exponentially increasing rate of exponential progress. 

        If tech progress depended only on output and we kept up the historical relationship between output and tech progress, then you would get an equation of the form dK/dt = K^(1+alpha) for some small alpha. And that sends K to infinity in finite time, i.e. the model breaks down. But this might well give you a period of very fast growth.
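As a sanity check on the toy equation dK/dt = K^(1+alpha), here is a small numerical sketch (the function names and the sample values alpha = 0.1, K(0) = 1 are mine, not from the discussion): separating variables gives a closed-form blow-up time, and a crude Euler integration shows the explosive growth as that time approaches.

```python
def blowup_time(k0: float, alpha: float) -> float:
    """Finite-time singularity of dK/dt = K**(1 + alpha).

    Separating variables gives K(t) = (k0**(-alpha) - alpha*t)**(-1/alpha),
    which diverges at t* = k0**(-alpha) / alpha.
    """
    return k0 ** (-alpha) / alpha

def euler_growth(k0: float, alpha: float, t_end: float, dt: float = 1e-4) -> float:
    """Crude forward-Euler integration of dK/dt = K**(1 + alpha) up to t_end."""
    k, t = k0, 0.0
    while t < t_end:
        k += dt * k ** (1 + alpha)
        t += dt
    return k

# With alpha = 0.1 and K(0) = 1, the model diverges at t* = 10.
print(blowup_time(1.0, 0.1))        # 10.0
print(euler_growth(1.0, 0.1, 9.0))  # already huge well before t*
```

The qualitative point matches Paul's caveat: the model breaks down at t*, but delivers a period of very fast growth before then.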
