Yawn, World Remade

What dramatic new events are in store for humanity? Here we contemplate 12 possibilities and rate their likelihood of happening by 2050. … They all have the power to forever reshape how we think about ourselves and how we live our lives.

That is from the June Scientific American, which doesn’t seem to realize that one of their 12 possibilities matters far more than the rest. They assign a greater than 50% chance to advanced AI by 2050!

LIKELY: Machine self-awareness
What happens when robots start calling the shots?

Artificial-intelligence (AI) researchers have no doubt that the development of highly intelligent computers and robots that can self-replicate, teach themselves and adapt to different conditions will change the world. … Computers with adaptable and advanced hardware and software might someday become self-aware. … When machine self-awareness first occurs, it will be followed by self-improvement. … Improvements would be made in subsequent generations, which, for machines, can pass in only a few hours. In other words, Wright notes, self-awareness leads to self-replication leads to better machines made without humans involved. “Personally, I’ve always been more scared of this scenario than a lot of others” in regard to the fate of humanity, he says. … Not everyone is so pessimistic. … This emergence of more intelligent AI won’t come on “like an alien invasion of machines to replace us,” agrees futurist and prominent author Ray Kurzweil. Machines, he says, will follow a path that mirrors the evolution of humans. Ultimately, however, self-aware, self-improving machines will evolve beyond humans’ ability to control or even understand them, he adds.

The other eleven possibilities:

cloning of a human (likely), extra dimensions (50-50), extraterrestrial intelligence (unlikely), nuclear exchange (unlikely), creation of life (almost certain), room-temperature superconductors (50-50), polar meltdown (likely), Pacific earthquake (almost certain), fusion energy (very unlikely), asteroid collision (unlikely), deadly pandemic (50-50).

Scientific American seems unaware that the AI possibility’s expected effects far outweigh all the rest. If accurate, this one forecast deserves vastly more attention than a 700-word comment. If they really took it seriously, they might devote an entire issue to the subject, or perhaps even their entire future magazine. Either they don’t really believe their >50% number, they don’t understand its enormous civilization-remaking consequences, or they (and their readers) don’t find such vast consequences several decades hence of much interest. Which is it?

  • Carl Shulman

    By The Editors, Charles Q. Choi, George Musser, John Matson, Philip Yam, David Biello, Michael Moyer, Larry Greenemeier, Katherine Harmon and Robin Lloyd

    I would assume that the authors had varying opinions and each got input. Probably one or two think AI by 2050 likely, and those editors would have much more to say about the topic.

or they (and their readers) don’t find enormous consequences several decades hence of much interest

    SciAm devotes enormous attention and advocacy efforts to climate change.

  • Carl Shulman

    Also, the combination of a claim that fusion power is very unlikely with ‘likely’ superintelligent AI (which could invent fusion power if the latter is feasible for an arbitrarily advanced technological culture) suggests a lack of integration between the predictions.

  • http://hanson.gmu.edu Robin Hanson

    Carl, good points about fusion and climate change.

  • http://theopensociety.wordpress.com/ Lennart Regebro

The problem is that there is nothing you can write about the “singularity” of machines becoming self-aware. If we really can get self-improving, self-designing machines, they will change the world so drastically that we can hardly even speculate about it.

    When machines become radically more intelligent than us, they can either:

    1. Kill us
    2. Keep us as pets
    3. A combination
    4. Something we can’t even imagine

OK, that’s all. A 700-word article? Why? Everything written in it is likely to be wrong. :-)

  • Jess Riedel

    the AI possibility’s expected effects far outweigh all the rest

    Well, this is true so long as AI comes to pass. There are a few events on that list with extinction potential.

    • kevin

      Actually, I’m pretty sure that only asteroid impact and AI have genuine extinction potential. Nuclear exchange might lead to gigadeaths, but total extinction is almost impossible.

  • nazgulnarsil

    Much in the way that episodes of Star Trek (and indeed most science fiction) are really about the present, so too is the role of futurism. People care about signaling how concerned/prepared/accurately calibrated about the future they are. They don’t actually care about the future.

  • Tom Adams

    I have noticed that representing certain mainline institutions and discussing futuristic ideas just don’t mix. There should be a name for this non-superposition principle.

    I first formulated this principle when I was listening to an NPR program on futurism. They had a technologist from Microsoft, a biologist, and a science fiction writer. The technologist and the biologist were both asked to make a prediction. They both made 5-year predictions, very conservative. The Microsoft guy predicted theater-quality movies on your PC. They could not do better without sounding wild-eyed. The wild-eyed threshold for prediction is no greater than 5 years.

    If you want to make sober predictions you either limit your predictions to 5 years or you act like Scientific American and pretend to not see the implications of what you are predicting.

  • http://hanson.gmu.edu Robin Hanson

    Lennart, just because you have nothing to say doesn’t mean others have nothing.

    Jess, expected effect is a product of probability and value difference.

    • Jess Riedel

      Oops, you’re right. I interpreted that as the more colloquial meaning of “expect”.

  • Ed

    Even without supposing an actual singularity, it is much harder to tell what the outcome of artificial intelligence will be. A lot will presumably depend upon the choices of individuals – programmers, politicians, etc. – which may be quite idiosyncratic and unpredictable. A lot will depend on what artificial intelligence turns out to be useful for. Since artificial intelligence is in some ways still a solution in search of a problem, unlike, say, fusion, we don’t have an answer to that. And there are a lot of possibilities for what machine intelligence might be. E.g., many of humans’ cognitive capabilities might be learned through contact with the physical world or human society, and might be hard to program into a computer that lacks a body or social position.

  • Chris T

    More likely the authors didn’t actually know much about the subject. The quality of Sci Am has gone off a cliff. Staff writers with journalist backgrounds write most of the articles now.

  • Aron

    If they had decided to turn it into a full issue, then they probably would have scrapped this exercise, and then you wouldn’t have been able to ask the question.

  • http://religionsetspolitics.blogspot.com/ Joshua Zelinsky

    This is true if AI leads to an intelligence explosion. They might give a high probability to smart AI but a much lower probability to an intelligence explosion given that. (And reading the piece, that’s the impression I get.)

    AI is also not necessarily more important than some of these. Asteroids, pandemics, and nuclear exchange are all existential risks to humanity. AI is only different in that it might present an existential risk but might also present a massive benefit.

    The one that really puzzles me is the Pacific earthquake one. That seems orders of magnitude less important than anything else on the list. If I had to replace it with something else I’d probably add either space elevators or nanotechnology.

    • http://williambswift.blogspot.com/ billswift

      Nuclear exchange is NOT an existential threat. Pandemics theoretically could be, but I have seen no evidence that the horror-movie scenarios being tossed around are actually plausible. The odds of an asteroid strike occurring in the next forty years are ridiculously small. Their rating an asteroid collision as unlikely and fusion power as very unlikely is stupid.

    • gwern

      Joshua: you don’t even need an intelligence explosion for AI to be cataclysmic. Just digital human-level intelligence is enough – no need to invoke either strong or weak superintelligence.

      Imagine a human-level AI running on $100,000 a year of hardware, and imagine Moore’s law has completely shut down. You copy the premier patent law attorney, the premier oncologist, etc. Suddenly, those markets go from their current oligopolies to perfectly competitive winner-take-all markets reminiscent of FLOSS. (Why settle for an expensive inferior human, or Lawyer 1.2, when you can buy/rent Lawyer 2.0?)

      And this can apply to most, if not all, of the white-collar professions. Even surgeons have been preparing their replacements with tele-surgery robots.

      So, the blue-collar laborers get squeezed from below by machines, white-collar workers get squeezed from above by copies of the #1 in their profession, and that leaves not very much left. It may be a net win for humanity, but the ‘crack of a future dawn’ scenario will still be very painful for very many.

      (As far as SA goes, I go with the dishonest-forecast and ignorance explanations. I’m not too sure what one could do in the crack scenario, though – buy equities? Try to change careers to something status-related that forbids copying?)

  • well

    You forgot the possibility that they don’t want to scare the hell out of their readers.

  • well

    As for Carl’s point:

    The supercomputer would find out everything listed along with it. So, if it is developed, then everything else is either “certain” or “impossible”.

  • Captain Oblivious

    Either they don’t really believe their >50% number, they don’t understand its enormous civilization-remaking consequences, or they (and their readers) don’t find such vast consequences several decades hence of much interest. Which is it?

    I’m not actually disagreeing about the possible impact of AI, but I have to point out that you’re missing one “possibility”: perhaps others have thought it through and reached different conclusions, and you are simply wrong about the impact!

    I’m not attempting to assess the likelihood that AI will happen, or that its impact will be large or small if it does happen – but I find it interesting that you’re not even willing to consider that you might be wrong (at least this is merely “overcoming bias”, not “less wrong”!).

    You might want to think about that, Robin…

    • torekp

      While Captain O. is right that Robin is a bit too hasty to affirm the huge consequences of successful AI, that doesn’t change the inadequacy of SciAm’s treatment, unless one goes so far as to say that Robin is very likely wrong about the consequences. Suppose for example that we assign P(Singularity | AI) = 0.5 and P(Gradual change | AI) = 0.5. Then the intelligent machine scenario is still far and away the one with the greatest expected effect.
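
      To make the arithmetic explicit, here is a minimal Python sketch of that comparison. The conditional probabilities are the ones supposed above; every impact magnitude is a made-up illustrative number, not an estimate from the article:

```python
# Toy expected-effect comparison for SciAm's scenarios, using Robin's
# "probability times value difference" formula. All impact magnitudes
# below are made-up illustrative numbers, not data from the article.

P_AI = 0.5                    # SciAm rates machine self-awareness "likely"
P_SING_GIVEN_AI = 0.5         # supposition above: P(Singularity | AI)
P_GRADUAL_GIVEN_AI = 0.5      # supposition above: P(Gradual change | AI)

IMPACT_SINGULARITY = 1000.0   # assumption: remakes civilization
IMPACT_GRADUAL = 50.0         # assumption: large but incremental change
IMPACT_PANDEMIC = 100.0       # assumption: severe but recoverable
P_PANDEMIC = 0.5              # SciAm rates a deadly pandemic 50-50

# Expected effect = probability of the event times its impact,
# averaging over the two post-AI branches for the AI scenario.
expected_ai = P_AI * (P_SING_GIVEN_AI * IMPACT_SINGULARITY
                      + P_GRADUAL_GIVEN_AI * IMPACT_GRADUAL)
expected_pandemic = P_PANDEMIC * IMPACT_PANDEMIC

print(f"expected effect, AI:       {expected_ai:.0f}")        # 262
print(f"expected effect, pandemic: {expected_pandemic:.0f}")  # 50
```

      Even cutting P(Singularity | AI) well below 0.5 leaves the AI scenario dominant, so long as its upside impact is taken to be an order of magnitude or more above the other scenarios’.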

      In reply to Lord’s comments, AI is not necessarily disembodied, even today. Much manufacturing is computer-driven, and robots build many products including machinery.

  • http://www.blogger.com/profile/10546265581296919974 Rob

    Chalmers on the Singularity at Philosophy Bites.

  • Tom Adams

    Seems to me the slow step in improvement would be confirming that a change is an improvement via field testing, and that can take time. You can test in a simulation, but you need to confirm the fidelity of the simulation to the real world, and that itself involves something like field testing.

    Even if a machine could reproduce instantly, it would still need to learn and compete in the environment and that takes time.

    The question of what can be simulated at sufficiently high fidelity by what date might set a speed limit on the process of improvement.

  • Lord

    A disembodied intelligence would not necessarily even have the same interests as us. That it could solve fusion merely by thinking about it, even if it wanted to, seems naive. Even developing real-world interfaces more sophisticated than webcams, speech synthesizers, and text analyzers would be difficult, and until then it would be reliant on human-provided data. Synthetic humans may offer the best hope of providing that kind of interaction but may have many of the same limitations as humans. That is why there is probably no singularity, only a gradual adaptation and working out of innumerable limitations and problems.

  • http://greg.abstrakt.ch Gregor J. Rothfuss

    Their target audience is at most SL1, so these predictions are not surprising at all. Writing above the heads of an audience doesn’t sell magazines.

  • bob

    There is an acceleration of the collaboration of many humans with many machines. Any human-level AI will be at best a boost to the present trend.

    Having hyped high hopes is a feature of intelligence; I’m sure such AIs would have them too.

  • ravi hegde

    Bah .. one blind man ridiculing another blind man regarding their picture of the elephant .. perhaps they aren’t so sold on the singularity idea of a “powerfully intelligent AI” (whatever that means).

  • http://www.crossfirefusor.com Robert

    I think fusion could be very likely. Thinking outside the box, aneutronic nuclear fusion could be a cutting-edge route to a solution.

  • patrick

    I still subscribe to SA, but I have not taken it very seriously for the past few years, since one of the issues had almost every story/editorial about how global warming, sorry, now climate change, is going to kill us all, and in every possible manner (more earthquakes, volcanoes, mass species extinction, floods, and locusts). That was paired with absolutely no reporting on advances in nuclear fusion and fission, which could actually solve any CO2 problems. So it does not surprise me that SA sees a greater chance of nuclear exchange than of advances in nuclear energy; their bias is that nuclear = bad.

  • Jacek

    Well, most of you focus on the possible consequences, but the impact considered by SA also has a time frame! And so the impact of each technology/event should appear within that time frame. Nevertheless, I do believe in AI coming out of the labs by that time. And I do NOT believe the polar meltdown would have any dramatic consequences for humankind or the way we live our lives. Not to mention it is very unlikely to happen by 2050, based on the average temperatures there and the rate of global warming, even if it could hold its speed for the next 4 decades (which is extremely unlikely)!
    On a side note, I would rate a deadly pandemic as the no. 1 threat (certain within that time frame and with disastrous consequences!). And we’re already witnessing it today. The name of the illness is socialism. It spreads extremely fast all around the world, with the EU in the lead and the USA running fast (like on steroids) to catch up with them.

  • Steven Schreiber

    Non-trivial possibility: the length of each segment is driven by some other editorial demand, like space, rather than by how important any of these things is.

  • David Schiffer

    I don’t think that AI is an existential risk. It is going to be more of a golden opportunity. For some, not for all.

    Given that most people oppose AI on various grounds (religious, economic), chances are it will be implemented within a small group, and very few people will get to benefit from it. Wealthy people would probably be the first to use it.

    This isn’t a regular technology, and it will not go first to the rich and then to everybody else, as happened with phones or computers over a couple of decades. This is where Kurzweil is wrong.

    Can someone imagine the dynamics of a group that has access to AI for 20-30 years?

    I doubt that after 20 or 30 years, heck, even after 10 years, they would need any money, so the assumption that it will be shared with the rest of the world for financial reasons doesn’t seem founded.

    So I am trying to save up and figure out what the cost of entry to this club would be.

    Any thoughts on that?