Miller’s Singularity Rising

James Miller, who posted once here at OB, has a new book, Singularity Rising, out Oct 2. I’ve read an advance copy. Here are my various reactions to the book.

Miller discusses several possible paths to super-intelligence, but never says which paths he thinks likely, nor when any might happen. However, he is confident that one will happen eventually, he calls Kurzweil’s 2045 forecast “robust”, and he offers readers personal advice as if something will happen in their lifetimes.

I get a lot of coverage in chapter 13, which discusses whole brain emulations. (And Katja is mentioned on pp.213-214.) While Miller focuses mostly on what emulations imply for humans, he does note that many ems could die from poverty or obsolescence. He makes no overall judgement on the scenario, however, other than to once use the word “dystopian.”

While Miller’s discussion of emulations is entirely of the scenario of a large economy containing many emulations, his discussion of non-emulation AI is entirely of the scenario of a single “ultra AI”. He never considers a single ultra emulation, nor an economy of many AIs. Nor does he explain these choices.

On ultra AIs, Miller considers only an “intelligence explosion” scenario where a human level AI turns itself into an ultra AI “in a period of weeks, days, or even hours.” His arguments for this extremely short timescale are:

  1. Self-reproducing nanotech factories might double every hour,
  2. On a scale of all possible minds, a chimp isn’t far from von Neumann in intelligence, and
  3. Evolution has trouble coordinating changes, but an AI could use brain materials and structures that evolution couldn’t.

I’ve said before that I don’t see how these imply a weeks timescale for one human level AI to make itself more powerful than the entire rest of the world put together. Miller explains my skepticism:

As Hanson told me, the implausibility of some James Bond villains illustrates a reason to be skeptical of an intelligence explosion. A few of these villains had their own private islands on which they created new powerful weapons. But weapons development is a time- and resource-intensive task, making it extremely unlikely that the villain’s small team of followers could out-innovate all of the weapons developers in the rest of the world by producing spectacularly destructive instruments that no other military force possessed. Thinking that a few henchmen, even if led by an evil genius, would do a better job at weapons development than a major defense contractor is as silly as believing that the professor on Gilligan’s Island really could have created his own coconut-based technology. …

Think of an innovation race between a single AI and the entirety of mankind. For an intelligence explosion to occur, the AI has to not only win the race, but finish before humanity completes its next stride. A sufficiently smart AI could certainly do this, but an AI only a bit brighter than von Neumann would not have the slightest chance of achieving this margin of victory. (pp.215-216)

As you can tell from this quotation, Miller’s book often reads like the economics textbook he wrote. He is usually content to be a tutor, explaining common positions and intuitions behind common arguments. He does, however, explain some of his personal contributions to this field, such as his argument that preventing the destruction of the world can be a public good undersupplied by private firms, and that development might slow down just before an anticipated explosion, if investors think non-investors will gain or lose just as much as investors from the change.

I’m not sure this book has much of a chance to get very popular. The competition is fierce, Miller isn’t already famous, and while his writing quality is good, it isn’t at the blockbuster popular book level. But I wish his book all the success it can muster.

  • Tim Tyler

    Normally, investors invest in order to help ensure they benefit more than non-investors from any resulting changes. They are unlikely to stop wanting this – and if others don’t want their investments, then their loss probably won’t be missed.

    • Tax Slave

      Don’t worry. They will just buy off the state to force the rest of us to cover their losses. Public risk, private profit. It’s the bankster way.

  • V_V

    The Singularity was already an old idea when Kurzweil’s last book came out and when you and Yudkowsky had your debate. From your review, it doesn’t seem that this book adds anything significant.

  • Jay

    It seems to me that we already have ultrapowerful AIs, foremost among them the Googleplex.  If it lacks any particular motivations or desire to grow, that’s only because it isn’t clumsily welded into an ape.

  • kurt9

    But weapons development is a time- and resource-intensive task, making it extremely unlikely that the villain’s small team of followers could out-innovate all of the weapons developers in the rest of the world by producing spectacularly destructive instruments that no other military force possessed.

    Generally this is true, especially for nuclear weapons technology. However, there are two caveats that must be mentioned. One, large institutions such as governments and large corporations are bureaucracies, and it is common knowledge that bureaucracies have a hard time with innovation. Two, we talk a lot about the AI revolution, which is still mostly theoretical to me. However, the parallel revolution in manufacturing (3D printing/additive manufacturing, and later some kind of nanotechnology) will make it possible for small groups to accomplish things that only governments and large corporations can do now. I call this the manufacturing revolution or singularity. It will tip the balance in favor of the individual and small groups. Indeed, Peter Thiel is adamant that this revolution is absolutely essential for the preservation and expansion of individual liberty. I completely agree with him on this point.

    Governments and large corporations are dinosaurs, and rightly deserve extinction. However, they will not go quietly into the night. The pursuit of liberty, as always in the past, will require struggle.

    I remain skeptical on the promise of A.I. We still know little about neurobiology and even when we do understand it, modeling it on semiconductor-based computers using software will prove to be a very difficult feat.


    • V_V


      However, the parallel revolution in manufacturing (3D printing/additive manufacturing, and later some kind of nanotechnology) will make it possible for small groups to accomplish things that only governments and large corporations can do now. I call this the manufacturing revolution or singularity. It will tip the balance in favor of the individual and small groups.

      What is the one thing that can be manufactured easily without expensive equipment?

      Software.

      Who does actually make most of commercial software?

      Large corporations.

  • http://entitledtoanopinion.wordpress.com TGGP

    New readers might not realize that Miller didn’t just post here once but a decent number of times.

  • GNZ

    Maybe the book covers this, but I imagine this scenario:

    1) There is a system that provides additional intelligence to people (imagine a chip in your brain linked to software that processes Google searches or sets of data that you pass to it, then gives you back intelligent answers to your queries).

    2) One of these systems proves better than the others and so dominates the market. In at least some tasks it beats other models by a tiny fraction of a percent, and due to the nature of those markets, people (and the system itself) can leverage that to force other systems out.

    3) People with such a device outperform those without it in pretty much any area they might care about (from dancing to dating to working).

    4) It is impossible to tell whether someone uses it unless they want people to know.

    5) The receiver is pretty cheap to mass-produce, the marginal cost of adding another user to the system is almost zero, installation is a simple operation, and no maintenance is required within a lifetime.

    6) Now let’s say it proves advantageous for the system to be “intelligent”.

    Now there is no race between the AI and the rest of human intelligence, as everyone is using the AI to do most of the heavy lifting anyway. With appropriate controls, this AI just continues to intelligently serve requests and engages in whatever private internal thoughts it cares to concern itself with.

  • dmytryl

    A human-level AI is just one more human who is maybe working on AI, for a speedup factor of perhaps 0.00000001% (assuming a population of 10 billion at the time of the AI). Actually, wait, even that is overoptimistic: a human-level AI is a speedup of perhaps 0.00000001% after 20 years.

    It is still possible that humans would improve the AI to a superhuman level relatively quickly, but I am dubious about that. A lot of the intelligent things we do are NP-complete or even EXPSPACE-hard. One can of course give the sci-fi response of ‘heuristics’, and that bit of technobabble would do to close a plot hole in a sci-fi story. Outside the context of making stories, though, for plenty of problems there are no good heuristics, or no heuristics substantially better than a known one. People tend to, e.g., imagine very deep prediction of chaotic systems by a superintelligent being. That is a task which requires knowledge, space, and computing time exponential in the length of the prediction, and that is a fundamental property of the system you’re trying to predict (sensitivity to initial conditions; see the Lyapunov exponent). On anything exponential, you need to be to mankind as mankind is to an amoeba merely to double your ability.
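    [A minimal sketch of the sensitivity-to-initial-conditions point, using the standard logistic map with r = 4, a textbook chaotic system; the starting values are arbitrary illustrations, not from the comment:]

```python
# Two trajectories of the chaotic logistic map x' = 4*x*(1 - x),
# started a distance of 1e-10 apart. For r = 4 the Lyapunov exponent
# is ln 2, so the gap grows roughly like 2^n per step: each extra step
# of prediction demands exponentially more initial precision.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-10  # agree to ten decimal places at the start
for step in range(1, 101):
    a, b = logistic(a), logistic(b)
    if abs(a - b) > 0.1:
        print(f"trajectories visibly diverged after {step} steps")
        break
```

    [Despite agreeing to ten decimal places at the start, the two runs disagree grossly after a few dozen iterations, which is the exponential-knowledge point being made above.]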

  • Mark Bahner

    “I’ve said before that I don’t see how these imply a weeks timescale for one human level AI to make itself more powerful than the entire rest of the world put together.”

    Let’s say artificial intelligence is only able to double its level of intelligence every 6 months (not even tremendously faster than Moore’s Law). That still means that in 5 years it improves in intelligence by a factor of 1000. (!!!)

    So in a little more than one term of a President, computers go from as smart as humans to 1000 times smarter. That’s mind-boggling.
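    [A quick check of the arithmetic above; nothing here is from the book, it is just the doubling computation spelled out:]

```python
# Doubling every 6 months is 2 doublings per year, so 10 doublings in 5 years.
years = 5
doublings_per_year = 2
factor = 2 ** (years * doublings_per_year)
print(factor)  # → 1024, i.e. roughly the "factor of 1000" in the comment
```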

  • http://www.gwern.net/ gwern
  • Pingback: What I’ve been reading | Pablo's miscellany

  • Pingback: James Miller on Unusual Incentives Facing AGI Companies | Machine Intelligence Research Institute