Stross on Singularity

I’ve long enjoyed the science fiction novels of Charlie Stross, so I’m honored that he linked to my “Betterness Explosion” from his “Three arguments against the singularity”:

I periodically get email from folks who, having read “Accelerando”, assume I am some kind of fire-breathing extropian zealot who believes in the imminence of the singularity, the uploading of the libertarians, and the rapture of the nerds. … It’s time to set the record straight. … Santa Claus doesn’t exist. …

(Economic libertarianism is based on … reductionist … 19th century classical economics — a drastic over-simplification of human behaviour. … If acted upon, would result in either failure or a hellishly unpleasant state of post-industrial feudalism.) …

I can’t prove that there isn’t going to be a hard take-off singularity in which a human-equivalent AI rapidly bootstraps itself to de-facto god-hood. Nor can I prove that mind uploading won’t work, or that we are or aren’t living in a simulation. … However, … the prospects aren’t good.

First: super-intelligent AI is unlikely because … human-equivalent AI is unlikely. … We’re likely to leave out … needing to sleep for roughly 30% of the time, being lazy or emotionally unstable, and having motivations of its own. … We clearly want machines that perform human-like tasks. … But whether we want them to be conscious and volitional is another question entirely.

Uploading … is not obviously impossible. … Imagine most of the inhabited universe has been converted to a computer network, … programs live side by side with downloaded human minds and accompanying simulated human bodies. … A human mind would lumber about in a massively inappropriate body simulation. … I strongly suspect that the hardest part of mind uploading … [is] the body and its interactions with its surroundings. …

Moving on to the Simulation Argument: … anyone capable of creating an ancestor simulation wouldn’t be focussing their attention on any ancestors as primitive as us. … This is my take on the singularity: we’re not going to see a hard take-off, or a slow take-off, or any kind of AI-mediated exponential outburst. What we’re going to see is increasingly solicitous machines defining our environment … We may eventually see mind uploading, but … our hard-wired biophilia will keep dragging us back to the real world, or to simulations indistinguishable from it. …

The simulation hypothesis … we can’t actually prove anything about it. … Any way you cut these three ideas, they don’t provide much in the way of referent points for building a good life. … It’s unwise to live on the assumption that they’re coming down the pipeline within my lifetime.

Alas, Stross’s post is a bit of a rant: strong on emotion, but weak on argument. Maybe Stross did or will explain more elsewhere, but while he makes clear that he doesn’t want to associate with singularity fans, he doesn’t make clear that he actually disagrees with them much. Most thoughtful singularity fans probably agree that, where possible, hand-coded AI would be designed to be solicitous and to avoid human failings, that simple unmodified uploaded minds are probably not competitive creatures in the long run, and that only a tiny fraction of our distant descendants would be interested in simulating us. (We libertarian-leaning economists even agree that classical econ greatly simplifies.)

But the fact that hand-coded AIs would differ in many ways from humans says little on the key issues of when AI will appear, how fast it would improve, how local that growth would be, and how fast the world economy would grow as a result. The fact that eventually unmodified human uploads would not be competitive says little on the key issues of whether uploads come before powerful hand-coded AI, how long nearly unmodified uploads would dominate, or just how far from humans would be the most competitive creatures. And the fact that few descendants would simulate ancestor humans says little on the key question of how that small fraction, multiplied by the vast number of descendants, compares to the actual number of ancestor humans. (And the fact that classical econ greatly simplifies says little on the pleasantness of libertarian policies.)
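
To put that last comparison in rough symbols (a sketch with notation of my own choosing, not Stross’s): if a fraction f of N total descendants each simulate k ancestor minds, while H ancestor humans actually lived, then the share of ancestor-like minds that are simulated is f·N·k / (f·N·k + H), which approaches one whenever f·N·k greatly exceeds H. Even a tiny fraction f can be swamped by an astronomically large N.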

Stross seems smart and well-read enough to have interesting things to say on these key questions, if only he can overcome his personal revulsion against affiliating with singularity fans and engage the questions directly.

  • http://www.uncrediblehallq.net/ Chris Hallquist

    I think it’s pretty clear that Stross does disagree with singularity fans. He thinks both hand-coded human-equivalent AIs and EMs are a long way off, not at all in the foreseeable future.

    Personally, given my current knowledge base, I’d predict that we’ll get EMs long before we get human-equivalent AI, and that modifying the EMs into something better will be a long, slow process.

    Do you think otherwise?

  • http://lukeparrish.rationalsites.com Luke Parrish

    … It’s unwise to live on the assumption that they’re coming down the pipeline within my lifetime.

    Is this an endorsement of cryonics?

  • http://www.webgeekhq.net Craig

    I think “personal revulsion against affiliating with singularity fans” is a bit extreme. More like annoyance with singularity zealots.

    He seems to regard the topics as interesting things to talk about, just not likely to happen anytime soon, and certainly not worth planning your life around.

    If he truly didn’t want to interact with singularity fans, I think he’s smart enough to know better than to write blog posts on the subject.

  • http://www.ciphergoth.org/ Paul Crowley

    Luke: he considers cryonics “faith-based” and “barely more practical than the Ancient Egyptian mummy-makers”.

    • http://lukeparrish.rationalsites.com/ Luke Parrish

      Luke: he considers cryonics “faith-based” and “barely more practical than the Ancient Egyptian mummy-makers”.

      Would this statement apply to all cryonics that might feasibly be achieved within a human lifetime per his prediction, or is it limited to cryonics achieved to date? This is a crucial, frequently glossed-over distinction.

  • IVV

    Robin,

    I’m sure you’ve already addressed this somewhere, but I’m combining some of your thoughts, and this is what I’m seeing:

    Ems would be held at subsistence-level rates. One of the advances that would allow for mass emulation is the singularity. Would it not follow that if the singularity is possible, then it is likely that it has already happened, and that we are all subsistence-level ems within a simulation run by the singularity-intelligence?

  • Aron

“how long nearly unmodified uploads would dominate, or just how far from humans would be the most competitive creatures”

    These are all things that seem poorly integrated in your own writings.

  • vampirarchist

    As he explains in the comments, he’s sick of fat singularity fanboys pestering him because of his work:

    I’m not convinced that the singularity isn’t going to happen. It’s just that I am deathly tired of the cheerleader squad approaching me and demanding to know precisely how many femtoseconds it’s going to be until they can upload into AI heaven and leave the meatsack behind.

    (Maybe if they paid more attention to meatsack maintenance in the short term they’d have a better chance of surviving to see the rapture of the nerds — and even enjoy the meatsack experience along the way.)

    Moravec’s writing is what turned me on to transhumanism in the first place, in the late 1980s/early 1990s.

  • Esebian

    strong on emotion, but weak on argument

He certainly is a polyhistor, but sadly this applies to many of his statements (or maybe it’s just me still being cross with him for getting snarky in a nuclear power pro/contra debate). As a layman I can’t say much about the feasibility of emulating a complex neural network or other singularity-related topics, but his argumentation here sounds reasonable.

What I wonder is: if smart people do figure out a smart process for creating a superhuman or even “god-like” AI, how will they guarantee that its agenda is congruent with my well-being and that of the rest of humanity?