Aaronson on Singularity

Scott Aaronson says "The Singularity Is Far":

Last week I read Ray Kurzweil’s The Singularity Is Near, which argues that by 2045, or somewhere around then, advances in AI, neuroscience, nanotechnology, and other fields will let us transcend biology, upload our brains to computers, and achieve the dreams of the ancient religions, including eternal life and whatever simulated sex partners we want. … While I share Kurzweil’s ethical sense, I don’t share his technological optimism.  Everywhere he looks, Kurzweil sees Moore’s-Law-type exponential trajectories … I [instead] see a few fragile and improbable victories against a backdrop of malice, stupidity, and greed … if the Singularity ever does arrive, I expect it to be plagued by frequent outages and terrible customer service. …

[Here is] why I haven’t chosen to spend my life worrying about the Singularity. … There are vastly easier prerequisite questions that we already don’t know how to answer.  … [We may] discover some completely unanticipated reason why … uploading our brains to computers was a harebrained idea from the start … Given our current ignorance, there seems to me to be relatively little worth saying about the Singularity – and what is worth saying is already being said well by others. … the Doomsday Argument … [gives] a certain check on futurian optimism.  …

Had it not been for quantum computing, I’d probably still be doing AI today. … I’d say that human-level AI seemed to me like a slog of many more centuries or millennia. … The one notion I have real trouble with is that the AI-beings of the future would be no more comprehensible to humans than humans are to dogs. …


A point beyond which we could only understand history by playing it in extreme slow motion. … While … [this] kind of singularity is possible, I’m not at all convinced of Kurzweil’s thesis that it’s "near" (where "near" means before 2045, or even 2300).  I see a world that really did change dramatically over the last century, but where progress on many fronts (like transportation and energy) seems to have slowed down rather than sped up; a world quickly approaching its carrying capacity, exhausting its natural resources, ruining its oceans, and supercharging its climate; a world where … millions continue to die for trivial reasons, and democracy isn’t even clearly winning out over despotism; … before we transcend the human condition and upload our brains to computers, a reasonable first step might be to bring the 17th-century Enlightenment to the 98% of the world that still hasn’t gotten the message. …

Scott, we agree on many things.  Yes, hand coded AI seems damn hard and far off, yes AIs would be comprehensible and their arrival need not fix despotism, death, poverty, or ecological collapse, yes you shouldn’t be building AIs, yes doomsday arguments warn of doom, and yes most economists see steady modest growth, not the rapid acceleration Kurzweil touts – he says every tech ever hyped will realize its promise and more in a decade or two.  And yes we might prefer to complete the Enlightenment before new tech disruptions.

But it is not a matter of choosing what we want to happen when; it is a matter of not being blind-sided by what does in fact happen.  My "singularity" concern is AI via whole brain emulation, which seems likely within a half century or so, when it should remain quite relevant.  The neglect isn’t so much too few folks working to make it happen as too little integration of this scenario into future policy visions, including visions of poverty, ecology, population, despotism, etc.  Given its likelihood and potency, this scenario deserves more attention than Medicare trust-fund depletion, Chinese military dominance, or even global warming.   

Yes we are ignorant, but (as I think I’ve personally shown) such ignorance can be substantially reduced by more study, and the main reason almost no good academics study this is that most academics think the subject much too silly.  If you could help to turn this tide of silliness-perception, now that could be of great value.

  • http://www.scottaaronson.com Scott Aaronson

    Thanks for the reply, Robin. I hope I’m doing my small part to “turn the tide of silliness-perception”: I find the topic sufficiently interesting that I felt the need to justify why I haven’t devoted my life to it! 🙂 I also agree that the possibility of whole-brain simulation deserves attention; I simply don’t find it nearly as plausible within the next half-century as you do, and I don’t agree that it deserves more attention than global warming. Admittedly, the best argument I have for the difficulty of the problem is just the “AI Fermi Argument”: if it were that easy, then why wouldn’t there have been more success already? We have loads of computation cycles right now. The limiting factor seems to me to be the immense difficulty of reverse-engineering the brain.

  • http://hanson.gmu.edu Robin Hanson

    Well certainly something people haven’t done yet can’t be really easy, and something folks have been unsuccessfully attempting for N years isn’t likely to succeed in the next N/10 years. The Future of Humanity Institute will have a detailed report on whole brain emulation out soon – I’ll be sure to blog it.

  • Tim Tyler

    Re: most economists see steady modest growth, not the rapid acceleration Kurzweil touts

    Kurzweil seems optimistic for the usual reasons – he sells technology to the public.

    However, he is pessimistic as well – he apparently thinks AI won’t arise until we can simulate the human brain, which he pencils in at 2029, whereas any reasonable assessment has that as one of the slowest – and therefore least likely – paths to AI. Of course, it is no accident that a vision of AI based on human brains seems more palatable (and easier to sell) to many people than a machine takeover does.

    Re: hand coded AI seems damn hard and far off

    It is pretty difficult to tell how hard and far off it is. Many of the parties involved are motivated to claim AI is near (to attract funding) – so many experts on the issue can’t be trusted. The public haven’t got hold of AI yet. It’s hard to say much more than that.

    For example, James Harris Simons made 2.8 billion dollars last year with a computer program – which he won’t talk about. It seems as though he has the funds and motivation to create AI – if he hasn’t done so already.

  • Birgitte

    “yes you shouldn’t be building AIs”

    Where did that suddenly come from? Surely you mean we shouldn’t be building AIs that turn us into paperclips?

  • Tim Tyler

    Re: yes we might prefer to complete the Enlightenment before new tech disruptions

    ISTM that AI is the mechanism of enlightenment. You can’t expect much enlightenment if your brain operates at 20 Hz – and is still a close cousin of a slug’s brain.

  • Aaron

    Tim: I believe they mean the cultural enlightenment, liberalism (in its broad forms), equal rights, and all that, not personal enlightenment.

  • Tim Tyler

    Re: the “AI Fermi Argument”: if it were that easy, then why wouldn’t there have been more success already? We have loads of computation cycles right now.

    IMO, computers today are too slow, feeble and expensive. It costs a few hundred thousand US Dollars to make a human being in the USA – and probably much less in China. You can’t buy brain-equivalent computing hardware for that price – and the operating costs for what you can buy for that are enormous. Cost/performance ratios are of critical importance to many economic applications of intelligence.

    Re: most economists see steady modest growth, not the rapid acceleration Kurzweil touts

    It’s technology that’s growing exponentially, not the GNP. It will take time for technology to displace humans from the economy – since at the moment technology is not very competitive in a critical area: thinking.

  • Nitpicker

    “It’s technology that’s growing exponentially, not the GNP.”
    Umm…the reason we talk about GDP growth as a percentage rather than in absolute terms is because GDP does grow exponentially, with a longer doubling time than integrated circuits on a chip. Compound interest and economic growth are classic examples of exponential functions in calculus textbooks.
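
    To make the doubling-time comparison concrete, here is a minimal sketch in Python (the ~3% GDP growth rate and the roughly two-year Moore’s-law doubling are illustrative assumptions, not figures from this thread):

    ```python
    import math

    def doubling_time(annual_growth_rate):
        """Years to double at a constant annual growth rate."""
        return math.log(2) / math.log(1 + annual_growth_rate)

    # Illustrative assumptions, not figures from the discussion above:
    gdp_growth = 0.03            # ~3% real growth per year
    moore_growth = 2 ** 0.5 - 1  # a doubling roughly every 2 years (~41%/year)

    print(f"GDP doubling time:         ~{doubling_time(gdp_growth):.0f} years")   # ~23 years
    print(f"Moore's-law doubling time: ~{doubling_time(moore_growth):.0f} years")  # ~2 years
    ```

    Both curves are exponential; the disagreement is only over the doubling time.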

  • http://michaelgr.com/ Michael G.R.

    “[…]millions continue to die for trivial reasons, and democracy isn’t even clearly winning out over despotism; … before we transcend the human condition and upload our brains to computers, a reasonable first step might be to bring the 17th-century Enlightenment to the 98% of the world that still hasn’t gotten the message.”

    I find that argument a bit weird.

    Why couldn’t you also say:

    “[…] millions continue to die, etc, before we think about connecting hundreds of millions of computers around the world for rich people to play with, a reasonable first step might be to eradicate poverty/whatever.” In fact, why have anything more than what is absolutely necessary to survive while there is still so much unfairness out there?

    Seems like a false choice.

    If those at the avant-garde had to wait for everybody else to catch up and for the world to be homogenous before taking another step, the world would be a worse place, not better. If you care about suffering, poverty, expanding human knowledge, liberty, etc, you should want to use the best tools for the job. Friendly AGI is the best tool for pretty much any job, IMO. We shouldn’t put all our eggs in the same basket, but right now the situation looks more like putting eggs in all baskets except that one.

  • Gwern

    “Given our current ignorance, there seems to me to be relatively little worth saying about the Singularity – and what is worth saying is already being said well by others. … the Doomsday Argument … [gives] a certain check on futurian optimism. …”

    Well, I suppose that is one way to interpret the Doomsday Argument in a singularity context. The other way of course is the optimistic way: we’ll be replaced by something better, not we’ll just cease to exist.

  • Thanatos Savehn

    Almost 20 years ago I, always an early adopter, went out on a limb and, as the associate member of my firm’s technology committee, pushed hard to buy Ray’s OCR technology so we could start scanning and, thereafter, searching two big clients’ mountains of paper needed in current and future discovery. We did. It didn’t work out so well.

    Ray’s predictions turned out to be waaaaaaay off. Indeed, his predictions about when OCR technology would get to a point where it would be more practical to scan docs than to simply retype them and search them using Boolean operators was off by about 4-fold. In other words, I’m guessing I’ll be dust whenever Ray’s singularity comes about.

    PS I’m actually far more sanguine about advances in biotechnology. The dark ages are passing; I predict we’ll be able to roll back the ravages of time beginning in 10 years. Then again, I’ve always been a believer.

  • http://profile.typekey.com/troped/ banapana

    I don’t see how anyone can make predictions about the future of intelligence (as Kurzweil does) based on hypothetical calculations of how much “processing power” the human brain has. There are very reasonable neurological arguments currently under discussion that would put the processing power of the human brain at factors of thousands more than current estimates. The kinds of predictions that Kurzweil et al makes are really just silly and only get any credibility because the people making them are really smart. Good science isn’t excepted because the people who said it were really smart. Good science gets excepted because it is rigorously tested. Where’s the proof that these predictions are reasonable? Furthermore, an explosion of artificial species doesn’t have to mean they’re intelligent, and intelligence may have nothing to do with processing power anyway.

  • http://profile.typekey.com/troped/ banapana

    And then there are those of us small individuals who can’t even spell “accepted.”

  • sonic

    There is a good reason not to be overly concerned about the ‘singularity’. That reason is that the ‘singularity’ will never occur.
    Of course one could argue with that last statement.
    But very few people want to argue with the notion that polluting the earth’s atmosphere is a bad idea and that we need to solve this problem.
    Why worry about something that is probably impossible when we have so many real problems to deal with?

  • Tim Tyler

    Re: There are very reasonable neurological arguments currently under discussion that would put the processing power of the human brain at factors of thousands more than current estimates.

    A factor of a thousand is ten doublings. Not too much for Moore’s law to sweat over. But where are these “very reasonable neurological arguments” – and what are they actually worth? The existing estimates come from multiple sources – as explained by Kurzweil – and they haven’t changed that much in 20 years.
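
    A quick sanity check on that arithmetic, as a minimal sketch (the 18-month and 24-month doubling periods are the usual Moore’s-law assumptions, not figures from this thread):

    ```python
    import math

    shortfall = 1000                  # claimed underestimate of brain processing power
    doublings = math.log2(shortfall)  # ~10 doublings

    # Assumed Moore's-law doubling periods, in months (illustrative):
    for months in (18, 24):
        years = doublings * months / 12
        print(f"{months}-month doublings: ~{years:.0f} years to close a {shortfall}x gap")
    ```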

  • mjgeddes

    As to Singularity:

    The ‘true believers’ in the idea that Libertarianism is the best political system are also the most enthusiastic supporters of the idea that ‘Bayes is the secret of the universe’. The ‘market’ tries to assign value to everything in purely functional terms. In much the same fashion, ‘Bayes’ tries to assign truth values purely in functional terms (ie external prediction sequences). Major irony there.

  • tim

    >But very few people want to argue with the notion that polluting the earth’s atmosphere is a bad idea and that we need to solve this problem.
    >Why worry about something that is probably impossible when we have so many real problems to deal with?

    In early 1945, how many people were occupied with ending World War II in comparison to how many were occupied with preventing atomic war? WW2 ended that year; the prospect of nuclear annihilation would remain a critical concern for decades to come. Of course, both issues required critical attention, just like obtaining food is a daily concern for me, while paying my rent is a monthly concern, and continuing my education is a yearly concern. However, that shouldn’t relegate my longest-term concerns – like what I’ll be doing in twenty years – to the dustbin.

  • Peter St. Onge

    Robin,

    Curious, are there people you think are doing a particularly good job of “integration of this scenario into future policy visions, including visions of poverty, ecology, population, despotism, etc,” either in fiction or non-fiction?

    Definitely agree on reducing the silliness factor, I guess the question is how to match Eliezer’s “future shock level” to the audience. Hard to do when SL1 media gloms on to SL4 concepts and ridicules them to SL0 readers…

  • http://shagbark.livejournal.com Phil Goetz

    The ‘true believers’ in the idea that Libertarianism is the best political system are also the most enthusiastic supporters of the idea that ‘Bayes is the secret of the universe’. The ‘market’ tries to assign value to everything in purely functional terms. In much the same fashion, ‘Bayes’ tries to assign truth values purely in functional terms (ie external prediction sequences). Major irony there.

    If by ‘irony’ you mean ‘consistency’.

  • http://michaelgr.com/ Michael G.R.

    “I don’t see how anyone can make predictions about the future of intelligence (as Kurzweil does) based on hypothetical calculations of how much “processing power” the human brain has.”

    You have not read Kurzweil’s book, have you?

    Maybe just watching this would be a good start:

    http://mitworld.mit.edu/video/327/

    His line of reasoning with regard to the amount of processing power it takes to emulate a human brain is also based on the spatial and temporal resolution of brain scanning techniques, which are also improving exponentially. He’s saying that at some point we’ll be able to scan a human brain with enough precision to then be able to simulate it in a model running on hardware that is fast enough (with enough memory, etc) to make it work.

    Nobody can know if that’s going to be the way AI will happen, but it’s one of many plausible paths.
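
    As a minimal sketch of that kind of back-of-envelope reasoning (the neuron count, connections per neuron, and signalling rate below are rough orders of magnitude in the spirit of Kurzweil’s estimate, used only for illustration):

    ```python
    # Rough order-of-magnitude estimate of the compute needed for a
    # functional (neuron-level) brain emulation.  All figures are
    # illustrative assumptions, not measurements.
    neurons = 1e11                       # ~100 billion neurons
    connections_per_neuron = 1e3         # ~1,000 synapses per neuron
    events_per_connection_per_sec = 200  # ~200 signalling events per second

    ops_per_second = neurons * connections_per_neuron * events_per_connection_per_sec
    print(f"~{ops_per_second:.0e} operations per second")  # ~2e+16
    ```

    Emulating at a finer, sub-neuron resolution would multiply this figure by several orders of magnitude, which is why the trend in scanning resolution matters to the argument.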

  • mjgeddes

    >If by ‘irony’ you mean ‘consistency’

    No, the irony is that both are equally wrong 😉

    Bostrom’s Oracle’s the way to go, man. Get into Ontology/KR. You only need a good Upper Ontology to get a universal parser. The Semantic Web is evolving; a good Upper Ontology will be commercially valuable soon. Get some money.

  • Tim Tyler

    Re: where are these “very reasonable neurological arguments” – and what are they actually worth?

    I notice that Stuart Hameroff claims the estimates are out by a factor of a trillion – e.g. see: A New Marriage of Brain and Computer.

    However, as far as I can tell, Hameroff is off in Penrose’s la-la land – and appears to have totally lost touch with reality 🙁

  • http://www.hopeanon.typepad.com Hopefully Anonymous

    I pretty much agree with Robin in the OP (I haven’t read all the comments yet). Scott’s argument as quoted here seems suspiciously kitchen-sink to me: he seems to start with an understandable emotional desire not to see Kurzweil’s singularity happen yet, and then throws in a variety of unconnected but emotionally aligned reasons for it not to happen. In contrast I think Robin gets it right.

    I do think history suggests the end of the world, immortality, etc. is not in our lifetime any more than it was in Ponce de León’s or Charles Lindbergh’s lifetime, but there is some serious counterevidence in our time, such as the gap between the predicted time to complete the human genome project and its actual completion.

  • http://www.transhumangoodness.blogspot.com roko

    @hopefully anonymous: yes, that was exactly the impression I got when reading Aaronson’s post. For example, he gives timeframes of “centuries or even millennia” for human-equivalent AI with no justification whatsoever. Scott: if you make specific predictions about the future, you should have a justification for them. Aaronson’s post looks like another example of emotionally motivated, pessimistic, careless futurism, which Eliezer dealt with in his bloggingheads interview with John Horgan.