Singularity PR Dupes?

I’m scheduled to speak at the $500-per-attendee Singularity Summit in New York in early October. “Singularity” is associated with many claims, most of them controversial. The organizers say:

The Singularity represents an “event horizon” in the predictability of human technological development past which present models of the future may cease to give reliable answers, following the creation of strong AI or the enhancement of human intelligence.

(They also list related definitions.)  An awful lot of folks, perhaps even most, consider these ideas silly and/or crazy.  They also say:

The Singularity Summit is the world’s leading dialog on the Singularity, bringing together scientists, technologists, skeptics, and enthusiasts alike.

But looking over their program, I noticed that while many speakers are distinguished, those folks won’t directly address the controversial claims; they will instead talk on their usual topics.  A few will talk on how they are trying to design general machine intelligence, but only Kurzweil, Yudkowsky, and Salamon will speak directly to the main controversial issues, and they will take “pro” sides.  As far as I can tell, only I will take a somewhat con side (explained below), but only on some claims, and only tangentially to my brief talk.

It seems as if the organizers plan to gain credibility for their claims by having credible people speak at an event where some speakers make such claims, even if those credible speakers do not address those claims.  Such organizers even expect to gain credit for promoting a “dialog.”  How common is this strategy?  How effective?  How fair?  How much does agreeing to speak at such an event make it seem that you agree with its theme claims?   How many of the summit’s distinguished speakers do agree with those claims?

Those who followed my debate here at OB with Eliezer Yudkowsky last year (e.g., here, here) will be familiar with all this, but let me review.  Here are some of the more controversial claims associated with “singularity”:

  1. Progress is accelerating rapidly across a wide range of techs.
  2. Smarter than human machines are likely in a few decades.
  3. Such machines will induce dramatic and rapid social change.
  4. This change is impossible to foresee; don’t even try.
  5. A single localized super-smart machine or a cabal of them is likely to take over everything.
  6. That cabal’s values determine everything, but via self-modification could become anything.
  7. So everything depends on finding a way to give such machines stable values we like.
  8. No one should try to make super smart machines before knowing how to do this.

I disagree with many but hardly all of these:

  • No, overall neither econ nor tech progress is much accelerating lately.
  • Yes, smarter than human machines are likely in roughly a half century to a century or two, but most likely because whole brain emulations will first induce an important era of near human level machines.
  • Yes, this em era will bring huge rapid social changes, but we can and should use social science to foresee these changes.
  • Yes, this em era may well end via super smart machines, and yes it is hard to constrain the values of the distant future, but a single local machine or cabal taking over everything and then immediately evolving out of value control seems extremely unlikely.  It runs counter to most of our econ and tech innovation experience, and the theories we use to make sense of that experience.
  • Yes, a few powerful-enough mind-design insights could conceivably allow one brash team to leap this far ahead of the world, and some folks should think about how to give machines stable values we like, but most futurists should focus on more likely scenarios.
  • http://www.capyblanca.com Alexandre Linhares

    As one of the Dr Frankensteins involved in the Cogsci/AI thing, I applaud you if you tell our friends there that they’re pretty much over the top. Just don’t expect much love from the audience. Doug Hofstadter was at one of these at Stanford some years ago, and he threw cold water all over the place (I think the whole thing is on the web somewhere), and was later shocked to see that the audience was just interested in confirming their awe for these silly daydreams.

  • http://www.overcomingbias.com/ Mike

    One thing that would make the Singularity people more credible when speaking about the impact of technology on society is if they included more than one or two people who study that. Instead, the people involved in this seem to be trying to set themselves up as the philosophers & intellectuals of the new age they herald, and we should listen to them because they’ve done some very clever things with computers.

  • http://www.hopeanon.typepad.com Hopefully Anonymous

    I think a lot of these folks should get less attention, and Nick Bostrom and de Grey more attention. We could use more existential-risk-reduction enthusiasts.

  • http://lesswrong.com/ Eliezer Yudkowsky

    But looking over their program, I noticed that while many speakers are distinguished, those folks won’t directly address the controversial claims; they will instead talk on their usual topics.

    This is a little unfair, Robin. Do you have any idea – this is an actual question, come to think – do you have any idea how hard it is to find prestigious speakers who have developed sensible theses on the Intelligence Explosion (recursive self-improvement) or Event Horizon (smarter-than-human intelligence)? Any halfway prestigious speaker who had developed any halfway complex thesis whatsoever on these topics would get an immediate invite to the Summit. But mostly what you find is, “Um… AI is hard… not gonna see it for another five thousand years” or “Accelerating technological change, woo woo!”

    So in preference to paying large amounts of money to have prestigious speakers say “AI is hard, not gonna see it for another five thousand years” or “accelerating change woo woo”, we pay the travel expenses of speakers who genuinely want to say interesting things that seem at least related to the topic.

    Those of you who are familiar with the debate that took place on OB between myself and Robin are invited to name one single debate on the Singularity of one-tenth the complexity that has ever happened anywhere.

    Mostly the questions are ignored. Getting speakers, at least some of them prestigious, to talk about at least related topics, and labeling the resulting event the “Singularity Summit” and giving it a lot of publicity, is part of the process of de-ignoring the question.

    You have written many times on how hard this sort of thing is to do – to get people to pay attention to unusual topics. What different strategy do you think the Singularity Summit realistically could pursue – i.e., not, “Pay Bill Clinton a million dollars to show up and talk about his personally developed thesis on endogenous growth models of machine self-improvement.” If you have a strategy that we should otherwise be pursuing, state it.

    And once again, we have, from the beginning of the Summit, been very neutral in our policy of inviting people who seem to have something to say, whether it seems “for” or “against”. In 2007, if I recall correctly, an actual majority of the speakers were Singularity skeptics – but at least Peter Norvig, Rodney Brooks, et al. had something interesting to say about it. Of course they also mostly focused on the Accelerating Change thesis (Moore’s Law) in their skepticism, but you can’t have everything.

    • http://hanson.gmu.edu Robin Hanson

      Yes, for a topic where few prestigious folks have thought much, if you set as a goal getting publicity by getting prestigious folks to speak at your event, you will do better asking them to speak on vaguely related topics, relative to speaking directly to the topic. But if you really wanted a “leading dialog” on the topic, you’d sacrifice speaker prestige and require talks directly on the topic. And if you wanted to make the most intellectual progress in that dialog, you’d pick the folks who could talk most intelligently about it no matter how prestigious they were. (And if you thought me qualified, you might have me speak directly, rather than fitting it in tangentially.) Perhaps you could take out the “dialog” claim?

      • Carl Shulman

        Looking at the talks on the Summit websites from the past four years, I think it’s fair to say that the series of Summits is the leading conference for dialog on the topic, i.e. there is more and better dialog on the topic than at any other conference. Which conference would you say is ahead of the Singularity Summit in this respect? TED (Kurzweil, Susan Blackmore, maybe a few others have spoken on the topic)? SciFoo?

      • http://lesswrong.com/ Eliezer Yudkowsky

        But if you really wanted a “leading dialog” on the topic, you’d sacrifice speaker prestige and require talks directly on the topic.

        Why? These are big complicated issues that can be discussed in all sorts of venues other than 30-minute talks in expensive conferences. If the top priority was getting advanced original work done, maybe you would hold an informal workshop with carefully selected invitees. But that’s not the top priority.

        This all seems a bit idealistic for you, Robin. Hold a conference with unrecognized speakers and let it drop into the void? What for, besides making some kind of statement about idealism?

      • http://lesswrong.com/ Eliezer Yudkowsky

        (That is, not the top priority of the Singularity Summit.)

      • http://hanson.gmu.edu Robin Hanson

        Carl, as I told Anna, I was commenting on this particular event. Prior events with similar names may well have involved more dialog with skeptics.

        Eliezer, I did say “if.” Creating a dialog, especially with skeptics, may well not be your top priority. But if the appearance of a dialog even when there is not one is also not a top priority, why not drop the “dialog with skeptics” language from the summit website?

      • http://shagbark.livejournal.com Phil Goetz

        I agree with Eliezer – I would have thought that you would advise the Singularity Institute to gain acceptance of their views by affiliating themselves with higher-status scientists in just this way. Or at least admit that it would be a good strategy.

      • Patri Friedman

        As Phil says, I’m surprised you aren’t applauding them for their deft use of signaling in getting high-status affiliates. Obviously one goal for the Summit is to be an interesting dialog, but another is to increase the status of this area of research, and direct more funding and attention to it.

        I can see why you might feel their claim about being a “leading dialog” was slightly dishonest and be piqued by that, but do keep in mind that their strategy of increasing the status of futurism helps your career and research interests. They are contributing to an unusual public good that benefits you – shouldn’t you mix some appreciation in with your criticism?

  • Anna Salamon

    There is some truth here regarding credibility-by-association, in that prestigious speakers draw increased interest to the Summit, and confer increased status on the topic of potential future sophisticated digital intelligence (AI/brain emulations), even if they talk about something else. I would hope that this will help to reduce the aura of ‘silliness’ (of the arbitrary sort you often bemoan) and enable open, sincere engagement with the subject. One piece of evidence in support of that hope is the recent AAAI Panel on Long-Term Futures of AI, which cited Kurzweil and the Singularity Institute in opening remarks explaining why it had been convened to bring serious academic consideration to the subject.

    On the other hand, there are a number of misleading claims here. For one thing, the three speakers you call “pro” (myself, Eliezer, and Kurzweil) are far from unanimously agreeing with your platform #1-8 of controversial claims. Kurzweil supports #1, but I do not, the conference organizer Michael Vassar does not, and Eliezer gave a talk at a previous Summit distancing his position from #1 (and #4). Kurzweil’s scenarios involve #2, but I would say such developments by 2040 are less likely than not, although likely enough to matter greatly in expected value terms, and Eliezer has not strongly emphasized any claims about AI development timelines. It’s absurd to include #4 in a “pro” platform describing the three mentioned speakers: I reject it, Eliezer has actively argued against it, and Kurzweil makes assorted claims about what will follow the development of super-smart machines. Kurzweil does not endorse #5-#8, and has argued against them. With sufficient adjustments claims #5-8 could probably be made to characterize my position, and probably Eliezer’s, but that would not mean unanimity.

    You suggest that there are no speakers critical of points in your list, but in fact there are several. Kurzweil will be critiquing #5-8, while Kurzweil’s trend-projection approach will be critiqued by tech forecasting speaker Bela Nagy. Anders Sandberg will lay out the brain emulation path to advanced digital intelligence, an alternative to the ‘powerful AI with potentially non-human values first’ possibility and thus a counter to #5-#8. Other speakers are likely to raise other criticisms.

    Looking at the overall history of the conference (this is the 4th, and there is time for only a limited number of speakers in each, with the mix in a particular year depending on speaker availability and other stochastic factors), a number of other speakers have evaluated some of the above eight points critically. Rodney Brooks said that he expected the eventual development of powerful AI, but little localization of influence (because of enhanced human capabilities and incremental development), with Peter Norvig making related comments. Doug Hofstadter’s speech was already noted in a comment above. James Hughes noted cognitive biases that might undermine the credibility of folks advancing claims from your list. John Horgan argued that “the Singularity is not near.” Marshall Brain and Bill McKibben raised critical points about the desirability of such technological developments, points that are relevant to a significant debate (even though it is not among your eight items). Others raised still other criticisms.

    • http://hanson.gmu.edu Robin Hanson

      I didn’t mean to imply that the eight claims were a package where all pro folk agreed with every claim. Yes, pro-singularity speakers may criticize each other’s varying singularity concepts, but that hardly constitutes a “dialog” between “skeptics and enthusiasts.” I agree there have been more critics in prior events; my post was about this event.

  • http://zbooks.blogspot.com Zubon

    Is the singleton claim common? I see it a lot in fiction, where having a single antagonist/messiah/combination is favored. I do not recall seeing it as much in serious claims. (“As much,” not “at all.”)

  • http://don.geddis.org/ Don Geddis

    #2, that AI will be solved “in a few decades”, seems like a red herring. While some singularity enthusiasts predict that, it’s hardly a mainstream view. I wonder how your objections change if you remove that single item. My guess is that there are a lot of folks who would generally agree with (some form of) all the claims except #2.

    If one thinks that it will all happen as you say (fear?), only it might take centuries or millennia, do your concerns change? Is it now simply that it’s premature to worry about it?

  • Robert Koslover

    $500 per attendee? Well, I certainly appreciate the fact that I do not have to shell out $500 for my daily dose of Overcoming Bias!

  • http://www.hopeanon.typepad.com Hopefully Anonymous

    Robin,
    Name some people you’d like to see there?
    I’d like to see Bostrom chairing a contentious panel, and de Grey as well, but this seems like a fine conference to me, although my interests are a bit skewed from the subject matter.

  • michael vassar

    I only disagree with Robin on the point about local vs non-local expansion. I’m not honestly convinced that he actually holds his supposed position in all seriousness, however. His arguments for it seem literally incredibly bad, so I tend not to credit them as representing his actual opinion. Rather, I think that he favors his arguments as discursive ground rules because he sees them as a necessary foundation for discussion one step higher in quality than he sees promoted by the academic culture he is most concerned with.

    This seems analogous, to me, to Bryan Caplan’s advocacy of free will. Caplan is simply too smart to be superstitious about free will. For instance, I don’t believe that he and I would differ much in the bets we would make regarding the speed at which two molecularly identical rooms full of people would diverge in their behavior. One doesn’t need superstition to think that on certain margins discourse will be improved by setting the assumption of “free will” as a ground rule.

    I will note that the culture of economics is one of accepting MASSIVELY simplified assumptions that are known to be not only false, but not even a sort-of-decent first approximation to the truth, such as rationality and complete information symmetry. I don’t disapprove. Arguments from such assumptions have been extremely fruitful and greatly inform my thinking, but we are talking about something that resembles modeling the Sun with the ideal gas law.

    Beyond that, Eliezer is completely seconded by me here, except that I have personally been in many more complicated debates on the Singularity than that between Eliezer and Robin, as have Eliezer, Carl and Anna.

    Finally, we ARE having an informal workshop with carefully selected invitees, not literally optimally selected invitees, but my life is more or less that, subject to status-related constraints.

    As for skeptics at this Summit, hold a poll at the workshop, Robin. If I have magically identified a couple dozen new Singularitarians I will be greatly pleased. Certainly I expect many or most speakers to share their views on at least some of the points in the list. I don’t expect anyone to disagree with 3, and wouldn’t really be interested in engaging with people who didn’t agree with it, but I don’t expect the typical speaker to agree with a majority of the other points, especially 2, which bears heavily on 3’s relevance.

    • http://hanson.gmu.edu Robin Hanson

      Michael, Caplan surely does believe in free will, and I really am skeptical about locality, and think econ assumptions are typically good enough approximations for a truth-orienting purpose. Perhaps if you posted or otherwise published your analyses on these issues we could respond to them. I don’t see the purpose of a summit survey; my complaint wasn’t about private views, but about the content of the talks. Yes, the informal workshop might have more enthusiast-skeptic dialog directly on the controversial issues; I can’t tell, as there is no program for it yet.

      • michael vassar

        Do you think Caplan and I differ in anticipations, even across hypothetical domains, because of this supposed difference in beliefs?

        You think that econ assumptions are nearly always good enough, typically good enough, or often good enough to produce valid results? I certainly agree with the latter and said so. I don’t know what “truth orienting” means, so I don’t know if you disagree with me here. I certainly think that econ provides a very valuable tool kit for thinking about society… sufficiently valuable that I’m skeptical that anyone can have a very good idea about how society works without studying economics.

        Skepticism about locality, in the sense of placing a p&lt;1 on it, or a p&lt;.9 even? Hell yeah. p&lt;.5? I’m skeptical. p&lt;.1? I don’t buy it.

        I’d love it if you put more time into developing the program for the workshop. You are on the relevant thread.

    • http://yudkowsky.net/ Eliezer Yudkowsky

      I only disagree with Robin on the point about local vs non-local expansion. I’m not honestly convinced that he actually holds his supposed position in all seriousness however. His arguments for it seem literally incredibly bad, so I tend not to credit them as representing his actual opinion.

      That seems a bit strong. Are we talking about civilization-wide takeoff vs. BIABIAB (Brain in a Box in a Basement) takeoff? It hadn’t occurred to me to question that Robin actually believes in the former, and while his arguments display a very different idea of what sort of data to reason from and what kind of conclusions they can possibly license, such that I would reject the general form of the license as well as doubting the particular conclusions, I would hardly characterize the arguments as “incredibly bad”. Unless you’re just holding Robin to a really, really high standard.

      • Carl Shulman

        Vassar said “literally incredibly bad,” perhaps meaning that he doesn’t credit them? Michael?

      • michael vassar

        Yes, I literally don’t credit them as containing content that denotatively corresponds to Robin’s anticipations and/or to his models of optimal individual or social epistemology. Maybe his models of “best social epistemology likely to be achieved” though, as I said earlier.

        A very large part of the benefit of not moralizing about literal truth and untruth is that one can much more freely call attention to evidence that something is not literal truth and suggest alternative hypotheses without being moralistic about doing so.

        I don’t hold ANYONE to ANY standards. I do think that Robin is really, really smart, and unusually epistemologically sophisticated for his intelligence. It is therefore not credible that he believes, narrowly in this place but not elsewhere, arguments of the form he is using.

  • http://www.theseedofreason.typepad.com Barnaby Dawson

    You say you’re skeptical of the ‘singularity’ but of the key claim you say:

    “Yes, smarter than human machines are likely in roughly a half century to a century or two, but most likely because whole brain emulations will first induce an important era of near human level machines.”

    I would roughly agree with this assessment (although I would estimate between 30 and 90 years before the first human level intelligence, and longer until they are common). I’m not sure about the path through whole brain emulations (although it seems a likely possibility) but I agree that there will be a significant period of near human level machines.

    What I mean to say is that you don’t seem to be a skeptic of the singularity at all, contrary to what your introduction heavily implies! I agree with your later points pretty well (although I might give differing reasons than you do in some cases) but wouldn’t describe myself as skeptical of the singularity because I agree (after much thought) with this key point.

    Indeed I have several friends who would all say they are not skeptical about the singularity but who completely agree with your points!

    Partly this is because neither they nor I consider the near future (30-50 years) to be predictable (in any detail) in any case (or ever to have been), so the predictability aspect seems weak to us.

    • http://hanson.gmu.edu Robin Hanson

      Yes, since I agree with many key points I am not clearly a skeptic, and might reasonably be lumped on the “pro” side. In which case there are no con-side folks scheduled to speak directly to the controversial issues.

      • Roko

        So your primary complaint is that there won’t be enough people stating falsehoods at SS09? That is, we all agree that smarter than human intelligence is likely this century, but what you want is more people speaking who deny this?

        It strikes me that it would be more useful for you to ask for genuine debate amongst people who have differing extensions of the core facts that we all agree upon.

      • Carl Shulman

        Robin,

        Could you suggest some clearly skeptical folk with interesting points to invite next year?

      • http://yudkowsky.net/ Eliezer Yudkowsky

        Okay, we all know that if you can’t show a better way for an agent to do X, you can’t draw any conclusions about defects in the agent’s motives or reasoning around X.

        So to boil it all down: Name one person SIAI obviously should have invited to SS09 but didn’t.

      • http://www.transhumangoodness.blogspot.com Roko

        I think people should be selected on the basis of smartness and intellectual rigor and the value of their ideas, not on the basis of whether they are “for” or “against” the idea of a smarter than human mind being possible in the next century. And if it happens that all the smart people who have looked into the subject deeply agree that the idea is probably correct, then that is no bad thing.

        Regarding suggestions…. I think that Hutter, Hanson, Yudkowsky and Bostrom are the smartest minds to have asked these questions. Of these we lack only Bostrom this year. I think that a substantive debate between these four would be of very high value.

      • Carl Shulman

        Roko,

        Dan Dennett and Richard Dawkins are both excellent thinkers and skeptical of near-term (next few decades) powerful AI, although they expect it eventually. They were invited to speak for that reason, but were not available.

      • http://www.transhumangoodness.blogspot.com Roko

        > Dan Dennett and Richard Dawkins are both excellent thinkers and skeptical of near-term (next few decades) powerful AI, although they expect it eventually. They were invited to speak for that reason, but were not available.

        This is both a good suggestion – invite Dennett and Dawkins next time – and unfortunate news that they were not able to make it this time.

        I would expect an extended dialogue between Dennett and Dawkins and the other four – Hanson, Bostrom, Hutter, Yudkowsky – to produce some interesting belief shifts.

      • http://hanson.gmu.edu Robin Hanson

        I don’t know who was invited, so I can’t say who should have been invited who wasn’t. But if you recall, my main complaint was that the content of the talks didn’t address the key issues. I’m sure if I had been tasked to find the sharpest folks willing to actually talk to the controversial issues, including both pro and con, I could have found quite a few, including some tenured professors, who fit that bill. But that is work I haven’t done yet, for obvious reasons.

  • Raphfrk

    Is there a link to the debate from last year? (though I guess I could just browse back)

    • http://hanson.gmu.edu Robin Hanson

      I added two links in the post.

  • Lord

    No, there is no sign of acceleration.
    Perhaps smarter in some ways and dumber in many, many other ways.
    Yes, but things will be much different even without them.
    Logically internally self-contradictory.
    Only if it has the desire to do so.
    Such machines wouldn’t exist without such stable values; they would immediately become inert or self-destruct.
    No one will be capable of doing so without doing this.

  • http://fasri.net Robert Bloomfield

    Here is what I don’t understand about those who advocate a singularity. The very nature of exponential growth is that the rate of growth in percentage terms is constant, and never reaches an asymptote or limit. The growth just continues at a constant rate, forever and ever. While the absolute magnitude of change (innovation) increases over time, I don’t see why this needs to result in qualitative differences in how it affects humanity.

    Doesn’t it mangle a metaphor to suggest that exponential growth somehow reaches an asymptotic level?

    But I guess it is sexier than saying “technology marches on, and it’s hard to predict what will happen, but boy, it sure will be something when AI gets better!”

    • http://www.transhumangoodness.blogspot.com Roko

      The position that you describe is not held by anyone I know of. I would class myself as a singularitarian, and I don’t believe that some generalized, vague notion of progress will increase as a perfect exponential.

      You should take care not to attack a straw man.

      • Carl Shulman

        I believe Kurzweil claims that tech growth is not a smooth exponential at a constant proportional rate of growth, but rather that the rate of growth is itself increasing. He also doesn’t claim (from my recollection of his book) that technology will go to infinity, but rather that it will quickly approach the limits of what we can achieve given natural limitations (no halting oracles, P!=NP, thermodynamic limits, etc.) and then peter out.

    • Z. M. Davis

      Following Roko, the word “Singularity” has been used in many ways. Silly claims by some about exponential growth are logically distinct from more plausible claims by other futurists. See “Three Major Schools” and “The Word ‘Singularity’ Has Lost All Meaning.”

      • http://fasri.net Robert Bloomfield

        Thanks for the link, ZM. My prior post reflected my guess that ‘singularity’ was a misleading choice of terminology. I am a bit surprised that the helpful essay ‘The Singularity Has No Meaning’ didn’t conclude by telling people to stop using the word. Instead, it encourages others to try to determine what people mean when they say it. I don’t think proponents of these forms of futurism are doing themselves any favors by using this term, as it just comes across as overpromoting hype … and now, I understand, reflects a willingness to use terms they can expect will be misunderstood and conflated with other ideas.

      • Z. M. Davis

        I don’t think proponents of these forms of futurism are doing themselves any favors by using this term [Singularity; it] reflects a willingness to use terms they can expect will be misunderstood and conflated with other ideas.

        Well, what do you expect them to do, exactly?—sometimes a term just sticks. Should SIAI rebrand itself as IEIAI and the Singularity Summit as the Intelligence Explosion Expo? And then change names again every time some nontrivial number of people egregiously misapprehends what they’re trying to say? Are you tempted to change your name every time someone confuses you with the other Robert Bloomfield?

      • gwern

        (I can’t seem to reply to Bloomfield, so I’ll just leave this post here.)

        Robert: trying to correct terminology can be easily counterproductive. Look at Richard Stallman; how many people discount him as a crank solely because of his efforts to get people to make a legitimate distinction like Linux vs. GNU/Linux (or ‘intellectual property’ or…)? Quite a few. And his attempts have helped him or the FLOSS movement not in the slightest.

    • http://timtyler.org/ Tim Tyler

      Proponents typically claim SUPER-exponential growth.
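
      For concreteness, here is a rough sketch of the distinction (an illustration only, not any particular proponent’s own formulation): plain exponential growth keeps the proportional growth rate fixed, while a super-exponential trend has that rate itself rising over time.

      \[ x(t) = x_0 e^{rt} \ (\text{constant } r) \qquad \text{vs.} \qquad x(t) = x_0 \exp\!\Big(\textstyle\int_0^t r(s)\,ds\Big), \ r(t) \text{ increasing, e.g. } r(t) = r_0 e^{kt}. \]

      On the first trajectory the percentage growth per year never changes; on the second it keeps climbing (with r(t) = r_0 e^{kt} the level grows doubly exponentially), which is the stronger claim at issue in this thread.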

  • http://www.computerproblemssolvedcheap.com Richard Steven Hack

    The problem with this idiot notion of “stable values we like” is that humans have no useful stable values.

    Frankly, I’m utterly unconcerned about whether an advanced intelligence (whether Transhuman or AI) is in agreement with human values.

    You people are in for a serious surprise when such an intelligence is developed. It’s not going to have many, if any, human values – other than survival, which is the only value of significance to any truly sentient entity.

    • http://www.transhumangoodness.blogspot.com Roko

      “Frankly, I’m utterly unconcerned about whether an advanced intelligence (whether Transhuman or AI) is in agreement with human values.”

      Does this include not caring about the AI killing you?

  • http://fs.pkheavy.com Zach

    I can’t believe they get off charging so much for an event.

    • Joseph Knecht

      Yes, I agree. I attended the first 2 summits, and won’t be going to any more now that they are priced so expensively. The 1st one in 2006 was free, and in the 2nd year, it was just $25/day. The 3rd one jumped to $500/day ($350 if booked early enough), which makes $275/day for 2009 seem more reasonably priced, but it’s still too expensive.

      • http://shagbark.livejournal.com Phil Goetz

        The first and second were heavily subsidized. You’re complaining that something is too expensive because now you’re being asked to pay for it.

        If you think you can run it more cheaply, you could volunteer to be on the organizing committee for the next one.

      • michael vassar

        Can you please suggest any other example of an all-day event held in a theater in New York City that costs under $249/day (tax deductible) for the most expensive tickets and under $160/day (still tax deductible) for the cheapest readily available non-student tickets? Nothing comes to mind for me.

      • Doug S.

        Well… the New York Anime Festival, held at the Javits Center, costs $60 for a weekend.

        I suspect they get money by renting space to dealers, though.

    • http://www.hopeanon.typepad.com Hopefully Anonymous

      Ha! Reminds me of Robin’s “New Better Game Theory” post.

  • http://www.hopeanon.typepad.com Hopefully Anonymous

    “and some folks should think about how to give machines stable values we like”

    It occurs to me that the United Nations may have the most legitimacy in terms of setting policy regarding this claim.
    How deferent are the thought leaders on this position to that idea?

    • http://lesswrong.com/ Eliezer Yudkowsky

      BWA HA HA HA HA HA *cough* *hack* HA HA HA

    • mycroft65536

      I think what Eliezer means is that what matters is that the machine’s stable values have to be accurate, for a much broader reading of the word “value” than is used in normal society. If the “legitimate” answer is wrong then the whole world goes to paperclips in a handbasket. The “optimization process” the UN would use to solve this problem wouldn’t even understand the question. They’d worry about the AI giving each country its due while it turned the earth into :-) faces.

      • http://www.hopeanon.typepad.com Hopefully Anonymous

        Well, there are probably reasonable thinkers who are skeptical of your assessment of a United Nations-based approach to the idea expressed in: “and some folks should think about how to give machines stable values we like”.

        That could be a useful addition to this or future conferences.

      • http://www.transhumangoodness.blogspot.com Roko

        There is a secret about the world that I have learned in the last 3 years that Hopefully Anonymous seems to be blissfully unaware of. The secret is that the world is FULL of BULLSHIT. The UN is a special case of this; those who work there get nice cushy jobs and good salaries, and emit platitudes about all the nations coming together and cooperating. In reality, the UN is a place for nations to haggle with each other, where what each country gets out is proportional to how much military and economic power it already has.

        Like 99.999999% of the human race, the people there would fail to realize that FAI is a problem whose successful solution is necessary for their continued existence.

    • http://rhollerith.com/blog Richard Hollerith

      In tricky situations involving science and human values, no organization has greater legitimacy than Starfleet. Remarkably, both Starfleet and the United Nations have close ties to the city of San Francisco, which also has a plentiful supply of prospective consultants in the form of psychics, spiritual healers, and other highly evolved beings.

      • http://www.transhumangoodness.blogspot.com Roko

        Hehe…

        From the depictions in the films, even Starfleet would fumble FAI. They don’t exactly exude a thorough understanding of moral projectivism and human cognitive bias.

      • http://rhollerith.com/blog Richard Hollerith

        (Let me stop being sarcastic just long enough to assure everyone that I know Starfleet would fumble FAI.)

        But I see no choice but to hope that a collaboration between Starfleet and the U.N. will save us. What other organizations have enough thought leaders with the right credentials and policy experience? What other organizations have the necessary impressiveness?

      • http://rhollerith.com/contact-richard-hollerith/ Richard Hollerith

        Hopefully Anonymous: although I still maintain that encouraging the U.N. to get involved is a terrible idea, I now wish I had not resorted to sarcasm. Please accept my apologies for any injury to your dignity.

    • Carl Shulman

      Nick Bostrom has things to say along these lines, as do I.

    • Carl Shulman

      Roko,

      Your explicit assumptions about HA are wrong; he is not clueless re BS.

  • http://www.vetta.org Shane Legg

    To the best of my knowledge both Schmidhuber and Hutter, who are both speaking at the conference this year, would agree with at least items 1, 2 and 3 in your list. To me these seem like the key points needed in order to say that one believes in the technological singularity in a reasonably strong sense. Nevertheless, I think that in general you raise an interesting point about the speakers. Indeed, while watching SS08 it also occurred to me that a number of the speakers didn’t appear to buy into the singularity concept in any significant way. That’s fine: people with different views should be there, but they should be there to express these views rather than just talk about their usual work in robotics or whatever. I suspect the deeper problem, however, was that the average amount of time that people in the audience had spent thinking about the singularity, either rightly or wrongly, easily exceeded that of some of the speakers. That’s a pretty strange situation to be in.

    The problem, as we all know, is that most singularity people are what you might call “layperson enthusiasts”, rather than people that society has approved as being “serious thinkers” by conferring a professorship or what have you. Thus, if you want a serious conference with serious people talking, you’re in a bit of a bind. The good news is that there is a small but growing group of people like myself who have a significant interest in the singularity and who are doing or have done PhDs, have research publications in serious places, have worked in prestigious institutions with well known academics, and so on. We don’t yet have enough mana to bring to the conference, but over the next decade some of us will. Until then, the serious singularity discussion is going to happen outside of the main conference talks. For myself, last year the conference talks were generally not very interesting, as it was mostly old stuff I’d seen before; it was talking to the other attendees that really made the conference for me.

    • http://www.transhumangoodness.blogspot.com Roko

      Yes, it does seem important to foster a growing research community. But that community should be doing the task of solidifying understanding that the problem is a hard one, rather than working on building AGI – which is what we have at the moment.

  • michael vassar

    “I’m sure if I had been tasked to find the sharpest folks willing to actually talk to the controversial issues, including both pro and con, I could have found quite a few, including some tenured professors, who fit that bill.”

    Please try to, Robin. I’ll be happy to see your list of interested speakers next year. Until then, I’ll take Eliezer’s 2:43 comment as the final word on this subject.

    Thanks Shane, for what seem to me to be important insights. Ironically, I don’t agree with claims 1 and 2, so I would be unlikely to see those claims as necessary for the ‘Singularitarian’ designation, though they might be sufficient.

    • http://www.vetta.org Shane Legg

      On a slightly different topic, I think the comments in this thread neatly dispose of the fiction that people interested in the technological singularity have fallen into some standard unquestioned dogma regarding the central issues. In the discussion above we can already see a significant range of opinions about the basics, including from some quite well known figures in the community.

      Even if the technological singularity turns out to be hogwash, recent accusations that characterise us as an unthinking uncritical group (a “cult”) are plain factually wrong. Quite the reverse, so much so that it’s rather amazing that the community generally manages to hang together!

    • http://hanson.gmu.edu Robin Hanson

      Even if I thought I could do someone’s job better than they do, I wouldn’t actually do their job if they, not I, were still going to get paid for the job being done.

      • michael vassar

        I expect to draw the same salary whether the Summit has more or fewer thoughtful academic critics, but you and I would both like there to be more. I don’t think they exist, at least, not in substantial numbers, if the sort of people who make up the Summit don’t count as critics.

  • http://hanson.gmu.edu Robin Hanson

    Of the 66 comments so far on this post, none yet address the questions I asked. I thought those were interesting basic issues in intellectual etiquette.

    • http://www.vetta.org Shane Legg

      You put forward a pointed criticism of an organisation, and thus implicitly the people who run it, and then ask some questions… of course people are not going to be focused on your questions! Did you really expect otherwise?

      And besides, each of your questions is worthy of at least a sizeable blog post and discussion. If you want to debate these things, pick one of them and write a few pages outlining your case.

    • michael vassar

      Fine, Robin, answers? Dialog is a common buzzword, but I think that to an unusual degree we actually mean its content, though more regarding the workshop than the Summit proper. How effective? Gaining credibility via association with high status people? I’d say it’s necessary but not sufficient to make an issue mainstream. How ‘fair’? Robin Hanson is asking? If you were Steve Rayhawk I would hear “how epistemically damaging, relative to X, both on the margin and inframarginally” and might give it some thought, but I have no idea what you mean by “fair”. Agreeing to speak at such an event implies little or no agreement regarding claims, but does imply, very strongly, the assertion that the organizers have enough status to be entitled to make such claims and be engaged with by people of your level of status. I don’t know how many speakers agree with each claim associated with the Singularity. I wouldn’t break the claims down the way you did and don’t take some of them very seriously at all. It’s also not my job to speak for them, especially having set up a forum for them to speak for themselves.

  • UchicagoMan

    Personally, these claims seem far fetched (at least in the next 100-200 years). The reason, in my mind, is that even if technology is accelerating at an exponential rate (in some sense, whatever that is), this doesn’t necessarily mean the problems being tackled aren’t also going to become exponentially more difficult to solve.

    I mean, even if we could simulate the brain properly, which we barely understand at this point, we would need to sift through all the information associated with it. Efficiency of information processing at every scale and level is essential (our brains are amazing in this regard).

    This is going to be a herculean task. Certainly technology and computers can do wonders now and I believe they will continue to integrate with daily human activity and help promote rapid exchange, analysis, and creation of information.

    However, creating and understanding “intelligent” entities is a different beast all together. It’s not going to pop into existence on its own. The equations aren’t going to write themselves. Someone (or something I suppose) needs to do some error checking.

    And now that I think of it, I can foresee us generating “quasi-intelligent” machines which may mimic intelligent behavior, especially in terms of retrieving/processing information for human users (think of some hyper-advanced context-relevant Google), or facsimile-type androids which have emotion-like responses, etc. But these will essentially be high-tech toys.

    I don’t think we could ask it “solve quantum gravity” or “create a being more intelligent than yourself” and expect much.

    The breakthrough will be managing to link this ocean of organized information directly with the human brain (cybernetics). Once humans become expert at, or are trained to make use of, a gigantic amount of extra information and on-the-fly analysis, they could guide themselves to deeper discovery. But this will not happen overnight.

    Like I mentioned earlier, we don’t even understand our own intelligence and brain, and then we would need to engineer our cybernetic interlink to our brains properly without frying it!

    Lots of HARD WORK ahead, just like anything else meaningful in this world.

    Cheers.

    • http://markbahner.typepad.com Mark Bahner

      I don’t think we could ask it “solve quantum gravity” or “create a being more intelligent than yourself” and expect much.

      And how much can you expect if you asked humans to do the same thing? Humans have had 200,000+ years to do those things. So far, nada. I’m not impressed. ;-)

  • http://timtyler.org/ Tim Tyler

    Re: overall neither econ nor tech progress is much accelerating lately.

    I don’t really see how anyone can argue that the things Kurzweil actually claims are accelerating are not actually accelerating.

  • http://timtyler.org/ Tim Tyler

    Re: “Should SIAI rebrand itself as IEIAI and the Singularity Summit as the Intelligence Explosion Expo?”

    Better not to have gotten into the whole mess in the first place – but essentially, yes: perpetuating the dopey “Singularity” terminology is a bad thing to be doing – and I think that all those involved should cease at the earliest opportunity.
