Singularity PR Dupes?

I'm to speak at a $500-per-attendee Singularity Summit in New York in early October. "Singularity" is associated with many claims, but most are controversial. They say:

"The Singularity represents an 'event horizon' in the predictability of human technological development past which present models of the future may cease to give reliable answers, following the creation of strong AI or the enhancement of human intelligence."
As Phil says, I'm surprised you aren't applauding them for their deft use of signaling in getting high-status affiliates. Obviously one goal for the Summit is to be an interesting dialog, but another is to increase the status of this area of research and direct more funding and attention to it.
I can see why you might feel their claim about being a "leading dialog" was slightly dishonest and be piqued by that, but do keep in mind that their strategy of increasing the status of futurism helps your career and research interests. They are contributing to an unusual public good that benefits you - shouldn't you mix some appreciation in with your criticism?
Re: "Should SIAI rebrand itself as IEIAI and the Singularity Summit as the Intelligence Explosion Expo?"
Better not to have gotten into the whole mess in the first place - but essentially, yes: perpetuating the dopey "Singularity" terminology is a bad thing to be doing - and I think that all those involved should cease at the earliest opportunity.
Proponents typically claim SUPER-exponential growth.
Re: overall neither econ nor tech progress is much accelerating lately.
I don't really see how anyone can argue that the things Kurzweil actually claims are accelerating are not actually accelerating.
(I can't seem to reply to Bloomfield, so I'll just leave this post here.)
Robert: trying to correct terminology can easily be counterproductive. Look at Richard Stallman; how many people discount him as a crank solely because of his efforts to get people to make a legitimate distinction like Linux vs. GNU/Linux (or 'intellectual property' or...)? Quite a few. And his attempts have not helped him or the FLOSS movement in the slightest.
I don’t think we could ask it “solve quantum gravity” or “create a being more intelligent than yourself” and expect much.
And how much could you expect if you asked humans to do the same thing? Humans have had 200,000+ years to do those things. So far, nada. I'm not impressed. ;-)
Fine Robin, answers? Dialog is a common buzzword, but I think that to an unusual degree we actually mean its content, though more regarding the workshop than the Summit proper.
How effective? Gaining credibility via association with high-status people? I'd say it's necessary but not sufficient to make an issue mainstream. How 'fair'? Robin Hanson is asking? If you were Steve Rayhawk I would hear "how epistemically damaging, relative to X, both on the margin and inframarginally" and might give it some thought, but I have no idea what you mean by "fair".
Agreeing to speak at such an event implies little or no agreement regarding the claims, but it does imply, very strongly, the assertion that the organizers have enough status to be entitled to make such claims and be engaged with by people of your level of status. I don't know how many speakers agree with each claim associated with the Singularity. I wouldn't break the claims down the way you did, and I don't take some of them very seriously at all. It's also not my job to speak for them, especially having set up a forum for them to speak for themselves.
Personally, these claims seem far-fetched (at least for the next 100-200 years). The reason, in my mind, is that even if technology is accelerating at an exponential rate (in some sense, whatever that is), this doesn't necessarily mean the problems being tackled won't also become exponentially more difficult to solve.
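To put that in concrete terms, here is a minimal toy sketch (the growth rates are my own assumptions, not anything claimed in the thread): if capability grows like e^(a*t) while problem difficulty grows like e^(b*t), then net progress goes as e^((a-b)*t), which stays flat whenever the two rates roughly match.

    # Toy illustration only: exponentially improving capability chasing
    # exponentially harder problems. The rates a and b are assumed values.
    import math

    a = 0.05  # assumed yearly growth rate of technological capability
    b = 0.05  # assumed yearly growth rate of problem difficulty

    for t in (0, 50, 100, 200):
        capability = math.exp(a * t)
        difficulty = math.exp(b * t)
        # Net progress is the ratio, i.e. e^((a - b) * t); with a == b it
        # stays at 1.0 forever, despite "exponential" technology growth.
        print(f"year {t}: capability/difficulty = {capability / difficulty:.2f}")

If b turns out even slightly larger than a, the ratio shrinks over time, which is exactly the worry expressed above.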
I mean, even if we could properly simulate the brain, which we barely understand at this point, we would still need to sift through all the information associated with it. Efficiency of information processing at every scale and level is essential (our brains are amazing in this regard).
This is going to be a Herculean task. Certainly technology and computers can do wonders now, and I believe they will continue to integrate with daily human activity and help promote the rapid exchange, analysis, and creation of information.
However, creating and understanding "intelligent" entities is a different beast altogether. It's not going to pop into existence on its own. The equations aren't going to write themselves. Someone (or something, I suppose) needs to do some error checking.
And now that I think of it, I can foresee us generating "quasi-intelligent" machines which may mimic intelligent behavior, especially in terms of retrieving/processing information for human users (think of some hyper-advanced, context-relevant Google), or facsimile-type androids which have emotion-like responses, etc. But these will essentially be high-tech toys.
I don't think we could ask it "solve quantum gravity" or "create a being more intelligent than yourself" and expect much.
The breakthrough will be managing to link this ocean of organized information directly with the human brain (cybernetics). Once humans become adept at, or trained in, making use of a gigantic amount of extra information and on-the-fly analysis, they could guide themselves to deeper discovery. But this will not happen overnight.
As I mentioned earlier, we don't even understand our own intelligence and brain, and then we would need to engineer the cybernetic link to our brains properly without frying them!
Lots of HARD WORK ahead, just like anything else meaningful in this world.
Cheers.
You put forward a pointed criticism of an organisation, and thus implicitly of the people who run it, and then ask some questions... of course people are not going to be focused on your questions! Did you really expect otherwise?
And besides, each of your questions is worthy of at least a sizeable blog post and discussion. If you want to debate these things, pick one of them and write a few pages outlining your case.
Roko,
Your explicit assumptions about HA are wrong; he is not clueless re BS.
Of the 66 comments so far on this post, none yet address the questions I asked. I thought those were interesting basic issues in intellectual etiquette.
Well... the New York Anime Festival, held at the Javits Center, costs $60 for a weekend.
I suspect they get money by renting space to dealers, though.
Hopefully Anonymous: although I still maintain that encouraging the U.N. to get involved is a terrible idea, I now wish I had not resorted to sarcasm. Please accept my apologies for any injury to your dignity.
I expect to draw the same salary whether the Summit has more or fewer thoughtful academic critics, but you and I would both like there to be more. I don't think they exist, at least not in substantial numbers, if the sort of people who make up the Summit don't count as critics.
Even if I thought I could do someone's job better than they do, I wouldn't actually do their job if they, not I, were still going to get paid for the job being done.
On a slightly different topic, I think the comments in this thread neatly dispose of the fiction that people interested in the technological singularity have fallen into some standard unquestioned dogma regarding the central issues. In the discussion above we can already see a significant range of opinions about the basics, including from some quite well-known figures in the community.
Even if the technological singularity turns out to be hogwash, recent accusations that characterise us as an unthinking, uncritical group (a "cult") are plainly factually wrong. Quite the reverse, so much so that it's rather amazing that the community generally manages to hang together!