24 Comments

@hopefully anonymous: yes, that was exactly the impression I got when reading Aaronson's post. For example, he gives timeframes of "centuries or even millennia" for human-equivalent AI with no justification whatsoever. Scott: if you make specific predictions about the future, you should have a justification for them. Aaronson's post looks like another example of emotionally motivated, pessimistic, careless futurism, which Eliezer dealt with in his Bloggingheads interview with John Horgan.


I pretty much agree with Robin in the OP (I haven't read all the comments yet). Scott's argument as quoted here seems suspiciously kitchen-sink to me: he seems to start with an understandable emotional desire not to see Kurzweil's singularity happen yet, and then throws in a variety of unconnected but emotionally aligned reasons for it not to happen. In contrast, I think Robin gets it right.

I do think history suggests the end of the world, immortality, etc. is not in our lifetime any more than it was in Ponce de León's or Charles Lindbergh's lifetime, but there is some serious counterevidence in our time, such as the gap between the predicted time to complete the human genome project and its actual completion.


Re: where are these "very reasonable neurological arguments" - and what are they actually worth?

I notice that Stuart Hameroff claims the estimates are out by a factor of a trillion - e.g. see: A New Marriage of Brain and Computer.

However, as far as I can tell, Hameroff is off in Penrose's la-la land - and appears to have totally lost touch with reality :-(


>If by 'irony' you mean 'consistency'

No, the irony is that both are equally wrong ;)

Bostrom's Oracle is the way to go, man. Get into Ontology/KR. You only need a good Upper Ontology to get a universal parser. The Semantic Web is evolving; a good Upper Ontology will be commercially valuable soon. Get some money.


"I don't see how anyone can make predictions about the future of intelligence (as Kurzweil does) based on hypothetical calculations of how much "processing power" the human brain has."

You have not read Kurzweil's book, have you?

Maybe just watching this would be a good start:

http://mitworld.mit.edu/vid...

His line of reasoning with regard to the amount of processing power it takes to emulate a human brain is also based on the spatial and temporal resolution of brain scanning techniques, which are also improving exponentially. He's saying that at some point we'll be able to scan a human brain with enough precision to then be able to simulate it in a model running on hardware that is fast enough (with enough memory, etc) to make it work.

Nobody can know if that's going to be the way AI will happen, but it's one of many plausible paths.
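The Kurzweil-style hardware extrapolation described above can be sketched in a few lines. The specific numbers below (ops/sec required for emulation, current affordable capacity, doubling time) are illustrative assumptions for the sake of the arithmetic, not Kurzweil's published figures:

```python
import math

# Illustrative Kurzweil-style extrapolation (assumed numbers, not his exact figures):
# if brain emulation needs ~1e16 ops/sec and affordable hardware delivers
# ~1e13 ops/sec today, exponential growth closes the gap in a few doublings.
brain_ops = 1e16          # assumed requirement for whole-brain emulation
current_ops = 1e13        # assumed present-day affordable hardware
doubling_time = 2.0       # assumed years per doubling

gap_doublings = math.log2(brain_ops / current_ops)   # about 10 doublings
years_to_parity = gap_doublings * doubling_time      # about 20 years

print(round(years_to_parity, 1))
```

Note how insensitive the conclusion is to the starting estimate: even if the requirement were a further thousand-fold higher, the timeline only shifts by another ten doublings.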


>The 'true believers' in the idea that Libertarianism is the best political system are also the most enthusiastic supporters of the idea that 'Bayes is the secret of the universe'. The 'market' tries to assign value to everything in purely functional terms. In much the same fashion, 'Bayes' tries to assign truth values purely in functional terms (ie external prediction sequences). Major irony there.

If by 'irony' you mean 'consistency'.


Robin,

Curious, are there people you think are doing a particularly good job of "integration of this scenario into future policy visions, including visions of poverty, ecology, population, despotism, etc," either in fiction or non-fiction?

Definitely agree on reducing the silliness factor, I guess the question is how to match Eliezer's "future shock level" to the audience. Hard to do when SL1 media gloms on to SL4 concepts and ridicules them to SL0 readers...


>But very few people want to argue with the notion that polluting the earth's atmosphere is a bad idea and that we need to solve this problem.

>Why worry about something that is probably impossible when we have so many real problems to deal with?

In early 1945, how many people were occupied with ending World War II in comparison to how many were occupied with preventing atomic war? WW2 ended that year; the prospect of nuclear annihilation would remain a critical concern for decades to come. Of course, both issues required critical attention, just like obtaining food is a daily concern for me, while paying my rent is a monthly concern, and continuing my education is a yearly concern. However, that shouldn't relegate my longest-term concerns - like what I'll be doing in twenty years - to the dustbin.


As to Singularity:

The 'true believers' in the idea that Libertarianism is the best political system are also the most enthusiastic supporters of the idea that 'Bayes is the secret of the universe'. The 'market' tries to assign value to everything in purely functional terms. In much the same fashion, 'Bayes' tries to assign truth values purely in functional terms (ie external prediction sequences). Major irony there.


Re: There are very reasonable neurological arguments currently under discussion that would put the processing power of the human brain at factors of thousands more than current estimates.

A factor of a thousand is ten doublings. Not too much for Moore's law to sweat over. But where are these "very reasonable neurological arguments" - and what are they actually worth? The existing estimates come from multiple sources - as explained by Kurzweil - and they haven't changed that much in 20 years.
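The doublings arithmetic above is simple enough to check directly; the ~2-year doubling time is an assumed figure (historical estimates range from roughly 1.5 to 2.5 years):

```python
import math

# A factor-of-1000 underestimate of the brain's processing power
# translates into log2(1000) ~ 10 extra hardware doublings.
factor = 1000
doublings = math.ceil(math.log2(factor))  # -> 10

# Under an assumed Moore's-law doubling time of ~2 years, that pushes
# the timeline out by roughly 20 years, not centuries.
doubling_time_years = 2
delay_years = doublings * doubling_time_years  # -> 20

print(doublings, delay_years)
```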


There is a good reason not to be overly concerned about the 'singularity'. That reason is that the 'singularity' will never occur. Of course one could argue with that last statement. But very few people want to argue with the notion that polluting the earth's atmosphere is a bad idea and that we need to solve this problem. Why worry about something that is probably impossible when we have so many real problems to deal with?


And then there are those of us small individuals who can't even spell "accepted."


I don't see how anyone can make predictions about the future of intelligence (as Kurzweil does) based on hypothetical calculations of how much "processing power" the human brain has. There are very reasonable neurological arguments currently under discussion that would put the processing power of the human brain at factors of thousands more than current estimates. The kinds of predictions that Kurzweil et al makes are really just silly and only get any credibility because the people making them are really smart. Good science isn't excepted because the people who said it were really smart. Good science gets excepted because it is rigorously tested. Where's the proof that these predictions are reasonable? Furthermore, an explosion of artificial species doesn't have to mean they're intelligent, and intelligence may have nothing to do with processing power anyway.


Almost 20 years ago I, always an early adopter, went out on a limb and as the associate member of my firm's technology committee pushed hard to buy Ray's OCR technology so we could start scanning and, thereafter, searching two big clients' mountains of paper needed in current and future discovery. We did. It didn't work out so well.

Ray's predictions turned out to be waaaaaaay off. Indeed, his predictions about when OCR technology would get to a point where it would be more practical to scan docs than to simply retype them and search them using Boolean operators was off by about 4-fold. In other words, I'm guessing I'll be dust whenever Ray's singularity comes about.

PS I'm actually far more sanguine about advances in biotechnology. The dark ages are passing; I predict we'll be able to roll back the ravages of time beginning in 10 years. Then again, I've always been a believer.


"Given our current ignorance, there seems to me to be relatively little worth saying about the Singularity - and what is worth saying is already being said well by others. ... the Doomsday Argument ... [gives] a certain check on futurian optimism. ..."

Well, I suppose that is one way to interpret the Doomsday Argument in a singularity context. The other way of course is the optimistic way: we'll be replaced by something better, not we'll just cease to exist.
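The pessimistic reading of the Doomsday Argument can be made concrete with Gott-style arithmetic. The sketch below is illustrative only (the past-births figure is a rough conventional estimate, and the argument's sampling assumption is itself contested):

```python
# Gott-style Doomsday Argument sketch (illustrative, not an endorsement):
# assume your birth rank is a uniform random sample from all humans ever born.
past_births = 100e9   # rough conventional estimate of humans born so far
confidence = 0.95

# With probability `confidence`, your birth falls outside the first
# (1 - confidence) fraction of all births, which bounds the total number
# of births N by past_births / (1 - confidence).
max_total_births = past_births / (1 - confidence)
max_future_births = max_total_births - past_births

print(max_total_births, max_future_births)
```

On these assumptions the argument caps future births at roughly 1.9 trillion with 95% confidence - a check on optimism, but, as the comment notes, silent on whether the end of births means extinction or replacement by something better.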


"[...]millions continue to die for trivial reasons, and democracy isn't even clearly winning out over despotism; ... before we transcend the human condition and upload our brains to computers, a reasonable first step might be to bring the 17th-century Enlightenment to the 98% of the world that still hasn't gotten the message."

I find that argument a bit weird.

Why couldn't you also say:

"[...] millions continue to die, etc, before we think about connecting hundreds of millions of computers around the world for rich people to play with, a reasonable first step might be to eradicate poverty/whatever." In fact, why have anything more than what is absolutely necessary to survive while there is still so much unfairness out there?

Seems like a false choice.

If those at the avant-garde had to wait for everybody else to catch up and for the world to be homogenous before taking another step, the world would be a worse place, not better. If you care about suffering, poverty, expanding human knowledge, liberty, etc, you should want to use the best tools for the job. Friendly AGI is the best tool for pretty much any job, IMO. We shouldn't put all our eggs in the same basket, but right now the situation looks more like putting eggs in all baskets except that one.
