
October Open Thread?


Intrade currently offers a contract for

"Barack Obama's Intrade value will increase more than John McCain's following the VP debate"

What information are we supposed to learn from this market?
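One thing such a contract does encode: its quote can be read the usual way, as the market's implied probability of the named event (here, that Obama's contract gains more than McCain's after the debate). A minimal sketch of that reading, with an invented quote (Intrade prices contracts in points from 0 to 100):

```python
# Minimal sketch: reading a binary prediction-market quote as a probability.
# Intrade quotes contracts in points from 0 to 100; "yes" settles at 100,
# so a quote is conventionally read as an implied probability.
# The quote below is invented for illustration, not a real Intrade price.

def implied_probability(points: float) -> float:
    """Convert a 0-100 point quote into an implied probability."""
    return points / 100.0

quote = 62.0  # hypothetical quote for the contract above
print(f"Implied P(Obama's contract gains more than McCain's) "
      f"= {implied_probability(quote):.0%}")
```

Of course, that only tells you about expected relative price moves, which is itself a function of other market prices, so the question of what we actually learn stands.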


jsalvati: If you are asking about SIAI vs. GiveWell, we should almost certainly talk. Try my email: Michael no underscore Aruna at Y ahoo dot com


Nick, what do you mean? Seeing only bad outcomes from jumping into that pattern is rationalization: if an AI can find the best option, that option will be far better than anything we can come up with. Developing AI for the sake of AI is far from the core issue. If that is what we really want, a Friendly AI will see it; but just playing with minds in general is clearly not a nice thing to do. It is not worth destroying the world.


Vladimir, he may simply be disappointed with the apparent choice between two crappy options.

Dmitriy, wanting to develop it is point enough.


Dmitriy, why is that a problem then? If the overall outcome is bad, it's a bad idea to bring about that outcome.


Dmitriy,

It's a good thing that X doesn't get done, then. Where is the problem in that?


I do not know if this is quite a discussion topic, but it seemed worth noting here -- while I have no problem accessing stock market information at my workplace (etrade.com, etc.), Intrade is blocked by our proxy software under the category of gambling. I do rather wonder why a prediction market is considered gambling but the stock market is not.


I think this new study is very important.

News article here.

Original paper here.

A blog post describing the most important finding, glossed over in the news report.

The study is on political false beliefs and how they can be changed.

Short answer: Presenting people with evidence that contradicted a false belief made people more certain of that belief... but only for people who identified as conservatives. Furthermore, whether the source of the correction was given as The New York Times or Fox News didn't matter.

(Proof that conservatives are more irrational than liberals?)


@ Stefan King

Thanks for your discussion of Snooks - it has refreshed my memory of reading his Collapse book. Unfortunately, I think many people are not quite ready to accept dynamic strategy theory just yet - and it always amuses me how closely neo-Darwinism is protected by many in the scientific community, as if any attempt to overturn it on a scientific basis would open the gates to the Creationists. In my view such an approach is anti-scientific, but it speaks more to human nature, I suppose. A classic example is Dawkins' insistence that there is no scientific alternative to Darwinian theory - which is false, considering the body of Snooks' work, Collapse in particular.

@ Allan Crossman

Re: I'm only interested in getting to the bottom of why it's wrong. I think the odds of me coming to accept Snooks' views are under 1%. Anyway, I think Eliezer is telling us to shut up about Snooks.

A laudable attitude for a forum about overcoming bias!


Vladimir: that's exactly the problem as I see it: if Y is an unavoidable negative consequence of X, X doesn't get done.

Nick: While exercising your brain seems like a good thing to do, if there is no practical use for a well-developed mind, there is really no point in developing it. Humanity abandoned a lot of abilities in the course of evolution; I wonder if a technological singularity would make intelligence an obsolete survival trait.


So, let's say that at that point you can come up to the AI, say what you want, and get it.

Following Vladimir's point, the AI need not do this if your volition wouldn't want it to.

Would we by any chance destroy the main stimulus to discover and invent things? I mean, would you really want to study physics or math if you cannot possibly come up with a single original thought? Is there a single field of inquiry left to the human race that allows for actual originality after an SI takes a swing at solving every possible problem you can come up with?

"If you don't know, it's a mystery."


Dmitriy,

Your concerns have a pattern: if an FAI (Friendly AI) does X, it's going to have a negative side effect Y, such that X+Y is worse than doing nothing. If you allow the FAI to actually notice Y before doing X, this won't happen.
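To make that pattern concrete, here is a toy version of the decision rule, with invented utility numbers: the agent acts only when the total outcome, side effects included, beats doing nothing.

```python
# Toy version of the decision rule above: take action X only if the
# total outcome, including every side effect Y, beats doing nothing.
# All utility numbers here are invented for illustration.

def should_act(utility_of_x: float, side_effects: list[float],
               utility_of_inaction: float = 0.0) -> bool:
    """Act only when X plus all its side effects beats inaction."""
    return utility_of_x + sum(side_effects) > utility_of_inaction

# X looks good in isolation (+10) but carries a catastrophic Y (-100),
# so an agent that notices Y before acting refrains:
print(should_act(10.0, [-100.0]))  # False
```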


I have been reading Eliezer's posts about friendliness and the source of morality, and the question I came to ask myself is: "Do we, as humanity, actually WANT to create a superhuman intelligence, whether friendly or not?" It is fairly obvious why you wouldn't want an unfriendly SI. Even if you somehow manage to contain it (and I don't see how one can do that), you cannot use anything it makes anyway. But it is somehow non-obvious to me why you would want a friendly SI.

Is there actually a project that a truly friendly SI (as described in the SIAI guidelines) can engage in? I do not see how any sort of major positive change, such as the end of world hunger or of dependency on natural fuels, can be accomplished without much economic and social unrest, resulting in temporary but very major unhappiness. An SI that would be OK with temporary unhappiness is obviously a bad thing (think a thousand years of an Orwellian regime for the better future of humanity), and an SI that doesn't allow for temporary unhappiness will probably just sit on its nano-ass twiddling its nano-thumbs.

Suppose we solve the first problem and somehow balance friendliness just right to allow the SI to actually act. So, let's say that at that point you can come up to the AI, say what you want, and get it. Would we by any chance destroy the main stimulus to discover and invent things? I mean, would you really want to study physics or math if you cannot possibly come up with a single original thought? Is there a single field of inquiry left to the human race that allows for actual originality after an SI takes a swing at solving every possible problem you can come up with? Do we just fold our hands and enjoy the ride and the views? How would this be different from a Maximum Fun Device?


On one of Eliezer's ethics posts (I think; it could have been earlier), I complained about his wordiness making his points and discussion hard to follow. I just came across an essay I had read before, reread it, and recommend it to him, especially if he plans on writing a book for a larger audience: Blanshard's "On Philosophical Style". One copy is at http://www.anthonyflood.com..., but it is available in several places, including in print. (For that matter, I'd also recommend it to Nick Bostrom and Dan Dennett, though they are already more readable than most other philosophers I have tackled.)


since all humans share the same future light cone, sometimes one person's preferred future is incompatible with another person's. I cannot imagine that the implementors of the superintelligence can resolve the incompatibilities without engaging in quite a few instances of special pleading.

It seems unlikely that they will try. That's the "superintelligence from a benevolent democratic government" scenario - and how likely is that?

More likely, superintelligences will not attempt to resolve incompatibilities between different human factions - rather, they will promote the interests of those who constructed them.
