
AGI-09 videos out - including some by Robin Hanson:

http://vimeo.com/3981759 - Economics of A.I.

http://vimeo.com/4007677 - Hard Takeoff Panel - Robin Hanson & Hugo de Garis


How come my comment on the Uncanny Valley post has disappeared?


Note on my new interpretation of QM:

I finally abandoned Many Worlds (MWI). Basically, I realized that QM actually implies a failure of reductionism. I now believe all the problems of QM come from conflating what are really different irreducible levels of reality. It's no surprise that Yudkowsky and co. are many-worlds believers; I now think this is a gross confusion arising from the attempt to interpret reality with a single-level model of causality (reductionism).

Listen:

"The delving of the dark energy diving through the sea of starry treasure powers the perfect spheres of the rain-drops ringing Paris, and the wetness in the eyes of the actress reflects the dripping candy floss orbs eaten on the amusement wheel, whose rotation sparks the royal crystal glass held in hand, cupping the champagne from the court of Champagne'"

There now. Doesn't that do more for you all than years of posts on Bayesian rationality ever will? ;)


The NYTimes has an article on medical treatments which don't work. It lists various examples and discusses the "allure" of ideology-based health care. (Hat tip to slashdot)


>work in this direction hasn't achieved anywhere near the success that Bayesian induction has.

Bayesian Induction alone is never going to work. The reason is this: to avoid computational intractability, many different types of knowledge representation are required. Induction alone can't provide the interfaces between these different representations; you need a system for mapping between representations, and analogy formation provides exactly that.
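To make concrete what "Bayesian Induction alone" looks like, here is a minimal toy sketch (illustrative Python with made-up numbers, not anyone's actual system): a conjugate update that operates entirely within one fixed representation. Notice that nothing in the update rule says how to map into a *different* representation - that interface is exactly what's missing.

    # Toy beta-binomial update: Bayesian induction confined to a single
    # fixed representation (a coin's unknown bias). Made-up numbers.
    def bayes_update(alpha, beta, heads, tails):
        # Conjugate update: Beta(a, b) prior + binomial data -> Beta posterior.
        return alpha + heads, beta + tails

    a, b = 1, 1                                  # uniform Beta(1, 1) prior
    a, b = bayes_update(a, b, heads=7, tails=3)
    print(f"posterior mean bias = {a / (a + b):.2f}")  # prints 0.67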

Further, Induction itself can be reinterpreted as just another analogy, since Induction involves a mapping between the *functional* properties of things - it depends on causal mappings, which rely on the fact that the mapping between states at two different time coordinates is 'smooth' or 'continuous'.

Academics have to be blind to overlook this whoppingly obvious fact.

To quote Hofstadter again:

"One should not think of analogy-making as a special variety of reasoning (as in the dull and uninspiring phrase “analogical reasoning and problem-solving,” a long-standing cliché in the cognitive-science world), for that is to do analogy a terrible disservice. After all, reasoning and problem-solving have (at least I dearly hope!) been at long last recognized as lying far indeed from the core of human thought. If analogy were merely a special variety of something that in itself lies way out on the peripheries, then it would be but an itty-bitty blip in the broad blue sky of cognition. To me, however, analogy is anything but a bitty blip — rather, it’s the very blue that fills the whole sky of cognition — analogy is everything, or very nearly so, in my view."

Ref: Presidential Lectures: Douglas R. Hofstadter

Another example of the blinkered nature of academics is the so-called Newcomb's Problem. What problem? You take the one box only; I don't need some super-duper decision theory to tell me this whoppingly obvious fact, just plain old Geddesian common sense.
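For what it's worth, here is a back-of-the-envelope expected-value check (a toy sketch assuming the standard payoffs of $1,000,000 in the opaque box and $1,000 in the transparent one, plus a made-up 90% predictor accuracy; the ordering holds for any reasonably reliable predictor):

    # Toy expected-value check for Newcomb's Problem. Standard payoffs;
    # the 0.9 predictor accuracy is an assumed illustrative figure.
    ACCURACY = 0.9
    M, K = 1_000_000, 1_000

    # One-boxer: a reliable predictor usually filled the opaque box.
    ev_one_box = ACCURACY * M + (1 - ACCURACY) * 0
    # Two-boxer: a reliable predictor usually left the opaque box empty.
    ev_two_box = ACCURACY * K + (1 - ACCURACY) * (M + K)

    print(f"one-box: ${ev_one_box:,.0f}")   # $900,000
    print(f"two-box: ${ev_two_box:,.0f}")   # $101,000

(This is the straight evidential expected-value calculation; causal decision theorists dispute exactly that step, which is why academics call it a problem at all.)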


As for overcoming bias--I have reduced testosterone through drugs and then castration. Questions?


A simple reply of "yes, I am handwaving it" would have sufficed.

Also, you have an interesting definition of "following"; Hofstadter has been on about analogy formation for the better part of three decades and, to my knowledge, work in this direction hasn't achieved anywhere near the success that Bayesian induction has. Nice try, though.


>mjgeddes, do you actually have a rigorous formulation of analogy formation or are you just handwaving this?

Unfortunately the ideas must still have the status of 'wild speculation' at this time. *I know* I'm right about everything, of course, but rigorous formulations require a lot of brain-power and the time and money to spend all day and all night working on it, which I don't have and can't do, respectively.

All I can do is state the facts and hope someone picks up on it. I have finally seen a hint of a 'break' on OB with Robin Hanson's threads on the low-entropy puzzle, which I'm sure is the beginning of the chain of reasoning that will eventually validate my ideas: Hanson has realized something is very wrong with standard thermodynamics. This *must* be pursued further.

Also, Douglas Hofstadter (one of the world's top AI researchers) is following my line on analogy formation. In a recent interview in 'Scientific American' he states:

"I'm working on a book with a French colleague, Emmanuel Sander, which is about how I see analogy as being the core of all of human thought"

Link:

Hofstadter Interview


One thing I'd love to see written about is the placebo effect in animals. Is it a human-only phenomenon, or does it exist in other animals too? I've never seen anything on it, and the subject has big consequences whichever way the answer comes out.


mjgeddes, do you actually have a rigorous formulation of analogy formation or are you just handwaving this? Seriously, if this is the foundation of rationality, please explain it in a way that does not assume neural cognition as a prerequisite.

>One thing that was silly about the whole Google narrative, though-- any (sane) organization able to make a strong AI would understand the dangers of it getting loose, and would be very much invested in preventing that from happening. The idea of such a thing escaping stretches my suspension of disbelief.

Does history indicate that the originators of a complicated technology have typically understood and predicted its full effects? For instance, consider that the drug Sildenafil was first synthesized for the purpose of treating hypertension...


One thing that was silly about the whole Google narrative, though-- any (sane) organization able to make a strong AI would understand the dangers of it getting loose, and would be very much invested in preventing that from happening. The idea of such a thing escaping stretches my suspension of disbelief.


Eliezer, at some point do anti-virus programs become immoral? There are currently IM chatbots that attempt to phish bank account details from people. If these become capable of passing the Turing test (or some other agreed-upon test), can they morally just be deleted?


If Google had actually put a fraction of the effort they put into dreaming up the April Fools' joke into actually *doing* something about AGI, we'd be there by now. The Google people are the fools.

To sum up my differences with the long Yudkowsky series on OB, here is a point-by-point comparison of Yudkowsky's views with my own.

Yudkowsky:

- Bayesian induction the foundation of rationality
- Values a function of the human brain
- Volition/freedom the foundation of values
- Many-worlds QM interpretation
- Intelligence independent of consciousness & values
- Single-level reality; reductionism

Geddes:

- Analogy formation the foundation of rationality
- Universal terminal values built into the universe
- Beauty the foundation of values
- 3-level-time hidden-variable QM interpretation
- Intelligence dependent on consciousness & values
- Multi-level reality; failure of reductionism

As readers can see, we apparently disagree on everything.


Haha, loving Cadie. See you on the other side, everyone!


John H. Conway is giving a series of lectures on the "Free Will Theorem" of Conway and Kochen: videos available here.
