
Do you think that poverty and/or extreme poverty will ever be eliminated in a world with scarcity?


A question: is economics about predicting the economy?


I do not believe in cryonics as a viable way to continue my life.

At the moment of death, the cell membranes in my brain begin to break down and information is lost. The process of freezing exacerbates the brain damage. It is not quite as bad as the ancient Egyptians drawing my brain out through my nose, but it is still - yes, I will say it - impossible to bring the frozen brain back to life.

Eliezer, when you claim to believe in cryonics, are you making a deliberate error so that your most star-struck fans (myself included) cannot think you incapable of error and must test your statements for themselves? Or do you really believe in it?


Phil: what he claims to be an "outline" of a proof really doesn't say how he gets the result. It's only that one paragraph; the following paragraphs introduce the terminology for eq. (1) and aren't part of the "outline".

He does say: "before measurements, identical copies of the observer exist in parallel universes" (which is not at all the conventional way to think of many worlds, but probably would not lead to an incorrect result in this case, although it would in an EPR experiment), and "a Bayesian probability density ... is NOT a relative frequency" (but by repeating the experiment over and over, the relative frequency interpretation would come to the same result; Tipler doesn't seem to realize that you can repeat the whole experiment, not just make repeated observations within one experiment).

I suspect that Tipler does make the mistake you suggested he might have made, though.

Anyway, he's wrong in stating that there would be a difference between many worlds and Copenhagen in this case, and his result in eq. (1) is clearly wrong for any interpretation.
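To make the repeated-experiment point above concrete, here is a minimal sketch (mine, not Tipler's; the probability p and the run counts are arbitrary) showing that if you repeat the whole experiment many times, the relative frequency of an outcome pooled across runs converges to its Bayesian/Born probability:

```python
import random

def run_experiment(p, n_measurements, rng):
    """One 'whole experiment': n_measurements two-outcome measurements,
    each giving outcome 1 with Born probability p."""
    return sum(1 for _ in range(n_measurements) if rng.random() < p)

def relative_frequency_across_runs(p=0.3, n_runs=10000, n_measurements=100, seed=0):
    """Repeat the whole experiment n_runs times and return the overall
    relative frequency of outcome 1, pooled across all runs."""
    rng = random.Random(seed)
    total = sum(run_experiment(p, n_measurements, rng) for _ in range(n_runs))
    return total / (n_runs * n_measurements)

if __name__ == "__main__":
    # The pooled relative frequency approaches p (here ~0.3), i.e. the
    # frequency and Bayesian readings agree once the experiment is repeated.
    print(relative_frequency_across_runs())
```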


"Non-Many-Worlds quantum mechanics, based on the Born Interpretation of the wave function, gives only relative frequencies asymptotically as the number of observations goes to infinity. In actual measurements, the Born frequencies are seen to gradually build up as the number of measurements increases, but standard theory gives no way to compute the rate of convergence."

...seems to defeat his own thesis. The results of the test he proposes might well be what the MWI predicts - but he himself claims that other theories are vague on the issue - so what's the point? The conventional wisdom is here.
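For what it's worth, if one treats the N outcomes as independent draws with Born probability p, textbook statistics does give a convergence rate; this is my aside, not something from Tipler's paper or the comment above:

```latex
% Relative frequency f_N of an outcome with Born probability p
% after N independent measurements:
\[
  f_N = \frac{1}{N}\sum_{i=1}^{N} x_i , \qquad
  \mathbb{E}[f_N] = p , \qquad
  \operatorname{sd}(f_N) = \sqrt{\frac{p(1-p)}{N}} ,
\]
\[
  \text{so the deviation } |f_N - p| \text{ shrinks like } N^{-1/2}.
\]
```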


Testing Many-Worlds Quantum Theory By Measuring Pattern Convergence Rates
http://arxiv.org/abs/0809.4422

This was published recently, but it seems to have received very little discussion. The paper is really short (2 pages; without the abstract and references it would be a single page) and claims to use Bayesian theory to provide a testable formula that should either confirm or rule out Many Worlds.

I didn't understand it, but I suspect Tipler may be trying to measure, in one world, how fast the pattern converges summed over many worlds, which I think would be a mistake.

I also suspect Tipler's ideas won't be given as careful an examination as they would have received if someone else had put them forward.

Is there a physicist in the house?


Testing Many-Worlds Quantum Theory By Measuring Pattern Convergence Rates

http://arxivblog.com/?p=656
http://arxiv.org/abs/0809.4422

This was published recently, but it seems to have received very little discussion. The paper is really short (2 pages; without the abstract and references it would be a single page) and claims to use Bayesian theory to provide a testable formula that should either confirm or rule out Many Worlds.


Via Marginal Revolution, a research paper describes the behavior of people diagnosed with borderline personality disorder in a repeated trust game. The gist of it is that cooperation broke down because the subjects made no attempt to restore their counterpart's trust, even as the counterpart's willingness to lend deteriorated.

Very interesting stuff, especially in light of the current credit crisis - how much of our economy is dependent on fragile cooperation mechanisms?
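For readers unfamiliar with the setup, here is a minimal sketch of a generic repeated trust game (the standard "investment is tripled" variant; the endowment, multiplier, and trust dynamics below are illustrative assumptions, not the parameters of the cited study):

```python
def play_round(investment, repayment_fraction, endowment=20, multiplier=3):
    """One round: the investor sends `investment` (0..endowment) to the
    trustee, the amount is multiplied, and the trustee returns a fraction
    of what they received."""
    transferred = investment * multiplier
    repaid = repayment_fraction * transferred
    return endowment - investment + repaid, transferred - repaid  # (investor, trustee)

def repeated_game(rounds, trustee_strategy, endowment=20):
    """The investor 'lends' more after generous repayments and less after
    stingy ones, so cooperation unravels if the trustee never tries to
    restore trust by repaying generously."""
    investment, history = endowment // 2, []
    for _ in range(rounds):
        frac = trustee_strategy(history)
        payoffs = play_round(investment, frac, endowment)
        history.append((investment, frac, payoffs))
        # Generous repayment (more than a third of the transfer) restores
        # trust; anything less erodes it.
        investment = min(endowment, investment + 2) if frac > 1 / 3 else max(0, investment - 2)
    return history

# A trustee who never signals trustworthiness (always repays 20%):
# the investment column shrinks toward zero round by round.
print(repeated_game(10, lambda history: 0.2))
```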


Everyone here seems to have 'unloaded all their chips' on Bayesian Induction. Continuing the poker analogy, you could say that the AGI folks here have 'gone all in' on Bayes. Either they'll hit the jackpot...or they'll lose everything.

Of course, if you define intelligence in a sufficiently narrow way (i.e., optimally achieving goals), then you can fix your definition so it's fully captured by Bayes (M. Hutter, S. Legg, etc.). But that doesn't mean that your conception of intelligence is necessarily fully correct...
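(For reference, and assuming the definition being alluded to is Legg and Hutter's "universal intelligence" measure, it is usually written roughly as follows; the notation comes from their papers, not from this thread:)

```latex
% Universal intelligence of an agent \pi: expected performance V over all
% computable environments \mu, weighted by simplicity via Kolmogorov
% complexity K (simpler environments count for more).
\[
  \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
\]
```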

Let me suggest an alternative definition of intelligence (which blog readers may well find highly peculiar at first):

Intelligence is the ability to form effective representations of your own intentions/values - Marc Geddes

Folks should keep an open mind about the current 'Bayesian Induction' craze. There could be further advances still in store...


I would say that it's bad to become an opiate addict because negative feedback mechanisms within the brain limit the effectiveness of opiates to produce sustainable pleasure. In other words, you eventually lose your capability to experience pleasure from both opiates and the events that naturally trigger that particular reward system. In the long run, you end up less happy than if you had never started taking them.
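As a purely illustrative toy model of that negative-feedback dynamic (the functional form and parameters below are invented for illustration, not taken from any study):

```python
def simulate_tolerance(dose_schedule, adapt_rate=0.2, decay=0.05, steps=40):
    """Toy opponent-process model: felt pleasure = stimulus - tolerance.
    Tolerance rises toward the stimulus level (negative feedback) and decays
    only slowly, so repeated dosing yields shrinking pleasure per dose and a
    below-baseline response once the drug (or a natural reward) stops."""
    tolerance = 0.0
    pleasures = []
    for t in range(steps):
        stimulus = dose_schedule(t)
        pleasures.append(stimulus - tolerance)
        tolerance += adapt_rate * (stimulus - tolerance) - decay * tolerance
    return pleasures

# Constant heavy dosing for 25 steps, then abstinence: pleasure per dose
# shrinks toward a floor, then goes negative until tolerance decays.
print([round(p, 2) for p in simulate_tolerance(lambda t: 10.0 if t < 25 else 0.0)])
```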


"You have to measure the things against each other, and it seems like the way to decide which thing to do is to pick whichever one brings the most happiness."

Hey, all you need now is a base-level formalisation of 'happiness' and we have a terminal value for our protean AI! So for the big prize, what do you mean by 'the most happiness', without resorting to terms like happy, fun or utility?


"So, the question is, are we optimizing for something other than happiness?"

According to most biologists, yes. As with most other organisms, the utility function for humans is well modelled as "expected number of grandchildren" - with the "expectation" being based on the assumption that we are in something like our ancestral environment.

Is this utility function a good match for happiness? Probably not, at least according to this: "second and third children don't add to parents' happiness at all. In fact, these additional children seem to make mothers less happy than mothers with only one child".


Joe, you might want to cf. "Not for the Sake of Happiness (Alone)" and "Fake Utility Functions." Also maybe CEV re mistaken beliefs leading to bad choices.


pdf23ds,

Do you think we should optimize for What-People-Want instead of happiness/pleasure? That seems like a viable alternative to What-Makes-People-Happy, but I don't think I understand it. Let me think "out loud" here.

There are some cases where people want things that are bad, because they're wrong about something. Like, what if I wanted to stab myself because I thought it would feel great? Also, what happens when two people want mutually exclusive things? You have to measure the things against each other, and it seems like the way to decide which thing to do is to pick whichever one brings the most happiness.
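One way to make the "measure against each other" step concrete is plain additive aggregation (a standard utilitarian formula, not something specified in the comment, and it assumes happiness is comparable across people):

```latex
% Pick the action a from the available set A that maximizes total
% happiness, where H_i(a) is person i's happiness if a is chosen.
\[
  a^{*} \;=\; \arg\max_{a \in A} \; \sum_{i} H_i(a)
\]
```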

As for parents bringing up children, my understanding is that it might make the parents less happy, but the good upbringing makes the children much happier for the rest of their lives. This still seems like it's optimizing for net happiness.


I wonder if it's possible to intersect the interests of our prediction market guru and our FAI guru. How about the following hypothesis:

a) The probability of any given human institution developing AI is highly correlated with its funding.
b) The easiest case for technology investment is when that investment actually supports a business model directly.
c) The stock market, as a predictive market, rewards the most accurate predictors.
d) Predicting future trends for economic or business issues requires considerable synthesis of high-level and low-level pattern matching (i.e., not strictly narrow AI).
e) Therefore, it is likely that our first AIs may come from investment banks or hedge-fund equivalents.

Now let us also consider that you have increasing amounts of capital put behind the decisions of quants and their systems TODAY. If there is a pattern that indicates 'sell' to a large number of lower-AI systems, it can be profitable to predict THAT and trigger it. This of course sets up a nicely recursive environment of minds simulating minds.
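A toy sketch of the "predict THAT and trigger it" dynamic (the thresholds, costs, and price-impact numbers below are made up purely for illustration):

```python
def mechanical_sellers(price, stop_loss_thresholds):
    """Count the simple systems whose whole rule is 'sell if price < threshold'."""
    return sum(1 for threshold in stop_loss_thresholds if price < threshold)

def cascade_profit(price, stop_loss_thresholds, push_cost=1.0, impact_per_seller=0.5):
    """If pushing the price just below a cluster of stop-losses triggers enough
    mechanical selling, a trader who shorts first and buys back after the
    cascade comes out ahead of the cost of the initial push."""
    triggered = mechanical_sellers(price - push_cost, stop_loss_thresholds)
    expected_drop = triggered * impact_per_seller
    return expected_drop - push_cost  # positive => triggering the cascade pays

# Twenty stop-losses clustered just below the current price make the cascade profitable.
print(cascade_profit(100.0, [99.5] * 20))
```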

Now perhaps it's possible to construct an 'X cancels out X' theory in which the market works perfectly regardless of how esoteric its participants may get. Could this be akin to Eliezer's pre-FAI thoughts?

I find it plausible that before AIs use us as tools, we will use them as tools to destroy ourselves.


I would like to see some discussion of the housing bubble and the bailout plans - specifically, the times when the government intervenes in the market price mechanism. This time it's saying that a bunch of mortgage-backed securities are undervalued by the market.

Typically we rely on markets to set prices, knowing they do so better than any other mechanism. Occasionally this mechanism seems to "break". However, does that mean it's rational to switch to another (political?) mechanism? Do we have a system that can accurately predict when markets go awry? If we do, do we have something that can out-perform distorted markets? Or is all of this just a "do something!" bias?
