45 Comments

Nah, humans are too fragile, high-maintenance and short-lived. I think it's quite possible that no (biological) human being will ever survive an interstellar trip, which is the minimum requirement for a Von Neumann machine. Our strength has traditionally been self-reproduction once we arrive at our destination. Machines have a while to go before they match us in that regard.


 Humans are Von Neumann machines.


"within the Turing-complete (or better) laws of physics, random fluctuations may be able to produce numbers that outgrow the improbability of the coincidence"

 - that is actually the best a priori, non-theistic argument for our existence that I have ever heard. The only assumption it really relies on is full Turing completeness, which in particular implies that the universe is in some sense infinite.

And indeed it is plausible that eternal inflation is exactly what you described: a special kind of self-replicating process which, by chance, sometimes produces life, and produces an infinity of life at that.

http://en.wikipedia.org/wik...


More insidiously still, within the Turing-complete (or better) laws of physics, random fluctuations may be able to produce numbers that outgrow the improbability of the coincidence. Inside a neutron star, a self-replicating seed pattern can form which will convert the star into a computer that will simulate an enormous number of intelligent beings. It's not clear that the number of beings, times the probability of formation of the seed pattern, is not a very huge value. (Putting aside neutron stars, a self-replicating pattern creating a huge computer may be where we are heading.)

Edit: joking aside, that possibility does freak me out a bit; I haven't yet worked out how it looks under the 'what is the probability of the world around me' approach. Edit 2: preliminarily, all looks well, with no undue weight given to such stuff.
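A back-of-the-envelope sketch of that worry, with every number invented purely for illustration (nothing is actually known about such seed patterns): the question is whether the probability of the seed forming, multiplied by the number of simulated observers it would produce, comes out negligible or enormous.

```python
# Back-of-the-envelope for the neutron-star scenario above.
# All numbers are made up for illustration; the point is only that an
# enormous observer count can swamp an enormous improbability.

p_seed = 10.0 ** -100          # chance a self-replicating seed pattern forms (invented)
observers_if_seed = 10 ** 120  # observers the resulting computer would simulate (invented)

expected_observers = p_seed * observers_if_seed
print(expected_observers)      # 1e+20: the improbability is swamped by the count

# Flip the two exponents and the expectation is negligible instead; which way
# the real exponents go is exactly the open question the comment raises.
```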


"It boils down to one thing: numbers that can be produced by theory grow faster than any computable function of size of the theory"

- yes, exactly. Couldn't have put it better myself.


Very good point on the ceteris paribus. The issue is that 'all else' is not equal across those "theories" where counts of people are compared, and so, formally, they should carry pretty much zero weight, since they break the very assumptions they logically require.


>>>Now suppose you think that a large portion of the Great Filter lies ahead, so that almost all civilizations like ours fail to colonize the stars.

Why? Oh merciful Cthulhu, why should I suppose this? There is no empirical evidence that any civilization but ours exists. Why would you assume, upon seeing a single dot on a blank map, that the map contains billions of invisible dots?

PS. Our universe also seems to lack Flying Spaghetti Monsters. Surely some great filter must destroy them before they reach Earth...


One-boxing will happen given enough non-normative assumptions about the predictor. For example, if the predictor is simulating me, and I want the real me to get paid, then I end up in an uncertain world state: the world around me is either the simulation or the real world, which can make me one-box. The other way is to self-identify with the algorithm, in which case your actions have causal consequences both inside the predictor and inside the real world. Self-identifying with the algorithm is natural for software, while self-identifying with particular hardware requires extra work, such as some unique ID and other such things; the opposite is true for humans, where self-identifying with hardware comes innately.
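A toy version of the "predictor simulates me" reading (the $1,000,000 / $1,000 payoffs are the standard Newcomb ones; the function names are mine, for illustration only): since the simulated run and the real run execute the same algorithm, the only reachable outcomes are one-box for $1,000,000 and two-box for $1,000.

```python
# Toy Newcomb setup under the "predictor simulates the agent" assumption
# discussed above. Payoffs are the standard $1,000,000 / $1,000 ones;
# function names are illustrative, not from any particular formalization.

def predictor_fills_boxes(agent):
    """The predictor runs a copy of the agent to decide the opaque box."""
    predicted_choice = agent()                 # the simulated run
    opaque = 1_000_000 if predicted_choice == "one-box" else 0
    transparent = 1_000
    return opaque, transparent

def play(agent):
    opaque, transparent = predictor_fills_boxes(agent)
    real_choice = agent()                      # the real run, same algorithm
    return opaque if real_choice == "one-box" else opaque + transparent

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

print(play(one_boxer))   # 1000000
print(play(two_boxer))   # 1000
```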

The normative predictor, I gather, works rather like the charisma of King David in Gibbard and Harper's paper.

The LW stance is, well, heil Yudkowsky as decision theorist for calling Hofstadter's superrationality "timeless decision theory" and for having "formalized" it (which doesn't seem to have happened yet). This is all really stupid, with a dash of very questionable research ethics, and perhaps one of the best reasons not to take these folks seriously.


An off-topic but interesting quote from that Damascus link: "This is exactly the reasoning that leads to taking one box in Newcomb’s problem, and one boxing is wrong. (If you don’t agree, then you’re not going to be in the target audience for this post I’m afraid.)" I never followed the Omega discussion all that closely, but isn't one-boxing normative at LW?


SIA-type claims contain a ceteris paribus clause. Is there any reason to think the inferential weight of the theorems is substantial in the real world?

This is why I could never even become mildly interested in these arguments. Unless my take is wrong, they're debating an argument that really carries very little weight in drawing any conclusion. They invoke considerations that are (at best) formally relevant but aren't theoretically relevant. If the argument worked, it would carry little weight because it is only one very abstract consideration among an overwhelming weight of potential countervailing considerations, swept under the ceteris paribus clause.


Or the simulation 'arguments'. It boils down to one thing: numbers that can be produced by theory grow faster than any computable function of size of the theory, if the theory is expressed in any Turing-complete grammar. (The same can be trivially extended to second-order theories.)
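A minimal illustration of that claim (Knuth's up-arrow notation is my choice of example; the Busy Beaver function is what actually outgrows every computable function, but it is itself uncomputable): a description only a few symbols long already denotes a number far too large to write out.

```python
# A few symbols of "theory" denote an astronomically large number.
# Knuth's up-arrow notation serves as the illustration here; the Busy Beaver
# function makes the full claim (growing faster than any computable function)
# but cannot be computed, so this only gestures at it.

def up_arrow(a, n, b):
    """Knuth's up-arrow operator: a, followed by n arrows, then b."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 3^27 = 7625597484987
# up_arrow(3, 3, 3), i.e. 3^^^3, is a power tower of 3s about 7.6 trillion
# levels high: hopeless to evaluate, yet its description is still tiny.
```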

A final predictive theory is the theory describing the world around me (or around us). A theory with multiple observers like me is equivalent to a group of theories of the world around me, and this group need not have higher probability (though it sometimes can). Speaking of which, if you ever end up having your 'probabilities' add up to more than 1, that doesn't mean you can normalize them so they add up to 1; it means you should go back and find where you screwed up the math.


No. We simply shouldn't trust any argument based upon anthropics until we have actually sorted anthropics out.


By the way, the argument I have presented here is just a variant of the presumptuous philosopher problem.


I think SIA-type arguments are fishy. Look, SIA says that you should believe more in hypotheses about the world which say that there are lots of other "yous" or "you-like experiences". Essentially, the perfect hypothesis from the point of view of SIA is one where reality consists of an extremely large number N of brains-in-vats having the "you" experience. (Let us ignore the possibility that reality could be infinitely large, for now.) So, even if you assign a tiny prior to the aforementioned hypothesis, there will exist a sufficiently large N such that you should assign, for example, more than a 50% chance to the N-brains-in-vats reality, as opposed to the ordinary hypothesis where there is just one of you living normally on Earth.
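A worked version of that update (the 10^-9 prior and the 10^12 observer count below are numbers I picked for illustration): SIA weights each hypothesis by how many "you"-like observers it contains, so any fixed nonzero prior is eventually overwhelmed by a large enough N.

```python
# SIA-style update: weight each hypothesis by the number of "you"-like
# observers it contains, then renormalize. The prior and observer counts
# below are illustrative choices, not anything from the comment.

def sia_posterior(hypotheses):
    """hypotheses: dict of name -> (prior, number of you-like observers)."""
    weights = {name: prior * n for name, (prior, n) in hypotheses.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

hypotheses = {
    "ordinary life on Earth": (1 - 1e-9, 1),
    "N brains in vats":       (1e-9, 10 ** 12),  # N = 10^12
}

print(sia_posterior(hypotheses))
# The vat hypothesis ends up with ~99.9% of the posterior despite its 10^-9
# prior; for any nonzero prior there is an N large enough to push it past 50%.
```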

I consider this to be a reductio ad absurdum of SIA-based anthropics.


Correction. You're introducing endogenous considerations into evaluating the exogenous factor.

Minor correction: "any strategy as good as S."


I had written,

Where in the OP do you intimate anything about an initial estimate of quality?

Which you didn't answer. I can only guess that you thought the point purely verbal. (Not that this should be excusatory: your confused prose contributes to your confused thought.)

Here's the significance: unless you had characterized the avoider based on its endogenous effectiveness, deciding to use a given avoider doesn't necessarily lower the probability of success.

This leaves you with a formal theorem with dubious application to anything. The theorem is that if you can distinguish the endogenous and exogenous causes for not adopting an avoider, then the probability of succeeding depends not only on the endogenous causes (favorably) but also on the exogenous causes (unfavorably). But distinguishing them involves both conceptual and empirical problems; I don't know that even the conceptual distinction can be drawn.

What it does NOT mean is that:

To expect S to work we would have to be very confident that we were highly unusual in adopting S (or any strategy as good as S), in addition to thinking S very good on the merits.

We don't have to be very confident of both if we're confident enough for one. They're compensatory factors.

But there's another confusion I might as well remark on. In evaluating the exogenous component, we would not be evaluating for any strategy as good as g. There you're introducing exogenous considerations into evaluating the endogenous component. This confusion shows that you didn't fully understand the role of the endogenous-exogenous distinction when you posted--that the confusion wasn't purely verbal (although that's bad enough to warrant criticism).
