(I wouldn’t be at all surprised if the following argument isn’t original, but I haven’t seen it elsewhere yet.) If our descendants do not destroy themselves, then over the next trillion years they may become knowledgeable and powerful enough to create new baby universes that expand to look much like the universe we can see. Such a universe might then evolve its own intelligence, which would grow powerful enough to repeat the process.

There's a Stephen Baxter book, Manifold Time, with a version of this argument that extends to non-intelligent creation of new universes as well - there's a paraphrasing of it here: http://www.csub.edu/Physics...

James Miller pointed out the first problem with this argument. We can easily postulate a countably infinite set of regions like our visible universe. It wouldn't even seem that surprising (at least according to my limited understanding of cosmic inflation). In that case it wouldn't matter what 'fraction' of natural 'universes' produce life. If the rest of your argument holds, and if I understand how to apply probability to infinite sets (both of which seem doubtful) then the alternative to 1 seems like a toss-up.

The other problem I see here lies in your assumptions about godlike behavior. Maybe the power to create continua -- like the power that Bostrom's gods possess to simulate human brains perfectly, and to create immersive brain interfaces -- strongly suggests a telepathic civilization. Maybe if we actually knew what we were talking about here, we'd see a strong correlation between godhood and having the basic decency not to create worlds where people die alone and in pain.

i just got a little chuckle realizing that somewhere, somebody is probably reading this post and trying very hard to figure out exactly how it serves to promote the nefarious ulterior motives of the diabolical Koch brothers

I doubt anyone would spend much time on it. Obviously a hypothetical marketing genius in the pay of the Koch brothers could start from the premise of God(s) and derive any number of Koch-like conclusions.

To put this in terms a Bayesian would appreciate, the issue is what our a priori probabilities truly represent.

These sleeping-beauty-type arguments assume that our a priori probabilities should be thought of as just our unexamined gut feelings. However, they can't really be that, since they must satisfy the probability axioms; in other words, they must assume logical omniscience. Only events whose prior probability was neither 0 nor 1 can modify your probability judgments via conditioning.

The argument presented here is obviously not new empirical data. It proposes a mathematical/philosophical argument which our prior probability assignment must (if valid) comply with. But on this view the argument no longer goes through.

For the sleeping-beauty-style argument to work we need to assume that, since we have no reason to favor one centered universe consistent with our experiences over another, we should give them equal prior probability, and make some other assumptions about our prior probabilities of being in various centered universes. The conclusion we then seem to reach is that our probability of, say, not being in an imprinted universe, of induction working, etc., is very tiny if not 0.

But our gut-level intuition that it's unlikely we are in an imprinted universe is considerably stronger and easier to grasp than our intuitions about centered universes. This argument then hasn't so much shown that we should assign probability 0 to these events as that our prior probabilities were not actually consistent. Indeed, I would argue that the error is in taking our prior probabilities to even range over centered universes. That is a bad way to model our beliefs, and if we insist on using probability as a model for ideal confidence, then our event space should consist of empirically observable properties of the universe, or at least concrete facts about the world we can easily grasp.

But if we insist on taking our priors to be a distribution on scientific claims about the world this kind of sleeping beauty argument can't even get off the ground.

To give a shorter, more convincing refutation, consider the following argument:

The principle of induction tells us to grant more confidence to simpler descriptions of reality, so it shouldn't be inconsistent to have high confidence that the universe has finite information content, i.e., that the laws of physics and the initial conditions can be specified using only finitely much information. (Initial conditions and physical laws are interchangeable: every universe obeys the physical laws of the universal Turing machine with the right initial conditions.)

However, by the argument above, if it is possible to create baby universes (or indeed just to run simulations containing conscious sims), and unboundedly many such universes/sims are created, then the probability that the universe can be described with fewer than k bits of information must be 0. After all, there are only 2^k such universes, and the limit of 2^k/n as n goes to infinity is 0. Therefore we must assign probability 1 to the claim that the universe allows no simple description.
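
The counting step here is just arithmetic, and can be sketched in a few lines (the function name is mine, purely illustrative): with fewer than k bits there are at most 2^k distinct descriptions, so they can cover only a vanishing fraction of n universes as n grows.

```python
# There are at most 2**k descriptions of fewer than k bits, so among n
# universes, at most 2**k / n of them can have such a short description.
def max_describable_fraction(k: int, n: int) -> float:
    """Upper bound on the fraction of n universes describable in < k bits."""
    return min(1.0, 2**k / n)

# The bound shrinks toward 0 as the number of universes grows:
for n in (10**3, 10**6, 10**9):
    print(n, max_describable_fraction(10, n))
```

This is exactly the 2^k/n limit the comment appeals to, made explicit.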

First, just on straightforward grounds, the argument doesn't follow. Just because we could imprint our personality or whatever on a baby universe doesn't entail that we would always do so. Indeed, one might reasonably believe that spawning a baby universe with such an imprint requires design and care, while spawning such universes in some boringly random fashion is quite easy.

Thus it's entirely possible that all our descendants will imprint their personalities on baby universes, but that one weirdo will build a self-replicating machine that spits out universes generated with simple random parameters, and his contribution to the collection of all universes will massively outweigh the others.

--

A slightly more subtle problem is the assignment of a measure to the collection of all universes. Were this a finite collection, this would be easy, but if we have the capability of producing a universe which, like this one, also has the capacity to produce baby universes, it would be easy to create continuum-many universes (an infinite binary tree of universes). So how does one even assign a measure on this collection of universes?

Immediately this should raise a red flag. The argument relies on the intuition that "obviously" our confidence that our universe has property P should be the same as the proportion of universes with that property. But how can that be obvious if we don't even understand what proportion means in this context (there are TONS of probability measures on Cantor space)?
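
The abundance of measures can be made concrete. Each Bernoulli(p) coin induces a distinct product measure on Cantor space (infinite binary sequences), and the "proportion" of sequences in even a simple cylinder set depends entirely on which p you pick. A sketch, with a helper function of my own invention:

```python
# Each Bernoulli(p) coin gives a different probability measure on Cantor
# space. The "proportion" of sequences that begin with a given prefix
# depends entirely on which measure you choose.
def cylinder_measure(prefix: str, p: float) -> float:
    """Measure of {sequences starting with `prefix`} under the
    Bernoulli(p) product measure (each bit is 1 with probability p)."""
    m = 1.0
    for bit in prefix:
        m *= p if bit == "1" else 1.0 - p
    return m

# The same set of sequences gets a different "proportion" under each measure:
print(cylinder_measure("101", 0.5))  # fair-coin measure
print(cylinder_measure("101", 0.9))  # biased-coin measure
```

Since no one measure is privileged, "the fraction of universes with property P" is underdetermined until a measure is chosen.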

--

Ultimately this is really a version of the sleeping beauty paradox, and exactly the same argument would apparently prove that there are infinitely many copies of our universe: provided you didn't believe that was impossible, it becomes virtually certain (probability 1).

The fallacy in all these arguments is the confusion of the confidence we should have that some property holds of our current awareness (e.g., being generated by an imprinted world) with the measure of all such awarenesses having that property. In other words, the assumption that, since most possible consciousnesses will inhabit an imprinted world, we should be confident that we inhabit such a world, is invalid.

The problem is in assuming our awareness is somehow the result of randomly selecting (with something like a uniform distribution) one of the collection of all possible conscious experiences. We have absolutely no reason to believe any such thing.

Here is an elaboration of a similar argument, the New God Argument, which stems from the Simulation Argument and the Great Filter Argument, as well as the transhumanist assumption that we will eventually become posthuman: http://www.scribd.com/doc/9...

So this is basically Bostrom plus "imprinted minds". Even if we buy the criteria that make it p~1, shouldn't the likelihood of being in a universe containing imprinted creator minds then be whatever fraction of universes the creators decide to put their minds in? I must have missed why we would expect this to be a large fraction.

Well, it would seem to me that this intellectual speculation about the universe once again clearly violates Occam's razor. http://en.wikipedia.org/wik... Of course, Occam could be wrong (even if seldom so). And since we can be pretty confident that we'll never know if any of these universes or pseudo-gods exist, you can speculate all you wish with no negative consequences. Pass the popcorn. :)

In Alternatives to No Mind Hair I propose three alternatives. One I see has already been sort of proposed by James Miller and Carl Shulman. The others are the co-opting argument for first moving and maybe it's us.

Why not apply evolutionary thinking, and posit that the universes that exist are exactly those that their creators correctly tailored to lead to subsequent recreation?

So the imprint of the creator is not in the form of a representation of its mind hidden somewhere, but instead the imprint is itself the initial conditions.

For a theory that produces large numbers of universes like ours, see Lee Smolin's Cosmological Natural Selection theory.

Basically, it makes two assumptions: 1) new child universes are born as a side effect of black hole creation, 2) child universes have the same laws as their parent, with small random variation.

Smolin concludes that if these assumptions are right, you end up with the vast majority of universes having laws tuned to maximize the production of black holes, which likely means they are tuned to maximize star production, which correlates with life-friendliness, which explains the fine-tuning mystery without resort to anthropic reasoning.

One spiffy thing about this theory is that it makes the testable prediction that as we learn more about how the black-hole-fecundity of a universe varies as a function of its physical constants, we will (continue to) find that, lo and behold, our particular universe's combination of physical constants is near a maximum.

I gather from Wikipedia that there are some recent skeptical views of the theory - see the article as a takeoff point for more info on those.

## God Near or No Mind Hair

3) Survival of the fittest – why the fittest? Why not the reddest, or the oldest, or the most depressed?

Because "fitness" is defined in terms of what survives.

Sean Carroll of "Cosmic Variance", in his book "From Eternity to Here", promotes a theory which I believe derives from Smolin.

what is P(our universe|a general intelligence chooses to create a universe)?

what is P(a general intelligence chooses to create a universe|a general intelligence exists)?

Neither is 1. The first, in particular, is very small if we find our universe is very random. So your conclusion doesn't hold.

This saves us from the conclusion that the more random our universe is, the more likely it was to be created by an intelligence.
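
The point can be put as a toy Bayes' rule calculation. All the numbers below are invented purely for illustration: even a generous prior that a creator exists and acts is swamped when P(a universe like ours | created) is tiny.

```python
# Toy Bayes' rule calculation with made-up numbers: a small likelihood
# P(our universe | created) keeps the posterior on "created" small even
# with a generous 50% prior that a creator exists and makes a universe.
def posterior_created(prior_created: float,
                      p_ours_given_created: float,
                      p_ours_given_natural: float) -> float:
    num = p_ours_given_created * prior_created
    den = num + p_ours_given_natural * (1.0 - prior_created)
    return num / den

# With P(ours | created) = 1e-6 vs P(ours | natural) = 1e-3,
# the posterior probability of creation stays well under 1%:
print(posterior_created(0.5, 1e-6, 1e-3))
```

The likelihood term does the work, which is exactly the objection: a very random universe is a poor fit to "an intelligence chose to create this."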

Why would anyone want to “imprint their mind” on a baby universe?

Why would anyone want to have children?
