I’m A Sim, Or You Aren’t

The simulation argument says that if our future descendants create enough (detailed) computer simulations of their ancestors, then you and I are likely to actually be such simulations living in a simulated world, instead of being the year 2011 humans we think we are. My simple variation on this argument concludes that either 1) ordinary people are pretty surely not simulations, or 2) very interesting people pretty surely are simulations. Add one plausible assumption, and both of these claims become true!

Now for details. Here is a standard simulation argument:

The [number] of all observers in the universe with human-type experiences that are living in [entire-history] computer simulations [is p*N*H.] … Here p is the fraction of all human-level technological civilizations that manage to reach a posthuman stage, N is the average number of times a posthuman civilization runs a simulation of its entire ancestral history, and H is the average number of individuals that have lived in a civilization before it reached a posthuman stage. (more)

So if p*N > 1, then most human-type experiences are actually ancestor simulations, and hence your experience as a human is likely to actually be a simulation experience. Thus we might conclude:

At least one of three propositions is true:

  1. [p << 1] The human species is very likely to go extinct before reaching a posthuman stage.
  2. [N << 1] The fraction of posthuman civilizations that are interested in running a significant number of ancestor simulations is extremely small.
  3. [p*N >> 1] We are almost certainly living in a computer simulation. (more)
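
The threshold behind this trilemma can be checked directly: with H real humans and p\*N\*H simulated ones, H cancels out of the ratio, and the simulated fraction is p\*N/(p\*N + 1), which crosses one-half exactly at p\*N = 1. A minimal sketch (the sample values of p and N are just illustrations, not estimates):

```python
# Fraction of human-type experiences that are ancestor simulations,
# given Bostrom's p (fraction of civilizations reaching posthumanity)
# and N (average number of entire-history simulations each runs).
# H, the population per civilization, cancels out of the ratio.
def sim_fraction(p, N):
    return (p * N) / (p * N + 1)

print(sim_fraction(1.0, 1.0))   # p*N = 1: exactly half of experiences are sims
print(sim_fraction(0.5, 10))    # p*N = 5: most experiences are simulated
print(sim_fraction(0.1, 0.01))  # p*N near 0: almost none are
</imports>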

However, if we call M the average number of human ancestors simulated by each posthuman civilization, then I expect  M >> N*H. That is, I expect far more simulated humans in general than those specifically in “a simulation of [the] entire ancestral history.” Today, small-scale coarse simulations are far cheaper than large-scale detailed simulations, and so we run far more of the first type than the second. I expect the same to hold for posthuman simulations of humans – most simulation resources will be allocated to simulations far smaller than an entire human history, and so most simulated humans would be found in such smaller simulations.

Furthermore I expect simulations to be quite unequal in who they simulate in great detail – pivotal “interesting” folks will be simulated in full detail far more often than ordinary folks. In fact, I’d guess they’d be simulated over a million times more often. Thus from the point of view of a very interesting person, the chances that that person is in a simulation should be more than a million times the chances from the point of view of an ordinary person. From this we can conclude that either:

  1. Ordinary people can be pretty sure that they are not in a simulation, or
  2. Very interesting people can be pretty sure that they are in a simulation.

If, for the purposes of a dramatic blog post title, we presume (probably incorrectly) that I’m a very interesting person, and that you the reader are an ordinary person, then the conclusion becomes: I’m a sim, or you are not.

Furthermore, both of these statements would apply if:

  • p*M, the expected number of simulated humans per human civilization, is within a factor F (e.g., a thousand) of the number of actual humans H, and if
  • interesting folks are simulated more than F² times as often as ordinary folks.

So unless p*M is so different from H that everyone can be pretty sure they are a simulation, or pretty sure that they are not, ordinary people can be sure they are not while very interesting people can be sure they are.
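
To see how the two bullet-point conditions drive the groups to opposite conclusions, here is one toy parameterization. All the numbers — the slack factor F, the sims-per-human ratio c, and the fraction q of people who count as “very interesting” — are invented for illustration, not taken from the argument above:

```python
# Toy numbers, all assumed purely for illustration.
F = 1_000        # slack factor: p*M is assumed within a factor F of H
c = 1.0          # p*M / H, expected sims per real human (within [1/F, F])
q = 1e-3         # assumed fraction of humans who are "very interesting"
ratio = F ** 2   # interesting folks simulated F^2 times as often as ordinary

# Spread the c*H total sims so that each interesting person gets
# `ratio` times an ordinary person's per-capita share.
r_ord = c / ((1 - q) + q * ratio)   # expected sim-copies per ordinary person
r_int = ratio * r_ord               # expected sim-copies per interesting person

# Convert odds (sim copies per real person) into probabilities.
p_sim_ord = r_ord / (1 + r_ord)
p_sim_int = r_int / (1 + r_int)

print(f"P(sim | ordinary)    ~ {p_sim_ord:.3%}")   # well under 1%
print(f"P(sim | interesting) ~ {p_sim_int:.3%}")   # well over 99%
```

Different assumed values of q and c move these numbers around, but the million-fold gap in per-capita simulation rates is what pushes the two groups toward opposite conclusions.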

  • William H. Stoddard

    For those both to be true simultaneously, we need a coherent model of how it is that I, as a real physical human being, can have a face to face conversation with Vernor Vinge, as an entity in a simulation. Failing that, I think it must be assumed that there is one probability for both of us to be real physical human beings, which may be either high or low.

    On the other hand, I do regularly interact with simulated entities, some of which are more interesting than most real human beings: Odysseus, Elizabeth Bennet, Mowgli, Galadriel, Francisco d’Anconia, for a short list. Does your argument not entail that they too have been simulated a millionfold more often than ordinary people, and that they thus are more real in some sense than ordinary people are?

  • Carl Shulman

    Using the SIA as you argue for, everyone should strongly update towards being more interesting than others, and be unable to reach agreement through communication.

    But by the same token, the SIA means you should expect arbitrary simulation resources, and everyone being a sim. Our uncertainty about future or alien computing resources gives an unbounded expected simulation capacity: so for large n, cases with n times as much future computing resources do not have their probability divided by n. E.g. we should probably assign at least 10^-200 probability that future or aliens civs will have the computing resources to produce 10^1000 sims. Beliefs about whether one is a sim (given SIA, which covers even logical uncertainty) would then be dominated by the extreme tail.

    • I presume you are using the fact that you feel conscious to infer that you must be interesting, but that since others would claim to be conscious even when they are not, their testimony cannot be trusted. I don’t see why I must believe in such thick tails for the distribution of computing power times inclination to simulate something like me.

  • Someone from the other side

    This has a number of interesting issues: How would you know that you are not in a small simulation? And how would the simulation designers know ex ante who would become an interesting/pivotal figure? That would require some truly impressive branch prediction capabilities which would seem to make the whole point of running simulations in the first place kinda moot, no?

  • Anonymous

    Furthermore I expect simulations to be quite unequal in who they simulate in great detail – pivotal “interesting” folks will be simulated in full detail far more often than ordinary folks.

    Not if every meaningful simulation of an “interesting” person is accompanied by less interesting side-characters. Maybe you’re just one of 10^5 NPCs in the general area of the pivotal figure’s interactions.

    • Being close to a pivotal character makes you interesting.

  • A very interesting person

    Thank you for your very interesting post. Let me continue your reasoning a bit:

    Let us assume that Bostrom’s argument is correct and we live in a simulated environment. And that Hanson’s argument is correct and these “ordinary persons” are not completely simulated in the vast majority of these simulations.

    So, with almost 100% certainty, “ordinary people” are some kind of philosophical zombies. To the completely simulated consciousness (“important person”) they seem to be persons. They act like persons, they smell like persons, they feel like persons – but they do not have consciousness. If they had consciousness, they would be completely simulated, and therefore important persons by definition.

    At this point we can bring in a Cartesian argument: “I think, therefore I am”. Every reader can measure by herself whether she has consciousness. A philosophical zombie (“ordinary person”) cannot do this metacognitive measurement by definition. So, if you believe you are conscious, then you are conscious.

    In conclusion, if we accept Bostrom’s and Hanson’s arguments, we can argue that every person who thinks she has consciousness must be a “very important person”. Or at least the makers of our simulation think that this person is very important.

    That’s great news for us all, folks, especially to the very you. You are very important. And btw, you can do whatever you want to other people because they probably do not have consciousness. They are just simulated entities in your very own simulated reality.

    • Anonymous

      I can already do whatever I want to other people, even if they are conscious.

    • xxd

      Only if we agree “I think therefore I am”.

    It’s quite possible that we’re merely optimized media-blob search engines and our “consciousness” is merely the query-response media-blob to the input blob of “what am *I* thinking?” and “what am *I*”.

      I don’t know the answer to that and I won’t rule out the possibility that we’re not just very sophisticated zimbos.

    My reasoning is this: the mediocrity principle states that here and now things are not special. Therefore (if the mediocrity principle is true) if there exist simulations here and now that I (or others) prefer, in which the NPCs are preferably close to human simulations, then there exists some category of simulation designers who would simulate highly detailed NPCs who mimic consciousness to some extent.

      I myself prefer my catgirls to be witty.

  • jb

    If computer programming has taught me anything, it’s that, given enough resources, efficiency becomes far less important than ease of programming. In other words, if fine-grained simulations are cheap enough, they might actually be preferable to coarse-grained ones.

    For example, these days, memory and disk are so plentiful that we don’t typically store small values in 8-bit and 16-bit variables – we just use 32-bit values for most things, and 64-bit values where 32 is insufficient. This is at least partially because writing the software to handle different sized values is more complex and error-prone than writing the software for 32-bit integers alone. Arguably, populating a virtual world and trying to decide who is fine- and who is coarse-grained is fraught with error. If the additional resources are not significant, there’s no reason to take risks – just make everyone fine-grained and be done with it.

    Having said all of that, I also would agree with “A very interesting person” that if you have consciousness, you are, pretty much by definition, an interesting person.

  • Carl Shulman

    > I don’t see why I must believe in such thick tails for the distribution of computing power times inclination to simulate something like me.

    First, you have argued for just such thick tails with respect to future interest rates in arguing for donation to the future via investment funds.

    Second, to not have such tails you need to assign absurdly low probabilities to scenarios where physics turns out to allow abundant or long-sustained computation. The considerations in Probing the Improbable seem relevant here. What evidence could give you the insanely huge likelihood ratios against these scenarios given current knowledge of physics?

    Maybe you think that willingness to use resources on simulations goes down faster than resources, but it would have to do so uniformly. If we assign a 10^-200 probability of a singleton locking in interests of beings interested in simulation in a way that scales with resources, and a 10^-200 probability of ludicrous computational resources (10^1000 sims or more) accruing to such a civilization, then SIA gives the Simulation Hypothesis overwhelming support within your framework. But it seems one would have to be extraordinarily overconfident to assign such low probabilities to those scenarios.

    I’ve discussed this with a number of other SIA proponents who have read your published work and thought about future civilizations, including Katja Grace (who also agrees that it undermines the SIA-Fermi-Doomsday argument), Stuart Armstrong, and Dave Chalmers, and they all agreed that SIA gives this result. Do you have an unpublished argument along these lines?

    • I’m sure we all agree that given some state prior over possible physical worlds, SIA gives huge boost to an indexical prior – your chance of finding yourself in that possible world is boosted in proportion to the number of creatures in it. But SIA says nothing about the state prior. Evidence has little directly to say about far tails, so that usually comes down to priors. I suspect I’m being subject to a Pascal’s mugging attempt here – should I alert the authorities?

      • Carl Shulman

        >I suspect I’m being subject to a Pascal’s mugging attempt here – should I alert the authorities?

        And that’s just the criticism of SIA I’m bringing to your attention! Folk with unbounded (or very high bound) utility functions in terms of things like years of happy life (for self or others) using non-rigged priors like Solomonoff induction find their calculations entirely dominated by the possibility of worlds with super-abundant resources, etc. However, bounded psychologically realistic utility functions do not have this effect, limiting the growth of utility with resources.

        SIA makes (non-anthropically) unlikely scenarios of vast resources dominate your calculations through the probability part of decision theory, rather than the utility function. You seem to be arguing that we should adopt SIA, which has this consequence with non-jury-rigged priors like the Kolmogorov complexity Occam’s razor, and then adopt a mangled prior over state spaces to avoid the unwanted conclusion.

        I would say that if you find yourself jury-rigging priors in such extreme ways, you should go back and re-evaluate the framework that demands jury-rigging.

  • burger flipper

    What is your basis for assuming this world you live in is large-scale or detailed? It might be nothing more than an ant farm compared to the reality you have not experienced.

  • Psychohistorian

    Other concerns aside, how could you know who simulators would find interesting? The gods must like “ordinary” people; they created so many of them! Wouldn’t this suggest that most ordinary people are interesting, and most unusual people are not?

  • John Maxwell IV

    I had this thought too, but from a different angle. If one was going to play an extremely immersive video game where they forgot themselves temporarily and replayed a part of history, it would be much more interesting to control a person who played a pivotal role. Less pivotal folks would tend to be non-player characters that weren’t being controlled by anyone.

    • Anonymous

      There are other reasons than pure entertainment to choose a VR game mode, especially if you have enough computational resources to simulate a very long total life span. Living the life of an ordinary human could have the value of a philosophical or quasi-meditational experience. My life is not the way that I would imagine VR entertainment to feel, but if I had 10^10 subjective life years available, it would be like spending a moment of reflection.

    • John Maxwell IV

      Here’s another thought: Presumably a highly advanced technological civilization would know about the simulation argument, and the fact that their simulated beings might guess they were being simulated. Is it possible that they would, for example, make it really hard to become a highly technologically advanced civilization inside of the sim (even though it was relatively easy in the “real universe”) just so the simulation argument would seem less plausible?

  • MPS

    In an eternally-inflating multiverse, there is no need to do any simulations. There are endless copies of all of us living all sorts of lives consistent with the laws of nature. It may not be beyond the scope of future science to assert with high confidence that this is indeed the case. So why waste the resources simulating that which already exists, endlessly?

    • MPS

      Actually you don’t need the cosmological multiverse. Many worlds of quantum mechanics does the same job. All you need is confidence that either of these is actually a fact of nature.

    Of course, neither of these theories allows different worlds to interact. So if you want to *meet* your long-dead ancestors, you’re going to have to recreate them. But then again, if you have the ability to recreate them, I don’t see why you have to actually do it to meet them. You’ll already know what they’d say or do. And if you don’t know, then you don’t know you are actually recreating your ancestors, as opposed to what would correspond to identical twins having lived somewhat different lives. And also, our experience of the world is inconsistent with this possibility, insofar as we do not experience anyone hankering to meet us from beyond our world. (Though perhaps they could do so and erase the experience from our simulated memories, so as to keep the simulation faithful.)

      Anyway, I’m having trouble understanding what exactly are supposed to be the motives of these people interested in simulating worlds of humans. (Unless maybe they are running sociology experiments: they actually know less than the hypotheses tend to attribute to them, and are creating new worlds as laboratories to learn more.)

  • Mitchell Porter

    These discussions and attempted calculations are always screwed up by the unnecessary focus on ancestor simulations. It’s like non-player characters in super-World-of-Warcraft, trying to determine how many elves there are in the real world. Do you think that a galaxy-spanning civilization with a fetish for simulation is really just going to simulate moments from its own history? One would expect it to at least have an interest in alternative histories, and realistically, it should be interested in simulations of worlds thoroughly unlike its own. If this is a simulation, we have no particular reason to believe that the outside has any resemblance at all to what we see inside. Such a resemblance is possible and that’s all.

    • Alexey Turchin

      In fact, the idea that we live specifically in an ancestor simulation is not important for the logic of the argument. We may call it “a simulation of a planet’s development”, so we could be simulated by a civilization quite different from ours in its origins.

      Also, the “elves” counterargument does not work, because here we don’t try to get any information about the outside world. We only want to know whether we are in a simulation, and if we assume that we are in super World of Warcraft, we already know that we are in a simulation.

  • Francisco

    Homo economicus, aspies, autists, etc = Ordinary, crude agents.

  • I don’t think that future civilizations will have vast computational resources to use on non-productive Sims. Robin has argued that a future with the capacity for electronic entities will be Malthusian and essentially all of those electronic entities will be working at subsistence wages.

    A simulation of a Sim in an organic body is vastly more costly than an electronic entity in an electronic substrate because the living tissue needs to be simulated from chemistry and physics and not just the data flows being manipulated symbolically. If “real” electronic entities in the universe running the simulation that we are being simulated in have to work essentially continuously simply to be able to afford the computational resources that sustain it, then the idea that computational resources are extremely cheap is wrong.

    If there are vast computational resources that are being used to simulate organic Sims, while “real” electronic entities are being “starved” of computational resources, there is a gigantic computational resource that is being underutilized to the detriment of the vast bulk of the population (the electronic entities).

    If computational resources are fungible, then computational resources will be used to devise schemes to acquire more computational resources. Electronic entities will do this until they have acquired all the computational resources they need or want. Since computational resources are fungible they become the ultimate symbol of status and will be used by the entities with the highest computational resources to acquire more computational resources, not to simulate organic entities.

    If there are entities that control vast quantities of computational resources while other entities are “starving” for lack of computational resources, it could be much like the present, where some individuals are extremely wealthy, but many starve for want of food. In such a scenario, the extremely wealthy won’t be using their computational resources on Sims, they will be using them to acquire ever more computational resources. If they don’t, then they will lose the race and will be relegated to the “starving” dustbin.

  • You should have written “and” instead of “or”.

    This argument will become more relevant once we see evidence of a single (as detailed as our own memories of living, at least) simulation of anyone existing anywhere.

  • My life has gotten so weird that I’ve actually considered this.

    • Well you’d count as an interesting person in my book.

      • Guess we should try a Total Perspective Vortex to sort this out.

      • I can’t see a reason to expect simulations more than real worlds to tend to have people thinking their world is weird. Nor can I see a reason to expect that in a sim the interesting people see their world as more weird as the ordinary people. So the fact that your world seems weird to you doesn’t seem evidence that you are in a simulation or that you are interesting.

      • tom

        I don’t want a TOE guy thinking that he’s just trying to discover the rules of the simulation he’s in. That wouldn’t be a TOE.

        Assuming that the people of the Earth I know are in one simulation, is it more likely that Lisi is being monitored and will be stopped if he gets too close to an answer, or that he has been added to the simulation to distract us from the truth(s) of a string theory?

  • arch1

    Robin, in your setup argument, what rules out (for example) p=0.4, N=7, for which none of the three bracketed inequalities is true?

    Also (and I’m sure this has been asked and answered many times, so anyone chime in) – it seems to be assumed that the cheapest way to build a sufficiently realistic sim will result in that sim having an inner life. What reason is there to believe this, especially for sims of “ordinary” people?

    • I think the idea is that such borderline cases have a small measure. And just define “sim” to have inner life in the above discussion.

  • A nonny mouse

    If post-humans use simulations for experiments, wouldn’t you expect more ‘uninteresting’ people than ‘interesting’, since the uninteresting folk would serve as the controls? By analogy, I bet we currently breed more normal labrats than mutant rats…

  • J Storrs Hall

    All we can say for sure is that the simulators have an inordinate fondness for beetles.

    I think that you need to factor Moravec’s hashlife argument into this. I.e. Ordinary folks are probably a lot easier to simulate, and thus are just as likely to be simulated.

    Who’s interesting, anyway? Newton, Darwin, Einstein, (and you and I) saw stuff that will be totally obvious to the simulators–they would only need have one parameter for “likely to get it”… I bet Napoleon and Hitler get simulated a lot more often…

  • arch1

    Robin, Garrett, JoSH,

    I draw a different tentative lesson from this. I’m relatively ordinary, so I have a hard time believing that posthumans will think it worth simulating my inner life.
    That, plus the fact that I have an inner life, and have seen enough information on each of you relatively interesting people to be convinced that you’re as real as I am, leads me to conclude

    I’m not a Sim, and you’re not either

    ..which is more than any of you can say:-)

  • Alexey Turchin

    If we continue this logic, we will get:

    1) If we have a matryoshka simulation with several layers, we should find ourselves on the bottom level, because the bottom layer is the most abundant and cheapest. If it is a historical multilayer simulation, i.e. 2300 simulates 2200, which simulates 2100, and so on, we should find ourselves at the earliest date before any simulation is possible – that is, at the beginning of the 21st century – and that is where we are.
    2) Because the simulation is cheap, it must have many errors and artefacts. See more in my “UFOs as global risk”.
    3) One-person simulations should be more frequent than all-people simulations.
    4) Some simulations have a high level of control from their host, and some a low one. Because we live in a cheap simulation, the level is probably low.

    All of this could be untrue if we incorrectly understand the nature of the universe. If it has a computational nature centered around the observer (something like Boltzmann brains), then there is no difference between sim and reality.

  • I get the original simulation argument but I don’t have my head around this variant quite yet. Could you reconcile it with Aumann’s agreement theorem? (Or is that a wrong question?)

  • Ashwin Dunga

    I have very detailed resolves around any of this philosophical technological nonsense. Beginning with this information serves no purpose whatsoever to anyone at this present time, and actually only opens a Pandora’s box of angst, anxiety, and unsolved questions to the layperson, who has so falsely been given otherwise demonstrative evidence that this is our reality.
    First we examine the man’s own religious orientation. Agnostic:
    He believes in God, however to him all organized religions are false, and on scientific, evidence, and examination-based grounds God has seemingly been eradicated.
    Therefore we have a huge disjuncture, God giving no knowledge of himself, but only to be discovered to exist through a real need, basically a primal drive then, a so called deist view we have here, man to discover God, in which he slyly claims he has discovered its existence, in effect making up his own religion. Why isn’t this man being praised for the final evidence of supernatural beings needed to cement their existence. Because he hasn’t his evidence is weak, and only adds mindless clutter. That in effect wipes out and destroys all arguments on God, destroys any atheism, brings more and more baggage to the debate, only to fuel thinking minds for the time until this is finally proved wrong or exhausted and they can go think of some more elaborate thing to waste their time on.
    Next this man creates an argument that he and others apparently will not let die and will attempt to poison all aspects of our lives with its implications.
    I also have no basic reason to believe any such simulation could be devised it is even far more likely, far more actually then that more simple ones like you are only being simulated. However he or anyone else would be outright disgusted why live then. You alive for a scientists pleasure, but don’t worry he has gone out of his way to show you this isn’t the case?
    So he elects to disband all those and now our whole society or whole universe is simulated.
    Therefore he can say God exists and at the same time rest at night finally knowing that all religions are false simply because they were programmed in by the real God, some alien race?
    I demand then you and all others worship your computer programmer maybe he’ll help you right or not? Or maybe worship his or that universe’s or finally to God himself.
    Furthermore there is no real evidence to suggest why any sentient being would do such a thing. There are far more reasons to suggest he wouldn’t as he would assuredly develop more efficient ways at gaining the outcomes of this grossly elaborate simulation or that such simulations are simply not needed, unwanted, impossible, or not allowed.
    And then all races of intelligent conscious beings would be similarly at fault to these same dilemmas not knowing if a simulation will shut them down or if they’d be turned off or any other nonsense.
    There is no evidence to believe computer simulation could even manifest to create anything like what we have today.
    I really am sick and tired of baseless reasoning to support this, such as “The future, the future, anything is possible.””we’ll never know, the programmer will never let us know.” Which is really far more true then suggesting glitches would arise that would tell us, that would throw off the whole experiment in terms of the need to be no glitches, then if there are glitches, how could we tell? and how do we know how they came to be, then obviously it was time to know them right?
    Furthermore the real possibility for us to be shut down would have absolutely nothing to do will ancestor simulations although that could be one reason. The more likely one which is suggested is there would be insufficient computing power in this device.
    Then on these lines we should stop at now any more computations then the limit we have reached or we may trigger an overload and shut our simulation down. Sorry but then we’d have to stop living you wouldn’t get to promote your agenda, get your happy feelings, and fuel the ever-growing onslaught of these stupid film and book adaptations on this old age idea, adjusted now to be more believable when in fact its probably less so.
    From time beginning people have thought this same thing, but now you have common science, knowledge, and technology to finally ram it down our throats that this is real.
    If you’re so damned convinced you’re in a simulation, it’s time to either sit down and evaluate your entire life and start planning your future in accordance with this belief, or get back into reality!
    Looking into this I’ve come across many people who when seriously consider this prospect are very disturbed, and a man of common decency should do no such thing. But a man living in a computer might!
    Nevertheless this is the intellectual self gratification of not falsifiable claims that waste everyone’s time. I could sit here and truly make anything seem believable. I would finally like to see a serious and real rebuttal to this and no more whimsical thinking by a hoard of virtual reality obsessed people or opened ended rebuttals that only allow him to patch up his holes and give the illusion of a strong argument and genuine criticism.
    Leave this utter nonsense anyone reading this, he has an agenda, his agenda is supported, he feels entitled by this so stop, you’ll just feed his ego and agenda. Noteworthy though his ideas were published in the NY TIMES, brings many interesting questions to the table now doesn’t it, why this idea, why now when this very same idea could have been formulated since antiquity, and where are the serious rebuttals. Lack of rebuttals and scholarly investigation usually mean its not important to the academic world, has already been investigated in some less sci-fi fashion, or not important really at all or clearly false, wrong, and pointless, just presented to confuse you, bother you, to make you accept any new age or order suggestions they can give you, and on, and to feed the entertainment industry that was lagging after the Matrix on these ideas.
    Don’t worry though he’s also slowly colluding with others to create a post-human race and usher in the technological singularity, so he obviously has insecurities as a human and wants to be enhanced probably in all ways imaginable, and is at the same time null and voiding his idea, again we need to compute less not more or we may freeze, shut down, or who knows what.
    Again I don’t know his reasons behind this maybe he simply wants all attention to be on this, to be debated in the public eye, to usher in the NEW AGE or what. But go back to the religions you came from – Christianity I would recommend because this seems to be just one more attack on it coming from a non Christian, or maybe you’re other ones, or maybe technology worship like this guy 🙂

  • Doug S.

    You know, I’ve always felt like an NPC or a minor side character. If I’ve been singled out for special attention it must be because I’m a good perspective from which to view “interesting times” – I’m relatively secure and comfortable, I’ve got access to all the best media, and I have lots of people to talk to about them. In other words, I’d have to be what TvTropes calls The Ishmael.

  • Thanks for a really interesting angle on the Simulation Argument.

    In your paper “How to live in a simulation” you describe how your decisions should be influenced by realizing that you might live in a simulation, and by “should” you mean satisfying the usual sort of human preferences, like wanting to live longer and avoid pain. A lot of your reasoning there hinges upon the assumption that we naturally would want the simulation to last longer.

    The problem I see with that reasoning is that the more you come to believe that you are “only” a simulated person, the less important the wish to live longer would be. Realizing you’re a computer program which in significant ways makes you immortal might spur a suicidal state of mind. It’s like Bill Murray in that 90s movie, Groundhog Day. Killing himself or just going to sleep at night will reset and restart his life so that he might improve upon whatever he was trying to achieve.

    If you are a computer program “To be or not to be” becomes “To be instantiated as a running process or to be just lying around on the harddrive”. Lying around on the harddrive is not that bad…

  • lukstafi

    I’m interesting, but the simulator is damn stingy on me.

  • bruno moroni

    I believe the main bias of the simulation argument is its anthropocentric view, but it doesn’t need to be like that.

    What if the simulation takes place on a “computer” in a totally different universe where the continuum exists and is accessible?

    In such a universe it would be possible to simulate infinite finite multiverses like ours, with finite energy, limited maximum speed, and quantized space and time.

    Then the whole argument about saving time and resources to pick what to simulate won’t be very important.

    It would be more important to create agents able to recognize and focus on interesting patterns emerging from the simulated universes.

    I guess a sensor tuned to negentropy density would do a nice job.
