Reversible Simulations 

Physicist Sabine Hossenfelder is irate that non-physicists use the hypothesis that we live in a computer simulation to intrude on the territory of physicists:

The simulation hypothesis, as it’s called, enjoys a certain popularity among people who like to think of themselves as intellectual, believing it speaks for their mental flexibility. Unfortunately it primarily speaks for their lacking knowledge of physics.

Among physicists, the simulation hypothesis is not popular and that’s for a good reason – we know that it is difficult to find consistent explanations for our observations. After all, finding consistent explanations is what we get paid to do.

Proclaiming that “the programmer did it” doesn’t only not explain anything – it teleports us back to the age of mythology. The simulation hypothesis annoys me because it intrudes on the terrain of physicists. It’s a bold claim about the laws of nature that however doesn’t pay any attention to what we know about the laws of nature. If you try to build the universe from classical bits, you won’t get quantum effects, so forget about this – it doesn’t work. …

For the purpose of this present post, the details don’t actually matter all that much. What’s more important is that these difficulties of getting the physics right are rarely even mentioned when it comes to the simulation hypothesis. Instead there’s some fog about how the programmer could prevent simulated brains from ever noticing contradictions, for example contradictions between discretization and special relativity.

But how does the programmer notice a simulated mind is about to notice contradictions and how does he or she manage to quickly fix the problem? If the programmer could predict in advance what the brain will investigate next, it would be pointless to run the simulation to begin with. So how does he or she know what are the consistent data to feed the artificial brain with when it decides to probe a specific hypothesis? Where does the data come from? The programmer could presumably get consistent data from their own environment, but then the brain wouldn’t live in a simulation. (more)

Video games today typically only compute visual and auditory details of scenes that players are currently viewing, and then only to a resolution players are capable of noticing. The physics, chemistry, etc. is also made only as consistent and exact as typical players will notice. And most players don’t notice enough to bother them.

What if it were physicists playing a video game? What if they recorded a long video game period from several points of view, and were then able to go back and spend years scouring their data carefully? Mightn’t they then be able to prove that there were deviations? Of course, if they tried long and hard enough. And all the more so if the game allowed players to construct many complex measuring devices.

But if the physicists were entirely within a simulation, then all the measuring, recording, and computing devices available to those physicists would be under full control of the simulators. If devices gave measurements showing deviations, the output of those devices could just be directly changed. Or recordings of previous measurements could be changed. Or simulators could change the high level output of computer calculations that study measurements. Or they might perhaps more directly change what the physicists see, remember, or think.

In addition, within a few decades computers in our world will typically use reversible computation (as I discuss in my book), wherein the costs of reversing previous computations are low. When simulations are run on reversible computers, it becomes feasible and even cheap to wait until a simulation reveals some problem, then reverse the simulation back to an earlier point, make some changes, and run the simulation forward again to see if the problem is avoided. And repeat until the problem is in fact avoided.

So those running a simulation containing physicists who could detect deviations from some purported physics of the simulated world could actually wait until some simulated physicist claimed to have detected a deviation. Or even wait until an article based on their claim was accepted for peer review. And then back up the simulation and add more physics detail to try to avoid the problem.
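This rewind-and-patch strategy can be pictured as a checkpoint loop. The sketch below is illustrative only: the function names (`step`, `detects_anomaly`, `patch`) and the checkpointing scheme are my own assumptions, not anything specified in the post, and an actual reversible computer could recompute earlier states backward rather than store full copies.

```python
import copy

def run_with_rewind(initial_state, step, detects_anomaly, patch,
                    max_steps, checkpoint_every=100, max_retries=10):
    """Run `step` repeatedly; when an anomaly is detected, rewind to the
    most recent checkpoint, patch the state, and re-run from there."""
    checkpoints = [(0, copy.deepcopy(initial_state))]
    t, state = 0, copy.deepcopy(initial_state)
    retries = 0
    while t < max_steps:
        state = step(state)
        t += 1
        if detects_anomaly(state):
            if retries >= max_retries:
                raise RuntimeError("could not patch the anomaly away")
            retries += 1
            # On a reversible computer this rewind is cheap, since earlier
            # states can be recomputed backward instead of stored in full.
            t, state = checkpoints[-1]
            state = patch(copy.deepcopy(state))
            continue
        if t % checkpoint_every == 0:
            checkpoints.append((t, copy.deepcopy(state)))
    return state
```

Here `patch` stands in for "add more physics detail": the loop retries from the last checkpoint until the anomaly no longer appears, or gives up after `max_retries` attempts.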

Yes, to implement a strategy like this, those running the simulation might have to understand the physics issues as well as the physicists in the simulation do. And they’d have to adjust how much they spend computing their simulation to match the types of tests that the physicists inside run. In the worst case, if the simulated universe seemed to allow for very large incompressible computations, and the simulators couldn’t find a way to fudge that by changing high-level outputs, they might have to find an excuse to kill off the physicists, to directly change their thoughts, or to end the simulation.

But overall it seems to me that those running a simulation containing physicists have many good options short of ending the simulation. Sabine Hossenfelder goes on to say:

It’s not that I believe it’s impossible to simulate a conscious mind with human-built ‘artificial’ networks – I don’t see why this should not be possible. I think, however, it is much harder than many future-optimists would like us to believe. Whatever the artificial brains will be made of, they won’t be any easier to copy and reproduce than human brains. They’ll be one-of-a-kind. They’ll be individuals.

It therefore seems implausible to me that we will soon be outnumbered by artificial intelligences with cognitive skills exceeding ours. More likely, we will see a future in which rich nations can afford raising one or two artificial consciousnesses and then consult them on questions of importance.

Here I just don’t see what Sabine can be thinking. Today we can quickly make many copies of most any item that we can make in factories from concise designs. Yes, quantum states have a “no-cloning theorem”, but even so if we knew of a good quantum state to start a system in, we should be able to create many such systems that start in that same state. And I know of no serious claim that human minds make important use of unclonable quantum states, or that this would prevent creating many such systems fast.

Yes, biological systems today can be hard to copy fast, because they are so crammed with intricate detail. But as with other organs like bones, hearts, ears, eyes, and skin, most of the complexity in biological brain cells probably isn’t used directly for the function that those cells provide the rest of the body, in this case signal processing. So just as emulations of bones, hearts, ears, eyes, and skin can be much simpler than those organs, a brain emulation should be much simpler than a brain.

Maybe Sabine will explain her reasoning here.

  • J Storrs Hall

    You should hear experimental physicists talk about theorists some time. In real life, it can be quite difficult to get an experiment to come out the way it is “supposed” to.
    Furthermore, people ignore contradictions all the time. There are enormous blindnesses and self-deceptions in the human makeup. As Robin has made a career of pointing out…

    • Bee

      Yeah, so? It’s still non-trivial to find a model that explains the data at least as well as we can, and it’s beyond me why people who don’t know a thing about physics believe they can outperform the standard model and general relativity with fantasies.

      • J Storrs Hall

        Rem acu tetigisti. The (mostly) philosophers who are telling the living-in-a-sim story aren’t saying anything falsifiable, and I certainly don’t expect to gain a better ability to predict anything, particularly the outcomes of experiments, thereby.

      • J Storrs Hall

        That said, there is much of interest to physicists, I should think, in studying the process and phenomena of doing computer simulation of physics. The first time I met Robin, if I remember correctly, we were both giving papers about reversible computation at a physics of information conference. To a large extent, the writer of a simulation is doing the same thing a theoretical physicist is — producing a mathematical model that gives results that agree with experiment– but within much greater constraints of information storage and computational power.
        Most really big/serious physical simulations these days are multilevel, with the result that most of the apparent physics isn’t actually being computed. (Example- a mostly-classical molecular simulation which drops down to QM only when it looks like chemistry might happen.)
        The ruminations of Hans Moravec re Hashlife are perhaps of interest here.
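The multilevel dispatch described above (drop down to QM only when it looks like chemistry might happen) might be sketched roughly as follows. The `Region` type, the energy threshold, and the model names are all hypothetical, chosen purely to illustrate the level-of-detail idea:

```python
from dataclasses import dataclass

@dataclass
class Region:
    min_bond_energy: float  # lowest bond energy present in the region (eV)
    observed: bool          # is anyone currently looking at this region?

# Illustrative threshold: below this energy, bonds may break or form,
# i.e. "chemistry might happen" and a coarse model would give itself away.
QM_THRESHOLD_EV = 1.0

def choose_model(region: Region) -> str:
    """Pick the cheapest physics model whose error should go unnoticed."""
    if region.min_bond_energy < QM_THRESHOLD_EV:
        return "quantum"       # full QM treatment: expensive but rare
    if region.observed:
        return "classical_md"  # classical molecular dynamics where watched
    return "continuum"         # cheap bulk approximation everywhere else
```

Most regions, most of the time, fall through to the cheap branch, which is exactly why most of the apparent physics isn’t actually being computed.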

      • No one is talking about outperforming the standard model. Even physicists in a simulated world have to make a standard model, and that is hard work.

  • Oleg Eterevsky

    To me, the question of whether we live in a simulation is orthogonal to physics. The question “what are the rules of our world” is independent of “are those rules simulated on a computer in some bigger world, or are they basic”.

    One thing I really don’t buy is that this simulation operates on entities other than fundamental physical stuff. It is not feasible to detect all the “measuring, recording, and computing devices” in the world you are simulating and tweak their output. Even detecting such devices is a much more difficult problem than the simulation itself. And tweaking their output without breaking the rest of the simulation is more difficult still. The hypothesis that the world is simulated exactly according to the rules of physics is much simpler, and thus more likely to be true.

    • davidmanheim

      >It is not feasible to detect all the “measuring, recording, and computing devices” in the world that you are simulating and tweak their output.

      Feasible given what? If we stipulate a technology significantly superior to our own, it seems strange to then assume that our intuitions about feasibility still apply.

      Are you making an argument based on the relative computational complexity of the two? If so, I’m unsure why using simplifications of physics to ignore many features in most cases, and then tweaking high-level features in human brains to compensate is necessarily more complex than running a full simulation at a low level.

      • Oleg Eterevsky

        > why using simplifications of physics to ignore many features in most cases, and then tweaking high-level features in human brains to compensate is necessarily more complex

        There are several reasons, and the main one is that the prior probability of such a theory is way, way lower (equivalently, its Kolmogorov complexity is way higher) than the probability that the simulation just uses the simple laws of physics.

        Aside from that, let’s consider for a second when this transition between the “coarse” simulation and the “faked” fine details should happen. Let’s imagine, for instance, that atomic theory is properly simulated, but quantum mechanics is “faked”. How would the programmers of the simulation achieve that?

        First of all, they have to fake the evidence whenever a human physicist is performing an experiment. But that’s not all. What if humans invent a piece of technology that uses some quantum effects, like polarized glass? Do they turn quantum effects on every time someone is looking through a piece of polarized glass, and then turn them off?

        What if some event happened a million years ago in a galaxy far, far away, its light only now reaches humans, and they can use it as evidence for or against quantum theory? Should the simulation program guess from the beginning that it can be used as such, or does it rewind this million years to fix the inconsistency?

        In my opinion, the simplest and the only good answer is that if this world is a simulation, then this simulation uses the rules of physics. It might seem hard to imagine because of the scale of the required computational resources, but let’s not forget that we do not have any evidence whatsoever about the meta-world in which the simulation computer exists. For all we know, it may be a quantum computer with 10^10^10 qubits in a 10^10-dimensional space.

      • davidmanheim

        I think you’re penalizing the priors wrongly. The Kolmogorov complexity of the theory that a civilization builds a simulation one way versus the other is not necessarily anything like the Kolmogorov complexity of the simulation itself – but if you want a formal proof, you’ll need to provide a computable complexity measure ;).

        Also, I’m unclear about your later argument, so I can’t respond. Are you arguing that the transition between high and low level simulation is intractable formally (noncomputable), or only that it is computationally more work than just simulating at a low-level?

      • Oleg Eterevsky

        > Kolmogoroff complexity of the theory that a civilization builds a simulation one way versus the other

        I think this complexity is aligned with the complexity of the simulation. The only requirement for a “dumb but faithful” simulation is a big enough amount of computational resources (which does not count against the complexity of the world’s description). Basically, if we had a big enough computer at our disposal, we could run it at our current stage of development. At the same time, I do not think we are even close to creating a fake simulation that would fool us, even given the same amount of resources.

        > Are you arguing that the transition between high and low level simulation is intractable formally (noncomputable), or only that it is computationally more work than just simulating at a low-level?

        It depends. If you allow unlimited backtracking, then it probably is computable, but difficult, and probably will require at least the same amount of resources as a faithful simulation.

        If you expect to detect ahead of time whether some event has to be simulated with a higher level of detail, because it will have a causal link with some human observations, then I am pretty sure this task will not be computable. It’s easy enough to imagine a model that proves this, using the halting problem.

      • Joe

        The key insight in Robin’s ‘How To Live In A Simulation’ is that there’s no guarantee that a simulation would in fact span all of the time and space that we think exists – it would be much easier to just simulate, say, a small number of people in part of a building, for a few minutes or hours. (And note this is how we build simulations today.)

        Sure, you have memories stretching far beyond this, but since you can’t access the situations that created those memories now, how do you know they’re memories of real events, rather than having just been designed into your (simulated) mind?

      • Oleg Eterevsky

        It’s not quite so simple. Consider bitcoin mining. By examining the state of the Bitcoin network, you can get a proof that millions of hours of computer time were spent mining.

        This is true for any problem in NP: you can easily verify the results of a computation, but generating them requires a lot of resources. You, in your simulated room, can prove (on the assumption that P ≠ NP) that a lot of computational resources were used to obtain the results you are seeing.

        This can be loosely generalized to many things. For instance, art. In a few minutes you can google a bunch of art objects that would require a lot of work to create. Someone is creating them.
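The asymmetry invoked here, results that are cheap to verify but expensive to generate, can be illustrated with a toy proof-of-work in the style of Bitcoin mining. This is a minimal sketch, not the actual Bitcoin protocol:

```python
import hashlib

def mine(data: str, difficulty_bits: int) -> int:
    """Search for a nonce whose SHA-256 hash with `data` has
    `difficulty_bits` leading zero bits: costs ~2^difficulty_bits
    hash evaluations on average."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(data: str, nonce: int, difficulty_bits: int) -> bool:
    """Check a claimed nonce: one hash, cheap at any difficulty."""
    digest = hashlib.sha256(f"{data}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

Verification costs one hash no matter the difficulty, while mining costs about 2^difficulty_bits hashes on average, so a verified nonce is evidence that real computational work was done somewhere.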

      • davidmanheim

        Unless it’s easier to fake NP-hard tasks than to compute them. It’s plausible that it’s less computationally intensive to identify where NP-complete questions are being asked or noticed, and to compute only those problems, rather than generally relying on intense computation. Then most areas where complex phenomena would be occurring are simulated by, say, rough linear approximations.

      • small sim customer service

        > the simplest, and the only good answer is that if this world is a simulation, then this simulation uses the rules of physics.

        The universe seems big. Bigger simulations are more complex and require more resources. An alternative type of simulation that fits all the evidence you have at hand exactly as well as the big-universe-simulation hypothesis is that you’re in a much, much smaller simulation of one individual subject (you). All the rest (including me) are bots and plots and impressions, including all you’ve heard about physics, rendered on the fly only for you. Enjoy!

      • Oleg Eterevsky

        So, you are saying that the resources of the simulation are limited, right? What is the limit, then? Does it mean that there’s a limit to the amount of computation whose results I can observe? That I can’t have a computer bigger than some maximum size? What about quantum computers? Do you expect that once we build a quantum computer with more than a few qubits, it will break because it will be too expensive to simulate?

        If you answer “yes” to any of the questions above, then let me ask you: why hasn’t the simulation broken already, as humanity has adopted finer and finer technology over the last century?

      • small sim customer service

        > why hasn’t the simulation broken already, after humanity started using more and more fine technology over the last century?

        A simulation that will break when simulating such fine tech (or merely render the appearance of it) cannot generate a subject that asks your question.

      • Oleg Eterevsky

        Does that mean your answer to the other questions is “yes”? You believe it likely that physics theories that were confirmed by a lot of observations will at some point break, because they become too expensive to simulate?

        And if we believe for a second that the simulation has very limited resources, then why does it have quantum effects, which are notoriously costly to simulate? Why does it simulate a huge universe with billions of galaxies, not visible to the naked eye, but still there? Why simulate things with at least femtosecond resolution, when humans can’t perceive time below 10 ms?

    • Peter David Jones

      “To me the question whether we live in simulation is orthogonal to physics.” It isn’t at all irrelevant to physicalism, though.

  • Bee


    Your “argument” lives on “could” and “might”. My point wasn’t to say it’s impossible, I’m saying as long as you can’t demonstrate that “could” is “can” and “might” is “does” it’s not science, it’s merely fiction, and I’m annoyed if people pretend otherwise.

    Regarding reversing the system to an earlier state: as you note, this requires changing the initial condition. Where do you get the initial data from, and how do you hide the change?

    Regarding the issue of making copies of advanced AI: This is merely speculation on my part, it isn’t relevant to the argument. It’s just something I’ve been wondering about.

    • As soon as you say that some hypotheses about what is going on, hypotheses that evidence can speak to, are not “science”, you give people an excuse to look elsewhere to find out what is going on. I’d rather see “science” as a commitment to considering whatever hypotheses and evidence may be relevant.

      When reversing to avoid a problem, you’d just change the state at that point in time, not the “initial” state of the system. The result of successfully avoiding the physicist publishing their demonstration of a deviation is exactly to “hide” the change.

      • Robert Koslover

        This process has been described previously in the prestigious technical journal (oops, I meant TV series) named “Charmed.” Just replace “mortals” by “physicists” and “magic” or “magical world” by “simulation.” From a fan site describing the show:
        The Cleaners are a race of magical neutral beings that were empowered by the Tribunal with the eternal task of protecting magic from exposure. Existing beyond time and space, the sole purpose of their existence is to ensure that mortals never became aware of the existence of the magical world, whatever the cost.

  • Tim Tyler

    Re: “If you try to build the universe from classical bits, you won’t get quantum effects, so forget about this – it doesn’t work.”

    That’s a simple technical mistake. Classical computers can simulate quantum effects. They just do it slowly.

  • Robert Koslover

    What testable, measurable predictions are made by the “theory” that we live in a simulation? If none, then there is no more reality to this “theory” than there is to believing in Santa Claus. Occam’s razor makes clear that the burden of proof here rests entirely upon those who claim we live in a simulation, not upon those who reject this notion. Make your testable predictions and then let the experimentalists make the measurements. Put up or shut up (and I mean that in the nicest way).

    • The creation of future simulations is a *consequence* of our other standard theories. So all the predictions of those standard theories are relevant.

      • Robert Koslover

        So… does this mean you agree with me that we humans are not living in a simulation (or at least that we have zero evidence of it)? And… that you are simply interested in the details of if/how we humans might someday, possibly, create such simulations ourselves — and if so, whether and how we could successfully trick the simulated inhabitants of that simulation into believing that they are not in a simulation? Is that really what all this discussion is about?

      • I do NOT agree with the zero-evidence claim. But calculating the likelihood ratios is tricky, hence this discussion.

  • Wei Dai

    One plausible way that AIs may be difficult to copy is if AIs are implemented most efficiently using analog instead of digital computing, since we can’t make analog devices that are exactly identical. In a memristor-based artificial neural network, for example, each artificial neuron behaves somewhat differently, so if you train such an ANN, the resulting synapse weights would be optimized for that specific physical instance with its particular pattern of variations in the artificial neurons.

    • Even for analogue devices, there is usually some resolution where if you copy at that resolution you get basically the same functionality.

      • Wei Dai

        Imagine you’ve got a fabrication process that lets you lay down billions of artificial neurons on a chip at a competitive cost, but with random variations from neuron to neuron. Now someone asks you to make a copy of such a chip at a high enough resolution that the pattern of variations is preserved. This seems like a much harder problem that would require more advanced technology, which you may not have or can achieve only at a much higher cost. I’m not claiming this is how things will surely turn out, but it seems like a plausible scenario to me. Sabine appears to be making a stronger claim than I am, but maybe this is the kind of thing that she’s thinking of?

      • But you need an error rate so high that the resulting creature is economically useless compared to the original. That’s a very high error rate.

      • Wei Dai

        Well, the error rate depends on how much uniformity the manufacturing process can achieve, and how sensitive the AI architecture is to device variation (for example it might use a NN with many layers and the errors tend to accumulate). I don’t know enough to rule out even a “very high” error rate in a copy, so it seems plausible to me that could be the case.

        Another seemingly plausible scenario is that it’s more economical to attach additional analog chips to the original AI to improve its capabilities, rather than using them to make degraded copies of it.

      • Peter David Jones

        Isn’t that contradicted by sensitive dependence on initial conditions?

  • J Storrs Hall

    All models are wrong, but some are useful … This might be of interest:

  • Joe

    If we’re going to consider the possibility of living in a simulation that does not just faithfully simulate physics at a low level, using simple rules and vast amounts of computing power, but instead contains many high-level components modeled with specific functionality at a scale we can actually see, then I think this has further implications on how to behave.

    Notice we haven’t ever seen any visual glitch in the simulation of reality. This suggests that preventing the inhabitants of the simulation from knowing it’s a simulation is quite important to those running it. In this case, if you want to stay ‘alive’ as long as possible, one good general suggestion could be – try not to break anything! Don’t perform physics experiments. Don’t try to build machines from scratch, from first principles. Prefer using ‘heavily-designed’ devices, black boxes with just a few simple inputs and outputs, that can work without you having to know much about the underlying implementation. Perhaps even stay away from materials that we currently find hard to simulate in video games – so prefer rigid, solid objects, and avoid liquids, gases, powders, cloth.

    One thing you can use without fear in a simulation of this kind – computers! 🙂

  • UWIR

    “Today we can quickly make many copies of most any item that we can make in factories from concise designs.”

    This seems rather like begging the question. Will artificial brains be made in factories from concise designs?

    “if we knew of a good quantum state to start a system in, we should be able to create many such systems that start in that same state.”

    But would you know? Given a biological brain, we wouldn’t know the initial quantum state, so why would we necessarily know the initial quantum state of an artificial brain?

  • Ari T

    I’d be more interested in the argument over whether our universe is computable. I can’t follow the physics arguments about the Church-Turing hypothesis, lattices, AdS/CFT, the Yang-Mills gap, Hilbert space, PSPACE. It seems like there are a million threads and different debates going on.

    I wonder if you physicists could write something like “here’s where we stand and what we know regarding the computability of our universe, given our current knowledge”; by physicists like Scott Aaronson, Leonard Susskind, Sabine, Ed Witten, etc.

    It doesn’t help that it’s full of amateurs talking about things they don’t understand, or other nonsense (ad hominems etc.). Not that amateurs can’t point things out or ask smart questions; it’s just that many are adding noise.

    Adding my own thought: I feel like Sabine is coming from a European science-history tradition where the idea of a simulation is just “crazy”, without giving it much second thought. I know because I am from Europe, and I can guess what most of my physicist friends think about this. I feel this is on some level intellectual arrogance.