37 Comments
davidmanheim

Unless it's easier to fake NP-hard tasks than to simulate them. It's plausible that it's less computationally intensive to identify where NP-complete questions are being asked or noticed, and to compute only those problems, rather than relying on intense computation everywhere. Most areas where complex phenomena would be occurring could then be simulated by, say, rough linear approximations.
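
A rough sketch of that demand-driven idea (a toy illustration of my own; every name and number here is assumed):

```python
# Toy sketch of demand-driven simulation: pay for the expensive computation
# only where someone is actually looking, and use a cheap approximation
# everywhere else. All names and numbers are illustrative assumptions.
from functools import lru_cache

def rough_linear_approximation(x: float) -> float:
    return 2.0 * x  # cheap stand-in used for unobserved regions

@lru_cache(maxsize=None)  # never pay for the same observation twice
def expensive_exact_answer(x: float) -> float:
    # Stand-in for an intense computation, e.g. an NP-complete search.
    return 2.0 * x + 0.001 * x ** 3

def simulate(x: float, under_scrutiny: bool) -> float:
    return expensive_exact_answer(x) if under_scrutiny else rough_linear_approximation(x)
```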

Overcoming Bias Commenter

I'd be more interested in the argument about whether our universe is computable. I can't follow the physics arguments about the Church-Turing hypothesis, lattices, AdS/CFT, the Yang-Mills gap, Hilbert space, PSPACE. It seems like there's a million threads and different debates going on.

I wonder if physicists could write something like: "here's where we stand, and what we know, regarding the computability of our universe given current knowledge; by physicists Scott Aaronson, Leonard Susskind, Sabine, Ed Witten, etc."

It doesn't help that the discussion is full of amateurs talking about things they don't understand, or other nonsense (ad hominems, etc.). Not that amateurs can't point things out or ask smart questions; it's just that many are adding noise.

Adding my own thought: I feel like Sabine is coming from a European science tradition where the idea of simulation is just "crazy", without giving it much of a second thought. I know because I am from Europe and can guess what most of my physicist friends think about this. I feel this is, on some level, intellectual arrogance.

Peter David Jones

"To me the question whether we live in simulation is orthogonal to physics.". It isn't at all irrelevant to physicalISM.

Peter David Jones

Isn't that contradicted by sensitive dependence on initial conditions?

UWIR

"Today we can quickly make many copies of most any item that we can make in factories from concise designs."

This seems rather like begging the question. Will artificial brains be made in factories from concise designs?

"if we knew of a good quantum state to start a system in, we should be able to create many such systems that start in that same state."

But would you know? Given a biological brain, we wouldn't know the initial quantum state, so why would we necessarily know the initial quantum state of an artificial brain?

Oleg Eterevsky

It's not quite so simple. Consider Bitcoin mining. By examining the state of the Bitcoin network, you can get proof that millions of hours of computer time were spent mining.

This is true for any problem in NP: you can easily verify the results of a computation, but generating them can require a lot of resources. From your simulated room you can prove (assuming P ≠ NP) that a lot of computational resources were used to obtain the results you are seeing.
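
A toy proof-of-work check makes the asymmetry concrete (a minimal sketch with made-up parameters; real Bitcoin double-hashes block headers at a vastly higher difficulty):

```python
# Finding a nonce whose SHA-256 hash starts with DIFFICULTY zero bits takes
# about 2^DIFFICULTY attempts; verifying a claimed nonce takes one hash.
import hashlib

DIFFICULTY = 20  # illustrative; real mining difficulty is far higher

def leading_bits(data: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big")
    return digest >> (256 - DIFFICULTY)

def mine(message: bytes) -> int:
    """Expensive: brute-force search (~2^DIFFICULTY hashes on average)."""
    nonce = 0
    while leading_bits(message + nonce.to_bytes(8, "big")) != 0:
        nonce += 1
    return nonce

def verify(message: bytes, nonce: int) -> bool:
    """Cheap: a single hash proves the work was done."""
    return leading_bits(message + nonce.to_bytes(8, "big")) == 0

nonce = mine(b"block header")          # slow: roughly a million hashes
assert verify(b"block header", nonce)  # fast: one hash
```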

This can be loosely generalized to many things. For instance, art: in a few minutes you can google a bunch of art objects that would require a lot of work to create. Someone is creating them.

Oleg Eterevsky

Does that mean your answer to the other questions is "yes"? You believe it likely that physics theories that were confirmed by many observations will at some point break, because they will be too expensive to simulate?

And if we believe for a second that the simulation has very limited resources, then why does it have quantum effects, which are notoriously costly to simulate? Why does it simulate a huge universe with billions of galaxies that are not visible to the naked eye, but are still there? Why simulate things with at least femtosecond resolution, when humans can't perceive times below 10 ms?
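
For a sense of the cost: a faithful state-vector simulation of n entangled qubits stores 2^n complex amplitudes, so memory grows exponentially (a back-of-the-envelope sketch; 16 bytes per amplitude is my assumption):

```python
# Memory needed for a full state vector of n qubits, at 16 bytes per
# complex amplitude (two 64-bit floats).
for n in (10, 30, 50, 300):
    print(f"{n} qubits: {2 ** n * 16:.3e} bytes")
# 50 qubits already need tens of petabytes; 300 qubits would need more
# amplitudes than there are atoms in the observable universe.
```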

Joe

If we're going to consider the possibility of living in a simulation that does not just faithfully simulate physics at a low level, using simple rules and vast amounts of computing power, but instead contains many high-level components modeled with specific functionality at a scale we can actually see, then I think this has further implications for how to behave.

Notice that we haven't ever seen any visual glitch in the simulation of reality. This suggests that preventing the inhabitants from knowing it's a simulation is quite important to those running it. In that case, if you want to stay 'alive' as long as possible, one good general suggestion could be: try not to break anything! Don't perform physics or chemistry experiments. Don't try to build machines from scratch, from first principles. Prefer using 'heavily-designed' devices, black boxes with just a few simple inputs and outputs, that can work without you having to know much about the underlying implementation. Perhaps even stay away from materials that we currently find hard to simulate in video games: prefer rigid solid objects, and avoid liquids, (visible) gases, powders, and cloth.

One thing you can use without fear in a simulation of this kind - computers! :)

Overcoming Bias Commenter

> why hasn't the simulation broken already, after humanity started using more and more fine technology over the last century?

A simulation that would break when simulating such fine tech (or that merely renders its appearance) cannot generate a subject who asks your question.

Joe

The key insight in Robin's 'How To Live In A Simulation' is that there's no guarantee that a simulation would in fact span all of the time and space that we think exists - it would be much easier to just simulate, say, a small number of people in part of a building, for a few minutes or hours. (And note this is how we build simulations today.)

Sure, you have memories stretching far beyond this, but since you can't access the situations that created those memories now, how do you know they're memories of real events, rather than having just been designed into your (simulated) mind?

Wei Dai

Well, the error rate depends on how much uniformity the manufacturing process can achieve, and on how sensitive the AI architecture is to device variation (for example, it might use a neural network with many layers, where errors tend to accumulate). I don't know enough to rule out even a "very high" error rate in a copy, so it seems plausible to me that this could be the case.
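
As a toy illustration of how per-layer variation could compound (my own sketch with made-up numbers, not a model of any real hardware):

```python
# Relative output error of a deep random linear network whose weights each
# carry ~1% random copying error. Depth and sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
layers, width, variation = 50, 100, 0.01

x = rng.normal(size=width)
exact = noisy = x
for _ in range(layers):
    W = rng.normal(size=(width, width)) / np.sqrt(width)  # keeps norms stable
    exact = W @ exact
    noisy = (W * (1 + variation * rng.normal(size=W.shape))) @ noisy

print(np.linalg.norm(noisy - exact) / np.linalg.norm(exact))
# The relative error grows with depth, ending well above the 1% per-layer
# variation that produced it.
```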

Another seemingly plausible scenario is that it's more economical to attach additional analog chips to the original AI to improve its capabilities, rather than using them to make degraded copies of it.

J Storrs Hall

All models are wrong, but some are useful ... This might be of interest: http://www.scottaaronson.co...

RobinHanson

But you need an error rate so high that the resulting creature is economically useless compared to the first. That's a very high error rate.

Oleg Eterevsky

> Kolmogoroff complexity of the theory that a civilization builds a simulation one way versus the other

I think this complexity is aligned with the complexity of the simulation. The only requirement for a "dumb but faithful" simulation is a big enough amount of computational resources (which does not penalise the descriptive complexity of the world). Basically, if we had a big enough computer at our disposal, we could run such a simulation even at our current stage of development. At the same time, I do not think we are even close to creating a fake simulation that would fool us, even given the same amount of resources.

> Are you arguing that the transition between high and low level simulation is intractable formally (noncomputable), or only that it is computationally more work than just simulating at a low-level?

It depends. If you allow unlimited backtracking, then it probably is computable, but difficult, and it will probably require at least the same amount of resources as a faithful simulation.

If you expect to detect ahead of time whether some event has to be simulated at a higher level of detail, because it will have a causal link with some human observation, then I am pretty sure this task is not computable. It's easy enough to imagine a model that proves this, using the halting problem.
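
A sketch of the argument I have in mind (schematic, not a formal proof; both function names below are hypothetical):

```python
# Suppose `needs_detail(program)` could decide, ahead of time, whether
# running `program` would ever causally affect a human observation.
# Then it would also decide the halting problem, which is impossible.

def needs_detail(program) -> bool:
    raise NotImplementedError("no such decider can exist")

def observable_side_effect():
    pass  # stand-in for any event a simulated human would notice

def halts(program) -> bool:
    def wrapped():
        program()                 # may or may not terminate
        observable_side_effect()  # reached if and only if `program` halts
    # `wrapped` affects an observation iff `program` halts, so:
    return needs_detail(wrapped)
```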

Oleg Eterevsky

So you are saying that the resources of the simulation are limited, right? What is the limit, then? Does it mean that there's a limit on the amount of computation whose results I can observe? That I can't have a computer bigger than some maximum size? What about quantum computers? Do you expect that once we build a quantum computer with more than a few qubits, it will break, because it will be too expensive to simulate?

If you answer "yes" to any of the questions above, then let me ask you: why hasn't the simulation broken already, after humanity started using more and more fine technology over the last century?

Wei Dai

Imagine you've got a fabrication process that lets you lay down billions of artificial neurons on a chip at a competitive cost, but with random variations from neuron to neuron. Now someone asks you to make a copy of such a chip at a high enough resolution that the pattern of variations is preserved. This seems like a much harder problem, one that would require more advanced technology, which you may not have, or may achieve only at a much higher cost. I'm not claiming this is how things will surely turn out, but it seems like a plausible scenario to me. Sabine appears to be making a stronger claim than I am, but maybe this is the kind of thing she's thinking of?
