The Born Probabilities

This post is part of the Quantum Physics Sequence.
Previously in series: Decoherence is Pointless
Followup to: Where Experience Confuses Physicists

One serious mystery of decoherence is where the Born probabilities come from, or even what they are probabilities of.  What does the integral over the squared modulus of the amplitude density have to do with anything?

This was discussed by analogy in "Where Experience Confuses Physicists", and I won’t repeat arguments already covered there.  I will, however, try to convey exactly what the puzzle is, in the real framework of quantum mechanics.

A professor teaching undergraduates might say:  "The probability of finding a particle in a particular position is given by the squared modulus of the amplitude at that position."

This is oversimplified in several ways.

First, for continuous variables like position, amplitude is a density, not a point mass.  You integrate over it.  The integral over a single point is zero.

(Historical note:  If "observing a particle’s position" invoked a mysterious event that squeezed the amplitude distribution down to a delta point, or flattened it in one subspace, this would give us a different future amplitude distribution from what decoherence would predict.  All interpretations of QM that involve quantum systems jumping into a point/flat state, which are both testable and have been tested, have been falsified.  The universe does not have a "classical mode" to jump into; it’s all amplitudes, all the time.)

Second, a single observed particle doesn’t have an amplitude distribution.  Rather the system containing yourself, plus the particle, plus the rest of the universe, may approximately factor into the multiplicative product of (1) a sub-distribution over the particle position and (2) a sub-distribution over the rest of the universe.  Or rather, the particular blob of amplitude that you happen to be in, can factor that way.

So what could it mean, to associate a "subjective probability" with a component of one factor of a combined amplitude distribution that happens to factorize?

Recall the physics for:

(Human-BLANK * Sensor-BLANK) * (Atom-LEFT + Atom-RIGHT)
        =>
(Human-LEFT * Sensor-LEFT * Atom-LEFT) + (Human-RIGHT * Sensor-RIGHT * Atom-RIGHT)

Think of the whole process as reflecting the good-old-fashioned distributive rule of algebra.  The initial state can be decomposed – note that this is an identity, not an evolution – into:

(Human-BLANK * Sensor-BLANK) * (Atom-LEFT + Atom-RIGHT)
    =
(Human-BLANK * Sensor-BLANK * Atom-LEFT) + (Human-BLANK * Sensor-BLANK * Atom-RIGHT)

We assume that the distribution factorizes.  It follows that the term on the left, and the term on the right, initially differ only by a multiplicative factor of Atom-LEFT vs. Atom-RIGHT.

If you were to immediately take the multi-dimensional integral over the squared modulus of the amplitude density of that whole system,

Then the ratio of the all-dimensional integral of the squared modulus over the left-side term, to the all-dimensional integral over the squared modulus of the right-side term,

Would equal the ratio of the lower-dimensional integral over the squared modulus of the Atom-LEFT, to the lower-dimensional integral over the squared modulus of Atom-RIGHT,

For essentially the same reason that if you’ve got (2 * 3) * (5 + 7), the ratio of (2 * 3 * 5) to (2 * 3 * 7) is the same as the ratio of 5 to 7.

Doing an integral over the squared modulus of a complex amplitude distribution in N dimensions doesn’t change that.
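None of this requires anything beyond the distributive rule. As a sanity check, here is a toy discretization in Python; the particular complex numbers are my own invention, standing in for amplitude densities, with a tensor product standing in for the factorized joint distribution and a plain sum standing in for the multi-dimensional integral:

```python
# Toy sketch (my own numbers, not from the physics): a discretized amplitude
# "distribution" is just a list of complex numbers, and the joint state is
# the tensor (Kronecker) product of the factors.

def sq_modulus(v):
    """The 'integral' (here: sum) of the squared modulus over all dimensions."""
    return sum(abs(a) ** 2 for a in v)

def kron(u, v):
    """Tensor product of two amplitude vectors."""
    return [x * y for x in u for y in v]

# (Human-BLANK * Sensor-BLANK): some arbitrary amplitudes for "the rest".
rest = [0.3 + 0.1j, -0.2j, 0.5, 0.1 - 0.4j]

# Atom-LEFT and Atom-RIGHT, two sub-blobs of the atom's factor.
atom_left  = [0.6 + 0.2j, 0.1]
atom_right = [0.0, 0.3 - 0.3j]

# The two terms of the distributed product.
left_term  = kron(rest, atom_left)
right_term = kron(rest, atom_right)

# The ratio of the all-dimensional integrals equals the ratio of the
# lower-dimensional integrals over the atom factor alone, just as
# (2 * 3 * 5) / (2 * 3 * 7) = 5 / 7.
ratio_joint = sq_modulus(left_term) / sq_modulus(right_term)
ratio_atom  = sq_modulus(atom_left) / sq_modulus(atom_right)
assert abs(ratio_joint - ratio_atom) < 1e-12
```

The common factor cancels out of the ratio because the joint integral factorizes: the squared modulus of a product term is the product of the squared moduli of its factors.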

There’s also a rule called "unitary evolution" in quantum mechanics, which says that quantum evolution never changes the total integral over the squared modulus of the amplitude density.

So if you assume that the initial left term and the initial right term evolve, without overlapping each other, into the final LEFT term and the final RIGHT term, they’ll have the same ratio of integrals over etcetera as before.
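The "evolve without overlapping each other" step can likewise be sketched numerically. In this toy (my own construction, not a real Hamiltonian), non-overlapping evolution is modeled as a block-diagonal map: one small unitary acts on the LEFT subspace, an independent one on the RIGHT subspace, and no amplitude flows between them.

```python
import cmath, math

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def sq_modulus(v):
    return sum(abs(a) ** 2 for a in v)

def unitary(theta, phi):
    """A 2x2 unitary: a rotation combined with a phase."""
    c, s = math.cos(theta), math.sin(theta)
    p = cmath.exp(1j * phi)
    return [[c * p, -s * p], [s, c]]

# Independent evolutions for the two non-overlapping blobs.
U_left, U_right = unitary(0.7, 1.3), unitary(2.1, -0.4)

# Initial LEFT and RIGHT blobs, each living in its own subspace.
left_blob  = [0.6 + 0.2j, 0.1 + 0.0j]
right_blob = [0.0 + 0.0j, 0.3 - 0.3j]

before = sq_modulus(left_blob) / sq_modulus(right_blob)

left_after  = matvec(U_left, left_blob)
right_after = matvec(U_right, right_blob)

# Each blob's integrated squared modulus is preserved separately,
# so the ratio of the integrals is preserved too.
after = sq_modulus(left_after) / sq_modulus(right_after)
assert abs(before - after) < 1e-12
```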

What all this says is that,

If some roughly independent Atom has got a blob of amplitude on the left of its factor, and a blob of amplitude on the right,

Then, after the Sensor senses the atom, and you look at the Sensor,

The integrated squared modulus of the whole LEFT blob, and the integrated squared modulus of the whole RIGHT blob,

Will have the same ratio,

As the ratio of the squared moduli of the original Atom-LEFT and Atom-RIGHT components.

This is why it’s important to remember that apparently individual particles have amplitude distributions that are multiplicative factors within the total joint distribution over all the particles.

If a whole gigantic human experimenter made up of quintillions of particles,

Interacts with one teensy little atom whose amplitude factor has a big bulge on the left and a small bulge on the right,

Then the resulting amplitude distribution, in the joint configuration space,

Has a big amplitude blob for "human sees atom on the left", and a small amplitude blob for "human sees atom on the right".

And what that means, is that the Born probabilities seem to be about finding yourself in a particular blob, not the particle being in a particular place.

But what does the integral over squared moduli have to do with anything?  On a straight reading of the data, you would always find yourself in both blobs, every time.  How can you find yourself in one blob with greater probability?  What are the Born probabilities, probabilities of?  Here’s the map – where’s the territory?

I don’t know.  It’s an open problem.  Try not to go funny in the head about it.

This problem is even worse than it looks, because the squared-modulus business is the only non-linear rule in all of quantum mechanics.  Everything else – everything else – obeys the linear rule that the evolution of amplitude distribution A, plus the evolution of the amplitude distribution B, equals the evolution of the amplitude distribution A + B.

When you think about the weather in terms of clouds and flapping butterflies, it may not look linear on that higher level.  But the amplitude distribution for weather (plus the rest of the universe) is linear on the only level that’s fundamentally real.
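To be concrete about what "linear" means here, a minimal sketch with toy numbers of my own choosing, a matrix standing in for one time-step of amplitude evolution:

```python
# The linear rule: evolving A, plus evolving B, equals evolving A + B,
# for any linear evolution map.  (This particular matrix happens to be
# unitary, but linearity alone is what's being checked.)
evolution = [[0.8, 0.6j], [0.6j, 0.8]]

def evolve(v):
    return [sum(evolution[i][j] * v[j] for j in range(len(v)))
            for i in range(len(evolution))]

A = [0.3 + 0.1j, -0.2 + 0.5j]
B = [0.7 - 0.4j, 0.1 + 0.1j]

lhs = [x + y for x, y in zip(evolve(A), evolve(B))]   # evolve, then add
rhs = evolve([a + b for a, b in zip(A, B)])           # add, then evolve
assert all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs))
```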

Does this mean that the squared-modulus business must require additional physics beyond the linear laws we know – that it’s necessarily futile to try to derive it on any higher level of organization?

But even this doesn’t follow.

Let’s say I have a computer program which computes a sequence of positive integers that encode the successive states of a sentient being.  For example, the positive integers might describe a Conway’s-Game-of-Life universe containing sentient beings (Life is Turing-complete) or some other cellular automaton.

Regardless, this sequence of positive integers represents the time series of a discrete universe containing conscious entities.  Call this sequence Sentient(n).

Now consider another computer program, which computes the negative of the first sequence:  -Sentient(n).  If the computer running Sentient(n) instantiates conscious entities, then so too should a program that computes Sentient(n) and then negates the output.

Now I write a computer program that computes the sequence {0, 0, 0…} in the obvious fashion.

This sequence happens to be equal to the sequence Sentient(n) + -Sentient(n).

So does a program that computes {0, 0, 0…} necessarily instantiate as many conscious beings as both Sentient programs put together?
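For readers who want the thought experiment spelled out: any actual Sentient(n) would be an enormously complicated program, so in this sketch a stub of my own invention stands in for it; only the pointwise algebra matters.

```python
def sentient(n):
    """Stand-in for the sequence encoding a sentient universe's states."""
    return (n * n + 3 * n + 7) % 1000003

def neg_sentient(n):
    """Computes Sentient(n), then negates the output."""
    return -sentient(n)

def zeros(n):
    """Computes the sequence {0, 0, 0...} in the obvious fashion."""
    return 0

# The zero sequence equals the pointwise sum of the other two.
assert all(zeros(n) == sentient(n) + neg_sentient(n) for n in range(1000))
```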

Admittedly, this isn’t an exact analogy for "two universes add linearly and cancel out".  For that, you would have to talk about a universe with linear physics, which excludes Conway’s Life.  And then in this linear universe, two states of the world both containing conscious observers – world-states equal but for their opposite sign – would have to cancel out.

It doesn’t work in Conway’s Life, but it works in our own universe!  Two quantum amplitude distributions can contain components that cancel each other out, and this demonstrates that the number of conscious observers in the sum of two distributions, need not equal the sum of conscious observers in each distribution separately.

So it actually is possible that we could pawn off the only non-linear phenomenon in all of quantum physics onto a better understanding of consciousness.  The question "How many conscious observers are contained in an evolving amplitude distribution?" has obvious reasons to be non-linear.

(!)

Robin Hanson has made a suggestion along these lines.

(!!)

Decoherence is a physically continuous process, and the interaction between LEFT and RIGHT blobs may never actually become zero.

So, Robin suggests, any blob of amplitude which gets small enough, becomes dominated by stray flows of amplitude from many larger worlds.

A blob which gets too small, cannot sustain coherent inner interactions – an internally driven chain of cause and effect – because the amplitude flows are dominated from outside.  Too-small worlds fail to support computation and consciousness, or are ground up into chaos, or merge into larger worlds.

Hence Robin’s cheery phrase, "mangled worlds".

The cutoff point will be a function of the squared modulus, because unitary physics preserves the squared modulus under evolution; if a blob has a certain total squared modulus, future evolution will preserve that integrated squared modulus so long as the blob doesn’t split further.  You can think of the squared modulus as the amount of amplitude available to internal flows of causality, as opposed to outside impositions.

The seductive aspect of Robin’s theory is that quantum physics wouldn’t need interpreting.  You wouldn’t have to stand off beside the mathematical structure of the universe, and say, "Okay, now that you’re finished computing all the mere numbers, I’m furthermore telling you that the squared modulus is the ‘degree of existence’."  Instead, when you run any program that computes the mere numbers, the program automatically contains people who experience the same physics we do, with the same probabilities.

A major problem with Robin's theory is that it seems to predict things like, "We should find ourselves in a universe in which lots of decoherence events have already taken place," which tendency does not seem especially apparent.

The main thing that would support Robin’s theory would be if you could show from first principles that mangling does happen; and that the cutoff point is somewhere around the median amplitude density (the point where half the total amplitude density is in worlds above the point, and half beneath it), which is apparently what it takes to reproduce the Born probabilities in any particular experiment.
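To see what that median cutoff would have to accomplish, here is a toy calculation of my own, following the rough shape of the argument rather than Hanson's actual model: N repetitions of a two-outcome experiment with Born probability p, with worlds labeled by how many "up" outcomes they saw. Naive world-counting predicts a typical observed frequency of 1/2; discarding the worlds below the median amplitude moves the typical surviving world's observed frequency to the Born value p.

```python
import math

N, p = 1000, 0.7   # N identical binary experiments, Born probability p

def log_count(k):
    """Log of the number of worlds that saw k 'up' outcomes: C(N, k)."""
    return math.lgamma(N + 1) - math.lgamma(k + 1) - math.lgamma(N - k + 1)

def log_weight(k):
    """Log squared amplitude of one such world: p^k (1-p)^(N-k)."""
    return k * math.log(p) + (N - k) * math.log(1 - p)

def logsumexp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

log_mass = [log_count(k) + log_weight(k) for k in range(N + 1)]
total = logsumexp(log_mass)

# Naive world-counting, no mangling: the typical world sees frequency 1/2,
# not p.  This is the discrepancy the cutoff is supposed to repair.
lse_all = logsumexp([log_count(k) for k in range(N + 1)])
naive = sum(math.exp(log_count(k) - lse_all) * k / N for k in range(N + 1))
assert abs(naive - 0.5) < 0.01

# Median-amplitude cutoff: per-world weight grows with k (since p > 1/2),
# so find the k above which half the total squared amplitude lives;
# smaller worlds are "mangled" and discarded.
acc, k_med = 0.0, N
for k in range(N, -1, -1):
    acc += math.exp(log_mass[k] - total)
    if acc >= 0.5:
        k_med = k
        break

# Count-weighted mean observed frequency among the surviving worlds:
survivors = range(k_med, N + 1)
lse = logsumexp([log_count(k) for k in survivors])
mangled = sum(math.exp(log_count(k) - lse) * k / N for k in survivors)
assert abs(mangled - p) < 0.01   # close to the Born frequency
```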

What’s the probability that Hanson’s suggestion is right?  I’d put it under fifty percent, which I don’t think Hanson would disagree with.  It would be much lower if I knew of a single alternative that seemed equally… reductionist.

But even if Hanson is wrong about what causes the Born probabilities, I would guess that the final answer still comes out equally non-mysterious.  Which would make me feel very silly, if I’d embraced a more mysterious-seeming "answer" up until then.  As a general rule, it is questions that are mysterious, not answers.

When I began reading Hanson’s paper, my initial thought was:  The math isn’t beautiful enough to be true.

By the time I finished processing the paper, I was thinking:  I don’t know if this is the real answer, but the real answer has got to be at least this normal.

This is still my position today.

  • http://profile.typekey.com/simon112/ simon

    I guess I was too quick to assume that mangled worlds involved some additional process. Oops.

  • http://profile.typekey.com/simon112/ simon

    Unless there is a surprising amount of coherence between worlds with different lottery outcomes, this mangled worlds model should still be vulnerable to my lottery winning technique (split the world a bunch of times if you win).

  • Roland

    Hi,

    I haven’t commented in a while. I’m just curious: are there any non-physicists who are able to follow this whole quantum series? I gave up a few posts ago.

    Peace!

  • steven

    You wouldn’t have to stand off beside the mathematical structure of the universe, and say, “Okay, now that you’re finished computing all the mere numbers, I’m furthermore telling you that the squared modulus is the ‘degree of existence’.”

    Instead, you’d have to stand off beside the mathematical structure of the universe, and say, “Okay, now that you’re finished computing all the mere numbers, I’m furthermore telling you that the world count is the ‘degree of existence’.”

  • http://www.ciphergoth.org/ Paul Crowley

    Roland: yes, at least one. Where did you give up and why?

  • http://hanson.gmu.edu Robin Hanson

    A major problem with Robin’s theory is that it seems to predict things like, “We should find ourselves in a universe in which lots of decoherence events have already taken place,” which tendency does not seem especially apparent.

    Actually the theory suggests we should find ourselves in a state with near the least feasible number of past decoherence events. Yes, it is not clear if this in fact holds, and yes I’d put the chance of something like mangled worlds being right as more like 1/4 or 1/3.

  • eddie

    Thanks to Eliezer’s QM series, I’m starting to have enough background to understand Robin’s paper (kind of, maybe). And now that I do (kind of, maybe), it seems to me that Robin’s point is completely demolished by Wallace’s points about decoherence being continuous rather than discrete and therefore there being no such thing as a number of discrete worlds to count.

    There seems to be nothing to resolve between the probabilities given by measure and the probabilities implied by world count if you simply say that measure is probability.

    Eliezer objects. We’re interpreting. We’re adding something outside the mathematics.

    I fail to see the problem.

    If we’re to accept that particles moving like billiard balls are an illusion, and configuration space is real, and blobs of amplitude are real, and time evolution of amplitude within configuration space according to the wave equations is real, and that configurations and amplitude and wave equations are fundamental parts of reality, because that’s the best model we’ve come up with that agrees with experimental observation… why not accept that the modulus-squared law is real and fundamental, too?

    It certainly agrees with experimental observations, and doesn’t seem any less desirable a part of our model of reality than configurations, amplitude blobs, and wave equations.

    I wish someone would explain the problem more clearly, although if Eliezer’s explanations so far haven’t cleared it up for me yet, perhaps nothing will.

  • http://web.mit.edu/sjordan/www/ Stephen

    Eddie,

    My understanding of Eli’s beef with the Born rule is this (he can correct me if I’m wrong): the Born rule appears to be a bridging rule in fundamental physics that directly tells us something about how qualia bind to the universe. This seems odd. Furthermore, if the binding of qualia to the universe is given by a separate fundamental bridging rule independent of the other laws of physics, then the zombie world really is logically possible, or in other words epiphenomenalism is true. (Just postulate a universe with all the laws of physics except Born’s bridging rule. Such a universe is, as far as we know, logically consistent.) Eli argues against epiphenomenalism on the grounds that if epiphenomenalism is true, then the correlation between beliefs (which are qualia) and our statements and actions (which are physical processes) is just a miraculous coincidence.

    What follows are my own comments as opposed to a summary of what I believe Eli thinks:

    Why can’t the correlation between physical states and beliefs arise by an arrow of causation that goes from the physical states to the beliefs? In this case epiphenomenalism would be true (since qualia have no effect on the physical world), but the correlation would not be a coincidence (since the physical world directly causes qualia). I think the objection to this is that if there really is a bridging law, then the coincidence remains that it is such a reasonable bridging law. That is, what we say we experience and physically act as though we experience actually matches (usually) what we do experience, as opposed to relating to what we do experience in some arbitrarily scrambled way. If qualia bind to some higher emergent level having to do with information processing, then it seems non-coincidental that the bridging law is reasonable. (Because the things it is mapping between seem to have a close and clear relationship.) However, the Born rule seems to suggest that the bridging rule is at the level of fundamental physics.

    Maybe if we could derive the Born rule as a property of the information processing performed by a quantum universe the mystery would go away.

  • Nick Tarleton

    None of the confusion over duplication and quantum measures seems unique to beings with qualia; any Bayesian system capable of anthropic reasoning, it would seem, should be surprised the universe is orderly. So maybe either the confusion is separate from and deeper than experience, or AIXItl has qualia.

  • ME

    As I understand it (someone correct me if I’m wrong), there are two problems with the Born rule:
    1) It is non-linear, which suggests that it’s not fundamental, since other fundamental laws seem to be linear

    2) From my reading of Robin’s article, I gather that the problem with the many-worlds interpretation is: let’s say a world is created for each possible outcome (countable or uncountable). In that case, the vast majority of worlds should end up away from the peaks of the distribution, just because the peaks only occupy a small part of any distribution.

    Robin’s solution seems to me equivalent to the Quantum Spaghetti Monster eating the unlikely worlds that we find ourselves not to end up in. The key line is “sudden and thermodynamically irreversible.” Actually, that should be enough to bury the theory since aren’t fundamental physical laws thermodynamically neutral?

    We could probably eliminate this distraction of consciousness, couldn’t we? I mean, let’s say that Mathematica version 5000 comes out in a few centuries and in addition to its other symbolic algebra capabilities, it comes with a physical-law-prover: you ask it questions and it sets up experiments to answer those questions. So you ask it about quantum mechanics, it does a bunch of double-slit-experiments in a robotic lab, and gives you the answer, which includes the Born rule. Consciousness was never involved.

    Actually it seems to me like this whole business of quantum probabilities is way overrated (for the non-physicist), because it only really manifests itself in cleverly constructed experiments . . . right? I mean, setting aside exactly how Born’s rule derives from the underlying physics, is there any reason to believe that we would learn anything new by finding out?

  • http://web.mit.edu/sjordan/www/ Stephen

    Nick: I don’t understand the connection to quantum mechanics.

    The argument that I commonly see relating quantum mechanics to anthropic reasoning is deeply flawed. Some people seem to think that many worlds means there are many “branches” of the wavefunction and we find ourselves in them with equal probability. In this case, they argue, we should expect to find ourselves in a disorderly universe. However, this is exactly what the Born rule (and experiment!) does not say. Rather, the Born rule says that we are only likely to find ourselves in states with large amplitude. Also, standard quantum mechanics allows the probabilities to fall on a continuum. They aren’t arrived at by counting, so the whole concept of counting branches is not standard QM anyway.

    (I don’t know whether you hold this view, but it is a common misconception that should be addressed at some point anyway.)

  • Caledonian

    In this case epiphenomenalism would be true (since qualia have no effect on the physical world), but the correlation would not be a coincidence (since the physical world directly causes qualia).

    But the nature of the experiences we claimed to have would not depend in any way on the properties of these hypothetical ‘qualia’. There would be no event in the physical world that would be affected by them – they would not, in fact, exist.

    Epiphenomenalism is never true, because it contains a contradiction in terms.

  • http://profile.typekey.com/Psy-Kosh/ Psy-Kosh

    Here’s a different question which may be relevant: why unitary transforms?

    That is, if you didn’t in the first place know about the Born rule, what would be a (even semi) intuitive justification for the restriction that all “reasonable” transforms/time evolution operators have to conserve the squared magnitude?

    Given the Born rule, it seems rather obvious, but the Born rule itself is what currently appears to be suspiciously out of place. So, if that arises out of something more basic, then why the unitary rule in the first place?

  • eddie

    Stephen, thanks for your thoughts on Eli’s thoughts. I’m going to have to think on them further – after all these helpful posts I can pretend I understand quantum mechanics, but pretending to understand how conscious minds perceive a single point in configuration space instead of blobs of amplitude is going to take more work.

    I will point out, though, that the question of how consciousness is bound to a particular branch (and thus why the Born rule works like it does) doesn’t seem that much different from how consciousness is tied to a particular point in time or to a particular brain when the Spaghetti Monster can see all brains in all times and would have to be given extra information to know that my consciousness seems to be living in *this* particular brain at *this* particular time.

    Finally: “it is a common misconception that should be addressed at some point anyway” – it appears to me that Robin’s paper is based on this same misconception, or something like it: the Born rule (and experiment!) give one result while counting worlds gives another, therefore we have to add a new rule (“worlds that are too small get mangled”) in order to make counting worlds match experiment. Whereas without the misconception we wouldn’t be counting worlds in the first place. Do you think I’m understanding Robin’s position and/or QM correctly?

  • http://web.mit.edu/sjordan/www/ Stephen

    “Given the Born rule, it seems rather obvious, but the Born rule itself is what currently appears to be suspiciously out of place. So, if that arises out of something more basic, then why the unitary rule in the first place?”

    While not an answer, I know of a relevant comment. Suppose you assume that a theory is linear and preserves some norm. What norm might it be? Before addressing this, let’s say what a norm is. In mathematics a norm is defined to be some function on vectors that is only zero for the all zeros vector, and obeys the triangle inequality: the norm of a+b is no more than the norm of a plus the norm of b. The functions satisfying these axioms seem to capture everything that we would intuitively regard as some sort of length or magnitude.

    The Euclidean norm is obtained by summing the squares of the absolute values of the vector components, and then taking the square root of the result. The other norms that arise in mathematics are usually of the type where you raise each of the absolute values of the vector components to some power p, then sum them up, and then take the pth root. The corresponding norm is called the p-norm. (Does somebody know: are all the norms invariant under permutation of the indices p-norms?) Scott Aaronson proved that for any p other than 1 or 2, the only norm-preserving linear transformations are the permutations of the components. If you choose the 1-norm, then the sum of the absolute values of the components is preserved, and the norm-preserving transformations correspond to the stochastic matrices. This is essentially probability theory. If you choose the 2-norm then the Euclidean length of the vectors is preserved, and the allowed linear transformations correspond to the unitary matrices. This is essentially quantum mechanics. (Scott always hastens to add that his theorem about p-norms and permutations was probably known by mathematicians for a long time. The new part is the application to foundations of QM.)
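(A quick numerical illustration of the comment above, with toy matrices of my own choosing: a stochastic matrix preserves the 1-norm of a probability vector, a unitary matrix preserves the 2-norm of an amplitude vector, and neither preserves the other's norm.)

```python
def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def p_norm(v, p):
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

# Column-stochastic: nonnegative entries, each column sums to 1.
stochastic = [[0.9, 0.2], [0.1, 0.8]]
prob = [0.25, 0.75]
assert abs(p_norm(matvec(stochastic, prob), 1) - p_norm(prob, 1)) < 1e-12

# Unitary: orthonormal columns (a rotation combined with a phase).
unitary = [[0.8, 0.6j], [0.6j, 0.8]]
amp = [0.6 + 0.3j, -0.5 + 0.1j]
assert abs(p_norm(matvec(unitary, amp), 2) - p_norm(amp, 2)) < 1e-12

# Neither preserves the other's norm:
assert abs(p_norm(matvec(stochastic, amp), 2) - p_norm(amp, 2)) > 1e-6
assert abs(p_norm(matvec(unitary, prob), 1) - p_norm(prob, 1)) > 1e-6
```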

  • http://web.mit.edu/sjordan/www/ Stephen

    “I will point out, though, that the question of how consciousness is bound to a particular branch (and thus why the Born rule works like it does) doesn’t seem that much different from how consciousness is tied to a particular point in time or to a particular brain when the Spaghetti Monster can see all brains in all times and would have to be given extra information to know that my consciousness seems to be living in *this* particular brain at *this* particular time.”

    Agreed!

    More generally, it seems to me that many objections people raise about the foundations of QM apply equally well to classical physics when you really think about it.

    However, I think Eli’s objection to the Born rule is different. The special weird thing about quantum mechanics as currently understood is that Born’s rule seems to suggest that the binding of qualia is a separate rule in fundamental physics.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Psy-Kosh, the amplitudes of everything everywhere could be changing by a constant modulus and phase, without it being noticed. But if it were possible for you to carry out some physical process that changed the squared modulus of the LEFT blob as a whole, without splitting it and without changing the squared modulus of the RIGHT blob, then you would be able to use this physical process to change the ratio of the squared moduli of LEFT and RIGHT, hence control the outcome of arbitrary quantum experiments by invoking it selectively.

    It would be an Outcome Pump.

    Controllable unitarity violation wouldn’t just let you win the lottery, it would let you communicate faster than light, by forcing a particular outcome in a quantum entanglement, Bell’s Inequality type situation.

  • Psy-Kosh

    Stephen: Thanks. First, not everything corresponding to a length or such obeys that particular rule… consider the Lorentz metric… any “lightlike” vector has a norm of zero, for instance, and yet that particular metric is rather useful physically. 🙂 (admittedly, you get that via the minus sign, and if your norm is such that it treats all the components in some sense equivalently, you don’t get that… well, what about norms involving cross terms?)

    More to the subject… why is any norm preserved? That is, why only allow norm preserving transforms?

    Which brings me to Eliezer:

    So? Why does the universe “choose” rules that say “no outcome pump”? That’s way up the ladder of stuff built out of other stuff. (as far as communicating faster than light, I’d think “outcome pump” type things are the main ‘crazy’ result of FTL in the first place)

    Actually, I think I didn’t communicate my question accurately. You derived it would be an outcome pump by noting it would change the Born derived probabilities (At least, that’s my understanding of the significance of you noting that the ratios of the squared magnitudes changing.) But the Born probabilities are already the “odd rule out”… so I wanted to know if there was any other reason/argument you could think of as to why we have norm preservation without appealing to the Born rule. (Does that clarify my question?)

    I mean, if I was letting myself use the Born rule, I could just say that the probabilities have to sum to 1, and that hands me the unitaryness. But my whole point was “the restriction to unitary transforms _itself_ seems to be related to squared magnitude stuff. So by understanding why that restriction exists in reality, maybe I’d have a better idea where the Born rule is coming from”

  • http://web.mit.edu/sjordan/www/ Stephen

    Psy-Kosh:

    Good example with the Lorentz metric.

    Invariance of norm under permutations seems a reasonable assumption for state spaces. On the other hand, I now realize the answer to my question about whether permutation invariance narrows things down to p-norms is no. A simple counterexample is a linear combination of two different p-norms.

    I think there might be a good reason to think in terms of norm-preserving maps. Namely, suppose the norms can be anything but the individual amplitudes don’t matter, only their ratios do. That is, states are identified not with vectors in the Hilbert space, but rays in the Hilbert space. This is the way von Neumann formulated QM, and it is equivalent to the now more common norm=1 formulation. This also seems to be the formulation Eli was implicitly using in some of his previous posts.

    The usual way to formulate QM these days is, rather than ignoring the normalizations of the state vectors, one can instead just decree that the norms must always have a certain value (specifically, 1). Then we can assign meaning to the individual amplitudes rather than only their ratios. It seems likely to me that theories where only the ratios of the “amplitudes” matter, generically can be equivalently formulated as a theory with fixed norm. Thinking that only ratios matter seems a more intuitive starting point.

  • http://web.mit.edu/sjordan/www/ Stephen

    I’m struck by guilt for having spoken of “ratios of amplitudes”. It makes the proposal sound more specific and fully worked-out than it is. Let me just replace that phrase in my previous post with the vaguer notion of “relative amplitudes”.

  • http://profile.typekey.com/Psy-Kosh/ Psy-Kosh

    Stephen: Is the point you’re making basically along the lines of “vector as geometric object rather than list of numbers”?

    Sure, I buy that. Heck, I’m naturally inclined toward that perspective at this time. (In part because have been studying GR lately)

    Aaanyways, so I guess basically what you’re saying is that all operators corresponding to time evolution or whatever are just rotations or such in the space? And why the 2-norm instead of, say, the 1-norm? why would the universe “prefer” to preserve the sum of the squared magnitudes rather than the sum of the magnitudes? ie, why is the rule “unitary” rather than “stochastic”, for instance? (Well, I have a partial answer for that myself… reversibility. Stochastic isn’t necessarily reversible, right? unitary is though, so there is that…)

    If I’m understanding what you’re trying to say, basically you’re saying “it’s as if you use any ole transform, then just divide by the factor the norm’s been changed by, so you may as well have that ‘already in’ the transform”… But if the transform isn’t some multiple of a unitary transform, then there won’t be any single scalar value that takes care of that, right? Why instead of “norm preserving” isn’t the rule “any invertible linear transform”?

    Or did I completely and utterly misunderstand what you were trying to say?
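(On the reversibility aside in the comment above, a toy check with matrices of my own choosing: every unitary is invertible, its inverse being its conjugate transpose, while a stochastic matrix can be singular and thus erase information.)

```python
def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# A maximally "forgetful" stochastic matrix: every input maps to (0.5, 0.5).
stochastic = [[0.5, 0.5], [0.5, 0.5]]
assert det2(stochastic) == 0          # singular: no inverse, not reversible

# A unitary matrix: its conjugate transpose undoes it exactly.
u = [[0.8 + 0j, 0.6j], [0.6j, 0.8 + 0j]]
u_dagger = [[u[j][i].conjugate() for j in range(2)] for i in range(2)]

product = matmul(u_dagger, u)
identity = [[1, 0], [0, 1]]
assert all(abs(product[i][j] - identity[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```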

  • Recovering irrationalist

    @Roland: My physics and maths is patchy but I’m still just about following (the posts – some comments are way too advanced) though it is hard work for some bits. Lots of slow re-reading, looking things up and revising old posts, but it’s worth it.

    If you’re determined enough, try reading the posts a few at a time (instead of one a day) starting a few posts before where you got stuck, and make sure you “get” each one before you move on, even if it means an hour on another web source studying the thing you don’t understand in Eliezer’s explanation.

  • http://web.mit.edu/sjordan/www/ Stephen

    Psy-Kosh:

    “Or did I completely and utterly misunderstand what you were trying to say?”

    No, you are correctly interpreting me and noticing a gap in the reasoning of my preceding post. Sorry about that. I re-looked-up Scott’s paper to see what he actually said. If, as you propose, you allow invertible but non-norm-preserving time evolutions and just re-adjust the norm afterwards, then you get FTL signalling, as well as obscene computational power. The paper is here.

  • http://dao.complexitystudies.org/ Peter Mexbacher

    A major problem with Robin’s theory is that it seems to predict things like, “We should find ourselves in a universe in which lots of decoherence events have already taken place,” which tendency does not seem especially apparent.

    “Actually the theory suggests we should find ourselves in a state with near the least feasible number of past decoherence events.”

    I don’t understand this – doesn’t decoherence occur _all_ the time, in every quantum interaction between amplitudes? So, like, for every amplitude separate enough to be a “particle” in the universe (= factor), every Planck time it will decohere with other factors?

    Or did I misunderstand something big time here?

    Cheers,
    Peter

  • Psy-Kosh

    Stephen: I don’t have a postscript viewer.

    Wait, I thought the superpower stuff only happens if you allow nonlinear transforms, not just nonunitary. Let’s add an additional restriction: let’s actually throw in some notion of locality, but even with the locality, abandon unitaryness. So our rules are “linear, local, invertible” (no rescaling afterwards… not defining a norm to preserve in the first place)… or does locality necessitate unitarity? (Is unitarity a word? Well, you know what I mean. Maybe I should say orthogonality instead?)

    Well, actually, also same question here I asked Eliezer. If you _didn’t_ know squared amplitudes corresponded to probability of experiencing a state, would you still be able to derive “nonunitary operator -> superpowers?”

    Anyways, let’s turn it around again. Let’s say we didn’t know the Born rule, but we did already know some other way that all state vectors must evolve via a unitary operator.

    So from there we may notice that the sum/integral of the squared amplitudes is conserved, and that by appropriate scaling, the total squared amplitude = 1 always.

    Looks like we may even notice that it happens to obey the axioms of probability. (It _looks_ like the quantity in question does automatically do so, given only unitary transforms are allowed.)

    Does the mere fact that the quantity “just happens” to obey the axioms of probability, on its own, help us here? Would that at least help answer the “why” for the Born rule? I’d think it would be relevant, but, thinking about it, I don’t see any obvious way to go from there to “therefore it’s the probability we’ll experience something…”

    Yep, my confusion is definitely shuffled.

    hrgflargh… (That’s the noise of frustrated curiosity. :D)
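    The observation in question can be checked numerically. A minimal sketch in Python (the state and unitary are arbitrary choices for illustration): applying a unitary matrix to a normalized amplitude vector leaves the squared moduli nonnegative and summing to 1, which is exactly what the probability axioms demand of a distribution over the basis states.

```python
# Amplitudes of a 2-state system, normalized so the squared moduli sum to 1.
psi = [0.6 + 0.0j, 0.0 + 0.8j]

# A unitary matrix (here the Hadamard transform).
h = 1 / 2 ** 0.5
H = [[h, h],
     [h, -h]]

# Evolve the state: psi2 = H @ psi
psi2 = [sum(H[i][j] * psi[j] for j in range(2)) for i in range(2)]

probs = [abs(a) ** 2 for a in psi2]
print(probs)       # each entry is nonnegative...
print(sum(probs))  # ...and the total is still 1.0
```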

  • http://web.mit.edu/sjordan/www/ Stephen

    “If you _didn’t_ know squared amplitudes corresponded to probability of experiencing a state, would you still be able to derive “nonunitary operator -> superpowers?””

    Scott looks at a specific class of models where you assume that your state is a vector of amplitudes, and then you use a p-norm to get the corresponding probabilities. If you demand that the time evolutions be norm-preserving then you’re stuck with permutations. If you allow non-norm-preserving time evolution, then you have to readjust the normalization before calculating the probabilities in order to make them add up to 1. This readjustment of the norm is nonlinear. It results in superpowers. The paper in pdf and other formats is here.
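    The readjustment step being described can be seen to be nonlinear in a toy sketch (Python; the vectors are arbitrary choices for illustration, not from the paper): dividing by the norm after the fact fails additivity, so the composite map is not a linear operation even when the underlying time evolution is.

```python
def renormalize(v):
    # Divide by the 1-norm so the absolute values of the entries sum to 1.
    n = sum(abs(x) for x in v)
    return [x / n for x in v]

u = [1.0, 0.0]
w = [0.0, 3.0]

# Additivity would require renormalize(u + w) == renormalize(u) + renormalize(w).
lhs = renormalize([u[i] + w[i] for i in range(2)])
rhs = [renormalize(u)[i] + renormalize(w)[i] for i in range(2)]
print(lhs)  # [0.25, 0.75]
print(rhs)  # [1.0, 1.0] -- renormalization is not additive, hence nonlinear
```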

  • Psy-Kosh

    Stephen: Aaah, okay. And yeah, that’s why I said no rescaling.

    I mean, if one didn’t already have the “probability of experiencing something is linear in p-norm…” thing, would one still be able to argue superpowers?

    From your description, it looks like he still has to use the principle of “probability of experiencing something proportional to p-norm” to justify the superpowers thing.

    Browsed through the paper, and, if I interpreted it right, that is kinda what it was doing… Assume there’s some p-norm corresponding to probability. But maybe I misunderstood.

    Eliezer: oh, mind elaborating on ‘Historical note: If “observing a particle’s position” invoked a mysterious event that squeezed the amplitude distribution down to a delta point, or flattened it in one subspace, this would give us a different future amplitude distribution from what decoherence would predict. All interpretations of QM that involve quantum systems jumping into a point/flat state, which are both testable and have been tested, have been falsified.’? Thanks.

  • Douglas Knight

    “Are all the norms invariant under permutation of the indices p-norms?”

    Well, you answered that exact question, but here’s a description of all norms (on a finite-dimensional real vector space): a norm determines the set of all vectors of norm less than or equal to 1. This set is convex and symmetric under inverting sign (if you wanted complex, you’d have to allow multiplication by complex units). It determines the norm: the norm of a vector is the amount you have to scale the set to envelop the vector. Any set satisfying those conditions determines a norm.

    So there are a lot of norms out there. E.g., you can take a cylinder in 3-space (one of your examples). You could take a hexagon in the plane. This norm allows the interchange of coordinates, but it has a bigger symmetry group, though still finite. (I guess one could write this as max(|x|,|y|,|x-y|).)
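    The hexagon example can be checked numerically. A sketch in Python (the sample points are arbitrary): max(|x|, |y|, |x-y|) satisfies absolute homogeneity and the triangle inequality, and is symmetric under swapping the two coordinates, though not under flipping the sign of just one of them.

```python
import itertools
import random

def hex_norm(v):
    # Unit ball is the hexagon |x| <= 1, |y| <= 1, |x - y| <= 1.
    x, y = v
    return max(abs(x), abs(y), abs(x - y))

random.seed(0)
pts = [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(200)]

# Absolute homogeneity: ||c*v|| = |c| * ||v||
for (x, y) in pts:
    c = random.uniform(-2, 2)
    assert abs(hex_norm((c * x, c * y)) - abs(c) * hex_norm((x, y))) < 1e-9

# Triangle inequality: ||u + v|| <= ||u|| + ||v||
for u, w in itertools.combinations(pts[:30], 2):
    s = (u[0] + w[0], u[1] + w[1])
    assert hex_norm(s) <= hex_norm(u) + hex_norm(w) + 1e-9

# Symmetric under interchanging coordinates, but not under one sign flip.
assert hex_norm((2.0, 0.5)) == hex_norm((0.5, 2.0))
assert hex_norm((2.0, 0.5)) != hex_norm((2.0, -0.5))
print("hex norm checks passed")
```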

  • http://profile.typekey.com/tim_tyler/ Tim Tyler

    Weren’t the Born probabilities successfully derived from decision theory for the MWI in 2007 by Deutsch? “Probabilities used to be regarded as the biggest problem for Everett, but ironically, they are now its most powerful success” – http://forum.astroversum.nl/viewtopic.php?p=1649

  • Dihymo

    If anyone can produce a cellular automaton model that can create circles like those which relate to the inverse square of distance or the stuff of early wave mechanics, I think I can bridge the MWI view and the one universe of many fidgetings view that I cling to. I know of one other person who has a similar idea; unfortunately, his idea has a bizarre quantity which is the square root of a meter.

  • http://tyrannogenius.blogspot.com Neil B.

    Consider, for example, what “scattering experiments” show, in a context of imagining that the universe is made of fields and that only “observation” makes a manifestation in a small region of space. I mean, suppose we think of the “observations” as being our detecting the impacts of the “scattered” electrons rather than the scatterings themselves. (IOW, we don’t consider “mere” interactions to be observations – whatever that means.) But then why and how did the waves representing the electrons scatter as if off little concentrations when they were interpenetrating? And, what of the finding that electrons are “points” as far as we can tell, from scattering experiments? Note that the scattering is based on imagining one charge “source” being affected by another source’s central inverse-square field, none of which makes a lot of sense in terms of spread-out waves. Note also that the scattering is not a specific “impact” like that of billiard balls, since it is a matter of degree (how close one electron approaches another, still not touching since they don’t have extensions with a discontinuity like a hard ball – and the very term “how close” betrays an existing pointness.) And so on… IOW, it’s worse than you think.

    On a different note, it is supposed to be impossible to find out certain things about the wave function, like its particular shape. We are supposed to only be able to find out whether it passed or failed to pass the test for chance of a particular eigenstate (like a linearly polarized photon having a greater chance of passing a linear filter of similar orientation, but we wouldn’t be able to find out directly that it had been produced with a 20 degree orientation of polarization.) However, I thought of a way to perhaps do such a thing. It involves passing a polarized photon through two half-wave plates over and over, say with reflections. The first plate collects a little bit of average spin from each pass of the photon, due to the inverting of photon spin by such a HWP. The second HWP reverts the photon’s spin (superposed value, the “circularity”) back to its original value so it will reenter the first HWP with the same value of circularity each time.

    After many passes, angular momentum transfer S should accumulate in the first plate along a range of values. S = 2nC hbar, where n is number of passes, and C is the “circularity” based on how much RH and LH is superposed in that photon. So for example, a photon that came out of a linear pol. filter would show zero net spin in such a device, elliptical photons would show intermediate spin, and CP photons would show full spin of S = 2n hbar. It isn’t at all like having eigenstate filters. Having an indication along a range is not supposed to be possible (projection postulate), and is reminiscent of Y. Aharonov’s “weak measurement” ideas.