A Possibly Inept Attempt to Understand the Hidden Variable Problem in Quantum Mechanics
I’m trying to wrap my head partway around Bell’s Theorem, and what it implies for the quantum interpretation conundrum. I’m sort of hoping that I can explain it to myself in a halfway coherent way here.
When physicists first got a grip on quantum mechanics, the key insight was that it defines only probabilities, rather than anything definite. And these probabilities are concrete phenomena — the underlying probability waves can interfere with each other, producing interference patterns as if they were light waves. Yet when the probable location of an electron or whatever runs into an obstacle, the electron hits it at a single spot, not in a smeared-out wave shape. According to the “Copenhagen” mode of interpreting quantum mechanics (or at least, its simpler variants), the act of encountering something external causes the spread of likelihood to undergo a mysterious and undefined change from potentiality to actuality, in which one out of countless possibilities gets privileged to become real, with a likelihood proportional to the density of the probability wave at that spot. This change is known as “collapsing” the probability wave.
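To make that concrete, here is a toy sketch (Python, my own illustration, not anything from the theory’s founders) of the arithmetic that makes interference possible: quantum probabilities come from adding complex-valued amplitudes and then squaring the total, rather than from adding the probabilities themselves.

```python
import numpy as np

# Two paths by which a particle can reach the same spot on a detector.
# Each path alone would hit that spot with probability 0.25.
for phase in (0.0, np.pi):                      # in phase, then half a wave out of phase
    amp1 = 0.5
    amp2 = 0.5 * np.exp(1j * phase)
    p_classical = abs(amp1)**2 + abs(amp2)**2   # adding probabilities: always 0.5
    p_quantum = abs(amp1 + amp2)**2             # adding amplitudes first: 1.0 or ~0.0
    print(f"phase {phase:.2f}: classical {p_classical:.2f}, quantum {p_quantum:.2f}")
```

With the paths in phase, the particle is certain to arrive; half a wavelength out of phase, it never arrives at that spot at all, even though either path alone would deliver it a quarter of the time.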
This idea of collapse runs into philosophical difficulty when it encounters the phenomenon of entanglement. This is something that happens when two or more quanta are created by the same event, in such a way that some property shared by the two has to be the same, or has to cancel out and add up to zero. The most common and accessible cases under discussion are when two photons are created with equal or opposite polarization, and when two electrons are created with opposite “spin”. (This so-called spin is the property that allows electrons to create a magnetic field along one specific axis.) You can think of such an experiment as like reaching into a shoebox and pulling out one shoe: if you get a left shoe, you can be quite confident that the other one is a right shoe. The thing is, at creation time, the probability function says the orientation of the spin or polarization can be anything — it is undetermined and undeterminable. But the two undetermined orientations must still be opposite. If the two electrons or photons travel a long way apart, and eventually they both run into something which collapses their probabilities and makes them reveal one specific orientation, you can never know in advance what the orientation of either one will be, but if you compare the two, they’re always aligned so that one is the reverse of the other.
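That “always opposite” behavior can be read straight out of the mathematics. Here is a small sketch (Python again, my own illustration) that builds the standard two-electron “singlet” state and computes the correlation between spin measurements taken along a shared axis: it comes out as exactly -1 no matter which axis you pick, meaning the two answers always disagree.

```python
import numpy as np

# Pauli matrices for spin measurements, and the two-electron singlet
# state (|01> - |10>) / sqrt(2), whose total spin is zero.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin_along(theta):
    """Spin operator along an axis tilted by angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

for theta in (0.0, 0.8, 2.3):  # any shared axis at all
    both = np.kron(spin_along(theta), spin_along(theta))
    corr = np.real(singlet.conj() @ both @ singlet)
    print(f"axis at {theta:.1f} rad: correlation = {corr:+.2f}")  # always -1.00
```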
The problem is that supposedly, all the possible orientations are equally valid before the moment of collapse. If you pull out one shoe, it has no left or right shape until the moment when you put it on your foot and see if it fits, and the same is true if somebody else pulls out the other shoe. It isn’t just hard to spot the difference — there actually is no difference; the shoes are identical until a foot is put into one of them, at which point that one becomes either a right shoe or a left shoe. But if that’s true, how can two widely separated entities coordinate with each other to collapse in a consistent way? It’s absurd to think they could communicate in that instant in order to form an agreement. That would require undetectable information moving faster than light and/or backward in time. Einstein famously referred to this idea as “spooky action at a distance”. (His actual term was “spukhafte Fernwirkung”, which translates literally as something like “spooky remote effect”.)
Einstein joined two other physicists, Boris Podolsky and Nathan Rosen, to write a 1935 paper which argued that this apparent contradiction, which then became known as the “EPR Paradox”, meant that the orientations couldn’t really be undetermined before the collapse. There must be some unobserved property of the particles which would determine what orientation they would eventually show themselves to have. In other words, while quantum mechanics says each particle has an arbitrary orientation, or a superposition of many possible orientations, there must be some sense in which it really has one specific orientation, unknown to us. (The shoe factory must have made one shoe definitely a left and the other definitely a right, even if we can’t see the difference.) Physicists refer to this hypothetical determining property as a “hidden variable”. The trio went on to criticize quantum theory as incomplete, because it does not explain or recognize these hidden variables, which they felt must exist. This put a formal argument behind Einstein’s earlier intuition that “God does not play dice”.
In reasoning their way to this conclusion, they started with two basic assumptions. One was that quantum phenomena are part of an objective reality. The other was that quantum phenomena can’t be granted arbitrary exceptions to all the normal rules that make “spooky” actions impossible. These axioms, to them, basically just amounted to a belief that quantum mechanics fell into the realm of science rather than magic. This pair of assumptions is called “local realism”, and the resulting hypothesis is more strictly referred to as a theory of “local hidden variables”.
Matters stood unresolved until 1964. That’s when John Stewart Bell — a name which should be far better known to the public than it is — published a mathematical analysis of the EPR paradox which settled the question. (Others had claimed earlier to have somehow disproved hidden variables, but they failed to convince.) Bell managed to prove that a realism-based theory with local hidden variables can never exactly reproduce the predictions of the probability-based quantum theory. Entanglement experiments will produce different results depending on which theory is valid. Specifically, if the two orientation detectors in an entanglement experiment are offset at an intermediate angle such as 45 degrees, so that they don’t always measure the two particles as oppositely oriented because they’re not judging them along quite the same axis, then the correlation between their measurements will be weaker if there are local hidden variables involved than it will be if there aren’t. Like, if the angle is 45 degrees, quantum theory says the correlation between the two measurements should be cos 45° = 1/√2 ≈ 0.707, but any local hidden variable theory predicts a weaker correlation at that angle. This bound is known as Bell’s Inequality. And the correlation can be measured very precisely with statistics, by streaming millions of particles through the experiment.
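To make the gap visible, here is a minimal Monte Carlo sketch (Python; the hidden-variable model is a textbook toy of my own choosing, not anything from Bell’s paper). Each simulated pair carries a hidden axis fixed at creation time, and each detector answers +1 or -1 based only on its own angle and that hidden axis:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 1_000_000
lam = rng.uniform(0.0, 2 * np.pi, n_pairs)  # hidden variable: an axis set at creation

def lhv_correlation(angle_a, angle_b):
    # Each detector's +1/-1 answer depends only on its own setting and
    # the shared hidden axis -- strictly local, strictly predetermined.
    a = np.sign(np.cos(angle_a - lam))
    b = -np.sign(np.cos(angle_b - lam))  # the partner carries the opposite axis
    return np.mean(a * b)

offset = np.pi / 4  # detectors offset by 45 degrees
print("hidden-variable model:", lhv_correlation(0.0, offset))  # ~ -0.50
print("quantum prediction:   ", -np.cos(offset))               # ~ -0.71
```

This particular model reaches a correlation of only 0.5 in magnitude at the 45-degree offset, and Bell’s proof is that no local model of this general kind, however cleverly constructed, can reach the quantum value of 0.707.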
Even before Bell published, the existing data already favored straight quantum mechanics, and since then, as others have expanded Bell’s theory and experimenters have slowly eliminated all the leaks and loopholes and asterisks, that’s only become more true. The definitive final experiment was only concluded last year. So now we know: there ain’t no such thing as a local hidden variable. Einstein, Podolsky, and Rosen turned out to be wrong. And this apparently means that quantum probabilities really are as indeterminate as they seem.
(Or are they?)
So, what about the logic which implies that we need hidden variables? Well, this apparently means we are in some way mistaken to make the assumptions of local realism. And scientists are now busily debating the nuances of exactly what those assumptions mean and where they might be bendable.
What alternatives does this leave us?
One possibility (probably the only one still compatible with Copenhagenism) is that entanglement is nonlocal — that there really is some kind of faster-than-light communication going on, though it can’t be used to transmit any real information because the signal is always combined with perfectly random noise. (It hardly seems fair to wave something that magical under our noses but then have it unable to do anything. One naturally suspects that there might be a loophole, so if a science fiction writer wants to handwave a means of instant untraceable communication, they’ll often base it on entanglement.) This may currently be the most “mainstream” interpretation — a fact which makes sense to me only in terms of historical inertia rather than in terms of logic. A variant of this is the “Bohmian” view, which postulates undetectable “pilot waves” which guide the destinies of particles. As far as I can see, this is basically just a particular way of embracing nonlocality. Some variations postulate an instantaneous interaction between all particles in the universe at once, despite there being no such thing as simultaneous time between astronomically distant regions of space.
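The “perfectly random noise” point can be made concrete. Here is a sketch (Python, my own illustration, reusing the singlet state from earlier) that computes everything one experimenter can observe locally, by tracing out the distant partner’s half of the pair: the result is an exact 50/50 coin flip, which is why no message can ride on the correlation.

```python
import numpy as np

# The singlet state again, written as a density matrix for the pair.
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho_pair = np.outer(singlet, singlet.conj())

# Trace out particle A to get everything B can measure locally.
# Reshape to indices (a, b, a', b') and sum over the A indices.
rho_b = np.trace(rho_pair.reshape(2, 2, 2, 2), axis1=0, axis2=2)

print(np.round(rho_b.real, 3))  # [[0.5, 0], [0, 0.5]] -- pure coin-flip noise
```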
Another escape is “superdeterminism”, which postulates that the hidden variables are nonlocal — in fact, that they are universal. In this hypothesis, the future is as fixed as the past, and there is no freedom for anything to happen differently; even the experimenters’ choices of detector settings are predetermined and correlated with the particles, which knocks out the “free choice” assumption that Bell’s proof quietly relies on. In this case it’s difficult to see how a quantum computer could produce a useful answer, since none of the counterfactual possibilities it explores have any existence. It’s as if all possibilities are in some sense pre-collapsed, but in such a way that it always looks as if they had just been having a lively existence a moment ago. Bell himself investigated this possibility, and found it repugnant but impossible to disprove.
Another option is to deny that collapse occurs. After all, collapse isn’t actually part of the classic quantum-mechanical theory: it’s an addendum which was bodged on to explain why we see just one of the many possible quantum outcomes.
One way to do this is to go to the opposite extreme from superdeterminism, and say that all the probabilities are equally real. Whenever a two-way random quantum event occurs, one version of you sees it happen the first way and another version sees it happen the opposite way. Both versions of you are real, but each is unaware of the other. And this doesn’t just happen with quantum events that have countably distinct outcomes, but with ones that produce a continuous spread of possibilities. As a result, each moment generates innumerable alternative realities. There is an infinite range of realities and they multiply infinitely every second. What we see as collapse consists not of the moment when one outcome pushes another off the table, but of the moment when one ceases to affect the future of another (the process physicists call decoherence). This is called the “many worlds” interpretation, and a good-sized group of scientists believe it’s the only logically defensible way to understand the universe. This once-mocked approach has been gaining in mindshare relative to the traditional Copenhagen view. (This one is also a very useful resource for science fiction writers who want to handwave up impossible things.)
The popular layman’s interpretation of this is to imagine “parallel universes”, as if the alternate versions of reality were layered side by side along some higher dimension, like pages in a book. To me, this doesn’t make much sense: it’s difficult to believe that whole new realities can be so easily created, because that’s getting something for nothing, which the universe doesn’t otherwise allow. Also, it’s hard to see how that could handle continuous, as opposed to countable, possibilities. But a less naive interpretation avoids this: it says there is only one world, but it consists of quantum superpositions which never collapse, and just keep piling up into ever deeper complexity. All those possible versions of you are equally real, within this single reality. Your perceptions are superimposed too: your single brain exists in a multitude of states, each of which believes one particular outcome occurred. Again, this is impossible to disprove, so far as we can tell.
This approach is attractive in that it doesn’t add any extra assumptions: in one sense, all we have to do to believe this is to take the original quantum theory completely literally. But I have the same sort of problem with it that I have with superdeterminism: why should we work so hard to disbelieve in collapse when the universe seems so determined to make it look real? Clearly, when a photon wavefront is absorbed by a solid surface, and then a single atom in that surface loses an electron (the phenomenon which underlies how it’s possible to see, or to take pictures), some kind of narrowing of effect takes place. Even if the resulting state is still in superposition (like, any of nineteen different atoms were simultaneously the only one to lose an electron), there’s still been a qualitative change from a broad event to a narrower kind of event — an event located at one atom instead of across millions at a time. So the collapse phenomenon is mostly still there, and still unexplained within the theory. And since that’s true, does it really make sense to think of the superpositions and multiplicities of reality piling up and up without limit, constantly mutating into qualitatively different forms as these collapse-like events occur? We still have no way to answer that question, but I can mention that this is exactly what Schrödinger was getting at with his famous idea of the cat in the box: the question of whether it’s plausible to accumulate quantum indeterminacy over arbitrarily large systems and long periods of time. But even if we grant all that... as far as I can tell, the requirement that each of the diverging realities still has to keep its story straight might still imply some sort of nonlocality or hidden variable. Some argue that many-worldsism evades the problem, but many others dispute this, and until a better argument comes my way, I have to side with the skeptics.
A scaled-down variation of many-worldsism is the idea that all possibilities are equally real until our minds become conscious of the outcome, at which point a single reality becomes privileged because it is known to someone capable of knowing. Collapse is now a subjective phenomenon, having meaning only from the point of view of consciousness. So either the universe is inherently many-worlds-ish but consciousness follows just one of the possible tracks, or our minds actually have the power to impose shape and order on what we see — our eyes could be thought of as emitting deadly rays of collapsification. (An SF novel does exist in which other species isolate the human race in order to prevent us from spoiling their quantum lives with our brains’ power to see a single reality.) To me this hypothesis is mystical rather than scientific. I don’t think it’s taken seriously among physicists anymore, though it was at one time, when it had advocates as prominent as von Neumann and Wigner.
A variation which is still viable is called Quantum Bayesianism. It postulates that the probability wave itself is partly subjective: a description of a particular observer’s expectations rather than a direct description of the world, differing from observer to observer. This one is tricky and I can’t yet say much about it.
And there are “relational” theories, drawing inspiration from special relativity, which postulate that collapse is not absolute, but something defined only from a particular viewpoint, so that a particle’s state can legitimately be collapsed according to one observer, while still being in superposition according to another, depending on what degree of interaction they’ve had with it. I suppose this would mean that Schrödinger’s cat could be stuck in superposition as seen from outside the box, even though it’s in a definite single state when viewed from the inside. The interior state is unresolved for us, to the extent that we remain isolated from it. This idea may have promise, and I’m finding no obvious objections to it. Maybe you could think of the collapse seen from a near point of view as being a sort of hidden variable to someone at a more distant viewpoint. But that may not do much to resolve the entanglement conundrum, as it seems to me that the single measurement the experiment takes is the first and only interaction the two particles have after being created, and is their earliest opportunity to collapse from any viewpoint. But on the other hand, maybe the source that emitted the two is a context in which they collapse to a particular orientation immediately, and that local collapse is the hidden variable which drives the rest. To me this is the best-sounding reconciliation so far. But I wonder: how is this compatible with quantum computing algorithms? They depend on setting up a complex superposition and then preventing any collapse from occurring before the calculation is completed.
Maybe one can look at relational theories the other way, and say that as a small system experiences local collapse, it’s shaking itself down into a narrow consistent state, but it still has many possible overall outcomes, which don’t finish collapsing until it interacts more broadly. You observe an experiment and see one collapsed outcome, but you personally still exist in multiple versions, each of which saw it differently, until a further observer collapses you, in relation to him. Instead of seeing the local collapse as a pseudo hidden variable which propagates certainty outward, perhaps we should see certainty as percolating from the outside in: the more broadly something interacts, the fewer alternative possibilities it has. But as attractive as this is, due to the way it keeps the exponential spread of possible realities on smaller scales down to a manageable level, it seems to me that from the broadest perspective, the possibilities are still endless and all outcomes are unresolved, so I’m not at all confident that this is really any improvement over many-worldsism.
One set of theories, broadly known as “stochastic”, says that the probabilities predicted by classic quantum theory are not exact, but approximate and statistical... which sounds to me like it would end up defining things in terms of the probability of a probability. This kind of hypothesis has the practical effect that a collapse (which in this theory is an objective occurrence with nothing relative about it) can occasionally happen in a spontaneous way without outside provocation, and when you consider very large systems such as a person, this tendency to collapse predominates and quickly reduces the broad state of things to certainty. This practical implication means that the validity of this theory should, at some point, be measurable by experiment... but to make that trickier, this theory comes in numerous variant flavors. One is the Ghirardi-Rimini-Weber theory, which builds the slight tendency to collapse directly into the field equations without relying on any other stochastic factors. Now, how is this idea supposed to get around the hidden-variable puzzle? Some versions apparently suggest that there is a hidden variable, but because it has only an uncertain influence and doesn’t strictly determine outcomes... it evades Bell’s proof? I don’t get how that helps... the outcome of entanglement experiments appears to be highly certain. I clearly have work left to do on understanding this one. But maybe that would be a waste of time: some are saying that even if hidden variables are nondeterministic, Bell’s Inequality still applies to them. But others think they’ve found exceptions to that, and can come up with hidden variable definitions that are narrow enough to worm through holes in the argument. It’s definitely not a settled debate. If they can make this idea work, it might be a solid contender.
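For what it’s worth, the scale-dependence of spontaneous collapse is easy to put rough numbers on. Here is a back-of-envelope sketch (Python; the rate constant is the commonly quoted GRW figure of about one spontaneous localization per particle per 10^16 seconds, so treat the output as order-of-magnitude illustration):

```python
# GRW-style arithmetic: each particle has a tiny chance per second of
# spontaneously localizing, so big systems collapse almost instantly.
GRW_RATE = 1e-16  # localizations per particle per second (commonly quoted value)

for label, n in [("one particle", 1e0),
                 ("a dust grain", 1e15),
                 ("a cat", 1e27)]:
    mean_wait = 1.0 / (GRW_RATE * n)  # average seconds until the first hit
    print(f"{label:>12}: ~{mean_wait:.0e} s between spontaneous collapses")
```

A lone particle almost never collapses on its own, but anything cat-sized snaps to definiteness in a tiny fraction of a second, which is the theory’s whole appeal.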
I might mention, on the question of whether collapse is subjective or objective, that certain experimenters claim to have shown that collapse is not instantaneous: they say they’ve managed to measure the time it takes to occur, and shown that there is such a thing as being momentarily half-collapsed. I do not know whether this is a solid, dependable result or just wishful thinking.
What else is left for us? At this point the supply of attractive alternatives is thinning out, though there is no end to the assortment of alternative hypotheses which either resemble one of the categories above except in nuance, or which try to imagine a whole new conceptual framework but haven’t managed to clarify it enough for it to be broadly regarded as a contender. Nowadays, apparently, more attention is turning from the “local” assumption to the “realism” assumption. Specifically, physicists are focusing on a narrow form of realism known as “counterfactual definiteness”: the belief that it is meaningful to speak of what result an experiment would have shown if it had been performed, even though it wasn’t. If a tree falls in the forest, and no tree-falling detector is present to monitor the event, can we assume that it made a noise? If we give up counterfactual definiteness, this could mean that it is no longer meaningful to speak of unobserved quantum events as knowably having occurred. For example, we can’t talk about what an electron’s spin would have been if it had been checked earlier than it was.
Unfortunately, it’s hard to see any clear distinction between this and the old Copenhagen hypothesis which says the spin is undeterminable before it’s measured, and which would therefore imply nonlocality. It doesn’t sound like anybody has yet managed to find a way in which indefinite counterfactuality yields a comprehensible interpretation of quantum phenomena... all they’ve managed to do with it, as far as I can grasp, is show that without this assumption, the EPR and Bell arguments lose their mathematical foundation, and neither argument proves anything anymore. But that means... there might still be local hidden variables, as long as they’re so well hidden that their nature and effects are fundamentally unknowable before the fact. The EPR group might have been right about incompleteness, for the wrong reasons.
The impression I’m getting is that the consensus is moving toward abandoning realism. After all, what did it ever get us? We may soon regard realism as just a holdover of pre-quantum intuition, which we never had any justification to apply to quantum theory. We may end up postulating hidden variables which have definite effects and yet are, in some sense, not real.
On another front, people are taking more and more seriously the idea that quantum phenomena may only be understandable as operating both forward and backward in time. This may not be quite as bad as faster-than-light information jumping arbitrarily through space, as we can at least imagine that it stays local to the worldlines of the particles involved, but it’s still tough to swallow. Whether backwards time is better or worse than non-real hidden variables is not something I can judge. This wouldn’t be the first time that the notion of treating something as operating backward in time entered the equations of quantum physics, but it might be the first time when it actually means something significant which has no alternate explanation. We shall see if that concept ever amounts to anything. Some say that in theories where collapse is an objective real phenomenon, time is not reversible: collapse makes forward time qualitatively different from backward time. I’m not sure about that, as any given quantum interaction is still reversible. Whatever emits two photons with matched polarizations can also absorb them, if they happen to arrive at the same time in a matched state. The thing is, though, in reverse that’s a matter of luck, and we probably can’t meaningfully speak of the particles being entangled before arrival.
One trendy branch of theory based on reversible time is called the Transactional Interpretation. It postulates a sort of “handshake” occurring between forward and backward influences, and thinks of collapse as an event outside of time, affecting the past and the present together. These two-way interactions spread consistency outward, from small scales to large. Its advocates say their approach not only gives its users a more accurate intuitive grasp of quantum phenomena, but agrees better with certain experimental results than more traditional interpretations such as Copenhagen or many-worlds do. Others dispute this and argue that these awkward experiments are still perfectly compatible with traditional interpretations. Debate is ongoing, but to me, this is not something to take seriously without concrete experimental grounds.
I really don’t want to be asked to swallow influences moving backward in time. While it’s true that subatomic interactions can all operate equally well in either direction, it’s also true that this only makes sense as long as you’ve got only two particles interacting. As soon as you involve three or four, the scenarios which are easy and commonplace in one direction become vastly improbable in the other, like shooting a cueball into a randomly scattered set of pool balls and seeing them reassemble into a neatly packed triangle. That’s the meaning of entropy. I don’t buy that this is compatible with a universe in which cause and effect runs in both directions.
Even when events don’t interact with each other chaotically, some of them make a lot more sense in one direction than the other. Think of an opaque surface absorbing light. It soaks up photons of random different energy levels, using some of the energy to excite its electrons and turning the rest into heat. But in reverse, a surface emitting light from excited electrons is going to consistently produce particular wavelengths — spectral lines — and it’s essentially impossible for a bit of heat in the surface material to spread out the emitted light frequencies uniformly. That’s entropy too.
And in the above, I’m assuming that entanglement is itself a time-reversible phenomenon, which may not necessarily be the case.
I’m probably missing some categories of interpretation that don’t fit above, but I think that’s as much as I can digest. To me, the least unattractive one at first blush is the relational hypothesis, which sounds very believable and sort of combines the good points of the Copenhagen and many-worlds interpretations... but depending on how the specifics work out, it could end up combining their bad points instead.
Let’s take a closer look at that... After further study of some writings about the relational hypothesis, it sounds like they’re saying that things remain uncollapsed at the largest scales. Local collapse doesn’t create certainty, only consistency, which spreads outwards. Any observers who see a definite state are themselves in an indefinite state — nobody can perceive their own superpositions.
In its basic essence, then, I would say this is just a refinement of many-worldsism. Nice to see that it’s keeping up with the times.
What about the other grandpa theory, Copenhagenism? Is it keeping up with the young hip crowd too? Yes. One path that Copenhagenists are moving forward along is called the “consistent histories” approach. It defines mathematically the degree to which it is possible to ask definite, as opposed to probabilistic, questions about the state of a quantum system. Its proponents say that the EPR reasoning fails because it assumes definite knowledge beyond the limit of what can be asked. As with the original Heisenberg uncertainty principle, there are certain combinations of facts where you can’t know both at once — if you’re certain of one then the other must be indefinite — and they say EPR’s logic implicitly relies on knowing two incompatible things.
This concept may be a genuine breakthrough in our comprehension. But I am still not the least bit clear about how this mathematical assertion of past consistency can actually create physically consistent outcomes, without hidden variables.
So the broad takeaway is that the more I dig into this, the less confident I am that Bell’s Inequality settles anything at all. This in turn means that the hopes I had when beginning this writing, of strengthening my grasp of these nebulous issues by homing in on a concrete advance of knowledge, are largely going to remain unfulfilled.
I am now moving toward a belief that the simplest theory is for hidden variables to exist but be unknowable, and that Bell doesn’t disprove their existence, only their measurability. That seems to me to be the most straightforward hypothesis which is capable of resolving the contradictions and avoiding paradoxes. To me, at this point, all the weirdo theories about pilot waves and transactions across time just end up sounding like attempts to simulate hidden variables, so as far as consequences go, there may be no scientifically testable difference between this hypothesis and those.
I think this view would imply that collapse is an objectively real phenomenon, there are no parallel alternative worlds, and time is unidirectional. That sounds all right to me. But as far as actually knowing anything goes, I am really no better off than I was when I started.