Philosophers On a Physics Experiment that “Suggests There’s No Such Thing As Objective Reality”
Earlier this month, MIT Technology Review published an article entitled “A quantum experiment suggests there’s no such thing as objective reality.” It was one of several publications to excitedly report on a recent experiment conducted by Massimiliano Proietti (Heriot-Watt University) and others.
The provocative headline drew a lot of attention to the article and the experiment. Given how outlandish it sounded, I—like most people, largely ignorant of cutting-edge physics—thought that the experiment was either earth-shatteringly amazing or that the claims made about it were bunk. Either way, it sounded like the perfect candidate for an intervention from philosophers and philosophy-knowledgeable physicists. This post, the latest entry in the occasional “Philosophers On” series, is the result.

While I am going to leave most of the explanation of the background physics, experiments, and findings to the guest authors, it might be useful to note how the MIT Technology Review article described what happened. It first notes that Proietti’s experiment is based on a thought experiment devised by physicist Eugene Wigner called “Wigner’s Friend.” It continues:
Last year… physicists noticed that recent advances in quantum technologies have made it possible to reproduce the Wigner’s Friend test in a real experiment. In other words, it ought to be possible to create different realities and compare them in the lab to find out whether they can be reconciled. And today, Massimiliano Proietti at Heriot-Watt University in Edinburgh and a few colleagues say they have performed this experiment for the first time: they have created different realities and compared them. Their conclusion is that Wigner was correct—these realities can be made irreconcilable so that it is impossible to agree on objective facts about an experiment.
You can check out the whole article here.
And now let me introduce our guest authors. They are: Sean Carroll (Research Professor of Physics at Caltech), Karen Crowther (Postdoctoral Researcher in Philosophy at the University of Geneva), Dustin Lazarovici (Postdoctoral Fellow in Philosophy, Université de Lausanne), Tim Maudlin (Professor of Philosophy at New York University), and Wayne Myrvold (Professor of Philosophy at Western University).
I am very grateful to them for the time and effort they put into crafting contributions for this post that are informative, fascinating, and, importantly, accessible to nonexperts. Thank you, authors!
Thanks also to Michael Dickson (University of South Carolina) and David Wallace (University of Southern California) for some preliminary feedback about this topic.
You can scroll down to the posts or click on the titles in the following list. (Note: while I normally put the contributions in alphabetical order, I am deviating slightly from that and putting Dr. Crowther’s first, as she included a helpful diagram that is relevant to all of the posts).

 “What the Experiment Actually Did and What Is Learned from It” by Karen Crowther
 “Reality Remains Intact” by Sean Carroll
 “Keep Calm, Quantum Mechanics has not Rejected Objective Reality” by Dustin Lazarovici
 “If There Is No Objective Physical World Then There Is No Subject Matter For Physics” by Tim Maudlin
 “Quantum Theory Confirmed Again” by Wayne Myrvold
What the Experiment Actually Did and What Is Learned from It
by Karen Crowther
Quantum mechanics (QM) is supposed to be a universal theory: its domain of applicability is not restricted to the world at very small length scales. In other words, the theory is meant to describe elephants as well as electrons. While we do not, of course, need to use quantum theory to describe elephants, increasingly large and complex laboratory systems (i.e., tabletop experiments) are being built that do display quantum behaviour. There are various proposals for why, in practice, we do not need to use quantum theory to describe the world at the length and timescales that are familiar to us as human beings. Chief among these is decoherence—the idea that the interference effects that would otherwise reveal our ‘quantumness’ get suppressed when a system interacts with other systems around it (‘the environment’). Thus, demonstrating the quantum behaviour of a laboratory system requires the system to be isolated (to a great degree) from outside influences.
Decoherence, however, does not help when it comes to a more problematic disconnect between the quantummechanical description of the world and our experience of it. Whenever we take a measurement of a system to determine the value of some property it possesses (e.g., position, charge, spin, mass, polarisation, etc.), we find the system to have a definite value of this property. Yet, before the measurement, QM says that the system does not possess a determinate value of this property, but rather exists in a superposition of different states with different values of this property.
Reconciling these two pictures is known as the measurement problem, and solving it has spawned the development of various interpretations of QM that seek to explain what’s going on. These interpretations are constrained by the violation of a mathematical relation known as the Bell inequality, which makes it particularly difficult to retain the belief that the system possesses a definite state (i.e., one with particular values of its measurable properties) before being observed—as some interpretations known as hidden variable interpretations seek to do.
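The Bell-inequality constraint mentioned above can be illustrated numerically. The sketch below uses the CHSH form of the inequality: any local hidden-variable model must satisfy S ≤ 2, while quantum mechanics predicts correlations E(a, b) = −cos(a − b) for spin measurements on a singlet state, which reach S = 2√2 at suitably chosen angles. This is a standard textbook illustration, not the specific inequality tested by Proietti et al.:

```python
import numpy as np

def E(a, b):
    """Quantum-mechanical correlation for a singlet state measured
    at analyzer angles a and b (standard textbook result)."""
    return -np.cos(a - b)

# Angle choices that maximize the quantum violation of the CHSH bound.
a, a_prime = 0.0, np.pi / 2
b, b_prime = np.pi / 4, 3 * np.pi / 4

# CHSH combination: local hidden-variable models require S <= 2.
S = abs(E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime))

print(f"CHSH value S = {S:.4f}")         # 2*sqrt(2) ~ 2.8284 (Tsirelson bound)
print(f"Local bound exceeded: {S > 2}")  # no local hidden-variable model fits
```

Experiments like the one discussed here measure the four correlations from coincidence counts and check whether the combined value exceeds 2, which quantum systems reliably do.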
Wigner’s thought-experiment was an attempt to show that conscious observers cannot themselves exist in superpositions because it would lead to situations where a person has an experience of the world that conflicts with the experiences of others: two people would record inconsistent facts about one and the same system (Wigner, 1967). In other words, reality would be observer-dependent.
In the thought-experiment, Wigner has a friend in an isolated lab who measures the polarisation of a photon, and finds it to have a definite value—this is the friend’s ‘fact’. Wigner, however, is outside of the lab, and does not know the outcome of his friend’s measurement. Instead, Wigner uses QM to describe his friend’s entire lab as a quantum system and finds it to be in one giant superposition of the different possible polarisations of the photon, as well as the different possible outcomes of his friend’s measurement—this superposition is Wigner’s ‘fact’. The two ‘facts’ are inconsistent. (See Figure 1).
Different interpretations of QM have different ways of dealing with this scenario. For example, the relational interpretation of QM would embrace the inconsistency of the two ‘facts’, maintaining that facts are observerdependent. On the other hand, the many worlds interpretation would deny the inconsistency, saying that the universe has branched into multiple universes, and in any one universe, observers will record consistent facts about the state of a given system.
Wigner’s own interpretation was that the scenario described by his thought-experiment was physically impossible: he argued that the conscious experience of his friend as having recorded a definite measurement-outcome would mean that after her measurement, it would not be correct for Wigner on the outside of the lab to describe the system as being in a superposition. This interpretation means believing that a “being with a consciousness must have a different role in quantum mechanics than the inanimate measuring device”, and hence that there must be “a violation of physical laws where consciousness plays a role” (Wigner, 1967, p. 181).
Yet, the laboratory experiment of Proietti et al. (2019) claims to have concretely realised Wigner’s thought-experiment. In this ‘real life’ experiment, the friend, isolated in her lab, measures the polarisation of a photon and records the outcome of her measurement; Wigner, outside of the lab, can then choose to either measure his friend’s record of her measurement-outcome (to attest to the ‘fact’ established by his friend), or to jointly measure both the friend’s record as well as the polarisation of the original photon (to establish his own ‘fact’).
In this ‘real life’ experiment, however, Wigner and his friend are not conscious observers, but pieces of machinery: they are measuring-and-recording devices. Proietti et al. (2019) argue that these devices can act as observers, defining an observer as any physical system that can extract information about another system (by means of an interaction) and can store that information in a physical memory. On this definition, computers and other devices can act as observers, just as humans can.
Now, what the experiment actually did was to use QM to calculate the probabilities of each of the possible measurement outcomes, and then compare these to the probabilities calculated from the experimental data obtained (1794 six-photon coincidence events, using 64 settings, over a total of 360 hours). The experimenters did this in order to test the violation of a Bell-type inequality, and the experiment was indeed successful in confirming its violation. Thus, the significance of the experiment in this sense was to further confirm the violation of Bell-type inequalities by quantum systems (even relatively large, complex ones) and to place stricter constraints on particular hidden variable interpretations of QM. But there are already many other experiments that have confirmed the violation of Bell-type inequalities by quantum systems (although under different conditions, and subject to different ‘loopholes’ and sources of error). And, there are already many other experiments that have confirmed that QM is not restricted in its domain to very small systems.
So, what is the philosophical interest in this particular experiment? The question is what this experiment demonstrates about QM that was not already known from the thought-experiment plus previous experimental results. Plausibly, what it shows is that a scenario analogous to the one imagined by Wigner is in fact physically possible, and in it the observers do record conflicting facts. Thus, the philosophical significance of the experiment is to make Wigner’s own interpretation of his thought-experiment look increasingly implausible: it is difficult to imagine that this experiment would not have been successful if the devices had conscious experiences.
But, on the other hand, the fact remains that these devices are not conscious, and so Wigner could stand resolute in his interpretation. If anything, he could point out that—in the same way that an observation of a non-black, non-raven provides a negligible sliver of confirmation for the claim that ‘all ravens are black’—the success of the experiment even provides inductive support in favour of his interpretation: the ‘observers’ in this experiment are able to record conflicting facts only because they do not experience these facts.
Reality Remains Intact
by Sean Carroll
Of course there is not a new experiment that suggests there’s no such thing as objective reality. That would be silly. (What would we be experimenting on?)
There is a long tradition in science journalism—and one must admit that the scientists themselves are fully culpable in keeping the tradition alive—of reporting on experiments that (1) verify exactly the predictions of quantum mechanics as they have been understood for decades, and (2) are nevertheless used to claim that a wholesale reimagining of our view of reality is called for. This weird situation comes about because neither journalists nor professional physicists have been taught, nor have they thought deeply about, the foundations of quantum mechanics. We therefore get situations like the present one, where an intrinsically interesting and impressive example of experimental virtuosity is saddled with a woefully misleading sales pitch.
My own preferred version of quantum mechanics is the Everett, or Many-Worlds formulation. It is a thoroughly realist theory, and is completely compatible with the experimental results obtained here. Thus, we have a proof by construction that this result cannot possibly imply that there is no objective reality. I am fairly confident that other realist approaches—hidden-variables models such as Bohmian mechanics, or dynamical-collapse models such as GRW theory—can offer equally satisfactory ways of interpreting this result without sacrificing objective reality, but I’m not confident in my ability to give such an account myself, so I’ll stick to the Everettian story.
Many-Worlds is a simple theory: there are wave functions, and they evolve smoothly according to the Schrödinger equation. Wave functions generally describe superpositions of what we think of as possible measurement outcomes, such as “horizontal” or “vertical” polarizations of a photon. The traditional “collapse of the wave function,” where an observer sees a unique measurement outcome, is replaced by decoherence and branching. That is, once a quantum superposition becomes entangled with a macroscopic system, that entanglement spreads to the environment (effectively irreversibly). If the measurement apparatus included a physical pointer indicating different possible results, that pointer cannot help but interact differently with the photons suffusing the room it’s in, depending on where it’s pointing. The pointer is now entangled with its environment.
That’s decoherence, and it implies that the two parts of the superposition now describe separate, noninteracting worlds, each of which includes observers who see some definite measurement outcome. The separate worlds aren’t put in by hand; they were always there in the space of all possible wave functions, and Schrödinger’s equation naturally brings them to life. If you believe a photon can be in a superposition, it’s not much of a conceptual leap to believe that the universe can be.
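The entanglement-kills-interference mechanism described above can be shown in a few lines of linear algebra. In this toy sketch (a standard two-qubit illustration, not a model of the actual experiment), a qubit in an equal superposition becomes entangled with a single "environment" qubit that perfectly records its state; tracing out the environment leaves a reduced density matrix whose off-diagonal interference terms have vanished:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# System alone in (|0> + |1>)/sqrt(2): its density matrix has
# off-diagonal coherences of 0.5, the signature of interference.
psi_sys = (ket0 + ket1) / np.sqrt(2)
rho_isolated = np.outer(psi_sys, psi_sys.conj())

# Entangle with an environment qubit that records the system's state:
# (|0>|e0> + |1>|e1>)/sqrt(2), with orthogonal environment states.
psi_total = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho_total = np.outer(psi_total, psi_total.conj())

# Partial trace over the environment: reshape to indices [s, e, s', e']
# and sum over the environment indices.
rho_sys = rho_total.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(np.round(rho_isolated, 3))  # off-diagonals 0.5: coherent superposition
print(np.round(rho_sys, 3))       # off-diagonals 0: decohered, branch-like mixture
```

The two diagonal entries of the decohered state are what an Everettian reads as two non-interacting branches, each with a definite outcome.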
The experiment under question here is a version of Wigner’s Friend. The idea is to illustrate the possibility that observers in a quantum world can obtain measurement results, or “facts,” that are seemingly inconsistent with each other. One person, the “friend,” observes the polarization of a photon and obtains a result. But from the perspective of Wigner, both the photon and the friend appear to be in a superposition, and no measurement outcome has been obtained. How can we reconcile the truth of both perspectives while maintaining a belief in objective reality?
It’s pretty easy, from a Many-Worlds perspective. All we have to do is ask whether the original quantum superposition became entangled with the external environment, leading to decoherence and branching of the wave function. If it did, there are multiple copies of both Wigner and his friend. If it did not, it’s not really accurate to say that a measurement has taken place.
In the experiment being discussed, branching did not occur. Rather than having an actual human friend who observes the photon polarization—which would inevitably lead to decoherence and branching, because humans are gigantic macroscopic objects who can’t help but interact with the environment around them—the “observer” in this case is just a single photon. For an Everettian, this means that there is still just one branch of the wave function all along. The idea that “the observer sees a definite outcome” is replaced by “one photon becomes entangled with another photon,” which is a perfectly reversible process. Reality, which to an Everettian is isomorphic to a wave function, remains perfectly intact.
Recent years have seen an astonishing increase in the precision and cleverness of experiments probing heretofore unobserved quantum phenomena. These experiments have both illustrated the counterintuitive nature of the quantum world, and begun to blaze a trail to a new generation of quantum technologies, from computers to cryptography. What they have not done is to call into question the existence of an objective reality. Such a reality may or may not exist (I think it does), but experiments that return results compatible with the standard predictions of quantum mechanics cannot possibly overturn it.
Keep Calm, Quantum Mechanics has not Rejected Objective Reality
by Dustin Lazarovici
A group of physicists claims to have found experimental evidence that there are no objective facts observed in quantum experiments. For some reason, they have still chosen to share the observations from their quantum experiment with the outside world.
There is a lot wrong with the paper, so let me focus on the most critical points. First of all: what the experiment actually tested has little to do with the existence or nonexistence of objective facts. It rather shows that the outcomes of different possible “Wigner’s friend-type” measurements cannot be predetermined, independent of what measurements are actually performed. This should come as no surprise to anyone familiar with quantum foundations, as similar results have been established many times before (by various so-called “no hidden variables theorems”). In particular, it doesn’t mean that measurement outcomes, once obtained, are not objective. It rather reminds us that a measurement is not a purely passive perception but an active interaction that “brings about” a particular outcome and can affect the state of the measured system in the process.
Even from a logical point of view, the argument in the paper doesn’t hold water. Proietti et al. test a version of the Bell inequality whose violation, in different settings, has already been confirmed by various other experiments. They claim (but never prove) that their inequality follows from three assumptions: locality (simply put: distant simultaneous measurements cannot affect each other), “free choice” (simply put: the experimentalists can freely choose what they measure), and “observer-independent facts” (whatever this means). Now, the original Bell inequality is derived from only the first two assumptions, locality and free choice. Hence, it’s already well-established that at least one of these assumptions is violated by quantum phenomena. (Indeed, the extended Wigner’s friend experiment does involve nonlocality; it is carried out on entangled systems, and the measurement of Alice’s friend can instantaneously affect the outcome obtained by Bob’s friend and/or vice versa.) So how could the violation of the Bell inequality in the extended Wigner’s friend scenario challenge the assumption of “observer-independent facts”? Well, it can’t, and it doesn’t. No more than the experimental falsification of an inequality derived from the assumptions that 2+2=5 and that observer-independent facts exist would challenge the latter.
On a more general note, the entire Wigner’s friend craze is a bit silly. In effect, Wigner’s friend is little more than a rendition of the famous Schrödinger cat paradox, and any precise quantum theory that solves the Schrödinger cat paradox (also known as the “measurement problem” of quantum mechanics) has no difficulties providing a precise and objective description of “extended Wigner’s friend experiments”. My colleague Mario Hubert and I have discussed this in detail for the example of Bohmian mechanics, a quantum theory that grounds the prediction of standard quantum mechanics in an ontology of point particles and precise mathematical equations. In particular, in Bohmian mechanics, the state of a system is not described by the wave function alone but has a definite configuration even if its wave function is in a superposition. This provides a clear and simple solution to both Schrödinger’s cat and the Wigner’s friend “paradox.”
To their credit, the authors are more or less acknowledging this in their discussion, writing:
[O]ne way to accommodate our result is by proclaiming that “facts of the world” can only be established by a privileged observer—e.g., one that would have access to the “global wavefunction” in the many worlds interpretation or Bohmian mechanics.
But Bohmian mechanics and Many-Worlds theories have nothing to do with “privileged observers.” The whole point of these theories is to provide an objective description of the quantum world in which observers have no distinguished role in the first place but are treated just like any other physical system (that’s why John Bell called them “quantum theories without observer”). In doing so, both Bohmian mechanics and the Many-Worlds theory use, of course, an objective wave function that describes the experiment in its entirety. If the authors assume, instead, that wave functions describing the state of quantum systems are subjective, defined relative to different observers (and mind you, some of the “observers” are just photons in this case!), it is not at all surprising that they end up with inconsistent or observer-dependent facts. They should just not suggest that their experiment provides corroboration for this bizarre and ultimately solipsistic view.
In my opinion, the paper does indeed raise some important questions, though they are mostly sociological ones. For instance: Why does physics tend to get exposure and attention merely for making outlandish claims, regardless of their scientific substance? And why do even many experts tend to abandon rational and critical standards when it comes to quantum mechanics? Why, in other words, have we gotten so used to quantum physics being crazy that even the most outlandish claims come with a presupposition of plausibility and relevance?
As a matter of fact, quantum mechanics can be as clear and rational as any respectable theory that came before it. You just have to do it right.
If There Is No Objective Physical World Then There Is No Subject Matter For Physics
by Tim Maudlin
The MIT Technology Review article that occasions this discussion has the rather astounding title “A quantum experiment suggests there’s no such thing as objective reality”. One could be rightly puzzled about how any experiment could suggest any such thing, since the existence of “objective reality” seems to be a precondition for the existence of experiments in the first place.
The abstract is perhaps slightly more promising: “Physicists have long suspected that quantum mechanics allows two observers to experience different, conflicting realities. Now they’ve performed the first experiment that proves it.” After all, familiar optical illusions permit different observers to “experience different, conflicting realities” in the sense of conflicting apparent realities. Of course, in such a case at least one of the “perceived realities” is indeed illusory, since they cannot both be veridical and also conflicting on pain of violating the Law of Non-Contradiction.
But further perusal of the article dashes any hope of anything comprehensible in this way. The experiments in question are done on a system composed of only six photons. Obviously the photons do not experience anything at all, much less conflicting realities. What in the world is going on?
In short, the way that this experiment is described—in terms of its significance—is complete nonsense. Physicists have become accustomed to spouting nonsense when quantum mechanics is the subject of discussion, which often takes the form of mindblowing assertions about the loss of “classical reality” or even “classical logic”. The reason we know that all of this is nonsense right off the bat is that the experimental predictions of standard quantum mechanics can be accounted for—in several different ways—by theories that postulate an objective, unique physical reality governed by definite laws and using only classical logic and mathematics. So when the sorts of claims made in the title and abstract of the article are made, one knows immediately that they are unjustified hype.
But surely some sort of interesting experiment was done! Yes, indeed. The experiment is of the same general sort as has been done for the last half-century, beginning with John Clauser and Alain Aspect, and continued by many other experimentalists including Anton Zeilinger. All of these are usually, and accurately, described as tests of violations of Bell’s Inequality, the epochal discovery of John Stewart Bell. What Bell showed is that certain correlations between the outcomes of distant experiments cannot be predicted or explained by any theory that satisfies a certain precise locality condition—a condition one would expect any fundamentally Relativistic theory to obey. The fact that quantum theory predicts violations of Bell’s Inequality has been called quantum nonlocality, and the increasingly precise and exacting experiments done over the past half-century have all confirmed the quantum predictions, as does this experiment.
All of this is even spelled out in the article itself: “But there are other assumptions too. One is that observers have the freedom to make whatever observations they want. And another is that the choices one observer makes do not influence the choices other observers make—an assumption that physicists call locality.” That is, in order to account for the outcome of this experiment, one has to deny that physical reality is local in Bell’s sense. (This gloss on the locality condition is not accurate, but leave that aside.) That is something we have known for 50 years.
What about “objective reality” and “Wigner’s friend” and whatnot? Well, the nonlocal theories that we have—pilot wave theories such as Bohm’s theory, objective collapse theories such as the Ghirardi-Rimini-Weber theory, and the Many Worlds theory of Hugh Everett—all postulate a single objective reality. In the proper sense of “conflicting”, none of them allow for observers to observe “conflicting realities” (although in the Many Worlds theory observers have experimental access only to a small part of the objective reality). And of course, all of these theories are nonlocal, as Bell requires.
Now suppose that, for some obscure reason, one were dead-set against accepting Bell’s theoretical work and all of the experiments that have been done. Suppose, in other words, one were dead-set on maintaining that the physical world is local in the face of all the experimental evidence that it isn’t. How might that be done?
It seems rather desperate but I suppose one might go so far as denying the very existence of any objective physical reality at all. Or, as I sometimes put it, “Nothing really exists, but thank God it is local”. But as should be obvious, this accomplishes nothing. If there is no objective physical world then there is no subject matter for physics, and no resources to account for the outcomes of experiments.
There are many good books that correctly and clearly exposit the situation, including David Albert’s Quantum Mechanics and Experience, Travis Norsen’s Foundations of Quantum Mechanics, Peter Lewis’s Quantum Ontology, Jean Bricmont’s Understanding Quantum Mechanics and Quantum Sense and Nonsense, and (coincidentally) my own Philosophy of Physics: Quantum Theory, which happens to go on sale on March 19.
Objective reality is safe and sound. We can all sleep well.
Quantum Theory Confirmed Again
by Wayne Myrvold
Headline news! Stop the presses! A group of experimenters did an experiment, and the results came out exactly the way that our best physical theory of such things says it should, just as everyone expected. Quantum Theory Confirmed Again.
That’s what actually happened, though you’d never know it from the clickbait headline: A quantum experiment suggests there’s no such thing as objective reality [1].
The experiment [2] was inspired by a recent paper by Časlav Brukner, entitled “A No-Go Theorem for Observer-Independent Facts” [3]. The abstract of the paper reporting on the experiment proclaims, “This result lends considerable strength to interpretations of quantum theory already set in an observer-dependent framework and demands for revision of those which are not.”
Here’s a nice fact about claims of this sort: when you see one, you can be sure, without even going through the details of the argument, that any conclusion to the effect that the predictions of quantum mechanics are incompatible with an objective, observer-independent reality, is simply and plainly false. That is because we have a theory that yields all of the predictions of standard quantum mechanics and coherently describes a single, observer-independent world. This is the theory that was presented already in 1927 by Louis de Broglie, and was rediscovered in 1952 by David Bohm, and is either called the de Broglie-Bohm pilot wave theory, or Bohmian mechanics, depending on who you’re talking to. You can be confident that, if you went through the details of any real or imagined experiment, then you would find that the de Broglie-Bohm theory gives a consistent, observer-independent, one-world account of what happens in the experiment, an account that is in complete accord with standard quantum mechanics with regards to predictions of experimental outcomes.
There are other theories, known as dynamical collapse theories, that also yield accounts of a single, observerindependent reality. These theories yield virtually the same predictions as standard quantum mechanics for all experiments that are currently feasible, but differ from the predictions of quantum mechanics for some experiments involving macroscopic objects.
Much of the confusion surrounding quantum mechanics, which leads smart people to say foolish things, stems from the fact that, in the usual textbook presentations, we are not presented with a coherent physical theory. Typical textbook presentations incorporate something that is called the “collapse postulate.” This postulate tells you that, at the end of an experiment, you dispense with the usual rule for evolving quantum states, and replace the quantum state by one corresponding to the actual outcome of the experiment (which, typically, could not have been predicted from the quantum state).
If we want to apply the collapse postulate, we need guidance as to when to apply it, and when to use the usual quantum dynamics. Standard textbooks are invariably vague on this. In practice, this vagueness tends not to matter much. But a thought-experiment devised by Eugene Wigner [4] imagines a situation in which it does matter. Brukner’s thought-experiment is a combination of Wigner’s thought-experiment and tests of Bell inequalities.
Brukner’s version of the thought-experiment involves a pair of hermetically sealed labs, each containing an observer playing the role of Wigner’s friend, and an observer outside each of these labs. Each outside observer has a choice of experiments to do. One choice of experiment amounts to asking the friend what result was obtained, the other, to the sort of experiment Wigner is imagined to do. Brukner considers a situation in which an assumption of locality would entail the existence of preexisting values for the results of both experiments, which are merely revealed if the experiment is done. His thought-experiment involves an entangled state of the labs for which this is in conflict with the quantum-mechanical statistical predictions. But we already know that any theory that reproduces the probabilistic predictions of quantum mechanics is going to have to reject any locality assumption that leads to Brukner’s conclusion; this is Bell’s theorem (see [5]). Moreover, in spite of the title of his paper, “A No-Go Theorem for Observer-Independent Facts,” Brukner explicitly mentions both of the ways that we’ve discussed—the de Broglie-Bohm theory, and collapse theories—for there to be observer-independent facts.
If we have a theory that tells us whether the quantum state collapses, and, if so, when it does, then that theory can be applied both to the Wigner-Brukner thought-experiment and to the actual experiment of Proietti et al. The de Broglie-Bohm theory will predict the same thing as standard quantum mechanics for both. Collapse theories predict the result of the Proietti et al. experiment, but predict a departure from the predictions of any no-collapse theory for the full-blown Wigner-Brukner thought-experiment, if it could be realized.
There’s nothing new here, and nothing that prompts revision of any existing theory of quantum phenomena set in an observerindependent framework.
Notes:
[1] “A quantum experiment suggests there’s no such thing as objective reality.” MIT Technology Review, March 12, 2019.
[2] Proietti, Massimiliano, et al., “Experimental rejection of observer-independence in the quantum world.” arXiv:1902.05080v1 [quant-ph].
[3] Brukner, Časlav, “A No-Go Theorem for Observer-Independent Facts,” Entropy 2018, 20(5), 350.
[4] Wigner, Eugene, “Remarks on the mind-body question,” in The Scientist Speculates, I. J. Good (ed.). London, Heinemann, 1961: 284–302.
[5] Myrvold, Wayne, Marco Genovese, and Abner Shimony, “Bell’s Theorem.” The Stanford Encyclopedia of Philosophy (Spring 2019 Edition), Edward N. Zalta (ed.).
Discussion welcome.
Very well written, and thought provoking
Agreed.
Ouch! The only thing I’ll say about the guest author essays is that none of the authors, apparently, ever had a philosophy tutor who taught them the notion of charity and the charitable argument.
In the interests of being charitable, or at least amiable, I’d like to suggest Jeffrey Bub’s perspective as one that I found worth considering, despite my ultimate disagreement with it: https://arxiv.org/abs/1711.01604
(It is also a bit less heavy going, notationally speaking, than the papers it is commenting upon.)
Out of pure interest, what aspects of Bub’s paper did you disagree with?
For reasons that might be too long a story for this text box, I’m skeptical that the “non-Boolean algebra” approach really puts its emphasis on what is deeply, fundamentally enigmatic about quantum theory, which in turn leads me to suspect that his imagery of the “Boolean macroworld” and all that will ultimately not be satisfactory.
Thanks for the reply
You’re welcome. Happy Friday!
The idea that wavefunctions are subjective is not “ultimately solipsistic”, and it is frankly absurd to say so. Even QBism, the most radical of the interpretations that take a position of that kind, posits a physical world outside the mind of the individual. This is Quantum Foundations 101: There’s plenty about quantum physics that can be objective even though wavefunctions aren’t. And after attending a workshop devoted to the topic of Wigner’s Friend variations (a satellite mini-conference just after the March Meeting of the American Physical Society this month), I think it’s safe to say that the QBist position is that these variations, including the Proietti et al. experiment, don’t tell us more than the violations of Bell inequalities already did.
Thank you for attending this session of “a person transparently promotes their own preferred interpretation of quantum mechanics in the comment section, because most everyone in the original post was doing the same thing”.
This repeated trope about promoting a particular interpretation is off the mark.
The article explicitly claims that the results of a certain experiment both suggest that there is no objective reality and prove something or other—it’s really not clear what—about Wigner’s thought experiment.
Both of these claims are simply false. They are proven to be false by explicit counterexample. Indeed, by several different counterexamples.
That’s the point.
Thank you to Justin and the authors for such a wonderful treat!
I’m trying to figure out how to apply these comments about Brukner’s thought-experiment to the earlier one by Frauchiger & Renner (https://www.nature.com/articles/s41467-018-05739-8). I think that some physicists/mathematicians have characterized the F&R result as showing that “quantum mechanics is inconsistent” in that they derive two incompatible probability assignments. Since people here seem to have thought a lot about extended Wigner’s-friend cases, I just wondered if anyone might have a succinct way of characterizing what’s wrong with F&R’s reasoning?
The “self-consistency” condition that Frauchiger and Renner impose is both mislabeled and not really well motivated. As originally written, it would, for example, rule out special relativity, in which each observer can say that her clock is ticking faster than the clock of a time-dilated partner.
Frauchiger and Renner wrap a Wigner’s Friend thought-experiment around a no-hidden-variables argument due to Hardy, while Brukner did so for a no-hidden-variables argument based on the Bell–CHSH inequality.
Dustin, along with Mario Hubert, has a nice discussion of the FR argument here: https://www.nature.com/articles/s41598-018-37535-1
Basically, what FR do is allow the Friends to use a collapsed state, while the outside observers use uncollapsed states, and combine inferences based on the two different state assignments, to get an inconsistency.
The argument doesn’t spell trouble for anybody’s take on QM, that I know of. But suppose that someone were to
advance a position that replaces Heisenberg’s “movable cut” between quantum and classical with multiple cuts. Each agent is permitted to place herself on the classical side of the cut, even if other agents include her as part of a quantum system. Moreover, one is not required to maintain a single cut in reasoning about a system, but may freely combine inferences based on multiple placements of a cut. Call this position Copenhagen 2.0, to distinguish it from anything that Bohr or Heisenberg might have endorsed. It is this position that is shown to be incoherent by F&R’s reasoning.
Thanks for the responses, both of you! Quite helpful
I might be completely wrong here, but does Frauchiger–Renner even imply those Copenhagen 2.0 interpretations are wrong?
From reading Richard Healey’s “Quantum Theory and the Limits of Objectivity” I would have thought not, because the FR paper requires the Superobservers to hold to what he calls Intervention Insensitivity (in short ignore entanglement among the superobservers), which seems invalid given that they are using an entangled state to model the observers.
What I called “Copenhagen 2.0” is not Richard Healey’s view or Jeff Bub’s view or anyone else’s. It is a fictitious position that nobody has actually advocated. The FR thoughtexperiment does not spell trouble for any view that anyone has actually advanced, as far as I know.
I just read Chapter 11 of Healey’s book “The Quantum Revolution in Philosophy” and realized I had misunderstood him and that it isn’t Copenhagen 2.0 as you say.
I thought he allowed the observer to have a collapsed state for the state of his lab, while permitting the superobserver to have the lab in superposition.
Whereas in fact he only allows the observer to have a collapsed state for the “system + device”, which due to decoherence is compatible with the superobserver’s use of superposition for the lab = “system + device + lab environment”.
Thanks for that.
Based on his talk at the APS March Meeting and again at the satellite workshop, I suspect that I disagree with Brukner about how much this genre of thought-experiment can tell us above and beyond what the old thought-experiments did. However, I think the “in spite of the title of his paper” bit is a little unfair. All theorems have assumptions, and he says straightforwardly how the different clades of interpretations part ways with one or another of his assumptions. Collapse models violate the first, nonlocal hidden-variable theories quarrel with the second, and ‘t Hooftian superdeterminism would violate the third. This is, I think, a fair presentation on his part. In addition, Brukner carves up Bell’s assumption of “local causality” into three subcriteria, which has been a standard kind of move since, oh, Jarrett in the early 1980s at least. (There may be obscure precedents that I’m not aware of.) What Brukner calls “locality” is roughly what might elsewhere be called “parameter independence”, and he considers it a separate postulate from what he calls the existence of “observer-independent facts”. The latter is a misnomer, since it only concerns a subset of what one could more generally call “facts”; it is also the assumption which essentially demands a hidden-variable account of the situation, thus setting up a Bell–CHSH inequality for quantum theory to violate. Mathematically, what he demands with that assumption is that the truth values for the propositions regarding all measurement outcomes of all observers fit into a single Boolean algebra. Of course, nobody ever got a glowing write-up in the MIT Technology Review for rejecting the premise of a Boolean event algebra.
As soon as I saw the original headline I thought, “Well, that can’t be what happened.” Then I thought, “Wayne Myrvold will have a good explanation of this.” And here it is!
Not to disagree with any of the commentaries but just to ride a current hobbyhorse of mine a little: there is, of course, an issue as to how one should understand ‘objective’ in this context. So, just to give a little history: Wigner proposed his ‘friend’ thought experiment in the context of his attempt to cement into history his version of the ‘orthodox’ interpretation of QM, over that of Bohr (at least according to Freire Jr in his book Quantum Dissidents). As part of that attempt, he effectively co-opted London and Bauer’s ‘classic’ treatment of the measurement problem that invokes consciousness: London, F. and Bauer, E. (1939/1983), La Théorie de L’Observation en Mécanique Quantique, Hermann (in J.A. Wheeler and W.H. Zurek (eds.), Quantum Theory and Measurement, Princeton University Press, 1983). This is often mistakenly taken to be a mere summary of von Neumann’s treatment. However, London, perhaps best known in physics for his work on superconductivity and superfluidity (and also for giving the first quantum mechanical treatment of chemical bonding, with Heitler) was also a student of Husserlian phenomenology (see Gavroglu’s excellent biography). And in the little pamphlet with Bauer, they write that understanding the concept of objectivity in the context of QM involves ‘… the determination of the necessary and sufficient conditions for an object of thought to possess “objectivity” and to be an object of science’ (1983, p. 259). They then continue, ‘… Husserl … has systematically studied such questions and has thus created a new method of investigation called “Phenomenology”’ (ibid.; here they refer to both the Logical Investigations and the Ideas). The classical concept of objectivity is dismissed as ‘useless and even incorrect, [generating] actual obstacles to progress’ (ibid.) and they insist that it is the phenomenological concept which is now sufficient for physics’ needs.
What Wigner did by co-opting the London and Bauer approach, whilst ignoring the phenomenological underpinning, was to contribute to what Tom Ryckman in his book The Reign of Relativity calls the ‘effacement’ of certain strands of philosophical thought (typically phenomenological and neo-Kantian strands) from the history of work on the foundations of physics.
This is all by way of report, not endorsement; but declarations to the effect that of course there’s an objective reality, else what would be the subject matter of physics, and the like, are perhaps a little simplistic. And how the phenomenological approach treats the Wigner’s friend experiment and its more recent iterations is an interesting exercise – but not one I’ll pursue here!
Well, while Wigner might have understood “objectivity” in the phenomenological sense, it seems likely that Proietti et al. do not understand it in that sense, but in the usual sense of mind-independence.
Sorry – my point was that Wigner *stripped out* the phenomenological sense! There is no indication that he adopted London’s Husserlian line.
And of course Proietti et al do not understand it a la London – who does these days?! I was simply observing that whether one takes this result to undermine the notion of ‘objective reality’ or not, depends on how one understands that notion (“well, d’uh!”) and that there lies, in the history of physics itself, an alternative understanding that tends to get overlooked in current discussions.
Hi, Steve,
Thanks for this. My post is a trimmed-down version of what I originally wrote (because Justin asked us to keep these short), and, in the original version (which I may post on my own blog), I mentioned the two readings—von Neumann and London–Bauer—of the collapse postulate.
I’m not sure I’m following what you’re saying, though. You say that “whether one takes this result to undermine the notion of ‘objective reality’ or not, depends on how one understands that notion.” Is that meant to suggest that there is a sense of “objective reality” on which this result undermines objective reality? What would that sense be, and what is the argument?
In order for *this* experiment to undermine objective reality, it needs to do something that previous experiments didn’t. I’m trying to imagine an argument to the effect that there is a sense of “objective reality” such that this experiment undermines it in a way that previous Bell tests didn’t. What would the crucial difference be between the Proietti et al. experiment and previous Bell tests?
Thanks Wayne – good point! I should’ve been clearer … you’re right of course that *this* experiment doesn’t ‘do’ anything different or further. From the perspective of L&B, all measurements in the QM context reveal that the ‘classic’ conception of objectivity is ‘useless and even incorrect’. And for the phenomenologistically inclined, the W Friend TE simply sharpens the point (although the details need to be spelled out; likewise for the F&R iteration). I was just reacting, knee-jerk fashion, to the invocation of ‘objective reality’ as if the notion hadn’t been contested within the history of physics itself (assuming the measurement problem falls under physics of course).
Anyway, I for one would like to see that blog post!
Ok. Here’s the longer version.
https://filosothots.blogspot.com/2019/03/quantum-theory-confirmed-again.html
Nice observation! I remember not being 100% sold on The Quantum Dissidents (the example that springs to mind this morning was a biographical claim about John Wheeler being shaky), but that point sounds right. I definitely agree that “how the phenomenological approach treats the Wigner’s friend experiment and its more recent iterations” would be a topic worth exploring.
I’m no expert, but I wonder how such expressions as “experiment” and “experimental result” are to be construed if there is no objective reality.
If “There’s No Such Thing As Objective Reality”, then there might be multiple subjective realities.
At least one of those “Subjective Realities” could contain an “Objective Reality”.
Tim Maudlin asked: “Suppose, in other words, one were dead set on maintaining that the physical world is local in the face of all the experimental evidence that it isn’t. How might that be done?”
It can be done, simply by exploiting the very rarely discussed, but very fundamental loophole, pointed out 40 years ago by d’Espagnat: that Bell’s theorem and all related theorems are based on the assumption that “Whereas previously A+ was merely one possible outcome of a measurement made on a particle, it is converted by this argument into an attribute of the particle itself.” In other words, it is being assumed (as self-evidently true) that the ability to “merely” perform some number of measurements on some entity necessitates the existence of an equal number of independent attributes embodied within that entity. But that is a logical impossibility for an object that manifests only a single bit of information as defined via Shannon’s Capacity theorem. Consequently, it is easily demonstrated that classical objects, constructed to manifest only a single bit of information, behave just like quantum objects, when subjected to Bell tests. See my comment here, and the links contained therein, for further insights into this long-neglected fact: https://disqus.com/home/discussion/societyforscience/beyond_weird_and_what_is_real_try_to_make_sense_of_quantum_weirdness/#comment-4272228526
This claim is incorrect.
In the GRW theory, for example, when a “measurement” (not an accurate description) is made on a particle, there are usually a number of different possible outcomes that can arise via the fundamental (stochastic) dynamics. These various possible outcomes, in that theory, are exactly *not* converted into attributes of the particle itself: two particles in precisely the same initial state can display different outcomes of the same experiment, even though there was *no* antecedent difference between them. And the outcome carries very little Shannon information about the antecedent state. (To explicitly quantify this, we would need a probability distribution over a set of possible initial states.)
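(A toy sketch of the point, in Python and purely illustrative: this is mere Born-rule sampling, not GRW’s actual stochastic dynamics, but it shows outcomes scattering across identically prepared systems with no antecedent difference to “call”.)

```python
import random

random.seed(42)  # arbitrary seed, for reproducibility only

def measure_equal_superposition():
    # Born-rule sampling for an equal superposition:
    # outcome +1 or -1, each with probability 1/2
    return +1 if random.random() < 0.5 else -1

# Every system below is prepared in exactly the same state, yet the
# outcomes differ; there is nothing in the preparation for any single
# outcome to be "right" or "wrong" about.
outcomes = [measure_equal_superposition() for _ in range(1000)]
ups = outcomes.count(+1)
```

Over many runs `ups` hovers near 500, but no prior “attribute” of any individual system decides its individual outcome.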
The idea that the outcome must somehow be attributed to the antecedent state of the particle would be an assumption of determinism, and Bell makes no such assumption. So denying the assumption is not a way out of Bell’s conclusion. GRW denies the assumption explicitly, yet still is a nonlocal theory in Bell’s sense.
What is true, as EPR pointed out, is that if one does start out by demanding locality, then just the EPR correlations (which do not violate Bell’s inequality) force you into determinism. That is why Bell, starting where EPR left off, immediately can go from locality to determinism: he assumes that the reader is already familiar with the EPR argument. And then he can prove his inequality, using the locality assumption again.
Shannon information theory can be used to quantify the minimal amount of information—on average—that must be superluminally transmitted from one side to the other in order to violate Bell’s inequality and recover the quantum predictions. I inaugurated this line of research in my book Quantum Non-Locality and Relativity, and some further work on it has been done since then. But none of this provides any way around Bell’s result.
That claim is incorrect.
“there are usually a number of different possible outcomes that can arise via the fundamental (stochastic) dynamics.” Indeed there are. The point is, many of them are incorrect – they called the state “up” when it was actually “down” and vice versa. This result is unavoidable, when there is only a single bit present. A coin will be detected as being either heads or tails, if you are forced to “call it”, but when you are also being forced to occasionally observe it edge-on, which is the entire point of every Bell test, then it is inevitable that you will make many bad calls. There are no hidden variables – there are no variables at all. In the case of two entangled coins, that have severely limited extent, bandwidth and signal-to-noise ratio which thereby limit their combined information content to a single bit, then a Bell test either gets the same result for every pair of measurements, or else a bad call MUST have been made. The point is, there is no possibility whatsoever of the measurements ever differing, except when a “bit error” has been made in the measurement of a nearly “edge-on” coin. Bell naively assumed that measurements actually correspond to something, other than just noise intrinsic to the object being measured.
Hi Rob,
You’ve given incredible insights in your comments here, elsewhere, and in your online papers.
Would you consider private email correspondence with me? I would like to ask you some questions for my own satisfaction. ptf4242″at”gmail.com.
Thanks,
M
In the GRW theory there are no “bad calls” because there is nothing to call: the antecedent state simply does not predispose the system to one outcome rather than another. If two systems in the same initial state yield different outcomes, it is not that one is “right” and the other “wrong”: there is nothing to be right or wrong about.
In Bohmian mechanics, similarly, no result is ever “wrong” or a “bad call”. That terminology just makes no sense. The initial configuration and the initial wavefunction predetermine the final configuration and the final wavefunction.
All of this talk about “bandwidth” and so on is just off topic. And Bell made no error or mistaken assumptions.
There are more things in heaven and earth, Tim, Than are dreamt of in your philosophy.
For generations, physicists have sought to determine if quantum theory is complete. But the problem is, it is their characterization of the classical realm that is incomplete. Bell’s theorem fails to account for the behavior of the one and only type of classical object, at the heart of the problem. The supposed discrepancy between quantum and classical behavior has nothing to do with GRW theory, Bohmian mechanics, or any other conception of the quantum realm. The problem lies within Bell’s incomplete and inadequate characterization of classical entities. “Bad calls” (bit errors) may indeed, as you say, make no sense when applied to existing quantum theory. But they make perfect sense in the classical realm – a fact that Bell utterly failed to ever consider in his theorem’s characterization of all the possible, classical, correlation statistics, that need to be compared with quantum predictions. Classical objects, manifesting only a single bit of information, exhibit behaviors, unlike anything that Bell ever dreamt of, in his characterization of the classical realm. World War II era RADAR signal detection techniques, when applied to such objects, reproduce the same type of correlations observed in Bell tests of quantum objects.
Bell proved a theorem. It nowhere even deploys the concepts “classical” or “quantum”, or makes any use of any notion of a “good call” or a “bad call” or any concepts from Shannon’s theory of information.
Bell proved that no local theory can predict violations of his inequality for events at spacelike separation.
Experiments have demonstrated violations of his inequality for events at spacelike separation.
Ergo, no local theory can be physically correct.
If you think there is a mathematical error in Bell’s proof, point it out.
Otherwise, take the time to understand what Bell did.
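For what it’s worth, the local bound itself can be verified by brute force. Here is a minimal sketch (Python; illustrative only) that enumerates every deterministic local assignment of outcomes to the two settings on each side:

```python
from itertools import product

def chsh(a0, a1, b0, b1):
    # CHSH combination E(a,b) + E(a,b') + E(a',b) - E(a',b')
    # for pre-assigned outcomes a0, a1 (one wing) and b0, b1 (the other), each ±1
    return a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1

# All 16 deterministic local strategies: each setting gets a fixed ±1 outcome
best = max(abs(chsh(*s)) for s in product((-1, +1), repeat=4))
print(best)  # 2: the CHSH bound for local pre-assignments
```

Stochastic local models are probabilistic mixtures of these sixteen strategies, so they obey the same bound of 2; the observed value near 2√2 is what rules the whole class out.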
The error is in the premise that d’Espagnat noted and that I quoted above, not the logic used to subsequently derive a conclusion from that premise. Take the time to understand why d’Espagnat ever bothered to say what he said (and think about this: measurements that produce bad calls do not correspond to “an attribute of the particle itself”, as the premise assumes). New classical experiments, exploiting this fact, have demonstrated that Bell got it wrong. It has been demonstrated by actual, classical construction, not mathematical theorizing. Q.E.D.
The problem arises precisely because of the deficiency that you just stated: “It nowhere … makes any use of any notion of a “good call” or a “bad call” or any concepts from Shannon’s theory of information.” As a direct result of that deficiency, it is an incomplete description of some of the behaviors, found to exist in the real world. Shannon’s theory was developed, explicitly to deal with such issues.
To elucidate this matter further, d’Espagnat’s quote can be found on the bottom of page 166 of this November 1979 article:
https://static.scientificamerican.com/sciam/assets/media/pdf/197911_0158.pdf
I am perfectly aware of d’Espagnat’s article. I read it when it came out.
He fails to note and properly account for the use that Bell makes of the EPR argument. That argument already shows that just to account for the EPR correlations a local theory must be deterministic. Therefore, Bell starts from that conclusion and asks whether a local theory can account for all the correlations, not just the EPR perfect correlations. He proves it cannot.
d’Espagnat has just made an error here. A lot of this has been straightened out since 1979.
I also read it when it first came out, and first wrote about this problem, nearly thirty years ago, in a book-length rebuttal to Roger Penrose’s 1989 book, “The Emperor’s New Mind.” What you and Bell and Penrose and everyone else have failed to notice is the following, which is directly related to Shannon’s conception of “information”:
Ask yourself what happens if the “noise” on one particle/coin is (or is not) the same as the noise on an “identical” particle/coin; think of minting errors, while attempting to produce “identical” coins.
If the noise (minting errors) differs from coin to coin, then the noise (minting errors) on one coin is an “attribute” of that coin, but NOT any other “identical” coins; you can make REPEATABLE measurements of the “noise” (static minting errors) on an individual coin, but if you attempt the same measurement on an “identical” particle/coin, you will obtain a different result. But if the “noise” (minting errors) is identical for all coins, measurements of “identical” coins will produce identical results.
Thus, in one case, the noise is both an “attribute of a particle/coin” and ALSO an attribute of its entangled, “identical” copy – just as d’Espagnat’s premise requires. But not in the other case.
This is at the very heart of what Shannon meant by a “bit of information”. Bits of “data” only count as bits of “information” when they can be PERFECTLY copied.
For over a century, the physics world has been naively assuming that “identical” elementary particles are identical in the sense that there are no “minting errors”. But Bell tests demonstrate that they are identical, only in the sense that there is only a “single bit of information” present, as an actual “attribute” of any single particle. That is what makes the particles “elementary” in the first place – they only manifest, the least possible, most elemental amount of information – a single element/quantum of information. That is what “quantization” is all about. This is the ultimate reason for the existence of the Heisenberg Uncertainty Principle – you cannot reliably measure two things (variables) when there is only one thing (a single bit of information) present.
That is the origin of the seemingly “weird” statistical correlations. It has nothing to do with spooky action at a distance, hidden variables, nonlocality, many-worlds or any of the other absurd interpretations of quantum theory. It all has to do with the very nature of one of Shannon’s bits of information – a quantum.
As has been DEMONSTRATED, if you make the noise (minting errors) different between one coin and its entangled pair, you reproduce “quantum correlations”, but if you make the noise (minting errors) the same, the quantum correlations vanish and are replaced by the familiar classical correlations. Quantum correlations only appear when there is a single quantum of information (a single bit, not a qubit) being manifested, between the entangled “identical” pair, in a Bell test.
As an aside, this is also connected to a recently proven math theorem: https://www.quantamagazine.org/math-proof-finds-all-change-is-mix-of-order-and-randomness-20190327/ . A single, isolated bit of Shannon’s “information” cannot exist without noise being present, for the simple reason that more than one bit would always be reliably recoverable, if the noise is reduced.
To repeat: Bell’s theorem has zero to do with the concept of “noise” (which is an information-theoretic notion) or with Shannon information. You have two (or three) separated labs. Experimenters have control over whether the equipment in each lab is set in either of two (or three) ways. For each setting, an experiment is done and a binary outcome recorded.
If the underlying physics is local, there are limits to the sorts of observed correlations among these outcomes. Quantum mechanics predicts violations of these limits and—the important thing—violations are observed in the lab.
Ergo, actual physics cannot be local.
Shannon and “noise” are nowhere in sight.
Bear with me here Tim. I am trying to rouse you from your dogmatic slumbers, before Penrose’s “Quantum Weirdness” Brain Syndrome renders you totally comatose. Bell, like all physicists, has assumed that “identical” particles possess an infinite number of bits of information, that due to “quantum weirdness”, cannot be recovered – only a single bit can be recovered. Here is a typical quote regarding this: “However, the quantum state of a two-state system cannot be measured. Instead, the outcome of a measurement is always one of the two classical states. So despite in some sense containing an infinite information potential, only a single bit of information can actually be extracted from a qubit.” The quote is from here: https://ndpr.nd.edu/news/philosophy-of-quantum-information-and-entanglement/ But the underlying assumption, that mother nature knows how to manufacture such absolutely perfect copies, is a ludicrous idealization. In reality, the reason you can only extract a single bit, is because that is all that is present. Stare at the two figures on the first page of this ( http://vixra.org/pdf/1609.0129v1.pdf ) until you comprehend the true nature of reality; “identical” particles are not that identical.
Then stare at the figure here ( http://vixra.org/pdf/1804.0123v1.pdf ) until you comprehend that a one-time pad is only an unbreakable encryption when the noise used as the pad is used only once. The incorrect conception of identical particles, at the foundation of Bell’s theorem, makes the same mistake as a crypto agent using the same random noise sequence for two messages, thereby “entangling” them in a manner that introduces correlations that enable the code to be broken, in spite of the fact that a truly random sequence (not just a pseudo-random one) was used. Think about it.
I have correctly described Bell’s result and demonstrated that your invocation of informationtheoretic notions is a red herring. You respond with insults. That says it all.
Admittedly, when I heard about this MIT promo piece from a friend who directed my attention to your site, I got so agitated that I felt an immediate urge to write an arXiv piece to contest the claims. Fortunately, I had to write two referee reports I had promised for a long time instead 😉
But after reading the guest authors’ opinions I am relieved — happy to see that not everybody believes any piece of opinionated, emotionalized and magically glowing quantum hocus pocus marketing … (please see my respective emanations at URL http://live.ioppp01.agh.sleek.net/2017/05/25/delivering-on-a-quantum-promise/ and http://doi.org/10.3354/esep00171 ).
Just a few side notes as addenda to these opinions:
*) Barrett, e.g. in his SEP entry @ URL https://plato.stanford.edu/entries/qm-everett/ calls the Everett–Wigner scenario “nesting” and seems to believe that Everett had it before Wigner: “The problem with the theory, Everett argued, was that it was logically inconsistent and hence untenable. In particular, one could not provide a consistent account of nested measurement in the theory. Everett illustrated the problem of the consistency of the standard collapse theory in the context of an “amusing, but extremely hypothetical drama” (1956, 74–8), a story that was a few years later famously retold by Eugene Wigner.”
Purely from time-ordering, t(Everett) = 1956 < t(Wigner) = 1961, but who knows? Btw, I sat at Wigner's table (together with Dirac and Wigner's sister 😉 in Sicily in 1984, and did not care to ask him if his opinions changed; shame on me; but at least I want to advertise Dirac's forgotten rant about the futility of war @ URL https://doi.org/10.1142/9789814536608 .
*) the perplexity of nesting has two sources:
+) one is the apparent inconsistency between the reversible "permutative" unitary evolution of a quantum state — as mentioned by Barrett and quoted by Everett to have been already pointed out in von Neumann's "Grundlagen" (btw, very badly translated into English; and nobody reads von Neumann anyway); but also present in Schrödinger's "gegenwärtige Situation" triple paper;
+) as mentioned in particular also by Dustin Lazarovici, it is the coherent superposition, so frighteningly attenuated by Schrödinger in his cat section of the aforementioned triple paper — in his later Dublin seminars Schrödinger became more concerned with "quantum jellification"; the non-observer becoming jellyfish without measurement (but how, with reversible evolution?) …
*) and finally, I believe that the message of quantum nesting is that Wigner's friend, if ridiculously well isolated, would indeed "experience" (however that would feel) himself (I refuse the PC "her" and thereby risk extinction) to be superposed between whatever states, even dead or alive. Of course the latter condition has practical limits, as he would need to breathe and get rid of stuff. It's like going through a double slit without which-way detection. Or he would first measure, and then quantum-erase the measurement, as Shimony commanded: "unthink"!
In that way, I cannot see the slightest difference between Wigner's friend's and Wigner's own perception of "reality" (whatever that may be). Alas, who knows; both of them might feel the sense of dizziness that comes when one forces a quantized system to answer questions it is not prepared to answer with certainty (cf. http://doi.org/10.1063/1.4931658 )?
*) and finally, when it comes to sensationalism: in my observation, it is almost impossible to overestimate narcissistic motives in scientists. They would claim almost anything (up to complete self-mutilation) to get a big piece of the attention cake.
Ah, and I forgot to mention that detector clicks sometimes mean very different things to different people; e.g., see the "a posteriori" teleportation debate that took place at a conference in Vienna and in Nature: https://doi.org/10.1038/29674 and the reply https://doi.org/10.1038/29678
I wish someone would explain what is actually happening in this experiment. It looks to me like an exercise in three-photon-pair entanglement (has this been done before?), in which the pairs get further entangled using two fusion gates. If I understand the Supplement of the paper, two of the photons go to detectors (the "friend's measurements"), while the remaining four are entangled with each other. (Can you entangle four photons using a property with only two states?)
I don’t understand how you can get any information from entangled photons from both sides of the experiment. They claim the information about the particle in the detector is “copied”, but it was copied from both sides, and is now in both states at once. Can someone help me with this?
Thanks.
Did my reply to Tim Maudlin on the morning of March 31 get snagged by your spam filter?
It had. It should be visible now. Thanks for letting me know.
You should consider publishing this content to LinkedIn. Well done and a great synthesis of views on the subject.
To LinkedIn and Medium (if you haven’t done so already). Cheers!
I hope, but I am not sure, that reality will soon be accepted as the foundation of all science, not just in physics. That QM is used to declare objective reality dead is rooted in, among other things, Bohr's and Heisenberg's idealism, inspired by Kant, Goethe, and Kierkegaard. Heisenberg was very clear about that in 1927: realism, materialism, and determinism, he claimed, had been proved wrong. De Broglie's realistic QM was ignored by the Copenhagen-interpretation majority. Of course, ideologically this was also a reaction to rationalism and the First World War. Later, in the sixties, anti-realistic ideas were popular again ("nothing is real"), and today, in our postmodern world, we construct our own reality; in quantum physics anti-realism is still very strong, as we see in this investigation. Therefore I am happy with many of the comments above. 't Hooft, Valentini, Penrose, Lee Smolin, and many others also give hope for a change towards reality in physics.
For another view of how Realism vs. Idealism is at the foundation of all the supposed weirdness, in the misinterpretations of QM, see my comment here: http://vixra.org/abs/1609.0129#comment4458485189
Mr. Rob McEachern, have you actually read what Bohr and Heisenberg said about QM? The "supposed weirdness" was there from the beginning, at the very heart of QM. Most scientists do not think deeply about that, and just use QM as a good statistical calculator (which is also almost the only way to get a job as a scientist). But statistics cannot be the foundation of science. Thanks to the few scientists who now dare to seek a reality behind the statistical QM.
Yes, I have read those authors. The weirdness was there from the beginning, just like the leaning tower of Pisa’s bad foundation was there from the beginning; at the heart of the math being used to describe reality, but not at the heart of reality itself. The classical world exhibits the exact same types of behavior, but only if you know both how and where to look for it (an issue debated by the ancient Greek philosophers). But physicists have never known that; that their problem results from a small information content, rather than a small physical size. They idealize “identical” particles, with infinite information content, and then wonder why they can never actually observe such a vast content, when they dig down and look for it. But there is no dough in a doughnut hole.
As the saying "there are lies, damned lies, and statistics" implies, the problem with statistics resides entirely within their interpretation, just as in quantum theory: it is analogous to computing the statistical center-of-mass of a doughnut and then becoming mystified by the fact that you cannot actually find any mass at all at the location of that precisely computed center-of-mass. Small sets of mathematical equations do not contain enough information to completely describe anything other than the most trivial physical situations: those with only a tiny information content, corresponding to the information capacity of the equations themselves. In order to use equations to accurately describe anything useful about a complex situation, the complexity of the situation must be greatly reduced, to correspond to the capacity of the equations being used to describe it. That is the role played by statistics. At the dawn of the quantum age, physicists employed "magical thinking": the belief that clever mathematical techniques, like Fourier analysis, could be employed to automatically solve "the problem", thereby relieving them of ever having to actually think about it. But their lack of any intuitive understanding of the behavior of such techniques has led them so far astray that they have never recovered from that initial misstep, the wrong path taken. You cannot use a single wavefunction to describe "everything", because that inevitably removes all constraints on the solution, such as its highly undesirable behavior in the presence of any noise or errors (the origin of the belief in quantum vacuum fluctuations). Classically, a separate equation must be used to "track" the trajectory of each particle, if you ever wish to describe the individual, non-statistical motion of each one.
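The doughnut center-of-mass remark is easy to check numerically (a toy illustration only, nothing to do with quantum theory itself): for mass spread uniformly on a ring, the computed center-of-mass sits exactly where there is no mass at all.

```python
import numpy as np

# Mass points distributed uniformly on a unit ring (a "doughnut" seen from above)
theta = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
ring = np.stack([np.cos(theta), np.sin(theta)], axis=1)

center_of_mass = ring.mean(axis=0)   # the statistical average position
nearest_mass = np.min(np.linalg.norm(ring - center_of_mass, axis=1))

print(center_of_mass)   # essentially the origin: (0, 0)
print(nearest_mass)     # ~1.0: no actual mass anywhere near the computed center
```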
Unfortunately for the history of science, physicists in the 1920s believed that Fourier analysis would "take care of all that" automatically, because, after all, with an uncountable infinity of adjustable parameters, Fourier analysis can, and will, "fit" anything. So they blissfully assumed it would automatically fit reality. It does, just not their idealized, error-free vision of reality. Unfortunately, fitting the real, non-ideal reality includes perfectly fitting every error. So now we have "the most accurate theory" ever devised, precisely because it has no fixed "model of reality" at all: the Fourier analysis will simply and automatically change the model of reality it creates, to perfectly match the ever-changing observations, including all the changing errors. Claude Shannon figured this all out seventy years ago; dealing with this problem is what his Information Theory is ultimately all about: modeling reality sans all the errors. So, while communications engineers have revolutionized modern communications by exploiting Shannon's insights, the physics world has never even tried to understand what Shannon was actually saying, because they thought they already knew what he was saying: just another mundane way of talking about entropy. But information is not the same as entropy.
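The "fits anything, errors included" point can be illustrated numerically (a toy sketch with made-up data, not a claim about any physical theory): a full Fourier basis with as many coefficients as data points reproduces every sample exactly, noise and all, whereas throwing most coefficients away recovers the underlying signal.

```python
import numpy as np

rng = np.random.default_rng(1)

# A smooth "true" signal plus measurement noise
n = 256
t = np.arange(n) / n
signal = np.sin(2 * np.pi * 3 * t)
noisy = signal + 0.3 * rng.standard_normal(n)

# Full DFT: as many adjustable coefficients as data points
coeffs = np.fft.fft(noisy)
full_fit = np.fft.ifft(coeffs).real

# The full-basis "fit" reproduces every sample, errors included
print(np.max(np.abs(full_fit - noisy)))   # ~1e-15: a perfect fit of the noise

# Keeping only a few low-frequency terms recovers the signal instead
truncated = coeffs.copy()
truncated[8:-7] = 0.0
smooth_fit = np.fft.ifft(truncated).real
print(np.sqrt(np.mean((smooth_fit - signal) ** 2)))   # far below the noise level
```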
In regard to the statement that "the Fourier analysis will simply and automatically change the model of reality it creates, to perfectly match the ever-changing observations", think of Everett's Many-Worlds Interpretation of QM and the Multiverse.
It is a strange discussion. It is about objectivity in relation to quantum mechanics, not in relation to "ultimate reality". Nevertheless, when we speak about "objectivity" we mostly mean reality itself: the dynamical structure that creates observable reality, although we don't understand exactly how observable reality is created.
Quantum mechanics originates from the interpretation of experiments at the atomic scale. Actually, it is just phenomenological physics, no matter how "sophisticated" the abstractions used to describe the model are. But phenomenological physics is simplified reality. In physics we know that "tangible" phenomena are concentrations of energy (quanta) that do not exist without a continuous exchange of energy with the vacuum space around them. We cannot even state that a "tangible" phenomenon at position A, transferred to position B, is the same phenomenon. We only know that the properties at position A are the same at position B. Or better: nearly the same ("Subcycle quantum electrodynamics", Nature, volume 541, pages 376–379, 19 January 2017).
What am I missing?
All this "deep philosophising", when it seems to me that these experiments simply imply that 'entanglement' means what QM says it should, nothing more and nothing less. I haven't seen the details of the experiment, but my impression was that the statistics of the measurements represent a measure of the level of entanglement of the measurement systems. This seems perfectly consistent with a single reality.
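For what it's worth, the sense in which "statistics measure the level of entanglement" can be sketched with the textbook CHSH quantity (my own toy illustration; the angles and the noise model are standard assumptions, not taken from the Proietti experiment): for a singlet mixed with white noise, the measured CHSH value scales directly with the visibility of the entanglement, crossing the classical bound of 2 only when enough entanglement survives.

```python
import numpy as np

def chsh_S(visibility):
    """CHSH value for a singlet state mixed with white noise.
    For such a state the two-outcome correlation is
    E(a, b) = -visibility * cos(a - b), and the standard optimal
    angles below give S = visibility * 2*sqrt(2)."""
    a1, a2 = 0.0, np.pi / 2            # Alice's two analyzer angles
    b1, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two analyzer angles
    E = lambda a, b: -visibility * np.cos(a - b)
    return abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))

print(chsh_S(1.0))   # 2*sqrt(2) ~ 2.828: maximal quantum violation
print(chsh_S(0.5))   # ~1.414: statistics compatible with a local model (S <= 2)
```

So the observed statistics do quantify entanglement here, without any further metaphysical commitments.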
What you are missing is the following: QM says nothing at all about its own "meaning", so all the "deep philosophising" is an attempt by various people to impose their own preferred meaning upon the equations of QM, which, in and of themselves, merely describe observable statistical effects, without providing any clue about what underlying cause is responsible for producing those effects.
Where this gets a bit more complicated is that the same people have proven theorems demonstrating that the underlying cause cannot possibly be "reality as we know it", if all the various premises from which their theorems are derived are valid statements about "reality".
But, as I have argued above, the most fundamental, unstated premises underlying those theorems (unstated because people accept them as established facts rather than as idealistic assumptions) are highly unlikely to be valid; and if they are not valid, then it is easy to demonstrate that the "underlying cause" of the seemingly weird statistical effects has a much more mundane origin than all the imaginings produced via the "deep philosophising" of the past century.
It seems that after 100 years the issue still has not been settled. Yet we do have the knowledge now to put the matter to rest, at least for now. The problem lies in the way we think about observing reality, how we perceive it, and how we interpret measurements in controlled environments.
Buddhist philosophers are very clear on this subject. Our sensory apparatus and all attached perceptory, interpretory and cognitive processes are flawed.
Astrophysicists, and astronomers in general, are the only scientists who knowingly expand our experience beyond what our visual sense allows. And we run into dark matter, dark energy, and the black-hole information-loss problem.
Non locality is not the issue here.
Our senses are. Taking a hint from the popular TV series Brain Games, I suggest we make an inventory of all the shortcuts and adaptive, simplifying measures the brain takes to interpret what we see and hear.
Our own mind fools us.
Neuroscience and brain projects around the world are trying to figure out how our brains work, in a manner that will allow neuromorphic cognitive architecture to be imparted on, of all things, quantum computers.
As a mathematician with a keen sense of the essence of Buddhist philosophy, of the all-pervasive ramifications of the Gödel and Skolem theorems, of Bell's proven nonlocality theorem, and of the absurdity of grand unified theories and theories of everything, I suggest we focus instead on making sure that any scientific framework for (physical) reality, including its philosophical, mathematical, and computer-science aspects, sticks to consistency and non-contradictory reasoning, and that we figure out how to incorporate into the pursuit of science the sensory-apparatus limitations that are caused by quantum interactions.
“Our sensory apparatus and all attached perceptory, interpretory and cognitive processes are flawed.”
That claim is founded upon the dubious assumption that our senses were constructed to inform us of "what is actually out there." They were not. As Shannon definitively demonstrated in his Information Theory over seventy years ago, truly successful detection (AKA sensory) processes must be constructed to detect only what they are seeking to find. In other words, our senses are not "designed" to tell us what is "out there", but only to tell us whether what is "out there" matches what they are looking for. Hammers are not "flawed" just because they are ill-suited to the task of driving screws. It is the mental logic of any person wielding a hammer that is flawed, if they attempt to use such a tool for an inappropriate task.
As Edward O. Wilson once stated in his book “On Human Nature”: “the intellect was not constructed to understand atoms or even to understand itself but to promote the survival of human genes.”
Few "sensory apparatus limitations" are caused by "quantum interactions." They are caused by the limitations in information-recovery processes identified by Shannon. In the case of our sense of hearing, you might find my paper (https://vixra.org/pdf/2003.0069v1.pdf) revealing; for example, pitch perception and color perception do not work anything at all like the "spectral analysis" models discussed by physicists and in most textbooks. Pitch and color receptors are not "flawed" spectrum analyzers; they are something else entirely, serving a much more useful purpose than merely attempting to recover enormous amounts of (for the purpose of survival) useless spectral information from either sunlight or environmental sounds. Being able to directly perceive the solar absorption spectrum may be of great interest to a present-day astrophysicist, but eons ago it would inevitably have gotten him eaten by a lion while he stood transfixed, cogitating about the wonderfulness of such a fatally attractive, but ultimately useless, visual capability.
To the great Rob McEachern: I read your exchanges with Tim Maudlin, and I am 100% behind you. My two cents on this: a single non-repeatable event cannot be turned into a "coin toss," or modeled by a mathematical equation.
Another reason I'm with you is that he, Maudlin, and many other "professionals" of his ilk think they know the truth (Bell, QM, etc.), and that they can be objective where others can't.
In reality, they are not only amateur thinkers but dangerous amateur thinkers, for they have failed to realize that from the first day of a person's consciousness, he can no longer be considered "objective." I could write 100 pages on this topic, but this is neither the time nor the place. That's why Gödel was so brilliant, even though he didn't truly realize the implications of what he wrote.
Mr. McEachern, you are a genius, not just in your thinking but in your linguistic prowess. Would love to get in touch with you to discuss your masterful understanding of Shannon’s work. Please contact me at jh 99 fg at yahoo dot com
Can you see it this way: everyone who measures causes an individual collapse of the wave function, as if throwing a stone into the water, so that a wave becomes a drop that splashes up. So everyone brings about their own reality, but everyone is in the same reality.
Excellent comment. I’m going to go a step further and say: the act of measuring is itself a subjective act. You heard it here first!
Not intending to get too philosophical in this space, but true objectivity is a fantasy. In a classical world, we can at least observe with our own eyes (continuity). But to say outright that atoms exist because our detectors say they do is to make a subjective statement.
Yes, we are pretty certain those blinking points of "light" stand for something, and we call them electrons. But if we can't see or track their movement, and can only make a mathematical model of their existence one moment at a time, then a quantized result is what we are going to get. In other words, if our method of detection is quantum, why are we surprised that the results are quantum-based?
But tackling basic foundational issues like this is too much for the "shut up and calculate" crowd. That's why philosophy and philosophers are sorely needed here, because physicists have been mixing apples with orangutans for over 100 years. The result is confusion for some (we the public just don't get it) and preaching by snake-oil opportunists. ("Dude, reality is an illusion and we've got the math to prove it.")
I’ll just leave it at that.