— musings on quantum mechanics, the multiverse, and scientific principles old and new
Christian Jepsen, PhD
Princeton University
Why is there something instead of nothing?
Because there is everything!
This answer to the timeless question posed above is one that an increasing number of physicists have begun to take seriously over the last couple of decades, as the so-called many-worlds interpretation of quantum mechanics has slowly been supplanting the Copenhagen interpretation as the most commonly accepted way of understanding what transpires when a measurement is performed on a quantum system.
Almost anyone who knows anything about quantum mechanics will agree that it’s strange. When subjected to scrutiny, elementary particles behave in ways that defy conventional intuition. Tunneling, superposition, wave-particle duality, entanglement, uncertainty relations — these concepts number among the novel ideas that physicists were forced to embrace during the last century when the framework of classical physics crumbled. The debate over quantum mechanical interpretations revolves around what these phenomena exhibited by elementary particles entail for the macroscopic world of our everyday experience: Do the laws of quantum mechanics apply to systems of any size? Are even macroscopic objects subject to these funky phenomena? The possible answers to these questions lead to drastically different conclusions. In particular, the many-worlds interpretation postulates the existence of a virtually infinite number of parallel realities, a multiverse, which adherents of the Copenhagen interpretation need not subscribe to.

From another perspective, however, the discussion of how to interpret quantum mechanics is a purely hypothetical matter, and many physicists choose to remain agnostic on the question of interpretation, invoking instead the precept “Shut up and calculate”. In the everyday life of a scientist it does not matter which interpretation one chooses to adopt. For all practical purposes, quantum mechanics offers a precise and unequivocal prescription for computing the outcome of any experiment that can feasibly be realized. And yet, the potential existence of a multiverse may for better or worse help shape the future course of fundamental research in physics, as theorists are invoking the multiverse in addressing some of the biggest outstanding questions in science today and attempting to account for the structure of the laws of nature and the distribution of energy and matter throughout space and time. In some ways, the kind of argument involved in this line of reasoning runs counter to conventional scientific thinking, but it also represents a revival of ancient principles of natural philosophy.
Wavefunction Collapse versus Entanglement
The predictions of quantum mechanics are probabilistic in nature. If you measure the position, momentum, or spin of a particle, the outcome is random. But the probability distribution that describes this randomness can be computed precisely. To find the probability of detecting a particle at a given location one must carefully take into account all possible paths the particle could have taken to get there. Where quantum mechanics gets weird is that these possible paths interfere with one another. In a sense, the particle traverses all possible paths as long as you don’t mess with it. But as soon as you perform a measurement, the particle is in one place only.
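To make the probabilistic character concrete, here is a minimal sketch in Python (an illustration added for this text, with made-up amplitudes) of the rule quantum mechanics uses: each possible outcome carries a complex amplitude, the probability of an outcome is the squared magnitude of its amplitude, and a measurement samples one outcome at random from that distribution.

```python
import numpy as np

# Hypothetical two-outcome state with complex amplitudes a and b,
# normalized so that |a|^2 + |b|^2 = 1.
a, b = 1 / np.sqrt(2), 1j / np.sqrt(2)

probs = [abs(a) ** 2, abs(b) ** 2]  # Born rule: probability = |amplitude|^2
outcome = np.random.choice(["outcome 1", "outcome 2"], p=probs)
print(outcome)                      # a single, irreducibly random measurement
```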
An experiment that clearly demonstrates this behaviour is the famed double-slit experiment. Shoot particles, say electrons or photons, one at a time towards a partition that contains two separate slits through which the particles can pass. Let us refer to these two slits as slit 1 and slit 2. If you put a detector right after each of the two slits, you find that each particle registers in one of the two detectors, never both. This is in accordance with the usual notion of particles being little lumps of matter or energy localized in space. But now remove the detectors, put a screen at some distance after the two slits, and measure where the particles strike the screen. You’ll find that the particles preferentially hit certain parts of the screen. In fact, the preferred regions form a ripple-like pattern in accordance with an interference pattern generated by waves emitted from the two slits. And this pattern persists even if each particle is shot through the slits hours or even days after the previous particle: the interference is not between different particles, but rather each particle interferes with itself. But place detectors of any kind to determine which slit each particle passes through, and the interference is destroyed so that the particles will preferentially hit the region of the screen directly ahead of the slits — no ripples in the probability density. Remove the detectors, and the ripples reappear. If you don’t look to see which slit a particle passes through, it passes through both. The formal way of saying this is that the system is in a superposition of states; or, in mathematical notation,

(1/√2) (|particle goes through slit 1⟩ + |particle goes through slit 2⟩)
When you place detectors by the slits, you destroy the superposition and force the system into a definite state, either |particle goes through slit 1⟩ or |particle goes through slit 2⟩. This process of performing a measurement, and thereby singling out a definite outcome from a superposition, is known in the parlance of the Copenhagen interpretation as collapsing the wavefunction of the system.
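The difference between the two setups can be captured in a few lines of code. The sketch below (a toy calculation with arbitrary geometry and units, not a rigorous treatment) adds the complex amplitudes for the two paths before squaring when the slits are unobserved, and adds the probabilities instead when which-slit detectors are in place; only the first case produces ripples.

```python
import numpy as np

# Arbitrary geometry, in units where the wavelength is 1.
k = 2 * np.pi                         # wavenumber
slit_separation = 10.0
screen_distance = 1000.0
x = np.linspace(-300.0, 300.0, 1001)  # positions along the screen

# Distance from each slit to each point on the screen.
r1 = np.hypot(screen_distance, x - slit_separation / 2)
r2 = np.hypot(screen_distance, x + slit_separation / 2)

amp1 = np.exp(1j * k * r1) / r1       # amplitude for the path through slit 1
amp2 = np.exp(1j * k * r2) / r2       # amplitude for the path through slit 2

# Slits unobserved: amplitudes add first, then square -> ripples (interference).
p_slits_open = np.abs(amp1 + amp2) ** 2
# Detectors at the slits: probabilities add -> a smooth bump, no ripples.
p_with_detectors = np.abs(amp1) ** 2 + np.abs(amp2) ** 2
```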
In this example we considered a superposition of single-particle states. But multiple particles can also enter into a superposition. One could perform an experiment where pairs of particles that are bound together by some attractive force are shot at a double-slit. Let us refer to the two particles in a pair as particle A and particle B. Being tethered together, the particles in a pair follow each other through either slit 1 or slit 2, leaving us still with two possible options and resulting still in a superposition of two states, but states which now each involve two particles:

(1/√2) (|particle A goes through slit 1⟩|particle B goes through slit 1⟩ + |particle A goes through slit 2⟩|particle B goes through slit 2⟩)
You don’t know which slit particle A went through, but you know it went through the same slit as particle B. Two particles whose outcomes are connected in this fashion are described as being entangled. While in this example the particles in a pair are tethered together in space, it is possible for particles far apart from each other to be entangled. One example would be a pair of particles with spin. These can interact such that one particle has spin up and one particle has spin down, but which is which is undetermined. The system will then be in a superposition of 1) the state where particle A is spin up and particle B is spin down and 2) the state where A is down and B is up. After interacting, however, the particles can move arbitrarily far away from each other and still remain entangled. But as soon as you measure the spin of one particle, you collapse the wavefunction, and the other, distant particle will also assume a definite spin. It was this kind of phenomenon that Einstein famously referred to as “spooky action at a distance”.
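The perfect anticorrelation of the entangled pair is easy to simulate. In this sketch (assuming, for illustration, an equal superposition of the up-down and down-up states), the joint state of the two spins is a four-component vector, and sampling joint outcomes with the Born rule always yields opposite spins, however far apart the particles may be.

```python
import numpy as np

# Basis order for the pair: |up,up>, |up,down>, |down,up>, |down,down>.
# Equal superposition of "A up, B down" and "A down, B up":
state = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)

probs = np.abs(state) ** 2                       # Born rule for joint outcomes
rng = np.random.default_rng()
for _ in range(5):
    joint = rng.choice(4, p=probs)
    spin_a = "up" if joint < 2 else "down"       # first slot: particle A
    spin_b = "up" if joint % 2 == 0 else "down"  # second slot: particle B
    print(spin_a, spin_b)  # always opposite, never up-up or down-down
```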
Superpositions involving more than two particles are also possible, though experimentally it becomes exceedingly difficult to observe interference between superposed states when large numbers of particles are involved, since the particles must all be kept from interacting with their surroundings lest the interference pattern be destroyed. A milestone was reached at the end of the nineties when scientists succeeded in observing ripples in the probability density of buckyballs, molecules consisting of sixty carbon atoms each. But, experimental hurdles aside, how large can a collection of particles become and still take part in a superposition? Can macroscopic objects, people, cats, dogs, cars enter into a superposition? Well, now we’re touching the heart of the matter: how to interpret quantum mechanics. In the Copenhagen interpretation, there exists some threshold (an energy scale, a mass limit, or something of the sort) at which the wavefunction collapses and superpositions are no longer possible. Not so, says the many-worlds interpretation. There is no limit to superpositions, and there is no such thing as the collapse of a wavefunction. But then what happens in the examples we considered above when a measurement is performed on a particle and a definite outcome is observed? The answer: The observer becomes entangled with the particle. If detectors are used to determine which slit a particle traverses in a double-slit experiment, the state of the system does not collapse down to |p. through s. 1⟩ or |p. through s. 2⟩. Instead, using the abbreviation o. for observer, we have a new superposition:

(1/√2) (|p. through s. 1⟩|o. sees p. through s. 1⟩ + |p. through s. 2⟩|o. sees p. through s. 2⟩)
The observer becomes part of the superposition! We see now how the assumption that the laws of quantum mechanics apply universally naturally leads to the multiverse. For every fork in the road ever encountered by a particle, split realities arise. Schrödinger’s cat is dead and alive.
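One can mimic this branching in a toy model in which the observer is compressed to a single two-state register. In the sketch below (an illustration, not a realistic model of a measurement device), the detector interaction is an ordinary unitary operation, a controlled flip that correlates the observer’s record with the particle; no collapse ever occurs, and the final state contains both branches.

```python
import numpy as np

# Particle: |slit 1> = [1, 0], |slit 2> = [0, 1].
# Observer register: |records slit 1> = [1, 0], |records slit 2> = [0, 1].
particle = np.array([1, 1], dtype=complex) / np.sqrt(2)  # superposition of slits
observer = np.array([1, 0], dtype=complex)               # ready state, reads "slit 1"

# An ideal detector as a controlled flip: the observer's record flips
# exactly when the particle goes through slit 2. Purely unitary evolution.
detector = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

joint = np.kron(particle, observer)  # product state: not yet entangled
joint = detector @ joint             # after the measurement interaction
print(joint.round(3))
# Amplitude 1/sqrt(2) on |slit 1, records 1> and on |slit 2, records 2>:
# an entangled superposition with the observer inside it, i.e. two branches.
```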
The Quantum Donkey
The Standard Model of particle physics provides a quantum mechanical description of the nuclear forces and electromagnetism to incredibly high precision. And general relativity accounts beautifully for the motion of objects as disparate as individual planets and clusters of galaxies. Our understanding of both these theories has been largely driven by a careful scrutiny of the symmetries of nature. Throughout the history of science, the concept of symmetry has been of paramount importance. Even in the cosmogonies of the ancient philosophers endeavouring to provide rational explanations for the motion of the stars and the cycles of the seasons, symmetry played a crucial role. The universe was a place in balance. The elements of fire, earth, water, and air, always vying for dominance in the world, were held in a perpetual stalemate through the equilibrium of their forces. The universe was a perfect sphere with the earth sitting motionless in the centre, going nowhere because all directions were the same.
The notion of perfect balance became a subject of contemplation to later philosophers and was wittily discoursed upon by means of the paradoxical fable of Buridan’s ass, named after the 14th century French philosopher Jean Buridan. Buridan’s ass is a hungry ass situated smack in the middle between two identical piles of hay. The haystacks being completely alike and equally far away, the ass has no preference for one haystack over the other. Consequently, the ass stays put in the centre and so dies of starvation. Quantum mechanics teaches us that in the lab and in nature, we do in fact encounter situations analogous to the plight of Buridan’s ass: alternatives that are precisely equally attractive. Does a particle traverse slit 1 or slit 2? Is the spin of an electron up or down? Does a radioactive isotope decay within its half-life or not? But we need no longer accept the dramatic fate of Buridan’s ass succumbing to famine between two haystacks. For we have learned to accept indeterminacy as a fact of life. 50% chance we measure one outcome, 50% chance we measure the other. No dead donkeys. This probabilistic resolution of the paradox probably would not have pleased the medieval philosophers. Why should one outcome transpire rather than the other when there is no argument in favour of either? Well, if we accept the many-worlds interpretation of quantum mechanics, balance is restored. No outcome preponderates over the other. But it is not that Buridan’s ass chooses neither haystack. It chooses both.
The World on a Knife’s Edge
While quantum mechanics frees Buridan’s ass from starving indecision, the universe itself, as we understand it today, does appear to hang in the balance like in the ancient cosmogonies. The best model we currently have for describing the observable phenomena in the cosmos — the acceleration of the universe, the distribution of galaxies, the cosmic microwave background radiation — the so-called Lambda-CDM model, tells us that the continued existence of our universe hinges on a tenuous balance of its constituent components: matter, radiation, curvature, and dark energy. Too much matter and the universe collapses in on itself shortly after its formation. Too much dark energy and the universe is ripped apart before galaxies form. The world is balancing on a knife’s edge with perdition on either side. It is as in the genesis of Norse mythology, where the world emerged in the void of Ginnungagap between the scorching heat of Muspelheim (land of the fire giants) and the biting cold of Niflheim (home of the frost giants). We live in a narrow Goldilocks zone of stability. (Or relative stability. The expansion of the universe is accelerating, and so it appears the universe may be heading towards an eventual big rip.) And it is not just in cosmological models that parameters must be delicately adjusted in order to give rise to a stable or metastable universe. The same applies to particle physics. For example, if the mass of one particular kind of particle were just a little bit larger, the universe would have been wiped out long ago by an expanding vacuum bubble.
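The knife’s edge can be made quantitative with the Friedmann equation underlying the Lambda-CDM model. The toy integration below (a crude sketch in units where the Hubble constant is 1, with radiation neglected and the density values chosen purely for illustration) evolves the cosmic scale factor and shows how an overdose of matter makes the universe recollapse while a balanced mix keeps it expanding.

```python
import numpy as np

def fate_of_universe(omega_m, omega_l, dt=1e-4, t_max=10.0):
    """Crude Euler integration of the scale factor a(t) for a universe
    with matter density omega_m and dark energy omega_l (H0 = 1)."""
    omega_k = 1.0 - omega_m - omega_l  # curvature is fixed by the other two
    a = 1e-2                           # start shortly after the big bang
    # Initial expansion rate from the Friedmann equation:
    # (da/dt)^2 = omega_m/a + omega_k + omega_l*a^2
    v = np.sqrt(omega_m / a + omega_k + omega_l * a * a)
    for _ in range(int(t_max / dt)):
        # Acceleration equation: matter decelerates, dark energy accelerates.
        v += (-0.5 * omega_m / a**2 + omega_l * a) * dt
        a += v * dt
        if a <= 0.0:
            return "recollapsed"
    return "still expanding"

print(fate_of_universe(omega_m=5.0, omega_l=0.0))  # too much matter: recollapsed
print(fate_of_universe(omega_m=0.3, omega_l=0.7))  # roughly ours: still expanding
```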
The need to finely tune parameters in physical models is usually considered problematic. Ideally, parameter values should be computable from first principles. Why should some parameters have to be set to seemingly arbitrary values? Fine-tuned parameters have an air of ad hoc explanations about them; they remind us of the increasing number of epicycles that astronomers introduced to the geocentric model of the solar system in order to keep it in agreement with observations. When Kepler proposed that the planets move around the Sun in elliptical orbits, he was able to dispense with epicycles altogether, and we know that he was right on track. Among competing theories compatible with observation, Occam’s razor tends to serve as a guiding principle, and so it has been since long before the time of Kepler or the time of Occam. Nature does nothing that is in vain or superfluous, Aristotle tells us. Many physicists are occupied in searching for simple underlying principles that may account for the seeming need for fine-tuning of physical models. For example, apparently miraculous cancellations in the calculations of particle masses could perhaps be the result of supersymmetry. Theorists have proposed many ways of detecting traces of supersymmetry in the big accelerators that smash particles together and record the particles generated by the collisions, though so far experiments have yielded no evidence indicative of supersymmetry.
Over the past decades, however, another school of thought has arisen that seeks to account for fine-tuning in an altogether different manner, one which could be viewed as being very much opposed to Occam’s razor: The explanation is the multiverse! The emergence of a universe like ours is a wildly improbable event, proponents of this kind of argument concede. But it is a possible event. And so it will perforce occur somewhere in the multiverse. There is nothing unusual in an unlikely universe existing as long as many more likely universes also exist. The fact that we live in an unlikely universe is simply due to the fact that universes stable enough to support sentient life are all unlikely. One could object that postulating the existence of countless universes is the exact opposite of providing the simplest explanation. No explanation could be more involved. But contrariwise, many-worlds believers may argue that Occam’s razor disfavours the Copenhagen interpretation, which hypothesizes some mechanism that triggers wavefunction collapse, though no experiments testify to such a thing. In the many-worlds interpretation one set of principles, those of quantum mechanics, governs all.
A principle that has helped advance science in the past, and which the many-worlds account of fine-tuning definitely does violate, is the Copernican Principle: we are not the centre of the universe. There is nothing special about our home on one of the spiral arms of the Milky Way Galaxy, in the Local Group of galaxies. But our universe is special. It is a rare exception among universes, a universe stable enough to host thinking beings, just like our planet is special in that it is the only inhabited planet we know of. Out with the Copernican Principle, in with the Anthropic Universe.
It is in the nature of science to explore the unknown, and so, by its very essence, we cannot know what the future of science holds. We cannot predict which guiding principles will prove fruitful to science in the time to come. The purpose of any theory is to make far-reaching and precise predictions, and the ultimate arbiter in matters of science is the experimental test. Grand arguments that appeal to our sense of aesthetics count for little in and of themselves. If the concept of the multiverse is to be a part of the future of science, it will have to play a role in producing new experimentally testable predictions and not simply provide explanations for phenomena we already know of or, even worse, merely dissuade scientists from searching for other explanations that may ultimately be closer to the truth. But the fact that there is a very real possibility that our universe is but a tiny speck in an almost limitless multiverse, where everything that can occur does occur, sure is fun to think about.