Finding a quantum theory of gravity

One of the hardest problems in fundamental physics is to find a consistent theory of Quantum Gravity (QG) that matches current experimental data. Gravity can be quantized as an effective field theory, but we know that this description breaks down at the Planck scale.
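For reference, the Planck scale is the energy at which gravitational interactions become strong,

E_{\text{Planck}} = \sqrt{\frac{\hbar c^5}{G}} \approx 1.22 \times 10^{19}\ \text{GeV} ,

about fifteen orders of magnitude above the energies probed at the LHC.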

Many people try to tackle QG, and a common mistake is failing to understand what the problem actually is. For some reason there is a misconception that we do not have a theory of QG. This misconception wrongly motivates people to come up with their own version of QG.

But the problem of QG is exactly the opposite. There is no shortage of QG theories. On the contrary, we have too many of these, infinitely too many.

Let me explain. QG is a non-renormalizable Quantum Field Theory (QFT). This means that we can define it as an effective field theory at low energies, but as we try to extend it to high energies, new interactions arise. Each such interaction adds a counter term, and each counter term comes with a new free parameter. There are infinitely many counter terms, generating infinitely many free parameters.
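Schematically, the effective Lagrangian is the Einstein-Hilbert term plus an infinite tower of higher curvature counter terms,

\mathcal{L}_{\text{eff}} = \sqrt{-g} \left( \frac{M_{\text{Planck}}^2}{2} R + c_1 R^2 + c_2 R_{\mu\nu} R^{\mu\nu} + \frac{c_3}{M_{\text{Planck}}^2} R^3 + \cdots \right) ,

where every coefficient c_i is a new free parameter. This is just a sketch of the structure; the exact basis of terms does not matter for the argument.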

Therefore, we have a theory of QG with infinitely many parameters that have to be determined by experiment. The fact that there is no experimental data to determine these parameters does not make our lives any easier.

I hope that this didn’t scare you off from working on QG, because this was the easy part about quantizing gravity. A bigger issue is that exactly at the Planck scale, where QG becomes non-renormalizable, it also becomes strongly coupled. But our effective field theory approach to quantization is a perturbative approach. Once the coupling is strong, of the order of 1, the perturbative approach fails.
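To see why, note that in units where \hbar = c = 1, the dimensionless strength of the gravitational interaction at energy E scales as

\alpha_{\text{grav}}(E) \sim G E^2 = \left( \frac{E}{M_{\text{Planck}}} \right)^2 ,

which is tiny at accessible energies but reaches order 1 precisely at the Planck scale.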

But all is not lost. There is still a workaround. We can cut off the theory at high energies, above the energies reached by any current experiment, yet below the Planck scale. This is referred to as a UV cut-off. Adding a cut-off is actually a standard procedure in QFT called regularization (in practice, dimensional regularization is usually used). Without regularization, infinities show up in QFT calculations. With a regularization, all intermediate calculations become finite. Then, at the end of the calculation, we can remove the regularization by pushing the UV cut-off to infinity. If the QFT is well behaved, all the infinities in the intermediate calculations cancel out and the final results are finite.
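As a textbook illustration (this is generic QFT, nothing specific to gravity), a logarithmically divergent one-loop integral becomes finite once a UV cut-off \Lambda is imposed,

\int^{\Lambda} \frac{d^4 k}{(2\pi)^4} \frac{1}{\left( k^2 + m^2 \right)^2} \approx \frac{1}{16\pi^2} \ln\frac{\Lambda^2}{m^2} ,

and in a well behaved theory the \ln\Lambda pieces are absorbed into a finite number of parameters before the cut-off is pushed to infinity.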

What happens if the QFT is not well behaved, like in the case of QG? Well, nobody said that we have to take the UV cut-off to infinity. Sure, taking the UV cut-off to infinity makes our life easy, as it means that we do not need to worry about the details of the UV cut-off affecting our results.

But who said that life is easy? Maybe there is a real UV cut-off in nature. So far, there is no experimental data that supports a finite UV cut-off. On the contrary, a UV cut-off would break Lorentz invariance, and there is tons of experimental data placing very tight constraints on Lorentz Invariance Violation (LIV). For example, according to this paper, the lower limit for LIV is actually above the Planck scale. Therefore, if someone is interested in going this route, they need to come up with a UV cut-off that is consistent with existing limits on LIV and also results in a consistent QFT.

Just a little word of warning — this endeavor might be a bit more complicated than I made it look. You see, there are two approaches, perturbative and non-perturbative.

The perturbative approach has the advantage that it starts at low energies, so you can choose your starting point as the current standard model of particle physics together with general relativity. The downside is that you will have to show that the perturbative expansion gives some usable results. Notice that I am not asking for a consistent theory. My bar is much lower. For example, setting the UV cut-off to 1% of the Planck scale might have been good enough, if it weren't ruled out by the experimental data constraining LIV.

In the non-perturbative approach, you need not worry about a consistent expansion. Instead, you need to worry about reproducing matter and interactions at low energies. For example, if the theory is defined on the lattice, its content is defined at the lattice scale. Yet from renormalization group flow we know that the low energy theory could look completely different. Again, as in the perturbative case, if we did not have to worry about LIV, we could have set the lattice scale to 1% of the Planck scale and everything might have worked out. We would have started with the standard model and general relativity at the lattice scale, and probably we would have been able to fine-tune the lattice couplings to reproduce low energy physics.

Unfortunately, the lattice scale is constrained to be much shorter than the Planck scale. We have no idea which matter fields and which interactions at this scale have a chance of flowing to the known low energy physics. We could try guessing, but there are infinitely many options with infinitely many free parameters. And even for a single guess with specific parameters, calculating what the theory would look like at low energies would be a monumental task, probably beyond anything that mankind has ever achieved.

To summarize, there are infinitely many theories of QG that match current experimental data, and there are infinitely many theories of QG that are self-consistent. But there are none that we know of that satisfy both conditions.

Is quantum mechanics deterministic or random?

Quantum mechanics is intrinsically random. If we look at a radioactive element, it will decay with a specific half-life, but there is no way to predict exactly when it is going to decay. It is a random process.

On the other hand, the Schrödinger equation that describes the time evolution of a quantum system is fully deterministic. Given well defined initial conditions for the wave-function at some point in time, we can determine the wave-function at any other time, in the future or in the past

\Psi(t) = \mathcal{T} e^{-\frac{i}{\hbar} \int_0^t H(t') dt'} \Psi(0) ,

where \mathcal{T} denotes time ordering, needed when the Hamiltonian depends on time.

How can these two statements coexist in a single consistent theory?

The answer is that it is just a question of perspective. If you look at the entire wave-function, you would realize that the theory is deterministic. But if you write down the wave-function as a combination of eigen-states, then each state has a probability associated with it depending on its amplitude, according to the Born rule

P(x) = \left|\Psi(x)\right|^2 ,

and therefore from the perspective of the eigen-state the system is random.

For example, take a spin which is in a superposition of the up state and the down state

\frac{1}{\sqrt{2}} \left ( \left|\uparrow\right> + \left|\downarrow\right> \right) .
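Applying the Born rule to this state gives

P(\uparrow) = \left| \frac{1}{\sqrt{2}} \right|^2 = \frac{1}{2} ,

and likewise for the down state.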

There is a 50% chance that it is an up spin and a 50% chance that it is a down spin. This randomness/probability is intrinsic to quantum mechanics. The randomness is there even without measurement, but it is easier to describe when we talk about a detector that made a measurement. Before the measurement, the spin and the detector are uncorrelated.

\Psi_{\text{initial}} = \frac{1}{2} \left ( \left|\uparrow\right> + \left|\downarrow\right> \right) \left ( \left|\text{detect}\uparrow\right> + \left|\text{detect}\downarrow\right> \right) .

When our detector measures the spin, it also gets into a superposition of states. There is one state in which the spin is up and the detector measured spin up. There is a second state in which the spin is down and the detector measured spin down

\Psi_{\text{final}} = \frac{1}{\sqrt{2}} \left ( \left|\uparrow\right> \left|\text{detect}\uparrow\right> + \left|\downarrow\right> \left|\text{detect}\downarrow\right> \right) .

This measurement process is unitary. The wave-function of the entire system after the measurement can be calculated deterministically. Yet from the point of view of the detector, a purely random event occurred. There is no way to predict which spin will be detected, because in the final state both spins are detected. But the detector sees only one outcome, and therefore from its perspective the result is purely random.

Moreover, random processes are irreversible. If a ball bounces off a wall at a random angle, there is no way to trace back in time the motion of the ball before it hit the wall. Indeed, from the point of view of the detector, the measurement is irreversible. If you start from a state of a detector that measured spin up, there is no way to trace back in time to the original state of the spin before the measurement. On the other hand, if you start with the full final state that includes the superposition of the two states, you can reverse time and recover the original spin state.

This picture is also consistent with thermodynamics. In thermodynamics, entropy is not an intrinsic property of the physical system. Entropy is a measure of how much we know about the system; it depends on our perspective.

For example, take a classical ideal gas in a box. What happens if you double the volume of the box? If you know the exact position and momentum of every particle in the gas at a certain time, you can predict their future positions. Therefore the entropy of the system does not change when the volume increases. But if you are not aware of the microscopic state, your conclusion would be that the entropy increased with the change in volume.
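For that second observer, this is the textbook entropy of a free expansion,

\Delta S = N k_B \ln\frac{V_{\text{final}}}{V_{\text{initial}}} = N k_B \ln 2 ,

exactly one bit, ln(2), of missing information per particle.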

Ideal gas, about to double in volume.

Back to our quantum system, from the perspective of the entire wave-function, the entropy is constant. As long as you look at the microscopic details of a closed system, the entropy does not increase over time. This is consistent with the fact that the process is reversible.

From the perspective of the detector, the measurement produced one bit of random information, and therefore the entropy increased by exactly ln(2). This is again consistent with the fact that from the point of view of the detector the process is irreversible.

We described the exact same physical system from the point of view of two different actors with different knowledge about the system. One description is deterministic and one is random. Both are correct at the same time; it is just a question of perspective.

Understanding quantum mechanics

For almost a century now, there has been an ongoing debate about the interpretations of quantum mechanics. Two such interpretations are the Copenhagen Interpretation (CI) and the Many Worlds Interpretation (MWI).

Beyond that, there is a meta-debate, going on for almost as long, about whether there is any value in debating these interpretations. Physicists who think this is a pointless endeavor make up the Shut Up And Calculate (SUAC) camp.

In a recent article in the New York Times, Sean Carroll writes that Even Physicists Don’t Understand Quantum Mechanics. The New York Times is not a scientific journal, so normally I would not cite it as an authoritative scientific source. But it is a good gauge of sentiment. Here is a physicist studying the foundations of quantum mechanics who feels that too many physicists do not care that they do not understand quantum mechanics, as long as they know how to do the calculations.

Much of what is written here was inspired by Sabine Hossenfelder’s blog. Specifically, by her post on quantum measurement and the long comment thread that followed. This is also where I was referred to a paper by Lev Vaidman on the ontology of the wave function and the many-worlds interpretation. My views are closely aligned with the views he presents in this paper. Vaidman has spent his entire career researching the foundations of quantum mechanics. It would be hard to accuse him of not caring about the subject.

Let me try to convince you that at least some of us in the SUAC camp do care about understanding quantum mechanics. The point is that we actually understand quantum mechanics fairly well. While we might not understand everything, our understanding is deep enough that we are convinced that we do not need to worry about quantum interpretations. We also understand the measurement process. The “measurement problem” is a solved problem.

As a baseline, all parties in this debate actually agree about almost everything. We all agree that the theoretical formulation of quantum mechanics is based on Schrödinger’s equation (and all its extensions) together with the Born rule that interprets the wave-function as a probability amplitude. We also agree about all the experimental results that confirm the validity of quantum mechanics.

The only disagreement is about the process of measurement, which, you could say, is somewhat crucial for connecting theory with experiment. So let’s delve deeper into the Measurement Postulate.

A measurement involves an observable, which is a Hermitian operator. Examples of Hermitian operators are position, momentum, energy, angular momentum and spin. A Hermitian operator has real eigen-values that correspond to the real values that we measure. Each eigen-value has a corresponding eigen-state.

This is where the problem begins. The Measurement Postulate states that a measurement results in the system being in the eigen-state corresponding to the eigen-value that was measured. This is a non-unitary, non-local, non-deterministic, irreversible process. Therefore, it cannot be described using the standard Hamiltonian time evolution of quantum mechanics.
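Stated explicitly, for a non-degenerate eigen-value with eigen-state \left|n\right>, the postulate says that the outcome n occurs with probability

P(n) = \left| \left< n | \Psi \right> \right|^2 ,

and that immediately after the measurement the system is in the eigen-state \left|n\right>.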

The Copenhagen Interpretation claims that the wave-function collapses during the measurement. Therefore, the measurement process cannot be described using the standard Hamiltonian time evolution of quantum mechanics. The problem with this interpretation is that no one has ever come up with a mechanism for this collapse that is consistent with existing experiments and makes any new unique predictions.

The Many Worlds Interpretation claims that all branches of the measurement keep on existing; there is no collapse. One could say that the world “splits” during measurement into several worlds, but what MWI really says is that everything is encoded in the wave-function, so there is no special event during measurement. This solves the unitarity problem. We now have a reversible, deterministic, local model.

One criticism of MWI is that without collapse the measurement has no effect on our system. Choosing an observable Hermitian operator is equivalent to choosing a basis; it does not change the physics. This is a legitimate criticism, but the problem is not in quantum mechanics. The problem is that in the formulation of the Measurement Postulate, people forget to mention that a measurement requires an interaction that couples our system to a detector through the observable operator.

For example, if we want to measure the spin of a particle, we use the Stern–Gerlach experiment as a detector. Such a detector is essentially equivalent to adding an interaction term to the Hamiltonian that describes the coupling between the particle’s spin and a non-uniform magnetic field. A measurement is not just a change of basis. We can model how our detector deflects different spins in different directions.
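Schematically (ignoring the complications of a realistic magnet geometry), the interaction term is

H_{\text{int}} = -\mu \, \sigma_z \left( B_0 + B' z \right) ,

where B' is the field gradient. It is the gradient term that entangles the spin with the particle’s position, producing the spatial deflection that the detector records.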

The Stern-Gerlach experiment

To illustrate the effect of the measurement, think about an experiment that starts with a spin pointing up (\left|\uparrow\right>). The spin is first measured along the X axis and then along the Y axis.

A Stern–Gerlach experiment where the initial state is spin up. First the spin is measured in the X direction and then in the Y direction.

If, as the critics claim, the measurement along the X axis had no effect on the spin, it would stay in the up state, and the measurement in the Y direction would simply find an up state (\left|\uparrow\right>). If the measurement “collapses” the wave-function, there would be a mix of two pure spin states, one pointing left (\left|\leftarrow\right>) and one pointing right (\left|\rightarrow\right>). Then, the measurement in the Y axis would measure each such pure state as a combination of up (\left|\uparrow\right>) and down (\left|\downarrow\right>) spins, resulting in a mixed state of four pure spin states.
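To count the outcomes (writing the second measurement in the same up/down notation as the figure), the initial state decomposes in the X basis as

\left|\uparrow\right> = \frac{1}{\sqrt{2}} \left( \left|\rightarrow\right> + \left|\leftarrow\right> \right) ,

and each X eigen-state is in turn an equal superposition of the two outcomes of the second measurement, so each of the four branches carries amplitude 1/2 and probability 1/4.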

Four different outcomes of our experiment, where we vary the strength of the measurement in the X direction. (a) There is no X measurement; we only measure spin up. (b) Weak X measurement; there is a small separation in X and therefore a small chance of measuring spin down. (c) Stronger X measurement; better separation in X and therefore a higher chance of measuring spin down. (d) Full separation in X; all four possible outcomes have equal probability.

Conceptually, one could calculate the results of this experiment using the Schrödinger equation, modelling explicitly the elusive wave-function “collapse”. There is no need for any extra measurement postulate. I must admit that I did not perform these calculations. Solving the Schrödinger equation for this case can be done analytically, but matching the boundary conditions and setting the initial conditions for the wave-packet is a bit more complicated. If I get around to it, I will post a detailed explanation of the calculation. Meanwhile, the plot above illustrates the expected results.

Another criticism of MWI is that we cannot have any evidence about other worlds, because these worlds have no effect on our world. But we do have evidence. In the double slit experiment, the particle goes through one slit in one world and through the other slit in the other world. The interference pattern in the double slit experiment is the manifestation of the effect that different worlds have on each other.
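In equations, if \Psi_1 and \Psi_2 are the amplitudes for passing through each slit, the detection probability is

\left| \frac{\Psi_1 + \Psi_2}{\sqrt{2}} \right|^2 = \frac{1}{2} \left( \left|\Psi_1\right|^2 + \left|\Psi_2\right|^2 \right) + \mathrm{Re}\left( \Psi_1^* \Psi_2 \right) .

The cross term is the interference, the one place where the two “worlds” show up in the same measurement.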

Some worry that scaling up the measurement to macroscopic scales would require giving up reductionism. Yet the Stern-Gerlach experiment is an explicit example of how a microscopic spin quantity transforms into a macroscopic spatial separation.

Others worry that there is something mysterious going on during decoherence. Again, the Stern-Gerlach experiment shows us exactly how it works. The calculation is time-reversible. Still, the incoming wave-packet is a combination of energy states that come out of the detector with very specific phases for each state. Reversing the experiment by sending in two spins and trying to get out a pure spin state is clearly not practical.

The last criticism I can think of is that in MWI probabilities are meaningless if we claim that all outcomes exist. I do not see how this is different from CI, where all outcomes are possible. Moreover, it is not different from classical probabilities.

To summarize, since the introduction of the Schrödinger equation in 1925 and the Born rule in 1926, we have made some progress in our understanding of quantum mechanics. The EPR experiment, Bell’s inequalities and Everett’s many worlds interpretation all contributed to this understanding. This progress convinces us that nature behaves exactly according to the rules of quantum mechanics. There is just no shred of evidence for a missing ingredient in our understanding.

As physicists we always hope that nature would challenge us with fascinating riddles. Unfortunately, I am fairly convinced that interpretations of quantum mechanics and the Measurement Postulate are not such riddles.

Are there any promising ways to quantize gravity?

I just started this blog a few days ago. One of my plans for this blog was to write about quantum gravity. I thought that I would ease my way into it by starting with the basics and building up to the more nuanced details. But then Sabine Hossenfelder published a blog post about the five most promising ways to quantize gravity.

While I mostly agree with Sabine on the technical details, my views on the subject are very different. So I decided to scrap my original plans for this blog and dive right into the subject of quantum gravity (QG). In the future I will hopefully go back to the basics. For now, here are my views on the five most promising ways to quantize gravity. You might want to read Sabine’s blog post before continuing.

First, I should mention that, as Sabine writes, we already know how to quantize gravity. At low energies everything works just fine. The problems start when you go to extremely high energies — at the Planck scale.

To understand how quantum gravity should be approached, it is really important to understand what the problems are (isn’t this an obvious first step?). There are two problems, which are closely related, but they are not exactly the same.

  1. Quantum gravity is non-renormalizable. I won’t go into the technical details, but the implication of this is that we need infinitely many parameters to describe the theory at high energies. The problem with quantum gravity is not that we do not have a theory that describes it. The problem is that we have too many theories, infinitely many of them.
  2. Quantum gravity is strongly coupled at high energies. The problem here is that we only have a perturbative description of quantum gravity. As the coupling becomes stronger, the calculations become harder. Once the coupling crosses a certain threshold and becomes too strong, the calculations don’t only become harder, they become meaningless.

We have actually encountered these two problems in particle physics before, so we can learn from history what the possible outcomes are.

The first problem popped up in the 1930’s with the four-Fermi interaction. We didn’t know about non-renormalizability at the time, so we referred to it as a unitarity problem. Essentially, it seemed that at high enough energies the chance of two particles scattering would be more than 100%. Clearly, something was wrong.
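Parametrically (tracking only powers of energy, not the numerical coefficients), the four-Fermi cross section grows as

\sigma \sim G_F^2 E^2 ,

which crosses the unitarity bound at energies of order G_F^{-1/2} \approx 300\ \text{GeV}.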

The resolution of this issue was that in the four-Fermi interaction there was an exchange of massive gauge bosons — the W and Z bosons of the weak interaction.

Four fermions interacting by exchanging a W boson.

This means that the four-Fermi interaction still requires infinitely many parameters, but now we have a systematic method of calculating these parameters using the weak interaction model.
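For example, at tree level the Fermi constant is fixed by the weak coupling and the W mass,

\frac{G_F}{\sqrt{2}} = \frac{g^2}{8 M_W^2} ,

and the higher order terms in the four-Fermi expansion are fixed in the same way.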

Did the four-Fermi interaction also suffer from the second problem of strong coupling? Actually, until recently we didn’t know for sure. The point is that having massive gauge bosons requires the Higgs mechanism. The mass of the Higgs determines whether the weak coupling becomes strong at high energies. Luckily, nature was kind to us. The Higgs mass is about 125 GeV, which means that the weak coupling never becomes strong. If the Higgs mass had turned out to be about 4 times bigger, the weak coupling would have become too strong. That would have made things way more complicated, but it could also have been way more interesting.

The second problem does pop up in QCD. Well, sort of. QCD becomes strongly coupled, but it does not happen at high energies; it happens at low energies. In this case, we did manage to find a non-perturbative formulation of QCD. The trick is to do all calculations on the lattice and then take the lattice spacing to be infinitely small. It does not make the calculations easy, but at least they are somewhat doable, or at least well defined.
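For QCD the standard choice is Wilson’s plaquette action, a sum over the elementary squares of the lattice, where U_P is the ordered product of the four link variables around a plaquette,

S = \beta \sum_P \left( 1 - \frac{1}{N} \, \mathrm{Re} \, \mathrm{Tr} \, U_P \right) , \qquad \beta = \frac{2N}{g^2} ,

which reduces to the continuum Yang-Mills action as the lattice spacing goes to zero.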

QCD on the lattice.

Low energy QCD is important for understanding composite particles such as the proton and the neutron. It is also relevant for nuclear physics. It seems that nature was not kind to us this time. There is no elegant way to make QCD calculations. QCD calculations are extremely hard, and the results are not always conclusive. Our only consolation is that we have lots of experimental data that we can use to make up for the lack of precision in our calculations. This would not be the case in quantum gravity.

So now we are ready to examine what Sabine refers to as promising ways to quantize gravity.

1. String theory

First, for full disclosure, I should mention that my PhD thesis was about string theory, so I have a warm soft spot in my heart for the subject. But we are only interested in the cold hard facts, so that is not really relevant.

Sabine writes that string theory was discovered in the 1970’s. It was actually discovered in the 1960’s. This is really not important. What is important is that it did not really gain popularity in either the 1960’s or the 1970’s.

String theory only became popular in the 1980’s, when it was discovered that there are only 5 consistent string theories. Remember, the problem with quantum gravity is that we have too many options. Now string theory comes along and tells us that there are only 5! I wasn’t there at the time, but my impression is that physicists felt like all they had to do was split into 5 groups. Each group would study one theory and try to match it with the real world.

As you all know, things didn’t quite turn out this way. That may be a story for another post. This also explains why many string theorists, mostly the ones who lived through that period, hate the multiverse. If the multiverse is real, the original string theory program is doomed.

2. Loop Quantum Gravity (LQG)

The basic idea of LQG, using Ashtekar variables to quantize gravity, is very interesting. Unfortunately, from here everything goes downhill. Their biggest problem is that they insist on a non-perturbative approach. They have to, because they think that the theory must be background independent. Perturbation theory needs to start from some background, so LQG cannot use perturbation theory.

That is what Sabine is referring to when she says that “it has remained unclear just exactly how one recovers general relativity in this approach”. I would extend it and say that it remains unclear how one does any calculation in this approach.

3. Asymptotically Safe Gravity (ASG)

If you look at history, ASG might look promising. After all, the four-Fermi interaction seemed non-renormalizable and it turned out to be asymptotically safe. The problem is that there are many ways to extend the theory to make it asymptotically safe. There are even more ways to make it unsafe. The only way to figure out which is true is to collect lots of experimental data.

Even in hindsight, starting from the four-Fermi interaction and trying to figure out the full theory just by requiring asymptotic safety seems totally futile.

Beyond the “technical” issues, ASG seems to suggest that the UV completion of quantum gravity is a standard quantum field theory. From the little we know about quantum gravity from black hole entropy, this does not fit our expectations.

4. Causal Dynamical Triangulation (CDT)

Finally, one promising direction. But, and you knew there would be a but, it does nothing to solve our first problem. CDT is supposed to be the equivalent of lattice QCD. It tries to give us a non-perturbative formulation of quantum gravity. Even if it succeeds, we are still stuck with the infinitely many parameters from the non-renormalizability.

In lattice QCD the first step is to start with a cubic lattice. In QG a cubic lattice does not work; CDT suggests an alternative. The next step in lattice QCD was to define an action on the lattice using the Wilson loop approach. In CDT, one should probably use Ashtekar variables (the ones from LQG, I told you they would be useful for something). I’m not sure if anyone has gone that far in the formulation.

This is just the beginning of the problems with this model.

5. Emergent Gravity

This is not a specific model; it is a generic approach. Therefore, there is not much to say about it.

What I can say is that approaches like this are taking us in the wrong direction. We are already suffering from infinitely many QG models. The emergent gravity approaches try to introduce new models on top of the infinitely many that we already have.

We have infinitely many options that we cannot rule out by experiment. You are not making our lives any easier by adding another one.


To summarize, the way I see it, string theory is the only approach that actually attempts to solve the problems of quantum gravity.

What is the right approach to quantum gravity? I have absolutely no idea. It is really an extremely hard problem. My only recommendation is that before you try to find a solution, you should spend all the time necessary to gain a deep understanding of the problem you are trying to solve. This blog post might give you a tiny hint about what the problems are. I have just barely scratched the surface.

Conservatism, Liberalism or Pragmatism?

I finally decided to start my own blog. There are all these people all over the internet who are writing wrong things. It is about time that someone wrote the right things.

The first thing is to choose a name for the blog. A name that would concisely summarize my world views. So I need to ask myself, what are my world views? On some issues I am a conservative, on others I am a liberal. Mostly, I try to be a pragmatist, and as a pragmatist I know that I need to find a domain name that is freely available. Luckily, there is one world view that strikes the perfect balance. Not too conservative, not too liberal, not too big, not too small, not too hot, not too cold. And the domain name is still available – Goldilocksism.

What will it be about? To those of you who don’t know me, I would describe myself as a physicist. Being a physicist is not only an academic degree; it defines how I perceive the world. I will try to concentrate on science. If I do wander off to other subjects (politics, anyone?), I will try to keep it scientific. Well, as scientific as is humanly possible. After all, I don’t want to raise expectations too high.