When Rigby and Casey first boarded the shuttle back on Diem Deus 3, Rigby couldn't help but think back to the old days. The shuttles for interplanetary travel were much different when Rigby took her first off-Earth assignment. Things were scarier back then because so much was unknown. People would bring along personal space suits and oxygen. You had to wear magnetic shoes to keep yourself on the floor, and launch time was a bumpy ride. The shuttle living space had been limited primarily to the initial launch seating area, which resembled the interior of a standard military aircraft on Earth: a lot of exposed metal, uncovered bolts, practical seats with heavy restraints, and nothing for entertainment. And while the initial launch area remained a cold, metallic, and functional setting, the launch itself was now smooth, and the conversations with strangers were easy. The loud, piercing sounds of thruster engines echoing through the launch chambers were long gone. Today, passengers were restricted to this area only during the first and last 30 minutes of the trip. Depending on your personality, you could make friends or wait it out.

The living areas of the ship now usually included a kitchen, a dining room, several small sitting rooms, and then, of course, the sleeping cabins. Passengers on the shuttle were likely to be on different sleep schedules, depending on their jobs or tastes. There was no consistent routine. This far out, pretty much anywhere beyond Mars, people's schedules rarely aligned. So it worked out well to share the cabins half and half: one set of people inhabited a given cabin for 12 hours, then alternated with a different group of passengers for the next 12. Of course, if you were wealthy, you might simply buy a cabin out for the entire trip for privacy and ease of use. But since this was a work trip, Rigby tried to keep expenses minimal.

Still waiting in the launch cabin with Casey, Rigby wondered about the telescope they would be visiting. She then realized she didn’t understand how they worked. The detective turned to Casey, “Casey, how do telescopes work? I know that it’s a combination of mirrors and lenses, but I guess I didn’t consider it beyond that.”

“They are actually quite varied. As you can imagine, lenses and mirrors can be arranged in more than one way to achieve different ends. Telescopes vary in magnification, of course, but also in resolution, and they are tailored to different wavelengths of light. For example, in the classic movie *Contact*, the radio telescope at Arecibo Observatory is huge. It's a 1,000 ft spherical reflector dish built back in the 1960s.”

“Oh yes! I have seen that movie. Crazy old. I can’t imagine what it would have been like to live back in the 1900s. Their world was so small,” said Rigby.

“Simpler times, that's for sure,” replied Casey. She continued, “The telescope I had growing up, however, was about 2-3 ft long, stationed on a tripod, and its framework consisted of just a couple of mirrors and a lens. And the list goes on: Galileo used a telescope with only two lenses to discover four moons of Jupiter in the early 1600s, but even among two-lens telescopes there is some variety. To give you an idea of how telescopes work, I'll step you through the simplest two-lens telescope I can think of. Most basically, you need to separate two specific types of lenses by a special distance. Each lens would have to look roughly like this, where it bulges out a bit.” Casey scribbled a drawing of what she meant by the lens's shape.

“The special distance you would need to separate two lenses by is the sum of the focal lengths of the lenses,” said Casey.

“And how do you find out what the focal length of a lens is?” asked Rigby.

“Good question. The focal length is where incoming light from a distant object all comes together to focus on the other side of the lens. Like this,” said Casey as she scribbled on a piece of paper.

“If you stood with a lens near a window, for example, you could take a piece of paper, and if you put it at exactly the right spot, distant objects through the window would appear crisply on the paper, albeit much smaller and upside down. The distance to that exact spot gives you roughly the focal length of the lens. It's neat to see, and you should try it if you haven't. And a telescope can be made with just two lenses separated by the sum of their focal lengths,” Casey said as she continued to draw.

“One lens forms the image of a distant object at its focal point, which is arranged to sit exactly one focal length away from the other lens; the second lens is the one you look through to see it. There are some subtleties I'm glossing over, but that gives you the basic idea,” said Casey.

“And, you’ll notice I drew the lens you look through as the ‘smaller’ one. I did that because if you want the image to be larger when you look through the two lenses, you have to look through the one with the smaller focal length. The magnification is the focal length of the ‘bigger’ lens in the picture divided by the focal length of the ‘smaller’ one. So if the focal length of the bigger one is twice that of the small one, any image you saw through the telescope would be twice as big compared to what you would see with your naked eye. But we don't have to go into more detail until you want to build one,” said Casey with a smile.
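
*For readers who want to check Casey's recipe themselves, here is a minimal sketch of the two-lens layout she describes. The focal lengths used are hypothetical, chosen only for illustration:*

```python
# A simple two-convex-lens telescope, per Casey's description:
# the lenses sit one "special distance" apart (the sum of their focal
# lengths), and the magnification is the ratio of the focal lengths.
# The focal lengths below are hypothetical, for illustration only.

def two_lens_telescope(f_objective, f_eyepiece):
    """Return (lens separation, angular magnification) for a two-lens telescope."""
    separation = f_objective + f_eyepiece        # sum of the focal lengths
    magnification = f_objective / f_eyepiece     # "bigger" lens over "smaller" lens
    return separation, magnification

# If the objective's focal length is twice the eyepiece's, the image is 2x bigger:
print(two_lens_telescope(200.0, 100.0))   # (300.0, 2.0), lengths in mm
print(two_lens_telescope(1000.0, 25.0))   # (1025.0, 40.0): a 40x telescope
```

*This matches Casey's example: an objective with twice the eyepiece's focal length makes the image twice as big.*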

“So basically, to make a telescope, you redirect light that you have collected in clever ways that make the image look bigger?” said Rigby. Casey nodded in affirmation. Then Rigby continued, “I have also heard about gravitational lensing with galaxies, how multiple images of a galaxy can be created, or how their images can be smeared out into arcs and things like that. Like this.” Rigby brought up an image on a portable screen to show Casey.

“How does that work?” Rigby asked.

“Well, gravity does act a lot like the standard lenses we have been discussing. Gravity bends light, and as we just discussed, there is a lot you can do by turning light in specific ways. Take the bottom of a wine glass and hold it up: you will see how the image on the other side smears out in a circle around the bottom rim. Something similar can happen to light from a galaxy as it moves through a gravitational field on its way to us. Gravity can cause the light of a galaxy to smear out into a ring-like shape on its way to our Solar System, which is of course where we see it.”
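
*For the curious: the ring Casey describes (an "Einstein ring") has a characteristic angular size given by the standard point-mass lens formula. The sketch below assumes simple flat-space distances (it ignores cosmological expansion), and the galaxy mass and distances are hypothetical, chosen only for illustration:*

```python
import math

# Angular Einstein radius of a point-mass gravitational lens:
#   theta_E = sqrt( (4 G M / c^2) * D_ls / (D_l * D_s) )
# Constants are rounded standard values; the mass and distances
# below are hypothetical, for illustration only.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg
MPC = 3.086e22      # one megaparsec, m

def einstein_radius(mass_kg, d_lens_m, d_source_m):
    """Einstein radius in radians for a point-mass lens (simple flat-space distances)."""
    d_ls = d_source_m - d_lens_m   # lens-to-source distance; ignores cosmological expansion
    return math.sqrt(4 * G * mass_kg / C**2 * d_ls / (d_lens_m * d_source_m))

# A 10^12 solar-mass galaxy halfway between us and a source 2000 Mpc away:
theta = einstein_radius(1e12 * M_SUN, 1000 * MPC, 2000 * MPC)
print(theta * 206265)   # radians converted to arcseconds; roughly 2 arcseconds
```

*Rings a few arcseconds across are the scale of the lensed arcs seen in telescope images like the one Rigby pulled up.*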

Casey smiled. She rather liked diving into physics with Rigby, who had a raw, child-like curiosity about her that physicists quite enjoy.

Rigby noticed a new ring on Casey's finger as the initial launch sequence was winding down.

"That's an exquisite ring," said Rigby as she pointed to Casey's hand. "Is it new?"

The ring had a thick gold band and a colorful stone inlay. Casey looked at the ring. She was pretty pleased with how it had turned out.

"Yes. It is. I'm glad you like it," said Casey. It was a pleasant sort of ring, even though it had been created for a much deeper purpose than aesthetic appreciation. Casey looked at Rigby and smiled. Rigby wondered about the story behind it, but soon she began to self-indulge and admire her own rings.

Casey started wondering if Rigby would ask about the ring further when a sense of dread fell over her. She hadn't considered until this point how Rigby would take it. What if Rigby confiscated it? Or worse, what if she thought it was so bad that she charged her with some kind of violation of the Solar Standard? The bell rang to indicate that passengers could leave the launch cabin.

Casey got up suddenly, “I need to go to the bathroom. I’ll catch you later in the dining room?”

“Sure,” said Rigby.

Casey walked straight down the hall and took a left. The door color was green, which indicated that the restroom was free. The door opened a few moments later as she stood looking into the sensor above the opening. Once she was inside, she felt like she could think. Like she could breathe.

“One of these days...” said Casey, but then she paused. She thought that what she had done this time would swallow her whole. Why did she always have to do something like this? Again and again and again. “But why does it matter what I do? Why do people care?” she thought to herself.

“If Rigby is a friend,” Casey thought, “but we're not friends… are we? Even if she really only considers me a colleague. And,” she sighed audibly, “it's so much more difficult if we have to do it the Solar Standard way. *By the Solar Standard*.” Casey mouthed those last words as she looked at herself in the mirror. “What was that anyway? Who came up with it? It's not like it was my idea.” She then noticed the smell of cleaning products, some mix of pine and bleach. She knew that Rigby wouldn't be thrilled, and she knew that the hammer of disapproval was inevitable. She stared up at the ceiling in the single-stall restroom and wondered how long would be an appropriate amount of time before she had to go back out and pretend that everything was “normal.”

The bathroom was relatively unremarkable: a small mirror, worn countertops. The design was an efficient use of space, decorated in shades of gray, but there was something calming about it.

Casey just didn't feel like she could live with it. It was unbearable. But what could she do now? It's not like she could go back in time. “It's never like you can go back in time…” she thought. She had taken it with her; she had brought it along, and that was that. Que será, será, or so they say, but she couldn't help feeling just a little bit nauseous. “She will be so smug. Rigby is just perfect all the time,” Casey imagined.

She wanted to kick herself for doing something that would once again haunt her in this way: out of the frying pan and into the fire. This torment would continue until one of two things happened: either Rigby figured it out, or the trip was over. Yet, as Casey thought about it more, there was another way out. She could just tell her. Explain why she brought it. Explain how crucial it was for determining whether or not people were telling the truth about where they had been. Casey said to herself that it was, after all, only a portable DNA and fingerprint collector. Well, sort of. Yes, it was an invasion of privacy, and yes, with it she could kind of see through walls, but it's not like she would use it casually. It had been outlawed in certain parts of the solar system for various good-ish reasons, but it's also not like Rigby couldn't get ahold of one if she wanted one. But perhaps not always in time, which is the point!

Casey pulled the ring off her finger and put it on the bathroom counter. It was a well-equipped data collection unit with many of the latest features, a class of device often referred to as ‘bugs.’ But it did so much more than just gather information. It collected samples and took pictures, yes, but then you could use it to artfully reconstruct all sorts of things. When Rigby's unit wrapped up the impression details of a crime scene, they used similar devices; this was effectively a portable version of those. However, ‘bugs’ are more insidious in how hidden they are, and Casey's ring was not quite as detailed or powerful, lacking all of the added benefits a crime unit can provide. Depending on how much money you intended to invest, a ‘bug’ could take the form of a flying insect, or, in the marginally less expensive versions, a ring worn around the finger, like Casey's.

They weren't outlawed everywhere. But certainly, on Mars their use was limited, and on Earth possession carried a hefty fine. Generally, their use was frowned upon in places with a strong sense of communal identity, in areas with strict policing, or in highly controlled terraformation projects. Back around the vicinity of Saturn, it was sort of a grey area. Casey was confident now that Rigby would be furious. They were pretty hard to come by, and for the level of functionality of the one Casey had, it was kind of a wonder she could afford one. The cost might raise certain questions in Rigby's eyes, but Casey was thankful that Rigby wasn't usually prone to prying into Casey's personal life.

The ring did go a bit beyond the corporeal, too, in a way: it could tell you the basic emotional state of a person. Perhaps she should use it on Rigby to find the best time to break it to her? “No, I don't care what she thinks. I was right. I'm certainly taking it with me. I'm going to use it as I please,” Casey mused. “And there is just nothing that Rigby can say to deter me, because getting impressions on the spot will be incredibly useful. We don't have perfect memories, and we can't trust people in general either, so that's that.” Casey was feeling much better about herself. She had forgotten about her surroundings for a moment. But Casey now had a confidence that allowed her to place the ring back on her finger and re-enter the usual distractions of everyday life. She opened the bathroom door, walked out, and turned towards the dining hall.

Casey skirted around the corner, staring at the floor as she walked, until she almost bumped into someone and began paying more attention. She had forgotten for a moment where the dining room was. But, luckily, there was a screen nearby. She tapped it and took a look at the map of the shuttle. She took note of where their sleeping cabin was and continued on her way to the dining room. She saw Rigby sitting at a table in the back corner of the dining hall.

It was a rather beautiful space. And, on smaller shuttles like these, the dining rooms were usually the most well-decorated areas of the ship. There were windows to look out into the blackness of space, and the lighting on the walls was soft but not dim. Casey sat down at the little round table with Rigby, crossed her legs, and leaned back. Rigby had placed a small dish of fruit and nuts to share on the table.

“They seem to have a wide variety of choices. Please feel free to have some of the walnuts, dried figs, and dates I picked up. They were the simplest things that I knew I would like,” said Rigby.

Casey said, “Oh, sure, I might try a few pastries too. But you are correct. Sometimes it’s better to play it safe; these ships don’t always have the best food to offer.”

Rigby then grabbed some walnuts and a dried fig and said, “However, we will probably get something more exciting at Garden Station. There are bound to be some interesting things to pick up there.”

Casey cheerily replied, “Ah yes. I will try not to be too disappointed because I bet you’re right. And, it will be a taste of the future as well. I can’t wait until we get regular shipments from Garden Station on Diem Deus.”

“Won’t that be nice?” said Rigby.

*The following book was used as a reference for this post: **Optics** by Eugene Hecht. It is an excellent undergraduate-level resource for learning more about the basics of how telescopes work and more.*

*Hubble Image: https://en.wikipedia.org/wiki/Strong_gravitational_lensing#/media/File:A_Horseshoe_Einstein_Ring_from_Hubble.JPG*

*This blog post will also be available as a podcast in the near future, as read by the author, Erin Blauvelt.*

Author: **Erin Blauvelt, PhD**

*Question: Why did you choose to study physics? What got you interested in it?*

**Mary:** Well, I took physics when I was a senior in high school. In those days, you always took physics last. And I decided to major in physics. Then, I ended up going to a small college in Virginia where there were hardly any physics majors.

*Question: Which college did you attend?*

**Mary:** Hollins.

*Comment (Laura): Actually, I've been there! I went to a summer school there.*

**Mary:** Well, I'll be darned. Well, there was one woman there who was a professor in physics [Dorothy Montgomery]. There was [about] one physics major every two years [at Hollins]. But, [Dorothy] had worked with Oppenheimer. So she was a research physicist, but she had been at Yale—on soft money—her husband was on the faculty, but when he died, they let her go. She was offered a job at Columbia, she told me, but she had two young children, and she didn't want to raise them in New York [City], so she moved to Virginia. And, it was just by good luck. Because, well, I went to Paris [due to her being at Hollins in Virginia] for a year as an undergraduate, and she [also] got me into a lab in Paris. And then she got me to apply to Brookhaven for the summer school, and there I worked with the Columbia group. And that is what really got me interested in particle physics.

*Question: What were the challenges you faced as a female graduate student?*

**Mary:** I started graduate school at Columbia, and that was fine. I mean, I had a little bit of trouble. [In] the more mathematical courses I did very well. But the physics courses, well there was one course, classical mechanics, which was taught by this old guy, and he would go to the blackboard before the class started. [You] had to go to class early and copy everything down. And then he would take notes all the way through, and you had to turn in everything at the end of the semester. He never graded anything. So you never knew what you were doing. [The] first semester of that, I failed, and I never knew why. But he would also [mark it wrong] if there was a factor of two or a sign mistake, [as in the whole thing is wrong]. In fact, the second semester, I aced everything. [And] there were only like two women in my class, but I had a lot of friends. I had friends to study with. I didn't feel isolated. I remember one nasty guy, but I ignored him. So, I was fine [at Columbia]. But then I got married to a Frenchman and moved to Paris, and that's where all the problems started.

*Question: So you finished your grad school in Paris?*

**Mary:** It used to be in France that there was essentially no graduate instruction. Essentially people would go to summer schools. There is this famous summer school [and] that's where people learned as students. When I got there, they were just starting this graduate instruction taught by theorists. And a friend of my husband said, you will never be taken by the theory group, so you should get into a lab. So I went around to all the labs in the region, everybody turned me down. And then one guy said you came to get married, not to do physics. Nobody ever asked me for any references or anything like that. Except one guy said to me, well, you couldn't get a recommendation letter. And, I said yeah sure I [could], and he changed the subject. I don't think that was the answer he expected. And so that year was the worst year of my life. I mean, I was totally depressed. But then, in the end, we went back to Columbia because my husband had his experiment, [so he] was going back to finish that experiment. So I went [to] Columbia and [then] came back to take the exams in Paris, and I came out second in the exams, and then they took me in the theory group.

And so I went through all this misery because someone said, oh, you'll never get into the theory group. So then we went to CERN, and there was a guy [from just] the south of Paris. He agreed to be my advisor, [but] he wasn't there most of the time. So I was working by myself. And so I started out in a basement office with as many as five other people. And then I started doing stuff and publishing papers, and I got to go up the floors. Eventually, after I had my thesis, I got onto the first level. It's called NCSR, which stands for the national center of scientific research. And so, I got the first rank position in that, and I gradually went up the ranks.

But I couldn't get a job at CERN. I wasn't even offered a postdoc at CERN. [Or what they call] a junior staff. And then I was starting to get more and more well-known and giving talks all over the world. Getting invited to stuff. So finally, a couple [of] people at CERN insisted that I be considered for the staff position that was coming up. By that time, I already had an offer from Fermilab, and then I got an offer from Berkeley. And so, by that time, I knew if I wasn't going to get that position at CERN, I was going to leave. And I did leave, and in fact, CERN did not hire a female staff member until the mid-90s.

*Comment: Oh wow.*

**Mary:** But there still hasn't been a woman in a senior staff position in theory at CERN.

*Question: What should we do to bring more women into physics?*

**Mary:** Oh boy, that is a good question. Well, when I moved to the States, the first thing [they said was], now you have to talk to the American Physical Society, and now you have to be on the committee on women in physics. And then I chaired [that committee] for a year, and I was also put on some blue ribbon panel for the APS. And, the idea was to try to transfer women from industry or national labs to academia so that there would be more role models. But, most of these people with all the influence were men, and they never showed up to the meetings, except the first meeting. So eventually, that thing fell apart and didn't do anything.

Also, when I was at CERN, I wrote a report called the status of women in technical careers at CERN, something like that. Actually, that report was eventually used when they finally formed a committee on diversity at CERN. I was actually invited to go there and talk about my book. It was a joint thing with the library and this committee. And so I learned that they were trying to do something. But it is something I don't know [the answer to] because the same problems keep coming up. We've had conversations with women graduate students. Now they are like 15% roughly in a typical graduate class, but they still complain about the men putting them down and men [saying] something even if they don't know what they are talking about. And women are afraid to speak up. There are all these problems, and I don't know why. Right now, even there are only a certain number of role models. And I don't understand why it's so difficult. You know, part of the problem, I think, is getting young children to take the math classes they need when they are in school.

*Comment: Even if you get them interested, the culture is still not welcoming. We have wondered about how to get women who are in physics to stay on in physics. Women drop out at high rates.*

**Mary:** Women do drop out at higher rates than men. I mean, there have been some notorious cases of sexual harassment; one was at Berkeley that you probably heard about in the astronomy department. But, more often, it's students [and] TAs that make the atmosphere so bad. [To solve the issues] there are all these programs that are effective for diversity training. The worst of them all is the online forms that tell you how to behave, and you try to get through them as quickly as you can. I know I've done it too! And, so it's a really hard question. It's almost incomprehensible to me that it is still so difficult.

For more information about the life and career of Mary Gaillard, check out her book *A Singularly Unfeminine Profession: One Woman's Journey in Physics*.

*Note: Due to sound quality and background noise, only as much as could be understood was transcribed (as best as possible).*

*Credits: Thank you so much to the interviewers Shamreen Iram, Laura Johnson, and Klaountia Pasmatsiou for their questions and for recording this excellent interview!*

*Question: Why did you major in physics?*

**Helen:** It was the easiest major to complete! The funny thing is, I started University in Australia. Then after two years at Melbourne University, my family—because of my father’s job—moved to the United States, and I moved with them. And, some professor at Melbourne University wrote a letter that said I would get a bachelor’s degree in a year if I stayed there—because it's a three-year degree—so I should be put in an equivalent position when I got to the U.S. I think on the basis of the letter—and nothing else—Stanford gave me a year of credit for my last year of high school. So, I arrived at Stanford with three years of credit and no major. And I had to find a major I could complete.

So, I went around with my notes from the courses I had taken [and began] talking to people in various departments. In the physics department, I happened to present to Jerry Pine. [He] was then an assistant professor at Stanford and [later] a professor at Caltech [doing] biophysics, but at that time, he was a particle physicist. And, he looked at my notes, and he said, "Well, you've got a good basis here, but I can't really say that there is a correspondence between the courses you've taken and the courses we have. So, why don't you, it's just the beginning; we are already in the middle of this quarter... so for this quarter, just go audit courses. And, then you tell me which ones you think you've taken, and I'll sign for it." [In] many other departments, the professors said, "Oh well, there's this course... and you've done these things, but you haven't done that... so you need to take that course." And if I looked at any other major, it would have taken me more than two years to finish it. I could complete a physics major in four quarters, from the last quarter of that year and then the whole next year.

So, I kind of slipped into the physics program at Stanford. And, there were places where I was behind and places where I was ahead, and I just had to make up for that in my first quarter and then [by] my senior year I was like the other seniors, like the advanced seniors at Stanford. I was taking the courses that they were taking, and I guess somebody noticed that I was doing reasonably well, so they encouraged me to go on to graduate school, which I would not have thought of doing for myself. Doing a Ph.D. was way beyond the world I grew up in. The notion of a Ph.D. in physics would never have occurred to me.

*Question: So you got your bachelor's in physics because it was easy, but we are always told that theoretical physics is the hardest thing you can go into. So why did you go into that?*

**Helen:** Well, because first of all, I know I am not an experimentalist. I have no skill in that direction. I can do certain types of things with my hands, but dealing with electronics and making it work and all that, that was not where my strengths were. And, my strength was actually mathematics and mathematical ability. In fact, in high school, my math teacher said to me,

And, I think she was actually partially scolding me for not just grinding through some problems, but I interpreted it after some thought as encouraging me to keep thinking for myself. So I had a really good high school math teacher, and I was in a good position from the point of view of mathematics. The courses, the applied Math course [in particular], that I had taken at Melbourne University was really a physics course. So, all of that is a kind of strong background. And, the other thing was SLAC was just being built. So there were people around me who were very excited about particle physics because they had this new tool that would surely tell them new physics, so when you have people teaching courses who are very excited about what they are doing, you tend to get excited yourself. And that's what I did! I got excited. [At one time] I actually thought, you know I'll probably be a high school physics teacher, but Stanford won't accept me if I just apply for a master’s degree, so I'll apply for a Ph.D., and when I'm ready I'll quit. But I kept getting more and more interested. And so I stayed on and did a Ph.D. BJ (James Bjorken) was actually my thesis advisor.

*Question: What was he (James Bjorken) like as a thesis advisor?*

**Helen:** Very hands-off, very laid back. He's an interesting person that makes you think. But he also, I mean, that was the time when the deep inelastic scattering experiments at SLAC were beginning to happen. And BJ would show me negatives and say, "Hey, what do you think about this?" So it made the theory even more exciting because there was something we could equate it to. I was always, I asked him the other day, "Did I seem like a confident student?" He said, "Oh yes. I'd give you a practice problem, and you did it in no time flat." Well, I know that when I handed him back that—the answers to the problem—I had no clue what it meant. I'd done the math, but I couldn't figure out the physical interpretation. But, he'd look at it and say, oh, look, she solved the problem. So from then on, he'd give me problems to challenge me, and things progressed. And, I had a really interesting Ph.D. problem, and that, of course, set me in good stead to go on in my career.

*Question: Can we talk a little bit about the ups and downs that are common when doing a Ph.D. in physics?*

**Helen:** It's not just doing a Ph.D. OK, doing theoretical physics, there are going to be times when you are totally frustrated, right? The psychologists talk about this. They talk about engagement, interest, [and] identity. So engagement means if somebody shows you something interesting, you can get interested. Interest means I'm sufficiently interested in this subject, and I'm going to go find out more about it. And, identity means I find it so fascinating that even when it’s not [interesting], I'm going to keep doing it because that's what I do. And so that's the progression for anybody: to go from a place where you think you might be interested in something, to the place where you are totally engaged with what’s going on, to the place where you say that's who I am. And, as a graduate student, I wasn't sure. Every graduate student has times when they think

and that's because the problems are not easy. And, there is an awful lot you have to learn for you to prepare to solve what's at the forefront of the field, which is what you are asked to do as a graduate student! You are supposed to jump right in and move very quickly to be competing with people who have been doing it for 10 years or 20 years or however long that theory has been around.

*Question: Sometimes, I worry that what I'm working on isn't relevant or going to have an impact. Do you think that this plagues everyone at some point in time?*

**Helen:** You have to [somewhat] trust your thesis advisor. That your thesis advisor knows what’s important to the field and knows what is an appropriate problem to give a student at your stage of development. So, you're not going to get the most critical interesting problem in the field as your first research kind of test problem [that explores] can you do research, can you figure things out for yourself, and [then] the next problem will be a bit more interesting particularly if you do a good job with the first one. But, it doesn't matter what the problem is. Whether you chose it yourself or whether your advisor chose it, there are going to be times when you feel you can't do this, and it doesn't make sense to me. I'm beating my head against the wall, and I'm not getting anywhere... and what you have to do is have sufficient confidence in yourself that if you keep working at it, you can get past that. And that's a really pretty critical piece. That inner confidence, not only am I interested and I want to do it, but that I have the capability to do it, and I just haven't quite gotten there yet.

That's a very different interpretation. Because, you see, before you get to be a graduate student, no one gives you a problem that you can't solve in a matter of hours. But, when you are doing research, it might take you three months to solve a problem. It may take you more than that as you work with others and collaborate and redefine the problem and move on, so it's just a different class of problems than any you've been asked to work on before. So yes, it feels hard, of course, because if they weren't harder problems, somebody would have solved them. And, the real trick is taking that problem and formulating it into a problem you can do, which may not be the whole of the question you asked. Taking some part of it that you say, this I understand. This piece I can make some progress on. And you'd be surprised how often that allows you to make progress on the part you thought you didn't understand. So just persist. Persist and speak up. Those are the two things.

*Question: What are some of the challenges that women in physics face today?*

**Helen:** Well, I'm not really sure about nowadays, because I've been retired since 2010, but I don't think it's changed all that much. When I got my Ph.D., women were 2%, and now they are 15%, but 15% is still a minority. So the challenge for most young women is how to speak up and make themselves heard in a population that's predominantly men. And, this starts in middle school and goes on through high school and university as an undergraduate. And, once you are trying to be a professional in the field, it becomes critical, right? You must acquire your voice. And, be able to say what you are thinking, not too tentatively. Women tend to be a little more tentative.

I have three brothers [and] a father who liked to have family arguments just for fun. I learned to argue with the boys from the time I was, I don't know, eight years old. I think that really helped and really put me in a good position when I found myself—very often—the only woman in the room.

*Question: You suggest things women can do to fit into the culture of physics, but is there anything that men should do to make it more welcoming for women?*

**Helen:** Actually, the American Physical Society has a program where groups of senior women in physics go to departments to evaluate the climate for women in that department and tell the department what it needs to do to make its climate better. And, in fact, it is just making its climate better [as a whole], because what happens is, [if] the climate is bad in a department, more men will just stick through that bad climate, while women will say, this is not a place I want to be, and move out. So, if the department doesn't feel comfortable—and you think there is sympathetic leadership—tell them to invite one of those reviews from the American Physical Society, and they'll bring a committee of senior women who will say to your department chair the things that you could say but [the chair] won't hear from you. If you tell it to me, and I tell it to the department chair, then [they]'ll hear it. That system has worked very well for most departments. Of course, departments that don't want to get better don't bother inviting such a committee, but most departments are not actually deliberately being exclusive. They're just not aware of all the things that are affecting their students.

*Question: What do you think are some of the most common things you end up telling departments that they can work on?*

**Helen:** First-year graduate students are members of your department; they're not on trial, right? You should invite them and include them in everything that's going on, rather than saying, until you pass your orals, I'm not interested in talking to you. This is a very common [attitude]. The old-fashioned version of that is the professor who stands up and says, look around at your neighbors: only one in ten of you will be here at the end of the course. [Which] means, I'm a really bad teacher; I cannot teach you this stuff. I can only teach the ones who already know it. So, it's the same thing with the climate in the department. When the department says, we don't really have the energy to spend time with you until you have passed your quals, that's saying, we think that most of you will fail because we don't know how to teach you to pass. So, don't take it as a judgement on yourselves; take it as a judgement that this department hasn't figured out how to teach well. And, this is what I spend my retirement on, science education, mostly at the K-12 level, but I talk a lot with people who do physics education research at the college level and the graduate level. And, a lot of it has to do [with the fact that] there are known strategies for teaching that are not in general enough use. And, the strategy of standing up there and talking for 50 minutes is known to be a bad strategy.

So, push the department on updating its teaching methodologies. Push the department on thinking of inviting students in—rather than failing students out—as an attitude: after all, you admitted these students. Even for undergraduate majors, the same is true. You have to invite them in. You can't tell them, “if you aren't good enough, we don't want you,” because that's an assumption that they're not good enough. So, there is a whole mindset, right? It's the mindset of, physics is a wonderful subject and most people—if they are willing to work hard enough—can learn it, versus, physics is hard and you are not smart enough.

**University of Pisa**

**This is a two-part post. Be sure to check out part one.**

*Hi all, I’m Brian, a postdoc working at the University of Pisa. I wanted to write a post that connects some classic concepts in physics—phase transitions—with an important branch of research—the conformal bootstrap. The stuff at the beginning of part one is pretty introductory, but watch out: the end gets pretty technical, and a little knowledge of Quantum Field Theory would help too.*

So now we know that we can describe the critical points of statistical systems with CFTs, but what does that get us? The answer, it turns out, is “a lot”. We’ve seen in part one of this post that one of the critical exponents, ν, can be directly computed from the scaling dimension of the operator σ(x). In fact, for the two-dimensional Ising model, all the critical exponents are determined by the dimensions of the σ(x) and ε(x) operators. This is related to the fact that the content of the theory is encoded in the correlation functions, and the simplest correlation functions (those with two and three operators) are determined by the scaling dimensions.* The two-dimensional model is a little special—conformal symmetry provides enough constraints to allow us to fully solve the theory. In three dimensions, the Ising model is not exactly solvable. However, conformal field theory still helps—it gives an efficient method of numerically computing the critical exponents. This method is called the conformal bootstrap, and it includes a large set of related techniques that apply to different aspects of conformal field theory. In this section, we will describe how conformal bootstrap methods allow us to put an upper bound on the energy gap between the ground state and the first excited state in two-dimensional conformal field theories.

The specific form of the bootstrap we’ll focus on is called the modular bootstrap. It’s going to get a little technical, so hang on. The first object we’ll need is the partition function. In statistical mechanics, the partition function is the sum over all the states in the theory, weighted by their “Boltzmann factor”:
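In symbols (a sketch, with N(E) as described just below):

```latex
Z(\beta) = \sum_{E} N(E)\, e^{-\beta E}
```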

Here we sum over states with distinct energies E—degeneracy is accounted for by N(E), which counts the number of states with energy E. β is the inverse temperature, 1/T, so this function adds up each state, exponentially damped by their energy over the temperature (we use k_B = 1, so temperature and energy have the same units).

In a two-dimensional theory, the partition function is similar, but two-dimensional Lorentz invariance means that states have additional charges—in addition to their energy, they have a spin. Therefore we have two temperature-like variables that show up in the partition function—we will call them τ1 and τ2. For a two-dimensional CFT, the energy of a state is equal to its scaling dimension plus a Casimir energy equal to −c/12 (this real number, c, is called the central charge of the theory, and it shows up in a lot of places, but we won’t review that here). The partition function is
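A sketch of this, using ∆ for the scaling dimension, s for the spin, and N(∆, s) for the degeneracy (conventions may differ slightly from the original figures):

```latex
Z(\tau_1, \tau_2) = \sum_{\Delta,\, s} N(\Delta, s)\; e^{2\pi i \tau_1 s}\; e^{-2\pi \tau_2 \left(\Delta - \frac{c}{12}\right)}
```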

Now we sum over all possible spins s and dimensions ∆. This object is actually the partition function when the background manifold of the theory is the torus. This can be seen by computing the quantum path integral on the torus. We tend to think of a torus as a cylinder with its ends sewn together to look like a donut. This is partially accurate, but it misses the fact that there are actually many tori—the two ends can be sewn together with an offset, thereby twisting the torus. A twisted torus has a different periodicity condition. If you go straight forward along one direction for the length of the torus, you won’t come back to where you started—you’ll be shifted to the left or right. This is shown in the above picture, which was borrowed from the excellent review “Applied Conformal Field Theory,” by Paul Ginsparg. The complex torus is defined by three real numbers—two periodicities (one in each direction) and the amount of twisting. However, in conformal field theory, scale invariance means that the physics doesn’t depend on the total size of the spacetime manifold. As is customary, we set the length of the spatial dimension to 1. Then the two remaining parameters, the time-periodicity (τ2) and the twisting of the torus (τ1), are combined into one complex parameter τ = τ1 + iτ2, which is called the modular parameter. These are exactly the parameters which appear in the partition function above.

It is sensible that the partition function might depend on the manifold the theory lives on, but it may seem a little mysterious that the periodicity of the manifold shows up in the partition function in the same place as the inverse temperature in the statistical example. This is actually a general feature, and is related to the fact that QFT at finite temperature is periodic in imaginary time. That means that partition functions with temperature T are the same as Euclidean (imaginary time) path integrals with periodicity β, which is a fact that can be derived from the path integral.

Now let’s think a little harder about this modular parameter τ, which describes the length and twisting of our torus. First of all, we can’t twist too much. If we twist the torus a little bit, then going around the torus in the temporal direction will move us a little bit to the left or right of where we started in the spatial direction. But if we keep twisting more and more, eventually the amount of twisting will be exactly the size of the spatial circle—that is, 1, and we will end up back where we started. So it turns out that
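In symbols, shifting the twist by one full spatial circle is invisible to the theory:

```latex
\tau \;\to\; \tau + 1, \qquad Z(\tau + 1) = Z(\tau).
```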

There’s a further transformation that doesn’t change the partition function, which essentially swaps the space and time directions. This is actually the same thing as a duality that shows up in the Ising model called the Kramers-Wannier duality, which relates the theory at high temperatures to the theory at low temperatures. Mathematically, it acts by τ → −1/τ. So we have a second invariance:
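In symbols:

```latex
\tau \;\to\; -\frac{1}{\tau}, \qquad Z(-1/\tau) = Z(\tau).
```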

These two transformations that don’t change the partition function are called modular transformations. Conformal invariance is required to ensure that τ → −1/τ is a symmetry, so generic (non-conformal) partition functions are not invariant under modular transformations. We will now use them to derive something very non-trivial about the theory. First we rewrite the second invariance as a “crossing equation”:
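Moving both sides to one side, the crossing equation reads:

```latex
Z(\tau) - Z(-1/\tau) = 0
```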

Recalling the form we used above, with ∆ and s, we find
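Written out with the degeneracies, this is (a sketch; τ′ denotes the transformed modular parameter):

```latex
\sum_{\Delta,\,s} N(\Delta,s)\, e^{2\pi i \tau_1 s \,-\, 2\pi \tau_2 (\Delta - c/12)}
\;=\;
\sum_{\Delta,\,s} N(\Delta,s)\, e^{2\pi i \tau'_1 s \,-\, 2\pi \tau'_2 (\Delta - c/12)},
\qquad \tau' = -\frac{1}{\tau}.
```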

This equation is true for any value of τ1 and τ2, so we may also take τ derivatives of this equation, and it will still hold. Let’s define the one-derivative and three-derivative operators

and let’s make another definition for convenience:
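A natural guess for this shorthand, consistent with how it is used below (the ground state, with ∆ = 0, then has ∆⋆ = −c/12):

```latex
\Delta_\star \;\equiv\; \Delta - \frac{c}{12}
```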

Now we can act these operators on our crossing equation. We need one more fact first: the point (τ1, τ2) = (0, 1) is invariant under the second modular transformation (23). This simplifies the crossing equation greatly, essentially because it means that the two partition functions in the “crossing equation” will be equal term-by-term (rather than only being equal after the sum, which is the case for generic values of τ). The result is the following:
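Schematically, the one-derivative condition at this point takes the form (a sketch in the conventions above; the exact prefactors depend on the operator definitions, which appeared in the original figures):

```latex
\sum_{\Delta,\,s} N(\Delta, s)\; \Delta_\star\; e^{-2\pi \Delta_\star} \;=\; 0
```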

I’ve skipped a few steps—if you’re interested in how this works, you may want to fill them in yourself. This equation is telling us that a bunch of terms sum up to zero. The idea of the bootstrap is this: unless every term is zero, there must be both positive and negative terms. In particular, because N and e^(−2π∆⋆) are positive, it means there is a term which satisfies
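That is, there must be a state with

```latex
\Delta_\star \;\leq\; 0
```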

This is a constraint on the spectrum. In fact, it is a useless constraint—it is satisfied by the ground state, for which
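Namely, for the ground state (in the conventions sketched above):

```latex
\Delta = 0, \qquad \Delta_\star = -\frac{c}{12} \;<\; 0.
```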

That is why we need the three-derivative operator, from which we obtain:

The trick is going to be finding a linear combination which vanishes on the ground state. This will let us put a bound on the first excited state.** To do this, we define the new operator

which is constructed to be zero on the ground state and negative on the high-∆ states. When we apply it to the crossing equation, it will imply that there must exist a low-∆ excited state! Specifically, when we act on the crossing equation, we find

The ground state has ∆⋆ = −∆0, so this equation implies that the summand vanishes on the ground state. For states with very large values of ∆⋆, the ∆⋆^3 term dominates, and the summand is negative. The crossing equation requires both positive and negative terms, so there must be a term where the summand is positive. If we do a little algebra, we will find that the summand can only be positive if ∆⋆ < ∆0. Therefore, the spectrum of the two-dimensional CFT must include a state which has

So to recap, we found that the one-derivative crossing rule (28) implies there is a state with ∆ < c/12. This is satisfied by the ground state. Then we found a three-derivative crossing rule, (33). Since it vanishes on the ground state, it implies there must be an excited state with ∆ < c/6.

This is a very non-trivial bound on the gap between the ground state and the first excited state of the theory. This bound may be improved by including higher-derivative terms in the linear functionals, and it may be generalized by considering global symmetries, or including τ1 derivatives, or in a number of other ways. All of these methods comprise the modular bootstrap program.

The modular bootstrap is one of the simplest bootstrap methods in conformal field theory. Unfortunately, it is not known how to apply it in higher dimensions, because there is no modular invariance to constrain the partition function (there are some known versions of modular invariance in higher dimensions, but they fail for various reasons). There is a more general technique which goes by the name of the “conformal bootstrap,” which uses the conformal symmetry to constrain four-point functions rather than the partition function. The idea is the same, though—take the difference between the four-point function and the four-point function which has been transformed by crossing symmetry. Then apply various derivative operators. If the result is always positive, you can rule out that theory from being consistent. This approach is more technically complicated because the four-point functions take a very complicated form, so we won’t go into it here. But we should point out that it was through this method that the best theoretical predictions of the three-dimensional Ising critical exponents, some of which we encountered above, have been obtained. The results agree with those obtained by Monte-Carlo simulations but are significantly more precise. Interestingly, however, these methods have led to a large (8σ!) discrepancy between these theoretical predictions and experiment.

In 1992, an experiment aboard the space shuttle (STS-52) took advantage of the microgravity environment to measure a number of properties of the superfluid phase transition of liquid helium. The results for one of the critical exponents—the one governing the divergence of the specific heat—disagree significantly with both the Monte-Carlo simulations and the conformal bootstrap calculations. Further calculations are required to fully understand this disagreement. These calculations will certainly be done, so we will have to wait to find out the answer to this puzzle.

And, thank you once again to Andrew Hanlon for several rounds of thorough editing, and thank you to the Theory Girls for the opportunity to write this post!

**Citations and Acknowledgements**

The first section of this post was largely inspired by Henriette Elvang’s course on CFTs.

Paul Ginsparg’s introduction to 2D CFTs is at (9108028).

The modular bootstrap was introduced in (1608.06241).

The modern conformal bootstrap was introduced in (0807.0004). For a nice introduction, see David Simmons-Duffin’s TASI lectures, (1602.07982).

The bootstrap results for the three-dimensional Ising model and the liquid helium discrepancy were reported in (1603.04436).

*CFTs do have information beyond the scaling dimensions—these include the couplings between different operators, and operators’ charges under the various symmetries of the theory, which includes the spin, and potentially their electric / magnetic charges if there are global symmetries.

**Actually, I’ve done something a little not-okay here by using this form of the partition function instead of the Virasoro characters, but this leads to the same bound.

Image Credits:

(1) Liquid Helium: wiki https://en.wikipedia.org/wiki/Liquid_helium#/media/File:Liquid_helium_Rollin_film.jpg

**ETH Zurich**

Although this two-part blog is geared towards our readers with a bit of a physics background, part one is intended to be accessible to a wide audience. For some background on related topics in this post, we have a few podcasts and blog posts for you to check out. For some black hole basics, check out our first podcast __Black Holes are Everywhere__. We also have a podcast that explains entropy alongside black holes with some introductory examples called __Entropy, Black Holes, and the Heat Death of the Universe__. Additionally, we will refer you to related blog posts throughout this work when we think it might be useful (for some basic motivation for fundamental physics __check out this blog__).

In today’s blog, we introduce black hole entropy and the tools needed to calculate it. We will be focusing on the Bekenstein entropy bound, which puts an upper bound on the entropy needed to describe physical systems, like a cup of tea or a globular cluster. Using a Gedanken experiment, we derive the Bekenstein entropy bound. Take note that this bound is only guaranteed to hold for systems of constant, finite size and weak self-gravity. What is weak self-gravity? For now, weak self-gravity can be loosely motivated by thinking of situations in which the effect of gravity isn't too strong.

The latter half of this blog (to be released at a future date) will get more technical. In part two, the Bekenstein bound will be reformulated into a covariant form, known as the covariant entropy bound, proposed by Bousso, which is valid in all space-times admitted by Einstein’s equation. Stay tuned for the release of part two which will be on our blog and linked here.

Entropy can be thought of as counting the number of microstates that a system can be in. For example, let's consider flipping a coin with two sides, heads and tails. In our example, we will flip the coin 4 times. The microstates of this example would be the possible outcomes: HHHH, HHHT, ..., TTTT.

If we choose to measure the number of tails, this could be our macrostate. The entropy of a given macrostate would correspond to the logarithm of the number of microstates that corresponds to it. A macrostate with lower entropy would have fewer microstates than a macrostate with higher entropy. So a macrostate with 4 tails has one microstate and an entropy of S = log(1) = 0, while a macrostate with 2 tails has 6 microstates and an entropy of S = log(6) ≈ 1.79. And we can count quantum microstates too (for a blog post about some modern debates in quantum mechanics, see __Bardeen’s Ass__ by Christian Jepsen). In physics, the Bekenstein bound puts an upper limit on the amount of entropy S, or information, that can be contained in a finite region of space with a finite amount of energy. In other words, it puts constraints on the maximum amount of information needed to exactly describe the system down to the quantum level.
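This bookkeeping is easy to check directly; here is a minimal sketch in Python, using natural logarithms as above:

```python
import math
from itertools import product
from collections import Counter

# All microstates of 4 coin flips: ('H','H','H','H'), ..., ('T','T','T','T').
microstates = list(product("HT", repeat=4))   # 2^4 = 16 microstates

# Macrostate = number of tails; count how many microstates realize each one.
counts = Counter(state.count("T") for state in microstates)

# Entropy of a macrostate: S = log(number of microstates), natural log.
entropy = {tails: math.log(n) for tails, n in counts.items()}

print(counts[4], entropy[4])              # 1 microstate, S = log(1) = 0
print(counts[2], round(entropy[2], 2))    # 6 microstates, S = log(6) ≈ 1.79
```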

Yet, entropy bounds have implications even outside the realm of physics. In computer science, the Bekenstein bound implies that a Turing machine with finite size cannot have an unbounded memory, and that there is a maximum information-processing rate. Even in theory, the human brain cannot hold infinite amounts of information. An average human brain of mass 1.5 kg and volume 1250 cm^3 has a Bekenstein bound of ≈ 2.6x10^42 bits*. This post will give an informal derivation of the Bekenstein bound using a thought experiment. There are more rigorous Quantum Field Theory derivations, but to keep within the spirit of general relativity, and how relativity was first developed using thought experiments, we will stick to these informal sorts of arguments for now.
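The brain figure can be sanity-checked with the form of the bound derived at the end of this post, S ≤ 2πER/(ħc), converted to bits by dividing by ln 2. Treating the brain as a uniform sphere of the quoted mass and volume is, of course, a rough assumption:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

m = 1.5                  # brain mass, kg
V = 1250e-6              # brain volume, m^3 (1250 cm^3)

E = m * c**2                            # rest-mass energy, J
R = (3 * V / (4 * math.pi)) ** (1 / 3)  # radius of a sphere of volume V, m

# Bekenstein bound in bits: I <= 2*pi*E*R / (hbar * c * ln 2)
bits = 2 * math.pi * E * R / (hbar * c * math.log(2))
print(f"{bits:.2e}")  # ≈ 2.6e42 bits, matching the figure quoted above
```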

In part two of this post, we will show how this bound cannot be the full story, as it is not covariant and *does not* hold when the conditions of finite size, energy, and weak self-gravity are *not* met. It will be succeeded by a covariant version known as the Bousso bound. Nevertheless, the Bousso bound reduces to the Bekenstein bound in the case of weak self-gravity and finite size/energy. It has also been shown to be valid for many different cases, including surfaces inside gravitationally collapsing objects and cosmological solutions to Einstein’s equation [1] (see __The Cosmological Constant Problem__ by Leah Jenks for an introduction to using Einstein’s equations in cosmology).

In order to talk about entropy bounds, it is beneficial to first understand black hole entropy. Black holes arise as classical solutions to Einstein’s theory of general relativity and can be completely characterized, according to the no-hair theorem, by only three parameters: mass, electric charge, and angular momentum. Since they can be characterized by just these three parameters, it is perplexing that they should even have entropy. Entropy, S, in statistical mechanics counts the number of microstates of a system:
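In formula form (with Ω the number of microstates; k_B is often set to 1, as in the coin example above):

```latex
S = k_B \log \Omega
```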

However, classically, black holes have no microstates, so it seems that black holes should have no entropy. On the other hand, if we had a cup of tea with some amount of entropy, and we threw it into a black hole, we would end up with a black hole with a slightly larger mass. If black holes did not have entropy, the entropy of the tea would have disappeared, violating the second law of thermodynamics. One could even imagine using black holes as an entropy dumping ground to create a perpetual motion machine, as depicted in Figure 1, where a black hole is used to convert gravitational potential energy into mechanical energy, which is then turned into electrical energy to power a light bulb. In this drawing, a box of thermal radiation is lowered to just outside the black hole horizon. The radiation is then dumped onto the black hole horizon, and the box is lifted back up using less work than was generated during the lowering process. The box can then be refilled from the reservoir and the process repeated. To be consistent with the second law, black holes must have entropy, and if we throw a cup of tea into a black hole, the change in the black hole entropy must be greater than or equal to the lost entropy of the tea [3]:
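In symbols:

```latex
\Delta S_{\mathrm{bh}} \;\geq\; S_{\mathrm{tea}}
```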

Now that we see the motivation for needing black holes to have entropy, we can go about figuring out how to calculate the entropy of a black hole.

*The image below was conceived of by Robert Geroch, drawn by Louis Fulgoni, and was copied from Bekenstein (1980).*

For simplicity, we will stick with Schwarzschild black holes, which can be characterized by a mass M. From the first law of thermodynamics, dE = TdS, where E is energy and T is temperature. Classically, black holes have a temperature of absolute 0, but when quantum effects are taken into account, black holes behave as perfect blackbodies, absorbing and emitting particles (Hawking radiation) at a temperature known as the Hawking temperature T_H. Using E = M we find [3–5].
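That is (a sketch, in units where the black hole's energy is its mass):

```latex
dS_{\mathrm{bh}} = \frac{dM}{T_H}
```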

The Hawking temperature can be found using a periodicity trick. To perform this trick, two facts must first be proven.

**Fact 1: At finite temperature, quantum field theory is periodic in imaginary time.**

This can be shown using the thermal Green’s function defined as (4)
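A standard form of this definition, consistent with the description that follows (conventions vary):

```latex
G(\tau) \;=\; Z^{-1}\, \mathrm{Tr}\!\left[ e^{-\beta H}\; T_E\, O(\tau)\, O(0) \right]
```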

where O is an operator, τ = it is Euclidean time, and T_E means Euclidean time ordering.

This Green’s function is periodic in imaginary time:
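A sketch of the chain of equalities:

```latex
\begin{aligned}
G(\tau) &= Z^{-1}\,\mathrm{Tr}\!\left[e^{-\beta H}\, O(\tau)\, O(0)\right] \\
        &= Z^{-1}\,\mathrm{Tr}\!\left[O(0)\, e^{-\beta H}\, O(\tau)\right] \\
        &= Z^{-1}\,\mathrm{Tr}\!\left[e^{-\beta H}\, O(\beta)\, O(\tau)\right] \\
        &= G(\tau - \beta),
\end{aligned}
```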

where in the 1st line we assume 0 < τ < β, in the 2nd line we use the cyclic property of the trace, in the 3rd line we use the definition of time translation:
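The time-translation definition used in that last step is the standard one:

```latex
O(\tau) = e^{H\tau}\, O\, e^{-H\tau}, \qquad e^{\beta H}\, O(0)\, e^{-\beta H} = O(\beta).
```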

Thus t ∼ t + iβ [3, 5].

**Fact 2: The region near the horizon of black holes can be related to Rindler space.**

While this holds for other black holes as well, for our purposes, showing it holds for a Schwarzschild black hole will suffice. The metric for the Schwarzschild black hole is:
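In Schwarzschild coordinates (with c = 1 and dΩ² the round metric on the two-sphere):

```latex
ds^2 = -\left(1 - \frac{2GM}{r}\right) dt^2 + \left(1 - \frac{2GM}{r}\right)^{-1} dr^2 + r^2\, d\Omega^2
```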

To look at the near horizon region, we can make the coordinate change

where ε≪1. Expanding about small ε we get

Now, make another coordinate change:

to see that we get the Rindler form in the (R, η) piece:

The horizon is at R = 0, so we restrict the coordinates to R > 0, but there is no restriction on η. Now, if we make one last transformation, η = iφ, then we see that we get something that looks like polar coordinates in the (R, φ) piece:

with the identification
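This is the usual smoothness condition at the origin of polar coordinates (a sketch; tracing it back through the coordinate changes is what produces the periodicity of imaginary time):

```latex
\varphi \;\sim\; \varphi + 2\pi
```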

Finally, we conclude that β = 8πM, so the Hawking temperature of a Schwarzschild black hole is
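In units with G = ħ = c = k_B = 1:

```latex
T_H = \frac{1}{\beta} = \frac{1}{8\pi M}
```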

Plugging this back into equation (3), we get
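That is (same units as above):

```latex
dS_{\mathrm{bh}} = \frac{dM}{T_H} = 8\pi M\, dM
```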

Integrating this and assuming Sbh → 0 as M → 0, we get [3]
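The integrated result (same units as above):

```latex
S_{\mathrm{bh}} = 4\pi M^2
```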

or putting back in factors of G, h bar, c,
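Restoring units, and writing A = 4π r_h² for the horizon area, this is the familiar area law:

```latex
S_{\mathrm{bh}} = \frac{4\pi G k_B}{\hbar c}\, M^2 = \frac{k_B c^3}{4 G \hbar}\, A
```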

Using the entropy of a Schwarzchild black hole, we can now derive an entropy bound.

The necessity of a maximum entropy bound for an object or system with a given size R and energy E can be realized by a simple Gedanken experiment. This is known as the “poor man’s derivation of the Bekenstein bound,” as a more rigorous derivation can be done using quantum field theory. Starting with a Schwarzschild black hole of mass M and entropy Sbh = 4π(G/hbar c)M^2, imagine adiabatically dropping in a cup of tea of mass m ≪ M and entropy S from far away. After sipping the cup of tea, the black hole mass will grow to M + m, assuming energy losses to gravitational radiation and Hawking radiation are negligible. So the entropy of the black hole will be
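Expanding in the small mass m (a sketch):

```latex
S_{\mathrm{bh}} \;\to\; \frac{4\pi G}{\hbar c}\,(M+m)^2 \;=\; \frac{4\pi G}{\hbar c}\left(M^2 + 2Mm\right) + \mathcal{O}(m^2)
```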

The black hole’s entropy grows by an amount ∆Sbh = 8π(G/hbar c)(Mm) to first order in m, while the cup of tea, along with its entropy, has disappeared. In order to obey the second law of thermodynamics, 8π(G/hbar c)(Mm) − S ≥ 0. This can be rewritten in terms of the radius of the black hole horizon, rh = 2GM/c^2, to get
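Substituting GM = rh c²/2, the second-law condition becomes (a sketch):

```latex
S \;\leq\; \frac{4\pi c}{\hbar}\, m\, r_h
```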

We now take the limit where the black hole radius is as small as we can make it, maybe a few times bigger than the radius of the cup of tea, to ensure the cup of tea can still be swallowed by the black hole: rh = ξR, where ξ is some number larger than, but on the order of, unity. Our condition that m ≪ M is still satisfied because we assume the cup of tea to be much larger than its own gravitational radius 2Gm/c^2. Using E = mc^2, we can write the entropy bound as
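With rh = ξR and E = mc², this reads (a sketch; the ξ = 1/2 value quoted below gives the familiar S ≤ 2πER/(ħc), in units of k_B):

```latex
S \;\leq\; \frac{4\pi \xi}{\hbar c}\, E\, R
```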

[2]. There are more formal approaches that can be used to derive this, and it can be found that ξ = 1/2. To give an idea of where the entropy of various objects lies in relation to the Bekenstein bound, Figure 2 is a plot of the entropy bound versus the radius of systems with various densities. The equation for this plot can be obtained by starting with the Bekenstein bound equation, plugging in E = mc^2, and using the Wigner-Seitz radius, R = (3m/4πρ)^(1/3), to write mass m in terms of density ρ and size R:
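Using the ξ = 1/2 form of the bound, the curve plotted at fixed density ρ is (a sketch):

```latex
S_{\mathrm{bound}} = \frac{2\pi}{\hbar c}\left(\frac{4}{3}\pi \rho R^3 c^2\right) R = \frac{8\pi^2 c}{3\hbar}\, \rho\, R^4
```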

The figure is over-plotted with the entropy and size of various objects to compare to the bounds; the neutron star should be compared with the purple line bound, the globular cluster with the green line bound, and the rest against the blue line bound [2].

**That concludes the first portion of this post!** We went over some great foundational concepts about entropy and black holes that will serve us well for part two, where we get to the heart of this poor woman's derivation of a covariant entropy bound.

There were many subtleties that were glossed over during the derivations in this paper in order to keep within the theme of derivation by thought experiments. In summary, as Bousso puts it [7],

[1] Wikipedia. Bekenstein bound — Wikipedia, the free encyclopedia, 2016. [Online; accessed 29 April 2016]. __Black Hole Event Horizon Image__

[2] J. D. Bekenstein. Bekenstein bound. Scholarpedia, 3(10):7374, 2008. Revision 121148. *Image for the gravitational-thermodynamic engine was conceived of by Robert Geroch, drawn by Louis Fulgoni, and was copied from Bekenstein (1980). Image for the* __Universal Bounds Plot can be found here on scholarpedia.__

[3] Tom Hartman. Quantum Gravity and Black Holes, Spring 2015.

[4] Barton Zwiebach. A First Course in String Theory. Cambridge University Press, 2004.

[5] Robert Wald. General Relativity. The University of Chicago Press, 1984.

[6] Charles W. Misner, Kip S. Thorne, and John Archibald Wheeler. Gravitation. W. H. Freeman and Company, 1973.

[7] Raphael Bousso. Perturbative proof of the covariant entropy bound. FQXi Conference in Vieques, 2014.

[8] Raphael Bousso, Horacio Casini, Zachary Fisher, and Juan Maldacena. Proof of a quantum Bousso bound. Physical Review D, 90(4), Aug 2014.

[9] Raphael Bousso. A covariant entropy conjecture. Journal of High Energy Physics, 1999(07):004, 1999.

**Note that there are different ways to define entropy. While in the table for coin flipping log base 10 was used, here it is log base 2.*

**Lehigh University**

Every 108 nanoseconds at the Relativistic Heavy Ion Collider (RHIC), two ion beams pass by one another at 99.999% of the speed of light, in the hopes that two out of trillions of ions will collide in just the right spot, and we will be able to measure what happens. For 24 hours a day, 7 days a week (yes, including holidays), for 4 months every year, for the past 2 decades; every 108 nanoseconds.

There is some truth in this, but the reality is much messier. Things break, alarms go off, objects are thrown. Getting the timing just right is very hard; 108 nanoseconds is a very, *very* small amount of time, and the timing changes based on the energy! Particle detectors turn on, ions collide; oh no the machine is out of ions, particle detectors down, more ions please! Feed the machine of physics more data in the hopes that this will culminate in a deeper understanding of the physical world around us.

From the gold foil scattering experiments in the early 1900s (what is now called a fixed target experiment), to the first collider accelerators in the 1960s (where two beams are collided with one another, providing more total energy than if one collides a single beam with a stationary target), to the future Electron Ion Collider (EIC), which will be built where RHIC is now running at Brookhaven National Lab; each machine is built on the learned knowledge of past experiments. Many machines are also literally built with the physical remains of previous experiments. The sPHENIX detector currently being built at RHIC reuses the solenoidal magnet from the BaBar experiment at SLAC National Accelerator Laboratory —electronics are saved whenever possible in order to be used in the next big experiment—and, as previously mentioned, RHIC will eventually become the EIC. All things, in time, run their course and are torn asunder, providing the soil for the next season's growth.

As we build ever greater machines—built on ever-growing contours of understanding—we probe more precisely and more deeply, trying to understand: what is it? What we've learned is that reality is far more richly textured than earlier mechanistic models in physics would lead one to believe. A proton is not simply a hard ball of matter, but a swirling mix of quantum fields of various, varying intensities, constantly tugging, pushing and pulling one another about, only existing through constant interaction, lest the entire thing come apart. In fact, a proton is the only stable single baryon that we know of, with all other bound states decaying into a proton and/or other forms of matter and energy.*

We've also learned that no matter how hard we try, we cannot escape the subjective element. From the beginning, the modern field of physics has been the attempt of humanity to create, through objective observation and experimentation, a mathematically rigorous, physical model that maps directly to the reality around us. In essence, not just experimenting and observing, but formulating these observations into objective, mathematical models that can be used to accurately predict the results of future experiments. Eventually, as these experiments grew more and more complex, a fracture was introduced. Humanity had come face to face with the quantum reality of our existence. No longer was our beautiful idealistic world apart from us, where we could merely observe objectively and without interaction. Gone were the days of taking notes as impartial judges and using these to develop our objective models. No, this was a world where the particles changed their behavior depending on whether we tried to measure them or not.

To compensate, the mathematics has become ever more complex. In particular, we can't simply calculate the path from point A to point B; we must consider all of the different ways one could possibly travel from A to B and weigh each path accordingly. How are we to know our particle didn't simply go on a stroll in the middle of our experiment? We only measured it at the beginning and the end; what happened in the meantime is anybody's guess.

So what are we to do? Is it all for naught? In our attempts to build a purely objective model of the universe, we have found that some element of subjectivity is required. Freed from our role as impartial, objective observers, we now must jump in and fully immerse ourselves as subjects in the web of reality, fully aware that our actions and observations are part of the system we are trying to understand. And personally, I find that to be a much richer and more beautiful world than a cold mechanistic one, where going from point A to point B is done in the strictest linear fashion possible, objects merely tracing out a predetermined path, no detours allowed.

*If the proton can decay at all, its half-life is at least **~10^33 years**, which is much, much longer than the current age of the universe. The **deuteron** (the bound state of a proton and a neutron) is itself stable, but an isolated neutron has a **half-life of about 15 minutes**.*

**University of Pisa**

**This is a two-part post. Stay tuned for part two.**

*Hi all, I’m Brian—a recent PhD graduate of the University of Michigan. I wanted to write a post that connects some classic concepts in physics—phase transitions—with an important branch of research—the conformal bootstrap. The stuff at the beginning of this post is pretty introductory, but watch out: the end gets pretty technical, and a little knowledge of Quantum Field Theory would help too.*

One of the exciting parts of theoretical physics is finding unexpected connections. Often we find that physical systems which look completely different at a superficial level are partially or entirely the same when you consider their dynamics in a more abstract way—for instance at the level of the organization of the states in the theory. __Erin wrote a post__ discussing an example of this, which is both very surprising and very important in contemporary research: holography. One of the most important things that we’ve learned about fundamental physics in the last few decades is that quantum gravity in Anti-de Sitter space has a *dual description* in terms of the quantum dynamics on the boundary. Other dualities in high energy physics abound, such as those between different types of string theory, between different quantum field theories, or between the same theories in different regimes.

In this post, we will focus on a particular type of these coincidences that arises between different systems near certain kinds of phase transitions. The transitions we most commonly encounter in everyday life are the freezing/melting transition between solids and liquids, and the evaporation/condensation transition between liquids and gases. However, many other physical systems undergo transitions, such as the formation of carbon into diamond due to high temperatures and pressures in the Earth’s mantle, or the onset of superconductivity when certain metals reach low enough temperatures. In general, phase transitions are defined as discontinuities in the behavior of a system as you change external thermodynamic quantities such as temperature or pressure. They may be broadly classified by where the discontinuity shows up. If a first derivative of the free energy (with respect to the thermodynamic variables, typically temperature and pressure) is discontinuous, the transition is called *discontinuous* or *first order*. This discontinuity appears as a latent heat: thermal energy that must be supplied for the phase transition but that does not change the temperature. Most of the transitions we encounter in day-to-day life are first order. For example, boiling water at atmospheric pressure has a latent heat of 40.65 kJ/mol. This means that even after you reach the boiling point, you still have to put in a lot of energy just to convert the water to gas: 40.65 kJ (or about 10 Calories) for every mole (about 18 grams) of water!

In this post, however, we will be interested in the other class of phase transitions, which are called *continuous* or *second order*. These are transitions where the first derivatives of the free energy are continuous across the transition (but higher derivatives generally are not), so there is no latent heat. In the case of continuous transitions, it is often useful to define an *order parameter* to describe the transition. These are quantities that are zero in one phase and non-zero in another. We’ll see a few examples below.

Another really fundamental feature of continuous phase transitions is that when physical systems are near them, parts of the systems which are far away from each other can still interact. These interactions are quantified by correlation functions, which describe how quantities in the system at different positions are related. These typically decay exponentially as the distance increases—that is, a variable like the spin at some point may be highly correlated with the spins at nearby points, but it will be almost completely uncorrelated with far away spins.

A simple example is a lattice of spins, where a spin sits on each corner and can be up or down. In this simple example we might quantify the correlations by
⟨σi σj⟩ ∼ e^(−rij/ξ)
Here we’ve used ∼ to indicate that we are considering only the scaling, and might be ignoring constant and subleading pre-factors. This equation tells us that the correlation between the spins at site i and site j decays exponentially as the distance rij between i and j increases. Since we can’t put a dimensionful quantity like distance in an exponent, we are forced to introduce another distance scale ξ, which is called the correlation length. This variable ξ can depend on the external parameters of the system, such as temperature and pressure. However, we find that as the system approaches a phase transition, the correlation length approaches infinity, implying that even greatly separated spins are now correlated.

Let’s illustrate this with an example. As we know, water boils at 100 degrees Celsius. Actually, that’s only true at atmospheric pressure. As you increase the pressure, the boiling point will increase—but not forever. Eventually, you reach a point, called a critical point, above which there is no sharp phase transition between liquid and gas. This is shown in the figure below, which is called a “phase diagram.” For water, this point occurs at 374 degrees Celsius and 218 atmospheres of pressure – far above the pressures we are used to in everyday life, which is only about one atmosphere! Below the critical point, the phase transition is discontinuous, which means that there is a latent heat. At the critical pressure, however, the latent heat disappears and the transition becomes continuous. The order parameter for this transition is the difference in density between the liquid and gas phase. We will denote this by ρ. Let’s consider the pressure to be fixed at its critical value. Then, if we were to measure ρ as we approach the critical temperature Tc, we would find a simple behavior for ρ:
ρ ∼ |T − Tc|^β
The absolute values mean that there will be a discontinuity in the first derivative of ρ. β is a constant called a critical exponent that is intrinsic to the system. For the water / gas transition, we have
β ≈ 0.326
Recall also that the correlation length *diverges* as we approach the critical temperature. This is also characterized by a critical exponent:
ξ ∼ |T − Tc|^(−ν)
where
ν ≈ 0.630
β and ν are only two of a number of critical exponents characterizing the system.
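To make the idea of a critical exponent concrete, here is a sketch of how β would be extracted in practice: fit the slope of log ρ against log |T − Tc|. The data below are synthetic (generated from the power law itself, with the value β ≈ 0.326 and a nominal Tc assumed for illustration), so the fit simply recovers the input; with real measurements the same fit would estimate β.

```python
import math

BETA_ASSUMED = 0.326  # assumed 3D liquid/gas exponent, used to generate fake data

def order_parameter(T, Tc):
    """Synthetic 'measurement' of rho ~ |T - Tc|^beta below Tc."""
    return abs(T - Tc) ** BETA_ASSUMED

Tc = 647.0  # roughly water's critical temperature in kelvin (illustrative)
temps = [Tc - 10.0 ** (-k) for k in range(1, 6)]  # approach Tc from below

xs = [math.log(abs(T - Tc)) for T in temps]
ys = [math.log(order_parameter(T, Tc)) for T in temps]

# Least-squares slope of log(rho) against log|T - Tc| recovers beta.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(f"fitted beta = {slope:.3f}")  # -> fitted beta = 0.326
```

The same log-log fit applied to ξ data near Tc would estimate ν instead.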

Let’s turn to another example: the lattice of spins we mentioned earlier, known as the Ising model. This model has a particle at each corner i of a square lattice with spin σi. We’ll allow the spin at each site to be up or down, so σ = ±1. The spins in this model only interact with their nearest neighbors—the interaction energy is positive or negative according to whether the spins are the same or different. The total energy of the system depends on the configuration of spins and is given by:
E = −J ∑⟨ij⟩ σi σj − h ∑i σi
Here ⟨ij⟩ means that we sum over all pairs of neighboring particles, while the sum over i runs over every site. The constant J sets the strength of the interactions within the lattice, and h models the effect of an external magnetic field. This is a very simplified model of a magnet—the spin of each particle comprising the magnet points up or down, and the resulting magnetic field comes from the net number of spins pointing up or down. This net field is called the magnetization, given by the sum of the spins averaged over the number of spins N:
m = (1/N) ∑i σi
The magnetization plays the role of the order parameter of this system, just as the density did for the liquid / gas transition. At high temperatures, the spins are oriented more or less randomly—each spin has an even chance of being up or down, so the magnetization is close to zero. But as the temperature drops, thermal fluctuations stop overriding the interaction energy, and nearby spins will be more and more correlated. At zero temperature, the system relaxes into its ground state. For J > 0, this is the state where all the spins point the same way—then the material is said to be ferromagnetic (J < 0 leads to a ground state with anti-aligned neighboring spins, a configuration called anti-ferromagnetic). With all the spins pointing the same way, the magnetization will approach one. There is a phase transition between the disordered, high-temperature phase and low-temperature, aligned phase. We can model this with a similar equation to the case of water:
m ∼ |T − Tc|^β
and we can also consider the correlation length
ξ ∼ |T − Tc|^(−ν)
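The picture described above—random spins at high temperature, aligned spins at low temperature—can be seen directly in a small Metropolis Monte Carlo simulation. The sketch below is purely illustrative: the lattice size, temperatures, sweep count, and seed are arbitrary choices, and J = 1, h = 0 are assumed.

```python
import math
import random

def ising_magnetization(L=16, T=1.0, sweeps=200, seed=0):
    """Mean |magnetization| per spin after Metropolis sweeps (J = 1, h = 0)."""
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]  # start fully aligned
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            # Sum of the four nearest-neighbor spins (periodic boundaries).
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2 * spins[i][j] * nb  # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] *= -1
    m = sum(sum(row) for row in spins) / (L * L)
    return abs(m)

print(ising_magnetization(T=1.0))  # low temperature: close to 1 (ordered)
print(ising_magnetization(T=5.0))  # high temperature: close to 0 (disordered)
```

Sweeping T between these two regimes, |m| drops sharply near the exact critical temperature of the infinite two-dimensional model, Tc = 2/ln(1 + √2) ≈ 2.27.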
This model can exist in any number of dimensions—in one dimension the “lattice” is evenly spaced points on a line, in two dimensions it’s squares on a plane, and so on. The critical exponents β and ν are different for each number of dimensions. In two dimensions, we find:
β = 1/8,  ν = 1
while in three dimensions, we find
β ≈ 0.326,  ν ≈ 0.630
These three-dimensional values were the same ones that we found for the liquid/gas transition! The remarkable coincidence that we’ve been hinting at before is that completely different physical systems can be described by the same critical exponents! This phenomenon is called *universality*. It tells us that, near a critical point, the behavior of a system depends on the dimension and the symmetries of the problem, but not the underlying dynamics.

The Ising model has a long history. The one-dimensional model was solved by Ising himself in his PhD thesis way back in 1924. The two-dimensional model with no external magnetic field was solved by Onsager in 1944. The two-dimensional model with a magnetic field was only solved exactly in 1989, by Zamolodchikov. In four or more dimensions, the exponents can be computed using an approach called mean field theory. The three-dimensional model, however, is notoriously difficult to study and remains unsolved, which makes it of considerable theoretical interest. Unlike in the other dimensions, its exponents are not believed to be rational numbers*, and so far they do not have a closed form expression.

In this section we will introduce a formalism, called Conformal Field Theory (CFT), which can be used to study the behavior of the Ising model (and many other systems) at the critical point. In some cases, this formalism will allow us to fully solve the theory. This is the case for the two-dimensional Ising model, where CFT allows us to compute the exact values of the critical exponents. In other cases, such as the three-dimensional Ising model, the theory cannot be fully solved, but CFT gives an efficient set of tools for putting rigorous bounds on the exponents. Those tools are called the conformal bootstrap, and they will be the subject of the final section. However, for the sake of simplicity we will focus on the two-dimensional Ising model in what follows.

Let’s first go over what a conformal field theory (CFT) is. This is a huge field, and we won’t be able to do it justice in this post. Let’s briefly try to give a little of the flavor.

A CFT is a quantum field theory (QFT) for which an enlarged group of spacetime symmetries—the “conformal group”—acts on the states. Typically, we study QFT in flat (rather than curved) spacetime, where the symmetries are translations, rotations, and Lorentz transformations. The conformal group, however, includes these symmetries and adds two more—dilatations and special conformal transformations. A typical introduction to CFTs usually involves determining the commutation relations between the generators of these symmetries and showing how they each act on the fields in the theory. Here, we’re going to skip to the important part: scaling transformations. These act on the coordinates as
x → λx
which means the fields transform as
φ(x) → λ^∆ φ(λx)
In a CFT, each field φ has a positive real number ∆ associated with it—this is called the scaling dimension of φ. It’s conceptually similar to the mass dimension in a normal quantum field theory**.

This extra symmetry may seem innocuous, but it affects the structure of the theory on a fundamental level. For one thing, scaling transformations mean that there is no notion of asymptotic states, because there is no real notion of particles getting “very far apart”. This means there is no way to define an S-matrix. Therefore, it is natural for the correlation functions to play the role of the primary observables in CFTs. We observe that two-point functions in a CFT transform in the following way under scaling:
⟨φ(λx) φ(λy)⟩ = λ^(−2∆) ⟨φ(x) φ(y)⟩
The rotational and translational symmetry imply that these functions can only depend on the difference between x and y. Only one function of |x − y| satisfies the scaling relationship we’ve written above, so the two-point function in CFT is fixed to be
⟨φ(x) φ(y)⟩ = C / |x − y|^(2∆)
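We can check numerically that a power-law two-point function of the form C/|x − y|^(2∆) is consistent with scale invariance: rescaling both points by a factor l rescales the correlator by l^(−2∆). The values of ∆ and C below are arbitrary illustrative choices, and l plays the role of the rescaling factor λ.

```python
# Scaling check: G(x, y) = C / |x - y|^(2*Delta) obeys
# G(l*x, l*y) = l^(-2*Delta) * G(x, y) for any rescaling factor l.
DELTA = 0.125  # illustrative choice of scaling dimension
C = 1.0        # illustrative normalization

def G(x, y):
    return C / abs(x - y) ** (2 * DELTA)

x, y, l = 0.3, 2.7, 5.0
lhs = G(l * x, l * y)
rhs = l ** (-2 * DELTA) * G(x, y)
print(abs(lhs - rhs) < 1e-12)  # True
```

No other single function of |x − y| has this property, which is why the scaling symmetry pins down the two-point function completely.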
So much for our lightning outline of CFTs. As we’ve seen, they are basically collections of fields, and the physical content is described by the correlation functions of the fields, which transform a certain way under scaling. The next question is: what do these models have to do with the lattice models we’ve outlined above? They are quite different after all—the CFT is a quantum field theory and the fields take continuous values. In the lattice models, we also have correlation functions, but they are at discrete points in space, and the spin takes discrete values. But the remarkable fact is there exists a CFT whose correlation functions are the same as the correlation functions for the Ising model, in the limit of zero lattice spacing. The basic idea can be summarized succinctly as:
⟨σ(x) σ(y)⟩ = lim(a→0) a^(−2∆) ⟨σi σj⟩,  with i = x/a and j = y/a
Here a is the spacing between sites, so the CFT operator at location x corresponds to a lattice spin at the site corresponding to the integer part of x/a. It is clear from this equation that the scaling transformation of the CFT correlation function holds for the right-hand side of the equation as long as a is scaled along with x and y. To get a feeling for the fields in this CFT, let’s consider the critical exponent η of the lattice model, defined by the decay of the spin correlations at the critical point:

⟨σi σj⟩ ∼ 1 / rij^η  (at T = Tc)

where rij is the distance between the sites. We know from Onsager’s solution that

η = 1/4

in the two-dimensional Ising model. Therefore, a continuum CFT describing the model must have a field σ(x) with ∆ = 1/8, so that ⟨σ(x) σ(y)⟩ ∼ 1/|x − y|^(2∆) = 1/|x − y|^(1/4). Such a CFT does exist: it turns out to have three operators
I, σ, and ε,  with scaling dimensions ∆ = 0, 1/8, and 1, respectively.
In this case, I is the identity operator, which doesn’t do anything to the states it acts on. σ represents the local spin, and ε is the local energy density.

It is important to mention that this is not a proof that the models are the same. You might call it a “physicist’s proof”—if you find that enough quantities in two different-looking models are the same, you can convince yourself they are the same model. Nonetheless, proving they are the same is more difficult***.

Thank you to Andrew Hanlon for several rounds of thorough editing, and thank you to the Theory Girls for the opportunity to write this post! Please stay tuned for part two.

**Citations and Acknowledgements**

The first section of this post was largely inspired by Henriette Elvang’s course on CFTs.

Paul Ginsparg’s introduction to 2D CFTs is at (hep-th/9108028).

The modular bootstrap was introduced in (1608.06241).

The modern conformal bootstrap was introduced in (0807.0004). For a nice introduction, see David Simmons-Duffin’s TASI lectures, (1602.07982).

The bootstrap results for the three-dimensional Ising model and the liquid helium discrepancy were reported in (1603.04436).

*For a list of classes and their exponents, see __wikipedia’s page on universality classes__.

**Like the mass dimension, it is essentially determined by the dimension of the theory and spin of the field in a free theory, but can be changed by renormalization effects which may be drastic in strongly coupled theories.

***One approach involves a series of transformations between the two theories. First one must prove an equivalence between the two-dimensional (classical) Ising model we’ve described, and a 1D quantum model. Then the operators of the 1D model are related to fermionic operators via a Jordan-Wigner transformation. Finally, a theory of free Majorana fermions is obtained in the limit where the lattice spacing goes to zero, and this model has the three operators described above.

Image Credits:

(1) From the Project Runeberg book *The Key to Science* (in Swedish), Public Domain, __https://commons.wikimedia.org/w/index.php?curid=472708__

(2) By Matthieumarechal, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=4623701

**Case Western Reserve University**

In this post we’ll explore some of the motivation and background for __a recent preprint__ about a new method for finding bounds on Einstein manifolds which is inspired by scattering amplitude calculations and the conformal bootstrap.

To begin, let’s review some facts about Riemannian geometry. Suppose that someone hands you a compact Riemannian manifold, such as the double torus depicted in the figure below. Given such a manifold, there are all sorts of interesting geometric quantities that you could compute: its dimension, its volume, the lengths of its closed geodesics, the eigenvalues of the scalar Laplacian*****, and so on. As a simple example, take the two-dimensional sphere, S^2, with unit radius and the usual round metric. This space has volume 4π, its closed geodesics all have length 2π, and the eigenvalues of its scalar Laplacian are given by L(L + 1), where L = 0, 1, 2, . . . and the Lth value is repeated 2L + 1 times. Some other quantities that you could compute for a compact manifold are integrals over the manifold of products of three eigenfunctions,
gijk = ∫ ψi ψj ψk dV,
which we’ll call triple overlap integrals. (It turns out that integrals of products of more than three eigenfunctions can all be written in terms of these.) For example, on S^2 the eigenfunctions are the spherical harmonics and their triple overlap integrals can be written in terms of Clebsch–Gordan coefficients. We collectively call these various quantities “geometric data” of the manifold.

Now suppose that instead of giving you an actual manifold, someone gives you the next best thing, namely a sequence of numbers that they claim corresponds to the geometric data of a compact manifold. For example, they may give you a sequence of putative eigenvalues of the scalar Laplacian given by L^3 for L = 0, 1, . . . , where the Lth value is repeated L times. How can you tell whether or not these really could be the eigenvalues of the scalar Laplacian on some manifold? Well, there are various checks you can do. One simple condition that these eigenvalues should satisfy is that the smallest eigenvalue is zero, corresponding to the constant function. As you can easily check, our example satisfies this condition. A more subtle requirement is that the eigenvalues must obey Weyl’s law, which dictates how the number of eigenvalues grows as they tend to infinity. More precisely, if n(λ) is the total number of eigenvalues less than λ, Weyl’s law says that the following limit must be a constant:
n(λ) / λ^(N/2)  (as λ → ∞)
where N is the dimension of the manifold. Applying this formula to our example eigenvalues, we find that it holds only if N = 4/3. Since the dimension of a manifold has to be an integer, we can conclude that our original list of candidate eigenvalues could not have come from a manifold. This illustrates that not every list of numbers can correspond to the geometric data of a manifold. If we modify our example so that the Lth value is now repeated L^8 times, it becomes consistent with Weyl’s law with N = 6. Could this sequence correspond to the scalar eigenvalues of some six-dimensional manifold? We’ll come back to this later.
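This Weyl's-law test is easy to run numerically. The sketch below (plain Python, with arbitrary cutoffs, ignoring the zero eigenvalue, which does not affect the asymptotics) estimates N from the slope of log n(λ) against log λ for the two candidate spectra discussed in the text.

```python
import math

def dimension_estimate(multiplicity, M1=100, M2=1000):
    """Estimate the dimension N from Weyl's law, n(lambda) ~ const * lambda^(N/2),
    for candidate eigenvalues L^3 (L = 1, 2, ...) with the given multiplicity."""
    def count(M):  # n(lambda) at the cutoff lambda = M^3
        return sum(multiplicity(L) for L in range(1, M + 1))
    slope = ((math.log(count(M2)) - math.log(count(M1)))
             / (math.log(M2 ** 3) - math.log(M1 ** 3)))
    return 2 * slope  # Weyl's law: slope of log n(lambda) vs log lambda is N/2

print(dimension_estimate(lambda L: L))       # about 4/3 -- not an integer
print(dimension_estimate(lambda L: L ** 8))  # about 6 -- passes the Weyl test
```

The first spectrum fails the test (N must be an integer), while the second is consistent with a six-dimensional manifold, exactly as claimed above.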

It turns out that there are some additional consistency conditions that the Laplacian eigenvalues and triple overlap integrals of a manifold have to satisfy. To give a sense of why these consistency conditions exist, it’s useful to discuss a physical interpretation of the geometric data of a manifold. Consider a hypothetical scenario in which there exist additional spatial dimensions beyond the three large spatial dimensions that we observe. This is required, for example, in string theories, since they only really make sense if the total number of spacetime dimensions is ten or eleven. One way for extra dimensions to have escaped our attention so far is if they are compact and small enough that we could not have resolved them with any terrestrial experiment, a scenario that goes by the name of Kaluza–Klein theory. If gravity is dynamical then the space describing the extra dimensions should be a solution of the gravitational equations of motion; in the simplest cases, this means that the extra dimensions are described by an Einstein manifold, a solution to the vacuum Einstein equations, and from now on we’ll only consider such manifolds. In the case of ten-dimensional string theories, a common choice of manifold to describe the six extra spatial dimensions is a particular type of Einstein manifold called a Calabi–Yau manifold.

In a Kaluza–Klein theory, every light particle in the lower-dimensional world is accompanied by an infinite tower of massive copies of itself. The particles in this tower correspond to versions of the original particle that have excited vibrational modes in the extra dimensions. These vibrational modes are described by the eigenmodes of various Laplacian operators in the extra dimensions and the corresponding eigenvalues determine the masses of the particles. For example, the graviton—the massless spin-2 particle that mediates the gravitational force—is associated with an infinite tower of massive spin-2 particles whose masses squared are given by the eigenvalues of the scalar Laplacian on the manifold describing the extra dimensions. Additionally, the strength of the interaction between any three of these graviton excitations is determined by the triple overlap integral of the corresponding eigenfunctions of the scalar Laplacian. The geometric data of a manifold thus corresponds physically to properties of particles in a Kaluza–Klein theory.

Now suppose you want to perform an experiment where you collide together two massive excitations of the graviton and two other excitations come out. The probabilities for the different possible outcomes of this experiment are encoded in a scattering amplitude. This scattering amplitude depends, among other things, on the energy of the collision, E. When you ordinarily look at scattering amplitudes involving massive spin-2 particles, for large values of the energy they grow quite fast as a function of energy, like E^10. However, since these massive spin-2 particles are secretly just components of a higher-dimensional graviton, their amplitudes should inherit the relatively soft high-energy behavior of graviton amplitudes, which grow instead like E^2. From the perspective of the lower-dimensional amplitude, achieving this softening requires miraculous-looking cancellations between the different Feynman diagrams that contribute to the amplitude. For example, the E^8 part of the amplitude only vanishes because the contribution from the exchange of the infinite number of graviton excitations precisely cancels the contribution from exchanging the massless graviton. These cancellations occur because of subtle relations between the masses and interaction strengths of the different particles that contribute to the scattering amplitude. Since the masses and interaction strengths are in turn determined by the geometric data of the manifold describing the extra dimensions, this geometric data must satisfy certain nontrivial relations to ensure the amplitudes have the expected high-energy behavior. These relations are the additional consistency conditions alluded to above.

We have just argued that the geometric data of a manifold must satisfy consistency conditions to ensure that the scattering amplitudes of the massive excitations of the graviton in Kaluza–Klein theories have the high-energy behavior that we would expect based on their extra-dimensional origin. An example of one of these consistency conditions is the following:

where i labels the non-constant eigenfunctions ψi of the scalar Laplacian with eigenvalues λi, ψ1 is a fixed eigenfunction, g11i is the integral of ψ1^2 ψi, and V is the volume of the manifold. This is the consistency condition responsible for the vanishing of the E^8 part of the amplitude mentioned above. The problem is that the consistency conditions involve potentially infinite sums over all of the particles contributing to the amplitude, so it’s not obvious a priori how to extract useful information from them. Fortunately, there is an analogous and well-studied problem that arises in a different context in physics, that of conformal field theories. It has been known since the 1970s that conformal correlators must satisfy certain nontrivial consistency conditions, but only over the last decade or so have effective computational tools been developed to extract useful information from these consistency conditions in more than two dimensions. This approach is called the conformal bootstrap and most modern numerical implementations use a form of optimization called semidefinite programming. We can take some of the techniques that have been developed for the conformal bootstrap and directly apply them to the consistency conditions satisfied by the geometric data of a manifold to extract some useful geometric information. This useful information usually takes the form of bounds on the geometric data, so we call them geometric bootstrap bounds.

Now that we have our consistency conditions and we know how to extract useful information from them, we can go ahead and search for geometric bootstrap bounds. An example of the type of bound that we get from this approach is the following: the ratio of any two consecutive eigenvalues of the scalar Laplacian can be at most four on Einstein manifolds with a non-negative Ricci scalar******. Physically, this means that the mass of an excitation of the graviton can be at most twice the mass of the next-lightest such excitation. Going back to the second example of candidate scalar Laplacian eigenvalues that we considered earlier, we see that its first and second distinct nonzero eigenvalues differ by more than a factor of four. We can therefore conclude that this sequence could not correspond to the spectrum of graviton excitations in a Kaluza–Klein theory with six extra dimensions satisfying our assumptions, such as what you would get from string theory with an internal Calabi–Yau manifold. Another result for Einstein manifolds with a non-negative Ricci scalar is that there is an upper bound on the size of the triple overlap integral of the lightest eigenfunction of the scalar Laplacian times the square root of the volume of the manifold, |g111| sqrt(V). For example, on manifolds with six dimensions or fewer this quantity cannot exceed 10/3. This means that the strength of the self-interaction of the lightest massive excitation of the graviton cannot be more than 10/3 times the strength of gravity when there are six or fewer extra dimensions. The fact that we can get such strong bounds from consistency conditions is quite surprising. As a final example, we show in the figure below an upper bound on the triple overlap integral for the lightest two scalar eigenfunctions in terms of the ratio of their eigenvalues and for different values of N, the dimension of the manifold, assuming the manifold satisfies the same conditions as in the previous two examples.
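The eigenvalue-ratio bound is easy to test against the examples in this post: the round two-sphere, whose scalar Laplacian eigenvalues L(L + 1) were given earlier, and the candidate spectrum L^3 (multiplicities do not matter for ratios of distinct eigenvalues). This is only a check of the examples, not of the bound itself, which applies to Einstein manifolds with non-negative Ricci scalar as stated above.

```python
def consecutive_ratios(eigenvalues):
    """Ratios of consecutive distinct nonzero eigenvalues, in ascending order."""
    distinct = sorted(set(e for e in eigenvalues if e > 0))
    return [b / a for a, b in zip(distinct, distinct[1:])]

# S^2: scalar Laplacian eigenvalues L(L + 1); every ratio is at most 4.
sphere = [L * (L + 1) for L in range(1, 10)]
print(max(consecutive_ratios(sphere)))  # 3.0 (from the step 2 -> 6)

# The candidate spectrum L^3 from earlier: the ratio 8/1 = 8 violates the bound.
candidate = [L ** 3 for L in range(1, 10)]
print(max(consecutive_ratios(candidate)))  # 8.0
```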

It is an old dream to constrain and solve physical theories using consistency conditions. This idea has experienced a recent resurgence thanks to advances in our understanding and computing power, giving new insights in several areas of theoretical physics. Here we have seen how related ideas coming from the conformal bootstrap and amplitudes can also teach us some new things about geometry and the physics of extra dimensions. For more details and for references to the literature, see the preprint available __here__.

**Olivia Beckwith, PhD**

**University of Illinois at Urbana-Champaign**

Are the answers to the hardest questions in math and physics within our grasp? Is it possible that some of them have been staring us in the face since we learned to count? Could they come down to math as simple as 1+2=3? Imagine that in one hand you have the nature of matter and gravity, and in the other hand, arithmetic. Both are fundamental aspects of our reality, but feel like completely disparate subjects. Yet they may not be as independent of each other as they seem. Integer partitions give us a glimpse of the interconnectedness of arithmetic, modular forms, algebraic topology, and theoretical physics.

It all starts with an elementary question. Let n be a positive integer. How many ways can you write n as a sum of nonincreasing positive integers? We let p(n) denote the answer.

For example, you can write 4 as 4, 3 + 1, 2 + 2, 2 + 1 + 1, and 1 + 1 + 1 + 1, so p(4) = 5. A pretty simple idea, right?

Number theorists call p(n) the partition function (not to be confused with the partition functions in statistical mechanics or quantum field theory). The various sums are the **partitions** of n.
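The enumeration we just did by hand can be automated with a short recursive generator. This is an illustrative sketch: capping each part by the previous one enforces the nonincreasing convention.

```python
def partitions(n, max_part=None):
    """Yield all partitions of n as nonincreasing tuples of positive integers."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):  # choose the largest part first
        for rest in partitions(n - k, k):     # remaining parts are at most k
            yield (k,) + rest

print(list(partitions(4)))
# [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]
print(len(list(partitions(4))))  # p(4) = 5
```

Direct enumeration like this quickly becomes impractical, since p(n) grows very fast; we'll see a much better way to compute it below.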

I want to say just a few words about why someone would care to study integer partitions in the first place. From a number theorist's perspective, p(n) is naturally intriguing because it speaks to the additive structure of the integers. Furthermore, the sequence p(1), p(2), p(3), ... is full of the sorts of patterns that number theorists love. For example, in the table below, observe that if the units digit of n is either 4 or 9, then p(n) is divisible by 5. This regularity in divisibility holds for all n and doesn't happen by accident; it is just one example of a large class of patterns for p(n) and other coefficients of modular forms. But I don't want to get carried away - this is perhaps a topic for another post.

You might wonder how you could go about computing p(n), especially for large n. I sometimes like to try to compute partition numbers in my head by counting partitions while I'm out running, to pass the time. However, I can never get very far. Past n=8, it becomes difficult for me to keep track of all the partitions. The following formula helps me get a little bit farther:
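The recurrence in question - equation (1), written here in standard notation - reads:

```latex
p(n) = p(n-1) + p(n-2) - p(n-5) - p(n-7) + p(n-12) + p(n-15) - \cdots \tag{1}
```

where the numbers 1, 2, 5, 7, 12, 15, ... being subtracted from n are the generalized pentagonal numbers k(3k∓1)/2, and the signs alternate in pairs.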

Where does this come from, and what exactly comes next on the right hand side? A direct counting argument might feel like the right approach, but good luck! It is easier to work with the **generating function** for p(n):
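By definition, the generating function packages all the partition numbers into a single power series:

```latex
\sum_{n=0}^{\infty} p(n)\, q^n = 1 + q + 2q^2 + 3q^3 + 5q^4 + 7q^5 + \cdots
```

with the convention p(0) = 1.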

This is a common strategy in combinatorics: if you're interested in a sequence, look at its generating function and try to manipulate it algebraically or study its analytic behavior. If you can show that the generating function can be written in a nice way, see what that new formula tells you about your sequence.

In the case of p(n), it turns out that the generating function can be rewritten as a product:
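Namely - this is equation (2):

```latex
\sum_{n=0}^{\infty} p(n)\, q^n = \prod_{n=1}^{\infty} \frac{1}{1 - q^n} \tag{2}
```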

This formula isn't terribly hard to show - you can express each factor on the right as a geometric series and think about what happens when you distribute each factor. It is extremely important, though, because it relates p(n) to the theory of **q-series**, which are loosely defined as functions involving products such as the right hand side of (2).

Here's an identity that's a little harder to prove:
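This is Euler's pentagonal number theorem - equation (3):

```latex
\prod_{n=1}^{\infty} (1 - q^n) = \sum_{k=-\infty}^{\infty} (-1)^k q^{k(3k-1)/2} = 1 - q - q^2 + q^5 + q^7 - q^{12} - q^{15} + \cdots \tag{3}
```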

We won't go into the proof, which involves some very clever reasoning. Multiplying both sides of (3) by the generating function for p(n), you get:
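In symbols, since the product in (3) and the product in (2) cancel each other:

```latex
1 = \Bigl( \sum_{k=-\infty}^{\infty} (-1)^k q^{k(3k-1)/2} \Bigr) \Bigl( \sum_{n=0}^{\infty} p(n)\, q^n \Bigr)
```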

You can rewrite the product as an infinite power series in q, but because the left hand side is 1, all of the coefficients except the constant term have to equal 0. This is how you get the recursive formula (1).
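The recurrence gives a fast way to compute p(n) in practice. Here is a minimal Python sketch (the function name is my own, just for illustration):

```python
def partition_numbers(N):
    """Compute p(0), ..., p(N) via Euler's pentagonal-number recurrence."""
    p = [0] * (N + 1)
    p[0] = 1
    for n in range(1, N + 1):
        total, k = 0, 1
        while True:
            # generalized pentagonal numbers k(3k-1)/2 and k(3k+1)/2
            g1 = k * (3 * k - 1) // 2
            g2 = k * (3 * k + 1) // 2
            if g1 > n and g2 > n:
                break
            sign = -1 if k % 2 == 0 else 1  # signs alternate in pairs
            if g1 <= n:
                total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total
    return p

p = partition_numbers(30)
print(p[4])                        # 5
print(p[9], p[14], p[19], p[24])   # 30 135 490 1575 -- all divisible by 5
```

Each p(n) is obtained from earlier values in O(√n) steps, which is how one gets much farther than mental counting allows.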

So we see that an identity between two functions, (3), held the key to understanding the recursive formula (1) for p(n). Often in the study of partitions, intricate combinatorial structure is elegantly conveyed by means of q-series. Next we'll see a few more increasingly complex examples of this.

We defined p(n) to be the number of ways of writing n as a sum of positive integers in nonincreasing order. There is a lot of room to modify this definition to get other interesting functions. One typical modification is to require that all of the summands belong to some set S. If you took S to be the set of odd numbers, you'd be looking at the number of ways of writing n as a sum of odd numbers. In the case of n=4, you'd have 1+1+1+1 and 3+1, so the answer would be 2.

It is also common to put restrictions on *multiplicities* of the summands. For example, you could require that each of the summands be distinct. In the case of n=3, you'd have 3 and 2+1, so the answer would be 2 (you wouldn't count 1+1+1 because the 1 appears more than once).

Sometimes these functions actually coincide - even if their definitions look completely different! These relations are sometimes easiest to prove using algebraic manipulations. For example, the identity

1/((1-q)(1-q^3)(1-q^5)···) = (1+q)(1+q^2)(1+q^3)···

shows that the number of partitions of a number into odd parts is the same as the number of partitions into distinct parts.

The Rogers-Ramanujan identities are a fascinating example:
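In q-series notation the two identities - (4) and (5) - read:

```latex
\sum_{n=0}^{\infty} \frac{q^{n^2}}{(1-q)(1-q^2)\cdots(1-q^n)}
  = \prod_{n=0}^{\infty} \frac{1}{(1-q^{5n+1})(1-q^{5n+4})} \tag{4}

\sum_{n=0}^{\infty} \frac{q^{n^2+n}}{(1-q)(1-q^2)\cdots(1-q^n)}
  = \prod_{n=0}^{\infty} \frac{1}{(1-q^{5n+2})(1-q^{5n+3})} \tag{5}
```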

Like (3), these identities relate infinite summations to infinite products, but they also have a combinatorial interpretation that tells us something about integer partitions. In (4), the left hand side is the generating function for partitions whose adjacent parts differ by at least 2, and the right hand side is the generating function for the number of partitions into parts of the form 5n+1 and 5n+4. Since the generating functions are equal, these quantities must be the same.

Let's check this for n=6. On the left hand side, the coefficient of q^6 is the number of partitions of 6 for which all of the summands differ by at least 2. If we check each of the 11 partitions of 6, we find that 6, 5+1 and 4+2 are the ones satisfying this property, so the coefficient for q^6 is 3.

To obtain the coefficient of q^6 on the right, you count partitions of 6 for which all of the summands are 1, 4, or 6. Those partitions are 1+1+1+1+1+1, 4+1+1, and 6, so again you have 3 partitions. Notice that we just counted two different sets of partitions, but the total number of partitions in each set was the same.
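This count can be double-checked by brute force. The short Python sketch below (the `partitions` helper is ad hoc, written just for this check) enumerates all partitions of 6 and counts the two sets:

```python
def partitions(n, max_part=None):
    """Yield all partitions of n as nonincreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

parts6 = list(partitions(6))

# Left side of (4): adjacent parts differ by at least 2
gap2 = [p for p in parts6
        if all(a - b >= 2 for a, b in zip(p, p[1:]))]

# Right side of (4): every part is congruent to 1 or 4 mod 5
mod5 = [p for p in parts6 if all(x % 5 in (1, 4) for x in p)]

print(len(parts6), len(gap2), len(mod5))  # 11 3 3
```

Both counts come out to 3, exactly as the identity predicts.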

There is a similar interpretation of (5). And this is just the beginning. Other beautiful identities similar to (4-5) have been discovered by Andrews, Gordon, Bressoud, Slater, Schur, and much more recently Kanade and Russell. Finding new identities remains an area of interest. In addition to being interesting in their own right, one may hope that such identities might fuel the development of algebraic structures useful in theoretical physics, as we will discuss next.

Equation (3) is a special case of an equation known as the Jacobi Triple Product identity:
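In one standard form - with variables chosen to match the substitution in the next paragraph - the Jacobi Triple Product identity reads:

```latex
\prod_{m=1}^{\infty} (1 - x^{2m})\,(1 + x^{2m-1} y^2)\,(1 + x^{2m-1} y^{-2})
  = \sum_{n=-\infty}^{\infty} x^{n^2} y^{2n} \tag{6}
```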

If you set x = q^(3/2) and y = iq^(1/4), this simplifies to (3). Formula (6) can be interpreted using **Lie algebras**.

Lie algebras are mathematical objects defined as vector spaces endowed with a pairing called a Lie bracket. They famously describe symmetries for systems in quantum physics. One attempts to understand all of the *representations* of a Lie algebra - that is, all the ways the Lie algebra can be written in terms of linear maps on vector spaces. The Weyl Character Formula relates the representations to intrinsic properties of the Lie algebra by writing the characters of representations as ratios where the numerator is a sum and the denominator is a product. The character of the trivial representation is 1, so in that situation the Weyl Character Formula says that a certain product is equal to a sum. In the case of the affine Kac-Moody algebra with root system type A_1, you get (6).

So (3) represents an overlap between the representation theory of Lie algebras and integer partitions. This overlap is not a fluke, but a feature of the connectedness of the two areas. The Macdonald identities are a larger class of identities which include (6) and come from characters of affine Lie algebras.

Affine Lie algebras are famous for their role in the construction of conformal blocks in two-dimensional conformal field theory. The conformal blocks are used to construct correlators, the functions governing predictions about the behavior of particles in the spacetime.

So the algebraic structures giving rise to identities like (3) are used to study conformal field theories. One reason this is fascinating is that conformal field theories play a vital role in cutting-edge work in theoretical physics on quantum gravity. Famously, a major goal in theoretical physics is finding a physical theory that is compatible with both quantum physics and general relativity. One approach involves the study of **Anti-de Sitter space** - a spacetime that is a solution to Einstein's field equations which is different from our universe but admits a quantum gravitational theory. There is a duality - the AdS/CFT correspondence - which says that every AdS corresponds to a CFT on a spacetime with one less dimension. This dimension reduction is the "holographic principle" of string theory, which is discussed in other articles on this site.

An interesting feature of (4-5) is the appearance of a quadratic form in the exponent on the summation side. The product side, on the other hand, is a **modular form**, a function with symmetry under Möbius transformations. You might wonder if it is possible to characterize all of the identities of this type. Nahm's Conjecture connects this question with algebraic K-theory and conformal field theory.

The functions in Nahm's conjecture are most easily described using q-Pochhammer symbols. We let (q)_n be the denominator on the left hand side of (4), that is:
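Explicitly:

```latex
(q)_n = (1-q)(1-q^2)\cdots(1-q^n), \qquad (q)_0 = 1
```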

Nahm's Conjecture involves functions defined by sums over vectors m = (m_1, ..., m_r), where the m_i range over nonnegative integers. Suppose that the q-series

f(q) = Σ_m q^{Q(m)} / ( (q)_{m_1} ··· (q)_{m_r} )     (7)

is a modular form, where Q(m) is a quadratic polynomial in the m_i.

When r = 1, it is known (due to work of Zagier) that the series (7) is modular for just seven quadratic forms Q(m) = (A/2) m^2 + Bm + C:

Number theorists would love to see a nice explicit description when r > 1. Obtaining one might require a better understanding of how such q-series come about from elements of groups in K-theory.

Nahm studied the restrictions enforced on the coefficients of Q(m) by the modular symmetry and obtained a system of equations. He demonstrated that the solutions of these systems of equations produce torsion elements in the Bloch group B(C), a group connected to algebraic K-theory and used in the study of three-dimensional hyperbolic manifolds.

On the other hand, every conformal field theory has characters that are modular invariant - a symmetry flowing from the conformal symmetry - and that have q-expansions of a particular shape. Nahm also studied a map from torsion elements in the Bloch group to central charges of conformal field theories. So we have a three-way correspondence, in which conformal field theories produce Rogers-Ramanujan-like identities, and Rogers-Ramanujan-like identities predict conformal field theories with properties encoded by modular q-expansions.

So we started with a simple addition question, and quickly found ourselves observing ramifications of the interactions of quantum field theory, modular forms, and algebraic K-theory. That elementary combinatorial identities like (3), (4), and (5) can have anything to do with high energy physics amazes me.

Some look for the answers to the grand mysteries of the universe by delving deep into the physical laws, the nature of gravity, and the behavior of small particles that are all around us. Some of us seek beauty and meaning in the most elementary of concepts - the numbers we count with. Perhaps we're really looking at two sides of the same coin.

**Acknowledgements**: This article was inspired by many enlightening conversations over the years with Dan Fretwell, Laura Johnson, and Robert Schneider.

- *The Theory of Partitions* by George Andrews.
- *Conformal Field Theory and Torsion Elements in the Bloch Group* by Werner Nahm.
- *Notes on Lie Algebras* by Hans Samelson.

**Christian Jepsen, PhD**

**Princeton University**

This answer to the timeless question posed above is one that an increasing number of physicists have begun to take seriously over the last couple of decades as the so-called many-worlds interpretation of quantum mechanics has been slowly supplanting the Copenhagen interpretation as the most commonly accepted way of understanding what transpires when a measurement is performed on a quantum system.

Almost anyone who knows anything about quantum mechanics will agree that it’s strange. When subjected to scrutiny, elementary particles behave in ways that defy conventional intuition. Tunneling, superposition, wave-particle duality, entanglement, uncertainty relations — these are concepts that number among the ranks of novel ideas that physicists were forced to embrace during the last century when the framework of classical physics crumbled. The debate over quantum mechanical interpretations revolves around what these phenomena exhibited by elementary particles entail for the macroscopic world of our everyday experience: Do the laws of quantum mechanics apply for systems of any size? Are even macroscopic objects subject to these funky phenomena? The possible answers to this question lead to drastically different conclusions. In particular, the many-worlds interpretation postulates the existence of a virtually infinite number of parallel realities, a multiverse, which adherents of the Copenhagen interpretation need not subscribe to. From another perspective however, the discussion of how to interpret quantum mechanics is a purely hypothetical matter, and many physicists choose to remain agnostic on the question of interpretation, invoking instead the precept “Shut up and calculate”. In the everyday life of a scientist it does not matter which interpretation one chooses to adopt. For all practical purposes, quantum mechanics offers a precise and unequivocal prescription for computing the outcome of any experiment that can feasibly be realized. And yet, the potential existence of a multiverse may for better or worse help shape the future course of fundamental research in physics, as theorists are invoking the multiverse in addressing some of the biggest outstanding questions in science today and attempting to account for the structure of the laws of nature and the distribution of energy and matter throughout space and time. 
In some ways, the kind of arguments involved in this line of reasoning runs counter to conventional scientific thinking, but it also represents a revival of ancient principles of natural philosophy.

The predictions of quantum mechanics are probabilistic in nature. If you measure the position, momentum, or spin of a particle, the outcome is random. But the probability distribution that describes this randomness can be computed precisely. To find the probability of detecting a particle at a given location one must carefully take into account all possible paths the particle could have taken to get there. Where quantum mechanics gets weird is that these possible paths interfere with one another. In a sense, the particle traverses all possible paths as long as you don’t mess with it. But as soon as you perform a measurement, the particle is in one place only.

An experiment that clearly demonstrates this behaviour is the famed double-slit experiment. Shoot particles, say electrons or photons, one at a time towards a partition that contains two separate slits through which the particles can pass. Let us refer to these two slits as slit 1 and slit 2. If you put a detector right after each of the two slits, you find that each particle registers in one of the two detectors, never both. This is in accordance with the usual notion of particles being little lumps of matter or energy localized in space. But now remove the detectors, put a screen at some distance after the two slits, and measure where the particles strike the screen. You’ll find that the particles preferentially hit certain parts of the screen. In fact, the preferred regions form a ripple-like pattern in accordance with an interference pattern generated by waves emitted from the two slits. And this pattern persists even if each particle is shot through the slits hours or even days after the previous particle: the interference is not between different particles, but rather each particle interferes with itself. But place detectors of any kind to determine which slit each particle passes through, and the interference is destroyed so that the particles will preferentially hit the region of the screen directly ahead of the slits — no ripples in the probability density. Remove the detectors, and the ripples reappear. If you don’t look to see which slit a particle passes through, it passes through both. The formal way of saying this is that the system is in a superposition of states; or, in mathematical notation,

( |particle goes through slit 1⟩ + |particle goes through slit 2⟩ ) / √2

When you place detectors by the slits, you destroy the superposition and force the system into a definite state, either |particle goes through slit 1⟩ or |particle goes through slit 2⟩. This process of performing a measurement, and thereby singling out a definite outcome from a superposition, is known in the parlance of the Copenhagen interpretation as collapsing the wavefunction of the system.

In this example we considered a superposition of single-particle states. But multiple particles can also enter into a superposition. One could perform an experiment where pairs of particles that are bound together by some attractive force are shot at a double-slit. Let us refer to the two particles in a pair as particle A and particle B. Being tethered together, the particles in a pair follow each other through either slit 1 or slit 2, leaving us still with two possible options and resulting still in a superposition of two states, but states each of which now involves two particles:

( |A and B go through slit 1⟩ + |A and B go through slit 2⟩ ) / √2

You don’t know which slit particle A went through, but you know it went through the same slit as particle B. Two particles whose outcomes are connected in this fashion are described as being entangled. While in this example the pair of particles are tethered together in space, it is possible for particles far apart from each other to be entangled together. One example would be a pair of particles with spin. These can interact such that one particle has spin up and one particle has spin down, but which is which is undetermined. The system will then be in a superposition of 1) the state where particle A is spin up and particle B is spin down and 2) the state where A is down and B is up. After interacting however, the particles can move arbitrarily far away from each other and still remain entangled. But as soon as you measure the spin of one particle, you collapse the wavefunction, and the other distant particle will also assume a definite spin. It was this kind of phenomenon that Einstein famously referred to as “spooky action at a distance”.

Superpositions involving more than two particles are also possible, though experimentally it becomes exceedingly difficult to observe interference between superposed states when large numbers of particles are involved, since the particles must all be kept from interacting with their surroundings lest the interference pattern be destroyed. A milestone was reached at the end of the nineties when scientists succeeded in observing ripples in the probability density of buckyballs, molecules each consisting of sixty carbon atoms. But, experimental hurdles aside, how large is it possible for a collection of particles to become and still take part in a superposition? Can macroscopic objects, people, cats, dogs, cars enter into a superposition? Well, now we’re touching the heart of the matter: how to interpret quantum mechanics. In the Copenhagen interpretation, there exists some threshold (an energy scale, a mass limit, or something of the sort) at which the wavefunction collapses and superpositions are no longer possible. Not so, says the many-worlds interpretation. There is no limit to superpositions, and there is no such thing as the collapse of a wavefunction. But then what happens in the examples we considered above when a measurement is performed on a particle and a definite outcome is observed? The answer: *The observer becomes entangled with the particle.* If detectors are used to determine which slit a particle traverses in a double-slit experiment, the state of the system does not collapse down to |p. through s. 1⟩ or |p. through s. 2⟩. Instead, using the abbreviation o. for observer, we have a new superposition:

( |p. through s. 1⟩|o. sees p. through s. 1⟩ + |p. through s. 2⟩|o. sees p. through s. 2⟩ ) / √2

The observer becomes part of the superposition! We see now how the assumption that the laws of quantum mechanics apply universally naturally leads to the multiverse. For every fork in the road ever encountered by a particle, split realities arise. Schrödinger’s cat is dead and alive.

The Standard Model of particle physics describes quantum mechanics, the nuclear forces, and electromagnetism to an incredibly high precision. And general relativity accounts beautifully for the motion of objects as disparate as individual planets and clusters of galaxies. Our understanding of both these theories has been largely driven by a careful scrutiny of the symmetries of nature. Throughout the history of science, the concept of symmetry has been of paramount importance. Even in the cosmogonies of the ancient philosophers endeavouring to provide rational explanations for the motion of the stars and the cycles of the seasons, symmetry played a crucial role. The universe was a place in balance. The elements of fire, earth, water, and air, always vying for dominance in the world, were held in a perpetual stalemate through the equilibrium of their forces. The universe was a perfect sphere with the earth sitting motionless in the centre, going nowhere because all directions were the same.

The notion of perfect balance became a subject of contemplation to later philosophers and was wittily discoursed upon by means of the paradoxical fable of Buridan’s ass, named after the 14th century French philosopher Jean Buridan. Buridan’s ass is a hungry ass who is situated smack in the middle between two identical piles of hay. The haystacks being completely alike and equally far away, the ass has no preference for one haystack over the other. Consequently, the ass stays put in the centre and so dies of starvation. Quantum mechanics teaches us that in the lab and in nature, we do in fact encounter situations analogous to the plight of Buridan’s ass: alternatives that are precisely equally attractive. Does a particle traverse slit one or slit two? Is the spin of an electron up or down? Does a radioactive isotope decay within its half-life or not? But we need no longer accept the dramatic fate of Buridan’s ass succumbing to famine between two haystacks. For we have learned to accept indeterminacy as a fact of life. 50% chance we measure one outcome, 50% chance we measure the other. No dead donkeys. This probabilistic resolution of the paradox probably would not have pleased the medieval philosophers. Why should one outcome transpire rather than the other when there is no argument in favour of either? Well, if we accept the many-worlds interpretation of quantum mechanics, balance is restored. No outcome preponderates over the other. But it is not that Buridan’s ass chooses neither haystack. It chooses both.

While quantum mechanics frees Buridan’s ass from starving indecision, the universe itself, as we understand it today, does appear to hang in the balance like in the ancient cosmogonies. The best model we currently have for describing the observable phenomena in the cosmos — the acceleration of the universe, the distribution of galaxies, the cosmic microwave background — the so-called Lambda-CDM model, tells us that the continued existence of our universe hinges on a tenuous balance of its constituent components: matter, radiation, curvature, and dark energy. Too much matter and the universe collapses in on itself shortly after its formation. Too much dark energy and the universe is ripped apart before galaxies form. The world is balancing on a knife’s edge with perdition on either side. It is as in the genesis of Norse mythology, where the world emerged in the void of Ginnungagap between the scorching heat of Muspelheim (land of the fire giants) and the biting cold of Niflheim (home of the frost giants). We live in a narrow Goldilocks zone of stability. (Or relative stability. The expansion of the universe is accelerating, and so it appears the universe is heading towards an eventual big rip.) And it is not just in cosmological models that parameters must be delicately adjusted in order to give rise to a stable or metastable universe. The same applies to particle physics. For example, if the mass of one particular kind of particle was just a little bit larger, the universe would have been wiped out long ago by an expanding vacuum bubble.

The need to finely tune parameters in physical models is usually considered problematic. Ideally, parameter values should be computable from first principles. Why should some parameters have to be set to seemingly arbitrary values? Fine-tuned parameters have an air of ad hoc explanations about them; they remind us of the increasing number of epicycles that astronomers introduced to the geocentric model of the solar system in order to keep it in agreement with observations. When Kepler proposed that the planets move around the Sun in elliptical orbits, he was able to dispense with epicycles altogether, and we know that he was right on track. Among competing theories compatible with observation, Occam’s razor tends to serve as guiding principle, and so it has been since much before the time of Kepler or the time of Occam. Nature does nothing that is in vain or superfluous, Aristotle tells us. Many physicists are occupied in searching for simple underlying principles that may account for the seeming need for fine-tuning of physical models. For example, apparently miraculous cancellations in the calculations of particle masses could perhaps be the result of supersymmetry. Theorists have proposed many ways of possibly detecting traces of supersymmetry in the big accelerators that smash together particles and record the particles generated by the collision, though so far experiments have yielded no evidence indicative of supersymmetry.

Over the past decades, however, another school of thought has arisen that seeks to account for fine-tuning in an altogether different manner, which could be viewed as being very much opposed to Occam’s razor: The explanation is the multiverse! The emergence of a universe like ours is a wildly improbable event, proponents of this kind of argument concede. But it is a possible event. And so it will perforce occur somewhere in the multiverse. There is nothing unusual in an unlikely universe existing as long as many more likely universes also exist. The fact that we live in an unlikely universe is simply due to the fact that universes stable enough to support sentient life are all unlikely. One could object that postulating the existence of countless universes is the exact opposite of providing the simplest explanation. No explanation could be more involved. But contrariwise, many-worlds believers may argue that Occam’s razor disfavours the Copenhagen interpretation, which hypothesizes some mechanism that triggers wavefunction collapse, though no experiments testify to such a thing. In the many-worlds interpretation one set of principles, those of quantum mechanics, governs all.

A principle that has helped advance science in the past, and which the many-worlds account of fine-tuning definitely does violate, is the Copernican Principle: we are not the center of the universe. There is nothing special about our home on one of the spiral arms of the Milky Way Galaxy, in the Local Group of galaxies. But our universe is special. It is a rare exception among universes, a universe stable enough to host thinking beings, just like our planet is special in that it is the only inhabited planet we know of. Out with the Copernican Principle, in with the Anthropic Universe.

It is in the nature of science to explore the unknown, and so by its very essence, we cannot know what the future of science holds. We cannot predict which guiding principles will prove fruitful to science in the time to come. The purpose of any theory is to make far-reaching and precise predictions, and the ultimate arbiter in matters of science is experimental tests. Grand arguments that appeal to our sense of aesthetics count for little in and of themselves. If the concept of the multiverse is to be a part of the future of science, it will have to play a role in producing new experimentally testable predictions and not simply provide explanations for phenomena we already know of or, even worse, merely dissuade scientists from searching for other explanations that may ultimately be closer to the truth. But the fact that there is a very real possibility that our universe is but a tiny speck in an almost limitless multiverse, where everything that can occur does occur, sure is fun to think about.


Rigby noticed the headlines of a newspaper on a nearby table. On the front page of *The Saturn Daily*, she saw that a motion had passed in favor of an amendment to standard migration laws. This change would dictate the future of those wanting to migrate beyond the orbit of Neptune in the solar system. Exploration groups with certain qualifying scientific missions would now be allowed to try to set up working colonies. But there were, of course, many aspects to consider with projects like these. The article had been controversial because of the possibility of *very* expensive rescue missions. She felt a sort of drifting within her thoughts once more and stared out the window. Then, she looked back at the paper just to make sure she had read it correctly.

After waiting quite a long time, she decided to go look for Casey. She got up from her chair and walked down the hallway towards the coffee room. Yet, not far beyond the door of the office, there was Casey bearing hot coffee, tea, milk, and sugar on a tray. Casey looked at Rigby and paused, tray in her hands, before continuing forward. Rigby began to speak but then thought better of it. She quietly followed her back into the office. Casey set the tray down next to Rigby, and they both grabbed a mug. Rigby put milk in her tea followed by sugar. She added an extra white grainy cube, pausing to enjoy the feeling between her thumb and index finger.

Looking back at the newspaper, Rigby remembered that a new crossword puzzle series would be out in this week's press. She was certain Cameron would enjoy these; they were called “__Theory Thinkers__”.

“Casey, do you mind if I take this newspaper with me?”

“Not at all. They just pile up over time.”

“Thanks.” Rigby replied.

“You know it’s been a really long time since we last saw each other… ”, said Casey as she paused to look at Rigby. “...what month are you? Not that I know anything about having kids. Ich verstehe nur Bahnhof.”

Rigby made a little scoffing sound, “Oh, yeah. I’m at about four… four and a half months along.”

“Wow...” said Casey followed by an extended pause. “You know I always thought I might end up having kids one day, but you know… I don’t know. Sort of scared me a bit. Not that that is what I should be saying right now... uh congratulations you look great and you’re glowing, and you know all that.”

“Yep well. Thanks Casey. You always know exactly what to say.” Rigby was caught off guard by the comment and honestly a bit offended. She was just beginning to really show. She was certainly not looking forward to being gawked at for the next few months.

Casey was not very good at picking up on other people’s emotions. She pressed on, “So…. can I show you something?”

“Yeah. Yeah, go for it.” replied Rigby.

Casey got up and left without a signal as to where they were headed. Rigby was used to this though; she put down her tea and briskly followed. Down the hallway there was a conference room with a larger table, and Casey thumbed through the photos that were spread out on top of it. Rigby wondered if this was the reason for her delay from the coffee room.

“What are these?” Rigby casually scanned the pictures. Quickly after, her demeanor changed. “Where did you get these?” asked Rigby as she turned to look at Casey.

“Some of them are from various social media accounts posting about an academic party, and some of them are from a security camera.” said Casey. “After you called, I thought I would look into the last time Dr. Finch was here, and I noticed something strange.”

Rigby spoke as she continued to scan the photos, “Perhaps for starters, this is the same outfit she was wearing the day she was murdered.” She wondered if there was any connection; it might be nothing. The visit wasn’t on the same day the homicide had taken place.

After a minute or two of looking at the photographs, Casey went over to her computer and loaded a new set of pictures, and the photos displayed on the table changed. Immediately after, Rigby noticed the cube. Rigby said softly, “…and she has that cube we found.”

Casey probed her for more information, “You found a cube?”

Rigby thought to herself that there was something unnerving about these pictures: a couple of them had an odd quality about them. Like they were too dark beyond the people in the photos and too crisp on certain features. “What did she do when she was here? Who did she visit?” Rigby asked.

“Well, she wasn’t formally invited to be here for any particular reason, but she met with a few other physicists in the department. No one out of the ordinary. However, you don’t see her for about 20 minutes during her visit on campus, which is unusual. We have a pretty good camera system, but there is a clear gap in how she got from one spot to another.”

“Your access to the crime scene should clear in the next day or so,” said Rigby. Casey perked up, and Rigby continued, “I’ll be looking forward to what you can make of that. The killer was very careless. If they were in the system, we would have made an arrest by now.” There was noticeable irritation in her voice, and her face looked tense.

“So, the cube?” said Casey.

“Ah, yes, that. She died with it in her hands. It’ll all be in the impression database.” replied Rigby. Her mind was beginning to wander to who else from the pictures they might be able to talk to. Felix Gardener stood out in one of the photos. He was a well-known astronomer who had recently won an award, though Rigby couldn’t remember its name.

Casey noticed her looking at Felix and spoke up, “You might recognize Dr. Felix Gardener in these pictures. He has certainly been in the press a lot recently for his work on the structure and origins of certain superclusters. He met with Anna, and so did her assistant. It was on the way out of his office that she disappeared from the cameras for about 20 minutes. It could just be a glitch or a blind spot in the video, but I think it would be worth our while to speak with Felix.”

“OK, let’s go for it.” Rigby started to take out her comm.

Casey made a halting hand gesture towards Rigby as she was reaching into her bag. “Way ahead of you on that one. I already tried…” said Casey. Rigby looked up and stopped rummaging through her bag. Casey continued, “He hasn’t been seen by anyone in his group for a couple of days, but it also seems he has been doing that lately. His assistant is currently at the Mars tech telescope, you know..." Casey paused to think of the name, "...the New Atacama Observatory!" she said while pointing at Rigby. "The assistant didn’t have much to say, though. But she did seem distraught when I mentioned Felix.”

“What does his assistant do at the telescope exactly?” asked Rigby.

Casey explained, “She deals with __adaptive optics__. It essentially reduces the effects of atmospheric disturbances on optical images taken by a telescope. She helps calculate corrections that can compensate by deforming a mirror in the telescope. She should be out there for the next few months at least.”
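For curious readers, the correction Casey describes can be sketched as a small least-squares problem. Everything below is illustrative: the influence matrix, sensor values, and dimensions are made-up toy numbers, not drawn from any real adaptive-optics system.

```python
import numpy as np

# Toy adaptive-optics correction sketch (illustrative numbers only).
# 'influence' maps each of 3 mirror actuators to the wavefront
# change it produces at 5 sensor points.
rng = np.random.default_rng(0)
influence = rng.normal(size=(5, 3))

# A distorted wavefront, as measured at the same 5 sensor points.
measured_error = rng.normal(size=5)

# Least-squares: find actuator commands whose combined effect best
# matches the measured error, so deforming the mirror by the negative
# of that shape cancels as much of the distortion as possible.
commands, *_ = np.linalg.lstsq(influence, measured_error, rcond=None)

residual = measured_error - influence @ commands
print("residual RMS:", np.linalg.norm(residual) / np.sqrt(5))
```

Real systems repeat a correction like this hundreds of times per second with a wavefront sensor and many more actuators, but the core idea of solving for mirror commands that cancel the measured distortion is the same.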

Rigby stated, “Sounds like it’s time for a trip. And, we can wait for your access to the crime scene to clear in transit.”

Casey immediately started walking towards the comm system to find them tickets on the next shuttle out to Mars.

Rigby went home that night a little nervous about telling her husband she would be gone for about a month on a trip to Mars for the Dr. Finch case. He might be a little worried, but she was pretty sure he would keep that to a minimum. The dogs perked up as she walked in the door. “Hey there you two.” Rigby patted them on the head and scratched their backs as she unloaded her coat and bag in the closet.

“Hey there!” Cam shouted out from the kitchen. “I hope you are hungry.”

The dogs followed Rigby to the kitchen. She could smell what seemed to be Cameron’s go-to on weeknights, an exotic mushroom quiche. It was divine. “That smells amazing.” she replied as she entered the kitchen. They both smiled at one another and she moved in for a quick kiss. “I have some good news and some bad news.” said Rigby.

“Uh oh. What’s the good news?” said Cameron as he shuffled about the kitchen, moving Brussels sprouts around on the stove.

“We might have a lead on a case. It seems that a scientist on Mars might be able to tell us about some odd behavior of Anna’s at Rubin. But…”

“... now you have to take a trip to Mars?” said Cam as he finished her sentence.

“Yes, it will be about a month.” replied Rigby with a look of apology in her eyes.

“Well, duty calls. We will just have to make sure you have everything you need.” replied Cam with a smile. “When do you leave?”

“Tomorrow.” Rigby replied.

“If you want to, you can go straight up and start packing. I’ll let you know when dinner is ready.” said Cameron.

“You are amazing, did you know that?” said Rigby. She went upstairs to start packing and the dogs followed. It had been a while since she had been to Mars and she was looking forward to it.

The next morning, Casey met Rigby in the shuttle docking port lobby. “So, no issues with Cameron?”

“Why would there be?” said Rigby firmly.

Casey picked up on that one. She stumbled with her words for a second and then said, “Oh, you know... he is a nurse.” She handed Rigby a ticket. Rigby chose to ignore that last comment.

Casey then said, “We will have a couple of stops along the way. First, we pick up passengers at Garden Station, and then at Europa we pick up cargo. Also, I got us a private two-person bunk for the second shift. Should be pretty cozy.”

“Sounds good, let’s get settled into one of the launch cabins. How long until the second shift?” said Rigby.

“Should be about 8 hours from now.” replied Casey.

“Perfect.” Rigby looked around her. They walked towards the collection of cabins where everyone who was not a crew member gathered at launch time. Long rows of well-fastened seats lined the walls. After passing through the first few cabins, Casey and Rigby strapped in about halfway down an empty row of seats.

*In a room on Garden Station...*

Somewhere in between the lies and the little breaths of air she took, the world propelled forward in tandem. Where there was once ambiguity there was now complete and total clarity for Michael. She could feel the production of acid in her stomach going on high. The juxtaposition of this feeling alongside her projection of calm control fueled her performance.

Later that day she would long for a life of complete solitude. Maybe in that life she’d have the ability to find peace through not wanting anything. That’s where she would bury any negative feelings she had from today, in a dream.

*But if complete and total control over her own desires was not achievable, she would have to settle for Michael’s.*

“Does anyone see my purse? I can’t remember for the life of me where I left it.” she said with a warm and barely perceptible smile.

Michael picked it up softly and handed it to her from the chair behind her. “Here it is, Camilla.” he said in a tone of frail love.

She smiled lightly and left the elegant office. It was a bubble within a bubble. After clearing the main lobby, she stepped out onto the grating that lined the floor of the upper deck. Children were running in the middle of the plaza two levels below, dirt and bits of food on their faces. Even the dialect spoken here was different. None of the children she was looking at now would ever be allowed where she had just come from. It was unlikely they would ever even set foot on the upper deck. To be a guard was an honor.

She had changed her clothing to better fit in. The administration felt that not knowing what you are missing is less painful.

She needed to get to a flight terminal immediately. Thankfully she had what she needed. In her purse were coins used by dock traders nearly ubiquitously across the new frontier and a handful of old-world compatibility chips.

She knew the fastest route to the terminal by memory. Long strides and an undeterred gaze made her move like liquid into a cavity on Earth. She saw an open shuttle, gracefully slipped the operator a couple of coins, and went straight to the first launch cabin she saw with empty seats, about 20 other passengers already strapped in. She sat in the first open seat she found. Her eyes closed for a moment in a sigh of relief, but that was all she allowed herself. She always sat up straight, and as soon as she was done with this brief indulgence, she opened her eyes to look at the other passengers. Just at that moment the passenger to her left said, “Hi, I’m Casey.” Casey then also pointed to her left and said, “This is my colleague Rigby.”

“Nice to meet you.” nodded Rigby.

With that same warm tone in her voice she said, “Hi. I’m Camilla. It’s nice to meet you as well.”

**Make sure to check out the posts this scene takes inspiration from: __Theory Thinkers__ and __Adaptive Optics in the Atacama Desert__.** *This blog post will also be available as a podcast in the near future, as read by the author, Erin Blauvelt.*

Author: **Erin Blauvelt, PhD**


**Dr. Shruti Paranjape is on a discussion panel with Prof. Miranda Cheng, Prof. Chiara Nappi, and Prof. Silvia Penati on June 29, 2021 (starting at 9:00am EST) at the annual international Strings conference hosted by ICTP-SAIFR, which is being held online this year. We encourage you to watch and/or participate in this important discussion, titled "4 Generations of Women in String Theory".**

Youtube livestream link: https://youtu.be/OBU36ttpg0Y

**Discussion Description:** The persistent under-representation of women and minorities in physics can be linked to a myriad of factors, such as pervasive feelings of inadequacy, lack of social support, negative stereotypes, harassment, and struggles with work-life balance. The discussion session begins with presentations by four women from four generations, followed by interaction with the audience.

**Speaker Affiliations:**

Dr. Shruti Paranjape is at the Leinweber Center for Theoretical Physics, University of Michigan.

Prof. Miranda Cheng is at the Institute of Physics and Korteweg-de Vries Institute of Mathematics, University of Amsterdam and Academia Sinica, Taiwan.

Emerita Prof. Chiara Nappi is from the Department of Physics, Princeton University.

Prof. Silvia Penati is at the Dipartimento di Fisica, Università di Milano-Bicocca.

**S. N. Hazel Mak is giving a shared talk with Yangrui Hu at the gong show on June 23, 2021 (starting at 12:40pm EST) at the annual international Strings conference hosted by ICTP-SAIFR, which is being held online this year. Please join in to watch them present "Solving a 40-year-old Problem: 11D Superfield". Their presentation is scheduled to begin at the end of the gong show, around 1:40pm EST.**

Youtube livestream link: https://youtu.be/uonftryWDgs

Presentation Abstract: To write any 11D off-shell supersymmetric theory, one needs to know which component fields constitute an 11D superfield. This deceptively simple and absolutely fundamental problem, however, was left unsolved for 40 years after the introduction of 11D on-shell supergravity in 1978. Last year, we decided to invoke tools from Lie algebraic representation theory, such as Young tableaux and branching rules. For the very first time, all the component fields in the 11D unconstrained scalar superfield have been written in Lorentz irreducible representations. Together with Breitenlohner's method, we are able to write all the components of any 11D superfield. This enables us to identify the semi-prepotential and prepotential candidates for 11D supergravity, which are the scalar superfield and the superfield with one spinor index, respectively.

Relevant papers: __1911.00807__, __2002.08502__, __2006.03609__, __2007.05097__, __2007.07390__

**Additionally, Hazel will be presenting a poster, "1D, N = 4 Supersymmetric SYK", at the Strings poster session on June 25, 2021 (starting at 12:40pm EST). We encourage all our readers to check out her work here as well!**

Youtube livestream link: https://youtu.be/RIPZEgkO7e4

Poster Abstract: Proposals are made to describe 1D, N = 4 supersymmetric systems that extend SYK models by compactifying from 4D, N = 1 supersymmetric Lagrangians involving chiral, vector, and tensor supermultiplets. Quartic fermionic vertices are generated via integrals over the whole superspace, while 2(q-1)-point fermionic vertices are generated via superpotentials. The coupling constants in the superfield Lagrangians are arbitrary and can be chosen to be Gaussian random. In that case, these 1D, N = 4 supersymmetric SYK models would exhibit Wishart-Laguerre randomness, a feature shared with other 1D supersymmetric SYK models in the literature. One difference from the 1D, N = 1 and N = 2 models, though, is that our models contain dynamical bosons.

Relevant paper: __2103.11899__

The mathematical physics group at the University of Edinburgh will soon be taking PhD applications to commence studies in September 2021. Edinburgh is an exciting place for research in both physics and mathematics, as well as being a fantastic city in which to live and work. Our group in the School of Mathematics consists of seven faculty, two postdocs and eleven PhD students, and we are part of several inter-departmental and cross-institutional research initiatives (including the Edinburgh Mathematical Physics Group, Higgs Centre for Theoretical Physics and Maxwell Institute) which provide a vibrant atmosphere for research and learning with seminars, conferences and many visitors. We also have close links with groups working on algebra, geometry, number theory and topology within the Hodge Institute.

Research in the group ranges across many different aspects of mathematical and theoretical physics. Just a few examples of current research projects in the group include: developing non-relativistic gravity and holography; classifying black holes in higher dimensions; finding new tools to compute scattering amplitudes in strong backgrounds; exploring the non-perturbative mathematical structures of quantum field theory; looking for ways to detect "extremal" black holes in gravitational wave data; and investigating the mathematical structure of supersymmetry and supergravity.

The particular interests of the faculty can be found here, with eligibility requirements and application instructions available here. Applications from women and under-represented minorities are actively encouraged! The deadline for applications to get full consideration is 31 January 2021, and short-listed candidates will be interviewed prior to offers being made.

*Post Author: Tim Adamo, University of Edinburgh*

__Images: Old College, University of Edinburgh (Wiki CC license) and The coat of arms of the University of Edinburgh, displayed on St Leonard's Land (Wiki CC license).__

Join us for an interview with Theory Girls Laura Johnson and Shruti Paranjape as well as Callum Jones! This podcast is available for __live stream or download__.

**The University of Chicago**

Existing at the intersection of these identities has been turbulent. Eight years ago, I didn’t even know what physics was, much less that I would be forging a path as the soon-to-be third Black woman to earn a physics PhD from my university, and as the __~100th Black woman__ to earn a physics PhD in the United States.

If you’ve been following __#BlackInTheIvory__ on Twitter, a hashtag created by Joy Woods and Shardé Davis, you know that Black scholars carry horror stories of racialized experiences in the academy. Part of my #BlackInTheIvory story is that my working definition of physicist has expanded beyond contributing scientific discoveries about our Universe. Being a Black woman in physics means equipping myself with the knowledge to identify and articulate phenomena like implicit bias and imposter syndrome so that I can navigate structural racism and sexism in my field. It means confronting what comes with being the first woman and/or Black person to do science in some labs, control rooms, and department buildings. It means educating my colleagues and advisors about my Black experience so that I can feel safe and supported in my research endeavors. It means adopting this additional work so that I can make it a little easier for the minoritized students who come after me. This is how my identities have come to coexist.

And then #GeorgeFloyd happened. A white police officer pressed his knee into a Black man’s neck for eight minutes and forty-six seconds. George Floyd died that Monday because he couldn’t breathe. I remember watching the video on my lunch break feeling like I couldn’t breathe either.

As the week went on, my ability to focus on physics degraded significantly. How could I pay attention to anything when my community was grieving another loss of Black life? How could I lose myself in C++ scripts when my friends were being tear gassed in the streets? How could I put in the effort to study invisible neutrino interactions over bringing awareness to the invisible stories of Black women victimized by police violence? My mom always tells me that getting an education is my best form of protest – that breaking barriers is the best way I can effect change. But the importance of that work dimmed in light of what I felt mattered more: the right to life while Black. Once again, my identities felt chaotically at odds with each other.

This dissonance became more pronounced as the physics community responded to the murders of #AhmaudArbery, #BreonnaTaylor, #GeorgeFloyd, and #TonyMcDade. All of a sudden, there was an urgent push to address anti-Blackness in academia with intense discussion and swift action. This was necessary and long overdue, but with everything happening at once, I was extremely spread thin. As one of the few Black scientists in my various research communities, I was suffocating from the pressure to be an active voice in these conversations, contribute to community efforts outside of the Ivory Tower, and continue my daily research and extracurricular activities on top of that. My working definition of physicist expanded yet again: I marched, I organized, I fundraised, I attended town halls, I provided feedback on diversity and inclusion initiatives, I checked in with my Black colleagues and family and friends, and I did my best to respond to all the white people reaching out to me and seeking guidance. Being a Black woman in physics means fighting for my people’s right to life outside of the academy alongside their right to exist and thrive within it.

When I decided to be a scientist, I didn’t know that I was signing up to be an activist as well. But my lived experience in grad school has made it such that this is how I reconcile different parts of my identity. I shoulder the load of working toward more equitable academic spaces while simultaneously trying to prevent the persisting inequities from pushing me out. Being a Black woman in physics means that I am often amplified as a trailblazing example of diversity by the same institutions that structurally impede me. But my progress in spite of these barriers should not be romanticized – it should be catalyzed. I don’t want Black students to normalize additional work and the overcoming of obstacles as necessary for their success. I want my non-Black colleagues to deconstruct the systems requiring it to be this way.

Resource list for more reading: __https://www.particlesforjustice.org/resources__

Oxford College image: Andrew Shiva / Wikipedia, CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0/deed.en)

**University of California, Santa Barbara**

“So, what do you do all day?” is a typical response after I tell anybody, whether it be a friend or a stranger sitting next to me on an airplane, that I study theoretical physics. I have to admit that I haven’t formulated a concrete answer to this yet, even two years into my PhD. Sometimes the hours slip by and I ask myself “what *do* I do all day?” The most truthful response is that I sit down at my desk, read some papers, work on or outline a calculation, troubleshoot a concept I’m stuck on, attend talks and meetings, and consume 1-2 cups of coffee during the entire process. But I know this question digs deeper into the heart of what theoretical physics really is.

I am a graduate student at UC Santa Barbara, where I study a particular subfield of theoretical physics called particle phenomenology. This means I am a model builder: I investigate different extensions of the Standard Model (SM) to see where they might fit in our parameter space of data, working between the realms of mathematical definitions and experimental observations. The SM comprises our current understanding of particle physics: matter is composed of twelve different fermion (spin-1/2) particles (six quarks and six leptons), forces are carried by boson particles (the spin-1 gluon, photon, W, and Z, along with the spin-0 Higgs), and the interactions between them are the strong nuclear force, weak nuclear force, and electromagnetic force. The latter two forces can be combined under a symmetry-breaking scheme to form the electroweak force. The model has been extensively tested, most notably at the Large Hadron Collider (a 27 km-long circular particle accelerator located on the border between Switzerland and France) but also continually in tabletop experiments, smaller circular and linear accelerators around the world, and in cosmological probes and datasets. It has been largely successful at making predictions about the world around us.

However, we know that the SM is incomplete. It only includes three of the fundamental forces; the fourth, gravity, has not been successfully incorporated. We also have extensive evidence for the existence of dark matter and dark energy, neither of which is described within the realm of the SM. Then there are more subtle problems: there are a large number of free parameters in the theory, each of which has to be input directly into our models. There are a number of fine-tuning problems, whereby the values of certain constants must be precisely adjusted to fit experimental data or predictions. The theory also does not provide explanations for key questions, such as why the mass of the electron is what it is, falling short of a desired first-principles approach.

The majority of current work in theoretical particle physics revolves around these ideas. But how does this process actually transpire? There are a few basic ingredients that go into a particle-based theory: a description of the fields, their interactions, and the symmetries they obey. This is bundled together in a mathematical depiction, a Lagrangian, as the endpoint of model building. With any given number of parameters, a theory can often be adjusted to create several potential models aiming to describe some specified phenomenon. If this seems vague, it’s because it is: there’s no cookie-cutter method for coming up with a new model, because you have to see what works. Creating avenues for new quantum fields and interactions is very much an art form, and one that is inherently speculative. The vast majority of theories turn out to have little experimental support, but many of them spawn new ideas and keep our collective knowledge trekking forward in the constant pursuit of an all-encompassing theory, which may, or more likely may not, come within our lifetimes.

But we do not despair. Physicists love encountering a new finding that does not fit the predictions derived from our current understanding of physics, because it means a new puzzle to figure out, one that is sometimes decades or centuries in the making. This is the heartbeat that drives the field forward. When confronted with an unusual experimental result, or anomaly, our job as theorists is to figure out what the broader implications of that anomaly might be. Maybe an excess signal was detected: to figure out what it could be, we might check constraints from dark matter parameters, or look into rare particle interactions. We often have to work within the confines of a long list of experimental constraints, but every once in a while, some explanation fits.

There are a few general principles that have produced promising results in the past, although they have not proven to be consistently reliable. One of these, naturalness, is a principle which roughly states that a theory should have no parameters that are either incredibly large or unusually small compared with the magnitude of its other parameters. While this may seem a strange idea, a violation of it hints at new physics above the energy scale we can currently probe. And there is definitely unknown physics at a very small distance scale, and hence a large energy scale, since we do not have a working theory of quantum gravity describing our universe.

As an example, for decades, physicists have been looking for supersymmetry, a theory that would nicely align the four fundamental forces at a particular energy scale via a relationship between fermions and bosons. However, we have found no evidence for this theory to date — in testing those potential predictions, we continually come up empty handed. While a disappointment, science is often not so straightforward, but we march forward by coming up with new theories. These might be some less-elegant modification of supersymmetry, string theory, or something completely different. It is not always the case that the prettiest or most compelling solutions end up being answers.

To further illustrate this process, let’s walk through an example that I am currently working on: a parity solution to the strong CP problem. The strong CP problem is one of those subtleties of the SM, and it has to do with the fundamental symmetries of charge conjugation (C) and parity (P). The former refers to a symmetry in which positive charges are exchanged for negative ones and vice versa, while the latter refers to a symmetry in which we flip the sign of all spatial coordinates. The CP transformation refers to the combination of these individual symmetries. The “problem” arises because, while this symmetry can be violated in the case of weak interactions (involving the weak force), it is seemingly conserved in the case of strong interactions (involving the strong force). At first glance this might not seem to be a problem, yet we know of no reason why CP should not be violated in the strong interaction. Further, it seemingly sets the value of a parameter in the theory, known as the CP-violating phase, to a tiny value of no more than about one part in 10^10 (i.e. the phase is smaller than ~10^-10). This is an example of fine-tuning.
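For reference, the CP-violating phase enters the strong-interaction Lagrangian through what is conventionally called the theta term. The form below is the standard textbook expression rather than anything specific to this post, and the quoted bound comes from neutron electric dipole moment searches:

```latex
% The QCD "theta term", the source of strong CP violation:
\mathcal{L}_{\theta} = \bar{\theta}\,\frac{g_s^2}{32\pi^2}\,
  G^{a}_{\mu\nu}\tilde{G}^{a\,\mu\nu}
% Neutron EDM measurements constrain the phase to be tiny:
|\bar{\theta}| \lesssim 10^{-10}
```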

The most widely-studied solution to the strong CP problem is the axion solution, which sets the CP-violating phase to zero via the introduction of a new dynamical field, corresponding to a particle that has not been discovered but is the target for several current and upcoming experiments. Yet this is not the only option. We first notice that the symmetry group of the SM does not obey parity; when “left-handed” particles and symmetries are exchanged with their “right-handed” counterparts, the theory fundamentally changes. This is encompassed in the idea of chirality, in which certain phenomena are not identical to their corresponding mirror images. But if the SM symmetry group were to be extended such that parity is obeyed, this has the effect of also setting the CP-violating phase to zero. Hence, a solution!

Here’s an important nuance: when I say this “solves” the strong CP problem, I don’t mean that I’ve definitively found the answer. I mean that this model extends our list of potential answers, because determining the true solution relies on experimental verification, and we are very stringent in requiring a 5 sigma (five standard deviation) result for confirmation. This corresponds to roughly a one in 3.5 million chance that the result originates from a random fluctuation, an incredible level of stringency. Only once this has been achieved do we declare the “eureka” of discovery.
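The “5 sigma” threshold and the “one in 3.5 million” figure are two ways of stating the same Gaussian tail probability; this is standard statistics rather than anything specific to this model, and it can be checked in a couple of lines:

```python
import math

# One-sided Gaussian tail probability for a 5-sigma excess:
# p = P(Z > 5) for a standard normal variable Z.
sigma = 5.0
p_value = 0.5 * math.erfc(sigma / math.sqrt(2))

print(f"p-value at 5 sigma: {p_value:.3g}")  # ~2.87e-07
print(f"about 1 in {1 / p_value:,.0f}")      # ~1 in 3.5 million
```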

Yet we do not know how long it will take to reach such a moment, or even if one will come at all. Physicists often work on extended time scales, as theories can take decades or longer to be properly verified or debunked, and many theorists might not see the fruits of their labor. Current technologies we know and love, like smartphones and GPS, are built from relatively old ideas in fundamental physics and proliferated long after their theoretical foundations were developed. But the majority of us are driven instead by the process of continual scientific questioning and discovery that comes only with exploring the deepest questions the universe has to offer. It’s not for everyone, but it certainly works for me.

So, I suppose that is what I do with my days.

Wiki - Standard Model Image Credit: By MissMJ, Cush - Own work by uploader, PBS NOVA [1], Fermilab, Office of Science, United States Department of Energy, Particle Data Group, Public Domain, __https://commons.wikimedia.org/w/index.php?curid=4286964__

**Brown University**

Hello! I’m Leah, a second year PhD student at Brown University, working on a wide variety of problems at the intersections of gravity, cosmology, and high energy theory. Today, in an attempt to think about something other than the pandemic, I’m going to talk to you about one of the biggest problems in modern cosmology: the cosmological constant problem.

“Physics is an experimental science” is the favorite refrain of one of my graduate school professors, who wanted to drill into us the very concrete experimental underpinnings of Jackson E&M. As a theorist whose forays into experiment have included frustration to the point of tears while attempting to use an oscilloscope, and blowing up a superconducting magnet, I always bristled at the insinuation that the experimental aspect of physics was somehow more important or more fundamental. However, I have since realized that, although there is certainly fundamental value in theoretical work, at the end of the day, if it doesn’t match experimental evidence, we aren’t doing physics any more. I am certainly not the first person to come to this conclusion, and I will not be the last, but fundamental theories have to match observations and experiments if we want to get closer to explaining the universe. One great example of this involves Albert Einstein and his theory of general relativity, in which he inadvertently stumbled upon one of the biggest puzzles in modern cosmology.

Today we take it as a given that the universe is expanding. In any introductory astronomy or cosmology class, we learn Hubble’s law, v_r = H_0 D, where H_0 is a constant which accounts for the expansion of the universe. In particular, H_0 determines the proportionality between the recessional velocity v_r of a distant object (i.e. the velocity at which the distant object moves away, or 'recedes', from the observer) and the distance D between the object and the observer. Thus, the further away an object is, the faster it moves away from us. However, an expanding universe was not always taken for granted. In fact, our standard ‘Big Bang’ cosmology was actually a nickname tossed out with derision in 1949 by Fred Hoyle, an astronomer who was steadfastly convinced that the universe was static, to mock those who were silly enough to think that the universe was expanding. In the early to mid 1900s, there was a great divide between those who thought that the universe was static and those who thought it was expanding. Initially, Albert Einstein was staunchly on the side of those who posited the universe to be an unchanging, eternal entity.
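As a quick worked example of Hubble’s law: the value of H_0 used below is the commonly quoted ballpark of roughly 70 km/s per megaparsec, and the distance is invented purely for illustration.

```python
# Hubble's law: v_r = H0 * D
H0 = 70.0   # km/s per megaparsec (approximate ballpark value)
D = 100.0   # distance to a hypothetical galaxy, in megaparsecs

v_r = H0 * D  # recessional velocity in km/s
print(f"recessional velocity: {v_r:.0f} km/s")  # 7000 km/s
```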

In 1915, Einstein wrote down his famous field equations, describing the curvature of spacetime due to matter. The ten coupled, nonlinear, hyperbolic-elliptic partial differential equations can be neatly summarized in the tensor equation

G_{μν} = R_{μν} − (1/2) R g_{μν} = (8πG/c⁴) T_{μν}.

This set of equations compactly describes the theory of general relativity, elegantly summed up by John Wheeler as ‘spacetime tells matter how to move; matter tells spacetime how to curve’. The left-hand side of the equation encodes information about the curvature of spacetime, while the right-hand side describes the matter that is present in spacetime.

However, there is a missing piece here. When Einstein initially wrote down these equations, he was dismayed to find that they indicated the universe is a dynamic entity, contradicting his (widely accepted) belief that the universe is static. Assuming there was something wrong in the equations, which needed to describe the physical universe as it was perceived to be, Einstein introduced a ‘fudge factor’: a cosmological constant term Λ, which would hypothetically keep the universe in equilibrium. The full equations then became

R_{μν} − (1/2) R g_{μν} + Λ g_{μν} = (8πG/c⁴) T_{μν}.

Everything seemed all well and good until 1929, when observations by Edwin Hubble confirmed that the universe was indeed expanding. Einstein regretted his choice to introduce the cosmological constant into his equations, allegedly referring to it as his ‘biggest blunder’. In hindsight, Einstein shouldn’t have been so hard on himself. We now know that the cosmological constant in the field equations represents the vacuum energy of the universe, which is in fact causing the universe to expand! So, in the end, although Einstein’s attempt to match his theory to the physical universe was initially misguided, he did end up with a theory that describes observations, just not the ones he thought.

Einstein’s ‘biggest blunder’ actually ended up being correct, but all is not well with the cosmological constant yet. There is another issue, generally referred to as ‘the cosmological constant problem’: a ~120-order-of-magnitude discrepancy between the observed and theoretical values of Λ. Yes, you read that correctly. This conundrum unfortunately cannot be resolved trivially. Given data from the Planck satellite, Λ has an experimental value of 4.33 x 10^-66 eV^2. On the theoretical side, effective field theories predict that Λ should be on the order of the Planck mass squared, or ~10^54 eV^2. Again, to emphasize, these two values differ by **120 orders of magnitude**. It may be tempting to throw up our hands, write a ‘~’ instead of an ‘=’, and shrug off the discrepancy (after all, what are a few orders of magnitude between friends?), but this is a more difficult problem: either something is going very wrong with the measurements, or there is a problem with the theory. This may seem hopeless, but fortunately for us there are many smart people working on it. Many possible explanations have been proposed, though none has resolved the issue definitively. Three categories of paths to a solution include (but are certainly not limited to):
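If you want to verify the ‘~120 orders of magnitude’ claim yourself, it is a one-line calculation with the two values quoted above:

```python
import math

# Observed value of the cosmological constant (Planck data, as quoted above)
# versus the naive effective-field-theory estimate of ~(Planck mass)^2.
lambda_observed = 4.33e-66  # eV^2
lambda_theory = 1e54        # eV^2

discrepancy = math.log10(lambda_theory / lambda_observed)
print(f"Discrepancy: ~{discrepancy:.1f} orders of magnitude")
```

The result lands just above 119, which rounds to the famous ~120 orders of magnitude separating theory from observation.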

- **The anthropic principle:** Using anthropic arguments, one could consider that there are many different values of the cosmological constant in many different parts of the universe (or multiverse), and we just happen to observe a small value because we live in a patch that happens to have a small Λ. These arguments tend to be unpopular, but if you are willing to accept a multiverse and that physical quantities can be randomly assigned, then this may be the solution for you.
- **Modifications to general relativity:** General relativity is a complete theory that has been verified to great observational precision. However, some modification or extension of the theory could potentially resolve the cosmological constant problem. For example (to include my own research for a moment), one could imagine a scenario in which Λ is allowed to vary. If Λ itself is dynamical, then it is certainly feasible for it to have multiple values, and the discrepancy is not as much of an issue.
- **QFT solutions:** One can also look at the underlying quantum field theory processes that govern Λ and resolve the problem this way. For example, some sort of unknown scalar field dynamics in the early universe could ‘relax’ the value of Λ, in the same way that has been suggested for the very similar electroweak hierarchy problem.

This is of course not an exhaustive list of possibilities. There are a myriad of papers approaching this problem from all possible angles, and yet we still do not have a definitive solution. The cosmological constant problem is still a huge puzzle and an active area of research in cosmology. If you have any thoughts or ideas about the cosmological constant problem, I would love to discuss them with you! Even if you think your idea might be blatantly wrong, just remember: what Einstein thought was one of his greatest failures ended up leading to one of the most important components of modern cosmology. You never know!

Einstein image from Wikipedia, by Ferdinand Schmutzer, Public Domain: https://commons.wikimedia.org/w/index.php?curid=34239518

Wilson Telescope photo from Wikipedia, © Andrew Dunn, 1989 (http://www.andrewdunnphoto.com/), held under the Creative Commons license.

The interview begins with Laura asking Prof. Gerard 't Hooft what inspired him to study physics. We hope you enjoy it! Stream it on our podcast page.

She is not too far off from the truth. Indeed, the experiment I work on in the basement of Aarhus University’s physics building could be described as a glorified freezer. I have used these very words when giving lab tours to those outside of the field of cold atom physics.

That’s right, a lab. An experimentalist has invaded the theory girls. (Run!)

I spent a lot of time thinking about what I wanted to write here. In many ways, writing a coherent, informative, and engaging blog post is much harder than writing a research paper. Or is it? For a research paper, my colleagues and I present our idea, its background, and then we describe what we did. Sprinkle in a bit of an outlook and you have a reasonable first draft. Why not do that here?

Let’s start with my background, then we can get to the fun stuff.

I’m a postdoctoral researcher in experimental cold atom physics at Aarhus University in Denmark. I’m originally from Austin, Texas, and I did all of my university education at the University of Colorado at Boulder. Eventually, they gave me a PhD and I found a job out here doing what I would call quantum control and simulation. When people ask me (not my mom) what I do for a living, I typically tell them that I shoot lasers at things, very carefully.

**The author, very pleased with how the experiment is functioning.**

*Okay. Let’s break this down.*

Quantum mechanics deals with things at very small scales. In my particular subfield, I deal mostly with atoms and their interaction with electromagnetic fields. A few decades ago, a lot of work was done to understand how to cool bosonic atoms to within billionths of absolute zero (this is known as a __Bose-Einstein condensate, or BEC__). Since that wasn’t hard enough, __physicists decided to cool fermions too__ (it turns out the sign problem that fermions give rise to requires folks to be a bit more clever about how to get such atoms to do their bidding). The next logical step is molecules, so researchers are hard at work understanding the intricacies of molecular cooling and trapping (find a description of some of these molecular cooling techniques __here__).

This is all cool stuff (pun absolutely intended), but it’s not my shtick, at least not right now. I work with one of the simplest atoms (not named hydrogen) that one can play with. Rubidium was one of the first atoms condensed into a BEC, and it’s laughably simple to work with compared to the exotic tricks required for more complex atomic and molecular systems. Rubidium is still a super useful atom to play with, though, so it remains one of the workhorses of cold atom physics.

Indeed, what we do in our basement is known as __quantum gas microscopy__. Simply put, we make BECs of rubidium and trap them in light fields generated by a laser. It’s like a tractor beam for atoms, and that sci-fi analogy never gets old. Even better, if I take a laser beam and reflect it back onto itself, it interferes with itself to make a very nicely ordered pattern of alternating light and dark spots, and I can trap the atoms in the light spots. We call such a trap an *optical lattice*.

**An illustration of an optical lattice in one dimension.** A laser (wavelength λ, red) hits a mirror (right side of image) and reflects back on itself, creating a sinusoidal interference pattern of nodes (no light) and anti-nodes (with light). The period of the sinusoidal pattern (a) is half of the laser wavelength, and we typically describe the *depth* of the resulting atom trap in terms of a parameter labelled here by V.  In a 3D lattice, we overlap these optical lattices created from lasers coming from all three spatial dimensions.
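The half-wavelength spacing of the lattice sites falls straight out of the interference pattern. The sketch below is a minimal numerical check: it builds the cos²(kx) standing-wave intensity for a retroreflected beam and measures the spacing between intensity maxima. The 1064 nm wavelength is just a common trapping-laser choice, not necessarily the one used in Aarhus.

```python
import numpy as np

# Standing-wave intensity behind an optical lattice: a retroreflected laser
# of wavelength lam produces intensity ~ cos^2(k x), so trapping sites
# (anti-nodes) repeat with period a = lam / 2.
lam = 1064e-9                      # assumed trapping-laser wavelength, meters
k = 2 * np.pi / lam                # wavenumber
x = np.linspace(0, 3 * lam, 3001)  # a few wavelengths of the pattern

intensity = np.cos(k * x) ** 2     # normalized standing-wave intensity

# Locate interior intensity maxima (the lattice sites) and their spacing.
is_max = (intensity[1:-1] > intensity[:-2]) & (intensity[1:-1] > intensity[2:])
maxima = x[np.r_[False, is_max, False]]
site_spacing = np.mean(np.diff(maxima))
print(site_spacing / lam)  # ~0.5: sites sit half a wavelength apart
```

This is why the caption's lattice constant is a = λ/2: the forward and reflected beams add in phase every half wavelength.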

The atoms can then be trapped in this 3D lattice like eggs in an egg carton, but even that is a bit of an oversimplification. In fact, if we have an infinite lattice, Bloch’s theorem says that the ground state of the system is *delocalized* in that a trapped atom effectively samples each lattice site. Thus, the atoms act a lot more like broken eggs in an egg carton. We call this a *superfluid*.

If you make this lattice deep enough and the atoms cold enough, the atoms will start to act more and more *localized*--like whole, intact eggs. This describes what is known as a *Mott insulator*, a term lifted from condensed matter physics. If we then trap these atoms close to a microscope objective, we can take images with single-lattice-site resolution. If we build a Mott insulating state with precisely one atom per lattice site, we can get incredible images of single atoms. This enables us to study a remarkable variety of systems, and it has given rise to the field of quantum simulation.
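A minimal way to see the superfluid-to-Mott crossover in the numbers: the sketch below diagonalizes a toy two-site Bose-Hubbard model (hopping J, on-site interaction U) with two bosons and shows that the on-site number fluctuations shrink as U/J grows — the "broken eggs" localize into whole ones. This toy Hamiltonian is my own illustration of the physics, not the actual model studied in the Aarhus experiment.

```python
import numpy as np

# Two bosons on two sites, basis {|2,0>, |1,1>, |0,2>}.
# H = -J (b1^dag b2 + h.c.) + (U/2) * sum_i n_i (n_i - 1)
def number_fluctuation(U, J=1.0):
    """Variance of the site-1 occupation in the ground state."""
    t = np.sqrt(2) * J  # bosonic enhancement of the hopping matrix element
    H = np.array([
        [U,   -t,  0.0],
        [-t,  0.0, -t],
        [0.0, -t,  U],
    ])
    _, vecs = np.linalg.eigh(H)       # eigenvalues ascending
    ground = vecs[:, 0]               # ground-state amplitudes
    n1 = np.array([2.0, 1.0, 0.0])    # site-1 occupation of each basis state
    probs = ground ** 2
    mean = probs @ n1
    return probs @ n1 ** 2 - mean ** 2

print(number_fluctuation(U=0.0))   # 0.5: superfluid, large fluctuations
print(number_fluctuation(U=50.0))  # ~0: Mott-like, one atom pinned per site
```

At U = 0 the delocalized ground state visits |2,0> and |0,2> with appreciable weight; at large U those doubly occupied states are energetically frozen out, which is exactly the one-atom-per-site Mott state used for single-atom imaging.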

**An image taken with the Aarhus quantum gas microscope.** The purple dots are single rubidium atoms, and their color indicates the number of photons that we collect from each atom.

The idea here is straightforward: given a system that is relatively hard to study, like electrons in a solid (here, fermions do a better job of this than our good bosonic friend rubidium) or photons in a light-harvesting complex, we can set up analogous systems that are relatively easy to study, and ideally, conclusions drawn from experiments with these simpler systems can be mapped onto the more difficult one. Cold atom systems are fantastic for these types of studies. Cold atoms are slower and easier to control than electrons (and certainly slower than photons), and with each passing year, researchers are pushing the technologies and techniques behind this control, allowing for extraordinarily precise experiments that simulate more and more complex systems.

We call these types of __quantum simulators__ *analog quantum simulators*. This is in comparison to *digital quantum simulators*. You may know the latter of these by their more popular name, *quantum computers*. That’s right. Given enough qubits (the quantum analog to a bit) with low enough noise, quantum computers are *universal* in that they can simulate any quantum system. This could lead to serious advances in the understanding of high-temperature superconductivity and quantum chemistry, among many others. I chose the above two examples because they have potentially deep technological implications that could help us improve sustainability and health. Of course, concrete solutions are a long way off, but it’s always fun to dream. Personally, knowing potential applications of my research, even if the road is long, winding, and bumpy, has always been a huge motivator for me.

Okay, I just spent the whole last paragraph telling you about how cool digital quantum simulation is, but I only have an analog quantum simulator. Analog simulators are not (necessarily) universal, and indeed, systems like the one we have in Aarhus are limited in terms of the systems that can be simulated. Why on Earth would we study these sub-par simulators?

Well, right now, they’re not sub-par at all. Currently, the best quantum computers that we have exist firmly in the __noisy intermediate-scale quantum (NISQ) regime__. Basically, we don’t have a lot of qubits to play with, and the ones we have are riddled with errors. As such, it’s really hard to get useful results out of these systems, even though researchers are becoming increasingly clever at getting these systems to perform as well as possible. Analog quantum simulators are great tools because they are relatively robust to errors. Basically, if you have a good enough analog simulator that captures the important bits of what you are trying to model, you can set the system up and let it run. What you get out when you make appropriate measurements (e.g. of where your atoms are) should be what you expect, given that your lasers and atoms are behaving the way they should. This will allow researchers to make great headway on relevant problems in the field (like high-Tc superconductivity) while we simultaneously work to make quantum computers better and better.

In Aarhus, our experiment is still young. It’s still a bit of a teenager--moody and temperamental. We’re working hard to bring it to the maturity that we need to study the systems that we are interested in. We have wonderful pictures of single atoms in a 3D lattice that were taken with our microscope, and, when everything is complete, we want to study how atoms behave in two- and three-dimensional systems. We want to watch atoms hop around in our lattice, and we want to control how they move and interact. To do this, we will project light (another tractor beam!) up through our microscope objective and onto the atoms. The atoms will “see” this light as an additional potential, and this additional potential will affect how the atoms interact with one another and move through the lattice. These are the knobs we have to turn when we build our simulations, and until we have a quantum computer that can simulate any system we want, these types of analog quantum simulators will get a lot of use in laboratories around the world.

How does this additional tractor beam work? How do we shape the traps that the atoms experience? To do this, we use a *digital mirror device*, or DMD. DMDs are a special example of experimental tools known as *spatial light modulators*. Other spatial light modulators work by using sound waves to change how light moves through crystals (these are known as *acousto-optic deflectors*), and some use liquid crystal technology to manipulate the polarization and amplitude of light. A DMD is basically an array of very small mirrors, about 10 µm in size (smaller than the width of a human hair). Each of these mirrors is individually controllable and can be set to be either on or off, so we basically have a binary image that we can load onto the mirrors. Then, we hit the mirrors with our laser light, and the mirrors that are set to the on state reflect the light, and the reflected light then travels up through the microscope objective and into the system. By controlling the mirror pattern, we can control what gets projected through the microscope objective. Incidentally, this is exactly how projectors work!
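To make the binary-image idea concrete, here is a sketch of building a hypothetical on/off mirror pattern: a circular spot of "on" mirrors on an otherwise dark chip, which would project a round spot of light onto the atoms. The chip dimensions and spot size are invented for the example; a real pattern would be designed around the experiment's actual DMD and optics.

```python
import numpy as np

# Binary DMD pattern: True = mirror "on" (reflects light toward the atoms),
# False = mirror "off". Dimensions are illustrative, not a real chip's.
rows, cols = 768, 1024
spot_radius = 100  # radius of the projected spot, in mirrors

# Distance of every mirror from the chip center (broadcasting trick).
y, x = np.ogrid[:rows, :cols]
r = np.hypot(y - rows / 2, x - cols / 2)

pattern = r < spot_radius  # circular "on" region -> round projected spot

on_fraction = pattern.mean()
print(f"{pattern.sum()} mirrors on ({on_fraction:.1%} of the chip)")
```

Uploading a different boolean array changes the projected potential, which is the whole appeal: reshaping the trap is a software operation, not an optics rebuild.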

**A schematic showing how we collect light from the atoms** and use the DMDs to project potentials onto the atoms. (a) A cartoon of the experiment, showing the science chamber, the high-resolution microscope objective (oddly colored like a cigarette), and the digital mirror device. Light from the atoms (called *fluorescence light*) bounces off of a dichroic (two-color) mirror and goes to a camera that records how many photons hit each pixel. *Projection light* from a laser hits the DMD, passes through the dichroic mirror, and is projected onto the atoms. The DMD pattern is controlled by a black-and-white bitmap that we upload to the DMD chip. (b) Zoom-in on the DMD, showing the individual mirrors that can be individually manipulated to point in one of two directions (corresponding to an *on* state and an *off* state). (c) The pattern shown on the computer in (a) produces the image shown here, which we like to call *atomic luv*. You can see single atoms in this image too!

Our DMD patterns are limited by the size of the beam that we can project onto the atoms, but we still have a massive range of potential systems that we can simulate. Most of these possible systems are ones that we probably haven’t even thought about yet, and that’s okay! As we make our experiment better and better, we will uncover more and more systems that it is capable of simulating, and there will be no end to the science that we can produce.

The work that my colleagues and I do isn’t limited to experiments. We like to play with theoretical quantum control. We’ve even shown recently that AlphaZero, a state-of-the-art machine learning algorithm, can be __used to optimize gates for quantum computers__. We like to play games with science, __both inside and outside the quantum world__, including an __awesome project__ where we opened our laboratory up for citizens and experts to play in--remotely!

To me, this versatility is one of the best things about working as a physicist. With a physics degree, you have learned the skills to think and work through difficult problems. Physicists can learn to work and play in any environment, inside or outside of academia. This post has illustrated a bit of the playground that I have chosen, but it is only one option in an infinite space of interesting and useful arenas where a physicist can ply their trade.
