symmetry

Construction begins on SuperCDMS SNOLAB

The SuperCDMS SNOLAB project is expanding the hunt for dark matter to particles with properties not accessible to any other experiment.

Photo of one of the experiment's detector crystals within its protective copper housing

The US Department of Energy has approved funding and start of construction for the SuperCDMS SNOLAB experiment, which will begin operations in the early 2020s to hunt for hypothetical dark matter particles called weakly interacting massive particles, or WIMPs. The experiment will be at least 50 times more sensitive than its predecessor, exploring WIMP properties that can’t be probed by other experiments and giving researchers a powerful new tool to understand one of the biggest mysteries of modern physics.

SLAC National Accelerator Laboratory is managing the construction project for the international SuperCDMS collaboration of 111 members from 26 institutions, which is preparing to do research with the experiment.

"Understanding dark matter is one of the hottest research topics—at SLAC and around the world," says JoAnne Hewett, head of SLAC’s Fundamental Physics Directorate and the lab’s chief research officer. "We're excited to lead the project and work with our partners to build this next-generation dark matter experiment."

With the DOE approvals known as Critical Decisions 2 and 3, the researchers can now build the experiment. The DOE Office of Science will contribute $19 million to the effort, joining forces with the National Science Foundation, which will contribute $12 million, and the Canada Foundation for Innovation, which will contribute $3 million.

“Our experiment will be the world’s most sensitive for relatively light WIMPs—in a mass range from a fraction of the proton mass to about 10 proton masses,” says Richard Partridge, head of the SuperCDMS group at the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of SLAC and Stanford University. “This unparalleled sensitivity will create exciting opportunities to explore new territory in dark matter research.”

An ultracold search 6800 feet underground

Scientists know that visible matter in the universe accounts for only 15 percent of all matter. The rest is a mysterious substance called dark matter. Due to its gravitational pull on regular matter, dark matter is a key driver for the evolution of the universe, affecting the formation of galaxies like our Milky Way. It therefore is fundamental to our very own existence.

But scientists have yet to find out what dark matter is made of. They believe it could be composed of dark matter particles, and WIMPs are top contenders. If these particles exist, they would barely interact with their environment and fly right through regular matter untouched. However, every so often, they could collide with an atom of our visible world, and dark matter researchers are looking for these rare interactions.

In the SuperCDMS SNOLAB experiment, the search will be done using silicon and germanium crystals, in which the collisions would trigger tiny vibrations. However, to measure the atomic jiggles, the crystals need to be cooled to less than minus 459.6 degrees Fahrenheit—a fraction of a degree above absolute zero temperature. These ultracold conditions give the experiment its name: Cryogenic Dark Matter Search, or CDMS. The prefix “Super” indicates an increased sensitivity compared to previous versions of the experiment.
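To put that temperature in perspective, here is a minimal unit-conversion sketch (Python is used only for the arithmetic; the quoted figure is an upper limit, not the exact operating temperature of the detectors):

```python
# Convert the quoted threshold of minus 459.6 degrees Fahrenheit to kelvin.
def fahrenheit_to_kelvin(temp_f):
    return (temp_f + 459.67) * 5.0 / 9.0

print(round(fahrenheit_to_kelvin(-459.6), 3))  # ~0.039 K: a few hundredths of a degree above absolute zero
```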

The collisions would also produce pairs of electrons and electron deficiencies that move through the crystals, triggering additional atomic vibrations that amplify the signal from the dark matter collision. The experiment will be able to measure these “fingerprints” left by dark matter with sophisticated superconducting electronics.

The experiment will be assembled and operated at the Canadian laboratory SNOLAB—6,800 feet underground inside a nickel mine near the city of Sudbury. It’s the deepest underground laboratory in North America. There it will be protected from high-energy particles called cosmic radiation, which can create unwanted background signals.

“SNOLAB is excited to welcome the SuperCDMS SNOLAB collaboration to the underground lab,” says Kerry Loken, SNOLAB project manager. “We look forward to a great partnership and to supporting this world-leading science.”

Over the past months, a detector prototype has been successfully tested at SLAC.

“These tests were an important demonstration that we’re able to build the actual detector with high enough energy resolution, as well as detector electronics with low enough noise to accomplish our research goals,” says KIPAC’s Paul Brink, who oversees the detector fabrication at Stanford.

Together with seven other collaborating institutions, SLAC will provide the experiment’s centerpiece of four detector towers, each containing six crystals in the shape of oversized hockey pucks. The first tower could be sent to SNOLAB by the end of 2018.

“The detector towers are the most technologically challenging part of the experiment, pushing the frontiers of our understanding of low-temperature devices and superconducting readout,” says Bernard Sadoulet, a collaborator from the University of California, Berkeley.

A strong collaboration for extraordinary science

In addition to SLAC, two other national labs are involved in the project. Fermi National Accelerator Laboratory is working on the experiment’s intricate shielding and cryogenics infrastructure, and Pacific Northwest National Laboratory is helping understand background signals in the experiment, a major challenge for the detection of faint WIMP signals.

A number of US and Canadian universities also play key roles in the experiment, working on tasks ranging from detector fabrication and testing to data analysis and simulation. The largest international contribution comes from Canada and includes the research infrastructure at SNOLAB.

“We’re fortunate to have a close-knit network of strong collaboration partners, which is crucial for our success,” says KIPAC’s Blas Cabrera, who directed the project through the CD-2/3 approval milestone. “The same is true for the outstanding support we’re receiving from the funding agencies in the US and Canada.”

Fermilab’s Dan Bauer, spokesperson of the SuperCDMS collaboration, says, “Together we’re now ready to build an experiment that will search for dark matter particles that interact with normal matter in an entirely new region.”

SuperCDMS SNOLAB will be the latest in a series of increasingly sensitive dark matter experiments. The most recent version, located at the Soudan Mine in Minnesota, completed operations in 2015.

“The project has incorporated lessons learned from previous CDMS experiments to significantly improve the experimental infrastructure and detector designs for the experiment,” says SLAC’s Ken Fouts, project manager for SuperCDMS SNOLAB. “The combination of design improvements, the deep location and the infrastructure support provided by SNOLAB will allow the experiment to reach its full potential in the search for low-mass dark matter.”

Editor's note: A version of this article was originally published as a SLAC press release.

Q&A: SLAC’s archivist closes a chapter

Approaching retirement, Jean Deken describes what it’s like to preserve decades of collective scientific memory at a national lab.

Jean Deken portrait

Jean Deken was hired at SLAC National Accelerator Laboratory for a daunting task—to chronicle the history and culture of the decades-old lab and its researchers as the fast pace of its science continued. She became SLAC’s archivist on April 15, 1996.

Deken is retiring after more than 20 years at the lab. In this Q&A, she discusses big changes in physics, the challenges that archivists face, and her most surprising finds.

What was it like when you first arrived at the lab?

JD

At the time, I remember feeling overwhelmed because the archives had been unstaffed for more than a year. When I arrived, I couldn’t fully open the door to my office because there were so many boxes that had been stacked there. Gradually, I unearthed the desk, chair, computer and phone.

BaBar, the big experiment at the time—it was exploring antimatter, the interactions of quarks and leptons, and new physics—was ramping up. The physicists wanted to know what to do with their records, because they knew they were making history.

The Superconducting Super Collider in Texas had recently been canceled, and the contents of its library were distributed to other labs. SLAC received pallets and pallets of microfilmed physics journals. I worked with the library to figure out what to do with all of them.

There was a pent-up need to get information into the archives. Because I was so busy, I sometimes didn’t have time to eat until the evenings.

How did you first get involved with archiving science?

JD

I was looking for a part-time job between undergraduate and graduate school, and I began working at the Missouri Botanical Garden as a cataloguing assistant. There was a stack of stuff in the corner of the cataloguing department that no one wanted to go near. I started digging into it and found manuscripts from the early days of the botanical garden by the founder and his scientific advisor. 

I became fascinated by these documents, and the director of the library told me, “What you’re interested in, that’s called archiving.”

So I acquired some archival procedure manuals and started working on arranging these papers. Soon, I began fielding all the questions the library got about the history of the garden.

How did you make your way to SLAC?

JD

For many years, I worked at the National Archives in St. Louis, Missouri. While I was there, the Archives decided to celebrate the 50th anniversary of World War II in a really big way. In St. Louis we made a traveling exhibit that focused on the war efforts of civilian and military personnel. I took the lead on looking into the civilian war effort, which included the Women Airforce Service Pilots (WASPs) and scientists working in research and development, including those whose work contributed to the Manhattan Project.

Working on the exhibit, I became increasingly aware of the importance of preserving scientific perspectives as we uncovered stories hidden in personnel records. I thought, “Why did I never hear about this before?” It’s partly because the records of these efforts were scattered. That got me interested in learning more about archiving the records of government science.

At the same time, contemporary records were going electronic, in a big way. I remember thinking, “This changes everything.” I decided that the best solution for an archivist would be to be as close as possible to the records as they’re being created, to be embedded in an organization while working on how to preserve this information. Wanting to be an embedded archivist, and wanting to work with the records of government science, I applied for the archivist job at SLAC, and they offered it to me the day of my interview.

What does it mean to process an archival collection, exactly?

JD

For paper collections, you process the documents to try to maintain the original order. That contextual information gives insight into the personality and intellect of the records’ creator. But collections often arrive in disarray, so reconstructing the original order can be a challenge.

The first stage is to create an inventory of every box and folder and to tag each item to see connections with institutions and topics. This helps ensure the contents end up roughly chronological and sorted by topic.

Next I would make sure the documents were stored in acid-free boxes and file folders. At this point, I would also look for contaminants, such as acidic paper, insects, old tape and rusty staples. For these damaged items, I would sometimes simply remove the contaminants, and other times [for more damaged items] photocopy the documents on acid-free paper and store the original in a protective sleeve. 

In one collection, I found an envelope full of cash. I went back to the scientist and said, “I’ve never gotten a tip before.” He had been collecting meal money for a conference and had lost track of the envelope. 

After this physical work is done, I would create an electronic guide to the contents. We have also digitized some of the hardcopy archival materials when requested, and those copies are kept in a digital repository. We have just begun to dip our toes into archiving the lab’s digital materials, starting with photographs. The type of digital storage we are using is really an interim fix.

Speaking of the discipline, what are some of the challenges archivists face?

JD

I’ve been concerned about electronic records for decades now. The problem with digital records is that no one’s figured out how to make them last. This is still true, and it’s something archival science needs to address as a field. There are quite a few questions we’re asking ourselves: What data and records are worth preserving? How long should they be saved? Who will save them? And who gets access?

One of my own future efforts in the field—I’ll keep busy during retirement—has to do with data archiving. With data, there’s such a vast amount of information, and each scientific discipline has different protocols. At international and national labs such as SLAC, many of the scientists come from elsewhere, and there are various agreements and regulations about responsibilities towards data and records. I’m working on proposing policies for these varied situations using SLAC datasets as a test case. 

Was it challenging to learn enough about the science to preserve it well?

JD

During the interview for the job, I asked, “You know I don’t have a physics background, why are you interested in me?” The interviewers told me, “We can teach you the physics that you need to know, and we also consider it part of our job to be able to explain physics.” But they told me they needed me to figure out the government regulations that relate to archives. 

When I started, I bought children’s books about physics, listened, and asked a lot of questions.

What have you learned about scientists themselves?

JD

It surprised me that these absolutely brilliant scientists were actually down-to-earth and approachable. The experimentalists, for example, would test you to make sure you knew your stuff, but then they considered you a member of their team. The researchers are used to multidisciplinary teams and need to know that you can pull your own weight.

I was also accustomed to a corporate government setting, and the environment at the lab was totally different. At first, I could not dress down enough to fit in. It was a funny, unexpected cultural shift.

How has the lab changed, from your perspective?

JD

The place has changed completely. When I started, SLAC was a single-purpose lab—focusing on high-energy physics. Later, it became a multipurpose laboratory and expanded into many other research areas. 

In the 1990s, SLAC was mature in the field of high-energy physics. The leaders of the lab had a sense that we had a history that needed to be preserved.

That generation has moved on, and with the shift in scientific focus, everything is new enough that there’s a different sense of history. Right now, we are running full tilt to get research programs set up, and that’s where a lot of the attention is aimed. I often have to say to the scientists, “Remember, you’re doing something that’s historic.”

What are some of the projects you’re most proud of?

JD

During my interview, several people mentioned SLAC’s involvement with the early web. 

SLAC has the oldest web pages still in existence. Even though Tim Berners-Lee at CERN created the first website, the original code wasn’t preserved. It has to do with a quirk of HTML—when you overwrite the code, it disappears. At SLAC, Louise Addis and Joan Winters had the foresight to understand this from almost the beginning, and they saved the original HTML pages from the first North American website. So, I was able to deposit those pages into the Stanford Web Archives when it was established a few years ago.

I was also a co-author of [SLAC Founding Director] Pief Panofsky’s memoir, which I edited. I like to tell people that his first language wasn’t German; it was physics. I really had to pull the story out of him to get the full flavor of what he wanted to say, but it was a lot of fun. 

Overall, I’m really proud of the SLAC archives. It’s a robust and well-respected program with minimal resources. And it’s been a whole lot of fun. There’s nothing I’d rather have done.

First collisions at Belle II

The Japan-based experiment is one step closer to answering mystifying questions about antimatter.

Dozens of researchers celebrate in the control room after first collisions.

For the first time, the SuperKEKB collider at the KEK laboratory in Tsukuba, Japan, is smashing together particles at the heart of a giant detector called Belle II.

“These first collisions represent a moment that all of us at Belle II have been looking forward to for a long time,” says Elisabetta Prencipe, a scientist at the German research center Forschungszentrum Juelich who works on particle tracking software and statistical analyses for Belle II. “It’s a step forward to opening a new door to the universe and our understanding of it.”

The project looks for potential differences between matter and its mirror-world twin, antimatter, to figure out why our universe is dominated by just one of the pair. The experiment has been seven years in the making.

During construction of the Belle II detector, the SuperKEKB accelerator was recommissioned to increase the number of particle collisions, a measure called its luminosity. Even now, the accelerator is preparing for the second part of this upgrade, which will take place in stages over the next 10 years. The upgrade will more tightly focus the beams and solidify SuperKEKB’s position as the highest-luminosity accelerator in the world.
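Luminosity matters because, for any given process, the expected number of events is simply the cross section of that process multiplied by the integrated luminosity—a generic collider relation rather than a SuperKEKB-specific figure:

```latex
N_{\text{events}} = \sigma \int L \, dt
```

Every factor gained in luminosity therefore directly multiplies the number of rare decays Belle II can hope to record.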

On March 21, SuperKEKB successfully stored an electron beam in the main ring, and on March 31 it stored a beam of positrons, the electron’s antimatter counterparts. With the two colliding beams in place, Belle II saw its first successful collisions today.

Pink and blue swirls radiate out from a black center: the first particle collisions seen by the Belle II detector.
KEK/Belle II

The beauty of quarks

Scientists predict that antimatter and matter should have been created in equal amounts during the hot early stages of the big bang that formed our universe. When matter and antimatter meet, they annihilate in a burst of energy. Yet despite their presumed equal ratio, matter has clearly won the fight, and now makes up everything we see around us. It is this confounding mystery that Belle II seeks to unravel.

Belle II’s beauty lies in its ability to detect unimaginably minute debris from high-energy collisions between electrons and positrons—particles so small they aren’t made up of anything else. In this debris, scientists look for physics beyond what they currently know by comparing particles’ properties to their predictions. The detector is especially sensitive to how other fundamental particles called quarks decay. It can closely study both quark properties and the structure of hadrons: particles made of multiple quarks bound together tightly.

At Belle II’s core, electrons and positrons collide at a high enough energy to create B-mesons, particles made of one matter and one antimatter quark. Scientists are particularly interested in bottom quarks, also known as beauty quarks.

Bottom quarks are produced along with charm quarks at the center of Belle II. Both are heftier cousins of up and down quarks, which make up all ordinary matter, including you and whatever device you’re using to read this article. The collisions also produce tau leptons, which are like massive electrons. All of these particles are seldom found in nature, and observing them can reveal new physics.

Since B-mesons contain bottom quarks, which have diverse kinds of decays, scientists will use Belle II to observe the different meson decays. If a meson containing regular quarks decays differently than one containing their antimatter twins, this could help explain why the universe is full of matter.

Bolstering Belle

Belle II is the successor of earlier experiments used to produce B-mesons, Belle and BaBar. It will record about 40 times as many collisions as the original Belle. It’s also a tremendous collaboration between 25 countries, with 750 national and international physicists.

“Every measurement we’ve made until this point and every hint of new physics is limited by statistics and by the amount of data we have,” says Tom Browder, professor at the University of Hawaii and spokesperson for Belle II. “It’s very clear that to find any new physics we need much more data.”

With more collisions at the center of Belle II, scientists have more opportunities for an uncommon or unheard-of decay event to take place, giving them better insight into quarks’ behavior and how it factors into the universe’s creation.

“With 40 times more collisions per second than the previous Belle experiment, we’ll be able to search for rare decays, possibly observe new particles, and try to answer still unsolved questions about the origin of the universe,” Prencipe says. “Many of us are quite excited because this could mean the start of a new era, where lots of data are expected, new detectors will be tested, and we have great possibilities to perform unique physics.”

The coevolution of physics and math

Breakthroughs in physics sometimes require an assist from the field of mathematics—and vice versa.

Einstein and Riemann in a geometric field of color, pattern, and shape

In 1912, Albert Einstein, then a 33-year-old theoretical physicist at the Eidgenössische Technische Hochschule in Zürich, was in the midst of developing an extension to his theory of special relativity. 

With special relativity, he had codified the relationship between the dimensions of space and time. Now, seven years later, he was trying to incorporate into his theory the effects of gravity. This feat—a revolution in physics that would supplant Isaac Newton’s law of universal gravitation and result in Einstein’s theory of general relativity—would require some new ideas.

Fortunately, Einstein’s friend and collaborator Marcel Grossmann swooped in like a waiter bearing an exotic, appetizing delight (at least in a mathematician’s overactive imagination): Riemannian geometry. 

This mathematical framework, developed in the mid-19th century by German mathematician Bernhard Riemann, was something of a revolution itself. It represented a shift in mathematical thinking from viewing mathematical shapes as subsets of the three-dimensional space they lived in to thinking about their properties intrinsically. For example, a sphere can be described as the set of points in 3-dimensional space that lie exactly 1 unit away from a central point. But it can also be described as a 2-dimensional object that has particular curvature properties at every single point. This alternative definition isn’t terribly important for understanding the sphere itself but ends up being very useful with more complicated manifolds or higher-dimensional spaces.
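In symbols—a standard textbook illustration rather than anything taken from the article—the two descriptions of the unit sphere look like this:

```latex
S^{2} = \{\, (x,y,z) \in \mathbb{R}^{3} : x^{2} + y^{2} + z^{2} = 1 \,\}
\qquad \text{versus} \qquad
ds^{2} = d\theta^{2} + \sin^{2}\!\theta \, d\varphi^{2}
```

The first picks the sphere out of the surrounding three-dimensional space; the second, the line element in spherical coordinates, measures distances entirely on the surface and encodes the same constant curvature at every point.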

By Einstein’s time, the theory was still new enough that it hadn’t completely permeated through mathematics, but it happened to be exactly what Einstein needed. Riemannian geometry gave him the foundation he needed to formulate the precise equations of general relativity. Einstein and Grossmann were able to publish their work later that year.
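The equations Einstein ultimately arrived at in 1915 are written in exactly this Riemannian language, with the curvature of spacetime on the left-hand side and its matter and energy content on the right:

```latex
G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} = \frac{8\pi G}{c^{4}} \, T_{\mu\nu}
```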

“It’s hard to imagine how he would have come up with relativity without help from mathematicians,” says Peter Woit, a theoretical physicist in the Mathematics Department at Columbia University. 

The story of general relativity could go to mathematicians’ heads. Here mathematics seems to be a benevolent patron, blessing the benighted world of physics with just the right equations at the right time. 

But of course the interplay between mathematics and physics is much more complicated than that. They weren’t even separate disciplines for most of recorded history. Ancient Greek, Egyptian and Babylonian mathematics took as an assumption the fact that we live in a world in which distance, time and gravity behave in a certain way. 

“Newton was the first physicist,” says Sylvester James Gates, a physicist at Brown University. “In order to reach the pinnacle, he had to invent a new piece of mathematics; it’s called calculus.”

Calculus made some classical geometry problems easier to solve, but its foremost purpose to Newton was to give him a way to analyze the motion and change he observed in physics. In that story, mathematics is perhaps more of a butler, hired to help keep the affairs in order, than a savior.

Even after physics and mathematics began their separate evolutionary paths, the disciplines were closely linked. “When you go far enough back, you really can’t tell who’s a physicist and who’s a mathematician,” Woit says. (As a mathematician, I was a bit scandalized the first time I saw Emmy Noether’s name attached to physics! I knew her primarily through abstract algebra.)

Throughout the history of the two fields, mathematics and physics have each contributed important ideas to the other. Mathematician Hermann Weyl’s work on mathematical objects called Lie groups provided an important basis for understanding symmetry in quantum mechanics. In his 1930 book The Principles of Quantum Mechanics, theoretical physicist Paul Dirac introduced the Dirac delta function to help describe the concept in particle physics of a pointlike particle—anything so small that it would be modeled by a point in an idealized situation. A picture of the Dirac delta function looks like a horizontal line lying along the bottom of the x axis of a graph, at x=0, except at the place where it intersects with the y axis, where it explodes into a line pointing up to infinity. Dirac declared that the integral of this function, the measure of the area underneath it, was equal to 1. Strictly speaking, no such function exists, but Dirac’s use of the Dirac delta eventually spurred mathematician Laurent Schwartz to develop the theory of distributions in a mathematically rigorous way. Today distributions are extraordinarily useful in the mathematical fields of ordinary and partial differential equations.
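In modern notation, the properties Dirac assigned to his “function” can be summarized as follows—a standard statement, including the sifting property that makes the delta so useful in calculations:

```latex
\delta(x) = 0 \ \text{for } x \neq 0,
\qquad \int_{-\infty}^{\infty} \delta(x)\, dx = 1,
\qquad \int_{-\infty}^{\infty} f(x)\, \delta(x-a)\, dx = f(a)
```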

Though modern researchers focus their work more and more tightly, the line between physics and mathematics is still a blurry one. A physicist has won the Fields Medal, one of the most prestigious accolades in mathematics. And a mathematician, Maxim Kontsevich, has won the new Breakthrough Prizes in both mathematics and physics. One can attend seminar talks about quantum field theory, black holes, and string theory in both math and physics departments. Since 2011, the annual String Math conference has brought mathematicians and physicists together to work on the intersection of their fields in string theory and quantum field theory.

String theory is perhaps the best recent example of the interplay between mathematics and physics, for reasons that eventually bring us back to Einstein and the question of gravity.  

String theory is a theoretical framework in which those pointlike particles Dirac was describing become one-dimensional objects called strings. Part of the theoretical model for those strings  corresponds to gravitons, theoretical particles that carry the force of gravity.

Most humans will tell you that we perceive the universe as having three spatial dimensions and one dimension of time. But string theory naturally lives in 10 dimensions. In 1984, as the number of physicists working on string theory ballooned, a group of researchers including Edward Witten, the physicist who was later awarded a Fields Medal, discovered that the extra six dimensions of string theory needed to be part of a space known as a Calabi-Yau manifold. 

When mathematicians joined the fray to try to figure out what structures these manifolds could have, physicists were hoping for just a few candidates. Instead, they found boatloads of Calabi-Yaus. Mathematicians still have not finished classifying them. They haven’t even determined whether their classification has a finite number of pieces. 

As mathematicians and physicists studied these spaces, they discovered an interesting duality between Calabi-Yau manifolds. Two manifolds that seem completely different can end up describing the same physics. This idea, called mirror symmetry, has blossomed in mathematics, opening entire new research avenues. The framework of string theory has almost become a playground for mathematicians.

Mina Aganagic, a theoretical physicist at the University of California, Berkeley, believes string theory and related topics will continue to provide these connections between physics and math. 

“In some sense, we’ve explored a very small part of string theory and a very small number of its predictions,” she says. Mathematicians and their focus on detailed rigorous proofs bring one point of view to the field, and physicists, with their tendency to prioritize intuitive understanding, bring another. “That’s what makes the relationship so satisfying.”

The relationship between physics and mathematics goes back to the beginning of both subjects; as the fields have advanced, this relationship has gotten more and more tangled, a complicated tapestry. There is seemingly no end to the places where a well-placed set of tools for making calculations could help physicists, or where a probing question from physics could inspire mathematicians to create entirely new mathematical objects or theories.

The world’s largest astronomical movie

The Large Synoptic Survey Telescope will track billions of objects for 10 years, creating unprecedented opportunities for studies of cosmic mysteries.

LSST Camera

When the Large Synoptic Survey Telescope begins to survey the night sky in the early 2020s, it’ll collect a treasure trove of data. The information will benefit a wide range of groundbreaking astronomical and astrophysical research, addressing topics such as dark matter, dark energy, the formation of galaxies and detailed studies of objects in our very own cosmic neighborhood, the Milky Way.

LSST’s centerpiece will be its 3.2-gigapixel camera, which is being assembled at the US Department of Energy’s SLAC National Accelerator Laboratory. Every few days, the largest digital camera ever built for astronomy will compile a complete image of the Southern sky. Moreover, it’ll do so over and over again for a period of 10 years. It’ll track the motions and changes of tens of billions of stars, galaxies and other objects in what will be the world’s largest stop-motion movie of the universe.

Fulfilling this extraordinary task requires extraordinary technology. The camera will be the size of a small SUV, weigh in at a whopping 3 tons, and use state-of-the-art optics, imaging technology and data management tools. But how exactly will it work?

LSST mirror animation infographic

Artwork by Sandbox Studio, Chicago with Ana Kova

Collecting ancient light

It all starts with choosing the right location for the telescope. Astronomers want the sharpest images of the dimmest objects for their analyses, and they also want to maximize their observation time. They need the nights to be dark and the air to be dry and stable. 

It turns out that the Atacama Desert, a plateau in the foothills of the Andes Mountains, scores very high for these criteria. That’s where LSST will be located—at nearly 8700 feet altitude on the Cerro Pachón ridge in Chile, 60 miles from the coastal town of La Serena.

The next challenge is that most objects LSST researchers want to study are so far away that their light has been traveling through space for millions to billions of years. It arrives on Earth merely as a faint glow, and astronomers need to collect as much of that glow as possible. For this purpose, LSST will have a large primary mirror with a diameter close to 28 feet. 

The mirror will be part of a sophisticated three-mirror system that will reflect and focus the cosmic light into the camera.  

The unique optical design is crucial for the telescope’s extraordinary field of view—a measure of the area of sky captured with every snapshot. At 9.6 square degrees, corresponding to 40 times the area of the full moon, the large field of view will allow astronomers to put together a complete map of the Southern night sky every few days.

After bouncing off the mirrors, the ancient cosmic light will enter the camera through a set of three large lenses. The largest one will have a diameter of more than 5 feet. 

Together with the mirrors, the lenses’ job is to focus the light as sharply as possible onto the focal plane—a grid of light-sensitive sensors at the back of the camera where the light from the sky will be detected.

A filter changer will insert filters in front of the third lens, allowing astronomers to take images with different kinds of cosmic light that range from the ultraviolet to the near-infrared. This flexibility enhances the range of possible observations with LSST. For example, with an infrared filter researchers can look right through dust and get a better view of objects obscured by it.  By comparing how bright an object is when seen through different filters, astronomers also learn how its emitted light varies with the wavelength, which reveals details about how the light is produced.

LSST Camera Focal Plane

Artwork by Sandbox Studio, Chicago with Ana Kova

An extraordinary imaging device

The heart of LSST’s camera is its 25-inch-wide focal plane. That’s where the light of stars and galaxies will be turned into electrical signals, which will then be used to reconstruct images of the sky. The focal plane will hold 189 imaging sensors, called charge-coupled devices, that perform this transformation. 

Each CCD is 4096 pixels wide and long, and together they’ll add up to the camera’s 3.2 gigapixels. A “good” star will be the size of only a handful of pixels, whereas distant galaxies might appear as somewhat larger fuzzballs.
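A quick back-of-the-envelope check of those numbers (Python is used here purely for the arithmetic):

```python
# 189 science CCDs, each 4096 x 4096 pixels, as quoted above.
n_ccds = 189
pixels_per_side = 4096

total_pixels = n_ccds * pixels_per_side ** 2
print(f"{total_pixels:,} pixels")              # 3,170,893,824
print(f"{total_pixels / 1e9:.2f} gigapixels")  # ~3.17, i.e. the advertised 3.2 gigapixels
```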

The focal plane will consist of 21 smaller square arrays, called rafts, with nine CCDs each. This modular structure will make it easier and less costly to replace imaging sensors if needed in the future. 

To the delight of astronomers interested in extremely dim objects, the camera will have a large aperture (f/1.2, for the photographers among us), meaning that it’ll let a lot of light onto the imaging sensors. However, the large aperture will also make the depth of field very shallow, which means that objects will become blurry very quickly if they are not precisely projected onto the focal plane. That’s why the focal plane will need to be extremely flat, demanding that individual CCDs don’t stick out or recess by more than 0.0004 inches.

To eliminate unwanted background signals, known as dark currents, the sensors will also need to be cooled to minus 150 degrees Fahrenheit. The temperature will need to be kept stable to half a degree. Because water vapor inside the camera housing would form ice on the sensors at this chilly temperature, the focal plane must also be kept in a vacuum.  

In addition to the 189 “science” sensors that will capture images of the sky, the focal plane will also have three specialty sensors in each of its four corners. Two so-called guiders will frequently monitor the position of a reference star and help LSST stay in sync with the Earth’s rotation. The third sensor, called a wavefront sensor, will be split into two halves that will be positioned six-hundredths of an inch above and below the focal plane. It’ll see objects as blurry “donuts” and provide information that will be used to adjust the telescope’s focus.

Cinematography of astronomical dimension

Once the camera has taken enough data from a patch in the sky, about every 36 seconds, the telescope will be repositioned to look at the next spot. A computer algorithm will determine the patches in the sky that will be surveyed by LSST on any given night.

While the telescope is moving, a shutter between the filter and the camera’s third lens will close to prevent more light from falling onto the imaging sensors. At the same time, the CCDs will be read out and their information digitized.

The data will be sent into the processing and analysis pipeline that will handle LSST’s enormous flood of information (about 20 terabytes of data every single night). There, it will be turned into usable images. The system will also flag potentially interesting events and send out alerts to astronomers within a minute.

This way—patch by patch—a complete image of the entire Southern sky will be stitched together every few days. Then the imaging process will start over and repeat for the 10-year duration of the survey, ultimately creating the largest time-lapse movie of the universe ever made and providing researchers with unprecedented research opportunities.   

For more information on LSST, visit LSST’s website or SLAC’s LSST camera website.


LSST camera poster
Artwork by Sandbox Studio, Chicago with Ana Kova
Right on target

These hardy physics components live at the center of particle production.

A row of slim silver objects sits in the center of a tube.

For some, a target is part of a game of darts. For others, it’s a retail chain. In particle physics, it’s the site of an intense, complex environment that plays a crucial role in generating the universe’s smallest components for scientists to study.

The target is an unsung player in particle physics experiments, often taking a back seat to scene-stealing light-speed particle beams and giant particle detectors. Yet many experiments wouldn’t exist without a target. And, make no mistake, a target that holds its own is a valuable player.

Scientists and engineers at Fermilab are currently investigating targets for the study of neutrinos—mysterious particles that could hold the key to the universe’s evolution.

Intense interactions

The typical particle physics experiment is set up in one of two ways. In the first, two energetic particle beams collide into each other, generating a shower of other particles for scientists to study.

In the second, the particle beam strikes a stationary, solid material—the target. In this fixed-target setup, the powerful meeting produces the particle shower.

As the crash pad for intense beams, a target requires a hardy constitution. It has to withstand repeated onslaughts of high-power beams and hold up under hot temperatures.

You might think that, as stalwart players in the play of particle production, targets would look like a fortress wall (or maybe you imagined a dartboard). But targets take different shapes—long and thin, bulky and wide. They’re also made of different materials, depending on the kind of particle one wants to make. They can be made of metal, water or even specially designed nanofibers.

In a fixed-target experiment, the beam—say, a proton beam—races toward the target, striking it. Protons in the beam interact with the target material’s nuclei, and the resulting particles shoot away from the target in all directions. Magnets then funnel and corral some of these newly born particles to a detector, where scientists measure their fundamental properties.

The particle birthplace

The particles that emerge from the beam-target interaction depend in large part on the target material. Consider Fermilab neutrino experiments.

In these experiments, after the protons strike the target, some of the particles in the subsequent particle shower decay—or transform—into neutrinos.

The target has to be made of just the right stuff.

“Targets are crucial for particle physics research,” says Fermilab scientist Bob Zwaska. “They allow us to create all of these new particles, such as neutrinos, that we want to study.”

Graphite is a Goldilocks material for neutrino targets. If kept at the right temperature while in the proton beam, the graphite generates particles of just the right energy to be able to decay into neutrinos.

For neutron targets, such as that at the Spallation Neutron Source at Oak Ridge National Laboratory, heavier metals such as mercury are used instead.

Maximum interaction is the goal of a target’s design. The target for Fermilab’s NOvA neutrino experiment, for example, is a straight row—about the length of your leg—of graphite fins that resemble tall dominoes. The proton beam barrels down its axis, and every encounter with a fin produces an interaction. The thin shape of the target ensures that few of the particles shooting off after collision are reabsorbed back into the target.

Robust targets

“As long as the scientists have the particles they need to study, they’re happy. But down the line, sometimes the targets become damaged,” says Fermilab engineer Patrick Hurh. In such cases, engineers have to turn down—or occasionally turn off—the beam power. “If the beam isn’t at full capacity or is turned off, we’re not producing as many particles as we can for science.”

The more protons that are packed into the beam, the more interactions they have with the target, and the more particles that are produced for research. So targets need to be in tip-top shape as much as possible. This usually means replacing targets as they wear down, but engineers are always exploring ways of improving target resistance, whether it’s through design or material.

Consider what targets are up against. It isn’t only high-energy collisions—the kinds of interactions that produce particles for study—that targets endure.

Lower-energy interactions can have long-term, negative impacts on a target, building up heat energy inside it. As the target material rises in temperature, it becomes more vulnerable to cracking. Expanding warm areas hammer against cool areas, creating waves of energy that destabilize its structure.

Some of the collisions in a high-energy beam can also create lightweight elements such as hydrogen or helium. These gases build up over time, creating bubbles and making the target less resistant to damage.

A proton from the beam can even knock off an entire atom, disrupting the target’s crystal structure and causing it to lose durability.

Clearly, being a target is no picnic, so scientists and engineers are always improving targets to better roll with a punch.

For example, graphite, used in Fermilab’s neutrino experiments, is resistant to thermal strain. And, since it is porous, built-up gases that might normally wedge themselves between atoms and disrupt their arrangement may instead migrate to open areas in the atomic structure. The graphite is able to remain stable and withstand the waves of energy from the proton beam.

Engineers also find ways to maintain a constant target temperature. They design it so that it’s easy to keep cool, integrating additional cooling instruments into the target design. For example, external water tubes help cool the target for Fermilab’s NOvA neutrino experiment.

Targets for intense neutrino beams

At Fermilab, scientists and engineers are also testing new designs for what will be the lab’s most powerful proton beam—the beam for the laboratory’s flagship Long-Baseline Neutrino Facility and Deep Underground Neutrino Experiment, known as LBNF/DUNE.

LBNF/DUNE is scheduled to begin operation in the 2020s. The experiment requires an intense beam of high-energy neutrinos—the most intense in the world. Only the most powerful proton beam can give rise to the quantities of neutrinos LBNF/DUNE needs.

Scientists are currently in the early testing stages for LBNF/DUNE targets, investigating materials that can withstand the high-power protons. Currently in the running are beryllium and graphite, which they’re stretching to their limits. Once they conclusively determine which material comes out on top, they’ll move to the design prototyping phase. So far, most of their tests are pointing to graphite as the best choice.

Targets will continue to evolve and adapt. LBNF/DUNE provides just one example of next-generation targets.

“Our research isn’t just guiding the design for LBNF/DUNE,” Hurh says. “It’s for the science itself. There will always be different and more powerful particle beams, and targets will evolve to meet the challenge.”

Editor's note: A version of this article was originally published by Fermilab.

How to make a Higgs boson

It doesn’t seem like collisions of particles with no mass should be able to produce the “mass-giving” boson, the Higgs. But every other second at the LHC, they do.

scientists are checking out who they know can interact with the Higgs field, while searching for any potential mystery particles

Einstein’s most famous theory, often written as E=mc², tells us that energy and matter are two sides of the same coin.

The Large Hadron Collider uses this principle to convert the energy contained within ordinary particles into new particles that are difficult to find in nature—particles like the Higgs boson, which is so massive that it almost immediately decays into pairs of lighter, more stable particles.
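For a sense of scale, here is a rough comparison using round numbers that are not stated in the article—a Higgs mass of about 125 GeV, a proton mass of about 0.94 GeV, and the LHC’s 13 TeV collision energy:

```python
# Back-of-the-envelope comparison only; all figures are approximate, standard values.
higgs_mass_gev = 125.0
proton_mass_gev = 0.938
collision_energy_gev = 13_000.0  # 13 TeV

print(f"One Higgs 'weighs' about {higgs_mass_gev / proton_mass_gev:.0f} protons")
print(f"One collision carries about {collision_energy_gev / higgs_mass_gev:.0f} Higgs masses' worth of energy")
```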

But not just any collision can create a Higgs boson.

“The Higgs is not just created from a ‘poof’ of energy,” says Laura Dodd, a researcher at the University of Wisconsin, Madison. “Particles follow a strict set of laws that dictate how they can form, decay and interact.”

One of these laws states that Higgs bosons can be produced only by particles that interact with the Higgs field—in other words, particles with mass.

The Higgs field is like an invisible spider’s web that permeates all of space. As particles travel through it, some get tangled in the sticky tendrils, a process that makes them gain mass and slow down. But for other particles—such as photons and gluons—this web is completely transparent, and they glide through unhindered.

Given enough energy, the particles wrapped in the Higgs field can transfer their energy into it and kick out a Higgs boson. Because massless particles do not interact with the Higgs field, it would make sense to say that they can’t create a Higgs. But scientists at the LHC would beg to differ.

The LHC accelerates protons around its 17-mile circumference to just under the speed of light and then brings them into head-on collisions at four intersections along its ring. Protons are not fundamental particles—that is, particles that cannot be broken down into any smaller constituent pieces. Rather, they are made up of gluons and quarks.

As two pepped-up protons pass through each other, it’s usually pairs of massless gluons that infuse invisible fields with their combined energy and excite other particles into existence—and that includes Higgs bosons. 

How? Gluons have found a way to cheat.

“It would be impossible to generate Higgs bosons with gluons if the collisions in the LHC were a simple, one-step process,” says Richard Ruiz, a theorist at Durham University’s Institute for Particle Physics Phenomenology.

Luckily, they aren’t.

Gluons can momentarily “launder” their energy to a virtual particle, which converts the gluon’s energy into mass. If two gluons produce a pair of virtual top quarks, the tops can recombine and  annihilate into a Higgs boson. 

To be clear, virtual particles are not stable particles at all, but rather irregular disturbances in quantum mechanical fields that exist in a half-baked state for an incredibly short period of time. If a real particle were a thriving business, then a virtual particle would be a shell company.

Theorists predict that about 90 percent of Higgs bosons are created through gluon fusion. The probability of two gluons colliding, creating a top quark-antitop pair and propitiously producing a Higgs is roughly one in 2 billion. However, because the LHC generates roughly a billion proton collisions every second, the odds are in scientists’ favor, and the production rate for Higgs bosons is roughly one every two seconds.
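A quick check of that arithmetic, using the round numbers quoted above:

```python
# One Higgs per roughly 2 billion collisions, at roughly a billion proton collisions per second.
prob_higgs_per_collision = 1 / 2e9
collisions_per_second = 1e9

higgs_per_second = prob_higgs_per_collision * collisions_per_second
print(higgs_per_second)      # 0.5
print(1 / higgs_per_second)  # 2.0 -> about one Higgs boson every two seconds
```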

Shortly after the Higgs discovery, scientists were mostly focused on what happens to Higgs bosons after they decay, according to Dodd.

“But now that we have more data and a better understanding of the Higgs, we’re starting to look closer at the collision byproducts to better understand how frequently the Higgs is produced through the different mechanisms,” she says.

The Standard Model of particle physics predicts that almost all Higgs bosons are produced through one of four possible processes. What scientists would love to see are Higgs bosons being created in a way that the Standard Model of particle physics does not predict, such as in the decay of a new particle. Breaking the known rules would show that there is more going on than physicists previously understood.

“We know that particles follow strict rules about who can talk to whom because we’ve seen this time and time again during our experiments,” Ruiz says. “So now the question is, what if there is a whole sector of undiscovered particles that cannot communicate with our standard particles but can interact with the Higgs boson?” 

Scientists are keeping an eye out for anything unexpected, such as an excess of certain particles radiating from a collision or decay paths that occur more or less frequently than scientists predicted. These indicators could point to undiscovered heavy particles morphing into Higgs bosons. 

At the same time, to find hints of unexpected ingredients in the chain reactions that sometimes make Higgs bosons, scientists must know very precisely what they should expect.

“We have fantastic mathematical models that predict all this, and we know what both sides of the equations are,” Ruiz says. “Now we need to experimentally test these predictions to see if everything adds up, and if not, figure out what those extra missing variables might be.”

ADMX brings new excitement to dark matter search

Scientists on the Axion Dark Matter Experiment have demonstrated technology that could lead to the discovery of theoretical light dark matter particles called axions. 

Equipment and panels for the ADMX experiment fill the room. An ADMX experiment banner hangs above.

Forty years ago, scientists theorized a new kind of low-mass particle that could solve one of the enduring mysteries of nature: what dark matter is made of. Now a new chapter in the search for that particle, the axion, has begun.

This week, the Axion Dark Matter Experiment (ADMX) unveiled a new result (published in Physical Review Letters) that places it in a category of one: It is the world’s first and only experiment to have achieved the necessary sensitivity to “hear” the telltale signs of these theoretical particles. This technological breakthrough is the result of more than 30 years of research and development, with the latest piece of the puzzle coming in the form of a quantum-enabled device that allows ADMX to listen for axions more closely than any experiment ever built.  

ADMX is managed by the US Department of Energy’s Fermi National Accelerator Laboratory and located at the University of Washington. This new result, the first from the second-generation run of ADMX, sets limits on a small range of frequencies where axions may be hiding, and sets the stage for a wider search in the coming years.

“This result signals the start of the true hunt for axions,” says Fermilab’s Andrew Sonnenschein, the operations manager for ADMX. “If dark matter axions exist within the frequency band we will be probing for the next few years, then it’s only a matter of time before we find them.”

One theory suggests that galaxies are held together by a vast number of axions, low-mass particles that are almost invisible to detection as they stream through the cosmos. Efforts in the 1980s to find these particles, named by theorist Frank Wilczek, currently of the Massachusetts Institute of Technology, were unsuccessful, showing that their detection would be extremely challenging.

ADMX is an axion haloscope—essentially a large, low-noise radio receiver that scientists tune to different frequencies, listening for the axion signal. Axions almost never interact with matter, but with the aid of a strong magnetic field and a cold, dark, properly tuned, reflective box, ADMX can “hear” photons created when axions convert into electromagnetic waves inside the detector.

“If you think of an AM radio, it’s exactly like that,” says Gray Rybka, co-spokesperson for ADMX and assistant professor at the University of Washington. “We’ve built a radio that looks for a radio station, but we don't know its frequency. We turn the knob slowly while listening. Ideally we will hear a tone when the frequency is right.”
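The “station frequency” in that analogy is set by the axion’s mass through E = hf. Here is a small conversion sketch; the sample masses are purely illustrative, since the article does not specify the band ADMX is currently scanning:

```python
# Convert an axion mass (in electronvolts) to the photon frequency it would produce.
PLANCK_CONSTANT_EV_S = 4.135667696e-15  # Planck constant in eV*s

def axion_mass_to_frequency_hz(mass_ev):
    return mass_ev / PLANCK_CONSTANT_EV_S

for mass_microev in (1, 3, 10):
    freq_ghz = axion_mass_to_frequency_hz(mass_microev * 1e-6) / 1e9
    print(f"{mass_microev} micro-eV  ->  {freq_ghz:.2f} GHz")
# ~0.24, 0.73 and 2.42 GHz: ordinary radio/microwave frequencies, hence the receiver analogy.
```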

Video: Listening for Dark Matter with ADMX

This detection method, which might make the "invisible axion" visible, was invented by Pierre Sikivie of the University of Florida in 1983. Pioneering experiments and analyses by a collaboration of Fermilab, the University of Rochester and Brookhaven National Laboratory, as well as scientists at the University of Florida, demonstrated the practicality of the experiment. This led to the construction in the late 1990s of a large-scale detector at Lawrence Livermore National Laboratory that is the basis of the current ADMX.

It was only recently, however, that the ADMX team was able to deploy superconducting quantum amplifiers to their full potential, enabling the experiment to reach unprecedented sensitivity. Previous runs of ADMX were stymied by background noise generated by thermal radiation and the machine’s own electronics.

Fixing thermal radiation noise is easy: A refrigeration system cools the detector down to 0.1 Kelvin (roughly -460 degrees Fahrenheit). But eliminating the noise from electronics proved more difficult. The first runs of ADMX used standard transistor amplifiers, but then ADMX scientists connected with John Clarke, a professor at the University of California Berkeley, who developed a quantum-limited amplifier for the experiment. This much quieter technology, combined with the refrigeration unit, reduces the noise by a significant enough level that the signal, should ADMX discover one, will come through loud and clear.

“The initial versions of this experiment, with transistor-based amplifiers, would have taken hundreds of years to scan the most likely range of axion masses. With the new superconducting detectors, we can search the same range on timescales of only a few years,” says Gianpaolo Carosi, co-spokesperson for ADMX and scientist at Lawrence Livermore National Laboratory.
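The scaling behind that speed-up is the standard radiometer relation—a general rule of thumb rather than an ADMX-specific formula. The signal-to-noise ratio grows only as the square root of the integration time but falls linearly with the system noise temperature, so the time needed to reach a fixed sensitivity grows as the square of the noise temperature:

```latex
\frac{S}{N} \propto \frac{P_{\text{signal}}}{k_{B} T_{\text{sys}}} \sqrt{\frac{t}{\Delta\nu}}
\qquad \Longrightarrow \qquad
t \propto T_{\text{sys}}^{2}
```

By that scaling, the hundred-fold speed-up described above corresponds to roughly a ten-fold reduction in noise temperature.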

“This result plants a flag,” says Leslie Rosenberg, professor at the University of Washington and chief scientist for ADMX. “It tells the world that we have the sensitivity, and have a very good shot at finding the axion. No new technology is needed. We don’t need a miracle anymore, we just need the time.”

ADMX will now test millions of frequencies at this level of sensitivity. If axions were found, it would be a major discovery that could explain not only dark matter, but other lingering mysteries of the universe. If ADMX does not find axions, that may force theorists to devise new solutions to those riddles.

“A discovery could come at any time over the next few years,” says scientist Aaron Chou of Fermilab. “It’s been a long road getting to this point, but we’re about to begin the most exciting time in this ongoing search for axions.”

Editor’s note: This article is based on a Fermilab press release.
