Here’s the real story on jellyfish taking over the world

Jellyfish have gotten a bad rap. In recent years, concerns about rising jellyfish populations in some parts of the world have mushroomed into headlines like “Meet your new jellyfish overlords.” These floating menaces are taking over the world’s oceans thanks to climate change and ocean acidification, the thinking goes, and soon waters will be filled with little more than the animals’ pulsating goo.

It’s a vivid and frightening image, but researchers aren’t at all certain that it’s true. In her new book, Spineless, former marine scientist Juli Berwald sets out to find the truth about the jellyfish takeover. In the process, she shares much more about these fascinating creatures than merely their numbers.
Among the amazing jellyfish facts and tales throughout the book: Jellyfish have astoundingly complex vision for animals without a brain. They are also the most efficient swimmers ever studied, among the most ancient animals surviving on Earth today and some of the most toxic sea creatures (SN: 9/6/14, p. 16).

Rather than merely reciting these facts, Berwald takes readers on a personal journey, tracing how life pulled her away from science after she earned her Ph.D. — and how jellies brought her back. Through the tale of her experiments with a home jellyfish aquarium, she explains jelly biology, from the amazing shape-shifting properties of the mesoglea that forms a jellyfish’s bulk to why so many species are transparent. As she juggles family life with interviews with the world’s leading jellyfish researchers, Berwald also documents her travels to places around the globe where jellyfish and humans intersect, such as Israel’s coral reefs and Japan’s fisheries.
The answer to the question of whether jellyfish populations are on the rise ultimately lies at this intersection, Berwald finds. Marine scientists are split on whether populations are increasing globally. It depends on which data you include, and it’s possible that jellyfish numbers fluctuate naturally on a 20-year cycle. What is clear is that in coastal areas around the world, people have unwittingly created spawning grounds for huge numbers of jellyfish simply by building docks and other structures that quickly multiplying jellyfish polyps can attach to.

In the end, Berwald says, jellyfish became a “vehicle for me to explore the threats to the ocean’s future. They’re a way to start a conversation about things that can seem boring and abstract — acidification, warming, overfishing and coastal development — but that are changing our oceans in fundamental ways.” And that’s more interesting than an ocean full of goo.

New camera on Palomar telescope will seek out supernovas, asteroids and more

A new eye on the variable sky just opened. The Zwicky Transient Facility, a robotic camera designed to rapidly scan the sky nightly for objects that move, flash or explode, took its first image on November 1.

The camera, mounted on a telescope at Caltech’s Palomar Observatory near San Diego, succeeds the Palomar Transient Factory. Between 2009 and 2017, the Palomar Transient Factory caught two separate supernovas hours after they exploded, one in 2011 (SN: 9/24/11, p. 5) and one earlier this year (SN: 2/13/17). It also found the longest-lasting supernova ever, from a star that seems to explode over and over (SN: 11/8/17).

The Zwicky survey will spot similar short-lived events and other cosmic blips, like stars being devoured by black holes (SN: 4/1/17, p. 5), as well as asteroids and comets. But Zwicky will work much faster than its predecessor: It will operate 10 times as fast, cover seven times as much of the sky in a single image and take 2.5 times as many exposures each night. Computers will search the images for any astronomical object that changes from one scan to the next.
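
At heart, that search is image differencing: subtract a reference image of the same patch of sky from each new exposure and flag whatever stands out. Below is a minimal Python sketch of the idea on toy data; it is only an illustration of the principle, not the facility's actual pipeline, which must also align the images and match their blurring before subtracting.

```python
import numpy as np

def find_transients(new_image, reference_image, threshold=5.0):
    """Toy difference imaging: flag pixels that brightened significantly.

    Assumes the two images are already aligned and on the same flux scale,
    something a real survey pipeline has to arrange first.
    """
    diff = new_image - reference_image
    noise = np.std(diff)                          # crude noise estimate
    return np.argwhere(diff > threshold * noise)  # candidate pixel positions

# Illustrative use with synthetic data: a flat sky plus one new point source.
rng = np.random.default_rng(0)
reference = rng.normal(100.0, 1.0, size=(64, 64))
new = reference + rng.normal(0.0, 1.0, size=(64, 64))
new[40, 17] += 50.0                               # a "new" transient appears
print(find_transients(new, reference))            # -> [[40 17]]
```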

The camera is named for Caltech astronomer Fritz Zwicky, who first used the term “supernova” in 1931 to describe the explosions that mark a star’s death (SN: 10/24/13).

Simulating the universe using Einstein’s theory of gravity may solve cosmic puzzles

If the universe were a soup, it would be more of a chunky minestrone than a silky-smooth tomato bisque.

Sprinkled with matter that clumps together due to the insatiable pull of gravity, the universe is a network of dense galaxy clusters and filaments — the hearty beans and vegetables of the cosmic stew. Meanwhile, relatively desolate pockets of the cosmos, known as voids, make up a thin, watery broth in between.

Until recently, simulations of the cosmos’s history haven’t given the lumps their due. The physics of those lumps is described by general relativity, Albert Einstein’s theory of gravity. But that theory’s equations are devilishly complicated to solve. To simulate how the universe’s clumps grow and change, scientists have fallen back on approximations, such as the simpler but less accurate theory of gravity devised by Isaac Newton.
Relying on such approximations, some physicists suggest, could be mucking with measurements, resulting in a not-quite-right inventory of the cosmos’s contents. A rogue band of physicists suggests that a proper accounting of the universe’s clumps could explain one of the deepest mysteries in physics: Why is the universe expanding at an increasingly rapid rate?

The accepted explanation for that accelerating expansion is an invisible pressure called dark energy. In the standard theory of the universe, dark energy makes up about 70 percent of the universe’s “stuff” — its matter and energy. Yet scientists still aren’t sure what dark energy is, and finding its source is one of the most vexing problems of cosmology.

Perhaps, the dark energy doubters suggest, the speeding up of the expansion has nothing to do with dark energy. Instead, the universe’s clumpiness may be mimicking the presence of such an ethereal phenomenon.
Most physicists, however, feel that proper accounting for the clumps won’t have such a drastic impact. Robert Wald of the University of Chicago, an expert in general relativity, says that lumpiness is “never going to contribute anything that looks like dark energy.” So far, observations of the universe have been remarkably consistent with predictions based on simulations that rely on approximations.
As observations become more detailed, though, even slight inaccuracies in simulations could become troublesome. Already, astronomers are charting wide swaths of the sky in great detail, and planning more extensive surveys. To translate telescope images of starry skies into estimates of properties such as the amount of matter in the universe, scientists need accurate simulations of the cosmos’s history. If the detailed physics of clumps is important, then simulations could go slightly astray, sending estimates off-kilter. Some scientists already suggest that the lumpiness is behind a puzzling mismatch of two estimates of how fast the universe is expanding.

Researchers are attempting to clear up the debate by conquering the complexities of general relativity and simulating the cosmos in its full, lumpy glory. “That is really the new frontier,” says cosmologist Sabino Matarrese of the University of Padua in Italy, “something that until a few years ago was considered to be science fiction.” In the past, he says, scientists didn’t have the tools to complete such simulations. Now researchers are sorting out the implications of the first published results of the new simulations. So far, dark energy hasn’t been explained away, but some simulations suggest that certain especially sensitive measurements of how light is bent by matter in the universe might be off by as much as 10 percent.

Soon, simulations may finally answer the question: How much do lumps matter? The idea that cosmologists might have been missing a simple answer to a central problem of cosmology incessantly nags some skeptics. For them, results of the improved simulations can’t come soon enough. “It haunts me. I can’t let it go,” says cosmologist Rocky Kolb of the University of Chicago.

Smooth universe
By observing light from different eras in the history of the cosmos, cosmologists can compute the properties of the universe, such as its age and expansion rate. But to do this, researchers need a model, or framework, that describes the universe’s contents and how those ingredients evolve over time. Using this framework, cosmologists can perform computer simulations of the universe to make predictions that can be compared with actual observations.
After Einstein introduced his theory in 1915, physicists set about figuring out how to use it to explain the universe. It wasn’t easy, thanks to general relativity’s unwieldy, difficult-to-solve suite of equations. Meanwhile, observations made in the 1920s indicated that the universe wasn’t static as previously expected; it was expanding. Eventually, researchers converged on a solution to Einstein’s equations known as the Friedmann-Lemaître-Robertson-Walker metric. Named after its discoverers, the FLRW metric describes a simplified universe that is homogeneous and isotropic, meaning that it appears identical at every point in the universe and in every direction. In this idealized cosmos, matter would be evenly distributed, no clumps. Such a smooth universe would expand or contract over time.
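
For reference, the FLRW solution is usually written as the line element below: a single function of time, the scale factor a(t), carries all of the expansion, and the constant k records whether space is flat, positively curved or negatively curved.

```latex
\[
ds^{2} = -c^{2}\,dt^{2}
  + a(t)^{2}\left[\frac{dr^{2}}{1 - k r^{2}}
  + r^{2}\left(d\theta^{2} + \sin^{2}\theta\, d\varphi^{2}\right)\right]
\]
```
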
A smooth-universe approximation is sensible, because when we look at the big picture, averaging over the structures of galaxy clusters and voids, the universe is remarkably uniform. It’s similar to the way that a single spoonful of minestrone soup might be mostly broth or mostly beans, but from bowl to bowl, the overall bean-to-broth ratios match.

In 1998, cosmologists revealed that not only was the universe expanding, but its expansion was also accelerating (SN: 2/2/08, p. 74). Observations of distant exploding stars, or supernovas, indicated that the space between us and them was expanding at an increasing clip. But gravity should slow the expansion of a universe evenly filled with matter. To account for the observed acceleration, scientists needed another ingredient, one that would speed up the expansion. So they added dark energy to their smooth-universe framework.
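
In that framework, dark energy's role is easiest to see in the acceleration equation that general relativity gives for a smooth FLRW universe. Ordinary matter and radiation (the density and pressure terms) can only slow the expansion; a positive cosmological constant Λ, the simplest stand-in for dark energy, adds a term of the opposite sign that can make the expansion speed up.

```latex
\[
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right)
  + \frac{\Lambda c^{2}}{3}
\]
```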

Now, many cosmologists follow a basic recipe to simulate the universe — treating the cosmos as if it has been run through an imaginary blender to smooth out its lumps, adding dark energy and calculating the expansion via general relativity. On top of the expanding slurry, scientists add clumps and track their growth using approximations, such as Newtonian gravity, which simplifies the calculations.

In most situations, Newtonian gravity and general relativity are near-twins. Throw a ball while standing on the surface of the Earth, and it doesn’t matter whether you use general relativity or Newtonian mechanics to calculate where the ball will land — you’ll get the same answer. But there are subtle differences. In Newtonian gravity, matter directly attracts other matter. In general relativity, gravity is the result of matter and energy warping spacetime, creating curves that alter the motion of objects (SN: 10/17/15, p. 16). The two theories diverge in extreme gravitational environments. In general relativity, for example, hulking black holes produce inescapable pits that reel in light and matter (SN: 5/31/14, p. 16). The question, then, is whether the difference between the two theories has any impact in lumpy-universe simulations.
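
The mathematical gap between the two descriptions is stark. In Newton's picture a single potential, sourced by mass density, does all the work; in Einstein's, ten coupled, nonlinear equations tie the curvature of spacetime to its matter and energy content:

```latex
\[
\nabla^{2}\Phi = 4\pi G\,\rho
\qquad \text{versus} \qquad
G_{\mu\nu} = \frac{8\pi G}{c^{4}}\,T_{\mu\nu}
\]
```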

Most cosmologists are comfortable with the status quo simulations because observations of the heavens seem to fit neatly together like interlocking jigsaw puzzle pieces. Predictions based on the standard framework agree remarkably well with observations of the cosmic microwave background — ancient light released when the universe was just 380,000 years old (SN: 3/21/15, p. 7). And measurements of cosmological parameters — the fraction of dark energy and matter, for example — are generally consistent, whether they are made using the light from galaxies or the cosmic microwave background.

However, the reliance on Newton’s outdated theory irks some cosmologists, creating a lingering suspicion that the approximation is causing unrecognized problems. And some cosmological question marks remain. Physicists still puzzle over what makes up dark energy, along with another unexplained cosmic constituent, dark matter, an additional kind of mass that must exist to explain observations of how galaxies rotate and how they move within galaxy clusters. “Both dark energy and dark matter are a bit of an embarrassment to cosmologists, because they have no idea what they are,” says cosmologist Nick Kaiser of École Normale Supérieure in Paris.
Dethroning dark energy
Some cosmologists hope to explain the universe’s accelerating expansion by fully accounting for the universe’s lumpiness, with no need for the mysterious dark energy.

These researchers argue that clumps of matter can alter how the universe expands, when the clumps’ influence is tallied up over wide swaths of the cosmos. That’s because, in general relativity, the expansion of each local region of space depends on how much matter is within. Voids expand faster than average; dense regions expand more slowly. Because the universe is mostly made up of voids, this effect could produce an overall expansion and potentially an acceleration. Known as backreaction, this idea has lingered in obscure corners of physics departments for decades, despite many claims that backreaction’s effect is small or nonexistent.
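
One common way to make the idea precise is Thomas Buchert's averaging approach (written here for a universe of pressureless matter, ignoring any cosmological constant): average Einstein's equations over a large region D, and the mean expansion obeys a Friedmann-like equation with an extra "kinematical backreaction" term, Q_D, built from the variance of the local expansion rate and the shear. The debate is over how large that term can realistically be.

```latex
\[
3\,\frac{\ddot{a}_{\mathcal{D}}}{a_{\mathcal{D}}}
  + 4\pi G\,\langle\rho\rangle_{\mathcal{D}} = \mathcal{Q}_{\mathcal{D}},
\qquad
\mathcal{Q}_{\mathcal{D}} \equiv
  \tfrac{2}{3}\left(\langle\theta^{2}\rangle_{\mathcal{D}}
  - \langle\theta\rangle_{\mathcal{D}}^{2}\right)
  - 2\,\langle\sigma^{2}\rangle_{\mathcal{D}}
\]
```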

Backreaction continues to appeal to some researchers because they don’t have to invent new laws of physics to explain the acceleration of the universe. “If there is an alternative which is based only upon traditional physics, why throw that away completely?” Matarrese asks.

Most cosmologists, however, think explaining away dark energy just based on the universe’s lumps is unlikely. Previous calculations have indicated that any effect would be too small to account for dark energy, and that it would produce an acceleration that changes over time in a way that disagrees with observations.

“My personal view is that it’s a much smaller effect,” says astrophysicist Hayley Macpherson of Monash University in Melbourne, Australia. “That’s just basically a gut feeling.” Theories that include dark energy explain the universe extremely well, she points out. How could that be if the whole approach is flawed?

New simulations by Macpherson and others that model how lumps evolve in general relativity may be able to gauge the importance of backreaction once and for all. “Up until now, it’s just been too hard,” says cosmologist Tom Giblin of Kenyon College in Gambier, Ohio.

To perform the simulations, researchers needed to get their hands on supercomputers capable of grinding through the equations of general relativity as the simulated universe evolves over time. Because general relativity is so complex, such simulations are much more challenging than those that use approximations, such as Newtonian gravity. But, a seemingly distinct topic helped lay some of the groundwork: gravitational waves, or ripples in the fabric of spacetime.
The Advanced Laser Interferometer Gravitational-Wave Observatory, LIGO, searches for the tremors of cosmic dustups such as colliding black holes (SN: 10/28/17, p. 8). In preparation for this search, physicists honed their general relativity skills on simulations of the spacetime storm kicked up by black holes, predicting what LIGO might see and building up the computational machinery to solve the equations of general relativity. Now, cosmologists have adapted those techniques and unleashed them on entire, lumpy universes.

The first lumpy universe simulations to use full general relativity were unveiled in the June 2016 Physical Review Letters. Giblin and colleagues reported their results simultaneously with Eloisa Bentivegna of the University of Catania in Italy and Marco Bruni of the University of Portsmouth in England.

So far, the simulations have not been able to account for the universe’s acceleration. “Nearly everybody is convinced [the effect] is too small to explain away the need for dark energy,” says cosmologist Martin Kunz of the University of Geneva. Kunz and colleagues reached the same conclusion in their lumpy-universe simulations, which have one foot in general relativity and one in Newtonian gravity. They reported their first results in Nature Physics in March 2016.

Backreaction aficionados still aren’t dissuaded. “Before saying the effect is too small to be relevant, I would, frankly, wait a little bit more,” Matarrese says. And the new simulations have potential caveats. For example, some simulated universes behave like an old arcade game — if you walk to one edge of the universe, you cross back over to the other side, like Pac-Man exiting the right side of the screen and reappearing on the left. That geometry would suppress the effects of backreaction in the simulation, says Thomas Buchert of the University of Lyon in France. “This is a good beginning,” he says, but there is more work to do on the simulations. “We are in infancy.”

Different assumptions in a simulation can lead to disparate results, Bentivegna says. As a result, she doesn’t think that her lumpy, general-relativistic simulations have fully closed the door on efforts to dethrone dark energy. For example, tricks of light might be making it seem like the universe’s expansion is accelerating, when in fact it isn’t.

When astronomers observe far-away sources like supernovas, the light has to travel past all of the lumps of matter between the source and Earth. That journey could make it look like there’s an acceleration when none exists. “It’s an optical illusion,” Bentivegna says. She and colleagues see such an effect in a simulation reported in March in the Journal of Cosmology and Astroparticle Physics. But, she notes, this work simulated an unusual universe, in which matter sits on a grid — not a particularly realistic scenario.

For most other simulations, the effect of optical illusions remains small. That leaves many cosmologists, including Giblin, even more skeptical of the possibility of explaining away dark energy: “I feel a little like a downer,” he admits.
Surveying the skies
Subtle effects of lumps could still be important. In Hans Christian Andersen’s “The Princess and the Pea,” the princess felt a tiny pea beneath an impossibly tall stack of mattresses. Likewise, cosmologists’ surveys are now so sensitive that even if the universe’s lumps have a small impact, estimates could be thrown out of whack.

The Dark Energy Survey, for example, has charted 26 million galaxies using the Victor M. Blanco Telescope in Chile, measuring how the light from those galaxies is distorted by the intervening matter on the journey to Earth. In a set of papers posted online August 4 at arXiv.org, scientists with the Dark Energy Survey reported new measurements of the universe’s properties, including the amount of matter (both dark and normal) and how clumpy that matter is (SN: 9/2/17, p. 32). The results are consistent with those from the cosmic microwave background — light emitted billions of years earlier.

To make the comparison, cosmologists took the measurements from the cosmic microwave background, early in the universe, and used simulations to extrapolate to what galaxies should look like later in the universe’s history. It’s like taking a baby’s photograph, precisely computing the number and size of wrinkles that should emerge as the child ages and finding that your picture agrees with a snapshot taken decades later. The matching results so far confirm cosmologists’ standard picture of the universe — dark energy and all.

“So far, it has not yet been important for the measurements that we’ve made to actually include general relativity in those simulations,” says Risa Wechsler, a cosmologist at Stanford University and a founding member of the Dark Energy Survey. But, she says, for future measurements, “these effects could become more important.” Cosmologists are edging closer to Princess and the Pea territory.

Those future surveys include the Dark Energy Spectroscopic Instrument, DESI, set to kick off in 2019 at Kitt Peak National Observatory near Tucson; the European Space Agency’s Euclid satellite, launching in 2021; and the Large Synoptic Survey Telescope in Chile, which is set to begin collecting data in 2023.

If cosmologists keep relying on simulations that don’t use general relativity to account for lumps, certain kinds of measurements of weak lensing — the bending of light due to matter acting like a lens — could be off by up to 10 percent, Giblin and colleagues reported at arXiv.org in July. “There is something that we’ve been ignoring by making approximations,” he says.

That 10 percent could screw up all kinds of estimates, from how dark energy changes over the universe’s history to how fast the universe is currently expanding, to the calculations of the masses of ethereal particles known as neutrinos. “You have to be extremely certain that you don’t get some subtle effect that gets you the wrong answers,” Geneva’s Kunz says, “otherwise the particle physicists are going to be very angry with the cosmologists.”

Some estimates may already be showing problem signs, such as the conflicting estimates of the cosmic expansion rate (SN: 8/6/16, p. 10). Using the cosmic microwave background, cosmologists find a slower expansion rate than they do from measurements of supernovas. If this discrepancy is real, it could indicate that dark energy changes over time. But before jumping to that conclusion, there are other possible causes to rule out, including the universe’s lumps.

Until the issue of lumps is smoothed out, scientists won’t know how much lumpiness matters to the cosmos at large. “I think it’s rather likely that it will turn out to be an important effect,” Kolb says. Whether it explains away dark energy is less certain. “I want to know the answer so I can get on with my life.”

Here’s yet more evidence that the mythical yeti was probably a bear

Campfire legends of massive, shaggy bipeds called yetis are grounded in a less mysterious truth: bears.

Eight samples of remains such as fur, bones and teeth purportedly from mountain-dwelling yetis actually come from three different kinds of bears that live in the Himalayas, researchers report November 29 in the Proceedings of the Royal Society B. A ninth sample turned out to come from a dog.

Previous analyses of smaller fragments of “yeti” DNA yielded controversial results. The new study looks at bigger chunks of DNA, analyzing the complete mitochondrial genomes from alleged yetis and comparing them with the mitochondrial genomes of various bears, including polar bears and Tibetan brown bears.
The results also give new insight into the genetic relationships between the different bears that call the Tibetan Plateau home, which could guide efforts to protect these rare subspecies. During a period of glaciation about 660,000 years ago, Himalayan brown bears were one of the first groups to branch off and become distinct from other brown bears, the data suggest.

Tibetan brown bears, on the other hand, share a more recent common ancestor with their relatives in Eurasia and North America. They might have migrated to the area around 340,000 years ago, but were probably kept geographically isolated from Himalayan brown bears by the rugged mountain terrain.

50 years ago, folate deficiency was linked to birth defects

Pregnant women who do not have enough folic acid — a B vitamin — in their bodies can pass the deficiency on to their unborn children. It may lead to retarded growth and congenital malformation, according to Dr. A. Leonard Luhby…. “Folic acid deficiency in pregnant women could well constitute a public health problem of dimensions we have not originally recognized,” he says. — Science News. December 9, 1967

Update
Folic acid — or folate — can prevent brain and spinal cord defects in developing fetuses. Since the U.S. Food and Drug Administration required that all enriched grain products contain the vitamin starting in 1998, birth defects have been prevented in about 1,300 babies each year. But some women still don’t get enough folate, while others may be overdoing it. About 10 percent of women may ingest more than the upper limit of 1,000 micrograms daily — about 2.5 times the recommended amount, a 2011 study found. Too much folate may increase a woman’s risk for certain cancers and interfere with some epilepsy drugs.

Collision illuminates the mysterious makeup of neutron stars

On astrophysicists’ charts of star stuff, there’s a substance that still merits the label “here be dragons.” That poorly understood material is found inside neutron stars — the collapsed remnants of once-mighty stars — and is now being mapped out, as scientists better characterize the weird matter.

The detection of two colliding neutron stars, announced in October (SN: 11/11/17, p. 6), has accelerated the pace of discovery. Since the event, which scientists spied with gravitational waves and various wavelengths of light, several studies have placed new limits on the sizes and masses possible for such stellar husks and on how squishy or stiff they are.
“The properties of neutron star matter are not very well known,” says physicist Andreas Bauswein of the Heidelberg Institute for Theoretical Studies in Germany. Part of the problem is that the matter inside a neutron star is so dense that a teaspoonful would weigh a billion tons, so the substance can’t be reproduced in any laboratory on Earth.

In the collision, the two neutron stars merged into a single behemoth. This remnant may have immediately collapsed into a black hole. Or it may have formed a bigger, spinning neutron star that, propped up by its own rapid rotation, existed for a few milliseconds — or potentially much longer — before collapsing. The speed of the object’s demise is helping scientists figure out whether neutron stars are made of material that is relatively soft, compressing when squeezed like a pillow, or whether the neutron star stuff is stiff, standing up to pressure. This property, known as the equation of state, determines the radius of a neutron star of a particular mass.
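
The connection between stiffness and size can be stated concretely. Pick a candidate equation of state, a relation P(ρ) between pressure and density, and general relativity's condition for a static star in hydrostatic balance, the Tolman-Oppenheimer-Volkoff equations sketched below, fixes the radius that goes with each mass. A stiffer equation of state resists compression, yielding larger radii and a higher maximum mass before collapse.

```latex
\[
\frac{dP}{dr} =
  -\,\frac{G\left(\rho + P/c^{2}\right)\left(m(r) + 4\pi r^{3} P/c^{2}\right)}
          {r^{2}\left(1 - 2G m(r)/r c^{2}\right)},
\qquad
\frac{dm}{dr} = 4\pi r^{2}\rho
\]
```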

An immediate collapse seems unlikely, two teams of researchers say. Telescopes spotted a bright glow of light after the collision. That glow could only appear if there were a delay before the merged neutron star collapsed into a black hole, says physicist David Radice of Princeton University, because when the remnant collapses, “all the material around falls inside of the black hole immediately.” Instead, the neutron star stuck around for at least several milliseconds, the scientists propose.

Simulations indicate that if neutron stars are soft, they will collapse more quickly because they will be smaller than stiff neutron stars of the same mass. So the inferred delay allows Radice and colleagues to rule out theories that predict neutron stars are extremely squishy, the researchers report in a paper published November 13 at arXiv.org.
Using similar logic, Bauswein and colleagues rule out some of the smallest sizes that neutron stars of a particular mass might be. For example, a neutron star 60 percent more massive than the sun can’t have a radius smaller than 10.7 kilometers, they determine. These results appear in a paper published November 29 in the Astrophysical Journal Letters.

Other researchers set a limit on the maximum mass a neutron star can have. Above a certain heft, neutron stars can no longer support their own weight and collapse into a black hole. If this maximum possible mass were particularly large, theories predict that the newly formed behemoth neutron star would have lasted hours or days before collapsing. But, in a third study, two physicists determined that the collapse came much more quickly than that, on the scale of milliseconds rather than hours. A long-lasting, spinning neutron star would dissipate its rotational energy into the material ejected from the collision, making the stream of glowing matter more energetic than what was seen, physicists Ben Margalit and Brian Metzger of Columbia University report. In a paper published November 21 in the Astrophysical Journal Letters, the pair concludes that the maximum possible mass is smaller than about 2.2 times that of the sun.

“We didn’t have many constraints on that prior to this discovery,” Metzger says. The result also rules out some of the stiffer equations of state because stiffer matter tends to support larger masses without collapsing.

Some theories predict that bizarre forms of matter are created deep inside neutron stars. Neutron stars might contain a sea of free-floating quarks — particles that are normally confined within larger particles like protons or neutrons. Other physicists suggest that neutron stars may contain hyperons, particles made with heavier quarks known as strange quarks, not found in normal matter. Such unusual matter would tend to make neutron stars softer, so pinning down the equation of state with additional neutron star crashes could eventually resolve whether these exotic beasts of physics indeed lurk in this unexplored territory.

In a first, Galileo’s gravity experiment is re-created in space

Galileo’s most famous experiment has taken a trip to outer space. The result? Einstein was right yet again. The experiment confirms a tenet of Einstein’s theory of gravity with greater precision than ever before.

According to science lore, Galileo dropped two balls from the Leaning Tower of Pisa to show that they fell at the same rate no matter their composition. Although it seems unlikely that Galileo actually carried out this experiment, scientists have performed a similar, but much more sensitive experiment in a satellite orbiting Earth. Two hollow cylinders within the satellite fell at the same rate over 120 orbits, or about eight days’ worth of free-fall time, researchers with the MICROSCOPE experiment report December 4 in Physical Review Letters. The cylinders’ accelerations match within two-trillionths of a percent.

The result confirms a foundation of Einstein’s general theory of relativity known as the equivalence principle. That principle states that an object’s inertial mass, which sets the amount of force needed to accelerate it, is equal to its gravitational mass, which determines how the object responds to a gravitational field. As a result, items fall at the same rate — at least in a vacuum, where air resistance is eliminated — even if they have different masses or are made of different materials.
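
In equations, the principle says the mass in Newton's second law and the mass that gravity acts on are one and the same, so they cancel out of a falling body's motion. Tests like MICROSCOPE's are usually quoted through the Eötvös ratio η, which compares the accelerations of the two test bodies and is exactly zero if the principle holds; the "two-trillionths of a percent" above corresponds to η of roughly 2 × 10⁻¹⁴.

```latex
\[
m_{i}\,a = m_{g}\,g \;\Longrightarrow\; a = \frac{m_{g}}{m_{i}}\,g,
\qquad
\eta \equiv 2\,\frac{\lvert a_{1} - a_{2}\rvert}{a_{1} + a_{2}}
\]
```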

The result is “fantastic,” says physicist Stephan Schlamminger of OTH Regensburg in Germany, who was not involved with the research. “It’s just great to have a more precise measurement of the equivalence principle because it’s one of the most fundamental tenets of gravity.”
In the satellite, which is still collecting additional data, a hollow cylinder, made of platinum alloy, is centered inside a hollow, titanium-alloy cylinder. According to standard physics, gravity should cause the cylinders to fall at the same rate, despite their different masses and materials. A violation of the equivalence principle, however, might make one fall slightly faster than the other.

As the two objects fall in their orbit around Earth, the satellite uses electrical forces to keep the pair aligned. If the equivalence principle didn’t hold, adjustments needed to keep the cylinders in line would vary with a regular frequency, tied to the rate at which the satellite orbits and rotates. “If we see any difference in the acceleration it would be a signature of violation” of the equivalence principle, says MICROSCOPE researcher Manuel Rodrigues of the French aerospace lab ONERA in Palaiseau. But no hint of such a signal was found.

With about 10 times the precision of previous tests, the result is “very impressive,” says physicist Jens Gundlach of the University of Washington in Seattle. But, he notes, “the results are still not as precise as what I think they can get out of a satellite measurement.”

Performing the experiment in space eliminates certain pitfalls of modern-day land-based equivalence principle tests, such as groundwater flow altering the mass of surrounding terrain. But temperature changes in the satellite limited how well the scientists could confirm the equivalence principle, as these variations can cause parts of the apparatus to expand or contract.

MICROSCOPE’s ultimate goal is to beat other measurements by a factor of 100, comparing the cylinders’ accelerations to see whether they match within a tenth of a trillionth of a percent. With additional data yet to be analyzed, the scientists may still reach that mark.

Confirmation of the equivalence principle doesn’t mean that all is hunky-dory in gravitational physics. Scientists still don’t know how to combine general relativity with quantum mechanics, the physics of the very small. “The two theories seem to be very different, and people would like to merge these two theories,” Rodrigues says. But some attempts to do that predict violations of the equivalence principle on a level that’s not yet detectable. That’s why scientists think the equivalence principle is worth testing to ever more precision — even if it means shipping their experiments off to space.

Scientists are tracking how the flu moves through a college campus

COLLEGE PARK, Md. — Campus life typically challenges students with new opportunities for learning, discovery — and intimacy with germs. Lots of germs.

That makes dormitories and their residents an ideal natural experiment to trace the germs’ paths. “You pack a bunch of college kids into a very small environment … we’re not known as being the cleanliest of people,” says sophomore Parker Kleb at the University of Maryland in College Park. Kleb is a research assistant for an ongoing study tracking the spread of respiratory viruses through a student population. The study’s goal is to better understand how these viruses move around, in order to help keep illness at bay — all the more pressing, as the current flu season is on track to be among the worst recorded in the United States.
Called “C.A.T.C.H. the Virus,” which stands for Characterizing and Tracking College Health, the study traces the trajectory of viral infections using blood samples, nasal swabs and breath samples from ailing freshmen and their closest contacts. (Tagline: It’s snot your average research study.)

Donald Milton, an environmental and occupational health physician-scientist, heads the project. On a recent day, he described the study to a classroom of freshmen he hopes to recruit. He ticked off questions this research seeks to answer: What is it that makes people susceptible to getting sick? What makes them contagious? And how do they transmit a virus to others? “Maybe your house, your room has something to do with whether you’re at risk of getting infected,” Milton said.

He had a receptive audience: members of the College Park Scholars’ Global Public Health program. Infection control is right up their alley. “How sick do we have to be?” one student asked. It’s the culprit that matters, she’s told. The study covers acute respiratory infections due to influenza viruses, adenoviruses, coronaviruses or respiratory syncytial virus, known as RSV.

Of most interest, however, is influenza. “Flu is important to everybody,” says Milton. Influenza is thought to spread among humans three ways — touch; coughing and sneezing, which launch droplets containing virus from the lungs onto surfaces; and aerosols, smaller droplets suspended in the air that could be inhaled (SN: 6/29/13, p. 9).
How much each of these modes of transmission contributes to the spread of viruses is a point of fierce debate, Milton says. And that makes infection control difficult, especially in hospitals. “If we don’t understand how [viruses] are transmitted, it’s hard to come up with policies that are really going to work.”
Milton and his colleagues recently reported that people with the flu can shed infectious virus particles just by breathing. Of 134 fine-aerosol samples taken when patients were breathing normally, 52 contained infectious influenza virus — or 39 percent, according to the study, published online January 18 in the Proceedings of the National Academy of Sciences. Those fine-aerosol particles of respiratory tract fluid are 5 microns in diameter or less, small enough to stay suspended in the air and potentially contribute to airborne transmission of the flu, the researchers say.
“This could mean that just having good cough and sneeze etiquette — sneezing or coughing into tissues — may not be enough to limit the spread of influenza,” says virologist Andrew Pekosz at Johns Hopkins University, who was not involved with the study. “Just sitting in your office and breathing could fill the air with infectious influenza.”

The C.A.T.C.H. study aims to find out if what’s in the air is catching. In two University of Maryland dorms, carbon dioxide sensors measure how much of the air comes from people’s exhalations. In addition, laboratory tests measure how much virus sick students are shedding into the air. To get those samples, students sit in a ticket booth‒sized contraption called the Gesundheit-II and breathe into a giant cone. These data can help researchers estimate students’ airborne exposure to viruses, Milton says.
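
The carbon dioxide trick works because exhaled breath carries far more CO2 than outdoor air, so the excess CO2 in a room is a rough gauge of how much of the air there has already been through someone's lungs. Here is a minimal sketch of that calculation with illustrative numbers; the roughly 38,000 parts per million of CO2 added by breathing and the 400 ppm outdoor baseline are standard textbook values, not figures reported by this study.

```python
def rebreathed_fraction(room_co2_ppm, outdoor_co2_ppm=400.0,
                        exhaled_excess_co2_ppm=38000.0):
    """Estimate the fraction of room air that was recently exhaled.

    Illustrative only: assumes well-mixed air and textbook values for
    outdoor CO2 (~400 ppm) and the CO2 that breathing adds (~38,000 ppm).
    """
    return (room_co2_ppm - outdoor_co2_ppm) / exhaled_excess_co2_ppm

# A stuffy dorm room reading 1,500 ppm CO2: roughly 3 percent of every
# breath taken there was recently exhaled by someone else.
print(f"{rebreathed_fraction(1500):.1%}")   # -> 2.9%
```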

Another key dataset comes from DNA testing of the viruses infecting the students. “The virus mutates reasonably fast,” Milton says, so the more people it’s moved through, the more changes it will have. By combining this molecular chain of transmission with the social chain of transmission, the researchers will try to “establish who infected whom, and where, and how,” Milton says.
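
At its simplest, the molecular half of that detective work means counting the genetic differences between virus samples: sequences separated by fewer mutations are more likely to sit close together on the chain of transmission. The toy comparison below uses made-up sequence snippets; real analyses align whole genomes and build family trees of the virus rather than simply counting mismatches.

```python
def mutation_count(seq_a, seq_b):
    """Count positions where two aligned, equal-length sequences differ."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    return sum(a != b for a, b in zip(seq_a, seq_b))

# Hypothetical snippets from an index case and two contacts.
index_case = "ATGGCATTCG"
contact_1  = "ATGGCATTCG"   # identical: consistent with direct transmission
contact_2  = "ATGACATCCG"   # two changes: likely farther down the chain
print(mutation_count(index_case, contact_1))   # -> 0
print(mutation_count(index_case, contact_2))   # -> 2
```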

The goal is to enroll 130 students in C.A.T.C.H. It’s doubtful they’ll all get sick, but not that many students from this initial group are needed to start the ball rolling, says Jennifer German, a virologist and C.A.T.C.H. student engagement coordinator. “For every index case that has an infection we’re interested in, we’re following four additional contacts,” she says. “And then if any of those contacts becomes sick, we’ll get their contacts and so on.”

The study began in November 2017. As of the end of January, German says, researchers have collected samples from five sick students, but only one was infected with a target virus, influenza. The researchers now are following three contacts from that case.

But timing and the size of the current flu outbreak may be on the researchers’ side. Kleb, the research assistant, says that students are still waiting for this season’s flu to sweep through the dorms. “Once one person gets sick, it goes around to everyone on the floor,” he says. “I’m very interested to see what happens in the next few weeks, and how the study will hopefully benefit.”

Elongated heads were a mark of elite status in an ancient Peruvian society

Bigwigs in a more than 600-year-old South American population were easy to spot. Their artificially elongated, teardrop-shaped heads screamed prestige, a new study finds.

During the 300 years before the Incas’ arrival in 1450, intentional head shaping among prominent members of the Collagua ethnic community in Peru increasingly centered on a stretched-out look, says bioarchaeologist Matthew Velasco of Cornell University. Having long, narrow noggins cemented bonds among members of a power elite — a unity that may have helped pave a relatively peaceful incorporation into the Incan Empire, Velasco proposes in the February Current Anthropology.
“Increasingly uniform head shapes may have encouraged a collective identity and political unity among Collagua elites,” Velasco says. These Collagua leaders may have negotiated ways to coexist with the encroaching Inca rather than fight them, he speculates. But the fate of the Collaguas and a neighboring population, the Cavanas, remains hazy. Those populations lived during a conflict-ridden time — after the collapse of two major Andean societies around 1100 (SN: 8/1/09, p. 16) and before the expansion of the Inca Empire starting in the 15th century.

For at least the past several thousand years, human groups in various parts of the world have intentionally modified skull shapes by wrapping infants’ heads with cloth or binding the head between two pieces of wood (SN: 4/29/17, p. 18). Researchers generally assume that this practice signified membership in ethnic or kin groups, or perhaps social rank.
The Collagua people lived in Colca Valley in southeastern Peru and raised alpacas for wool. By tracking Collagua skull shapes over 300 years, Velasco found that elongated skulls became increasingly linked to high social status. By the 1300s, for instance, Collagua women with deliberately distended heads suffered much less skull damage from physical attacks than other females did, he reports. Chemical analyses of bones indicate that long-headed women ate a particularly wide variety of foods.
Until now, knowledge of head-shaping practices in ancient Peru primarily came from Spanish accounts written in the 1500s. Those documents referred to tall, thin heads among Collaguas and wide, long heads among Cavanas, implying that a single shape had always characterized each group.

“Velasco has discovered that the practice of cranial modification was much more dynamic over time and across social [groups],” says bioarchaeologist Deborah Blom of the University of Vermont in Burlington.

Velasco examined 211 skulls of mummified humans interred in either of two Collagua cemeteries. Burial structures built against a cliff face were probably reserved for high-ranking individuals, whereas common burial grounds in several caves and under nearby rocky overhangs belonged to regular folk.
Radiocarbon analyses of 13 bone and sediment samples allowed Velasco to sort Collagua skulls into early and late pre-Inca groups. A total of 97 skulls, including all 76 found in common burial grounds, belonged to the early group, which dated to between 1150 and 1300. Among these skulls, 38 — or about 39 percent — had been intentionally modified. Head shapes included sharply and slightly elongated forms as well as skulls compressed into wide, squat configurations.

Of the 14 skulls with extreme elongation, 13 came from low-ranking individuals, a pattern that might suggest regular folk first adopted elongated head shapes. But with only 21 skulls from elites, the finding may underestimate the early frequency of elongated heads among the high-status crowd. Various local groups may have adopted their own styles of head modification at that time, Velasco suggests.

In contrast, among 114 skulls from elite burial sites in the late pre-Inca period, dating to between 1300 and 1450, 84 — or about 74 percent — displayed altered shapes. A large majority of those modified skulls — about 64 percent — were sharply elongated. Shortly before the Incas’ arrival, prominent Collaguas embraced an elongated style as their preferred head shape, Velasco says. No skeletal evidence has been found to determine whether low-ranking individuals also adopted elongated skulls as a signature look in the late pre-Inca period.

Are computers better than people at predicting who will commit another crime?

In courtrooms around the United States, computer programs give testimony that helps decide who gets locked up and who walks free.

These algorithms are criminal recidivism predictors, which use personal information about defendants — like family and employment history — to assess that person’s likelihood of committing future crimes. Judges factor those risk ratings into verdicts on everything from bail to sentencing to parole.

Computers get a say in these life-changing decisions because their crime forecasts are supposedly less biased and more accurate than human guesswork.
But investigations into algorithms’ treatment of different demographics have revealed how machines perpetuate human prejudices. Now there’s reason to doubt whether crime-prediction algorithms can even boast superhuman accuracy.

Computer scientist Julia Dressel recently analyzed the prognostic powers of a widely used recidivism predictor called COMPAS. This software determines whether a defendant will commit a crime within the next two years based on six defendant features — although what features COMPAS uses and how it weighs various data points is a trade secret.

Dressel, who conducted the study while at Dartmouth College, recruited 400 online volunteers, who were presumed to have little or no criminal justice expertise. The researchers split their volunteers into groups of 20, and had each group read descriptions of 50 defendants. Using such information as sex, age and criminal history, the volunteers predicted which defendants would reoffend.
A comparison of the volunteers’ answers with COMPAS’ predictions for the same 1,000 defendants found that both were about 65 percent accurate. “We were like, ‘Holy crap, that’s amazing,’” says study coauthor Hany Farid, a computer scientist at Dartmouth. “You have this commercial software that’s been used for years in courts around the country — how is it that we just asked a bunch of people online and [the results] are the same?”

There’s nothing inherently wrong with an algorithm that only performs as well as its human counterparts. But this finding, reported online January 17 in Science Advances, should be a wake-up call to law enforcement personnel who might have “a disproportionate confidence in these algorithms,” Farid says.

“Imagine you’re a judge, and I tell you I have this highly secretive, highly proprietary, expensive software built on big data, and it says the person standing in front of you is high risk” for reoffending, he says. “The judge would be like, ‘Yeah, that sounds quite serious.’ But now imagine if I tell you, ‘Twenty people online said this person is high risk.’ I imagine you’d weigh that information a little bit differently.” Maybe these predictions deserve the same amount of consideration.

Judges could get some better perspective on recidivism predictors’ performance if the Department of Justice or National Institute of Standards and Technology established a vetting process for new software, Farid says. Researchers could test computer programs against a large, diverse dataset of defendants and OK algorithms for courtroom use only if they get a passing grade for prediction.

Farid has his doubts that computers can show much improvement. He and Dressel built several simple and complex algorithms that used two to seven defendant features to predict recidivism. Like COMPAS, all their algorithms maxed out at about D-level accuracy, around the same 65 percent mark as COMPAS and the human volunteers. That makes Farid wonder whether trying to predict crime with anything approaching A+ accuracy is an exercise in futility.
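
The "simple" end of that range is easy to picture: a linear classifier fed just two inputs, say age and number of prior convictions, and trained on past cases. The sketch below runs such a model on synthetic data; the features, numbers and the rule generating the outcomes are hypothetical, meant only to show the shape of this kind of predictor, not to reconstruct COMPAS or the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic "defendants": two features, age and number of prior convictions.
n = 1000
age = rng.integers(18, 70, size=n)
priors = rng.poisson(2, size=n)
X = np.column_stack([age, priors])

# Made-up outcomes: in this toy data, younger defendants with more priors
# reoffend more often (an invented rule, not an empirical claim).
p = 1.0 / (1.0 + np.exp(-(0.15 * priors - 0.04 * (age - 35))))
y = rng.random(n) < p

# A two-feature linear classifier, the kind of "simple algorithm" described.
model = LogisticRegression().fit(X[:700], y[:700])
print(f"held-out accuracy: {model.score(X[700:], y[700:]):.2f}")
```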

“Maybe there will be huge breakthroughs in data analytics and machine learning over the next decade that [help us] do this with a high accuracy,” he says. But until then, humans may make better crime predictors than machines. After all, if a bunch of average Joe online recruits gave COMPAS a run for its money, criminal justice experts — like social workers, parole officers, judges or detectives — might just outperform the algorithm.

Even if computer programs aren’t used to predict recidivism, that doesn’t mean they can’t aid law enforcement, says Chelsea Barabas, a media researcher at MIT. Instead of creating algorithms that use historic crime data to predict who will reoffend, programmers could build algorithms that examine crime data to find trends that inform criminal justice research, Barabas and colleagues argue in a paper to be presented at the Conference on Fairness, Accountability and Transparency in New York City on February 23.

For instance, if a computer program studies crime statistics and discovers that certain features — like a person’s age or socioeconomic status — are highly related to repeated criminal activity, that could inspire new studies to see whether certain interventions, like therapy, help those at-risk groups. In this way, computer programs would do one better than just predict future crime. They could help prevent it.