Climate ‘teleconnections’ may link droughts and fires across continents

Large-scale climate patterns that can impact weather across thousands of kilometers may have a hand in synchronizing multicontinental droughts and stoking wildfires around the world, two new studies find.

These far-reaching patterns, known as climate teleconnections, typically occur as recurring phases that can last from weeks to years. “They are a kind of complex butterfly effect, in that things that are occurring in one place have many derivatives very far away,” says Sergio de Miguel, an ecosystem scientist at Spain’s University of Lleida and the Joint Research Unit CTFC-Agrotecnio in Solsona, Spain.

Major droughts arise around the same time at drought hot spots around the world, and the world’s major climate teleconnections may be behind the synchronization, researchers report in one study. What’s more, these patterns may also regulate the scorching of more than half of the area burned on Earth each year, de Miguel and colleagues report in the other study.

The research could help countries around the world forecast and collaborate to deal with widespread drought and fires, researchers say.

The El Niño-Southern Oscillation, or ENSO, is perhaps the most well-known climate teleconnection (SN: 8/21/19). ENSO entails phases during which weakened trade winds cause warm surface waters to amass in the eastern tropical Pacific Ocean, known as El Niño, and opposite phases of cooler tropical waters called La Niña.

These phases influence wind, temperature and precipitation patterns around the world, says climate scientist Samantha Stevenson of the University of California, Santa Barbara, who was not involved in either study. “If you change the temperature of the ocean in the tropical Pacific or the Atlantic … that energy has to go someplace,” she explains. For instance, the strong El Niño of 1982–83 caused severe droughts in Indonesia and Australia and deluges and floods in parts of the United States.

Past research has predicted that human-caused climate change will provoke more intense droughts and worsen wildfire seasons in many regions (SN: 3/4/20). But few studies have investigated how shorter-lived climate variations — teleconnections — influence these events on a global scale. Such work could help countries improve forecasting efforts and share resources, says climate scientist Ashok Mishra of Clemson University in South Carolina.

In one of the new studies, Mishra and his colleagues tapped data on drought conditions from 1901 to 2018. They used a computer to represent the world’s drought history as a network of drought events, drawing connections between events that occurred within three months of each other.
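
The network construction the researchers describe can be sketched in a few lines: treat each regional drought event as a node, and link two events whenever their onsets fall within three months of each other. The regions and onset dates below are invented for illustration; they are not the study’s data.

```python
# Toy sketch of a drought-event network: nodes are regional drought
# events, and an edge connects any two events whose onsets fall within
# three months of each other. All event data here are made up.
import itertools

events = [  # (region, onset in months since January 1901)
    ("US West", 250), ("Amazon", 251), ("South Africa", 253),
    ("Southern Europe", 400), ("Scandinavia", 401),
]

edges = [
    (a, b)
    for (a, ta), (b, tb) in itertools.combinations(events, 2)
    if abs(ta - tb) <= 3  # the three-month linking rule
]
```

In a network built this way, clusters of densely connected nodes would correspond to the kind of synchronized hot spots the study identifies.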

The researchers identified major drought hot spots across the globe — places in which droughts tended to appear simultaneously or within just a few months. These hot spots included the western and midwestern United States, the Amazon, the eastern slope of the Andes, South Africa, the Arabian deserts, southern Europe and Scandinavia.

“When you get a drought in one, you get a drought in others,” says climate scientist Ben Kravitz of Indiana University Bloomington, who was not involved in the study. “If that’s happening all at once, it can affect things like global trade, [distribution of humanitarian] aid, pollution and numerous other factors.”

A subsequent analysis of sea surface temperatures and precipitation patterns suggested that major climate teleconnections were behind the synchronization of droughts on separate continents, the researchers report January 10 in Nature Communications. El Niño appeared to be the main driver of simultaneous droughts spanning parts of South America, Africa and Australia. ENSO is known to exert a widespread influence on precipitation patterns (SN: 4/16/20). So that finding is “a good validation of the method,” Kravitz says. “We would expect that to appear.”

In the second study, published January 27 in Nature Communications, de Miguel and his colleagues investigated how climate teleconnections influence the amount of land burned around the world. Researchers knew that the climate patterns can influence the frequency and intensity of wildfires. In the new study, the researchers compared satellite data on global burned area from 1982 to 2018 with data on the strength and phase of the globe’s major climate teleconnections.

Variations in the yearly pattern of burned area strongly aligned with the phases and strength of climate teleconnections. In all, these climate patterns regulate about 53 percent of the land burned worldwide each year, the team found. According to de Miguel, teleconnections directly influence the growth of vegetation and other conditions such as aridity, soil moisture and temperature that prime landscapes for fires.

The Tropical North Atlantic teleconnection, a pattern of shifting sea surface temperatures just north of the equator in the Atlantic Ocean, was associated with about one-quarter of the global burned area — making it the most powerful driver of global burning, especially in the Northern Hemisphere.

These researchers are showing that wildfire scars around the world are connected to these climate teleconnections, and that’s very useful, Stevenson says. “Studies like this can help us prepare how we might go about constructing larger scale international plans to deal with events that affect multiple places at once.”

3-D maps of a protein show how it helps organs filter out toxic substances

A close look at one protein shows how it moves molecular passengers into cells in the kidneys, brain and elsewhere.

The protein LRP2 is part of a delivery service, catching certain molecules outside a cell and ferrying them in. Now, 3-D maps of LRP2 reveal the protein’s structure and how it captures and releases molecules, researchers report February 6 in Cell. The protein adopts a more open shape, like a net, at the near-neutral pH outside cells. But in the acidic environment inside cells, the protein crumples to drop off any passengers.

LRP2’s structure — and how it enables so many functions — stumped scientists for decades. The protein helps the kidneys and brain filter out toxic substances, and it operates in other places too, like the lungs and inner ears. When the protein doesn’t function properly, a host of health conditions can occur, including chronic kidney disease and Donnai-Barrow syndrome, a genetic disorder that affects the kidneys and brain.

The various conditions associated with LRP2 dysfunction come from the protein’s numerous responsibilities — it binds to more than 75 different molecules. That’s a huge amount for one protein, earning it the nickname “molecular flypaper,” says nephrologist Jonathan Barasch of Columbia University.

Typically, LRP2 sits at a cell membrane’s surface, waiting to snag a molecule passing by. After the protein binds to a molecule, the cell engulfs the part of its surface containing the protein, forming an internal bubble called an endosome. LRP2 then releases the molecule inside the cell, and the endosome carries the protein back to the surface.

To understand this shuttle system, Barasch and colleagues collected LRP2 from 500 mouse kidneys. The researchers put some of the protein in a solution at the extracellular pH of 7.5, and some in an endosome-mimicking solution at pH 5.2. Using a cryo-electron microscope, they captured images of the proteins and then stitched the images together in a computer, rendering 3-D maps of the protein at both open and closed formations.

The researchers suggest that calcium ions hold the protein open at extracellular pH. But as the pH drops because hydrogen ions flow into the endosome, the hydrogen ions displace the calcium ions, causing the protein to contract.

A chemical imbalance doesn’t explain depression. So what does?

You’d be forgiven for thinking that depression has a simple explanation.

The same mantra — that the mood disorder comes from a chemical imbalance in the brain — is repeated in doctors’ offices, medical textbooks and pharmaceutical advertisements. Those ads tell us that depression can be eased by tweaking the chemicals that are off-kilter in the brain. The only problem — and it’s a big one — is that this explanation isn’t true.

The phrase “chemical imbalance” is too vague to be true or false; it doesn’t mean much of anything when it comes to the brain and all its complexity. Serotonin, the chemical messenger often tied to depression, is not the one key thing that explains depression. The same goes for other brain chemicals.

The hard truth is that despite decades of sophisticated research, we still don’t understand what depression is. There are no clear descriptions of it, and no obvious signs of it in the brain or blood.

The reasons we’re in this position are as complex as the disease itself. Commonly used measures of depression, created decades ago, neglect some important symptoms and overemphasize others, particularly among certain groups of people. Even if depression could be measured perfectly, the disorder exists amid myriad levels of complexity, from biological confluences of minuscule molecules in the brain all the way out to the influences of the world at large. Countless combinations of genetics, personality, history and life circumstances may all conspire to create the disorder in any one person. No wonder the science is stuck.

It’s easy to see why a simple “chemical imbalance” explanation holds appeal, even if it’s false, says Awais Aftab, a psychiatrist at Case Western Reserve University in Cleveland. What causes depression is nuanced, he says — “not something that can easily be captured in a slogan or buzzword.”

So here, up front, is your fair warning: There will be no satisfying wrap-up at the end of this story. You will not come away with a scientific explanation for depression, because one does not exist. But there is a way forward for depression researchers, Aftab says. It requires grappling with nuances, complexity and imperfect data.

Those hard examinations are under way. “There’s been some really interesting and exciting scientific and philosophical work,” Aftab says. That forward motion, however slow, gives him hope and may ultimately benefit the millions of people around the world weighed down by depression.

How is depression measured?
Many people who feel depressed go into a doctor’s office and get assessed with a checklist. “Yes” to trouble sleeping, “yes” to weight loss and “yes” to a depressed mood would all yield points that get tallied into a cumulative score. A high enough score may get someone a diagnosis. The process seems straightforward. But it’s not. “Even basic issues regarding measurement of depression are actually still quite open for debate,” Aftab says.

That’s why there are dozens of methods to assess depression, including the standard description set by the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders, or DSM-5. This manual is meant to standardize categories of illness.

Variety in measurement is a real problem for the field and points to the lack of understanding of the disease itself, says Eiko Fried, a clinical psychologist at Leiden University in the Netherlands. Current ways of measuring depression “leave you with a really impoverished, tiny look,” Fried says.

Scales can miss important symptoms, leaving people out. “Mental pain,” for instance, was described by patients with depression and their caregivers as an important feature of the illness, researchers reported in 2020 in Lancet Psychiatry. Yet the term doesn’t show up on standard depression measurements.

One reason for the trouble is that the experience of depression is, by its nature, deeply personal, says clinical psychologist Ioana Alina Cristea of the University of Pavia in Italy. Individual patient complaints are often the best tool for diagnosing the disorder, she says. “We can never let these elements of subjectivity go.”

In the middle of the 20th century, depression was diagnosed through subjective conversation and psychoanalysis, and considered by some to be an illness of the soul. In 1960, psychiatrist Max Hamilton attempted to course-correct toward objectivity. Working at the University of Leeds in England, he published a depression scale. Today, that scale, known by its acronyms HAM-D or HRSD, is one of the most widely used depression screening tools, often used in studies measuring depression and evaluating the promise of possible treatments.

“It’s a great scheme for a scale that was made in 1960,” Fried says. Since the HRSD was published, “we have put a man on the moon, invented the internet and created powerful computers small enough to fit in people’s pockets,” Fried and his colleagues wrote in April in Nature Reviews Psychology. Yet this 60-year-old tool remains a gold standard.

Hamilton developed his scale by observing patients who had already been diagnosed with depression. They exhibited symptoms such as weight loss and slowed speech. But those mixtures of symptoms don’t apply to everyone with depression, nor do they capture nuance in symptoms.

To spot these nuances, Fried looked at 52 depression symptoms across seven different scales for depression, including Hamilton’s scale. On average, each symptom appeared in three of the seven scales. A whopping 40 percent of the symptoms appeared in only one scale, Fried reported in 2017 in the Journal of Affective Disorders. The only specific symptom common to all seven scales? “Sad mood.”
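
Fried’s overlap tally is essentially a counting exercise. A minimal sketch, using three invented mini-scales rather than the seven real instruments: count how many scales list each symptom, then keep the symptoms shared by every scale.

```python
# Toy version of the scale-overlap analysis: count how many (invented)
# scales mention each symptom. The real study compared 52 symptoms
# across seven depression scales.
from collections import Counter

scales = {
    "Scale A": {"sad mood", "insomnia", "weight loss"},
    "Scale B": {"sad mood", "fatigue"},
    "Scale C": {"sad mood", "insomnia", "mental pain"},
}

counts = Counter(symptom for items in scales.values() for symptom in items)
shared_by_all = [s for s, n in counts.items() if n == len(scales)]
# Here only "sad mood" survives the filter, mirroring the finding that
# "sad mood" was the lone symptom common to all seven real scales.
```

Applied to the real scales, the same tally reveals how little they agree: most symptoms show up in only a minority of instruments.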

In a study that examined depression symptoms reported by 3,703 people, Fried and Randolph Nesse, an evolutionary psychiatrist at the University of Michigan Medical School in Ann Arbor, found 1,030 unique symptom profiles. Roughly 14 percent of participants had combinations of symptoms that were not shared with anyone else, the researchers reported in 2015 in the Journal of Affective Disorders.

Before reliable thermometers, the concept of temperature was murky. How do you understand the science of hot and cold without the tools to measure it? “You don’t,” Fried says. “You make a terrible measurement, and you have a terrible theory of what it is.” Depression presents a similar challenge, he says. Without good measurements, how can you possibly diagnose depression, determine whether symptoms get better with treatments or even prevent it in the first place?

Depression differs by gender, race and culture
The story gets murkier when considering who these depression scales were made for. Symptoms differ among groups of people, making the diagnosis even less relevant for certain groups.

Behavioral researcher Leslie Adams of Johns Hopkins Bloomberg School of Public Health studies depression in Black men. “It’s clear that [depression] is negatively impacting their work lives, social lives and relationships. But they’re not being diagnosed at the same rate” as other groups, she says. For instance, white people have a lifetime risk of major depressive disorder of almost 18 percent; Black people’s lifetime risk is 10.4 percent, researchers reported in 2007 in JAMA Psychiatry. This discrepancy led Adams to ask: “Could there be a problem with diagnostic tools?”

Turns out, there is. Black men with depression have several characteristics that common scales miss, such as feelings of internal conflict, not communicating with others and feeling the burdens of societal pressure, Adams and colleagues reported in 2021 in BMC Public Health. A lot of depression measurements are based on questions that don’t capture these symptoms, Adams says. “ ‘Are you very sad?’ ‘Are you crying?’ Some people do not emote in the same way,” she says. “You may be missing things.”

American Indian women living in the Southeast United States also experience symptoms that aren’t adequately caught by the scales, Adams and her team found in a separate study. These women also reported experiences that do not necessarily signal depression for them but generally do for wider populations.

On common scales, “there are some items that really do not capture the experience of depression for these groups,” Adams says. For instance, a common question asks how well someone agrees with the sentence: “I felt everything I did was an effort.” That “can mean a lot of things, and it’s not necessarily tied to depression,” Adams says. The same goes for items such as, “People dislike me.” A person of color faced with racism and marginalization might agree with that, regardless of depression, she says.

Our ways to measure depression capture only a tiny slice of the big picture. The same can be said about our understanding of what’s happening in the brain.

The flawed serotonin hypothesis
Serotonin came into the spotlight in part because of the serendipitous discovery of drugs that block the reabsorption of serotonin, called selective serotonin reuptake inhibitors, or SSRIs. After getting its start in the late 1960s, the “serotonin hypothesis” flourished in the late ’90s, as advertisers ran commercials that told viewers that SSRIs fixed the serotonin deficit that can accompany depression. These messages changed the way people talked and thought about depression. Having a simple biological explanation helped some people and their doctors, in part by easing the shame some people felt for not being able to snap out of it on their own. It gave doctors ways to talk with people about the mood disorder.

But it was a simplified picture. A recent review of evidence, published in July in Molecular Psychiatry, finds no consistent data supporting the idea that low serotonin causes depression. Some headlines declared that the study was a grand takedown of the serotonin hypothesis. To depression researchers, the findings weren’t a surprise. Many had already realized this simple description wasn’t helpful.

There’s plenty of data suggesting that serotonin, and other chemical messengers such as dopamine and norepinephrine, are somehow involved in depression, including a study by neuropharmacologist Gitte Moos Knudsen of the University of Copenhagen. She and colleagues recently found that 17 people who were in the midst of a depressive episode released, on average, less serotonin in certain brain areas than 20 people who weren’t depressed. The study is small, but it’s one of the first to look at serotonin release in living human brains of people with depression.

But Knudsen cautions that those results, published in October in Biological Psychiatry, don’t mean that depression is fully caused by low serotonin levels. “It’s easy to defer to simple explanations,” she says.

SSRIs essentially form a molecular blockade, stopping serotonin from being reabsorbed into nerve cells and keeping the levels high between the cells. Those high levels are thought to influence nerve cell activity in ways that help people feel better.

Because the drugs can ease symptoms in about half of people with depression, it seemed to make sense that depression was caused by problems with serotonin. But just because a treatment works by doing something doesn’t mean the disease works in the opposite way. That’s backward logic, psychiatrist Nassir Ghaemi of Tufts University School of Medicine in Boston wrote in October in a Psychology Today essay. Aspirin can ease a headache, but a headache isn’t caused by low aspirin.

“We think we have a much more nuanced picture of what depression is today,” Knudsen says. The trouble is figuring out the many details. “We need to be honest with patients, to say that we don’t know everything about this,” she says.

The brain contains seven distinct classes of receptors that sense serotonin. That’s not even accounting for sensors for other messengers such as dopamine and norepinephrine. And these receptors sit on a wide variety of nerve cells, some that send signals when they sense serotonin, some that dampen signals. And serotonin, dopamine and norepinephrine are just a few of dozens of chemicals that carry information throughout a multitude of interconnected brain circuits. This complexity is so great that it renders the phrase “chemical imbalance” meaningless.

Overly simple claims — low serotonin causes depression, or low serotonin isn’t involved — serve only to keep us stymied, Aftab says. “[It] just keeps up that unhelpful binary.”

Depression research can’t ignore the world
In the 1990s, Aftab says, depression researchers got intensely focused on the brain. “They were trying to find the broken part of the brain that causes depression.” That limited view “really hurt depression research,” Aftab says. In the last 10 years or so, “there’s a general recognition that that sort of mind-set is not going to give us the answers.”

Reducing depression to specific problems of biology in the brain didn’t work, Cristea says. “If you were a doctor 10 years ago, the dream was that the neuroscience would give us the markers. We would look at the markers and say, ‘OK. You [get] this drug. You, this kind of therapy.’ But it hasn’t happened.” Part of that, she says, is because depression is an “existentially complicated disorder” that’s tough to simplify, quantify and study in a lab.

Our friendships, our loves, our setbacks and our stress can all influence our health. Take a recent study of first-year doctors in the United States. The more these doctors worked, the higher the rate of depression, scientists reported in October in the New England Journal of Medicine. Similar trends exist for caregivers of people with dementia and health care workers who kept emergency departments open during the COVID-19 pandemic. Their high-stress experiences may have prompted depression in some way.

“Depression is linked to the state of the world — and there is no denying it,” Aftab says.

Today’s research on depression ought to be more pluralistic, Adams says. “There are so many factors at play that we can’t just rest on one solution,” she says. Research from neuroscience and genetics has helped identify brain circuits, chemical messengers, cell types, molecules and genes that all may be involved in the disorder. But researchers aren’t satisfied with that. “There is other evidence that remains unexplored,” Adams says. “With our neuroscience advances, there should be similar advances in public health and psychiatric work.”

That’s happening. For her part, Adams and colleagues have just begun a study looking at moment-to-moment stressors in the lives of Black adolescents, ages 12 to 18, as measured by cell phone questionnaires. Responses, she hopes, will yield clues about depression and risk of suicide.

Other researchers are trying to fit together all of these different ways of seeing the problem. Fried, for example, is developing new concepts of depression that acknowledge the interacting systems. You tug on one aspect of it — using an antidepressant for instance, or changing sleep patterns — and see how the rest of the system reacts.

Approaches like these recognize the complexity of the problem and aim to figure out ways to handle it. We will never have a simple explanation for depression; we are now learning that one cannot possibly exist. That may sound like cold comfort to people in depression’s grip. But seeing the challenge with clear eyes may be the thing that moves us forward.

Fish can recognize themselves in photos, further evidence they may be self-aware

Some fish can recognize their own faces in photos and mirrors, an ability usually attributed to humans and other animals considered particularly brainy, such as chimpanzees, scientists report. Finding the ability in fish suggests that self-awareness may be far more widespread among animals than scientists once thought.

“It is believed widely that the animals that have larger brains will be more intelligent than animals of the small brain,” such as fish, says animal sociologist Masanori Kohda of Osaka Metropolitan University in Japan. It may be time to rethink that assumption, Kohda says.

Kohda’s previous research showed that bluestreak cleaner wrasses can pass the mirror test, a controversial cognitive assessment that purportedly reveals self-awareness, or the ability to be the object of one’s own thoughts. The test involves exposing an animal to a mirror and then surreptitiously putting a mark on the animal’s face or body to see if it will notice the mark on its reflection and try to touch it on its own body. Previously, only a handful of large-brained species, including chimpanzees and other great apes, dolphins, elephants and magpies, have passed the test.

In a new study, cleaner fish that passed the mirror test were then able to distinguish their own faces from those of other cleaner fish in still photographs. This suggests that the fish identify themselves the same way humans are thought to — by forming a mental image of one’s face, Kohda and colleagues report February 6 in the Proceedings of the National Academy of Sciences.

“I think it’s truly remarkable that they can do this,” says primatologist Frans de Waal of Emory University in Atlanta who was not involved in the research. “I think it’s an incredible study.”

De Waal is quick to point out that failing the mirror test should not be considered evidence of a lack of self-awareness. Still, scientists have struggled to understand why some species that are known to have complex cognitive abilities, such as monkeys and ravens, have not passed. Researchers have also questioned whether the test is appropriate for species like dogs that rely more on scent, or like pigs that may not care enough about a mark on their bodies to try to touch it.

The mixed results in other animals make it all the more astonishing that a small fish can pass. In their first mirror test studies, published in 2019 and 2022, Kohda’s team exposed wild-caught cleaner fish in separate tanks to mirrors for a week. The researchers then injected brown dye just beneath the scales on the fish’s throats, making a mark that resembles the parasites these fish eat off the skin of larger fish in the wild. When the marked fish saw themselves in a mirror, they began striking their throats on rocks or sand in the bottom of the tank, apparently trying to scrape off the marks.

In the new study, 10 fish that passed the mirror test were then shown a photo of their own face and a photo of an unfamiliar cleaner fish face. All the fish acted aggressively toward the unfamiliar photo, as if it were a stranger, but were not aggressive toward the photo of their own face.

When another eight fish that had spent a week with a mirror but had not previously been marked were shown a photo of their own face with a brown mark on the throat, six of them began scraping their throats just like the fish that passed the mirror test. But they did not scrape when shown a photo of another fish with a mark.

Animals that recognize their reflection in the mirror most likely first learn to identify themselves by seeing that the movement of the animal in the mirror matches their own movement, researchers think. Because the cleaner fish could also recognize their own faces in still images, the authors say, they, and possibly other animals that have passed the mirror test, may identify themselves by forming a mental image of their own face that they compare with what they see in the mirror or in photos.

“I think it’s a great next step,” says comparative cognitive psychologist Jennifer Vonk of Oakland University in Rochester, Mich., who wasn’t involved in the study. But she would like to see more research before drawing conclusions about what’s being represented in the mind of a nonverbal being like a fish. “As with most other studies, it still leaves some room for further follow-up.”

Kohda’s lab has more experiments planned to continue to probe what’s going on in the brain of the cleaner fish, and to try the new photo-recognition method on another popular research fish, the three-spined stickleback (Gasterosteus aculeatus).

Animal behaviorist Jonathan Balcombe, author of the book What a Fish Knows, is already convinced, describing the new study as “robust and quite brilliant.” People shouldn’t be surprised that fish could be self-aware given that they have already been shown to have complex behavior including tool use, planning and collaboration, Balcombe says. “It’s time we stopped thinking of fishes as somehow lesser members of the vertebrate pantheon.”

What to know about Turkey’s recent devastating earthquake

In the early morning of February 6, a devastating magnitude 7.8 earthquake struck southern Turkey, near the border with Syria. Numerous aftershocks followed, the strongest nearly rivaling the power of the main quake, at magnitude 7.5. By evening, the death toll had climbed to more than 3,700 across both countries, according to Reuters, and was expected to continue to rise.

Most of Turkey sits on a small tectonic plate that is sandwiched between two slowly colliding behemoths: the vast Eurasian Plate to the north and the Arabian Plate to the south. As those two plates push together, Turkey is being squeezed out sideways, like a watermelon seed snapped between two fingers, says seismologist Susan Hough of the U.S. Geological Survey.

The entire country is hemmed in by strike-slip, or sideways-sliding, fault zones: the North Anatolian Fault that runs roughly parallel to the Black Sea, and the East Anatolian Fault, near the border with Syria. As a result, Turkey is highly seismically active. Even so, Monday’s quake, which occurred on the East Anatolian Fault, was the strongest to strike the region since 1939, when a magnitude 7.8 quake killed 30,000 people.

Science News talked with Hough, who is based in Pasadena, Calif., about the quake, its aftershocks and building codes. The conversation has been edited for length and clarity.

SN: You say on Twitter that this was a powerful quake for a strike-slip fault. Can you explain?

Hough: The world has seen bigger earthquakes. Subduction zones generate the biggest earthquakes, as much as magnitude 9 (SN: 1/13/21). But quakes close to magnitude 8 are not common on strike-slip faults. And because they’re on land and tend to be shallow, you can get severe … shaking close to the fault that’s moving.

SN: Some of the aftershocks were very strong, at magnitudes 7.5 and 6.7. Is that unusual?

Hough: As with a lot of things, there’s what’s expected on average, and there’s what’s possible. On average, the largest aftershocks are a full unit smaller than the main shock. But that’s just average; for any individual main shock, the largest aftershock can have a lot of variability.

The other thing people noted was the distance [between the main shock and some aftershocks over a hundred kilometers away]. “Aftershock” as a term isn’t precise; seismologists aren’t always clear on what counts as one. The fault that produced the main shock is 200 kilometers long, and that’s going to change the stress in a lot of areas. Mostly it releases stress, but it does increase stress in some areas. So you can get aftershocks along that fault, but also some distance away. It’s a little bit unusual, but not unheard of.

SN: People have wondered whether Monday’s magnitude 3 earthquake near Buffalo, N.Y., might be related.

Hough: A magnitude 7.8 quake generates [seismic] waves that you can record all around Earth, so it’s technically disrupting every point on Earth. So it’s not a completely outlandish idea, but it’s statistically exceedingly unlikely. Maybe if a seismic wave passed through a fault that was just ready to go in just the right way, it’s possible.

An interesting [and completely separate] idea is that you might get earthquakes around the perimeter of the Great Lakes [such as near Buffalo] because as the lake levels go up and down, you’re stressing the Earth’s crust, putting weight on one side or the other. That’s a source of stress that could give you these pretty small quakes.

SN: The images emerging from this deadly disaster are devastating.

Hough: It’s hard to watch. And it hammers home the importance of building codes. One of the problems that any place is up against is that building codes improve over time, and you’ve always got the problem of older structures. It’s really expensive to retrofit. I expect that earthquake engineers will be looking at the damage, and it will illuminate where the vulnerabilities are [in the area]. The hope is that with proper engineering, we can make the built environment safe.

Mammals that live in groups may live longer, longevity research suggests

For mammals, one secret to a long life may be spending it living with friends and family.

An analysis of the life spans and social lives of nearly 1,000 mammal species shows that species that live in groups, such as horses and chimpanzees, tend to live longer than solitary beasts, like weasels and hedgehogs. The finding suggests that life span and social traits are evolutionarily entwined in mammals, researchers report January 31 in Nature Communications.
The maximum life span of mammals ranges widely. The shortest-lived shrews, for example, survive about two years, while bowhead whales (Balaena mysticetus) can reach roughly 200 years of age (SN: 1/6/15).

When evolutionary biologist Xuming Zhou of the Chinese Academy of Sciences in Beijing was studying the longest-lived mammals to understand the evolution of longevity, he took particular note of naked mole-rats (Heterocephalus glaber). The rodents are exceptionally long-lived, sometimes reaching over 30 years of age. They also live in huge, complex, subterranean societies. In contrast, other rodents like golden hamsters (Mesocricetus auratus), which are solitary, live to only about four years.

Some previous research on specific mammal species showed an effect of social behavior on longevity, Zhou says. For instance, female chacma baboons (Papio ursinus) with strong, stable social bonds live longer than females without them.

Zhou and his colleagues decided to see if there were any links between longevity and social habits shared across a wide range of mammal species.

The researchers compiled information from the scientific literature on the social organization of 974 mammal species. They then split these species into three categories: solitary, pair-living and group-living. When the researchers compared these three groups with data on the mammals’ known longevity, they found that group-living mammals tended to live longer than the solitary species — roughly 22 years compared with nearly 12 years in solitary mammals.

Zhou and his colleagues then accounted for body mass — bigger mammals tend to live longer than smaller ones — and the effect of group living held. A stark example comes from shrews and bats. Both are similarly tiny mammals, but the loner shrews live only a few years, while some far more social bat species can live for 30 or 40 years.

“We were so surprised, because individuals who live in groups also face a lot of costs, such as competition for potential mating partners and food,” Zhou says. Frequent social contact in group settings can also encourage the spread of infectious disease.
But there are benefits to living in a group too, he says, such as banding together for protection against predators. Living together may also reduce the risk of starvation if, for instance, group members increase foraging efficiency by finding and gathering food together. These factors may allow social mammals to live longer.

The evolution of a long life may also be more likely in group-living species: Living in a group allows animals to potentially aid the survival of their family members, which carry their genes.

Evolutionary biologist Laurent Keller of the University of Lausanne in Switzerland lauds the study for the sheer size of the sampling effort. “But it would have been useful to be a bit more precise about different levels of sociality.” There are more variations of social organization within the three categories used in the study, he says, and the relative degree of sociality could influence any patterns you see.

Still, fine-tuning the social categories “is not an easy task,” Keller notes.

To get an idea of how genes might produce the link between longevity and group living, Zhou and his team took brain tissue samples from 94 mammal species and analyzed the transcriptome — the full complement of RNA — giving insights into different genes’ activity levels. This can reveal whether genes are turned on or off, or how much protein the genes may be instructing cells to produce.

The researchers found 31 genes whose relative activity levels were correlated with both longevity and one of the three prescribed social categories. Many of these genes appear to have roles in the immune system, which may have importance when countering pathogens spreading through the social group. Other genes were associated with hormone regulation, including some thought to influence social behaviors.
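The cross-species screen described above — correlating a gene’s expression level with longevity across many species — can be sketched in a few lines. This is an illustrative toy only: the species values below are invented, and the actual study analyzed thousands of genes across 94 species while correcting for phylogenetic relatedness and multiple testing.

```python
from statistics import mean, stdev

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical per-species data: maximum life span (years) and one gene's
# relative expression level in brain tissue (arbitrary units).
lifespan   = [2.1, 4.0, 12.5, 30.2, 45.0, 60.0]
expression = [0.3, 0.5, 1.1, 2.0, 2.6, 3.4]

r = pearson(lifespan, expression)
print(f"correlation with longevity: r = {r:.2f}")
```

A gene whose expression tracks life span this tightly across species would be flagged as a candidate; the real analysis would then ask whether it also tracks the solitary/pair/group categories.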

In studying these genes in more detail, Zhou envisions uncovering more about how mammals’ social habits and life spans have evolved together.

The biblical warrior Goliath may not have been so giant after all

Early versions of the Bible describe Goliath — an ancient Philistine warrior best known as the loser of a fight with the future King David — as a giant whose height in ancient terms reached four cubits and a span. But don’t take that measurement literally, new research suggests.

Archaeological findings at biblical-era sites including Goliath’s home city, a prominent Philistine settlement called Gath, indicate that those ancient measurements work out to 2.38 meters, or 7 feet, 10 inches. That’s equal to the width of walls forming a gateway into Gath that were unearthed in 2019, according to archaeologist Jeffrey Chadwick of Brigham Young University in Provo, Utah.

Rather than standing taller than any NBA player ever, Goliath was probably described metaphorically by an Old Testament writer as a warrior who matched the size and strength of Gath’s defensive barrier, Chadwick said November 19 at the virtual annual meeting of the American Schools of Oriental Research.

People known as Canaanites first occupied Gath in the early Bronze Age, roughly 4,700 to 4,500 years ago. The city was rebuilt more than a millennium later by the Philistines, known from the Old Testament as enemies of the Israelites (SN: 11/22/16). Gath reached its peak during the Iron Age around 3,000 years ago, the time of biblical references to Goliath. Scholars continue to debate whether David and Goliath were real people who met in battle around that time.

The remains of Gath are found at a site called Tell es-Safi in Israel. A team led by archaeologist Aren Maeir of Bar-Ilan University in Ramat-Gan, Israel — who Chadwick collaborated with to excavate the Gath gateway — has investigated Tell es-Safi since 1996. Other discoveries at Gath include a pottery fragment inscribed with two names possibly related to the name Goliath. Evidence of Gath’s destruction about 2,850 years ago by an invading army has also been recovered.
Archaeologists have long known that in ancient Egypt a cubit corresponded to 52.5 centimeters and assumed that the same measure was used at Gath and elsewhere in and around ancient Israel. But careful evaluations of many excavated structures over the last several years have revealed that standard measures differed slightly between the two regions, Chadwick said.

Buildings at Gath and several dozen other cities from ancient Israel and nearby kingdoms of Judah and Philistia, excavated by other teams, were constructed based on three primary measurements, Chadwick has found. Those include a 54-centimeter cubit (versus the 52.5-centimeter Egyptian cubit), a 38-centimeter short cubit and a 22-centimeter span that corresponds to the distance across an adult’s outstretched hand.
Dimensions of masonry at these sites display various combinations of the three measurements, Chadwick said. At a settlement called et-Tell in northern Israel, for instance, two pillars at the front of the city gate are each 2.7 meters wide, or five 54-centimeter cubits. Each of four inner pillars at the city gate measures 2.38 meters wide, or four 54-centimeter cubits and a 22-centimeter span. Excavators of et-Tell regard it as the site of a biblical city called Bethsaida.

Chadwick’s 2019 excavations found one of presumably several gateways that allowed access to Gath through the city’s defensive walls. Like the inner pillars of et-Tell’s city gate, Gath’s gate walls measured 2.38 meters wide, or four cubits and a span, the same as Goliath’s biblical stature.
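The unit conversions behind these figures are easy to verify. A minimal sketch, using the three regional measures Chadwick reports (the helper function is just for illustration):

```python
# Ancient measures Chadwick reports for the region, in meters
CUBIT = 0.54        # local long cubit (vs. the 52.5 cm Egyptian cubit)
SHORT_CUBIT = 0.38  # short cubit
SPAN = 0.22         # width of an adult's outstretched hand

def length(cubits=0, spans=0, short_cubits=0):
    """Total length of a measurement expressed in the ancient units."""
    return cubits * CUBIT + spans * SPAN + short_cubits * SHORT_CUBIT

# et-Tell outer gate pillars: five cubits
print(round(length(cubits=5), 2))           # 2.7 meters
# Gath's gate walls -- and Goliath's "four cubits and a span"
print(round(length(cubits=4, spans=1), 2))  # 2.38 meters
```

The same 2.38-meter figure falls out of both the gate masonry and the biblical phrase, which is the heart of Chadwick’s argument.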

“The ancient writer used a real architectural metric from that time to describe Goliath’s height, likely to indicate that he was as big and strong as his city’s walls,” Chadwick said.

Although the research raises the possibility that Goliath’s recorded size referred to the width of a city wall, Chadwick “will need to do more research to move this beyond an intriguing idea,” says archaeologist and Old Testament scholar Gary Arbino of Gateway Seminary in Mill Valley, Calif. For one thing, Arbino suggests, it needs to be established that the measure applied to Goliath, four cubits and a span, was commonly used at the time as a phrase that figuratively meant “big and strong.”

Here’s why COVID-19 vaccines like Pfizer’s need to be kept so cold

Pfizer is racing to get approval for its COVID-19 vaccine, applying for emergency use authorization from the U.S. Food and Drug Administration on November 20. But the pharmaceutical giant faces a huge challenge in distributing its vaccine, which has to be kept at an ultrafrosty –70° Celsius, requiring special storage freezers and shipping containers.

It “has some unique storage requirements,” says Kurt Seetoo, the immunization program manager at the Maryland Department of Public Health in Baltimore. “We don’t normally store vaccines at that temperature, so that definitely is a challenge.”

That means that even though the vaccine developed by Pfizer and its German partner BioNTech is likely to be the first vaccine to reach the finish line in the United States, its adoption may ultimately be limited. The FDA’s committee overseeing vaccines will meet on December 10 to discuss the emergency use request. That meeting will be streamed live on the agency’s website and YouTube, Facebook and Twitter channels.

The companies are also seeking authorization to distribute the vaccine in Australia, Canada, Europe, Japan and the United Kingdom.

A similar vaccine developed by Moderna and the U.S. National Institute of Allergy and Infectious Diseases also requires freezing. But it survives at a balmier –20° C, so it can be kept in a standard freezer, and can even be stored at refrigerator temperatures for up to a month. Most vaccines don’t require freezing at all, but both Pfizer’s and Moderna’s vaccines are a new type of vaccine for which the low temperatures are necessary to keep the vaccines from breaking down and becoming useless.

Both vaccines are based on messenger RNA, or mRNA, which carries instructions for building copies of the coronavirus’ spike protein. Human cells read those instructions and produce copies of the protein, which, in turn, prime the immune system to attack the coronavirus should it come calling.

So why does Pfizer’s vaccine need to be frozen at sub-Antarctic temperatures and Moderna’s does not?

Answering that question requires some speculation. The companies aren’t likely to reveal all the tricks and commercial secrets they used to make the vaccines, says Sanjay Mishra, a protein chemist and data scientist at Vanderbilt University Medical Center in Nashville.

But there are at least four things that may determine how fragile an mRNA vaccine is and how deeply it needs to be frozen to keep it fresh and effective. How the companies addressed those four challenges is likely the key to how cold the vaccines need to be, Mishra says.

The cold requirement conundrum starts with the difference in chemistry between RNA and its cousin, DNA.
One reason RNA is much less stable than DNA is due to an important difference in the sugars that make up the molecules’ backbones. RNA’s spine is a sugar called ribose, while DNA’s is deoxyribose. The difference: DNA is missing an oxygen atom. As a result, “DNA can survive for generations,” Mishra says, but RNA is much more transient. “And for biology, that’s a good thing.”

When cells have a job to do, they usually need to call proteins into service. But like most manufacturers, cells don’t have a stockpile of proteins. They have to make new batches each time. The recipe for making proteins is stored in DNA.

Rather than risk damaging DNA recipes by putting them on the molecular kitchen counter while cooking up a batch of proteins, cells instead make RNA copies of the recipe. Those copies are read by cellular machinery and used to produce proteins.
Like a Mission Impossible message that self-destructs once it has been played, many RNAs are quickly degraded once read. Quickly disposing of RNA is one way to control how much of a particular protein is made. There are a host of enzymes dedicated to RNA’s destruction floating around inside cells and nearly everywhere else. Sticking RNA-based vaccines in the blast freezer prevents such enzymes from tearing apart the RNA and rendering the vaccine inert.

Another way the molecules’ stability differs lies in their architecture. DNA’s dual strands twine into a graceful double helix. But RNA goes it alone in a single strand that pairs with itself in some spots, creating fantastical shapes reminiscent of lollipops, hair pins and traffic circles. Those “secondary structures” can make some RNAs more fragile than others.

Yet another place that DNA’s and RNA’s chemical differences make things hard on RNA is the part of the molecules that spell out the instructions and ingredients of the recipe. The information-carrying subunits of the molecules are known as nucleotides. DNA’s nucleotides are often represented by the letters A, T, C and G for adenine, thymine, cytosine and guanine. RNA uses the same A, C and G, but in place of thymine it has a different letter: uracil, or U.

“Uracil is a problem because it juts out,” Mishra says. Those jutting Us are like a flag waving to special immune system proteins called Toll-like receptors. Those proteins help detect RNAs from viruses, such as SARS-CoV-2, the coronavirus that causes COVID-19, and slate the invaders for destruction.

All these ways mRNA can fall apart or get waylaid by the immune system create an obstacle course for vaccine makers. The companies need to ensure that the RNA stays intact long enough to get into cells and bake up batches of spike protein. Both Moderna and Pfizer probably tinkered with the RNA’s chemistry to make a vaccine that could get the job done: Both have reported that their vaccines are about 95 percent effective at preventing illness in clinical trials (SN: 11/16/20; SN: 11/18/20).

While the details of each company’s approach aren’t known, they both probably fiddled slightly with the chemical letters of the mRNAs in order to make it easier for human cellular machinery to read the instructions. The companies also need to add additional RNA — a cap and tail — flanking the spike protein instructions to make the molecule stable and readable in human cells. That tinkering may have disrupted or created secondary structures that could affect the RNA’s stability, Mishra says.
The uracil problem can be dealt with by adding a modified version of the nucleotide, which Toll-like receptors overlook, sparing the RNA from an initial immune system attack so that the vaccine has a better chance of making the protein that will build immune defenses against the virus. Exactly which modified version of uracil the companies may have introduced into the vaccine could also affect RNA stability, and thus the temperature at which each vaccine needs to be stored.

Finally, by itself, an RNA molecule is beneath a cell’s notice because it’s just too small, Mishra says. So the companies coat the mRNA with an emulsion of lipids, creating little bubbles known as lipid nanoparticles. Those nanoparticles need to be big enough that cells will grab them, bring them inside and break open the particle to release the RNA.

Some types of lipids stand up to heat better than others. It’s “like regular oil versus fat. You know how lard is solid at room temperature” while oil is liquid, Mishra says. For nanoparticles, “what they’re made of makes a giant difference in how stable they will be in general to [maintain] the things inside.” The lipids the companies used could make a big difference in the vaccine’s ability to stand heat.

The need for ultracold storage might ultimately limit how many people end up getting vaccinated with Pfizer’s vaccine. “We anticipate that this Pfizer vaccine is pretty much only going to be used in this early phase,” Seetoo says.

The first wave of immunizations is expected to go to health care workers and other essential employees, such as firefighters and police, and to people who are at high risk of becoming severely ill or dying of COVID-19 should they contract it, such as elderly people living in nursing facilities.

Pfizer has told health officials that the vaccine can be stored in special shipping containers that are recharged with dry ice for 15 days and stay refrigerated for another five days after thawing, Seetoo says. That gives health officials 20 days to get the vaccine into people’s arms once it’s delivered. But Moderna’s vaccine and a host of others that are still in testing seem to last longer at warmer temperatures. If those vaccines are as effective as Pfizer’s, they may be more attractive candidates in the long run because they don’t need such extreme special handling.

These plants seem like they’re trying to hide from people

Fritillaria plants should be simple to spot.

The usually bright green plants often stand alone amid the jumbled scree that tops the Himalayan and Hengduan mountains in southwestern China — easy pickings for traditional Chinese medicine herbalists, who’ve ground the bulbs of wild Fritillaria into a popular cough-treating powder for more than 2,000 years. The demand for bulbs is intense, since about 3,500 of them are needed to produce just one kilogram of the powder, worth about $480.

But some Fritillaria are remarkably difficult to find, with living leaves and stems that are barely distinguishable from the gray or brown rocky background. Surprisingly, this plant camouflage seems to have evolved in response to people. Fritillaria delavayi from regions that experience greater harvesting pressure are more camouflaged than those from less harvested areas, researchers report November 20 in Current Biology.

The new study “is quite convincing,” says Julien Renoult, an evolutionary biologist at the French National Centre for Scientific Research in Montpellier who wasn’t involved in the study. “It’s a nice first step toward demonstrating that humans seem to be driving the very rapid evolution of camouflage in this species.”
Camouflaged plants are rare, but not unheard of, says Yang Niu, a botanist at the Kunming Institute of Botany in China, who studies cryptic coloration in plants. In wide open areas with little cover, like mountaintops, blending in can help plants avoid hungry herbivores (SN: 4/29/14). But after five years of studying camouflage in Fritillaria, Niu found few bite marks on leaves, and he did not spot any animals munching on the plants. “They don’t seem to have natural enemies,” he says.

So Niu, his colleague Hang Sun and sensory ecologist Martin Stevens of the University of Exeter in England decided to see if humans might be driving the evolution of the plants’ camouflage. If so, the more heavily harvested a particular slope, the more camouflaged the plants that live there should be.

In an ideal world, to measure harvesting pressure “you’d have exact measures of exactly how many plants had been collected for hundreds of years” at multiple sites, Stevens says. “But that data is practically nonexistent.”

Luckily, at seven study sites, local herbalists had noted the total weight of bulbs harvested each year from 2014 to 2019. These records provided a measure of contemporary harvesting pressure. To estimate further back in time, the researchers assessed ease of harvesting by recording how long it took to dig up bulbs at six of those sites, plus an additional one. On some slopes, bulbs are easily dug up, but on others they can be buried under stacks of rocks. “Intuitively, areas where it’s easier to harvest should have experienced more harvesting pressure” over time, Stevens says.

Both measures revealed a striking pattern: The more harvested, or harvestable, a site, the better the color of a plant matched its background, as measured by a spectrometer. “The degree of correlation was really, really convincing for both metrics we used,” Stevens says.
Human eyes also had a harder time spotting camouflaged plants in an online experiment, suggesting that the camouflage actually works.
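A toy version of the background-matching measurement gives a feel for the analysis. The reflectance values below are invented, and `color_match` is a hypothetical stand-in for the spectrometer-based metric the researchers actually used:

```python
import math

def color_match(plant_spectrum, rock_spectrum):
    """Background-match score in [0, 1]: inverse of the Euclidean
    distance between plant and rock reflectance sampled at the same
    wavelengths. Higher means better camouflage. (Toy metric.)"""
    dist = math.dist(plant_spectrum, rock_spectrum)
    return 1.0 / (1.0 + dist)

# Hypothetical reflectance at a few wavelengths across 400-700 nm
rock        = [0.30, 0.32, 0.35, 0.33]
green_plant = [0.10, 0.45, 0.20, 0.15]  # stands out against the scree
camouflaged = [0.28, 0.33, 0.34, 0.31]  # blends into the background

print(color_match(green_plant, rock))  # low score
print(color_match(camouflaged, rock))  # high score
```

In the study, scores like these — computed per plant against its own slope’s rocks — were what correlated with each site’s harvesting pressure.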

Hiding in plain sight may present some challenges for the plant. Pollinators might have a harder time finding camouflaged plants, and the gray and brown coloration could impair photosynthetic activity. Still, despite those potential costs, these F. delavayi show just how adaptable plants can be, Stevens says. “The appearance of plants is much more malleable than we might have expected.”