Editors’ Highlights are summaries of recent papers by AGU’s journal editors.
Source: Journal of Geophysical Research: Earth Surface
Glacier ice is a crystalline material that flows across the Earth’s surface and is often close to the pressure-melting point. The way ice deforms is therefore an interplay of many factors including the temperature, grain size, and purity of the ice. Numerical models of ice flow are based on the Glen-Nye flow law (Glen’s Law)—a simple relationship between stress and strain rate in ice developed by John Glen and John Nye from laboratory experiments in the 1950s. Glen’s Law gives the strain rate (the rate of creep, or deformation flow, of ice) as the applied stress raised to the power of the exponent n, multiplied by the temperature-dependent constant A. The values for these parameters are empirical, and both linear and power-law forms of Glen’s Law have been proposed, although a value of 3 is typically used for n.
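For reference, the power-law form of Glen’s Law can be written as follows (a standard textbook statement of the flow law, not an equation reproduced from the paper):

```latex
% Glen-Nye flow law (power-law form):
% strain rate as a function of applied (deviatoric) stress
\[
  \dot{\varepsilon} = A(T)\,\tau^{n}
\]
% where \dot{\varepsilon} is the strain rate, \tau is the applied stress,
% A(T) is the temperature-dependent rate factor, and n is the flow-law
% exponent, conventionally taken to be 3.
```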
Lilien et al. [2026] use a flowline model to explore the impact of the choice of value for Glen’s n on the outcome of projections of ice sheet mass change, considering different values for A and different glacier sliding laws. They find that the relationship between n and glacier mass loss is complicated and varies depending on glacier type. For dynamically controlled glaciers, increasing n increases mass loss, as ice flows more rapidly into ablation areas. For surface mass balance–controlled glaciers, increasing n decreases mass loss, because ice flux decreases at the equilibrium line. The authors find that using a single value for Glen’s n is likely to lead to large uncertainties in projections of ice sheet change, and therefore studies of future ice sheet mass loss need to consider how the flow-law exponent varies spatially.
Citation: Lilien, D. A., Ranganathan, M., & Shapero, D. R. (2026). Effect of the flow-law exponent on ice-stream sensitivity to melt. Journal of Geophysical Research: Earth Surface, 131, e2025JF008726. https://doi.org/10.1029/2025JF008726
Source: Journal of Geophysical Research: Solid Earth
Magnetic rocks with iron oxide concentrations act as natural chroniclers of Earth’s past continental movements. Using small samples of rocks, scientists can isolate magnetic grains that were frozen in orientation as the rock solidified. The magnetization of these grains acts as a miniature compass needle, pointing toward ancient magnetic poles. This same principle applies to extraterrestrial samples, such as meteorites and lunar rocks, which preserve evidence of the early solar nebula’s evolution.
However, traditional bottle cap–sized bulk samples often contain a mixture of reliable and unreliable magnetic signals, resulting in complex data that hamper interpretation. To improve accuracy, researchers have turned to magnetic microscopy. This technique maps magnetic fields at submillimeter to submicrometer scales in thinly sliced rock sections using advanced tools like a quantum diamond microscope (QDM) or a cryogenic superconducting quantum interference device microscope. By creating high-resolution maps of individual magnetic particles, scientists can reconstruct ancient fields with much higher precision while filtering out noisy signals from unstable grains.
Despite its potential, magnetic microscopy is an emerging field with its own set of uncertainties. To help constrain measurement data, Bellon et al. combined QDM observations with computer modeling to analyze how a magnetic particle’s stray field—the magnetic flux that leaks into the surrounding space—decays with distance from the source. They specifically investigated how a particle’s internal magnetic structure and external measurement noise affect the accuracy of these reconstructions.
The study found that in iron oxides, the smallest and most magnetically stable particles produce signals that are strong at the source but fade rapidly with distance. In contrast, larger particles produce signals that remain detectable farther away. This creates a challenge: The most stable grains for long-term geological data (the smallest ones) are the hardest to detect if the sensor is not perfectly positioned or if sensor interference is present.
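To see why particle size matters so strongly, consider a point-dipole approximation of a uniformly magnetized grain, in which the on-axis stray field falls off with the cube of distance from the grain’s center. The sketch below is a back-of-the-envelope illustration, not code from the study; the magnetite saturation magnetization, grain sizes, and sensor standoffs are assumed values chosen for illustration.

```python
import numpy as np

MU0 = 4e-7 * np.pi    # vacuum permeability (T*m/A)
MS_MAGNETITE = 4.8e5  # assumed saturation magnetization of magnetite (A/m)

def dipole_bz(radius_m, standoff_m, ms=MS_MAGNETITE):
    """On-axis stray field (in teslas) of a uniformly magnetized sphere,
    treated as a point dipole at its center. The sensor sits standoff_m
    above the particle surface, so the field decays as 1/r^3 measured
    from the center: B = mu0 * 2m / (4 * pi * r^3)."""
    r = radius_m + standoff_m                        # distance from center
    moment = ms * (4.0 / 3.0) * np.pi * radius_m**3  # dipole moment (A*m^2)
    return MU0 * 2.0 * moment / (4.0 * np.pi * r**3)

# Compare a small, magnetically stable grain with a much larger grain.
for radius in (50e-9, 5e-6):              # 100 nm vs. 10 um diameter
    for standoff in (1e-6, 5e-6, 20e-6):  # sensor-to-surface distances
        bz_nt = dipole_bz(radius, standoff) * 1e9
        print(f"radius {radius*1e9:6.0f} nm, standoff {standoff*1e6:4.0f} um:"
              f" Bz = {bz_nt:.3g} nT")
```

In this approximation, the two grains produce comparable fields right at their surfaces, but a few micrometers of extra standoff pushes the small grain’s signal toward the sensor noise floor while the large grain remains easily detectable, consistent with the trade-off the authors describe.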
By quantifying measurement error, the authors provide a road map for the field of micropaleomagnetism. Their findings could allow researchers to better account for uncertainty, leading to more robust reconstructions of Earth’s magnetic history and a deeper understanding of planetary evolution. (Journal of Geophysical Research: Solid Earth, https://doi.org/10.1029/2025JB033133, 2026)
—Aaron Sidder, Science Writer
Citation: Sidder, A. (2026), Navigating the past with ancient stone compass needles, Eos, 107, https://doi.org/10.1029/2026EO260122. Published on 16 April 2026.
Lead-acid batteries are omnipresent. An integral part of most electric vehicles and all conventional vehicles globally, they also serve as backup energy storage systems in developing countries. But if lead-acid batteries are recycled in smelting units without adequate pollution control measures, they can cause elevated lead pollution that persists in local soils for thousands of years. However, because recycling sites with pollution control measures cost millions of dollars, most recycling operations are informal and unregulated.
In a recent study, researchers reported that scraping lead-contaminated soil in the vicinity of an abandoned recycling site for used lead-acid batteries and treating it with phosphate was linked to a 22% reduction in the blood lead levels (BLLs) of children who were living close to that site in a Bangladeshi town. The research was published in the International Journal of Hygiene and Environmental Health.
“Informal battery recycling is rampant in Bangladesh,” said study coauthor Mahbubur Rahman, an environmental health scientist at the International Centre for Diarrhoeal Disease Research, Bangladesh. “Used lead-acid batteries are broken up and smelted in close proximity to residential and agricultural areas, which exposes those communities to lead emissions that contaminate their soil and water sources.”
Rahman and colleagues analyzed the BLLs of 130 children living close to two used lead-acid battery (ULAB) recycling sites in the Tangail District of Bangladesh that were abandoned in early 2019. They also assessed the BLLs of 37 children who did not live anywhere near ULAB recycling sites. The researchers then carried out soil remediation efforts at one of the ULAB sites but not the other. Prior to the work, the team members held informational sessions for the community about the dangers of lead pollution so locals could provide informed consent to participate.
The team observed that following remediation efforts, the lead content of the soil in and around the former battery recycling site decreased from more than 20,000 parts per million to less than 400 parts per million, the level considered acceptable by the U.S. Environmental Protection Agency (EPA) when the study was conducted, from 2022 to 2023. (The EPA reduced the limit to 200 parts per million in 2024.)
The researchers collected and cleaned up soil from children’s play areas, roadsides, and courtyards of 68 households that belonged to the intervention group. A year after the lead-contaminated soil was cleaned up, the 89 children from those households had the most significant decreases in their BLLs: from 90.1 to 70.4 micrograms per liter, a decrease of more than 21%.
The children who lived close to the second abandoned ULAB recycling site, where soil remediation was not conducted, experienced only about an 8.4% decrease in their BLLs, from 88.5 to 81.1 micrograms per liter. The reduction in the control group’s BLLs could be attributed to a government initiative focused on reducing lead levels in turmeric, which was underway over the same time period as the study, Rahman said.
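The percent reductions quoted above follow directly from the before-and-after means; here is a quick check, using the values as reported in the article (in micrograms per liter):

```python
# Recompute the reported percent reductions in mean blood lead levels.
groups = {
    "intervention (soil remediated)": (90.1, 70.4),
    "control (no remediation)":       (88.5, 81.1),
}
for name, (before, after) in groups.items():
    pct = 100.0 * (before - after) / before
    print(f"{name}: {before} -> {after} ug/L ({pct:.1f}% reduction)")
# -> about 21.9% for the intervention group and 8.4% for the control group
```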
Anne Riederer, an environmental health scientist at the University of Washington who was not involved in the new study, said the dangers of lead exposure from ULAB recycling sites are well documented.
“We know for sure that the areas close to abandoned ULAB recycling sites are as contaminated as areas around abandoned lead mines. This study fits with the bigger picture of what we have learned to date about cleaning up contaminated sites and how that could improve children’s health,” she said.
A Widespread Issue
Similar studies conducted in Brazil and Bangladesh reported 46% and 35% reductions, respectively, in children’s BLLs following soil remediation initiatives around ULAB recycling sites.
Despite those drastic improvements, the children’s BLLs were still far above the World Health Organization’s threshold of 50 micrograms per liter. “This could mean there are other sources of lead exposure, like paints and cookware items,” said Rahman. “Or the persistently high BLLs could be because of chronic and long-term lead exposure, due to which lead gets deposited deep into the bones for several decades, even if [people] move away from toxic sites.”
Rahman explained that while soil remediation is an effective mitigation measure for lowering childhood lead exposure, it is also labor-intensive and expensive. Though the team identified hundreds of toxic sites created by informal ULAB recycling, it was not possible for them to remediate the soil at every site.
“The reason why this issue is so widespread is [that] informal recycling is cheap,” he said. “That makes the formal sector reluctant to invest in costly pollution control measures.”
—Anuradha Varanasi, Science Writer
Citation: Varanasi, A. (2026), Cleanup of battery recycling sites may lower childhood lead exposure, Eos, 107, https://doi.org/10.1029/2026EO260120. Published on 15 April 2026.
Source: AGU Advances
Models of glacial flow and retreat rely on estimates of glacial ice viscosity, the measure of the ice’s resistance to flow.
Ice viscosity is dependent on the stress applied to the glacier. Most ice sheet models use a standard equation to model ice flow that includes the variable n, called the stress exponent. A larger value of n means ice viscosity is more sensitive to changes in stress. For decades, glaciologists have, almost exclusively, used an assumed n value of 3 in the models they use to predict ice flow.
However, through recent experiments and observations, researchers have found that an n value of 4 may actually better represent the conditions of Earth’s ice sheets and glaciers.
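The practical difference between n = 3 and n = 4 is easy to see from the flow law itself, in which deformation scales with stress raised to the power n. The snippet below is a minimal illustration of that sensitivity, not code from the study:

```python
def strain_rate_factor(stress_factor: float, n: int) -> float:
    """Relative change in Glen's Law strain rate when the driving
    stress is multiplied by stress_factor, holding the rate factor
    A fixed (strain rate ~ stress**n)."""
    return stress_factor ** n

# A 10% increase in driving stress speeds up deformation by ~33%
# under n = 3, but by ~46% under n = 4.
for n in (3, 4):
    print(f"n = {n}: 10% more stress -> "
          f"{strain_rate_factor(1.1, n):.2f}x the strain rate")
```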
Martin et al. created a model representation of the fast-retreating Pine Island Glacier in West Antarctica. The ice sheet in their model had a true n value of 4, but they ran model projections using both n = 4 and n = 3. That allowed them to observe how their model would incorrectly predict glacial flow and resulting sea level change, given an incorrect n value.
The researchers modeled glacial retreat for 100 years under both exponent values and two different glacial melting scenarios. They then modeled glacial recovery for another 300 years. Under a moderate scenario, the n = 3 model underestimated glacial retreat by 18% and sea level change contributions by 21%. Under an extreme melting scenario, the n = 3 model underestimated sea level contributions by 35%.
Notably, the gap between the two models’ predictions of glacial retreat and sea level contributions grew disproportionately from the moderate to the extreme scenario, potentially increasing the level of uncertainty in current projections of sea level change. The researchers also suggest that errors arising from incorrect n values may be mistakenly attributed to other physical processes in current ice sheet models.
The results could have far-reaching implications for predictions of future glacial melt and may prompt investigations into its effects on sea level, the authors say. (AGU Advances, https://doi.org/10.1029/2025AV001946, 2026)
—Madeline Reinsel, Science Writer
Citation: Reinsel, M. (2026), Glaciers may flow into the ocean more quickly than we think, Eos, 107, https://doi.org/10.1029/2026EO260107. Published on 14 April 2026.
In the winter of 923 CE, a magnitude 7.5 earthquake struck the heart of Puget Sound. Shorelines slid into the water, the seafloor rose up, and a tsunami swept through the region.
The Seattle fault zone, actually a mesh of faults that runs right under its eponymous city, was responsible for this quake. The fault continues to pose one of the deadliest threats to the Pacific Northwest; if a similar quake were to hit today, it would threaten millions of lives and cause billions of dollars in damage.
Two new papers dig into recurrence intervals, or the quiescent periods between earthquakes, for the Seattle fault zone. They offer good news and bad news: One study, published in Geology, found that in the past 11,000 years, the massive 923 event was the only quake of magnitude 7.5 or greater. The other study, published in GSA Bulletin, found that smaller, but still damaging, quakes occur more frequently than previously thought.
The new research indicates the worst-case scenario of frequent 923-style events is less likely than some scientists thought, said Harold Tobin, a geophysicist at the University of Washington and head of the Pacific Northwest Seismic Network, who was not involved in either study. But researchers also found that “the less worse, but still bad scenarios” are more likely than previously thought.
Meet the Seattle Fault
The Seattle fault zone is a thrust fault system that stretches about 75 kilometers (46 miles) from the foothills of the Cascades east of Seattle to the Hood Canal, which runs along the shores of the Olympic Peninsula to the city’s west, passing under Seattle along the way.
Geologists began rigorously exploring the fault system in the early 1990s, intrigued by gravitational anomalies, uplifted marine terraces (stair-step geological formations along coastlines), and evidence of a roughly 1,000-year-old tsunami. All these features hinted at a major, shallow earthquake on a local fault zone—likely the 923 event.
But “for a fault that has had so much attention, there’s so much we still don’t know,” said Elizabeth Davis, an earthquake geologist at the University of Washington who led the Geology study.
The most pressing questions are how big quakes on the fault get, how often they hit, and, ultimately, what risks the fault poses to people who live in the Puget Sound area.
“It takes some real geologic sleuthing to get at those tough questions,” Tobin said.
Biggest Seattle Fault Quakes Are Rare
Davis focused on the activity of the main fault, which can generate the biggest quakes in the Seattle fault zone complex. It was responsible for the 923 quake. But the existing record went back only about 5,000 years.
“We just don’t know what the recurrence interval for these big quakes is,” Davis said. “We wanted to lengthen the record.”
To do so, Davis and her collaborators turned to marine terraces, the oldest of which date back to the end of the last ice age about 11,000 years ago. The quake in 923 raised terraces by about 8 meters (26 feet), and scientists wanted to look for similar-scale uplift in terraces all around the sound.
The researchers mapped more than 150 terraces around Puget Sound and measured their depths. After accounting for regional slopes, they estimated uplift over time that could have been caused by quakes.
They found that in that 11,000-year period, only the 923 event generated significant uplift. Thick sediment mantles could mask smaller events but not 923-scale quakes, Davis said.
Estimating true recurrence intervals requires knowing the timing of multiple events. But the finding is “not bad news,” she said. It provides some evidence that the recurrence interval is likely not shorter than about 5,000 years.
“That could give us more of a buffer between now and when the next big one like that will happen,” said Stephen Angster, a U.S. Geological Survey geologist who led the GSA Bulletin study.
Smaller, Damaging Quakes Are More Frequent
Angster’s work focused on Seattle’s secondary faults, which are smaller, mostly blind faults (those not visible at the surface) capable of generating damaging earthquakes. Previous work had shown that one of these secondary faults generated a magnitude 6.7 earthquake, highlighting the risk they pose. Angster wanted to explore rupture histories of these secondary faults, particularly whether they could rupture independently from the main fault.
The researchers used a suite of paleoseismic tools, including magnetic data, field and lidar mapping, trenches dug across faults, and geochronology. They studied two newly identified secondary faults that have orientations similar to the main fault.
They found three new earthquakes to add to the region’s seismic history, including the oldest and youngest events in the known record, which were around 11,000 years ago and in the early 1800s, respectively. The earthquakes appear to be evidence of ruptures that occurred independently of the main fault, suggesting that the smaller—but still dangerous—secondary faults should be considered in hazard modeling.
With that lengthened record and the addition of three quakes, the recurrence interval the researchers found was about every 350 years over the past 2,500 years. This timing refined the previous estimate of every several hundred years.
There also appears to be an increase in activity over the past 2,000 years.
“Maybe we should be paying attention to that,” Angster said.
What Happens Next
“These are both carefully done studies,” Tobin said. “We now have evidence that the 923 event was the biggest in 11,000 years. But there are other earthquakes that aren’t as big but that occur more frequently. Those might not be as catastrophic, but it would be a very bad scenario for Seattle” if such events occurred.
It’s still to be determined whether the risk from secondary faults will be incorporated into the National Seismic Hazard Model, which includes the 923 quake but not smaller ones along the Seattle fault zone. The secondary faults were left out in previous efforts because they are shorter than the minimum length required to be included and because of uncertainties in their potential rupture magnitude.
—Rebecca Dzombak, Science Writer

Citation: Dzombak, R. (2026), On the Seattle Fault, the biggest quakes aren’t the most likely, Eos, 107, https://doi.org/10.1029/2026EO260114. Published on 14 April 2026.
Editors’ Highlights are summaries of recent papers by AGU’s journal editors.
Source: Tectonics
Scientific progress rarely follows a straight path. Instead, it develops through open discussion, critical evaluation, and the testing of new ideas. A recent exchange between a paper’s authors and their colleagues illustrates how this process unfolds in modern Earth sciences and provides a valuable example of constructive scientific debate.
At the center of the discussion lies a fundamental question about one of Earth’s most remarkable geological features: how did the Himalaya and the Tibetan Plateau become the highest and largest mountain system on the planet?
In their paper “Raising the Roof of the World: Intra-Crustal Asian Mantle Supports the Himalayan–Tibetan Orogen,” Sternai et al. [2025] address this question using numerical geodynamic modeling. These computer simulations reproduce the physical behavior of large rock masses deep inside the Earth and allow researchers to investigate the long-term evolution of this vast orogenic system.
Their study specifically explores the possibility that, during the collision between the Indian and Asian plates, layers of mechanically strong Asian mantle rock became embedded within the thickened Indian continental crust beneath the Tibetan Plateau. According to this hypothesis, these mantle layers could help sustain the elevation of the Plateau by effectively withstanding stresses over long geological timescales: the Indian crust would provide buoyancy (raising the roof), while the Asian mantle would contribute mechanical strength to support the Himalayan–Tibetan topography.
Hetényi and Cattin [2026] challenge this interpretation in their Comment. Drawing on a large body of well-established geophysical and geological observations, they argue that the structure beneath southern Tibet is better explained by underthrusting, the process by which the Indian plate slides beneath the Tibetan Plateau. Seismic imaging studies, including receiver-function analyses that use earthquake waves to map subsurface structures, consistently reveal features interpreted as Indian crust and upper mantle extending far north beneath Tibet.
In their Reply, Sternai and colleagues clarify that their models were not intended to accurately reproduce the present-day structure of the region in detail. Instead, they were designed as process-oriented experiments to test whether existing and/or alternative mechanisms for crustal thickening and plateau support are mechanically and rheologically viable.
This exchange highlights an important aspect of contemporary geoscience—observations of Earth’s interior such as seismic images, gravity data, and geological records often allow multiple, non-unique interpretations. Numerical modeling provides a complementary approach by evaluating whether proposed geological mechanisms are physically plausible.
Equally significant is the tone of the discussion itself. The Comment and Reply show how scientists, while strongly disagreeing about interpretations, can maintain a constructive and respectful dialogue. Such an approach fuels scientific advance by encouraging the community to re-examine established assumptions, refine models, and integrate new observations.
Debates like this one, therefore, extend well beyond a specific geological question. They illustrate how scientific understanding advances through the interplay of observations, theoretical reasoning, and modeling experiments.
In this way, the dialogue highlighted here contributes not only to our understanding of the Himalayan–Tibetan mountain system but also to the broader methodology of Earth science.
Citations
Sternai, P., Pilia, S., Ghelichkhan, S., Bouilhol, P., Menant, A., Davies, D. R., et al. (2025). Raising the roof of the world: Intra-crustal Asian mantle supports the Himalayan-Tibetan orogen. Tectonics, 44, e2025TC009057. https://doi.org/10.1029/2025TC009057
Hetényi, G., & Cattin, R. (2026). Comment on “Raising the roof of the world: Intra-crustal Asian mantle supports the Himalayan-Tibetan orogen” by Sternai et al. Tectonics, 45, e2025TC009214. https://doi.org/10.1029/2025TC009214
Sternai, P., Pilia, S., Ghelichkhan, S., Bouilhol, P., Menant, A., Ostorero, L., et al. (2026). Reply to comment by Hetényi and Cattin on: “Raising the roof of the world: Intra-crustal Asian mantle supports the Himalayan-Tibetan orogen”. Tectonics, 45, e2026TC009436. https://doi.org/10.1029/2026TC009436