Research & Developments is a blog for brief updates that provide context for the flurry of news that impacts science and scientists today.
In the contiguous United States, crop irrigation, municipal water supplies, and thermoelectric power generation use more than 224 billion gallons of fresh water every day. Until now, conducting water research or making decisions about water use often required referencing datasets across various agencies. The U.S. Geological Survey (USGS) National Water Availability Assessment Data Companion (NWDC), announced this week, aims to streamline this process. In part, the tool is designed to help decisionmakers better understand how the balance between high demand and limited supply affects water availability in their communities.
“While the United States has abundant water nationally, regional imbalances between supply and demand may create water challenges affecting millions of Americans,” said lead scientist Shirley Leung in a USGS press release. “What once required significant resources and time can now be done in minutes, giving communities of all sizes the same foundation for water planning.”
The lower 48 states are home to about 80,000 sub-watersheds, from those in the arid southwest to the Great Lakes Basin, where about 84% of North America’s surface fresh water is located. According to the USGS, the NWDC is the first tool that integrates information about water availability in individual watersheds at a national scale.
The tool is designed to complement Water Data for the Nation (WDFN), another USGS product that consolidates observational data from the agency’s thousands of local monitoring stations gathering data on streams, lakes, reservoirs, precipitation, water quality, and groundwater. The new tool uses modeling to fill in spatial and temporal gaps between the observations made at these stations.
Water managers, researchers, agricultural experts, and others can use the NWDC to compare watershed conditions, identify seasonal patterns in water use, or create data visualizations of statewide water use, for example. Though the tool currently covers only the contiguous United States, it will soon be extended to Alaska, Hawaii, and Puerto Rico, according to the USGS.
David Tarboton, a professor of civil engineering at the Utah Water Research Laboratory, said he was “intrigued” by the new tool, and is interested in examining the data its model produces.
While Tarboton was disappointed that the tool’s most recent available data are from 2020, “having a sort of integrated, wall-to-wall dataset that’s consistently produced is very valuable,” he said. He works, in part, in the areas of hydroinformatics and data sharing, and noted that the modern methods the agency is using to share the data could be useful in developing automated tools.
These updates are made possible through information from the scientific community. Do you have a story about science or scientists? Send us a tip at eos@agu.org.
Editors’ Vox is a blog from AGU’s Publications Department.
The supply of magma from the Earth’s mantle is a primary source of heat to volcanic arc crust, where the heat is then dissipated through various processes. Much of this magmatic heat is dissipated as heated water, or aqueous fluid.
A new article in Reviews of Geophysics compares 11 different volcanic-arc segments where heat discharge via aqueous fluid has been well-inventoried to better understand the factors that influence this process. Here, we asked the authors to give an overview of heat discharge from volcanic arcs, how scientists measure it, and what questions remain.
Why is it important to study how heat is dissipated from volcanic arcs?
The heat from these magmas matters for geothermal energy, patterns of groundwater flow, and the patterns of volcanic activity at the surface.
Volcanic arcs are the chains of volcanoes on top of subduction zones. They can produce some of Earth’s most explosive and hazardous eruptions. But much of the magma beneath the surface never erupts. Nevertheless, the heat from these magmas—and the simple fact of their existence and abundance—still matters for geothermal energy, patterns of groundwater flow, and the patterns of volcanic activity at the surface.
What are the main modes in which heat is discharged from volcanic arcs?
Heat at volcanic arcs can be carried by magmas, transmitted through the crust conductively, and carried by water seeping slowly through the crust. At the base of the crust, magmas are probably most important, with conduction coming in second. But as magmas move upwards through the crust, some of them solidify and impart their heat to their surroundings where it is transferred by conduction. Within a few kilometers of the surface, fluids seeping through the crust begin to take up all that heat, and so if we can quantify the heat carried by those fluids, we can retrace it to its origins in magmas.
How do scientists measure these different forms of heat loss?
Scientists measure the heat carried by erupting magmas using satellites, or by adding up the erupted masses and making an estimate of how much energy was released by cooling from eruption temperatures to solid igneous rocks at Earth’s surface. Conductive heat flow is measured by drilling holes in the Earth’s crust to see how quickly it gets hotter with depth.
Measuring the heat carried by aqueous fluids in the crust is in some ways the trickiest. One approach is to find all the springs where hot or even slightly warm water is trickling out and measure the temperature and discharge to estimate how much extra heat that water is carrying.
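As a rough illustration of the bookkeeping involved (a simplified sketch with illustrative symbols, not expressions taken from the review), the heat released by a mass of erupted magma and the heat advected by a warm spring can be written as
\[ Q_{\mathrm{magma}} \approx M\left[c_p\,(T_e - T_0) + L\right], \qquad Q_{\mathrm{spring}} \approx \rho_w\, c_w\, q\, \Delta T, \]
where \(M\) is the erupted mass, \(c_p\) the magma’s specific heat, \(T_e\) and \(T_0\) the eruption and ambient temperatures, \(L\) the latent heat of crystallization, \(\rho_w\) and \(c_w\) the density and specific heat of water, \(q\) the spring’s volumetric discharge, and \(\Delta T\) its temperature excess over background water. Summing the spring term over all the springs in a region gives the kind of hydrothermal heat discharge inventory discussed below.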
What are the challenges and uncertainties in measuring hydrothermal heat discharge?
One challenge is that many springs are only slightly warmer than you’d expect. There are good data for many hot springs, but data tracking these ‘slightly warm’ springs exist for only a subset of arcs. Another challenge is that warm underground fluids can flow laterally, so you have to try to account for that. One additional big uncertainty for our study, which tried to quantify the proportion of magmas that freeze underground versus erupting, lies not in hydrothermal discharge itself but in the estimates of how much magma has actually erupted through time.
What are some of the factors that influence hydrothermal heat loss?
A major goal of our paper is to try to quantify these hidden magmas.
A main factor that influences hydrothermal heat loss is the magmas that solidify underground. This link is the key motivation for our study. A major goal of our paper is to try to quantify these hidden magmas—how much magma needs to intrude the crust beneath the surface to supply the hydrothermal heat fluxes that we observe? The composition of magmas influences how much heat they can release. The depth at which magmas are emplaced also matters, because magmas that intrude the shallow crust eventually cool to lower temperatures than magmas emplaced in the lower crust and therefore release more heat.
What are the remaining questions or knowledge gaps where additional research efforts are needed?
A big outstanding challenge is combining estimates from hydrothermal data of how much magma is coming into the crust – like ours – with other approaches to estimating the same thing. The magmatic systems beneath volcanoes span the crust. At the base of the crust, you have magma supply, sort of like the water main feeding your plumbing system. Despite how fundamental magma supply is, we know remarkably little about it. It’s exciting to think about how the rates of magma supply could vary through time and space and why. Applying a range of techniques—including geophysical imaging, hydrothermal budgets, gas and igneous geochemistry, and petrology—to understand these questions could really be a game changer.
Editor’s Note: It is the policy of AGU Publications to invite the authors of articles published in Reviews of Geophysics to write a summary for Eos Editors’ Vox.
Citation: Black, B. A., S. E. Ingebritsen, and K. Sawayama (2026), Hydrothermal heat flow as a window into subsurface arc magmas, Eos, 107, https://doi.org/10.1029/2026EO265017. Published on 28 April 2026.
This article does not represent the opinion of AGU, Eos, or any of its affiliates. It is solely the opinion of the author(s).
A critical ocean current that regulates Antarctica’s climate may have formed only once continents separated and winds aligned with new ocean passageways, according to a new study published in the Proceedings of the National Academy of Sciences of the United States of America.
Today, the Antarctic Circumpolar Current transports more than 100 times as much water as all of Earth’s rivers combined and, critically, insulates the Antarctic Ice Sheet from heat at lower latitudes. A clear picture of the origins of this current can help scientists further understand the relationships between contemporary ocean dynamics, the global climate, and ice formation in Antarctica.
“It’s very interesting to learn more about this current, how it developed, and what role it played in the climate change that was happening at that time,” said Hanna Knahl, a paleoclimatologist and doctoral student at the Alfred-Wegener-Institut in Germany and lead author of the new study.
The Birth of a Current
About 34 million years ago, Earth was undergoing a climatic shift, now known as the Eocene-Oligocene transition, during which atmospheric carbon dioxide decreased and the planet cooled.
Earth’s tectonic plates in the Southern Ocean moved away from each other, opening and deepening bodies of water such as the Tasmanian Gateway and the Drake Passage, which separate Antarctica, Australia, and South America.
For years, scientists hypothesized that the alignment of these newly formed waterways, along with westerly winds, could have channeled ocean water and spurred the formation of the Antarctic Circumpolar Current.
“The exact position of the westerly winds and their relative position to the [ocean] gateways have to click together.”
To test that hypothesis, Knahl and her colleagues simulated conditions of the early Oligocene Southern Ocean with a coupled model that included ocean dynamics, atmosphere and wind patterns, temperatures, ice sheet growth, and precipitation. The research team compared these simulations to data from actual Antarctic sediment cores and scans of the ocean floor.
Results confirmed that westerly winds were necessary for the Antarctic Circumpolar Current to form.
“The exact position of the westerly winds and their relative position to the [ocean] gateways have to click together,” Knahl said.
Joanne Whittaker, a marine geophysicist at the University of Tasmania who was not involved in the new study, was a coauthor of a 2015 study that proposed westerly wind alignment played a role in the formation of the current. Knahl’s study presents a more sophisticated model of the early Oligocene Southern Ocean and is a great next step in the investigation of the current’s origins, Whittaker said.
“They did a really nice job of taking a range of different people’s work and linking it all together,” she said.
Oligocene Understandings
“If you can have a model that works in the past, it’s going to give you confidence that it’s going to work for the future, as well.”
Scientists often use Earth’s past behavior to better understand how Earth systems may behave in the present or future. “If you can have a model that works in the past,” Whittaker explained, “it’s going to give you confidence that it’s going to work for the future, as well.”
The Eocene-Oligocene transition is a key to understanding the relationship between atmospheric carbon, ocean dynamics, and the glaciation of Antarctica, Whittaker said. Knowing how the current’s behavior affected carbon uptake millions of years ago helps scientists model how the present current’s behavior might also affect atmospheric carbon.
In addition to carbon uptake, the new research hints at how changes in westerly winds may influence the advance and retreat of the Antarctic Ice Sheet. Some modeling and proxy data indicate the westerly winds that spurred the Antarctic Circumpolar Current’s formation 34 million years ago have shifted in the past century and may continue to shift in the future. Understanding the role these winds initially played in the current’s development may shed light on the current’s present ability to guard the Antarctic Ice Sheet from warmer air masses.
There are still Oligocene patterns that require more research to sort out, though. For example, modeling in the new study showed interesting asymmetries in the timing of the development of different parts of the Antarctic Circumpolar Current, Knahl said. Scientists know from proxy data and modeling that similar asymmetry exists in the history of the Antarctic Ice Sheet; the ice sheet in East Antarctica began to form about 7 million years before the one in West Antarctica.
“It could be interesting to see if there’s a connection between the asymmetries that we see here,” Knahl said. “Are they linked, or were they more or less independent?”
Citation: van Deelen, G. (2026), Widening channels and westerly winds together formed Earth’s strongest current, Eos, 107, https://doi.org/10.1029/2026EO260126. Published on 24 April 2026.
This story was produced by Grist and the Food & Environment Reporting Network, a nonprofit news organization. Sign up for Grist’s weekly newsletter here.
Will Runion’s 736-acre cattle and hay farm is tucked into a horseshoe bend of the Nolichucky River in northeast Tennessee. On the morning of Friday, September 27, 2024, he was in the middle of two big projects: building a riverfront campground on his land to bring in tourists and income, and cutting the last of the season’s hay. Hurricane Helene had been arcing up from Florida toward the Appalachian Mountains, carrying heavy rain, and the river was high. Even though the banks seemed to be holding, he decided to move some of his cows and equipment to higher ground.
But the river kept rising. At about 11 a.m., the brown water topped its banks. He and his fiancée, his son-in-law’s parents, and neighbors scrambled to salvage what farm equipment they could, but they were nearly trapped when the quickly expanding river flowed into a low-lying area behind where they were working, cutting them off from dry land.
By afternoon, the river had swollen to some 1,200 feet wide—nearly 10 times its usual size. It “looked just like a lake,” Runion said. Trees snapped in the swift current and neighbors’ barns, roofs, hay bales, and household debris swirled by. The water swallowed Runion’s hay equipment and sent the little white house he’d planned to use as the new campground’s office sailing across a field.
At around 8 p.m., the Nolichucky finally crested and started to recede. Runion found a third of his fields covered in debris, dead fish, and tomatoes from upstream vegetable growers. The flood had gouged two holes the size of football fields in his hay pastures, down to a depth of 12 feet. Other sections of the farm were buried in up to 8 feet of sand or silt.
Flooding from Hurricane Helene brought massive damage to Will Runion’s farm, eroding the land in some places and washing up feet of sand on agricultural fields in other sections. Courtesy of Bryan LeBarre, via Grist
Helene dropped up to 30 inches of rain on southern Appalachia, causing historic flooding and landslides in parts of North Carolina, South Carolina, Tennessee, Georgia, Kentucky, and Virginia—a largely rural region where agriculture is a vital economic driver and cultural cornerstone. The mountains make it hard to spread out here, so farms tend to be small, and many growers use flood-prone bottomland because it is flat and fertile. But floods of this magnitude hadn’t hit here in generations. In North Carolina alone, Helene caused an estimated $4.9 billion in damage to the state’s agriculture sector. In Tennessee, agricultural losses were estimated at $1.3 billion. Thousands of farmers lost crops, tools, machinery, barns, buildings, animals, and fences.
“When you see 4 feet of sandy soils on top of your topsoil, you know that’s going to be a challenge. That was overwhelming.”
More than a year later, growers are also contending with the loss of something more vital, and more difficult to replace: their soil.
Runion knew immediately that his livelihood was ravaged. Without good soil, a farmer can’t farm. “When you see 4 feet of sandy soils on top of your topsoil, you know that’s going to be a challenge,” he said. “That was overwhelming.”
He sent drone footage of the damage to Forbes Walker, an environmental soil specialist with University of Tennessee Extension. “How do you fix this?” he asked.
“I don’t know,” Walker recalled thinking when he got Runion’s email. “How do we fix this?”
Over millennia, floods helped build the fertile land that farmers depend on. But today, climate change is driving more powerful and unpredictable storms. One study found that rainfall associated with Helene was 10 percent heavier due to man-made climate change. Research by the U.S. National Science Foundation suggests that what scientists call “100-year storms” will become three times more likely, and 20 percent more severe, over the next 50 years. What’s more, there’s little solid information about what happens to soil during a flood, or what to do when a farm’s soil is eroded or covered with material from elsewhere—its nutrients washed away and microbial communities disrupted. It’s a blind spot that is becoming more of a liability as storms like Helene become more common.
“None of us had ever seen anything like this before or responded to an emergency like that,” said Stephanie Kulesza, a nutrient and soil scientist at North Carolina State University. “And so we weren’t really prepared for recommendations to provide to producers.”
Soil can take thousands of years to form. Rock is weathered and slowly breaks down into smaller and smaller pieces. As dead leaves, animals, trees, and other plants decompose, they add organic matter and nutrients to the rock. Microorganisms establish themselves in the mix, driving nutrient cycling, aiding with decomposition, and stimulating plant growth; then worms and bugs, like beetles and ants, burrow in the mixture, aerating it. For soils to work well for agriculture, they need the right structure—airy enough to allow water to enter and move through, but not too quickly or too slowly—and sufficient biological and chemical richness, including nutrients like nitrogen, phosphorus, and potassium, to nourish crops.
Farmers use synthetic or natural fertilizers to ensure their soil has enough nutrients. They can also introduce practices like no-till—farming without plowing up the ground—to maintain the physical properties of their dirt. Topsoil, the rich, uppermost layer with the most available nutrients for crops, tends to make up less than a foot of the entire soil profile, but it’s crucial for agriculture.
Soil scientist Forbes Walker visits Will Runion’s farm in 2025, examining the deep sandy deposits left behind by Hurricane Helene. Credit: Raffe Lazarian/University of Tennessee Institute of Agriculture via Grist
Helene’s floodwaters either washed away significant topsoil or deposited new sediment on top of it on thousands of farms. Some, including one of Runion’s neighbors, saw their fields stripped down to bedrock, or river rock. Runion and others woke to pastures blanketed by feet of sand or stone.
When topsoil is washed away, the necessary nutrients for growing go with it. And when topsoil is covered with sand, farmers can’t get to it. Both scenarios can significantly alter the land’s usability. Topsoil can take decades or centuries to develop, and sand lacks both organic matter and the physical structure to hold water and nutrients. “These aren’t soils yet,” said Kulesza of what Helene left on Runion’s and other farmers’ land. “They are in their infancy now. The clock has been reset.”
Runion had cared for his soils, working to eliminate weeds, adding fertilizer to keep nutrient levels ideal, and lime to control pH. “They were our way of life,” Runion said. “They were our income.”
After the storm, from October to April, he removed debris, bulldozed sand off his fields to get closer to the topsoil, filled holes, and graded uneven land. Crews from the Federal Emergency Management Agency removed and shredded downed trees. He applied for government relief and received close to $1 million in state and federal aid. Runion said he could have easily used all of that money replacing equipment and paying for cleanup labor, fertilizer, and fuel, but he’s trying to stretch the money as much as possible.
By June, it was time to mow the fields that hadn’t flooded. He managed to put up enough bales of hay to feed his herd of 125 cattle, but not enough to sell. In a normal year, hay sales made up about a third of the farm’s income. With months of work behind him and his flooded land still too sandy and generally depleted, he realized the recovery would be a slog.
Runion returned to work on the campground, which he hoped would diversify the family’s earnings. The longer-term plan included a music venue, some hiking trails, and hosting weddings and corporate events. After the storm, finishing it took on new urgency. He chose a new spot, about 450 feet upland from the river, and began clearing enough land for 45 camping sites.
One environmental soil specialist described the academic literature on flood-damaged soils as “thin.”
Runion also prepared a parcel of land for Walker, the extension soil specialist, to run tests that could guide his recovery. Last November, soon after the one-year anniversary of Helene, Walker showed me around Runion’s farm.
Working with students, Walker established four experiments over about 300 test plots. He’s looking at how different soil amendments—hay, wood chips, poultry litter, and a charcoal called biochar, to help the soil hold water and fertilizer; and Triple 19, a common plant food with equal parts nitrogen, phosphorus, and potassium—affect the growth of wheat and fescue grasses.
When I visited, some of the plots remained mostly bare while, in others, tufts of green had sprouted. “We actually got some stuff to grow,” Walker said.
He described the academic literature on flood-damaged soils as “thin.” While some research and case studies exist on how agricultural soil recovers after a flood, there are few systematic investigations like the one Walker is conducting—on what works, and what does not—particularly in Appalachia, where floods of this magnitude have been historically rare.
When so-called atmospheric rivers spawned devastating floods in the Pacific Northwest and southwestern British Columbia in 2021, Aimé Messiga, a Canadian soil research scientist at the Agassiz Research and Development Centre, found a similar “scarcity of data.” He conducted a detailed review of the existing research and concluded that there was limited long-term monitoring, little understanding of how floods affect nutrients and microorganism communities in the soil, and uncertainties about what the actual impacts of floods on agriculture and crops are. Complicating everything is the variability between different farms, soils, and crops.
“You need decades of accumulated data in order to be able to predict what will happen. We don’t have those data.”
“You need decades of accumulated data in order to be able to predict what will happen,” Messiga said. “We don’t have those data.”
Today, some researchers are attempting to replicate flood conditions in labs to better understand them, but field work is rare, Messiga said. There’s little money for it—and in the U.S., the Trump administration has cut funding for climate-related research. In addition, “many among us still look at these events as random,” Messiga said. “They’re not random. They will keep occurring.”
Since 1980, 45 flooding events have caused damages over $1 billion each in the U.S., with more than half of those occurring in the past 15 years. In 2024, flooding in the upper Midwest drowned crops. Repeat events in central California damaged agricultural operations from winter 2022 to spring 2023. Flooding along the Mississippi River in 2019 reduced crop planting by millions of acres. There also have been numerous smaller or more localized floods. One study found nearly 75,000 flash floods in the contiguous U.S. from 1996 to 2017, with increasing frequency in the past 22 years. Flooding frequency and strength is predicted to rise in the years to come due to climate change—a warmer atmosphere holds more moisture and leads to stronger rain events—and poor land-use management.
Scientists are also starting to study a new type of event, called “weather whiplash,” when sudden changes occur from one extreme to another, amplifying the effects of the disaster. In Texas in 2025, a flood came after prolonged drought, causing widespread destruction.
For farmers, the effects of flooding on soil may linger for years after the disaster. In 2011, the Missouri River flooded states in the Upper Midwest, including thousands of acres of farmland. Fields were swamped for months with up to 20 feet of water. When the water finally receded, those fields were covered with anywhere from 2 to 20 feet of sand; other fields had washed out holes up to 70 feet deep. It looked like the surface of the moon, said John Wilson, a now-retired educator and agricultural expert who served Burt County, Nebraska, which was particularly hard-hit. “It was just bare soil,” he said. “There was no crop residue whatsoever.”
Wilson led teams that sampled the soil and helped farmers build back. He found that levels of nitrogen and organic matter were low in flooded soils, and fertility suffered when farmers planted their crops. Over about five years, fertility generally improved, but not everywhere. “If you went out today and did a yield map, you could still tell exactly where the erosion was because those areas are not as productive,” Wilson said.
Yield is money for farmers, who already navigate thin margins and, often, years without any profit at all. North Carolina’s strategic plan for agriculture recently enumerated just how thin: Of the state’s “42,500 farms, only 8,000 produce annual gross sales that exceed $100,000 annually. The overwhelming majority … some 23,400, gross less than $10,000 in sales, with only around 40 percent of the farms in the state having a positive net income in 2022.”
As floods increasingly wreck farmland, more researchers are starting to focus on understanding the effects of the floods and how to address them. Most of that work is happening in Asia, Messiga said. But a study in coastal North Carolina, where hurricanes regularly land, found that after a storm there was less organic matter in the soil, including carbon, and a disruption of microbial activity and nutrient cycling. The ground also absorbed water less readily.
Coastal flooding is also driving saltwater into the soil of farmland, making it more saline and unable to sustain crops. A North Carolina State University team has been developing test kits for farmers to sample the salinity of their soils, as well as a set of recommendations for keeping their soil viable. Such local work is important because soils vary greatly from place to place, and findings are not often easily transferable.
Nicole DelCogliano’s farm near Asheville, North Carolina, was wiped out almost entirely by floods from Hurricane Helene in 2024. Courtesy of Nicole DelCogliano via Grist
For now, in the wake of Helene, farmers are relying largely on trial and error to build back what was lost. Nicole DelCogliano has been farming vegetables, flowers, and livestock with her husband on 50 acres on the South Toe River, near Asheville, North Carolina, for 25 years. Helene washed away her barn, tractor, and other infrastructure. Of her 6 acres of vegetable fields, one was covered with several feet of sand, another got a foot, and a third field suffered extensive erosion.
“Our entire operation was wiped out, essentially,” she said.
“It’s not something that can be fixed overnight. This is a long process.”
With the help of some friends with tractors, DelCogliano cleared her main field and spread compost and lime on everything. “There was a mix of guidance about what you should do, like should you disturb the soil, should you not?” she said. “At an instinctual level, we just felt like we got to get the soil covered, we got to get something in the ground.” They sowed rye, a dependable cool season grass, as a cover crop, to protect the soil from erosion and add nutrients.
Karen Blaedow, an agricultural educator in Henderson County, North Carolina, said farmers should expect to put in at least three years of cover cropping before they see results in their soil. “It’s not something that can be fixed overnight,” she said. “This is a long process.”
In the spring following the flood, DelCogliano spread various amendments on her least-damaged field, including compost, lime, biochar, and blood and bone meal, which provide nitrogen and phosphorus, respectively. After all that, she and her husband seeded crops.
Their new vegetables came in about two weeks later than normal, but the season was more productive than ever, even though they grew on just 4 instead of 6 acres—“which is pretty amazing,” she said. “When we first started harvesting crops [after Helene], we didn’t yet have power at the farm. I had to dig one of our sinks out of a bank and bleach it and clean it and drag it up to the new barn—that we barely got a roof on—to wash and pack for that first [farmers] market.”
She doesn’t really know what made the year so productive. They planted more intensively to account for the smaller acreage and were able to harness their years of expertise to restart their operation basically from scratch. She also attributes the relative health of her soil to years of organic practices. “We’re dirt farmers,” she said. “Our primary job is to tend the dirt. Because that’s the basis of everything.”
Some farmers who’ve seen good harvests may have gotten a little lucky. Rather than sand, floods dumped silt. Even Runion got silt deposits in one section of his farm. Unlike the sand, the silty layers carry nutrients and create a positive growing environment. “We have a producer we work with and he said it’s the most fertile soil that he’s had in decades,” said Emine Fidan, a biosystems engineering and soil science researcher at the University of Tennessee, who’s also working on Runion’s farm. “And he said it grew the sweetest corn he’s ever had. It was growing just beautifully.”
Runion didn’t plant anything until this past fall. He prepared about 65 acres of the 220 that were underwater. It was slow going; he used a disking machine to till his land but had to stop often to clear sticks and trash and to grade out low spots. He mixed in mulch and planted oats, wheat, and fescue. Walker drove me past one of the fields and it still looked sandy, the grasses just a pale green shadow on the tan land. Runion said the greenery was “struggling to have any vigor about it.” He won’t know for sure how well or poorly the grasses do until spring, their peak growing season.
He considered planting more acreage but decided to wait and see what he learned from Walker’s trials. “It’s a process, and the knowledge we’re gaining there will help on the whole rest of it, too,” Runion said.
This spring, Walker’s team will measure the biomass in each plot as well as the quality of the crop, including how much protein it has and its digestibility. They’ll also be evaluating the soil itself, including its ability to hold water, to determine if any of the treatments improved the structure of the sandy dirt.
One farmer thinks the hay he’ll get in the coming years will be lower-yielding, lower-quality, and will cost more to produce due to the extra prep time, new seeds, and fertilizers.
Preliminary results suggest that, in plots where they put down mulch, the grasses are growing better than in plots with other amendments. The woody debris is reducing erosion and seeds are germinating well and standing up in the rough matrix. Spreading this kind of mulch isn’t an obvious solution, Walker said: Wood chips are a carbon-rich material, but as they break down in the soil they consume nitrogen, which can lead to a deficiency for the crops. But this mulch had sat in piles and started to decompose before it was applied to Runion’s fields, which made it less likely to cause these problems.
Runion had asked FEMA to leave the piles of wood chips on his farm rather than remove them like they normally would. Walker is looking for solutions to the soil problem that not only work but are also accessible. Have a mountain of mulch? Put it to work. Have nearby chicken houses? Maybe their nitrogen-rich manure can help revive flooded fields. His hope is that his team’s research can provide some guidance to farmers who find themselves in similar situations in the future. “I think it will have broad implications for a number of different crops,” including vegetables, Walker said.
Meanwhile, Runion is coming to terms with his situation. He thinks the hay he’ll get in the coming years will be lower-yielding, lower-quality, and will cost more to produce due to the extra prep time, new seeds, and fertilizers. He used to sell a lot of square bales, which tend to contain high-quality grasses and fetch a higher price, but he doesn’t expect to be doing that for a while. He’d initially hoped to have his land back in shape in a year or two. “Now it’s a four- to five-year [plan], I think,” Runion said. “It has been frustrating, and exhausting, too.”
He’s still optimistic, though. On my visit, I watched him grade out the new campground in a large dump truck. Freshly exposed red soil lay open to the sky. He thinks he can get the campground open by late summer or early fall. Over time, he hopes, it will be a more lucrative, and more sustainable, source of income. “The farm is really beautiful,” Runion said. “It still has a lot to offer.”
Planting more trees will decelerate climate change only if those trees are placed in optimal locations—primarily the tropics and subtropics—suggests new research published in Communications Earth and Environment. However, planting trees in locations like Alaska, Siberia, and large parts of the United States could actually lead to warming, said lead author and doctoral student at ETH Zurich Nora Fahrenbach.
Much of the current thinking in nature-based solutions, Fahrenbach said, is based on the idea that “more is better.”
As in, “we’ll plant a trillion trees, or we’ll plant more than a trillion trees, and we are going to get more cooling, right?” Fahrenbach said. “That’s something we show is just not the case.”
Fahrenbach researches reforestation potentials, or global maps that identify areas where trees could be planted to mitigate climate change. In this work, she and her colleagues compared three prominent reforestation potentials to determine the effect of tree placement on local and global temperatures.
One scenario involved reforesting about 926 million hectares focused mostly on the tropics and resulted in about 0.25°C of cooling by 2100. Another called for reforesting 894 million hectares, including large areas in northern temperate and polar latitudes, and resulted in 0.13°C of cooling by 2100.
The third scenario involved planting forests strategically over only 440 million hectares of mostly tropical and subtropical land (less than half of the area covered in the other scenarios) but also resulted in 0.13°C of cooling. Geography, the findings suggest, may matter more than quantity when it comes to the cooling benefits of reforestation efforts.
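A rough per-area comparison (simple arithmetic on the figures above, not a metric reported by the study) makes the point:
\[ \frac{0.25\,^{\circ}\mathrm{C}}{926\ \mathrm{Mha}} \approx 0.027, \qquad \frac{0.13\,^{\circ}\mathrm{C}}{894\ \mathrm{Mha}} \approx 0.015, \qquad \frac{0.13\,^{\circ}\mathrm{C}}{440\ \mathrm{Mha}} \approx 0.030 \quad ^{\circ}\mathrm{C}\ \text{of cooling per }100\ \mathrm{Mha}. \]
By this crude measure, the strategically placed tropical and subtropical forests deliver roughly twice the cooling per unit area of the scenario that extends into northern temperate and polar latitudes.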
Let’s Get (Biogeo)physical
The researchers modeled all three scenarios using the same parameters: Trees were planted from 2015 to 2070, and forest cover then held steady until 2100.
Planting trees in one area doesn’t just change the local temperature but has effects across the world.
All three models identified reforestation opportunities in regions such as the eastern United States, Amazonia, the Congo rainforest, and eastern China, as well as regions for which reforestation would not be as impactful, such as polar regions in the Northern Hemisphere. The researchers also found significant temperature changes across the Atlantic and Indian oceans as a result of atmospheric changes induced by reforestation, demonstrating an interconnected reality: Planting trees in one area doesn’t just change the local temperature but has effects across the world.
These local and nonlocal effects can be explained by a combination of biogeochemical and biogeophysical effects.
A biogeochemical effect relates to the movement of chemicals or chemical elements, such as trees absorbing carbon from the atmosphere.
A biogeophysical effect relates to the physical results of changing the land’s surface: Placing a tree in a snowy region, for instance, decreases the land’s albedo, meaning it causes the land surface to become darker and absorb more light, leading to more local heat. This rise in surface temperature also raises air temperature, creating cascading effects on wind patterns and oceanic currents.
Considering both processes together is essential for understanding whether a net cooling or net heating effect exists, but most policies focus only on biogeochemical effects, seeing trees solely for their ability to absorb carbon from the atmosphere, Fahrenbach said. They include prominent international policies such as the Paris Agreement and the United Nations’ Framework for REDD+.
“Really, we would also need to consider the biogeophysical effects,” Fahrenbach said. “That’s harder to do, right, considering those nonlocal effects, because just imagine, some country is going to plant a lot of trees, and that’s going to lead to warming somewhere else.”
A Call to Policymakers
Emilio Vilanova, a forest ecologist at the climate action nonprofit Verra, wrote by email, “The most important message for me is that this study emphasizes something that is often not well addressed in reforestation projects: Reforestation is not just about planting trees—it’s about designing where new forests go to maximize benefits and avoid unintended consequences.”
“Reforestation is a helpful tool, not a stand-alone solution to climate change.”
Vilanova also said the study puts the potential for reforestation efforts to address climate change in perspective. “Even very large reforestation efforts would only reduce global temperatures by about 0.13–0.25°C by the end of the century,” he said. “While meaningful, this finding also reinforces that reforestation is a helpful tool, not a stand-alone solution to climate change.”
Though the limited potential for change is sobering, the authors and Vilanova pointed out that this change does matter and that it matters how we think of our approach. They advocate for policies that adopt reforestation strategies based on location and that acknowledge both the local and nonlocal effects of reforestation.
“We really need to make sure that where we plant first, it has benefits locally, it has benefits globally,” Fahrenbach said.
22 April 2026: This article was updated to correct Nora Fahrenbach’s position at ETH Zurich.
This news article is included in our ENGAGE resource for educators seeking science news for their classroom lessons. Browse all ENGAGE articles, and share with your fellow educators how you integrated the article into an activity in the comments section below.
Citation: Meissen, A. (2026), Location, location, location: The “where” of reforestation may matter more than the extent, Eos, 107, https://doi.org/10.1029/2026EO260125. Published on 22 April 2026.
Editors’ Highlights are summaries of recent papers by AGU’s journal editors.
Source: AGU Advances
The evolution of rivers that split into multiple channels is a scientific challenge in terms of modeling and prediction. Yet these rivers are widespread and play a key role in sustaining ecosystems and recharging groundwater, and therefore in water security. They are also extremely sensitive to hydroclimatic changes that shift precipitation, erosion, and sediment transport.
Zhao et al. [2026] investigate the drivers of river evolution for 97 multithread river reaches worldwide, spanning diverse climates and morphologies. The study reveals the key role of flow intermittency in river evolution. In particular, higher flow intermittency could lead to more even flow partitioning among threads, thereby affecting hydrology and ecosystems. With flow variability increasing under climate change, rivers are likely to increase their thread count, impacting livelihoods and ecosystems.
Two example multithread reaches shown in Landsat images from (b) the Irtysh River (wandering) and (c) the Yukon River (braided). Credit: Zhao et al. [2026], Figure 1(b,c)
Citation: Zhao, F., Ganti, V., Chadwick, A., Greenberg, E., McLeod, J., Liu, Y., et al. (2026). Global hydroclimatic controls on multithread river dynamics. AGU Advances, 7, e2025AV002166. https://doi.org/10.1029/2025AV002166
Since 1989, Utah’s Great Salt Lake has lost some 70% of its surface area, reducing its ecosystem services and creating stretches of drying lake bed (playa) that send toxic dust into the air.
That drying ground has also provided opportunities for scientists to survey what lies below the lake’s floor. In a study published in Geosciences, researchers revealed glimpses of fresh water and salt water, with some fresh water lurking only a few meters below the surface. The work could provide clues for conserving the lake, a crucial resource for both the ecology and the economy of the region.
Salt Lake, Fresh Water
In 2023, Michael Thorne and colleagues began using a technique known as electrical resistivity tomography (ERT), which can reveal the presence of fresh or salty water, at dozens of spots near the southern and eastern edges of the Great Salt Lake. Thorne is a geophysicist at the University of Utah in Salt Lake City and a coauthor of the new study.
The lake’s desiccation allowed the researchers to access areas where “at previous times, you would never be able to do measurements because [they] would be underwater,” said Thorne.
Establishing a network of ERT sensors requires robust fieldwork. Over the course of long days in the field, Mason Jacketta, lead author of the new study, and others placed electrodes into the ground a few meters apart, making lines that stretched hundreds of meters. Between pairs of electrodes, they measured the resistance to electrical current. Salty water, filled with electricity-conducting ions, has lower resistance than fresh water.
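In a generic four-electrode resistivity survey (a textbook relation given only for orientation; the study’s specific array geometry is not detailed here), current \(I\) is driven through one electrode pair while the voltage drop \(\Delta V\) is measured across another, yielding an apparent resistivity
\[ \rho_a = K\,\frac{\Delta V}{I}, \]
where \(K\) is a geometric factor set by the electrode spacing. Brine-saturated sediments produce low values of \(\rho_a\), while freshwater-saturated sediments produce higher values, and that contrast is what the survey exploits.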
Paired with information on the rock and sediment beneath the surface, as well as with measurements from nearby wells, the ERT data allowed the team to work out a profile of how electrical resistance varied with depth and to figure out what kind of water seeped through pores in the ground below. The team shared the results of their work on the southern part of the lake in Geosciences, while more in-depth findings about the eastern shore will appear in an upcoming publication.
“What this is really showing is that [fresh water is] prevalent all over the place.”
At many of the sites, Jacketta and others found fresh water near the surface.
“What this is really showing is that [fresh water is] prevalent all over the place,” said Elliot Jagniecki, a geologist at the Utah Geological Survey who wasn’t part of the work.
That fresh water was often in close proximity to patches of salty groundwater. At one spot in the southeastern part of the lake, the team found a shallow layer of brine. But right below that, at only 5 meters of depth, they encountered fresh water. At the team’s northernmost study site, they found fresh water around 2 meters deep. On the southern shore, they found fresh water in some places as shallow as 2.8 meters.
Mysterious Formations
The team’s results also helped explain curious features around the Great Salt Lake, including mounds made of salt and islands made of reeds.
The lacy-looking layers of the lake’s so-called mirabilite mounds form in the winter, when the cold freezes upwelling salty water, concentrating its salts. With measurements taken next to where some mirabilite mounds form, the researchers could visualize the underground conduits that send salty water to the surface.
While mirabilite mounds form close to shore, mounds made of Phragmites reeds appear in the lake’s interior as well as along its periphery. Thorne and his colleague William Johnson first noticed these mysterious circles popping up in Google Maps more than a decade ago. When they went to investigate, they found Phragmites.
“The population of Phragmites around the Great Salt Lake is really not allowing fresh groundwater to go back into the Great Salt Lake.”
In the new work, the team placed a line for electrical resistivity tomography straight through a Phragmites mound. These reeds wouldn’t be able to survive in the lake’s briny water, Thorne said, but the team’s results showed fresh water rising right to where the invasive reeds grew thick.
“The population of Phragmites around the Great Salt Lake is really not allowing fresh groundwater to go back into the Great Salt Lake,” said study coauthor Tonie van Dam, a geophysicist at the University of Utah. The reeds suck up some 70,000 acre-feet of fresh water that could go back into the lake, she said. In “sucking up [fresh water] for their own existence,” van Dam explained, the reeds crowd out native plant species that provide habitat for native birds.
More Than a Beautiful Landscape
Overall, the study provides a new picture of the fresh and salty groundwater beneath the lake and how these resources feed what people observe at the surface.
It’s also helped to prompt other work, Thorne said, including one recent study in which researchers used a helicopter carrying a wire loop to create and sense electrical currents underground. That study, published in Scientific Reports, suggested there could be a large amount of fresh water under one part of the lake.
But that work is a proof of concept, Jagniecki said, and tapping such potential aquifers might not be enough to address the lake’s current desiccation. Even if it helped, refilling those aquifers could take thousands of years. “I just don’t think that’s a solution,” he said.
Saline lakes are fragile ecosystems sensitive to climate change, Jagniecki said. The Great Salt Lake harbors plenty of life, such as brine shrimp that become food for a host of migratory birds that use the lake as a stopover. Mineral extraction and the use of brine shrimp for feed in aquaculture are important drivers of Utah’s economy.
Getting a better understanding of how saline lake systems function could be helpful in conserving them and maintaining the resources they provide humans, Jagniecki explained.
“It’s actually more than that. It’s a beautiful landscape,” he said.
—Carolyn Wilke, Science Writer
Citation: Wilke, C. (2026), What’s below the Great Salt Lake? More water, Eos, 107, https://doi.org/10.1029/2026EO260127. Published on 21 April 2026.
Editors’ Highlights are summaries of recent papers by AGU’s journal editors.
Source: Journal of Geophysical Research: Earth Surface
Glacier ice is a crystalline material that flows across the Earth’s surface and is often close to the pressure-melting point. The way ice deforms is therefore an interplay of many factors including the temperature, grain size, and purity of the ice. Numerical models of ice flow are based on the Glen-Nye flow law (Glen’s Law)—a simple relationship between stress and strain in ice developed by John Glen and John Nye from laboratory experiments in the 1950s. Glen’s Law derives strain (creep, or deformation flow of ice) from the applied stress raised to the power of the exponent n, multiplied by the temperature-dependent constant A. The values for these parameters are empirical, and both linear and power-law forms of Glen’s Law have been proposed, although a value of 3 is typically used for n.
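In its usual power-law form, the relationship described above can be written as
\[ \dot{\varepsilon} = A\,\tau^{n}, \]
where \(\dot{\varepsilon}\) is the strain rate, \(\tau\) the applied stress, \(A\) the temperature-dependent rate factor, and \(n\) the flow-law exponent, commonly taken to be 3.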
Lilien et al. [2026] use a flowline model to explore the impact of the choice of value for Glen’s n on the outcome of projections of ice sheet mass change, considering different values for A and different glacier sliding laws. They found that the relationship between n and glacier mass loss is complicated and varies depending on glacier type. For dynamically controlled glaciers, increasing n increased mass loss, as ice flowed more rapidly into ablation areas. For surface mass balance-controlled glaciers, increasing n decreased mass loss, because ice flux decreased at the equilibrium line. The authors find that using a single value for Glen’s n is likely to lead to large uncertainties in projections of ice sheet change, and therefore studies of future ice sheet mass loss need to consider how the flow-law exponent varies spatially.
Citation: Lilien, D. A., Ranganathan, M., & Shapero, D. R. (2026). Effect of the flow-law exponent on ice-stream sensitivity to melt. Journal of Geophysical Research: Earth Surface, 131, e2025JF008726. https://doi.org/10.1029/2025JF008726
Source: Journal of Geophysical Research: Solid Earth
Magnetic rocks with iron oxide concentrations act as natural chroniclers of Earth’s past continental movements. Using small samples of rocks, scientists can isolate magnetic grains that were frozen in orientation as the rock solidified. The magnetization of these grains acts as a miniature compass needle, pointing toward ancient magnetic poles. This same principle applies to extraterrestrial samples, such as meteorites and lunar rocks, which preserve evidence of the early solar nebula’s evolution.
However, traditional bottle cap–sized bulk samples often contain a mixture of reliable and unreliable magnetic signals, resulting in complex data that hamper interpretation. To improve accuracy, researchers have turned to magnetic microscopy. This technique maps magnetic fields at submillimeter to submicrometer scales in thinly sliced rock sections using advanced tools like a quantum diamond microscope (QDM) or a cryogenic superconducting quantum interference device microscope. By creating high-resolution maps of individual magnetic particles, scientists can reconstruct ancient fields with much higher precision while filtering out muddy signals from unstable grains.
Despite its potential, magnetic microscopy is an emerging field with its own set of uncertainties. To help constrain measurement data, Bellon et al. combined QDM observations with computer modeling to analyze how a magnetic particle’s stray field—the magnetic flux that leaks into the surrounding space—decays as it moves away from the source. They specifically investigated how a particle’s internal magnetic structure and external measurement noise affect the accuracy of these reconstructions.
The study found that in iron oxides, the smallest and most magnetically stable particles produce signals that are strong at the source but fade rapidly with distance. In contrast, larger particles produce signals that remain detectable farther away. This creates a challenge: The most stable grains for long-term geological data (the smallest ones) are the hardest to detect if the sensor is not perfectly positioned or if sensor interference is present.
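An idealized dipole approximation (a back-of-the-envelope sketch, not a calculation from the study) shows why this happens: far from a uniformly magnetized grain, the stray field scales as
\[ B \sim \frac{\mu_0}{4\pi}\,\frac{m}{d^{3}}, \qquad m = M\,V, \]
where \(m\) is the grain’s magnetic moment, \(M\) its magnetization, \(V\) its volume, and \(d\) the distance to the sensor. Halving a grain’s diameter cuts its volume, and hence its moment and its signal at a fixed sensor standoff, by a factor of eight, so the smallest, most stable grains are the easiest to lose to sensor noise or imperfect positioning.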
By quantifying measurement error, the authors provide a road map for the field of micropaleomagnetism. Their findings could allow researchers to better account for uncertainty, leading to more robust reconstructions of Earth’s magnetic history and a deeper understanding of planetary evolution. (Journal of Geophysical Research: Solid Earth, https://doi.org/10.1029/2025JB033133, 2026)
—Aaron Sidder, Science Writer
Citation: Sidder, A. (2026), Navigating the past with ancient stone compass needles, Eos, 107, https://doi.org/10.1029/2026EO260122. Published on 16 April 2026.
Lead-acid batteries are omnipresent. An integral part of most electric vehicles and all conventional vehicles globally, they also serve as backup energy storage systems in developing countries. But if lead-acid batteries are recycled in smelting units without adequate pollution control measures, they can cause elevated lead pollution that persists in local soils for thousands of years. However, because recycling sites with pollution control measures cost millions of dollars, most efforts are informal and unregulated.
In a recent study, researchers reported that scraping lead-contaminated soil in the vicinity of an abandoned recycling site for used lead-acid batteries and treating it with phosphate was linked to a 22% reduction in the blood lead levels (BLLs) of children who were living close to that site in a Bangladeshi town. The research was published in the International Journal of Hygiene and Environmental Health.
“Informal battery recycling is rampant in Bangladesh.”
“Informal battery recycling is rampant in Bangladesh,” said study coauthor Mahbubur Rahman, an environmental health scientist at the International Centre for Diarrhoeal Disease Research, Bangladesh. “Used lead-acid batteries are broken up and smelted in close proximity to residential and agricultural areas, which exposes those communities to lead emissions that contaminate their soil and water sources.”
Rahman and colleagues analyzed the BLLs of 130 children living close to two recycling sites for used lead-acid batteries (ULAB) in the Tangail District of Bangladesh that were abandoned in early 2019. They also assessed the BLLs of 37 children who did not live anywhere near ULAB recycling sites. The researchers then carried out soil remediation efforts at one of the ULAB sites but not the other. Prior to the work, the team members held informational sessions for the community about the dangers of lead pollution so locals could provide informed consent to participate.
The team observed that following remediation efforts, the lead content of the soil in and around the former battery recycling site decreased from more than 20,000 parts per million to less than 400 parts per million, which was considered acceptable by the U.S. EPA when the study was conducted, from 2022 to 2023. (The EPA reduced the limit to 200 parts per million in 2024.)
The researchers removed and remediated soil from children’s play areas, roadsides, and courtyards of 68 households in the intervention group. A year after the lead-contaminated soil was cleaned up, the 89 children from those households showed a marked decrease in their BLLs: from 90.1 to 70.4 micrograms per liter, a drop of nearly 22%.
“We know for sure that the areas close to abandoned ULAB recycling sites are as contaminated as areas around abandoned lead mines.”
The children in the group who lived close to the second abandoned ULAB recycling site, where soil remediation was not conducted, experienced only about an 8.4% decrease in their BLLs, from 88.5 to 81.1 micrograms per liter. The reduction in the control group’s BLLs could be attributed to a government initiative focused on reducing lead levels in turmeric, which was happening over the same time period as the study, Rahman said.
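For readers who want to check the cited figures, the percent changes follow directly from the before-and-after BLL values reported above; the snippet below is just that arithmetic, not part of the study’s analysis.

```python
# Simple arithmetic check of the reported blood lead level (BLL) reductions,
# in micrograms per liter; the values are those quoted in the article.
def percent_reduction(before, after):
    return 100.0 * (before - after) / before

print(f"Remediated site:     {percent_reduction(90.1, 70.4):.1f}%")  # ~21.9%
print(f"Non-remediated site: {percent_reduction(88.5, 81.1):.1f}%")  # ~8.4%
```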
Anne Riederer, an environmental health scientist at the University of Washington who was not involved in the new study, said the dangers of lead exposure from ULAB recycling sites are well documented.
“We know for sure that the areas close to abandoned ULAB recycling sites are as contaminated as areas around abandoned lead mines. This study fits with the bigger picture of what we have learned to date about cleaning up contaminated sites and how that could improve children’s health,” she said.
A Widespread Issue
Similar studies conducted in Brazil and Bangladesh reported 46% and 35% reductions, respectively, in children’s BLLs following soil remediation initiatives around ULAB recycling sites.
Despite those drastic improvements, the children’s BLLs were still far above the World Health Organization’s threshold of 50 micrograms per liter. “This could mean there are other sources of lead exposure, like paints and cookware items,” said Rahman. “Or the persistently high BLLs could be because of chronic and long-term lead exposure, due to which lead gets deposited deep into the bones for several decades, even if [people] move away from toxic sites.”
Rahman explained that although soil remediation is an effective measure for lowering childhood lead exposure, it is also labor-intensive and expensive. Though the team identified hundreds of toxic sites created by informal ULAB recycling, it was not possible for them to remediate the soil at every one.
“The reason why this issue is so widespread is [that] informal recycling is cheap,” he said. “That makes the formal sector reluctant to invest in costly pollution control measures.”
—Anuradha Varanasi, Science Writer
Citation: Varanasi, A. (2026), Cleanup of battery recycling sites may lower childhood lead exposure, Eos, 107, https://doi.org/10.1029/2026EO260120. Published on 15 April 2026.
Source: AGU Advances
Models of glacial flow and retreat rely on estimates of glacial ice viscosity, the measure of the ice’s resistance to flow.
Ice viscosity depends on the stress applied to the glacier. Most ice sheet models use a standard equation to model ice flow that includes the variable n, called the stress exponent. A larger value of n means ice viscosity is more sensitive to changes in stress. For decades, glaciologists have almost exclusively used an assumed n value of 3 in the models they use to predict ice flow.
However, through recent experiments and observations, researchers have found that an n value of 4 may actually better represent the conditions of Earth’s ice sheets and glaciers.
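As a rough illustration of what the stress exponent does: the standard relation the article refers to is commonly written as Glen’s flow law, in which strain rate is proportional to stress raised to the power n, so effective viscosity scales with stress to the power 1 − n. The sketch below is not the study’s ice sheet model; it only shows how a doubling of stress softens the ice four-fold when n = 3 but eight-fold when n = 4.

```python
# Minimal sketch (not the study's model): sensitivity of effective ice
# viscosity to the stress exponent n in Glen's flow law.
# Glen's law: strain_rate = A * stress**n, so effective viscosity
# eta = stress / (2 * strain_rate) = 1 / (2 * A * stress**(n - 1)).
# The rate factor A is a placeholder; only the relative change matters here.

def effective_viscosity(stress_pa, n, rate_factor=1.0):
    return 1.0 / (2.0 * rate_factor * stress_pa ** (n - 1))

for n in (3, 4):
    eta_low = effective_viscosity(1.0e5, n)   # 100 kPa driving stress
    eta_high = effective_viscosity(2.0e5, n)  # stress doubled
    print(f"n = {n}: doubling stress lowers viscosity by {eta_low / eta_high:.0f}x")
# n = 3 -> 4x softer; n = 4 -> 8x softer, i.e., flow responds more sharply to stress.
```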
Martin et al. created a model representation of the fast-retreating Pine Island Glacier in West Antarctica. The ice sheet in their model had a true n value of 4, but they ran model projections using both n = 4 and n = 3. That allowed them to observe how their model would incorrectly predict glacial flow and resulting sea level change, given an incorrect n value.
The researchers modeled glacial retreat for 100 years using both n values under two different glacial melting scenarios and then modeled glacial recovery for another 300 years. Under a moderate scenario, the n = 3 model underestimated glacial retreat by 18% and sea level change contributions by 21%. Under an extreme melting scenario, it underestimated sea level contributions by 35%.
Notably, the discrepancy between the two models’ predictions of retreat and sea level contribution grew disproportionately from the moderate to the extreme scenario, potentially adding to the uncertainty in current projections of sea level change. The researchers also suggest that in current ice sheet models, the effects of an incorrect n value may be mistakenly attributed to other physical processes.
The results could have far-reaching implications for predictions of future glacial melt and may prompt investigations into its effects on sea level, the authors say. (AGU Advances, https://doi.org/10.1029/2025AV001946, 2026)
—Madeline Reinsel, Science Writer
Citation: Reinsel, M. (2026), Glaciers may flow into the ocean more quickly than we think, Eos, 107, https://doi.org/10.1029/2026EO260107. Published on 14 April 2026.
In the winter of 923, a magnitude 7.5 earthquake struck the heart of Puget Sound. Shorelines slid into the water, the seafloor rose up, and a tsunami swept through the region.
The Seattle fault zone, actually a mesh of faults that runs right under its eponymous city, was responsible for this quake. The fault continues to pose one of the deadliest threats to the Pacific Northwest; if a similar quake were to hit today, it would threaten millions of lives and cause billions of dollars in damage.
Two new papers dig into recurrence intervals, or the quiescent periods between earthquakes, for the Seattle fault zone. They offer good news and bad news: One study, published in Geology, found that in the past 11,000 years, the massive 923 event was the only quake of magnitude 7.5 or greater. The other study, published in GSA Bulletin, found that smaller, but still damaging, quakes occur more frequently than previously thought.
The new research indicates the worst-case scenario of frequent 923-style events is less likely than some scientists thought, said Harold Tobin, a geophysicist at the University of Washington and head of the Pacific Northwest Seismic Network, who was not involved in either study. But researchers also found that “the less worse, but still bad scenarios” are more likely than previously thought.
Meet the Seattle Fault
“For a fault that has had so much attention, there’s so much we still don’t know.”
The Seattle fault zone is a thrust fault system that stretches about 75 kilometers (46 miles) from the foothills of the Cascades east of Seattle to the Hood Canal, which runs along the shores of the Olympic Peninsula to the city’s west, passing under Seattle along the way.
Geologists began rigorously exploring the fault system in the early 1990s, intrigued by gravitational anomalies, uplifted marine terraces (stair-step geological formations along coastlines), and evidence of a roughly 1,000-year-old tsunami. All these features hinted at a major, shallow earthquake on a local fault zone—likely the 923 event.
But “for a fault that has had so much attention, there’s so much we still don’t know,” said Elizabeth Davis, an earthquake geologist at the University of Washington who led the Geology study.
The most pressing questions are how big quakes on the fault get, how often they hit, and, ultimately, what risks the fault poses to people who live in the Puget Sound area.
“It takes some real geologic sleuthing to get at those tough questions,” Tobin said.
Biggest Seattle Fault Quakes Are Rare
Davis focused on the activity of the main fault, which can generate the biggest quakes in the Seattle fault zone complex. It was responsible for the 923 quake. But the existing record went back only about 5,000 years.
“We just don’t know what the recurrence interval for these big quakes is,” Davis said. “We wanted to lengthen the record.”
To do so, Davis and her collaborators turned to marine terraces, the oldest of which date back to the end of the last ice age about 11,000 years ago. The quake in 923 raised terraces by about 8 meters (26 feet), and scientists wanted to look for similar-scale uplift in terraces all around the sound.
The researchers mapped more than 150 terraces around Puget Sound and measured their depths. After accounting for regional slopes, they estimated uplift over time that could have been caused by quakes.
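One way to picture that step, not taken from the papers themselves, is to treat uplift as whatever elevation remains after a background regional tilt is subtracted from measured terrace heights. The numbers in this sketch are hypothetical.

```python
# Hedged sketch (hypothetical values, not the authors' workflow): subtract an
# assumed regional slope from measured terrace elevations so that any shared
# residual offset stands out as candidate earthquake-driven uplift.
import numpy as np

distance_km = np.array([0.0, 2.0, 4.0, 6.0])     # position along a shoreline profile
elevation_m = np.array([1.0, 1.4, 9.8, 10.2])    # hypothetical terrace elevations
regional_slope_m_per_km = 0.2                    # assumed background tilt

detrended = elevation_m - regional_slope_m_per_km * distance_km
print(np.round(detrended, 1))   # [1. 1. 9. 9.] -> an ~8 m step between terrace sets
# A shared offset of roughly 8 meters, like that left by the 923 quake, would
# point to coseismic uplift rather than gradual regional tilting.
```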
They found that in that 11,000-year period, only the 923 event generated significant uplift. Thick sediment mantles could mask smaller events but not 923-scale quakes, Davis said.
Estimating true recurrence intervals requires knowing the timing of multiple events. But the finding is “not bad news,” she said. It provides some evidence that the recurrence interval is likely not shorter than about 5,000 years.
“That could give us more of a buffer between now and when the next big one like that will happen,” said Stephen Angster, a U.S. Geological Survey geologist who led the GSA Bulletin study.
Smaller, Damaging Quakes Are More Frequent
Angster’s work focused on Seattle’s secondary faults, which are smaller, mostly blind faults (those not visible at the surface) capable of generating damaging earthquakes. Previous work had shown that one of these secondary faults generated a magnitude 6.7 earthquake, highlighting the risk they pose. Angster wanted to explore rupture histories of these secondary faults, particularly whether they could rupture independently from the main fault.
The researchers used a suite of paleoseismic tools, including magnetic data, field and lidar mapping, trenches dug across faults, and geochronology. They studied two newly identified secondary faults that have orientations similar to the main fault.
They found three new earthquakes to add to the region’s seismic history, including the oldest and youngest events in the known record, which were around 11,000 years ago and in the early 1800s, respectively. The earthquakes appear to be evidence of ruptures that occurred independently of the main fault, suggesting that the smaller—but still dangerous—secondary faults should be considered in hazard modeling.
With that lengthened record and the addition of three quakes, the researchers calculated a recurrence interval of about 350 years over the past 2,500 years, refining the previous estimate of one damaging quake every several hundred years.
There also appears to be an increase in activity over the past 2,000 years.
“Maybe we should be paying attention to that,” Angster said.
What Happens Next
“There are other earthquakes that aren’t as big but that occur more frequently. Those might not be as catastrophic, but it would be a very bad scenario for Seattle” if such events occurred.
“These are both carefully done studies,” Tobin said. “We now have evidence that the 923 event was the biggest in 11,000 years. But there are other earthquakes that aren’t as big but that occur more frequently. Those might not be as catastrophic, but it would be a very bad scenario for Seattle” if such events occurred.
It’s still to be determined whether the risk from secondary faults will be incorporated into the National Seismic Hazard Model, which includes the 923 quake but not smaller ones along the Seattle fault zone. The secondary faults were left out in previous efforts because they are shorter than the minimum length required to be included and because of uncertainties in their potential rupture magnitude.
Citation: Dzombak, R. (2026), On the Seattle Fault, the biggest quakes aren’t the most likely, Eos, 107, https://doi.org/10.1029/2026EO260114. Published on 14 April 2026.