Most people don’t know what they don’t know, but think they do – correcting your metaknowledge can make you a better teacher and learner

The ability to say 'I know that I know nothing' could be considered a sign of wisdom. Nicolas-André Monsiau/Pushkin Museum of Fine Arts via Wikimedia Commons

Do you know what the Apple logo looks like?

Chances are, you think you do. It’s ubiquitous and iconic. How could you not know it?

But when tested, it turns out very few people can remember all the features of the logo. One study of 85 people found that only about half could pick the correct logo out of a lineup of similar ones. And only one person could correctly draw it.

This isn’t an isolated example. A classic study from 1979 found that people similarly couldn’t draw a penny accurately or pick out a correctly drawn penny from incorrect ones.

People aren’t just bad at remembering things they see all the time; they’re also bad at knowing how those things actually work. In a 2006 study, many people made significant errors when drawing a bicycle, such as putting the chain around the front wheel as well as the back wheel. More than a forgotten detail, putting the chain around both wheels reveals a deeper misunderstanding of how a bicycle works: A bicycle with a chain around both wheels wouldn’t be able to turn.

Illustration of bike with different components labeled
Do you truly know how a bicycle works? Al2/Grandiose via Wikimedia Commons, CC BY-SA

It turns out people’s knowledge of how the world works is often fragmented and sketchy at best. They systematically overestimate their understanding of everyday devices and natural phenomena. People will tend to give themselves high ratings on how well they understand something, such as how bicycles or zippers work. But when they’re asked to actually explain the mechanics of these objects, their ratings of their understanding typically drop.

Just like how your knowledge of the world around you is imperfect, your knowledge about your own knowledge – also called metaknowledge – is often flawed. My field of cognitive science has been uncovering various gaps in human metaknowledge for decades.

If people are systematically overconfident about how well they understand things, why don’t they notice when they don’t understand something? And what can people do to better recognize the limits of their own knowledge?

Why you think you know more than you do

Researchers have identified several factors behind people’s overconfidence in their knowledge.

One is that people confuse environmental support with understanding: The information is out in the world but not actually in your head. With a bicycle or a zipper, all of the parts are visible to you, and you may confuse this transparency for an internal understanding of how they work. But until you go to use that knowledge by attempting to explain how they work, you may not recognize that you don’t understand how those parts interact.

A second factor is confusing different levels of analysis. People can often describe how something works at a very high level. You know that the engine of a car makes the car go, and the brakes slow and stop the vehicle. But confidence in your high-level understanding of the car may bias you to think you also have a good grasp of the finer details, like how the engine pistons and brake pads work.

Additionally, people can be blind to the ways their knowledge shapes their own perception. In one study, researchers had participants tap out the tune to a popular song. On average, the tappers thought listeners would be able to identify the song about 50% of the time. But when listeners had to identify the tapped song, they actually could identify it only 2.5% of the time. The tappers didn’t realize how much their knowledge was making identifying the song seem easy to them.

A teacher talks to a student before a chalkboard wall filled with equations, chemical structures and graphs
Intellectual humility can help you see your expert blind spot. Vitaly Gariev/Unsplash, CC BY-SA

This disconnect has consequences beyond whether someone else can understand your Morse code version of a song. When teaching people, whether in formal classroom settings or through casual mentorship, you can sometimes have an expert blind spot: the inability to recognize the difficulties beginners face when learning something you have expertise in.

Building expertise often involves internalizing knowledge to the point where it becomes invisible to you. You draw on knowledge you don’t realize you have, making it hard to relate to learners who lack this knowledge – and, of course, hard for learners to relate to your teaching. You might have experienced this when you’ve gotten partway through explaining something, only to realize you’ve been using jargon you forgot isn’t common knowledge and lost your listener.

How to address metaknowledge failures

Your metaknowledge can fail in two directions: You can think you know more than you do, and you can be blind to how much you’re relying on knowledge you do have. Each calls for a different response to correct it.

When you’re overconfident in your knowledge, the remedy is using that knowledge. You’ll quickly realize how much you actually understand and dial down your confidence. Challenging yourself to actually try to walk through how something works is a great exercise in intellectual humility – that is, recognizing that you may be wrong – and can keep you from getting out over your skis.

Building a greater appreciation for what you know is more difficult. You can’t simply unlearn what you’ve internalized. But what this challenge shows is that, to some extent, knowing a subject and knowing how to teach it are two separate skills. Some experts are great teachers, but not simply by virtue of being experts. Recognizing that you have to approach teaching with humility, and that your expertise doesn’t automatically make you a skilled teacher, can go a long way toward making you a better teacher and mentor.

These aren’t quick and easy fixes for failures of metaknowledge. Both require ongoing intellectual humility and a willingness to distrust your own confidence. But acknowledging the fallibility of your own metaknowledge is a good place to start.

The Conversation

Thomas Blanchard does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

  •  

How tarot readers are using AI – and what it says about our growing reliance on chatbots for emotional support and advice

Tarot readings can encourage self-reflection. But what happens when you turn to AI to interpret the cards? Ilya S. Savenok/Getty Images for Sally Hansen

If you’ve ever turned to artificial intelligence to try to figure out how to handle a tricky situation with a friend or colleague, you’re far from alone. For many, AI has become a modern oracle – a source of guidance, emotional support or clarity in moments of uncertainty – though critics worry that this reliance could foster emotional dependence on the technology.

Of course, the urge to seek answers from forces beyond ourselves is hardly new. For generations, people have turned to psychics, astrology charts or tarot cards for reassurance.

Once fringe, these practices have increasingly become mainstream. According to a 2025 Pew Research survey, nearly 1 in 3 Americans consult tools such as tarot or astrology at least once a year – interest thought to be fueled largely by Gen Z and social media.

Now, we’re seeing these two forces – AI and occult practices – meeting in strange and fascinating ways. An increasing number of tarot readers, from novices to seasoned practitioners, have been turning to AI to help make sense of their tarot readings.

What makes this pairing so striking is that interpretation is the whole point of tarot. And yet AI often brings little knowledge of your history or your unique situation when it dispenses advice.

In a study published in April 2026, we examined which aspects of the practice tarot readers were delegating to AI, and how the technology was shaping their interpretations.

Watching what happens when readers hand that important interpretive step to AI may offer a glimpse of what helpful AI guidance could look like – and where it could go wrong.

The mainstreaming of occult practices

Tarot cards are experiencing a revival.

Tarot did not start out as a spiritual or fortune-telling tool. It began as a popular card game in the Italian Renaissance, before spreading across Europe.

Over time, readers and occultists layered the cards with mystical symbolism drawn from Kabbalah, Egyptology, numerology and other symbolic traditions. In the early 20th century, the British publisher William Rider & Son released the Rider-Waite-Smith deck, which became the most popular tarot deck in the English-speaking world.

Whereas only a handful of tarot decks were being published in the early 1970s, today thousands of tarot and oracle decks are in circulation. A standard tarot deck contains 78 cards, each carrying its own symbolic meaning. Practitioners use the cards to sit with hard questions, which can range from difficult relationships to world events: Should I leave my partner? Is this job worth it? What’s going to happen with Donald Trump and the Strait of Hormuz?

After cards are pulled, their meanings are interpreted through the lens of the reader’s question, circumstances and life history.

Someone asking about a relationship and drawing the Tower card, for instance, might read it as impending rupture, or as false assumptions finally giving way. Which reading fits depends on the other cards, the specific question and what the reader already knows about their own situation.

This stands in contrast to AI, which is primed to produce a seemingly definitive answer, even when it’s unaware of the nuances of your situation and context.

The adoption of AI in tarot reading

For our study, we interviewed 12 tarot practitioners about their use of AI in readings they did for themselves.

They generally found themselves pulled in two directions.

On the one hand, they often sought explicit guidance from AI in the process of self-reflection. By using AI to interpret the cards, they could sidestep the frustration of interpreting many cards in light of the question asked.

Say someone drew the Fool and the Ten of Wands for a question about a career change. The Fool points toward a leap into the unknown, while the Ten of Wands speaks to burnout and an unsustainable load.

But do the cards say, “Leave, you’re exhausted and something better awaits”? Or “Leave, and the new job will be just as demanding”?

Rather than sit with that ambiguity, some readers simply ask the AI for the meaning of the reading.

A middle-aged woman wearing glasses smiles and gazes at a large, blue tarot card in her right hand.
An attendee at Google’s 2025 I/O developers conference wears Android XR glasses with Gemini AI, which she’s using to interpret a tarot card. Camille Cohen/AFP via Getty Images

For more challenging readings, AI’s “yes man energy” helped them feel more confident about their interpretations. This was true both for participants who drew physical tarot cards and then interpreted them with AI, and for those who used AI to simulate tarot readings directly.

These uses of AI are seductive. They make the act of self-reflection less demanding. But within the broader tarot community, we found a lot of criticism of AI, and there were concerns about how the sycophantic nature of the technology could undermine people’s intuition and reasoning.

AI as a tool for critical engagement

On the other hand, the tarot readers we interviewed also used AI as a tool to challenge their own biases and assumptions – blind spots in their readings, or what they might be missing in their own interpretation of the cards.

Along these lines, they used AI to generate alternative perspectives so they could compare the different interpretations and see which resonated more. And some even asked for an “objective reading” of the cards, because AI appears to have no skin in the game and be unburdened by personal biases or motives.

Many readers did this when they didn’t want to “bug” or “pester” their friends for help with a reading. Instead, they relied on chatbots in a one-sided relationship that feels supportive – an example of what scholars call parasocial interaction.

Some interviewees even treated bizarre AI-generated outputs or hallucinations as meaningful precisely because they were random and unintended, the same way that a card drawn at random feels like it carries a secret message.

What does this mean for the future of AI?

AI is becoming a powerful new oracle in its own right.

In one recent survey, researchers found that up to 87% of generative AI users are consulting the technology for “personal applications,” a category that includes advice and emotional support for relationship conflicts and mental health struggles.

Sometimes these chatbots are genuinely helpful. But at the same time, advice seekers can also become emotionally dependent. Some rely on the technology for companionship and guidance instead of friends and family. Chatbots have also been found to nurture delusional beliefs and even lead to self-harm.

Meanwhile, professionals who regularly give guidance are using AI in their practice, from lawyers to therapists and even priests. Pope Leo XIV recently urged priests to resist the temptation to use AI to write sermons.

We think it’s important to make sure the technology isn’t seen as an all-knowing source of truth. It can certainly open up users to new ideas, but it should be a tool to enhance self-reflection, rather than one that serves as a substitute for it.

In some cases, that’s what the tarot readers in our study did. They tapped into their own capacity for reflection by using AI to explicitly challenge their own biases and assumptions. This points to an alternative blueprint for the future of AI – one in which the technology doesn’t simply hand you answers but keeps you actively engaged in the process of finding them.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

  •  

Detroit’s water affordability crisis is tied to the uneven distribution of stormwater management costs – a fraught history explains why

Workers repair a water pipeline that dates back to the 1930s. In the coming years, utility bills in Detroit are likely to rise to pay for upgrades to aging infrastructure. Jim West/UCG/Universal Images Group via Getty Images

Beginning in July 2026, Detroiters will be paying higher water and sewer bills.

That’s because the Great Lakes Water Authority, or GLWA, voted unanimously on Feb. 25, 2026, to increase water rates by 5.8% and sewer rates by 4.26% for its customers. GLWA raised rates by similar amounts in 2025.

Residents at GLWA’s last rate hearing spoke of their difficulty keeping up with utility bills. For low-income customers across the GLWA system, rate increases aggravate a deeply entrenched water affordability crisis.

In the coming years, utility bills will likely continue to rise, driven by maintenance costs to upgrade infrastructure nearing the end of its life cycle.

Utility bills are the primary source of revenue for public water and wastewater systems. Yet both the Detroit Water and Sewerage Department, or DWSD, and GLWA are caught in what utility experts call an affordability gap. That is, the discrepancy between what it costs to maintain essential infrastructure and what ratepayers can reasonably afford.

Utilities across the country are facing down a similar contradiction. For DWSD customers, the gap is wider still because they carry a greater burden for water quality improvements that benefit the wider metropolitan region.

I am a political ecologist at Loyola Marymount University, specializing in the politics of resource management in the Great Lakes.

While water affordability is a long-standing concern for communities within the GLWA system and across Michigan, the crisis remains the most acute in Detroit. Taking a look at the fraught history of wastewater management helps to explain why.

Who pays to keep waterways clean?

Since the late 1990s, water bills in Detroit have risen by 400%.

At $87.54 per month, DWSD’s average residential water bill can consume up to 25% of disposable income for households living below the poverty line. The U.S. Environmental Protection Agency sets an affordability threshold of 4.5% of disposable income to cover water bills.
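
To see how a bill of that size stacks up against the EPA threshold, consider a back-of-the-envelope sketch. The $5,000 annual disposable income below is a hypothetical figure chosen purely for illustration; only the bill amount and the 4.5% threshold come from the figures above.

    # Back-of-the-envelope comparison of a monthly water bill with the EPA's
    # 4.5% affordability threshold. The income figure is hypothetical.
    MONTHLY_BILL = 87.54      # average DWSD residential bill, from the article
    EPA_THRESHOLD = 0.045     # EPA affordability threshold: 4.5% of disposable income

    def water_burden(annual_disposable_income: float) -> float:
        """Share of disposable income spent on water bills over a year."""
        return (MONTHLY_BILL * 12) / annual_disposable_income

    income = 5_000            # hypothetical annual disposable income for a poor household
    print(f"Water burden: {water_burden(income):.1%} (EPA threshold: {EPA_THRESHOLD:.1%})")
    # Water burden: 21.0% (EPA threshold: 4.5%)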

About three-quarters of a DWSD residential water bill pays for wastewater and stormwater treatment. These revenues also help to maintain Detroit’s wastewater treatment plant, which serves the city and 76 suburban communities.

My research, which combined archival research and interviews with state regulators, Detroit city staff, DWSD and GLWA representatives and grassroots water affordability advocates, documents how Detroit’s water affordability crisis involves a less visible form of environmental injustice. This term often describes uneven exposure to pollution or other environmental harms. Detroit’s case raises a different question: Who pays to keep local waterways clean?

Regionalizing Detroit’s wastewater system

Detroit’s wastewater treatment plant is the largest single-site treatment facility in the country. While suburban communities own and operate local sewer systems, they are connected by a regional sewer network that stretches across 944 square miles of Wayne, Oakland and Macomb counties. This network conveys raw sewage to the treatment plant in Detroit.

The wastewater system was not initially designed to serve the metropolitan region, however. It was expanded through the 1950s-70s to help suburban communities address new state wastewater mandates.

Truck stuck in flooded highway
Water bills generate revenue for much-needed infrastructure repairs as climate change increases the frequency and intensity of storms. Photo by Matthew Hatcher/SOPA Images/LightRocket via Getty Images

The postwar period is well known for its economic boom, but it also ushered in important social, political and environmental shifts.

Following World War II, for example, local waterways were slick with both industrial and municipal wastes. Polluted waters posed a threat to the safety of people’s drinking water and to Detroit’s water-intensive industrial manufacturers.

In response, Michigan revamped its water pollution law in 1949, requiring cities, towns and villages to install wastewater treatment. Some suburbanizing communities resisted these mandates. They argued their tax bases, then only a few thousand residents, were insufficient to finance such costly infrastructure.

Meanwhile, civil rights organizers in Detroit and across the nation struck down racist segregation laws through the 1960s. Black families began moving into historically white neighborhoods. Following Detroit’s turbulent summer of 1967, demand for suburban housing in all-white communities skyrocketed. More than 40,000 white residents left Detroit for the suburbs that year, a figure that doubled in 1968.

This phenomenon, known as white flight, not only spurred suburbanization but left the tri-county area largely segregated by race and class.

The convergence of stricter water quality laws, suburban growth and white flight also had implications for the wastewater system and its management.

By the late 1950s, Michigan’s Department of Public Health had begun denying sewer permits to developers building in places with insufficient wastewater treatment. Permit denials helped to enforce the state’s wastewater mandates. They became known as “construction bans” for the way they slowed suburban growth.

The quickest way to resolve these “bans” was to route suburban sewage to Detroit. By 1974, DWSD provided wastewater treatment to more than 70 suburban communities across a deeply segregated service area.

Protests march with signs
In 2014, demonstrators gathered to protest the city’s widespread water shutoffs, which left thousands of Detroit residents without water due to unpaid bills. Photo by Joshua Lott/Getty Images

An uneven burden for improving public infrastructure

Regionalizing the wastewater system opened DWSD to suburban political and economic pressure – just as Detroit was becoming a majority-Black city under its first Black mayor, Coleman Young.

In 1975, DWSD hiked sewer rates for both city and suburban customers to finance upgrades required to meet state and federal water quality regulations.

Suburban officials challenged the rate hikes in court, alleging DWSD was attempting to “fleece” the suburbs. While these and future allegations went unsubstantiated, they entrenched long-standing anti-Black stereotypes into the politics of public infrastructure management.

In addition to ongoing rate disputes, suburban politicians introduced “takeover bills” in the state Legislature. The goal was to transfer control of DWSD’s infrastructure to a new regional authority. Both tactics persisted through the 1980s and ’90s, forcing DWSD to make compromises that shifted more costs onto Detroit ratepayers.

A prime example is the 1999 rate settlement agreement that resolved a decade of suburban rate disputes over DWSD’s stormwater charges. Known as “the 83/17 split,” the agreement assigned 83% of stormwater improvement costs to Detroit, while suburban customers shared the remaining 17%, divided 76 ways.
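
A rough sketch shows how that arithmetic plays out. The $100 million program cost below is a hypothetical figure used only for illustration; the 83%, 17% and 76-community numbers come from the settlement itself.

    # Rough illustration of the 83/17 split; the total program cost is hypothetical.
    total_cost = 100_000_000                 # hypothetical stormwater improvement cost
    detroit_share = 0.83 * total_cost        # borne by Detroit ratepayers
    suburban_share = 0.17 * total_cost       # shared by suburban customers
    per_suburb = suburban_share / 76         # divided 76 ways, per the settlement

    print(f"Detroit ratepayers: ${detroit_share:,.0f}")        # Detroit ratepayers: $83,000,000
    print(f"Each suburban community: ${per_suburb:,.0f}")      # Each suburban community: $223,684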

The rates under dispute were introduced to meet new state regulations targeting combined sewer overflows. These overflows occur when pipes release raw sewage and stormwater into waterways during heavy rain. Suburban officials argued for a reduced share of improvement costs. They pointed out many of their sewer systems already separated storm and sanitary pipes, reducing the occurrences of combined sewer overflows. Yet state-mandated improvements required expanding shared infrastructure, not simply combined sewer overflow outlets in Detroit.

GLWA’s own wastewater master plan documents suburban stormwater entering regional sewers long after the 83/17 split was established. Suburban sprawl also paved over vast stretches of land, funneling more runoff into the system.

Nevertheless, the settlement reduced the suburban share of combined sewer overflow improvement costs to 17%. DWSD was ordered to set aside US$10.6 million to reimburse suburban customers for previous stormwater charges above the 17% threshold.

For the past 25 years, Detroiters have borne the bulk of stormwater upgrades – a capital program that has exceeded $1.5 billion.

The approximately 680,000 residents of Detroit have borne these costs despite accounting for only 23% of GLWA’s 2.9 million wastewater customers.

A push toward water affordability

The 83/17 split remains in place today. It was grandfathered into GLWA’s 40-year lease agreement with DWSD that took effect in 2016.

While DWSD continues to provide local water and sewer service to city residents, the lease transferred fiscal and operational control of regional water and wastewater infrastructure to GLWA. This means cost-sharing for stormwater improvements will continue to be structured by the 83/17 split for decades to come – unless GLWA consents to renegotiating the deal.

In 2016, Detroit’s blue ribbon panel on water affordability recommended that DWSD revisit how cost is allocated across all users of the system.

DWSD initiated discussions with GLWA in 2020 and 2021 to revisit the terms of the 83/17 split. GLWA officials concluded, however, that existing legal agreements and contracts made the 83/17 split “logistically challenging” to renegotiate. As long as the 83/17 split remains in place, protecting local waterways from combined sewer overflows will continue to exacerbate the water affordability crisis in Detroit.

Since 2014, 170,000 Detroiters have been met with water shutoffs for unpaid bills. Shutoffs, in turn, have triggered housing abandonment and foreclosures. They have also increased residents’ exposure to waterborne illnesses, affected mental health and threatened family stability.

This is an especially pressing concern now, with state funding for DWSD’s low-income “lifeline rate” program recently exhausted and urban flooding worsening as storms grow more frequent and severe. While DWSD plans to reopen applications to the lifeline plan later this year, the program can support only about 5,000 residents. That is down from the almost 30,000 residents it supported in previous years, and far below the level of need, given that 31.5% of Detroiters live below the poverty line.

Organizations such as the People’s Water Board Coalition have spent two decades building coalitions across Michigan to push for a statewide water affordability plan. A statewide plan that pegs water bills to household income could create a more stable and more equitable revenue source for critical wastewater infrastructure in Detroit.

The Conversation

Nicole Van Lier’s research received funding from the Social Sciences and Humanities Research Council of Canada (SSHRC) and the Fulbright Canada Foundation. During her fieldwork, Nicole was a member of the People’s Water Board Coalition.

  •  

We tested the new World Cup ball – this is what you need to know about how it will fly, dip and swerve

Small variations in the ball can influence how it behaves once it leaves the foot. Robbie Jay Barratt/AMA/Getty Images

Every four years, the men’s World Cup delivers some certainties. The pitch dimensions are tightly regulated, offside is signaled with a flag, and referees end the match with a blast of a whistle. But one key piece of equipment is changed on purpose: the ball.

Adidas, which has supplied World Cup soccer balls since 1970, introduces a new match ball for every tournament, and with that come fresh aerodynamic calculations for players. How will it fly through the air, weave and dip?

For the past 20 years, my engineering colleagues in Japan and England and I have put the new balls through their paces, investigating soccer ball aerodynamics. Our work begins by putting balls in wind tunnels to measure drag, side and lift forces. We use the measurements from these tests in trajectory simulations that tell us how the ball will behave in a real-game setting.

Putting the 2026 World Cup ball through the wind tunnel test.

That may all sound a little academic, and we do produce an academic paper on our findings. But what our data indicates could mean the difference between a goal or a miss for strikers, a save or a blunder for goalkeepers, and jubilation or heartache for fans.

At the World Cup, the ball is the most important piece of equipment in the biggest tournament of the world’s most popular sport.

This year’s ball, the Trionda, is especially interesting. When FIFA and Adidas unveiled it in fall 2025, the first thing many people noticed was the color and the paneling.

An orange ball and a black and white ball are under a trophy.
Earlier World Cup balls used many panels; modern balls use far fewer. Manfred Rehm/picture alliance via Getty Images

The ball’s red, blue and green graphics correspond to the three host countries, with maple leaf, star and eagle motifs representing Canada, the United States and Mexico. And for the first time in men’s World Cup history, matches will be played with a four-panel ball.

But with so few panels, has Adidas made the ball too smooth? That is the trap engineers fell into with the Jabulani, the ball used at the 2010 World Cup in South Africa, which became notorious for sudden dips and swerves that made goalkeepers’ lives far trickier.

You do not want the World Cup ball to feel like the start of a science experiment once it is in the air. And if it behaves strangely, players and goalkeepers notice immediately.

The evolution of soccer balls

World Cup balls have come a long way over the decades. If you go back to 1930, the ball looked very different. The first World Cup final used two different leather balls: Argentina’s Tiento in the first half and Uruguay’s T-Model in the second. Both were hand-sewn, multipaneled balls, inflated through a bladder opening that had to be tied off and tucked back beneath the laces. In damp conditions, the leather absorbed water, making the ball heavier and less predictable in play.

A ball nestles in the top of a goal.
Uruguayan keeper Enrique Ballestrero fails to save a shot from Argentina’s Carlos Peucelle in the final of the first World Cup. Keystone/Getty Images

By 1994 – when the United States last hosted the men’s tournament – the official ball, Adidas’ Questra, had evolved into a foam-based design. The modern World Cup ball is no longer just stitched leather. It is an engineered aerodynamic surface.

Trionda pushes that evolution further. It has only four panels, the fewest in men’s World Cup history, which have been thermally bonded – melded together using heat and adhesive.

Fewer panels might suggest less total seam length and therefore a smoother ball. And smoothness matters because the thin boundary layer of air clinging to the ball determines where the flow separates, how large a wake forms, and how much drag the ball experiences.

The Trionda has intentionally deep seams, three pronounced grooves on each panel and fine surface texturing.

But will these textures and grooves do the trick? To find that out, my colleagues and I measured the ball’s seam geometry and overall aerodynamic behavior. We compared it with Trionda’s four predecessors: 2022’s Al Rihla, 2018’s Telstar 18, the Brazuca used in 2014 and the Jabulani in 2010.

What the measurements show

In our wind tunnel tests at the University of Tsukuba, we measured something called the drag coefficient, which is a way of describing how much air resistance a ball experiences as it moves.

Using this data, we gained insights into how the airflow changes around the ball after it is kicked. The tests helped identify the drag crisis, the speed range in which changes in the boundary layer and flow separation produce a sharp change in drag, which can alter the ball’s acceleration, trajectory and range.
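
To give a feel for what a drag coefficient means for a ball in flight, here is a minimal sketch of the standard drag equation. The two drag coefficients are illustrative placeholder values for the smooth-flow and turbulent-flow regimes, not our wind tunnel measurements for any particular ball.

    import math

    # Drag deceleration of a ball: F = 0.5 * rho * Cd * A * v^2, divided by mass.
    # The Cd values used below are illustrative, not measured data.
    RHO = 1.2                      # air density in kg per cubic meter, near sea level
    RADIUS = 0.11                  # regulation ball radius in meters
    AREA = math.pi * RADIUS ** 2   # cross-sectional area
    MASS = 0.43                    # regulation ball mass in kilograms

    def drag_deceleration(speed_mps: float, cd: float) -> float:
        """Deceleration (m/s^2) caused by aerodynamic drag at a given speed."""
        force = 0.5 * RHO * cd * AREA * speed_mps ** 2
        return force / MASS

    speed = 27 * 0.44704           # 27 mph, near Trionda's reported drag crisis, in m/s
    print(f"{drag_deceleration(speed, 0.45):.1f} m/s^2")   # smooth-flow regime, Cd ~0.45: ~3.5
    print(f"{drag_deceleration(speed, 0.20):.1f} m/s^2")   # turbulent regime,  Cd ~0.20: ~1.5

In this toy example, crossing the drag crisis cuts the drag by more than half, which is why the speed at which that transition happens matters so much for corner kicks and free kicks.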

A ball is seen suspended.
The Trionda soccer ball prepares for the wind tunnel. Goff/Hong/Liu/Asai

We found that the Trionda is effectively rougher than those predecessors.

Trionda reaches its drag crisis at a lower speed, at about 27 mph (43 kph). That is below the roughly 31-40 mph (50-65 kph) range for Al Rihla, Telstar 18 and Brazuca, and far below Jabulani’s roughly 49-60 mph (79-97 kph) range, depending on orientation.

Why does all that matter? Because a ball can feel ordinary off the boot and still behave differently in flight. When the drag crisis occurs in the middle of game-relevant speeds, small changes in launch speed, orientation or spin can shift the ball from one aerodynamic regime to another.

That was Jabulani’s problem. Once kicked with little spin, it had a tendency to slow down too much as it passed through its critical-speed range.

Trionda does not look like that kind of ball. It has a more steady and consistent drag coefficient in the range of speeds associated with corner kicks and free kicks.

But there is a trade-off. Our measurements also showed that once Trionda enters the higher-speed, turbulent-flow regime, its drag coefficients are somewhat larger than those of Brazuca, Telstar 18 and Al Rihla.

In plain language, that suggests a hard-hit long ball may lose a little range.

In our simulations, the difference is not huge. But it is large enough that players may notice long kicks coming up a few meters short.

It is also important to note that we tested a nonspinning ball. As such, our results do not provide a prediction of every pass, clearance or free kick fans will see this summer. Balls in flight often spin due to off-center kicks. Spin, along with altitude, humidity, temperature and air pressure, influences how a ball flies through the air once kicked.

A ball mounted on a rod.
Close-up of the Trionda ball during wind tunnel testing. Goff/Hong/Liu/Asai

The big test yet to come

Fewer panels and more texturing aren’t the only differences with the new ball.

Trionda also carries technology that has little to do with its flight and a great deal to do with officiating. Like Al Rihla, Trionda includes “connected-ball technology” that lets computers know when the ball is kicked, helping with offside decisions.

But the architecture has changed. In 2022, the measurement unit was suspended at the center of the ball. With Trionda, it sits in a specially created layer inside one panel, with counterbalancing weights in the other three panels. The chip sends data to the video assistant referee, or VAR, system and the tournament’s semi-automated offside system.

That tweak will help referees, but will the new ball in general help or hinder players?

The evidence from our tests suggests that the ball won’t be behaving in a way that leads to baffling and erratic flight.

But the more intriguing possibilities are subtler and outside the scope of our tests. Will the grooves on Trionda help players generate more backspin on the ball, generating more lift and possibly offsetting Trionda’s somewhat larger high-speed drag coefficient?

That is why I keep studying World Cup balls both in the lab and through their behavior in play. Every four years, a new design offers a fresh way to watch physics enter the game, not in theory, but in the movement of an object in which every player on the soccer field must place their trust.

The Conversation

John Eric Goff currently works as a visitor in the Department of Physics at the University of Puget Sound in Tacoma, Washington. Following the conclusion on 30 June of that one-year appointment, he will start on 1 July as Professor of Engineering Practice in the Weldon School of Biomedical Engineering and the School of Mechanical Engineering at Purdue University.

  •  

Falling space debris poses an escalating risk as spacecraft get stronger and more heat resistant

Not all space debris burns up in the atmosphere before it makes it back to Earth. PaulFleet/iStock via Getty Images

When it comes to space debris, what goes up is coming down more often – and not safely.

When spacecraft launch, some components, including nonreusable rocket boosters, are jettisoned to decrease weight, leaving them to intentionally burn up as they reenter the atmosphere. Satellites also enter the atmosphere at the end of their life, supposedly burning up. But in many cases, they are not doing so as predicted.

Debris from partially burned-up spacecraft components and satellites reentering Earth’s atmosphere can pose a risk to people and structures on the ground. The surge in launches, driven largely by private players such as SpaceX, is turning a once-remote risk into a growing threat.

Our materials research group at the University of Wisconsin-Stout is studying the materials that allow reentry debris to survive. We look for ways to modify their exceptional heat-resistant qualities to make them safer for atmospheric reentry.

Debris landing on Earth

Reentry debris has fallen on both private and public property around the world multiple times since 2021. Some of the most notable events involve pieces from SpaceX Dragon’s carbon fiber trunk, which stays attached to the crewed capsule until just hours before its reentry. These trunks are larger than a 15-passenger van and used for storage.

Trunk debris from the Crew 7 mission to the International Space Station has landed in North Carolina, and fragments from the Crew 1 mission landed in New South Wales, Australia. Similarly, debris from the Axiom 3 mission landed in Saskatchewan, Canada.

A large piece of space debris from a SpaceX Dragon capsule was found by a campsite groundskeeper in North Carolina in 2025.

In addition to trunk debris, carbon fiber components that hold pressurized gases to adjust a spacecraft’s orientation also make up a lot of recovered reentry debris. Some of these most recent recoveries have been in Australia, Argentina and Poland.

Most of the debris that reenters the atmosphere burns up, so why are these pieces making it down to Earth’s surface?

Atmospheric reentry

Satellites such as SpaceX’s Starlink reside in low Earth orbit, typically between 190 and 1,240 miles (300 and 2,000 kilometers) above the Earth’s surface. To stay there, they need to move really fast, at about 17,000 miles (27,000 km) per hour. To reach this speed, a rocket carrying a million pounds of fuel had to accelerate it, and part of that energy remains stored as the satellite’s kinetic energy.

As an object in orbit drifts down, closer to Earth’s upper atmosphere, it starts to collide with air molecules, slowing the object down. The amount of heat generated from this interaction rapidly consumes the satellite, melting metal at over 3,000 degrees Fahrenheit (1,600 degrees Celsius).
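
A rough calculation shows just how much energy has to go somewhere. The 260-kilogram mass below is an assumed figure for a small satellite, chosen only for illustration; the speed is the orbital velocity mentioned above.

    # Orbital kinetic energy that must be shed, mostly as heat, during reentry.
    # The satellite mass is an assumed, illustrative value.
    mass_kg = 260                              # hypothetical small-satellite mass
    speed_mps = 27_000 * 1000 / 3600           # ~27,000 km/h converted to m/s (7,500 m/s)

    kinetic_energy_j = 0.5 * mass_kg * speed_mps ** 2
    print(f"{kinetic_energy_j / 1e9:.1f} gigajoules")   # about 7.3 GJ

That is roughly the energy released by nearly two tons of TNT, all of it dumped into heating the air and the spacecraft over a few minutes of reentry.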

More launches

Countries around the world have been launching items into space since the 1950s, so why is reentry a concern now?

Starting in the 1960s, about 100 objects were launched into space every year – or at least that was the case until 2016. Since then, the number has been increasing exponentially. In 2016, 200 objects launched. But in 2025, that number was 4,500, meaning 20% of all objects launched into space since the 1950s were launched last year.

Most of these launches came from companies in the United States, such as SpaceX and Rocket Lab. Companies like these, along with those outside of the U.S., have plans for large satellite constellations composed of hundreds of thousands to a million satellites.

The more objects and payloads launched, the more reentry events occur. Satellite operators are required to remove their decommissioned satellites from orbit within 25 years of the end of their missions to comply with guidelines set in place by international committees. Groups across the world, including the Federal Communications Commission in the U.S., have pushed to shorten the deorbit window to five years. Because of these guidelines, the full influx of reentry debris events from these recent launches will not be felt for another 10 or more years.

The objects launched and policy decisions made today will have a lasting effect on future safety.

Carbon fiber

As the world has progressed technologically, so has the efficiency of launching items into space.

Satellites and spacecraft are becoming lighter, stronger and more heat resistant because of materials such as carbon fiber-reinforced plastics and new metals. These strong materials are sought after because they’re lightweight, but they can also cause deorbiting debris to withstand reentry temperatures.

Carbon fiber, once used exclusively in space technology, is now found in common items such as bicycle frames and racing car bodies. It is still the gold standard for fabricating high-strength, low-weight materials for spacecraft components such as rocket fuselages, interstaging – the protective housing found between the rocket stages – and pressure vessels that experience extreme temperatures and high mechanical stress and strain.

Simple metals such as aluminum and steel melt and burn away, while complex materials such as carbon fiber, which is manufactured at temperatures of up to about 5,400 F (3,000 C), burn away unpredictably, changing the way jettisoned components break up upon reentry.

Since the early 2000s, a majority of recovered space debris contains either carbon fiber-reinforced plastic sections or metal components wrapped with carbon fiber. The carbon fiber can act as an unintentional heat shield for heavier, more harmful debris.

A map showing the world with dots spread across the U.S., South America, the coasts of southern Africa, Australia and Southeast Asia.
This map shows locations where confirmed space debris has been recovered. With the increase in launches, the European Space Agency predicts that future space debris could fall practically anywhere across the world. European Space Agency

Design for demise

Design for demise is a major area of research focused on mitigating the risk of reentry debris. Instead of relying on controlled, meticulously timed deorbits that send any surviving components into the ocean at the end of their lives, engineers design spacecraft components to disintegrate completely while deorbiting through the atmosphere.

Design for demise can take many forms. These range from switching to more heat-susceptible materials, to relocating harder-to-burn components to areas of the spacecraft that will be hotter during reentry, to using linkages that break apart at high temperatures so that structures separate into smaller components that burn up more easily.

With so much focus historically on spacecraft being made from the lightest, strongest and most heat-resistant materials available, it may seem counterintuitive to intentionally make some materials weaker. The key is making materials smarter, so they maintain their strength during their mission but weaken under the heat of reentry.

The Conversation

Matthew Ray's lab is developing and working toward patenting a system to decrease risk from future carbon fiber based reentry debris.

Reese Hufnagel conducts research on space debris and is developing ways to make future carbon composites safer for use in orbit.

  •  

Why Trump’s call to pull 5,000 US troops from Germany will hurt America

The propeller of a 'raisin bomber' airplane from World War II is seen in Frankfurt, Germany, in June 2020. AP Photo/Michael Probst

President Donald Trump announced on May 1, 2026, that the United States will withdraw 5,000 U.S. troops from Germany – personnel who had been deployed there as a response to Russia’s invasion of Ukraine.

Germany-U.S. tensions started after the U.S. invasion of Iran. German Chancellor Friedrich Merz refused to support Trump’s war and stated that Iran had humiliated Washington’s leadership by closing the Strait of Hormuz. Trump followed the initial U.S. troop withdrawal announcement with threats to pull more armed forces.

U.S. troops will depart Germany over the next six to 12 months, leaving about 31,000 troops in the country.

The Trump administration’s decision to withdraw personnel comes after weeks of mounting tensions between the U.S. and NATO members. The United Kingdom and Portugal have restricted Washington’s ability to use its bases in those countries for certain activities related to the Iran war.

Trump also threatened to withdraw U.S. troops from Spain and Italy over their opposition to the war and refusal to help the U.S.

“Why shouldn’t I?” Trump said on April 30, 2026, referring to possible U.S. troop withdrawal from the two European countries. “Italy has not been of any help. Spain has been horrible. Absolutely.”

These remarks suggest the Trump administration views U.S. troop withdrawal as punishment for noncompliant European allies. But the reality is more complicated. Although this proposed 5,000-troop reduction is less than 15% of current U.S. forces in Germany, its logic and consequences speak to broader issues of power projection.

As experts in international relations, foreign policy and security cooperation, we have studied the relationship between U.S. military deployments and their host countries for years. While U.S. deployments contribute to the security of the host state, having troops based in Europe and other countries provides the U.S. with significant flexibility for pursuing its own foreign policy goals.

US deployment levels

Europe has historically been one of the regions with the highest concentrations of U.S. military personnel deployed overseas.

Since the end of the Cold War, for example, Italy has hosted between 20,000 and 40,000 personnel, and Spain between 2,000 and 7,000 personnel. Germany has regularly hosted the largest deployments. At the end of the Cold War, the U.S. maintained approximately 227,000 military personnel in Germany. Though Europe remains a significant location for basing U.S. troops, this number fell dramatically in the 1990s, hovering between 50,000 and 75,000 for most years since then.

US power projection

Historians and policymakers often explained U.S. deployments to Europe as a means of deterring the Soviet Union during the Cold War.

Nobel laureate Thomas Schelling described the logic in 1966: Even a small deployment in West Berlin served as a trip wire, ensuring that Soviet incursions would trigger a much larger military response from the U.S. and its European allies.

But a closer look at U.S. foreign policy challenges this view. While U.S. troops stationed in Europe were meant to defend Europe, their utility has extended far beyond that.

U.S. military bases and deployments provide the U.S. with greater flexibility and opportunities to pursue its foreign policy goals. By forward positioning military personnel and assets, the U.S. can reduce response times during crises, as well as the costs of moving its military resources into strategic positions.

A military plane lands on a runway.
A U.S. military aircraft lands at Incirlik Air Base in Adana, Turkey, as part of the operations against ISIS on Aug. 10, 2015. Volkan Kasik/Anadolu Agency/Getty Images

Foreign deployments can deter other countries from attacking the nations that host them. During the Cold War, for example, the U.S. deployed nuclear weapons to Incirlik Air Base in Turkey, a NATO ally. Turkey’s close proximity to the Soviet Union increased the U.S.’s ability to challenge its superpower rival with these weapons.

These missiles were famously later withdrawn during the Cuban missile crisis in 1962, giving the U.S. something to bargain with in persuading the Soviets to remove their missiles from Cuba.

Larger military engagements, such as the Vietnam War or the wars in Iraq and Afghanistan, have typically relied on U.S. military facilities in allied states that are closer to the conflict. During the Vietnam War, U.S. bases in Germany, Japan and the Philippines were used as staging areas through which U.S. personnel and equipment moved on their way in or out of Southeast Asia.

U.S. facilities in Germany, such as Ramstein Air Base and Landstuhl Regional Medical Center, have been integral to combat operations, satellite control of drones and treating U.S. personnel wounded in combat. Landstuhl has admitted over 97,000 wounded soldiers since its founding in 1953 and has already treated service members injured during the ongoing Iran war.

Further, military equipment such as radar and interceptor missiles often have limited ranges. Deploying this equipment closer to rival countries can increase the chance of successfully intercepting and destroying incoming missiles.

Humanitarian benefits

Beyond warfare, U.S. humanitarian relief and disaster response operations often benefit from U.S. bases.

For instance, after a large earthquake struck Japan in 2011, U.S. personnel and facilities located in and around Japan enabled the rapid mobilization of relief operations.

A military transport plane takes off from a runway.
A U.S. Air Force C-17 Globemaster transport plane takes off from Ramstein Air Base in Germany on June 23, 2025. Boris Roessler/Picture Alliance via Getty Images

In 2004, a powerful earthquake in the Indian Ocean triggered large tsunamis, affecting millions of people in nearby countries. U.S. personnel stationed at Yokota Air Base near Tokyo provided relief and supplies to people throughout Southeast Asia and as far as eastern Africa.

Similarly, after an earthquake in Turkey in 2023, U.S. medical personnel relocated from Germany to Incirlik Air Base to help provide relief.

Beyond their humanitarian benefits, these missions can increase favorable views of the U.S. More positive public views of America may also make foreign governments more likely to support U.S. foreign policy goals.

Lower costs for the US

Host states often make direct and indirect contributions to the costs of hosting and sustaining U.S. personnel. These can range from direct financial transfers to construction, tax reductions and subsidies. Japan and South Korea increased the amount they pay to host U.S. troops after Trump demanded they do so in 2019.

U.S. equipment – from tanks and trucks to planes and ships – also often relies on a host country’s infrastructure to operate and move within the host country. Germany, for example, paid over US$1 billion for construction costs and the stationing of U.S. troops in Germany during the 2010s.

Not all countries that host U.S. troops invest as much in their infrastructure as Germany does, and having those troops elsewhere could prove far more costly than having them in Germany.

The Conversation

Michael A. Allen received grant research funding from the Department of Defense's Minerva Initiative, the US Army Research Laboratory, and the US Army Research Office from 2017 to 2021.

Carla Martinez Machain has received funding from the Department of Defense's Minerva Initiative, the US Army Research Laboratory, and the US Army Research Office.

Michael E. Flynn has received funding from the Department of Defense's Minerva Initiative, the US Army Research Laboratory, and the US Army Research Office.

  •  

Immigrant patients often choose doctors with a shared cultural background – what they are seeking isn’t sameness but connection

Patients seek clinical interactions where they feel heard. Evgeniia Siiankovskaia/Moment via Getty Images

At a recent dental appointment, I was unexpectedly seen by a new provider in my longtime dentist’s practice. Early in the visit, he realized we were both Iranian American. Like me, he had been born and raised in the United States. We were both fluent English speakers and fully accustomed to navigating American medical settings.

After we briefly discussed how the war in Iran was affecting our families there, something shifted. The exchange was short, but deeply human. I left feeling an immediate sense of connection, trust and familiarity with a provider I had only just met.

That experience helped me better understand something I had long observed among immigrant families – that immigrant patients often seek out healthcare providers from similar backgrounds. What they are often seeking goes beyond a shared language or cultural familiarity.

I am a health administration professor and lawyer who studies how people navigate health systems. In my work, and through conversations with immigrant families, including my own, I have seen how subtle interactions in clinical settings can shape whether patients feel confident or dismissed and unsure about returning for care. For some, choosing a doctor with a similar background represents their best attempt to feel more understood.

The fact that many patients actively seek out providers who share aspects of their cultural background, even when doing so may require additional effort or limit their options, illustrates that it is not a minor preference, but a meaningful part of how people experience care.

Beyond a shared language

Immigrants make up a growing share of patients in the U.S., accounting for about 15% of the population.

Large national studies suggest that patients often seek providers with whom they share a cultural background. That choice is especially pronounced among racial and ethnic minority patients, those who speak a language other than English at home and those with public insurance.

Even as the U.S. physician workforce becomes more diverse, many patients still report difficulty finding providers who share their cultural or linguistic background. At the same time, some evidence suggests the number of foreign-born physicians may also be declining. In my view, that makes the effort to find such providers all the more noteworthy.

Busy health care waiting room with a doctor discussing treatment plans with mother and daughter and a desk in the background.
Healthcare providers can do a lot to support patients’ sense of trust in their care. Dragos Condrea/iStock via Getty Images

A shared language may seem like the most obvious explanation for why immigrants seek out doctors from similar backgrounds. And in many cases, it does matter. When patients and clinicians speak the same language, communication improves and medical errors decline, especially for patients who are not fluent in English.

But language alone does not explain experiences like my own.

Narrative research on immigrant patients describes broader issues. For example, a patient might raise a concern about a persistent symptom, only to feel too quickly dismissed, or hear an explanation delivered in a simplified way that does not match their level of knowledge or experience.

These moments can be subtle, but as they accumulate over time, they may contribute to a sense that medical care feels transactional or dismissive rather than responsive to patients’ concerns. Even patients who are fully fluent in English and comfortable navigating the health system may come to expect not to be fully heard.

That expectation can shape where people feel comfortable seeking care.

Why shared background can matter

Sharing a background, whether through race, ethnicity, language or cultural experience, can sometimes help create a sense of connection – especially at the start of a relationship.

But research suggests the relationship is more nuanced than simply matching patients and doctors by identity. The way a doctor communicates, as well as whether they listen carefully, take concerns seriously and involve patients in decisions, also plays a central role.

In one study that examined physician-patient relationships across racial and ethnic groups, patients who felt personally similar to their physician – for example, in how the physician communicated, approached decisions or seemed to understand their concerns – were more likely to trust their doctor, feel satisfied with their care and follow medical advice.

Research on patient-centered care has similarly found that patients value interactions where they feel respected, understood and able to communicate openly.

Together, these studies suggest that while shared backgrounds can sometimes help create trust, communication and interpersonal connection may matter just as much.

More research is needed to understand how much these experiences reflect differences in communication itself versus connection spurred by a common background. But for immigrant patients, it may not be the shared identity itself that matters most, but the expectation that it will help them feel more easily understood. When patients consistently struggle to find that experience, shared background can become one of its few visible signals.

Understanding why immigrant patients make these choices ultimately reveals something more universal: Trust in medicine is shaped not only by clinical expertise, but by everyday human interaction. And patients value this quality so highly that they actively seek out providers who they believe will offer that sense of understanding and connection.

The Conversation

Yasamine Salkar does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

  •  

Why Pennsylvania’s low-income residents are feeling the squeeze as gas prices rise

Pennsylvania consistently ranks among states with the highest gas prices. eyecrave productions/iStock via Getty Images Plus

When gas prices rise, not everyone feels the pain equally. For low-income and rural Pennsylvanians, a trip to the gas station can mean choosing between a full tank and groceries. Many factors – crude oil costs, distribution and marketing, and, to some extent, Pennsylvania gas taxes – add up to keep Pennsylvania’s gas prices higher than average.

Pittsburgh gas prices are among the highest in Pennsylvania due to higher urban demand, refinery maintenance issues in the Midwest and supply shortages.

Currently, the average gas price in the U.S. is $4.50. In Pennsylvania, the average is $4.66, and in Pittsburgh it’s $4.91.

To understand why, and what – if anything – can be done about high gas prices, The Conversation U.S. spoke with Hannah Wiseman, an energy and environmental law scholar whose work focuses on how regulation is designed. She explains who gets hit hardest by high gas prices and why relief is so hard to come by.

How do rising gas prices hit low-income Pennsylvanians differently than middle- or upper-income residents?

Low-income people typically have a limited monthly budget, with fewer or no savings to draw from. Each essential expense is a portion of an individual’s or family’s fixed budget, and when an essential expense rises, it eats up more of this fixed budget. For the costs of fuel and electricity, this is called the “energy burden” – the percentage of someone’s income that goes to energy costs. The higher the cost of energy, the more this impacts people’s ability to pay for other essential goods, such as food, medicine and medical care.
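
To make the “energy burden” concrete, here is a minimal arithmetic sketch; the household figures are hypothetical rather than drawn from the interview, and for simplicity it counts only gasoline rather than all energy costs.

    def energy_burden(monthly_energy_cost, monthly_income):
        """Share of income spent on energy, as a percentage."""
        return 100 * monthly_energy_cost / monthly_income

    # Hypothetical households buying the same 60 gallons a month.
    gallons = 60
    for income in (2_000, 8_000):           # monthly income, dollars
        for price in (4.50, 4.91):          # U.S. average vs. Pittsburgh, $/gallon
            cost = gallons * price
            print(f"income ${income}, gas ${price:.2f}/gal -> "
                  f"burden {energy_burden(cost, income):.1f}%")

Under these assumptions, the same price jump raises the lower-income household’s burden by more than a full percentage point of its income, while the higher-income household barely notices it.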

Pennsylvania consistently ranks among states with the highest gas prices. What regional conditions make Pennsylvania expensive?

Like any other good, the cost of gas is influenced by the cost of the raw product from which gasoline is refined (crude oil), the costs of operating the facilities that transport and distribute gas, and the amount of retail competition.

As the U.S. Energy Information Administration explains, distance from supply – refineries, ports and pipelines – usually means higher prices. This type of infrastructure is scarcer in the mid-Atlantic region, including Pennsylvania. And some rural areas have fewer gas stations, which can result in less retail competition.

Gasoline prices tend to be lowest in Gulf Coast states, such as Texas, with a current average of $4.01, and Louisiana, with a current average of $3.99, where there are many crude oil refineries and oil pipelines.

A landscape scene featuring two silos and farmland.
Due to lack of public transit, rural Pennsylvania residents rely on their personal vehicles to get to work. aimintang/E+ collection via Getty

How does the lack of reliable public transit in rural areas deepen the inequality issue?

Rural areas tend to have less public transportation – making personal vehicles essential – and people have to drive to their jobs to make ends meet. So when gas prices go up, rural residents often have no option but to fill up their tank at a high cost and potentially forgo other essentials.

Rural populations also have a substantial percentage of individuals defined as the “working poor.” These are low-income individuals for whom getting to work is essential. They are already saddled with high energy burdens, which rise with higher gas prices, and they live in rural areas with few affordable options for getting to work.

Are there existing state or federal programs that help low-income residents offset fuel costs?

Low-income support tends to come from states. Most government programs support home heating costs and utility bill payments for low-income residents; programs are more limited for gasoline. In California, during the 2022 spike in gasoline prices, the state sent checks to low-income families. Currently, Pennsylvania has no formal legislation in place to assist low-income families with gasoline costs.

Most electric-vehicle owners can no longer rely on the $7,500 federal tax credit for owning one. UCG/Universal Images Group via Getty Images

Electric vehicles remain out of reach for many low-income families. Does the green energy transition risk widening the equity gap?

Many U.S. residents cannot afford electric vehicles, largely because of tariffs on imports of more affordable electric vehicles from countries such as China.

Additionally, the H.R. 1 Act erased the $7,500 tax credit for buying electric vehicles. This limited access to EVs widens the gap – wealthier families with electric vehicles can plug in their vehicles and avoid high gas prices, while lower-income individuals lack this option.

What can be done about high gas prices for low-income Pennsylvanians?

Pausing gasoline taxes, which is currently being debated by Pennsylvania state legislators, can reduce prices, but it also lowers revenues needed for public programs.

Direct rebates from the state to low-income individuals offer more value. However, Pennsylvania lawmakers are not presently considering direct rebates.

Read more of our stories about Pittsburgh and Pennsylvania.

The Conversation

Hannah Wiseman is a member of the Center for Progressive Reform. Her research on renewable resources, carbon sequestration, hydrogen, and energy/land use connections has received funding from the Sloan Foundation, Arnold Ventures, the Center for Rural Pennsylvania, the U.S. Department of Energy, and the National Science Foundation.

  •  

Suspending federal gas tax wouldn’t save drivers as much as they might hope – here’s what goes into the price of a gallon of gas

Gas taxes – federal and state – make up only a small piece of the price of a gallon of gas. AP Photo/Jenny Kane

With gasoline prices still high – averaging over US$4.50 a gallon in mid-May 2026 – President Donald Trump said he wanted Congress to suspend the federal gas tax, which is 18.4 cents a gallon for gasoline and 24.3 cents a gallon for diesel. A bill has been introduced in the Senate, and one is expected to follow in the House, according to Politico, but their fate is unclear.

States also charge their own taxes, ranging from 70.9 cents a gallon for gas in California to 8.95 cents in Alaska. Indiana, Georgia and Utah have suspended their gas taxes for at least some of 2026, and other states are considering similar measures.

As an energy economist, I have seen how suspending those taxes does reduce prices, but not as much as politicians – or drivers – might hope. Research on past gas tax holidays has found that consumers get about 79% of the reduction in gas taxes. That means oil companies and fuel retailers keep about one-fifth of the tax cut for themselves rather than passing that savings to the public.

Suspending the federal gas tax, which would require Congress to pass a law, wouldn’t help consumers much anyway. Even if oil companies passed on the whole savings to consumers, national average gas and diesel prices would drop only about 4%. The percentage reduction in high-cost states such as California would be even smaller.
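
As a back-of-the-envelope check on those two claims, using only the figures quoted above: 18.4 cents is roughly 4% of a $4.50 gallon, and with the historical 79% pass-through rate the typical saving shrinks to about 14.5 cents, or a little over 3%.

    # Rough check using the numbers cited in this article.
    federal_tax = 0.184    # dollars per gallon of gasoline
    avg_price = 4.50       # mid-May 2026 national average, dollars per gallon
    pass_through = 0.79    # share of past tax cuts that reached consumers

    print(f"Full pass-through: {100 * federal_tax / avg_price:.1f}% off the pump price")
    saved = pass_through * federal_tax
    print(f"Typical pass-through: about {100 * saved:.1f} cents per gallon, "
          f"or {100 * saved / avg_price:.1f}% of the pump price")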

Gas taxes are just one part of what drives gas prices. Overall, the price of a retail gallon of gas is the sum of four things: the cost of crude oil, refining, distribution and marketing, and taxes.

In nationwide figures from January 2026, crude oil accounted for about 51% of the pump price, refining roughly 20%, distribution and marketing about 11% and taxes about 18%. That mix shifts with conditions: When crude oil prices spike, that can drive more than 60% of the price; when the price drops, taxes and logistics are larger shares of the cost.
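
To see what those shares mean in dollars, here is a minimal sketch applying the January 2026 percentages to the $4.50 national average; it is an illustration of the breakdown described above, not official Energy Information Administration output.

    # Dollar breakdown of a $4.50 gallon using the January 2026 shares.
    pump_price = 4.50
    shares = {
        "crude oil": 0.51,
        "refining": 0.20,
        "distribution and marketing": 0.11,
        "taxes": 0.18,
    }
    for component, share in shares.items():
        print(f"{component:<27} ${pump_price * share:.2f} per gallon")
    print(f"{'total':<27} ${pump_price * sum(shares.values()):.2f} per gallon")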

Crude oil is the biggest ingredient

Because the price of crude oil is the largest element, most of the price at the pump is derived from the global oil market.

Usually, big swings in crude prices come mainly from shifts in global demand and expectations – not from supply disruptions, according to widely cited research in 2009 by the economist Lutz Kilian.

But what is happening in early 2026 with the war in Iran is one of the exceptions: a classic supply shock. Severe disruptions to shipping through the Strait of Hormuz and attacks on Middle East oil infrastructure have taken millions of barrels a day off the global market.

Most drivers generally can’t quickly reduce how much they drive or how much gas they use when prices rise, so gasoline demand doesn’t change much in the short run. That means a jump in crude costs tends to result in people paying more rather than driving less.

Refining, regulations and the California puzzle

Refining turns crude into gasoline at industrial scale. The U.S. doesn’t have a single gasoline market, though. Roughly a quarter of U.S. gasoline is a cleaner-burning blend of petroleum-derived chemicals called “reformulated gasoline,” which is required in urban areas across 17 states and the District of Columbia to reduce smog.

California uses an even stricter formulation that few out-of-state refineries make. California is also geographically isolated: No pipelines bring gasoline in from other U.S. refining regions.

California’s gasoline prices have long run above the national average, explained in part by higher state taxes and stricter environmental rules. But since a refinery fire in Torrance, California, in 2015 reduced production capacity, the state’s prices have been about 20 to 30 cents a gallon higher than what those factors would indicate.

Energy economist and University of California, Berkeley, professor Severin Borenstein has called this the “mystery gasoline surcharge” and attributes it to the fact that there isn’t as much competition between refineries or gas stations in California as in other states. California’s own Division of Petroleum Market Oversight says the surcharge cost the state’s drivers about $59 billion from 2015 to 2024. It’s not exactly clear who is getting that money, but it could be gas stations themselves or refineries, through complex contracts with gas stations.

A person stands near a long metal truck in front of a gas station.
A tanker truck delivers fuel to a gas station. AP Photo/Erin Hooley

Getting the gas into your car

The distribution and marketing category covers the costs of everything involved in getting the gasoline from the refinery gate to your tank.

Gasoline moves by pipeline, ship, rail and truck to wholesale terminals, and then by local delivery truck to service stations.

At the retailer’s end, the key factors are station rent and labor, the cost to buy gasoline in bulk to be able to sell it, credit card fees of as much as 6 to 10 cents a gallon at current prices, and franchise fees paid to the national brand, such as Sunoco or ExxonMobil, for permission to put their branding on the gas station.

Most gas station operators net only a few cents per gallon on fuel itself – which is why many gas stations are really convenience stores with pumps out front. Borenstein and some of his collaborators have also documented that retail gas prices rise quickly when wholesale costs climb but fall slowly when wholesale costs drop.

The question of gas tax holidays

Gas tax holidays reduce funding for what the taxes are designed to pay for, typically roads and bridges. That pushes road and bridge upkeep costs onto future drivers and general taxpayers.

There is an additional problem, too: Taxes on gasoline are supposed to charge drivers for some of the costs their driving imposes on everyone else – carbon emissions, local air pollution, congestion and crashes. But Borenstein has found that U.S. fuel tax levels are already far below the true cost to society. Removing the tax on drivers effectively raises the costs for everyone else.

A fisherman holds a pole in the foreground as an oil tanker sails by at sunset
Suspending the Jones Act allows foreign-based oil tankers to sail between U.S. ports. AP Photo/Eric Gay

The Jones Act: A small number that adds up

The 1920 Jones Act is a federal law that requires cargo moving between U.S. ports to travel on vessels built and registered in the U.S., owned by U.S. citizens, and crewed primarily by U.S. citizens and permanent residents. Of the world’s 7,500 oil tankers, only 54 meet this requirement. Only 43 of these can transport refined fuels such as gasoline.

So, despite significant refining capacity on the Gulf Coast, some U.S. gasoline is exported overseas even as the Northeast imports fuel, in part reflecting the relatively high cost of moving fuel between U.S. ports.

Economists Ryan Kellogg and Rich Sweeney estimate that the law raises East Coast gasoline prices by about a penny and a half per gallon on average, costing drivers roughly $770 million a year. In light of the war’s effect on gas prices, the Trump administration has temporarily suspended the Jones Act requirements – an action more commonly taken when hurricanes knock out Gulf Coast refineries and pipeline networks.

What moves the number

The result of all these factors is that the price that drivers see at the pump mostly reflects the global price of crude, plus a stack of domestic costs, only some of which are inefficient.

Tax holidays give a partial, short-lived rebate. Jones Act waivers trim pennies, though permanent repeal may cause more fundamental changes, such as reduced rail and truck transport of all goods, which could lower costs, emissions and infrastructure damage associated with cargo transportation. Harmonizing fuel blends across states and seasons may lower prices somewhat, but likely at the expense of increased emissions.

Ultimately, the best protection against oil price shocks is a more efficient gas-burning vehicle, or one that doesn’t burn gasoline at all. In the meantime, the best I can offer as an economist is clarity about what that $4.50 actually buys.

This article includes material previously published on May 1, 2026.

The Conversation

Robert I. Harris does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

  •  

Many of the Caribbean’s most important reefs are going unprotected

A researcher checks on corals in Banco Chinchorro, off Quintana Roo, Mexico. Lorenzo Alvarez-Filip

Living by the sea in the tropics means being exposed to some of nature’s most powerful forces. Hurricanes can bring storm surges, flooding and destructive waves that threaten homes, infrastructure and livelihoods.

For many communities, coral reefs are a natural first line of defense against these storms. The reefs’ rugged structures break the incoming waves, reducing the waves’ energy by as much as 97%. Globally, reefs prevent about US$4 billion a year in storm damage. Without them, studies suggest, the damage would double.

Yet, these vital ecosystems are under increasing pressure. Rising ocean temperatures, pollution and coastal development are driving the loss of reef-building corals – the species that create the physical structure of coral reefs and underpin their ability to protect coastlines and provide habitat for marine life.

Protecting key coral reefs from these human-caused stresses could help the reefs continue to reduce future storm damage.

But which reefs should be prioritized?

An aerial view of a reef just off shore.
Reefs visible just offshore protect the coastline of Puerto Morelos, Mexico, in part by breaking waves during storms. Lorenzo Alvarez-Filip

We study coral reefs and marine environments. In a new research paper, we examined the likely impact that future warming will have on reefs across the Caribbean over the coming decades, including which reefs are most likely to persist under rising temperatures. Then we looked at which reefs were likely providing the greatest protective benefits for coastlines based on their functional characteristics.

The results show that about half of all the reefs with the greatest potential to continue to protect coastlines as the oceans warm are currently unprotected from human harms.

The Caribbean’s hidden coastal defenders

The value of coral reefs is evident along the Mexican Caribbean coast, where tourism is a major economic driver and the main source of income for local communities. The tourism industry there can generate up to $15 billion in a single year. Much of that value depends directly or indirectly on healthy coral reefs.

Losing the reefs would not only harm the fish that rely on coral structures for habitat and the livelihoods of people who depend on them; it would also cost millions of dollars in increased storm damage. An estimated 105,800 people, along with buildings and other infrastructure worth $858 million, are located in coastal areas protected by reefs in the Mexican Caribbean alone.

An overhead view of a dense coral reef.
Elkhorn corals (Acropora palmata) are among the most important corals in the Caribbean. They can form dense clusters that are highly effective at taking the energy out of waves. Lorenzo Alvarez-Filip

The role of reefs becomes especially clear during extreme events.

In 2005, Hurricane Wilma, a Category 5 storm, struck the coast of Quintana Roo in the Yucatán Peninsula, Mexico. Near the small town of Puerto Morelos, the coral reefs broke the waves, helping lower the wave height that had reached nearly 36 feet (11 meters) offshore to less than 6 feet (2 meters) near the coast. The reefs near Puerto Morelos are part of a protected national park where public access to the reefs is heavily regulated.
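
A rough physics check is consistent with the energy figure cited earlier: in linear wave theory, wave energy scales with the square of wave height, so knocking waves down from about 11 meters to about 2 meters removes roughly 97% of their energy. This is a simplified sketch, not a calculation from the park’s monitoring data.

    # Back-of-the-envelope: wave energy scales with height squared (E ~ H**2).
    offshore_height = 11.0    # meters, offshore during Hurricane Wilma
    nearshore_height = 2.0    # meters, behind the reef near Puerto Morelos

    remaining = (nearshore_height / offshore_height) ** 2
    print(f"Wave energy reduced by about {100 * (1 - remaining):.0f}%")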

Not all reefs protect the coast equally

However, not all reefs provide the same level of protection for coastlines. Our research shows that the differences depend on the reef engineers – the coral species that built the reef.

Reefs dominated by large, complex and rigid corals, such as thickets of elkhorn corals, create rough, elevated structures that can break and slow incoming waves, providing the greatest protection. In contrast, reefs made up of smaller or flatter species offer less resistance.

Knowing which reefs deliver the greatest structural protection can help countries and communities prioritize protecting them from human pressures, such as pollution and ship traffic.

We found that of the highest-priority reefs – based both on functionality and how well they are expected to survive rising water temperatures by midcentury – only 54% were protected. In the Caribbean’s western, southwestern and Florida ecoregions, priority reefs were most likely to be in formal marine protected areas, while the Greater Antilles and Bahamas had several unprotected reefs.

The Bahamas, Puerto Rico, Turks and Caicos, and Cuba have many high-value reefs that remain unprotected, meaning there are opportunities to increase protection on these important reefs. The reefs that we identified as important for conservation based on their physical functionality have also been reported to support high levels of biological diversity.

A coral reef with large groups of corals.
Reefs dominated by complex and rigid structures are often the most functional for protecting coastlines. They also provide important habitat for fish. Lorenzo Alvarez-Filip

While a large percentage of coral reefs off Belize, Honduras and Puerto Rico are protected, we found that several reefs with the greatest potential for protecting coastlines were not within marine protected areas.

Why does this matter in a warming world?

Ocean warming is driving more severe and frequent coral bleaching events. When water temperatures rise too high, corals expel zooxanthellae – the algae that live in their tissues, provide them with energy and give corals their color. If heat stress is too intense or prolonged, many corals won’t recover.

As corals die, the reef structures they built break down and lose complexity over time. The coastal defenses they provide disappear.

At the same time, high-intensity hurricanes are becoming more frequent.

This creates a dangerous combination: stronger storms hitting coastlines that are less protected.

Protecting coral reefs is essential, not only for the sake of marine biodiversity, but for safeguarding coastal communities, their economies and the millions of people who live there.

The Conversation

Sara M. Melo Merino received a scholarship from Secretaría de Ciencia, Humanidades, Tecnología e Innovación (Secihti No. 246257).

Lorenzo Alvarez-Filip and Steven Canty do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

  •  

Why Kevin Warsh might still prove to be an independent Federal Reserve chair

The nomination of Kevin Warsh as Federal Reserve chair is reviving a debate about Fed independence. AP Photo/Jose Luis Magana

Kevin Warsh is now likely to secure Senate approval on May 13, 2026, as the next Federal Reserve chair – and become arguably the most powerful central banker in the world. But when Warsh appeared before the Senate Banking Committee for his confirmation hearing in April, one punchy question underscored the dilemma that Warsh, lawmakers and the Fed all face:

“Are you going to be the president’s human sock puppet?” asked Republican Senator John Kennedy of Louisiana.

On one level, the question reflects President Donald Trump’s intense pressure on the central bank to cut rates, with current Chair Jerome Powell often the target of his ire. But it also points to Warsh’s own inconsistency on inflation.

Earlier in his career, he was a “hawk,” pushing for interest rate hikes to curb inflation and opposing the novel crisis management authorities that the Fed took on after the 2008 financial meltdown. Now, Warsh supports the interest rate cuts that Trump has pushed for as a way to juice growth.

Warsh has also come under fire for his deep ties to the financial sector, where he once worked. Lawmakers such as Democratic Senator Elizabeth Warren of Massachusetts have cited the potential conflict of interest posed by his undisclosed assets, even though in theory they’ll be divested as part of Warsh’s arrangements with the government’s ethics watchdogs if he becomes chair.

As scholars who study central banks and the politics of finance, we understand why concerns about Warsh’s credibility have persisted. But perhaps counterintuitively, we also believe that once he’s confirmed, his finance background could reinforce his prior hawkish leanings, leading to more independence from Trump on inflation and interest rates.

Is past prologue?

If confirmed as chair, as expected, Warsh and his colleagues on the Fed’s policy-setting committee would wield enormous power. Not only does the central bank set the benchmark rate that determines short-term lending, but the Fed also oversees a US$6.7 trillion balance sheet, mostly in government bonds, that partially affects longer-term borrowing costs. Guided by its mandate to control inflation, the Fed’s decisions impact everything from grocery prices to mortgage rates.

Along with Warsh’s prior stints in government and on the Fed’s policymaking board as a governor, he worked for the investment firm Morgan Stanley and the hedge fund Duquesne Capital. In those positions, Warsh advanced his career in an industry that has long preferred hawkish Fed policies, even at the cost of job growth: Wall Street is generally “conservative” in that it favors lower inflation and higher interest rates on grounds that those policies can support bigger bank profits and higher prices for bank shares, while reducing the risks brought by disinflation policies.

While serving as a Fed governor in the aftermath of the 2008 financial crisis, Warsh’s comments reflected this outlook. He talked extensively about inflation being a “choice” – that is, the result of poor policy decisions, rather than broader structural forces.

He also questioned the Fed’s massive bond purchases, which were meant to stimulate the economy and reduce high unemployment by pushing long-term borrowing rates lower. The Fed revived those bond buys during the pandemic recession, while waiting too long, in the eyes of many economists, to hike rates once inflation began rising in 2021.

More recently, Warsh has focused his criticism on the central bank’s “bloated” balance sheet as well as its inflation record. Those legacies, along with the stimulative government spending under President Joe Biden, prompted Warsh to warn in February 2022 that “extraordinary excesses in monetary and fiscal policy caused the inflation dragon to resurface after 40 years of dormancy.”

A red-and-blue 'For Sale' sign stands in front of a foreclosed home in Las Vegas in the early days of the great financial crisis, on Feb. 8, 2008.
The 2008 financial crisis and housing meltdown prompted the Fed to take unprecedented steps to intervene in the economy. AP Photo/Jae C. Hong

Which Warsh will show up?

Given that long record, many Fed watchers looked at his turnaround in the second Trump administration with some skepticism. When he was a finalist for the nomination to chair the central bank in summer 2025, he told CNBC that the Fed’s hesitancy to cut rates – which was already drawing Trump’s wrath – was “quite a mark against them.”

“The specter of the miss they made on inflation, it has stuck with them,” he added. “So one of the reasons why the president … is right to be pushing the Fed publicly is we need regime change in the conduct of policy.”

Warsh’s rhetorical shift has led many to ask whether he can reconcile his responsibilities with political pressure. But the worsening inflation outlook for both the U.S. and world, driven by spiking oil prices, may force his hand regardless.

The spike in oil prices from the Iran war, in particular, has economists raising their inflation forecasts for the U.S. At his last Fed meeting as chair, Powell indicated that the central bank could be a long way off from lowering rates given inflation concerns. The Bank of England and the European Central Bank are also bracing for possible rate hikes if inflation doesn’t ease.

Wearing safety helmets, Jerome Powell and Donald Trump look over a document of construction cost figures during a visit to the Federal Reserve headquarters on July 24, 2025.
In 2025, President Donald Trump ramped up pressure on Federal Reserve Chair Jerome Powell to cut interest rates and attacked the Fed for construction cost overruns at its Washington headquarters. AP Photo/Julia Demaree Nikhinson

Trump ramps up the pressure

For his part, Trump has used unprecedented means to bend the Fed to his will since returning to office.

Those tactics include trying to fire Fed Governor Lisa Cook and threatening to fire Powell – who just announced he will stay on as a governor on the Fed’s board after his chairmanship ends. Those kinds of pressure tactics – which effectively seek to restaff the Fed’s leadership with more members favoring interest rate cuts – are more often seen in countries like Turkey or Argentina.

So why do we believe that Warsh won’t be the “human sock puppet” some fear?

In our view, it’s his background in finance that leads us to think he’ll be able to resist political pressure once on the job. After all, when Powell was appointed by Trump during his first term, he had also worked in that sector – and he has demonstrated independence from both Trump and Biden.

This is not just a theory. Political scientist Chris Adolph has identified a pattern in which Wall Street is the “shadow principal” of the central bankers who shuffle in and out of the financial sector. Similarly, economist Adam Posen has described finance as the interest group with the most prominent lobbying role over monetary policy.

In practical terms, this means that Warsh has long been steeped in ideas about inflation that have traditionally held sway over the financial sector, and he may well be more open about these preferences once confirmed. Moreover, he’s likely to return to finance once his term at the Fed ends. Together, we believe these factors may give Warsh the intrinsic motivation and enough incentives to resist overt political pressure from the president.

Of course, being too beholden to Wall Street is also a risk, as pointed out by Warren and others. The Fed is meant to support Wall Street in times of crisis – and even more so since the 2010 Dodd-Frank reform. However, the Dodd-Frank Act also asked the Fed to monitor risks to the entire financial system by supervising and regulating financial institutions. That mandate requires the Fed to prevent crises, not just bail out Wall Street when a crisis hits.

As it happens, the Fed today is quietly but surely moving to water down the rules put in place after 2008 – a deregulatory shift that Warsh strongly supports.

Fed independence from government, as a matter of law and of norms, is deeply important for the health of the U.S. economy. And Warsh’s rhetorical shifts on monetary policy raise serious questions about its fate under his chairmanship. Senators have been right to push him as a nominee on this matter. However, the Fed also faces pressure from the finance industry, often pulling policy in the opposite direction. As such, we believe that Warsh’s professional history in finance may bolster his autonomy from Trump on rates once he’s confirmed.

This article was updated to add the date of Warsh’s Senate confirmation vote.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

  •  

How America’s independence from England revolutionized US philanthropy

John Hancock, like many American men and women of his generation, transformed the new nation's charitable activities. Universal History Archive/UIG via Getty Images

John Hancock did something revolutionary 250 years ago when the Massachusetts merchant signed the Declaration of Independence, announcing to the world that 13 English colonies were freeing themselves from Great Britain and from monarchy.

About a decade later, he signed up as a member of a charity aiding drowning strangers.

That endeavor was revolutionary, too.

As I explain in my 2016 book, “From Empire to Humanity,” the American Revolution transformed how Americans, and also Britons, engaged in giving. Many Americans turned to philanthropy after gaining independence to pursue their ideals of life, liberty and happiness for the new nation.

And while curating the Smithsonian’s National Museum of American History’s “Giving in America” exhibition, for which I collect objects telling stories about Americans’ volunteering, donating and working to aid others, I’m often reminded that Americans still pursue these ideals through their everyday philanthropy.

Charity in North American colonies

Hancock, who was born in Braintree, Massachusetts, on Jan. 23, 1737, grew up in a world where men like his uncle Thomas Hancock dominated charitable activity. Thomas Hancock had made a fortune in business ventures, including the slave trade and military contracting. When he died, he left an array of charitable bequests, including one used for Communion silver for his church.

An engraved silver plate is displayed.
This Thomas Hancock silver communion plate was made around 1764 in Boston. Bequest of Arthur Michael/National Museum of American History

By having Thomas Hancock’s name engraved on the silver plates, the church leaders highlighted what colonial Americans knew: Leadership in philanthropy, as in society at large, was in the hands of elite white men.

That uncle raised John after his father’s death, educating him so he would be prepared for business and civic leadership.

When colonists fell on hard times, they might be eligible for an early form of governmental benefits, known as “poor relief.” They could also turn to their churches, to one another or to a small number of ethnic aid societies, such as Boston’s Scots Society, for support.

In the mid-1700s, Americans founded a number of new welfare and educational institutions, including colleges and charity schools. Benjamin Franklin, a leading philanthropic innovator, helped establish the Pennsylvania Hospital with mixed public and private funding. That funding model would later become common for charitable institutions.

The Revolutionary War interrupted these developments. After independence was won in 1783, the number of charitable organizations and institutions would soon soar.

Humane societies to protect people

U.S. charitable institutions began to rapidly change in the 1780s, as Americans sought to reform society by establishing organizations to support people in need.

An old medal is shown.
This Humane Society of the Commonwealth of Massachusetts Medal was made in 1852. National Numismatic Collection/National Museum of American History

One of those groups was the charity dedicated to rescuing drowning victims and aiding shipwrecked sailors that John Hancock joined, along with Paul Revere. It was known as the Humane Society of the Commonwealth of Massachusetts and, like other similar groups, offered rewards or honors to motivate people to undertake the risky work of saving people from watery graves.

Americans in several cities, along with their peers in the British Isles, the Caribbean and Europe, worked together by publicizing resuscitation techniques, sharing information on effective methods and offering each other moral support.

“Humane” was a popular word in the names of charities dedicated to an array of causes in this era, long before it became associated with the protection of animal welfare.

Philanthropy’s meaning at the time

Throughout the 1700s and much of the 1800s, the word “philanthropy” referred to a sentiment – the love of humanity. That reflected the word’s origins: It’s derived from the Greek words for “love” (“philos”) and “man” (“anthropos”).

For Americans of the founding generation, philanthropy meant, above all else, aiding strangers – people outside their local, religious or ethnic community. Spurred by African Americans’ advocacy, some prominent white Americans, such as Alexander Hamilton, joined antislavery societies, while Northern states gradually began passing antislavery laws.

Making maritime travel safer for people of all backgrounds and nationalities was another way to uphold this value of universal benevolence. Humane societies’ rescuers and rescued people alike included African Americans and foreign mariners, including some from Asia and the Spanish empire. African Americans received awards from anti-drowning groups using the same criteria applied to white people.

In 1794, one of the highest honors went to Dolphin Garler, a Black man in Plymouth, Massachusetts, who had risked his life to rescue a young boy from drowning. Many Americans at this time saw benevolence as a criterion for citizenship. By lauding Garler, the leaders of the Massachusetts Humane Society were challenging other white Americans to recognize Black Americans’ humanity.

Like humane societies, other charities innovated by giving aid across ethnic or denominational lines as Americans built bonds in the new nation. Among them was New York Hospital, which had “charity to all” as its motto and had a diverse patient population. Many were British, Irish and German, with small numbers of people, probably mariners, from places like Portugal and South Asia. The hospital also treated African Americans in segregated wards.

Another new charity embracing this new more universal approach was the Society for the Relief of Poor Widows with Small Children, established in New York City in 1797. It supported poor widows with small children and helped the widows find jobs. While the organization excluded African American women, it innovated by aiding white women without regard to their ethnic or religious background.

New leaders with new causes

The Widows Society, as it was known, was notable for another reason. It was one of the first charities founded and led by women in the new United States.

Before the late 1780s, women made charitable donations to institutions run by men and gave personal alms, but women didn’t lead organizations.

Engraving of a woman writing in a book, wearing a bonnet.
Isabella Graham was a 19th-century diarist and charitable pioneer. Wikimedia

In New York, Scottish immigrant Isabella Graham and other women challenged traditional roles by founding the Widows Society in 1797. That they came together from various Protestant backgrounds was notable at the time.

Within a few years, Eliza Hamilton, Alexander Hamilton’s wife, would join and help lead the Orphan Asylum Society of the City of New York, which grew out of the Widows Society.

Engraving of a well-dressed man.
Richard Allen, an African American bishop, established the first church for Black people in Philadelphia in the late 1700s. Hulton Archive/Getty Images

And yes, that’s the orphanage Eliza Hamilton sings about in “Hamilton,” Lin-Manuel Miranda’s award-winning musical.

Black Americans likewise broke ground by creating charities and independent churches in the founding era. Black men like Richard Allen and Absalom Jones, for example, created the Free African Society, a mutual aid organization, in 1787 in Philadelphia.

In addition to supporting members of the Black community at times of need, the Free African Society led to the creation of independent Black churches as African Americans struggled for inclusion.

Revolutionizing charity management

Founding charities was one thing. Running them was another.

Americans applied managerial skills acquired from operating businesses, churches and households to caring for people in distress. They also became pros at the business of fundraising: cultivating donors, hosting fundraising events and publishing annual reports, including names of donors.

In short, Americans developed the critical skills to make philanthropy work.

Philadelphia doctor and signer of the Declaration of Independence Benjamin Rush was one of the most skilled philanthropic communicators. As he undertook one humanitarian endeavor after another, Rush collaborated with philanthropic leaders like Isabella Graham and Richard Allen.

Like others of his generation, Rush devoted himself to reforming the country and world. Medical philanthropy, education, antislavery, prison reform – he was engaged in all of them.

He routinely placed excerpts of his correspondence with other humanitarian leaders in newspapers. Such publicity, he knew, helped build momentum for humanitarian causes.

Many others shared his belief in the power of philanthropy to help make the world anew.

The Humane Society of the Commonwealth of Massachusetts’ “provision made for Ship-wrecked Marriners is also highly estimable in the view of every philanthropic mind,” George Washington said in 1788. “These works of charity & goodwill towards men … presage an æra of still farther improvements.”

This goodwill could go global. Cooperating across the Atlantic in this cause and others helped Americans and Britons reaffirm and reimagine their bonds.

Bedrock of the American experiment

It was only when rich Americans like steel magnate Andrew Carnegie and oil baron John D. Rockefeller began to make massive donations and set up their own foundations in the late 1800s and early 1900s that the word philanthropy would come to be associated with giving on a massive scale.

As Americans celebrate the 250th anniversary of the Declaration of Independence, I believe it’s worth remembering that the founding generation embraced civic engagement, organizational innovation and generosity as essential pillars in the pursuit of life, liberty and happiness.

For that generation, philanthropy – love of humanity – was the bedrock of the American experiment in republican government.

The Conversation

Amanda Moniz has received funding from the William L. Clements Library in Ann Arbor, Michigan, for research on Isabella Graham.

  •  