
Facial recognition data is a key to your identity – if stolen, you can’t just change the locks

When you’re out and about, your face isn’t just visible – it’s captured. John Keeble/Getty Images

A woman strolls into a grocery store, thinking about grabbing some apples. Before she even reaches the produce aisle, a security camera has scanned her face. Whether the system is checking for shoplifters or simply logging her arrival, her face has joined a digital ledger, a trace she can’t easily erase. Retailers, banks, airports, stadiums and office buildings are doing the same.

But what if the woman’s facial information is stolen or misused? If a cybercriminal steals her password, she can change it. If they acquire her credit card number, she can cancel the card. But she can’t reset or revoke the appearance of her cheekbones.

Facial recognition systems don’t keep actual images. They convert a face into a mathematical template that maps the positions and proportions of the face’s features. When another camera scans a person later, the system checks their live face against these templates to confirm an identity.
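
To make that concrete: a template is typically just a fixed-length list of numbers, and matching amounts to measuring how close two lists are. Here is a minimal Python sketch of the idea – the 128-number template, the noise model and the threshold are all illustrative, not any vendor’s actual algorithm:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """How closely two templates point in the same direction (1.0 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
enrolled = rng.random(128)                       # template stored at enrollment
live_scan = enrolled + rng.normal(0, 0.02, 128)  # the same face, captured again with noise

THRESHOLD = 0.9  # illustrative; real systems tune this against false-match rates
print("match" if cosine_similarity(enrolled, live_scan) >= THRESHOLD else "no match")
```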

In my work as a cybersecurity professor at Rochester Institute of Technology, I have found that even though templates are more secure than photos – which anyone online can capture and manipulate – templates, too, can be stolen. Once that happens, these digital keys create a lifelong vulnerability. If a facial recognition database is breached, the “locks” that a template opens – accessing a bank app, getting through security at an airport, entering an office building – can’t be reset. A person’s face is permanent, and so is the threat.

The threat isn’t theoretical. Biometric data has been stolen in data breaches. In 2024, biometric data from a facial recognition system used at bars and clubs in Australia was hacked. And in 2019, biometric data from a pilot facial recognition system set up by U.S. Customs and Border Protection was breached in an attack on a subcontractor’s network. It’s not clear whether anyone’s stolen biometric data has been exploited, however.

Catching a ballgame? Security cameras might be catching and digitizing your face. AP Photo/Matt Slocum

Tracking your face

All biometric identifiers carry risks. Fingerprints and iris scans, however, are typically used in controlled situations, such as unlocking a person’s phone or allowing someone to enter a building. In these cases, a person has to deliberately look at a scanner. Cameras in public spaces, in contrast, can capture faces as people walk by, from a distance and without the people whose faces are scanned realizing it.

If a fingerprint or iris database is breached, a thief still needs to physically present that finger or eye, or a fake of it, to a scanner. However, someone could match a stolen facial template against images from surveillance cameras or photos circulating online, making it easier to identify a person of interest or track someone’s movements and activities.

There’s also a big difference, technically and ethically, between keeping a face on a phone versus handing it over to a database. On modern Apple devices and many Android systems, biometric data used to unlock the devices is stored locally in a dedicated hardware chip and is not shared with the manufacturer or cloud services for authentication. As a result, a breach of corporate or cloud systems would not expose these device-level biometric templates.

Some street and security cameras in public are passive, just watching as people pass by and keeping no long-term records. But others may track people’s movements, linking faces to databases and creating a persistent digital trail. The risk rises when organizations use systems to track particular people across multiple databases. Airport systems could compare a traveler’s face against passport or airline databases. Stadiums may compare faces against local security watch lists or law enforcement lists. The company that manages Madison Square Garden has used facial recognition to bar entry to lawyers at firms that represented people who sued the company.

Some large retail chains, such as Wegmans and Target, also use facial recognition systems in their theft prevention efforts. Every new capture adds another permanent record.

Demonstrators hold images of Amazon CEO Jeff Bezos in front of their faces during a protest over the company’s facial recognition system. AP Photo/Elaine Thompson

Many companies do not have expertise in cybersecurity and rely on third-party vendors to manage their data. If those centralized systems are breached – or the datasets are linked across platforms, vendors or data brokers – your face can become a sort of persistent identifier, which can be used to expose or track you. In some cases, when combined with other compromised data, your captured face can lower the barrier to impersonating you.

When a person’s face meets their data

A face can function like a “primary key” – a unique and stable identifier that connects records. If one database links a facial template to an email address, and a data breach connects that email to financial or personal records, an identity thief with a stolen template could access all that information.
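
To see why a stable key is so dangerous, consider this sketch with invented records. Once two breached datasets share an identifier, joining them takes only a few lines:

```python
# Invented data: a breached face-template database and an unrelated breach,
# joined on the email address they share.
stolen_templates = {"template_7f3a": {"email": "jane@example.com"}}
other_breach = {"jane@example.com": {"bank": "First National", "address": "12 Oak St."}}

for template_id, record in stolen_templates.items():
    extra = other_breach.get(record["email"])
    if extra:
        print(f"{template_id} -> {record['email']} -> {extra}")
```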

And combining a template with AI tools such as deepfakes or three-dimensional face models could, in some cases, allow a criminal to impersonate an individual in systems that require proof of a live face, slipping into a forged digital identity like slipping into a costume.

When criminals combine biometric templates with other leaked data, such as logins for social media profiles or home addresses, they can build “super-profiles” connected to many of a person’s activities. Because the face acts as a permanent linking key, this level of identity theft is difficult to reverse.

How to minimize the threat

People are still figuring out how to live with widespread biometric collection. The convenience of smoothly passing security checks or making purchases is appealing, but it often comes with a permanent risk to privacy and security.

To lessen the threat, organizations can follow several data privacy best practices. They can keep only the information that is necessary and erase the rest quickly. They can store only encrypted mathematical templates rather than raw photos. They can use safeguards such as the latest liveness detection techniques to help ensure that their systems are interacting with real people rather than photographs, masks or deepfakes. And they can adopt a privacy-by-design approach, which means keeping data only as long as necessary, clearly documenting how it’s used and restricting who has access.
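
Encrypting templates at rest is well within reach of standard tools. Here is a minimal sketch using Python’s widely used cryptography library; the 128-number template and the key handling are illustrative assumptions:

```python
import numpy as np
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # in practice, held in a key vault or hardware security module
cipher = Fernet(key)

template = np.random.rand(128).astype(np.float32)  # stand-in for a real face template
ciphertext = cipher.encrypt(template.tobytes())    # only this ciphertext touches storage

# The matching service decrypts one template at a time, never in bulk exports.
restored = np.frombuffer(cipher.decrypt(ciphertext), dtype=np.float32)
assert np.array_equal(template, restored)
```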

Consumers can take steps as well. In places with privacy laws, such as California, Illinois and the European Union, people can submit a data access request to see what biometric data a company holds and, in some cases, ask for its deletion. They can also ask retailers anywhere what data is collected, how long it is kept and how it’s protected.

The Conversation

Jonathan S. Weissman does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

How the concept of ‘medical freedom’ is reshaping the military’s decades-long stance on the flu vaccine mandate − and endangering troops’ readiness

Vaccine mandates in the U.S. military are nearly as old as the country itself. jacoblund/iStock via Getty Images Plus

For the first time in almost 80 years, U.S. service members will no longer be mandated to receive the annual influenza vaccine.

Defense Secretary Pete Hegseth announced the change on April 22, 2026. Citing medical autonomy and religious freedom, he described the requirement as “overly broad and not rational,” telling troops that “your body, your faith and your convictions are not negotiable.”

The flu shot requirement that Hegseth ended had been in place since 1945, with one brief pause in 1949. It was part of a tradition of military vaccine mandates nearly as old as the United States itself.

As an epidemiologist who studies vaccine-preventable diseases, I find the end of the flu mandate striking less for its immediate impact than for what it signals. For most of American history, military commanders took for granted that infectious disease could cost them a war, which is why vaccination was considered a matter of military readiness rather than personal choice.

A tradition that started with George Washington

The first American military vaccine mandate predates the Constitution. In the winter of 1777, Gen. George Washington ordered the mass inoculation of the Continental Army against smallpox.

His decision wasn’t ideological – it was strategic. The year before, a smallpox outbreak had torn through American troops outside Quebec, contributing to the collapse of the northern campaign. John Adams famously wrote to his wife, Abigail, that smallpox was killing 10 soldiers for every one felled in battle.

Inoculation in 1777 was itself risky. The procedure, called variolation, involved deliberately infecting a soldier with a small amount of smallpox virus to build immunity. Washington gambled that losing some to inoculation was better than losing a war to the virus. Historians have credited the decision with saving the Continental Army.


That pattern held for centuries: When an infectious disease threatened to take more soldiers off the line than enemy fire did, the military required protection.

U.S. troops received smallpox vaccinations from the War of 1812 through World War II. During World War I, the Army added typhoid vaccination. During World War II, it expanded vaccine requirements to also include tetanus, cholera, diphtheria, plague, yellow fever and, in 1945, influenza.

1945: New war, new vaccine

The flu vaccine mandate grew out of military experiences during the influenza pandemic of 1918. That spring, a novel influenza strain spread through crowded Army training camps and traveled to Europe with American troops. About 45,000 American soldiers died of influenza during World War I – nearly as many as the roughly 53,000 killed in combat.

The 1918 pandemic made clear that a respiratory virus could cripple an army. In 1941, as the country prepared to enter another world war, the U.S. Army organized an influenza commission that partnered with the University of Michigan to develop the first influenza vaccine. Clinical trials in military recruits showed that the vaccine reduced the incidence of influenza illness by 85%, and in 1945 the military mandated the vaccine. Roughly 7 million service members were vaccinated that year.

The mandate was briefly paused in 1949 after scientists realized the vaccine needed regular updates because the influenza virus continually changes. Once formulations could be adjusted seasonally, the mandate returned in the early 1950s and stayed in place continuously – until Hegseth’s change of policy.

The influenza pandemic of 1918 killed nearly as many American troops as were killed in battle during World War I. Otis Historical Archives, National Museum of Health and Medicine

COVID-19 changed vaccine politics

For decades, vaccine mandates were an unremarkable fact of military life, but COVID-19 changed that.

In August 2021, all service members were ordered to be vaccinated against COVID-19. More than 98% of active duty troops complied, but the mandate became a flash point. More than 8,000 service members were involuntarily discharged for refusing the shot.

In 2023, Congress passed a law requiring the Pentagon to rescind the military COVID-19 vaccine mandate. This reversal reframed the politics of military vaccine requirements. In January 2025, President Donald Trump ordered the reinstatement, with back pay, of troops discharged over COVID-19 vaccine refusal.

In announcing the end of the flu mandate, Hegseth relied heavily on “medical freedom” language that emerged from the COVID-19 vaccine debate, rather than on any new evidence about influenza or the effectiveness of the flu vaccine.

The medical freedom movement opposes government involvement in what its supporters see as personal health decisions – including public health recommendations such as vaccine mandates, masking and social distancing.

Does the vaccination rationale still hold?

Critics of the military flu vaccine mandate argued that flu is a milder threat than it was in 1918, that service members are healthier than the general population and that personal choice should outweigh public health logic for a seasonal virus.

The epidemiology tells a different story.

Although flu seasons can vary in disease severity, the virus mutates so unpredictably that pandemic flu seasons – like those in 1918, 1957, 1968 and 2009 – are a recurring possibility. Flu still hospitalizes and kills tens of thousands of Americans annually. The Centers for Disease Control and Prevention estimates the influenza vaccine prevented roughly 180,000 hospitalizations and 12,000 deaths during the 2024-2025 season.

The military operates in precisely the conditions that favor the spread of respiratory viruses: recruit training centers, barracks, ships and submarines where people live in close quarters.

The logic that drove Washington in 1777 and the Army surgeon general in 1945 to require vaccination hasn’t really changed. A sick soldier can’t deploy, can’t train and can spread illness through an entire unit.

What has changed is the political weight assigned to individual refusal – and that, more than the biology of the flu or the effectiveness of the vaccine, is what the end of this mandate reflects.

The Conversation

Katrine L. Wallace does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Wearable glucose monitors offer real-time data, but for healthy people no guidelines exist to interpret the numbers

Continuous glucose monitors once required a prescription but can now be purchased over the counter. Jesus Rodriguez/iStock via Getty Images Plus

Keeping tabs on blood sugar throughout the day used to be the exclusive domain of people with diabetes. But in 2026, anyone can buy a user-friendly wearable device that provides minute-by-minute readouts on how their glucose levels respond to food and movement.

These glucose numbers are increasingly being tracked by people who are healthy but want to lose weight or optimize their wellness.

I am a behavioral scientist who has spent the past decade studying how real-time data captured through wearable sensors and mobile technologies can help promote a healthier lifestyle. I’ve found that for people who don’t have diabetes, using such a device for a few weeks can bring insight into how their body reacts to their eating patterns and daily habits.

But researchers still don’t know how these fluctuations affect health for people who don’t have diabetes. In the absence of meaningful metrics for interpreting these numbers, monitoring a constant stream of data doesn’t directly help people make health-related decisions and can lead to confusion and needless anxiety.

What are glucose levels – and why track them?

Glucose is a type of sugar that circulates in the bloodstream after being absorbed from food. It is the body’s primary source of energy.

For people without diabetes, glucose levels generally stay in the range of 70-120 milligrams per deciliter (mg/dL) of blood throughout the day. After eating or drinking, levels could exceed 140 mg/dL but should come down to the normal range within a couple of hours. That’s because the pancreas responds to a glucose spike by releasing a hormone called insulin, which brings the glucose number down.
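
Those thresholds translate into a simple rule of thumb, sketched below in Python. This is illustrative only – a way to see the numbers at work, not medical advice or a clinical algorithm:

```python
def flag_reading(glucose_mg_dl: float, hours_since_meal: float) -> str:
    """Classify one reading using the general ranges described above."""
    if glucose_mg_dl < 70:
        return "below the typical range"
    if glucose_mg_dl <= 120:
        return "within the typical range"
    if hours_since_meal < 2:
        return "post-meal rise, expected to settle"
    return "still elevated past the expected window"

print(flag_reading(155, 0.5))  # post-meal rise, expected to settle
print(flag_reading(155, 3.0))  # still elevated past the expected window
```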

A healthy range for glucose levels is between 70 and 120 milligrams per deciliter. For people with diabetes, glucose levels generally run high. piyaset/iStock via Getty Images Plus

Muscles burn glucose for fuel, so physical activity also helps normalize glucose levels.

Glucose levels generally run high with diabetes. People with Type 1 diabetes, whose bodies don’t make enough insulin, rely on glucose numbers to tell them when to take a dose of insulin. People with Type 2 diabetes use the numbers to monitor the effect of their medications and lifestyle changes and to get a fuller picture of their glucose control.

From test strips to AI-enabled sensors

Devices that track glucose numbers have been around since the early 1970s. Early versions consisted of test strips that detected glucose levels in urine. Finger prick tests using glucometers, which were developed in the 1980s and are still used by some people today, measure glucose more directly by applying a tiny drop of blood to a test strip.

To make the technology more convenient, companies in the early 2000s developed continuous monitoring devices that consist of tiny sensors inserted just under the skin that detect glucose in fluid that surrounds cells. Initially, these devices could give readings every 15 minutes for several days at a time, but recent versions sample more frequently.

Today, the technology has evolved even further. The most advanced glucose monitors under development come in the form of watches or rings with noninvasive sensors that use light-based techniques to detect glucose in body fluids. Many also rely on machine learning to provide more accurate readings by detecting each person’s unique physiological patterns over time.

For decades, continuous glucose monitors were available only with a doctor’s prescription. But in March 2024, the Food and Drug Administration approved the first over-the-counter continuous glucose monitor in the U.S., making them widely accessible.

Glucose monitoring for diabetes

There’s no doubt that continuous glucose monitors are a game-changer. People living with diabetes rely on these devices to track what percentage of the day their blood glucose stays within healthy limits – a measure called “time in range.” Patients make decisions about managing their condition – for example, when to take insulin – based on guidelines that researchers and physicians have developed around that measure.
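
Computing time in range from a day of readings is simple arithmetic. A sketch, using the 70 to 180 mg/dL band commonly used for this metric in diabetes care:

```python
def time_in_range(readings_mg_dl, low=70, high=180):
    """Percentage of readings that fall inside [low, high]."""
    in_range = sum(low <= g <= high for g in readings_mg_dl)
    return 100 * in_range / len(readings_mg_dl)

# A real day of 5-minute samples holds 288 readings; this short list is a stand-in.
day = [95, 110, 160, 185, 172, 140, 101, 88, 200, 130]
print(f"Time in range: {time_in_range(day):.0f}%")  # 80%
```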

In people living with diabetes, cells don’t absorb glucose properly from the bloodstream. VectorMine/iStock via Getty Images Plus

According to a 2026 report from the Centers for Disease Control and Prevention, almost 11 million adults who have diabetes – more than 1 in 4 adults with the condition – are undiagnosed. Type 2 diabetes can develop slowly and silently, often with no noticeable symptoms for years except glucose levels that remain elevated for a majority of the day, including when people are sleeping. Tracking glucose levels might offer clues that glucose is elevated.

Tracking glucose levels may also benefit the 115.2 million Americans – 43.5% of all U.S. adults – who have a condition called prediabetes. Prediabetes is when a person’s metabolic system shows early warning signs of diabetes but they don’t have the full-blown disease.

Prediabetes generally has no noticeable symptoms, but it is reversible – meaning, it’s possible to shift your glucose levels back into a healthy range. Tracking your glucose number can reveal how diet and exercise affect it. Observing how a soda spikes your glucose levels, for example, might give you pause before you drink one next time.

Daily glucose rhythms

Increasingly, though, people who use continuous glucose monitors aren’t diabetic – or even prediabetic. Instead, they want to understand how their bodies react to activities in their daily lives.

Diet, exercise and other lifestyle behaviors have long-term effects on health. Weight loss, for example, happens slowly. Changes in blood glucose, on the other hand, are more immediate. Tracking glucose levels thus offers real-time feedback on how your body responds to the food you just ate or the workout you just finished.

In studies I’ve conducted with colleagues, many people have found this information powerful. They were surprised to learn that eating certain foods – sugary soda, or even something healthy like a banana – causes their glucose levels to spike.

Seeing your glucose levels changing in real time can spur insights, but if you don’t have diabetes there are no guidelines for how to respond to those fluctuations.

One study participant told us that seeing their real-time glucose numbers led them to make more intentional dietary choices, such as cutting back on snacking. “I’m more aware and I’m making the changes,” they explained. Another participant also noted behavior changes prompted by continuous glucose monitoring, such as trying to avoid eating so late in the evening and consuming only half a fast-food meal.

That initial wow factor – and its capacity to motivate people to make healthy lifestyle changes – may be valuable. But it’s not clear how long these changes last, or how exactly people should respond to fluctuations in their glucose number to decrease their diabetes risk or to address other health issues.

Unlike the time in range guidelines for diabetes, there is no clear framework for what daily glucose patterns are abnormal in people who don’t have diabetes – or what patterns may indicate future disease risks.

Mapping the numbers

Researchers like me and my team are exploring exactly these questions.

Building a dynamic picture of how glucose levels fluctuate throughout the day in people without diabetes may point to early indicators for various chronic diseases. For example, my colleague and I recently developed a mathematical model to examine how monitoring glucose levels during sleep might help predict the risk of metabolic diseases – such as Type 2 diabetes, heart disease or fatty liver disease – in people with and without diabetes.

Additionally, continuous glucose data may reveal how people’s bodies might react differently to the same food, workout or other activity. Understanding how each person’s biology responds to the choices they make throughout the day could eventually lead to a more personalized approach to lifestyle changes that can help people maintain their health.

The Conversation

Liao Yue receives funding from the American Institute for Cancer Research, the American Heart Association, the Cancer Prevention & Research Institute of Texas and the Texas Higher Education Coordinating Board.

More than 140,000 Americans die from COPD each year – here’s why survival depends on more than avoiding smoking

COPD puts people at risk for many other adverse health conditions. AndreyPopov/iStock via Getty Images Plus
The Conversation, CC BY-ND

Chronic obstructive pulmonary disease, or COPD, caused 141,733 deaths in the United States in 2023 – the latest data that has been reported. That number reflects not just the effects of smoking, but a broader set of medical and social factors that shape who survives.

As of early 2026, COPD remains the fifth-leading cause of death nationwide and carries a substantial economic burden, with annual medical costs estimated at US$24 billion among adults ages 45 and older. COPD is a progressive condition that limits airflow, making it increasingly difficult to breathe and carry out everyday activities.

Nearly 16 million U.S. adults live with COPD, and many more remain undiagnosed.

COPD encompasses chronic bronchitis, which inflames the airways, and emphysema, a condition that damages the air sacs in the lungs. Both conditions limit the flow of air in and out of the lungs.

I am a physician and doctoral researcher in public health who studies chronic disease outcomes using nationally representative U.S. data. In my research examining long-term mortality among adults living with COPD, one pattern stands out clearly: My colleagues and I found that both current and former smokers had a higher risk of death compared with those who never smoked, highlighting that smoking increases mortality risk – but it does not act alone.

How smoking and COPD are intertwined

Smoking has been recognized for over five decades as the primary cause of COPD. It is a major factor in how the disease develops and progresses, although other factors such as secondhand smoke, air pollution and occupational exposures also play a role. Even after accounting for age and other health conditions, people with COPD who have smoked face a higher risk of death than those who have never smoked.

Quitting smoking, while essential, does not fully erase the damage caused by smoking. This is because long-term exposure to tobacco smoke leads to persistent inflammation and structural damage in the lungs, changes that are not fully reversible. They continue to affect airflow and respiratory function even after a person stops smoking, although quitting significantly slows further decline.

COPD is a long-term condition that continues to affect the lungs and the pulmonary blood vessels over time, contributing to both breathing problems and other chronic conditions.

In some cases, higher risks among former smokers with COPD may reflect the lasting effects of smoking or underlying illness that led them to quit.

Emphysema is a form of COPD that limits the flow of air in and out of the lungs. ILUSMedical/Science Photo Library via Getty Images

COPD affects more than the lungs

COPD is often described as a lung disease, but its effects extend far beyond breathing.

People living with COPD also face a higher risk of other health problems, including lung infections such as flu or pneumonia, lung cancer, heart disease, weak muscles and depression or anxiety, all of which can increase the risk of death.

One of the most noticeable ways COPD affects daily life is through persistent breathlessness, which can make even simple tasks such as walking, cooking or getting dressed more difficult. As activity declines, overall health can worsen, creating a cycle that is hard to break.

COPD is also frequently diagnosed late and progresses gradually, limiting opportunities for early treatment.

Social connections can shape survival

A growing body of research shows that social factors play a meaningful role in health outcomes with chronic diseases including COPD. Social isolation has been linked to a higher risk of premature death, with effects comparable to well-known risk factors such as smoking and obesity. This is a major problem because nearly 1 in 6 adults with COPD experience social isolation, and 1 in 5 experience loneliness.

Among people living with COPD who were single or never married, the increase in overall risk of death associated with smoking was substantially higher. In this socially isolated group, current smokers faced roughly a 50% higher risk of death and former smokers faced nearly four times the risk compared with those who never smoked, highlighting how social context can shape survival rates.

Other research has similarly found that social isolation is associated with a higher risk of death among people with COPD, reinforcing the importance of social support. Managing a demanding chronic illness alone can be difficult; without support to monitor symptoms or assist with care, the burden of disease may be grave.

One reason is that social connections influence how people manage chronic illnesses. People who are socially isolated are more likely to engage in unhealthy behaviors such as smoking, poor diet and physical inactivity, and may be less likely to follow treatment plans.

Support from family members, caregivers or community networks can improve people’s likelihood of following treatments, reduce their stress and make it easier to quit smoking. For people living with COPD, a condition that requires daily management, these differences can significantly affect their quality of life and how long they live.

What can help reduce COPD deaths?

Reducing deaths from COPD begins with prevention and early intervention. Avoiding or quitting smoking remains the most effective way to lower risk. Reducing exposure to tobacco smoke, air pollution and occupational hazards such as dust from mining and chemical fumes can also help prevent long-term lung damage.

For people already living with COPD, consistent access to care can improve outcomes. Treatments such as inhalers that help open the airways, pulmonary rehabilitation and oxygen therapy, along with vaccinations against respiratory infections, can help manage symptoms and reduce complications.

Improving survival in COPD depends on more than treatment alone – it also requires addressing social factors such as isolation, access to support and living conditions.

One practical step is making screening for social isolation part of routine care.

The Conversation

Olamide Asifat does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Reading gains in Alabama, Mississippi and Louisiana are often touted, but don’t show the full picture of literacy

A fourth grade teacher leads a small group of students in a reading exercise in March 2023 at Tuskegee Public School in Tuskegee, Ala. Julie Bennett/The Washington Post via Getty Images

Despite decades of legislation meant to boost children’s reading levels, literacy scores have remained relatively stagnant across the U.S. over the past 30 years.

Educators, policymakers and parents were genuinely excited in the late 2010s, when three Southern states – Alabama, Mississippi and Louisiana – appeared to buck the literacy trend. All three of these states, which have long lagged in literacy scores, made notable gains in fourth grade reading scores from 2013 to 2024, as measured by the National Assessment of Educational Progress, or NAEP.

We are researchers in literacy and learning. Two of us are at the University of Alabama and Mercer University, where we educate elementary teachers. The other two work at Temple University, where we research early language and the science of learning. We all study how children develop as readers and how teaching styles and policies shape that development.

Some observers and scholars have called Alabama, Mississippi and Louisiana’s reading gains the “Southern surge” and say this progress shows that recent literacy reforms are working.

A straightforward explanation has taken hold: As more schools spent additional time on phonics and implemented other “science of reading” reforms, students became stronger readers.

This narrative accurately captures some of the available evidence. But it also simplifies a complex set of patterns in literacy data, and it limits the discussion that policymakers should have.

A fourth grade student raises her hand during a reading and language arts class in Columbia, Miss., in August 2020. Edmund D. Fountain/The Washington Post via Getty Images

Reading scores under pressure

Since the early 2000s, new federal and state policies have placed pressure on schools to improve students’ reading outcomes. The 2001 No Child Left Behind Act required all states to track and report literacy testing results. This law, which the Obama administration replaced in 2015 with the Every Student Succeeds Act, mandated annual testing in reading and math for students in third through eighth grades.

Many schools narrowed their curriculum to try to boost their students’ reading scores. They cut time for science, social studies, art and recess to focus on reading and math. Students entering school in the early 2000s – the first classes fully exposed to No Child Left Behind’s requirements – spent more time on reading instruction than any previous generation.

But sustained reading gains still didn’t follow.

The NAEP is often called the nation’s report card. It is the only federally administered test that allows meaningful comparisons in reading levels across states.

The NAEP found that fourth grade reading scores nationwide increased modestly beginning in 2005. They peaked around 2017 and have declined since.

But there’s a complication in how those scores are interpreted. NAEP’s mid-level score, called “proficient,” does not mean a student is reading at grade level – it reflects a high standard that most students do not reach. In the case of fourth grade readers, it means they can recognize a text’s structure and organization, explain how characters influence others and make other complex observations. Students can also receive a lower “basic” score, or a higher “advanced” one.

Alabama’s example illustrates the gap that can emerge between NAEP test results and a state’s assessments.

The state’s 2025 assessments show that 81% to 88% of second and third graders were reading “on grade level.” But the 2024 NAEP shows only about 30% of Alabama fourth graders – the youngest grade the NAEP measures for literacy – were “proficient” at reading.

Both numbers can be accurate. They reflect different definitions and measurement systems.

Understanding reading gains in the South

Despite differences in measuring reading, a small number of states have shown clear improvement over the past decade, according to the NAEP.

Mississippi has shown the strongest gains. In 2013, it ranked 49th among the 50 states in fourth grade reading scores. In 2024, Mississippi climbed to ninth in fourth grade reading.

Mississippi’s progress predates recent national attention to the science of reading – meaning, the body of research on reading – suggesting its gains cannot be attributed solely to the current wave of related reforms.

In 2013, Mississippi passed the Literacy-Based Promotion Act, which combined early reading screening, teacher training, literacy coaching and additional support. Research shows that the policy could account for roughly five points of reading gains, on average. These gains reflect long-term, system-wide efforts rather than a rapid shift tied to a single policy change.

At the middle school level, however, the pattern in Mississippi looks different.

Improvements in fourth grade reading have not translated into similar gains in eighth grade reading. Early improvements in children’s ability to decode words do not necessarily lead to success with more complex texts that require additional vocabulary and background knowledge.

This gap does not negate Mississippi’s progress, but it does raise questions about what the next decade of work needs to look like.

Louisiana’s reading score trajectory is more modest. Recent NAEP scores for fourth grade students in Louisiana are similar to those from the mid-2010s – a rebound to a prior level.

While Louisiana ranked 50th in fourth grade reading in 2019, it rose to 38th in 2024.

A 32-point gap between Black and white students’ average fourth grade reading scores persists in 2024 data, nearly unchanged from the late 1990s. In this case, some reading progress happened. Yet the underlying inequities between students did not shift.

Alabama’s results illustrate a third pattern: relative stability in fourth grade reading scores during a period of national decline. The state ranked 35th in fourth grade NAEP reading in 2013 and remains in a similar position in 2024, showing little change. The state’s average NAEP score for fourth grade students shifted by a single point between 2019 and 2024 – not a surge, but a state holding its ground while others fell.

Meanwhile, chronic absenteeism has fallen in Alabama since 2019. Because research links attendance to academic achievement, that decline makes it difficult to attribute the state’s small shift in reading scores to any single factor.

Across all three states, substantial gaps between Black and white students’ reading scores persist in NAEP results.

The same pattern extends nationally to Hispanic students, poor students and other groups. This shows that fourth grade students’ reading gains have not been accompanied by comparable reductions in social, racial and ethnic inequities.

Students follow a reading lesson in a first grade class in Aurora, Colo., in October 2024. Hyoung Chang/The Denver Post via Getty Images

A more complicated story

Still, parts of the Southern surge in reading are genuinely encouraging. The surge is also the latest chapter in a long story.

Mississippi’s gains, for example, came alongside coaching, professional development and early intervention.

Louisiana’s reading recovery unfolded alongside a 34% increase in education funding over the past decade.

Test score changes reflect a combination of policy decisions, classroom practices and broader conditions, often unfolding over many years. Progress in reading is hard to achieve, hard to sustain and rarely traceable to any one policy shift.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

Your local storm forecast is likely based on weather miles away – we’re trying to bring it closer to home

Weather apps might see that a storm is coming, but mesonets capture what's happening as it arrives with local real-time data. Patrick Emerson/Flickr, CC BY-SA

Whether you’re planning a weekend hike, deciding what to wear to work or preparing your home for severe storms, the weather forecast is essential. You might instinctively grab your smartphone and check an app for an instant weather update.

But how many times have you looked at your app, only to step outside and see the sky painting a different picture than what’s on your screen?

As a meteorologist who operates a weather station network in Wisconsin, I’ve heard many of the same cliches time and again: “The weatherman is always wrong!” “Just wait five minutes and the weather will be different!”

Before you blame the local forecasters, let’s talk about where the data in your weather app comes from, and why it might not always show what you expect. It’s why my colleagues and I are working to bring forecast data closer to home.

The nuts and bolts of weather forecasting

Earth is huge. It has a diameter of 7,926 miles (12,756 kilometers) at the equator and has 62 miles (100 kilometers) of atmosphere overhead.

If you wanted a perfect weather forecast, you would have to precisely measure every molecule of the atmosphere, land and water, and perfectly predict how they would interact with each other for the next minute, day or week. This is, of course, physically impossible.

Instead, scientists run computer models. These models take the observations we do have and simulate the weather on a large scale to a remarkable degree of accuracy. In fact, storm track forecasts from the National Hurricane Center were among its best ever in 2025, and forecasts using machine learning are starting to improve those forecasts even further.

These models are hungry for data. Supercomputers ingest measurements from satellites, weather balloons, Doppler radar, lightning detection networks, buoys, surface weather stations and other measurement platforms to solve the equations that provide weather predictions.

When you open your phone, your weather app isn’t doing the meteorology – it’s just showing the output of the model’s calculations. Even though they generally aren’t tailored by a local meteorologist, these short-term forecasts are usually pretty good. But they could be better.

All weather is local

You’ve probably seen it before: It’s raining on one side of the street and not on the other. You flip on the news to see the nearest airport received an inch of rain, but your garden is dry.

There are more than 2,500 airports in the United States with weather stations, which is where much of the weather data shared on TV and online is collected. But for many people, the closest airport is more than 20 miles (32 kilometers) away. This is especially true in rural areas.

All of the areas in green are more than 20 miles from an airport weather station. In many cases, that means they’re 20 miles or farther from the weather observations feeding their local forecasts. Chris Vagasky

Because of the chaotic nature of weather, the only way to truly know what’s happening in your yard is to measure the weather in your yard. But not everyone is interested in installing a rain gauge or personal weather station.

Filling the gaps

To bridge this gap, many states and universities have established local weather station networks called mesonets – short for mesoscale networks, meaning intermediate scale. These weather stations are installed in locations to ensure everyone in the state is within 20 miles of the nearest station.
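
Checking whether a spot on the map meets that 20-mile goal is a great-circle distance calculation. Here is a sketch with hypothetical station coordinates – these are not real Wisconet sites:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    r = 3959.0  # Earth's mean radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

stations = {"Station A": (44.96, -89.63), "Station B": (43.07, -89.40)}
home = (43.04, -87.91)  # roughly Milwaukee

nearest, dist = min(
    ((name, haversine_miles(*home, *coords)) for name, coords in stations.items()),
    key=lambda pair: pair[1],
)
print(f"Nearest station: {nearest}, {dist:.0f} miles away")
```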

Nationally, there are nearly 3,000 mesonet stations installed in 38 states, with more networks planned.

Like the weather stations at airports, mesonets measure things like air temperature and relative humidity, air pressure, rainfall or melted snow, and wind speed and direction – often every five minutes.

Many mesonets collect additional data such as soil moisture levels to help farmers. Some even have camera images updated every five minutes to show current weather conditions. Mesonet data is then shared through websites or direct data transmission so that the public, weather forecasters and researchers can easily access it.

I lead the team at Wisconet, a new mesonet that just finished installing 78 weather stations across Wisconsin. Our stations are installed on 10-foot-tall (3-meter) tripods in open areas near orchards and cranberry marshes, farms and airfields, schools and other educational centers, and on city, state and federally owned lands.

Wisconet weather stations, like this one in Amery, Wisc., provide local weather data for areas where forecasts used to be based on what was happening many miles away. Caitlin Wienkes, Wisconet

These added weather stations are already proving useful. On Aug. 18, 2025, slow-moving thunderstorms moved over a Wisconet station, with more than 3 inches of rain falling in just a couple of hours. The National Weather Service was able to issue a flash flood warning for the area because of the data provided by that station.

In addition to providing a near-real-time snapshot of the local weather, mesonets help farmers decide when to run irrigation systems, spray pesticides or plant crops. They also help provide better weather warnings, particularly when tornadoes and other storms intensify over small areas that farther-away weather stations would miss.

A nationwide network of networks

Because of the immense value of high-frequency weather and soil measurements, the National Oceanic and Atmospheric Administration leads a National Mesonet Program. The program collects weather data from public, private and academic sources, validates the quality of the data, and ensures it flows to users, including the National Weather Service. National Weather Service forecasters use that data to make more timely and accurate severe weather warnings.

Congress is considering expanding that program, with legislation proposed in the House and the Senate. The bills aim to authorize $50 million to $70 million annually to the National Mesonet Program between 2026 and 2030 to improve and expand mesonets across the country. An expansion would mean more weather stations and new capabilities, like real-time snowfall, fire weather and air quality measurements, closer to the people who rely on them.

So the next time you check your smartphone and grumble because the app doesn’t match the weather in your backyard, remember that all weather is local. If you don’t have a nearby mesonet station, the nearest measurements may be many miles away.

The Conversation

This work is supported by the Institute for Rural Partnerships, project award no. 2023-70500-38915, from the U.S. Department of Agriculture's National Institute of Food and Agriculture. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the author and should not be construed to represent any official USDA or U.S. Government determination or policy. Wisconet receives monthly payments for their data from the National Mesonet Program.

Tapping your genome with AI and quantum computing could deliver on the promise of personalized medicine – but practical and ethical hurdles remain

While quantum computing has a long way to go, it can open tantalizing new doors for the field of genomics. herstockart/iStock via Getty Images Plus

Decades after researchers first sequenced the human genome, scientists throughout the world are still working to understand it. Despite diligent global efforts to link uncommon variations in DNA sequences with human disease, progress has been slow, in large part due to limitations in scientific understanding and in part due to limitations in computational technologies.

Artificial intelligence has the potential to help scientists decipher the millions of genetic variations present in the genomes of different people in order to identify which ones lead to disease and which ones do not. In order to fully exploit the power of AI, however, scientists need to compare the genomes of thousands or tens of thousands of people. This task not only requires intense computational effort but is also prone to error and will take years to complete.

Quantum computing has the potential to facilitate that process. We are researchers with a long-standing interest in finding ways to use genetics in the clinic and developing new technologies to study the human genome. Combining quantum computing with AI has the potential to accelerate genomic analysis far beyond traditional methods. For time-sensitive medical conditions, faster decoding of genetic information can directly inform urgent treatment decisions and, in some cases, be lifesaving.

Conventional vs. quantum computing

In conventional computing, individual units of information – binary digits, or bits – can represent only one of two states: 0 and 1.

However, a qubit can exist in a superposition – a blend of the 0 and 1 states at once – and adding qubits together increases the number of possible states exponentially. The power of quantum computers lies in being able to check all the possibilities at once for problems with large numbers of variables, rather than one at a time like even the fastest possible classical computer must do. This allows quantum computers to solve certain types of problems, such as factoring the large numbers behind today’s encryption schemes and performing combinatorial optimization to find the best route through a large number of points.
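
In the standard textbook notation – a general way of writing this, not specific to any one machine – the joint state of n qubits is a weighted combination of every possible n-bit pattern:

```latex
% State of an n-qubit register: one complex amplitude c_i per basis state
|\psi\rangle = \sum_{i=0}^{2^n - 1} c_i \, |i\rangle ,
\qquad \sum_{i=0}^{2^n - 1} |c_i|^2 = 1
```

A classical n-bit register holds exactly one of those 2^n patterns at a time, while describing the qubit register requires tracking all 2^n amplitudes at once – just 50 qubits already correspond to about a quadrillion numbers.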


Still, quantum computing is currently in its infancy. Despite the enormous potential of this technology, computer scientists are dealing with challenges related to its scalability, error correction, hardware development and the setting of standards.

There are also significant time and cost constraints associated with overcoming these challenges. Experts in the field estimate that it may be at least a decade before quantum computing will be truly useful outside the laboratory.

Bigger and better data analysis

If researchers are able to overcome these challenges, combining AI and quantum computing may not only enable scientists and clinicians to better understand the human genome but also to leverage that understanding to improve patient care.

Currently, researchers are able to use AI to analyze genomic data in combination with limited amounts of other biological information, such as gene activity, epigenomics, RNA signatures and protein function. Quantum computing could allow AI to process far larger and more detailed datasets.

This might look like integrating large-scale genetic, protein and spatial datasets with clinical, demographic and real-time physiological data. This systems-level approach enables a more comprehensive and accurate understanding of complex biological systems beyond DNA sequence alone that could be used to improve public health.

In other words, quantum computing could make it possible to sequence a patient’s genome and combine that information with other information about how their body works at the molecular level to improve the accuracy of diagnoses and determine the best course of treatment in hours instead of months.

Challenges in access and privacy

Like many burgeoning technologies, combining AI with quantum computing has inherent and inescapable challenges. In particular, there are several ethical issues related to healthcare access.

One will be cost. New technologies are typically expensive, and that expense will likely widen the gap between those who can afford the best healthcare and those who cannot. Anticipating these costs and finding creative solutions preemptively is necessary to allow everyone to benefit equally.

While there are likely many approaches to reducing out-of-pocket expenses for healthcare, federal legislation could mandate affordable or free genetic information-based care to those in greatest financial need. Similar to the 2008 Genetic Information Nondiscrimination Act, which prohibits discrimination based on genetics, a new law could prohibit healthcare providers from withholding genetic information-based care from those who cannot afford it.

Biological data inherently comes with a privacy risk. Tek Image/Science Photo Library

Another challenge will be availability. These technologies will likely first be available at only the top medical centers in the country, which traditionally have the research funding and the cadre of skilled scientists and clinicians needed to develop new diagnostic methods and treatments. Consequently, the latest advances in health technology will be unavailable to people who physically or financially cannot travel to receive the best medical care.

A combination of telemedicine, centralized laboratories and shared data could potentially help make new technologies more accessible.

There are also privacy concerns intrinsic to sharing personal health data. Truly anonymizing personal information remains a challenge, and privacy concerns are likely to prevent some people from taking advantage of potentially lifesaving technologies.

One approach that may quell these fears is a model called federated blockchain governance. This approach involves sharing control of a blockchain, which is a digital ledger used to track transactions, among a small group of institutions rather than a single entity or the general public. Limiting the number of trusted curators of genetic data reduces the risk of privacy violations or security breaches and subsequently increases the chance that patient data will remain private.

Improving public health

Despite these challenges, combining advances in quantum computing and AI has the potential to significantly drive innovation and improve public health.

When scientists and clinicians are able to accurately identify the genetic basis of disease and potential risk factors, they will not only be able to develop better treatments but also help patients and healthcare providers know what symptoms to look for among those predisposed to certain conditions.

Taken together, this knowledge can improve public health, reduce the cost of healthcare and improve quality of life.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.


Potential signs of life on distant planets sound exciting – but confirmation can take years

The Taurus molecular cloud is a relatively close star-forming region at 450 light-years away. It has been the site of many astromolecule discoveries. European Southern Observatory

Astronomers can use telescopes to find specific molecules in the atmospheres of neighboring planets, in nebulae – clouds of interstellar dust and gas – hundreds or thousands of light-years away, or in galaxies beyond the far reaches of the Milky Way.

So far, astronomers have found more than 350 molecules in the spaces between and around stars in just under a hundred years – the first such molecule was reported in 1937. Each year, the cosmic chemical stockroom grows by anywhere from a handful to a couple of dozen new finds. Many of these molecules are precursors to biomolecules, meaning they might provide hints about life’s origins elsewhere in the cosmos.

As an astrochemist, my research is all about chemicals found in space, especially in distant cosmic clouds where infant stars are born. Even so, the precise observations captured by these telescopes never cease to amaze me.

With this ongoing boom in astrochemical census data, there is a lot to be excited about. Sometimes, however, this excitement can be premature. Finding molecules in places people will likely never visit is no simple task, so vetting and sometimes correcting these observations is a continual process – especially for molecules whose signals aren’t as strong.

‘Seeing’ molecules in space

Astronomers can’t visit neighboring planets, let alone distant star-forming regions. So, how do they see what is out there?

Astronomers observe the cosmos with telescopes that collect all different wavelengths of electromagnetic energy. For astrochemistry, they typically use radio telescopes. These satellite-dishlike instruments are used to “see” radio waves, which have wavelengths much longer than the human eye can perceive.

The Robert C. Byrd Green Bank Telescope in West Virginia is a radio telescope that has been used in the discovery of many astromolecules. NSF/AUI/NRAO/John Stoke, CC BY

When molecules freely tumble around as gases in space, they rotate, and when a molecule drops from one rotational state to a lower one, it releases energy in the form of a photon, or electromagnetic particle. Different rotations correspond to different levels of energy. Each photon carries that energy with it to a telescope, which records its signal. The more photons of a given energy, the stronger that signal.
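
For the simplest case – a rigid, linear molecule – the pattern of those signals can be written down compactly. This is the standard rigid-rotor result from spectroscopy textbooks, where the rotational constant B depends on the molecule’s masses and bond lengths:

```latex
% Rotational energy levels of a rigid linear molecule (B = rotational constant)
E_J = h \, B \, J(J+1) , \qquad J = 0, 1, 2, \ldots
% Dropping from level J+1 to level J releases a photon at frequency
\nu_{J+1 \to J} = 2B(J+1)
```

The result is a ladder of evenly spaced lines at 2B, 4B, 6B and so on. Because no two molecules share the same set of constants, that spacing is part of the fingerprint astronomers search for.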

If a radio telescope records all of the expected signals for a given molecule – its spectrum – then astronomers can confidently say that they have detected that molecule.

Infrared telescopes, such as the James Webb Space Telescope, and telescopes that detect visible light, such as the Hubble Space Telescope, can also be used for astrochemistry. The chemical signals these telescopes collect, however, are often more difficult to distinguish from one another.

Knowing what to look for

Behind every discovery of a new molecule in space is months or even years of work to capture a chemical’s “fingerprints,” or its spectrum.

I spent about a year doing this kind of work at the University of Cologne in Germany as a Fulbright research fellow. There, I used computer models of astrophysically interesting chemicals to predict what their spectra would look like.

In the lab, I injected the chemicals into a glass tube held under vacuum to mimic conditions in space. Using sensitive instruments, I recorded what a radio telescope would see if it were looking at only that molecule.

Astronomers had already found some of these molecules in space, and my colleagues and I were reexamining them, but we were also looking at molecules that we predicted might exist somewhere in space.

I worked with a team of scientists to adjust the computer inputs, over and over, until the simulated spectra matched the experimental data. A close match meant the simulation reliably modeled what a molecule’s fingerprint looks like in space. Reliable model spectra allow astronomers to detect chemical features at frequencies beyond what they can measure in the laboratory.
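To make that fitting step concrete, here is a minimal sketch – not the Cologne group’s actual pipeline – of how measured line frequencies can be fit to a simple rotational model and then extrapolated beyond the lab measurements. The “observed” frequencies below are illustrative, CO-like values, and the model is the standard linear-molecule expression with a centrifugal distortion term:

```python
# Minimal illustration: fit laboratory line frequencies to a rotational
# model, then predict lines outside the measured range. Not a real
# pipeline; the "observed" frequencies are illustrative, CO-like values.
import numpy as np
from scipy.optimize import curve_fit

def line_freq(J, B, D):
    """Frequency (MHz) of the J+1 <- J transition of a linear molecule,
    given rotational constant B and centrifugal distortion constant D."""
    return 2 * B * (J + 1) - 4 * D * (J + 1) ** 3

J_obs = np.array([0, 1, 2, 3])  # lower-level J of each measured line
f_obs = np.array([115271.2, 230538.0, 345796.0, 461040.8])  # MHz

(B_fit, D_fit), _ = curve_fit(line_freq, J_obs, f_obs, p0=[57600.0, 0.1])
print(f"B = {B_fit:.3f} MHz, D = {D_fit:.5f} MHz")

# With reliable constants in hand, predict a line beyond the lab data:
print(f"Predicted J=9 <- J=8 line: {line_freq(8, B_fit, D_fit):.1f} MHz")
```

Real fits involve many more parameters and far more lines, but the logic is the same: once the simulation reproduces the measurements, its predictions at other frequencies can be trusted.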

While my contributions to the Cologne team didn’t lead to a discovery of a new molecule in space, I gained an appreciation for the work behind the scenes of molecule discovery. The laboratory measurements are done precisely so that astronomers can be confident in their detections.

When detections get cloudy

Even with powerful radio telescopes and thorough experiments, some detections aren’t quite as clear as astronomers would like them to be. Sometimes, the signals are too faint for astronomers to be totally confident that they represent the molecules they think they do. Other times, there are too many molecule signals crowded together, causing different signals to blend.

Scientists have detected molecules relevant to biological processes back on Earth in comets and the atmospheres of other planets. These detections are exciting, but most scientists exercise caution to avoid jumping to conclusions because those molecules generally can exist outside of living things.

Sometimes, however, the excitement overshadows the caution and leads to premature conclusions.

Scientists often get excited when new molecules, especially biologically relevant molecules, are potentially present, and they want to share those findings with the world. Some researchers are also concerned about being the first to publish a new result, especially because a lot of telescope data is publicly available after a brief proprietary period.

Perhaps one of the most exciting nondiscoveries in astrochemistry was that of glycine in interstellar space more than 20 years ago. Glycine is the simplest amino acid, a type of molecule essential for life as we know it. Finding this molecule in a nebula would change how scientists think about the evolution of life’s ingredients.

Follow-up studies showed that key signals were missing from the initial report of glycine. As a result, astrochemists now generally agree that glycine has not been found in star-forming nebulae.

This is a mid-infrared image of Sagittarius B2 captured by the James Webb Space Telescope. Sagittarius B2 is a molecule-rich region of space and one of the places scientists thought they had observed glycine before that claim was refuted. NASA, ESA, CSA, STScI, A. Ginsburg (University of Florida), N. Budaiev (University of Florida), T. Yoo (University of Florida). Image processing: A. Pagan (STScI), CC BY

More recently, another molecular discovery has been scrutinized: the potential detection of phosphine in Venus’ atmosphere. Unlike with glycine, scientists have not yet agreed on whether phosphine, which is associated with some biological processes on Earth, is indeed present on Venus.

Initial reports of phosphine on Venus spurred chatter about biosignatures and evidence of potential life on Earth’s much hotter sister planet. However, follow-up studies by other scientists couldn’t confirm the initial results.

Over the past five years, scientists have continued to try to confirm or definitively refute Venusian phosphine.

Vetting claims

When you read about the discovery of a new molecule in interstellar space or on another planet, how can you judge whether to trust the detection? It’s important to watch out for flashy headlines claiming that signs of life have been found elsewhere in the universe. Molecule discoveries that rely on only one or two detected signals are generally less reliable than those based on five or more.

For discoveries that tease hints of life on other worlds, other scientists are almost certainly going to try to reproduce the results. If you wait a few months for the initial fanfare to die down, you can do a web search to see what new results have come out to support – or refute – the original claim.

The Conversation

Olivia Harper Wilkins receives funding from NASA and the National Radio Astronomy Observatory (NRAO).

Why is water wet?

Evaporating water is essential to helping your body cool down. Imgorthand/E+ via Getty Images

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to CuriousKidsUS@theconversation.com.


Why is water wet? – Philip S., age 12, Northville, Michigan


Spring is often a rainy season. If you get caught in a downpour without an umbrella, you will quickly learn what it means to be wet. But what is it about water that makes it wet?

I am an atmospheric scientist, and water is a fundamental part of the atmosphere. I study storms and wildfires, both of which are closely connected to water.

Why water is wet has to do with how water molecules interact with each other and the things around them.

Wet you can see

Imagine you accidentally spill water on your clothes one day. You will notice two things: First, the water spreads out on the cloth, and the wet part sticks to your body more than the dry part does; and second, the wet area feels cool.

Wet clothes stick to your body and water spreads across the fabric because water molecules are strongly attracted to the molecules of other materials – a chemical property called adhesion.

One important reason why water molecules are so attracted to other molecules is that they’re polar. Like a microscopic magnet, one end of the molecule carries a small negative charge, while the other end carries a small positive charge.

Water, also known as H2O, has a slightly negative charge surrounding its oxygen atom and a slightly positive charge around its hydrogen atoms. Riccardo Rovinetti/Wikimedia Commons, CC BY-SA

Many everyday materials, such as glass, skin and clothing, are also polar. When water touches these surfaces, the electric charges on those materials attract the water molecules and hold them in place. This strong attraction also helps water spread out over surfaces. Whether something feels “wet” to you has to do with how good a liquid is at staying in contact with a surface. Water feels wet because its molecules stick tightly to each other and to your skin.

Compared with water, mercury is only weakly attracted to other surfaces. Its atoms are much more strongly attracted to one another, meaning mercury has very strong cohesion. As a result, mercury beads up rather than sticking to the things it touches.

The cool feeling of being wet comes from evaporation. Liquids need energy to change into gas because they must overcome the forces holding molecules together before they can float away. They take this energy from their surroundings in the form of heat.
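To put a rough number on it – a standard back-of-the-envelope figure, not something from the article itself – the heat Q needed to evaporate a mass m of water is:

```latex
Q = m\,L_v, \qquad L_v \approx 2{,}400\ \text{joules per gram near skin temperature}
```

So evaporating just 1 gram of sweat pulls roughly 2,400 joules of heat out of your skin – which is why even a little moisture can make you feel noticeably cooler.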

As temperature increases, molecules gain energy and more easily overcome the attraction holding them together. OpenStax, CC BY-SA

When you step out of a pool and the water on your swimsuit evaporates, you might feel cold because the evaporating water is taking heat away from your body. This is why wet things so often feel cool. In fact, something that merely feels cool can trick you into thinking it’s wet, even if no liquid is actually present.

Evaporative cooling is very useful in daily life, and other liquids can also do it. For example, when you clean a wound with an alcohol wipe, it also feels cool. Like water, alcohol evaporates and carries heat away from your body. Similarly, when sweat evaporates, it removes heat from your body and cools you down.

Wet you cannot see

Sometimes you can feel damp even when you don’t see any water. This is related to the amount of water vapor in the air, also called humidity.

Air can hold only a limited amount of water vapor. When there is already a lot of water vapor in the air, evaporation slows down. This makes it harder for sweat on your skin to evaporate, so you feel sticky and wet.

When air becomes completely full of water vapor, the vapor starts to condense and turn back into liquid water to form dew or fog.

How much water vapor air can hold depends on temperature. Warm air can hold more water vapor, while cold air can hold less. As temperature increases, water molecules gain more energy and can more easily escape their attraction to each other and become a vapor.

This is why dark or shady places often feel damp. These areas get less sunlight, stay cooler and cannot hold much water vapor. As a result, water does not evaporate easily and the area stays wet.

Shady, cool areas can feel wet even when you don’t see water around. Ketut Agus Suardika/iStock via Getty Images Plus

A lot of water, but not wet

Because the air’s ability to hold water depends on temperature, sometimes the air can contain a lot of water vapor but you don’t feel wet.

For example, when you are near a fire, the burning process produces water vapor. However, because the temperature is also higher, the air can hold more water vapor. This speeds up evaporation. If there are wet clothes nearby, they may actually dry more quickly.

In weather forecasts, scientists use relative humidity to describe how humid the air feels, rather than the actual amount of water vapor in the air.

Because hot air can hold so much moisture, the relative humidity near a fire stays low. That is why people are often surprised when I tell them that wildfires release large amounts of water vapor. Fire is the last thing most people associate with being wet.
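As a rough illustration – a minimal sketch using the Magnus approximation for saturation vapor pressure, a standard formula whose exact constants vary a little between sources – the same amount of water vapor can feel damp in cool air and dry in hot air:

```python
# Minimal sketch: how relative humidity depends on temperature, using
# the Magnus approximation for saturation vapor pressure over water.
# (A standard approximation; these constants are one common choice.)
import math

def saturation_vapor_pressure(temp_c):
    """Approximate saturation vapor pressure, in hPa, at temp_c Celsius."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def relative_humidity(vapor_pressure_hpa, temp_c):
    """Relative humidity (%): actual vapor pressure vs. the maximum the
    air can hold at this temperature."""
    return 100 * vapor_pressure_hpa / saturation_vapor_pressure(temp_c)

e = 15.0  # hypothetical actual vapor pressure, in hPa
print(f"{relative_humidity(e, 15):.0f}%")  # ~88% at 15 C: air feels damp
print(f"{relative_humidity(e, 35):.0f}%")  # ~27% at 35 C: same vapor, feels dry
```

The same 15 hectopascals of water vapor puts cool air close to saturation but leaves hot air feeling dry – which is why air near a fire can carry a lot of moisture and still have low relative humidity.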


Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

The Conversation

Yunyao Li does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Perseverance doesn’t always pay off for companies – sometimes it’s better to ‘fail fast’

Slack's embrace of a ‘fail fast’ approach helped it become the world's dominant intra-office messaging app. AP Photo/Kiichiro Sato

Across the business world, companies often double down on struggling ideas, retreating only after clear evidence shows they won’t work.

A recent spectacular example was Meta’s metaverse push. After the organization invested US$80 billion over several years, it announced changes in March 2026 that all but abandoned its grand strategy.

But many companies are following the opposite approach of quickly walking away from failure instead of blindly sticking to a vision. Google ended its cloud gaming service Stadia when it failed to take off, choosing instead to reuse the technology elsewhere. Mercedes abandoned its zero-sidepod F1 concept once it clearly hit a competitive dead end. And Slack transitioned from a failed gaming app to a ubiquitous intra-office messaging platform.

What drove all these decisions wasn’t a tolerance for failure. Instead, executives read signals of weakness early, confronted inconvenient evidence and changed course before greater losses accumulated. In other words, they embraced “failing fast.”

As business professors who study sales performance and sales failure, we argue that this concept is one of the most important yet most misunderstood ideas in our field. It’s not about celebrating mistakes or lowering standards, nor does it give leaders permission to abandon rigor or give up easily.

At its core, it’s about creating the conditions for faster learning: building the managerial discipline to recognize when an opportunity is unlikely to pay off, stopping before sunk costs deepen, and redirecting scarce resources to more promising bets. And this is a strategy that can work for any company, at any level, no matter how high or low the stakes.

The Slack model

Slack is everywhere these days. But few recall that it was actually founded in 2011 as a multiplayer online game called Glitch that failed to take off. The company, then known as Tiny Speck, shut it down in 2012, but in the process its leaders identified hidden value in an internal communication tool they had built simply to coordinate their own work.

This practical side project looked like a tool that could do well in the burgeoning market for team-collaboration software. So the company pivoted by deploying its remaining capital and talent to launch Slack in 2013. Since that time, Slack has become one of the fastest-growing enterprise software platforms in history, eventually leading to a $27.7 billion acquisition by the business platform Salesforce in 2021.

Stories like these are often told as tales of persistence, but they’re actually examples of disciplined quitting. Similar cases include 3M’s accidental invention of Post-it Notes (first used as ad hoc bookmarks for hymnals); Shopify’s pivot from selling snowboards to enabling e-commerce infrastructure; and Instagram’s shift from a cluttered check-in app to a focused photo-sharing platform.

Together, these stories suggest that success depends not only on staying the course but also on recognizing early when the course is no longer worth pursuing and changing to a better one.

Know when (and how) to fold ‘em

Despite this history, much of business culture still promotes a simpler message that grit drives success.

This mindset, however, can also foster a sunk cost fallacy. Myriad examples of this trap linger across business lore to this day: Blockbuster declining an offer to purchase Netflix and instead expanding its physical store footprint; Kodak inventing the digital camera but opting to prioritize its dominant film business; and the joint venture behind the Concorde supersonic airliner persistently funding the project despite strong evidence that it would never become commercially viable. Blockbuster and Kodak eventually went bankrupt after once dominating their respective industries, and the Concorde was retired in 2003.

Blockbuster went bankrupt in 2011 after it failed to innovate, while Netflix became dominant. AP Photo/Kiichiro Sato

Sunk costs, in short, come into direct tension with notions of failing fast. But our research underscores the latter’s benefits, showing that associated payoffs extend beyond high-profile corporate pivots and even apply to everyday decision-making. Studies in business-to-business sales, for example, find that walking away early from low-potential opportunities can improve motivation and performance.

That said, there’s an important condition: This approach only works when executives and customer-facing personnel have a grounded understanding of what the company can do and what customers want – rather than treating early exit as a suboptimal default.

Across these varied cases, our research points to another clear pattern: Failing fast is typically structured as a disciplined process for making decisions under uncertainty, with three distinct stages. Again, the origin story of Slack is a good example.

The first step is to gather information that suggests whether a given project will succeed. These signals can come from direct observation or from data. The goal is to build an early, evidence-based picture of whether an effort is gaining traction. In the case of Slack, CEO Stewart Butterfield and his team recognized through direct user experience that Glitch, the game, just wasn’t fun. They also saw signals of structural limitations, including the lack of a viable path to success on mobile devices.

The next step is to interpret the collected data – combining experience, contextual awareness and analytical tools to distinguish between ideas that warrant investment and those that don’t. Structured approaches, like comparing goals to historical benchmarks, can make sure that assessments are consistent and grounded in evidence rather than intuition alone. In Slack’s case with Glitch, Butterfield synthesized the early signals and concluded that, despite significant sunk costs, the game didn’t justify further resources.

The final and most difficult step is execution. When signals and analysis point to early exit as the most effective course, acting on that conclusion is hard. Withdrawing, even when continuing no longer makes strategic sense, feels counterintuitive in an environment that rewards persistence. That’s why executives need to make the case that there’s a smarter way to allocate time, capital and attention. With Slack, Butterfield followed through on his analytical convictions by shutting down the game and repurposing internal technology to create Slack – reframing this “failure” as a strategic reallocation.

A lesson for everyone

These lessons extend far beyond the world of sales, startup culture and Big Tech. Managers face similar choices in product development, partnerships and hiring – situations where the real risk is not failure but failing late. That’s why strong organizations understand how to fail by design: They define success and failure criteria early, test assumptions quickly and contain any downside before commitment becomes wasteful. These are universal lessons that apply across industries, up and down the chain.

As a more poetic analogy, we turn to the sea. No skilled sailor tries to cross every channel. Some waters will test their endurance, while others will open up new routes. The best sailors prove sound judgment by reading the winds early and changing course before a storm takes hold.

Business leaders face the same choice. Growth comes from neither persistence alone nor reflexive retreat, but from knowing when the effort no longer creates value.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

Donkeys are a symbol of endurance for Palestinians – they are also a target of settler violence and care

A young Palestinian rides a donkey in the occupied West Bank on Sept. 30, 2025. John Wessels/AFP via Getty Images

Donkeys tend to symbolize humility and redemption; in Jewish tradition, the Messiah will arrive on a white donkey.

But in today’s “land of the Bible,” donkeys have become victims of the war in Gaza and, increasingly, targets of the growing settler violence in the West Bank.

Take what happened in December 2025 near Jaba, north of Ramallah. While a Palestinian child watched, seven Jewish settlers from Gur Aryeh, a small illegal outpost, reportedly led away his family’s three donkeys.

When an Israeli peace activist later arrived at the scene, she found one of the donkeys in severe pain, a rope around its neck. She later told me how she had to avert her eyes as she shone a flashlight on the stricken donkey for the rescue crew from the Starting Over Sanctuary, a nonprofit dedicated to treating and rehabilitating animals in Israel, the West Bank and Gaza.

The donkey didn’t survive the journey to the hospital.

While violence toward animals tends to be seen as distinct from that directed at humans, the two phenomena are deeply intertwined. As someone who studies settler colonial violence alongside political ecology and human-animal relationships, I argue that Israeli settlers’ attacks on donkeys as well as the care they practice toward these animals reveal how colonial dispossession happens and is in turn naturalized on the ground.

A donkey owned by a Palestinian herder from Deir Istiya in the northern West Bank in June 2025. Irus Braverman, CC BY-SA

Harming animals through direct attack, deprivation, seizure and forced separation has long accompanied Israeli violence against Palestinian communities. During the Nakba in 1948, in which 750,000 Palestinians fled or were displaced from their land by Zionist forces, farm and domestic animals were killed, seized, left without care or driven to starvation.

A similar pattern has occurred in the war on Gaza following the attack by Hamas and other militants on Israel on Oct. 7, 2023. By August 2025, as many as 97% of farm animals in Gaza were killed through bombing, starvation and the destruction of agricultural infrastructure, according to the Euro‑Mediterranean Human Rights Monitor. Farms were razed, and cats and dogs were left to fend for themselves as families were repeatedly displaced from their homes by the Israeli airstrikes.

Carrying the burden for millennia

Donkeys, in particular, carry a deep history in the region and today face heightened vulnerabilities.

First domesticated approximately 7,000 years ago in the Horn of Africa, they transformed human mobility and are still important in the daily lives of millions of poor people around the world.

To Palestinians, donkeys have become emblems of “sumūd,” or steadfast endurance – an ethic they often emphasize to describe daily life under Israeli occupation.

Prominent Palestinian poet Mahmoud Darwish said in a television interview in 1997: “I wish I was a donkey. A peaceful, wise animal that pretends to be stupid. Yet he is patient, and smarter than we are in the cool and calm manner he watches on as history unfolds.”

Amid the ruins in Gaza and with fuel scarce, donkeys have provided vital transport for the injured as well as for goods and belongings.

Palestinian political analyst Ahmed Najar put it aptly on July 20, 2025: “My mother, who is in Gaza, cannot walk. Since October 2023, my family has been displaced seven times. Every time the bombs fell too close or the leaflets rained down warning my family to flee, the only way she could be moved was on a donkey. … (In) the dust and the terror – donkeys became ambulances, buses, lifelines.”

A Palestinian man rides a donkey-pulled cart past a damaged U.N.-run school in the Jabalia refugee camp in the northern Gaza Strip on May 31, 2024. Omar al-Qattaa/AFP via Getty Images

The December abduction of a donkey in Jaba was not an isolated incident. Settlers regularly seize and steal donkeys, alongside other farm animals, in raids on Palestinian pastoralist communities, especially in the Jordan Valley and Hebron Hills.

Since October 2023, such attacks have intensified significantly. In March 2025, U.N. agencies documented the theft or killing of more than 1,400 sheep and goats in one Jordan Valley attack.

Palestinian shepherds often ride their donkeys when taking their flocks out to pasture. But as settler harassment has increased, frequently carried out by armed settler shepherds riding on donkeys themselves, Palestinians rarely take their flocks out. With grazing routes rendered dangerous, Palestinian-owned donkeys are left behind, often spending their days tied to a tree – still loved, still named, but no longer moving across a landscape that has become hostile. They stand as quiet reminders of a disappearing pastoralist tradition.

‘Freedom flights’

A short distance from Jaba, a seemingly different donkey story unfolds. At the Starting Over Sanctuary in central Israel, volunteers prepare donkeys for “freedom flights” to Europe.

Since 2018, the charity has operated as Israel’s largest donkey sanctuary, rescuing and rehabilitating animals subjected to abuse, neglect and hard labor, particularly from the country’s south. Since the early 2020s, the Israeli sanctuary has periodically organized rehoming projects for the donkeys, transferring them by airplane to partner sanctuaries across Europe. After a yearlong pause amid war-related disruptions, and newly overwhelmed with injured donkeys pouring in from Gaza, the Starting Over Sanctuary recently resumed the flights, airlifting the rescued donkeys to sanctuaries in France and Belgium.

When I visited the sanctuary in December 2025, there were 800 donkeys in residence, many rescued by soldiers or informal networks encountering the injured or abandoned animals near conflict zones.

A donkey and cat at the Starting Over Sanctuary in Herut, Israel, on Dec. 16, 2025. Irus Braverman, CC BY-SA

While the donkey rescues carried out by the Starting Over Sanctuary are clearly motivated by what its workers describe as a deep love for donkeys, several Palestinian analysts and residents frame these rescues very differently. For them, a donkey taken from the Palestinian community represents another form of settler dispossession, regardless of whether that removal is carried out through acts of care by sanctuary workers near Tel Aviv or through physical violence by Jewish shepherds in the West Bank.

The tension between the cruelty toward Palestinian-owned animals by violent settler shepherds and the compassionate rescue of Palestinian-owned animals by Israeli animal activists exposes how animal and human life are mutually entangled, and morally charged, within the structures of what I and many others see as Israel’s settler colonialism.

The donkey stands at the center of these tensions: a symbol, companion, laborer, witness, target of violence and object of compassion.

Normalizing dispossession

Meanwhile, a third donkey story has been unfolding in the rural landscapes of the Israeli occupied West Bank, where Jewish settlers increasingly use donkeys while grazing sheep across the contested terrain. Settler shepherds on donkeys lead their herds across the open hills in scenes that closely resemble Palestinian herding routines, which were once common in the same areas.

An Israeli settler riding a donkey herds his flock of goats and sheep near an outpost in the occupied West Bank on June 29, 2025. Menahem Kahana/AFP via Getty Images

The resemblance is particularly striking because many Palestinians are now barred from practicing their pastoralist traditions in areas where settlers continue to roam freely. The settlers’ use of donkeys evokes a biblical past while recasting pastoralist forms of land use as their inherited birthright, even as Palestinian pastoralism is increasingly framed as backward, ecologically harmful and illegal.

Donkeys thus play an often overlooked role in the broader shift in settler strategy unfolding across the West Bank over the past decade or so – and increasingly since October 2023 – in which small shepherding outposts have moved from the margins to the center of settlement expansion. In recent years, herding has become a key tool for claiming territory beyond the established settlements, allowing settlers to control large swaths of land with minimal infrastructure. These outposts are now a cutting-edge strategy in what The Guardian has described as the largest land grab in the West Bank since 1967.

Beyond their material effects, such pastoralist practices by the settler shepherds help normalize this land grab. Donkeys, sheep and cows, alongside olives and other natural entities, are part of ongoing ecological warfare that naturalizes both Palestinian dispossession and settler reclamation, as I explore in an upcoming academic paper in the journal American Anthropologist.

In the occupied West Bank, as in all other places, human and animal vulnerabilities are intertwined. A donkey may be flown to safety, but the humans who depended on her remain in danger. The animal’s rescue, as such, reveals disturbing asymmetries about who gets saved and who is left behind.

The Conversation

Irus Braverman receives funding from the Baldy Center for Law & Social Policy and the National Humanities Center.

Texas proposes Bible readings for K-12 students, reigniting century-old legal battle over their place in public schools

A proposed list of required reading for Texan public schools includes several stories from the Bible. plherrera/E+ via Getty Images

In 2023, Texas passed a law aimed at improving K-12 students’ reading. In part, it called for a required reading list to spell out “at least one literary work to be taught in each grade level.”

An initial list named about 300 texts – many of them from the Bible. The Texas State Board of Education then cut the list by 100 readings but still included more than a dozen biblical texts.

Debate over the Bible’s place in classrooms, if any, has erupted since the list was published. At the board’s April 10, 2026, meeting, all nine Republican members preliminarily approved the materials, while the five Democrats rejected the list. The board plans to take a final vote in June.

Critics argue that mandatory Bible readings in public schools would violate the religion clauses in the First Amendment to the U.S. Constitution.

American courts have considered similar questions for 150 years – with the answer often depending on a lesson’s purpose.

Courts, Bible and schools

The first reported case on the Bible in U.S. schools was in 1872, when the Supreme Court of Ohio affirmed a ban against religious instruction in public classrooms. Conversely, 50 years later, the Supreme Court of Georgia upheld an ordinance to start school days with readings from the King James Version of the Bible.

Students in San Antonio, Texas, pray in 1962. Bettmann via Getty Images

Bible reading first reached the U.S. Supreme Court in 1963, in the case of School District of Abington Township v. Schempp. This case, from Pennsylvania, was consolidated with a similar one from Maryland, called Murray v. Curlett.

Opponents in both states challenged mandatory Bible readings and prayer at the start of school days. The plaintiffs argued that these activities violated the establishment clause of the U.S. Constitution’s First Amendment: that “Congress shall make no law respecting an establishment of religion.”

The justices struck down both practices, finding that they did not have a secular purpose and that their main effect was to advance religion.

Attempting to allay concerns they were anti-religious, the justices declared, “It certainly may be said that the Bible is worthy of study for its literary and historic qualities. Nothing we have said here indicates that such study of the Bible or of religion, when presented objectively as part of a secular program of education, may not be effected consistently with the First Amendment.”

Justice William Brennan’s concurrence added, “The holding of the Court today plainly does not foreclose teaching about the Holy Scriptures or about the differences between religious sects in classes in literature or history.”

Similarly, in the following decades, lower courts invalidated classes as violating the establishment clause if the subject matter promoted Christianity – teaching it as religious truth rather than discussing the Bible’s literary and historical qualities. In 1981, for instance, the 5th U.S. Circuit Court of Appeals banned a Bible literature course in Alabama.

Two years later, the 8th Circuit summarily affirmed a judgment striking down a program in Arkansas allowing students to take voluntary Bible classes during school hours.

In 1996, a federal trial court in Mississippi invalidated Bible study classes taught in a rotation with music, physical education and library courses, plus another called A Biblical History of the Middle East. The courts agreed that the classes were unacceptable because they advanced Christianity.

Texas proposal

Returning to Texas, the board’s reading list is far from inclusive. Proposed passages are primarily from a handful of translations of the Bible: the English Standard Version, New International Reader’s Version, King James Version, and one from the Jewish Publication Society. The list does not include translations used by Catholics or sacred texts from non-Jewish and non-Christian faiths.

Students work under Ten Commandments and Bill of Rights posters in a classroom at Lehman High School in Kyle, Texas, on Oct. 16, 2025. AP Photo/Eric Gay

Texts on the proposed list include well-known biblical lessons such as the Golden Rule for kindergarten, the Parable of the Prodigal Son for first grade, Corinthians’ definition of love for seventh grade, and the Beatitudes – the passage that begins, “Blessed are the poor” – for eighth grade. Selections for older students include David and Goliath, the Tower of Babel, and passages from the books of Job and Ecclesiastes, such as “for everything there is a season.”

As of now, the proposal permits parents who object to opt their children out of specific readings if they conflict with their religious or moral beliefs.

2 types of teaching

As Brennan noted in Abington, the Supreme Court “plainly does not foreclose teaching about the Holy Scriptures or about the differences between religious sects in classes in literature or history.” However, there is a significant difference between objectively teaching about religion and teaching of religion from a faith perspective.

This difference has been important throughout my own career. For 36 years, I have taught law with a special interest in the relationships between religion, law and education. But in addition to my education and law degrees, I hold a master’s degree in divinity. I previously taught religion, social studies and law to high school students, while teaching college theology part time.

Teaching religion at two Catholic high schools before and after law school, my job was to inculcate Roman Catholic values in my students. Conversely, teaching theology to adult students, I emphasized 11th-century theologian Anselm of Canterbury’s dictum that theology represents “faith seeking understanding.” In other words, my goal was to enable them to make their own judgments about whether to follow religious teachings.

In many cases, I have argued that increasing religious practices in public life is constitutional. My concern about Texas, however, is that the readings fail to distinguish between teaching about and of religion. Expanding students’ horizons and advancing tolerance by exposing them to religious perspectives is a good intention. Yet the breadth of selections is hardly inclusive, given its focus primarily on Christianity, to the exclusion of other faiths. Texas certainly can promote teaching about religion to enhance understanding of others, but it must be careful not to teach religion.

The Conversation

Charles J. Russo does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
