
What makes a good teacher? Ask a Republican and a Democrat, and they are likely to agree

Support for students is one quality that Democrats and Republicans alike value in a teacher. Brittany Murray/MediaNews Group/Long Beach Press-Telegram via Getty Images

If you follow the headlines, it can seem like K-12 schools in the United States are a political battlefield.

Some conservative parents and advocacy groups are lobbying to remove certain books from classrooms and libraries, most often those that highlight LGBTQ+ issues or race and racism.

Some civil liberties groups, librarians and progressive parents, meanwhile, are pushing back against book bans, saying they are a form of unnecessary censorship.

Parents and school boards also are clashing over a range of other issues, from how transgender and nonbinary students are treated and which bathrooms they can use to whether teachers should use artificial intelligence in the classroom.

Beyond this evidence of political polarization, though, there’s another, less divisive reality. Ask people to name their best teacher, and regardless of their political affiliation, they will likely offer a similar answer. Most people will say that they learned a lot from a teacher who knew them, cared about them and made learning relevant to their lives.

Over five years, from 2020 through 2025, we asked more than 2,000 Americans, including Democrats, Republicans and independents, what makes a very good teacher. We expected deep partisan divides. Instead, we found something rare: genuine, cross-partisan agreement.

How we ran the study

We began in 2020 with a nationally representative survey of 334 adults, asking them to recall a teacher they learned a lot from. We then asked the survey participants to look at 10 statements that might describe a good teacher and rank them from most to least important.

Five of the statements focused on relationships – caring about students, making lessons relevant and giving students individualized support, for example. The other five focused on performance and discipline – whether teachers covered a lot of material, rewarded top performers with grades or prizes, and applied rules consistently to all students.

Respondents generally highlighted the same seven of the 10 statements, giving us a vision of how they perceived a very good teacher. People prioritized the same factors – how much teachers cared about their students and whether they supported them – regardless of their age, race, gender or political affiliation. Republicans and Democrats were indistinguishable in their descriptions of effective teaching.

People did not prioritize whether teachers covered a lot of material, made students compete or ran a strict and disciplined classroom.

In 2022, we conducted a similar survey of 179 teachers in Arizona and California. The results echoed our 2020 survey participants’ view: Teachers also defined very good teachers as ones who emphasized relationships, made lessons relevant and knew the subject matter.

Given the prominence of politically charged education debates, we were a bit surprised by our results. We began to wonder: Do people privately agree on what it means to be a good teacher, but change their opinion if their image of good teaching is associated with an ideological orientation they disagree with?

A student gets a hug from a teacher at a Garden Grove, Calif., elementary school on the first day of class in September 2024. Paul Bersebach/MediaNews Group/Orange County Register via Getty Images

Adding a partisan label

To explore this question in late 2024 and early 2025, we ran a third experiment with a nationally representative sample of 1,562 adults from a range of political backgrounds.

We gave all participants the same description of a very good teacher, identified in our previous experiments. For some participants, chosen at random, we then noted that this description had been endorsed by Democrats, Republicans or people with no political affiliation.

When the participants read the teacher descriptions without any political labels attached, about 85% of Democrats, Republicans and independents agreed with the description of a very good teacher.

When we added a note saying that a description of a good teacher was endorsed by a political party the survey participant did not identify with, participants became less likely to support the statement.

The effect was sharpest among Republicans: Support fell from 85% to 64% when the description was tied to Democrats. Democrats’ agreement slipped less, from 86% to 76%, when the description was tied to Republicans.

Even with this partisan penalty, nearly two-thirds of Republicans and Democrats still agreed on what it means to be a good teacher.

Political scientists call this affective polarization: How we react to an idea depends not just on the idea, but on who we think supports it.

At the national level, education is often framed as an intractable partisan conflict.

Yet at the individual level, many Americans continue to express confidence in their own local schools. Our findings suggest that part of this gap may be driven by how issues are framed rather than by fundamentally incompatible beliefs.

Regardless of political affiliation, people are less likely to prioritize whether teachers cover a lot of material or run a strict and disciplined classroom. Paul Bersebach/MediaNews Group/Orange County Register via Getty Images

This matters more than you might think

Federal and state education policy over the past four decades, including laws like No Child Left Behind, which mandated routine federal testing in reading and math, has emphasized testing and competition. These priorities don’t always match what Americans across the political spectrum say they value most.

Americans continue to differ on many important education questions, including what children should learn in school, the role of school boards and other issues.

But these disagreements coexist with a shared belief about what good teaching looks like in practice.

Recognizing this gap could open new possibilities for education reform. When debates focus exclusively on disagreements, they can obscure areas of agreement that might otherwise serve as starting points for collaboration.

We encourage readers to go ahead and run a similar, small experiment: Ask people about their best teacher, then listen to what they say. The answer, it turns out, is likely more unifying than you expect.

The Conversation

For this specific project, Gustavo E. Fischman received funding from the Institute of Social Science Research at ASU. He also received funds for other projects from the National Science Foundation, the Spencer Foundation, the Open Society Foundation, the IDRC, and the Fulbright Commission.

Eric Haas and Margarita Pivovarova do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

‘Devil Wears Prada 2’ shows how Christian imagery circulates in unusual ways through the fashion industry

Actress Meryl Streep attends the world premiere of 'The Devil Wears Prada 2' in New York. Angela Weiss / AFP via Getty Images

At the world premiere of “The Devil Wears Prada 2,” actress Meryl Streep leaned into her character’s devilish persona. She wore the character’s signature sunglasses along with long black gloves and a flowing red leather cape from Givenchy’s Winter 2026 collection.

Streep’s outfit, though, is a small moment in a much larger story – one in which Christianity and fashion have been intertwined for centuries, sometimes as adversaries, sometimes as collaborators.

While neither of the “Devil Wears Prada” movies revolves around Christianity, the invocation of the devil taps into an older moral rhetoric. For centuries, fashion was cast as the troublesome, if not villainous, enemy of a pure and spiritual Christianity – a symbol of putting material desires before holy ones. For example, 18th-century cleric and founder of Methodism John Wesley urged his followers to show their faith by dressing “neatly” and “plainly.”

Yet Christian imagery has come to shape the industry in profound ways. As a scholar who researches the relationship between Christianity and fashion, I have traced how Christian imagery circulates in surprising forms. The devil, for instance, occasionally appeared in fashion advertising to suggest sin, sensuality and transgression.

Christian imagery of angels and Eve

In the mid-20th century, Christianity often occupied a supporting role in the fashion industry. It showed up in articles by Christian religious leaders and color photographs of Christian art and architecture published in fashion magazines.

For example, articles on how Christianity addresses contemporary problems by Catholic Bishop Fulton Sheen and Columbia University Chaplain James A. Pike appeared in Vogue alongside ads for makeup and fashion photo shoots.

Christian imagery also appeared in fashion advertisements featuring “Sunday best” clothing and Easter dresses. Ads showed angels gifting consumers “heavenly” products that promised beauty and ease.

The devil only occasionally played a part in ads for fashion products, such as perfumes, makeup and handkerchiefs. These advertisements depicted the devil as a snake or alluded to him and his role in the Book of Genesis. The biblical passage recounts how the serpent, typically interpreted as the devil in Christian theology, tempted Eve to sin by eating the forbidden fruit. Eve then offers the fruit to Adam, and, having both sinned, they realize their nakedness, are ashamed and make clothing.

Fashion advertisements, ranging from Revlon in the 1940s to Hanes in the 1960s, celebrated Eve’s rebellious action. Revlon “double” dared women to try their “Fatal Apple” makeup so they could look like Eve, while Hanes stated, “Poor Girl! She never knew the temptation of seamless stockings by Hanes,” next to an illustration of Eve holding an apple by a serpent.

Ads played with the idea of fashion as a temptation in which female consumers should indulge. Female consumers were urged to “Be Eve” and give into the desire to purchase products.

The devil was eclipsed as ads featured garden settings and products that promised “the look of Eve.” Eve symbolized beauty and promised consumers the same results through their purchasing power.

A 1967 ad for the “Eve Petticoat” issued an invitation: “Come, pretty girl. Be Eve, if you wish.” In that same decade, Catalina’s “part of the art of Eve” campaign for their swimwear showed what this meant. Each ad featured a woman in a provocative pose wearing a Catalina bathing suit in a garden setting. By donning Catalina, the ad implies, the wearer can become Eve – attractive, stylish and sexy. By highlighting Eve’s rebellion alongside her beauty, ads framed her as a fashion heroine.

Eve’s prominent role in advertising demonstrates how the Judeo-Christian tradition permeated American culture, including the fashion industry.

An evolving fashion landscape

While Christianity appeared in industry advertisements, it also slowly began to take a more prominent role in the garments themselves as designers grew bolder. At first, Christianity inspired the design of many garments; later, Christian figures began to appear on designer garments.

For example, in the 1960s, American designer Geoffrey Beene, known for his minimalist design aesthetic, drew inspiration from the cassocks worn by Catholic priests. So, too, did Spanish designer Cristóbal Balenciaga. In 1967, his black evening gown with cape radiated simplicity in form and draping even as it also referenced the attire of Catholic priests.

While Beene and Balenciaga received praise for their restraint and elegance, the lesser-known London-born designer Walter Holmes created controversy with his “mini-medievals” in 1968. Modeled after a monk’s robe and a nun’s habit, Holmes combined Christian inspiration with the miniskirt trend, which some people found fun, while others labeled it offensive.

Luxury fashion brand Krizia’s collection.

In the 1990s, Italian luxury fashion brand Krizia’s collection included women wearing cassock-like dresses, while Italian fashion designer Stefano Pilati’s 2010 line for Yves Saint Laurent played on the attire of Catholic nuns.

More recently, in spring 2020, French designer Virginie Viard’s designs for Chanel referenced nuns and Catholic school girl uniforms.

Yves Saint Laurent 2010/2011 fashion show.

‘Spiritual marketplace’

In the 1990s, Christianity began playing an even larger part in fashion, as the Virgin Mary and saints began to appear on garments. Before then, designers had generally avoided depicting religious figures, preferring more abstract interpretations, a choice that also helped them avoid the controversy that can come with depicting sacred figures.

Designer Gianni Versace challenged this tacit rule in his Fall/Winter 1991 collection. It included biker jackets adorned with bejeweled crosses and, in the finale, a halter top that featured the Virgin Mary made out of a mosaic of jewels. The garment was also the centerpiece of ads for the collection and showcased the fashion potential of Christian figures.

Versace’s Marian halter reflected the larger shift away from institutional religion toward individual spirituality. Christian symbols were lifted from church contexts and recirculated through popular culture, including fashion, in new ways. Versace’s rock star rendering of the Virgin Mary offered people a new way of seeing her – one open to interpretation outside of doctrine. Like Versace, they could claim her and reimagine her on their own terms.

Sociologist Wade Clark Roof described the religious landscape as a “spiritual marketplace.” People relied less on religious authorities and more on the meaning they could create from “available images, symbols, moral codes, and doctrines.”

Religious ideas and products circulated through music and movies, crystal shops and sports stadiums, Christian bookstores and designer collections. Within this spiritual bazaar, fashion became a place where people could reimagine Christian symbols, figures and history in new ways.

Modern-day trends

In the years since, Christianity has become a regular feature in fashion collections. Most notably, it regularly has a starring role in the work of Dolce & Gabbana. Their 1998 “Stromboli” collection revolved around a Christian theme: a Marian procession, with dresses, tunics and blouses featuring Marian imagery.

The design duo have returned to Christian imagery several times. For example, their 2013 “Tailored Mosaic” line, inspired by the golden mosaics in the Cathedral of Monreale in Sicily, featured garments adorned with angels, saints and Mary, as well as biblical figures.

Dolce & Gabbana ‘Tailored Mosaic’ show.

A critic called the mix of garments the designers’ “most heavenly offerings to date.” In 2018, Christian themes and symbols again permeated their collection.

It is now almost commonplace for fashion lines to reference or include Christian symbols, themes and figures. In 2025, the vestments of Catholic priests inspired Dolce & Gabbana’s menswear collection. And at New York Fashion Week in 2026, YesuGod, “a luxury Christian fashion house,” showcased its designs – garments adorned with the words “anno domini” and others with “the Lord is Coming.”

The devil makes only an occasional appearance on the fashion runway and on the red carpet; historically, too, his presence has been minimal. Christian figures who embody ideals of goodness and holiness – saints, Mary and even Jesus – are the ones who rule the runway. Christianity and fashion are not so separate after all.

The Conversation

Lynn S. Neal does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

The missing link in America’s critical minerals push isn’t mining – it’s processing expertise

MP Materials’ Mountain Pass mine and processing facility in California was for years the only U.S. rare earth elements mine. Tmy350/Wikimedia Commons, CC BY-SA

The United States is spending billions of dollars to secure access to critical minerals – minerals and metals that are essential to modern technology, from electric vehicles to smartphones and military systems.

But amid the push to dig more, one question gets far too little attention: Who will actually process what comes out of the ground?

Between mining and the finished product lies a complex chain of separation, refining and advanced manufacturing. Since the 1990s, however, the United States has lost much of its critical mineral processing capacity.

Rebuilding domestic mineral supply chains will depend not only on resource availability and funding, but also on whether the U.S. can rebuild the technical expertise and industrial systems required to process those materials on a large scale.

How America lost its lead

The United States was a global leader in rare earth minerals from 1965 through the mid-1980s. It produced about 15,000 metric tons a year, about three times the amount produced by the rest of the world.

The Mountain Pass mine in California supplied the majority of the world’s rare earth elements used in electronics and the defense industry. American metallurgists, chemical engineers and processing facilities had significant expertise in its production and processing.

However, environmental damage, including wastewater pipeline leaks that released radioactive wastewater into the Mojave Desert during the 1980s and 1990s, and tightening regulations increased operating costs in the United States. During that period, much of the world’s manufacturing base for rare earth elements shifted to China, where labor costs were lower and environmental regulations were less stringent.

As production grew abroad, U.S. production of rare earth elements fell sharply – to near zero by the early 2000s, according to the U.S. Geological Survey.

In recent years, as much as 90% of the rare earth minerals extracted in the United States and allied countries have been shipped to China for processing. In 2024, the U.S. relied on imports for about 80% of its rare earth compounds and metals.

Why bringing processing back is not simple

The U.S. government is now pushing to increase domestic critical minerals production, citing national security. But building a processing facility is not like opening a warehouse.

These facilities require years of permitting, highly specialized equipment and a workforce trained in metallurgy, chemical engineering and industrial systems operation. The time from investment decision to production can stretch across a decade.

The U.S. currently has two domestic rare earth mining locations. One, in southeast Georgia, extracts rare earth elements as a byproduct of heavy mineral sand mining. The other is Mountain Pass, which produces bastnaesite, a rare earth carbonate mineral. The two mines produced about 51,000 metric tons of rare earth mineral concentrates in 2025, while the U.S. imported about 21,000 metric tons of rare earth compounds, most of them from China, according to 2025 U.S. Geological Survey data.

The U.S. has also lost expertise. Mining and mineral engineering education programs now produce only a few hundred graduates per year, well below the levels of past decades. The number of accredited programs has declined since the 1980s. Many faculty members are nearing retirement.

Industry projections estimate that the mining workforce will need to grow significantly in the coming years to meet rising demand. Specialized skills in areas such as rare earth separation, metallurgical testing and environmental systems design require years of training and practical experience. And while mining can produce high-paying jobs, the industry also has a reputation for environmental damage and hazardous conditions.

Environmental compliance is part of the skill set

Processing critical minerals is a dirty industry. That fact has made it more difficult for processing and refining companies to operate in the U.S.

For example, separating rare earth elements typically involves chemical processing with acids and solvents. When waste streams are poorly managed, these processes can produce toxic wastewater and air pollution and contribute to soil erosion. In parts of China where rare earth production expanded rapidly in the 1990s and 2000s, contamination from mining and processing has polluted rivers and damaged nearby farmland, and the wastewater can seep into soil and groundwater.

In the U.S., modern facilities must meet strict federal and state standards for air quality, water discharge and waste management that raise the cost of processing. These regulations were developed in response to environmental disasters, like the Cuyahoga River fire of 1969, when industrial oil and waste on the river burned, and hazardous waste crises like the Love Canal disaster that led to landmark environmental laws.

Operating a refinery or separation facility in compliance with regulatory standards today requires expertise in pollution control, waste treatment and sustainable process design. That requires a workforce skilled in materials science and engineering and with knowledge of environmental systems. Without environmental expertise, operational risks, regulatory challenges and project delays can increase, affecting long-term viability.

How to build a US supply chain

Rebuilding U.S. supply chains will require more than expanding extraction.

Canada’s critical minerals strategy offers an example. It connects mining projects to battery and electric vehicle manufacturing by funding processing facilities, developing regional supply chain hubs and investing in workforce training programs tied to those industries.

Australia has combined critical minerals policies with incentives and public financing to encourage domestic mineral processing, while also expanding university and vocational training in mining, metallurgy and mineral processing.

The United States has many of the key ingredients needed to rebuild its processing capacity, including research universities and workers with transferable industrial skills. Land-grant and technical universities could expand programs that integrate mining, materials science, environmental restoration and recycling. In regions such as Appalachia, where coal’s decline has left workers with skills but few job opportunities, retraining programs for new mineral recovery jobs could help people transition to a new industry.

A few federal programs support parts of this transition, including research hubs that develop new extraction and processing technologies, apprenticeship initiatives and university-industry partnerships. However, these efforts are spread across multiple agencies, with limited coordination to align priorities and investment.

The real bottleneck

America’s critical minerals strategy is often discussed in terms of geology and geopolitics – where resources are located and who has access to them.

But supply chains depend on people and systems. That’s America’s real bottleneck in creating a domestic supply chain.

A successful domestic supply chain will require workers who know how to separate neodymium from praseodymium, operate solvent extraction circuits and maintain hydrometallurgical plants within regulatory standards. These are highly specialized skills that take years to develop.

The United States has significant mineral resources and growing policy support. Now, it needs to pay attention to the workforce and industrial capacity needed to transform those resources into usable materials.

This gap developed over decades. Addressing it will likely require sustained investment alongside broader mineral policy changes such as permitting reforms and investment in domestic processing facilities.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

When you don’t have the facts, argue the law: How Trump’s EPA is limiting its own ability to protect public health far into the future

The Trump administration is trying to tie the hands of future administrations when it comes to regulating pollution, including greenhouse gas emissions. Chris Sattlberger/Tetra Images via Getty Images

As the Trump administration moves to weaken America’s air pollution rules, it is deploying new legal interpretations that are intended to tie the hands of future administrations for years to come.

In practice, the changes limit the Environmental Protection Agency’s authority under the Clean Air Act. The result allows EPA officials to ignore science, data and the adverse effects their decisions will have on public health and the environment.

But the new interpretations are also designed to apply not just to the rule in which they are first set forth but into the future.

If affirmed by the U.S. Supreme Court in inevitable legal challenges, these interpretations could make it harder for future administrations to restore the public health protections that the Trump administration eliminates. They could also make it difficult to update rules to respond to new information about health risks.

Typically, moves to weaken pollution regulations through novel legal interpretations would have a good chance of being overturned in court. But the EPA’s new interpretations are strategically designed to appeal to the current U.S. Supreme Court’s view of federal agencies’ authority, especially in light of the court’s 2024 ruling in Loper Bright v. Raimondo. In that case, the court overturned what’s known as the Chevron doctrine. A 1984 Supreme Court ruling had established that courts should defer to executive agencies’ legal interpretations of their governing statutes when the text of the law was ambiguous or left gaps. That deference no longer applies.

As a former EPA appointee who helped write and review dozens of regulations under the Clean Air Act during the Obama and Biden administrations, I find these efforts to prevent the EPA from doing its job of protecting public health and the environment to be alarming. Here are two examples of how the new interpretations are playing out.

Blocking future climate regulations

In February 2026, the EPA rescinded its 2009 endangerment finding, a determination under the Clean Air Act that carbon dioxide and five other greenhouse gases “may reasonably be anticipated to endanger public health or welfare” because they contribute to climate change.

The endangerment finding was the scientific and legal basis for EPA rules requiring automakers, power plants and oil and gas operations to cut their greenhouse gas emissions. Erasing it would make it easier for the Trump administration to eliminate greenhouse gas regulations.

Rather than try to challenge the science of climate change, which would be difficult given the growing mountain of evidence, the Trump EPA relied on legal arguments that were intended to dispense forever with the EPA’s ability to regulate greenhouse gas pollutants under the Clean Air Act.

President Donald Trump and U.S. Environmental Protection Agency Administrator Lee Zeldin arrive for a White House event to announce a rollback of the 2009 Endangerment Finding on Feb. 12, 2026. Anna Moneymaker/Getty Images

Among the administration’s numerous arguments, two stand out:

First, the Trump EPA says the Clean Air Act should be read to limit the EPA’s authority to regulate air pollution only if its harm to the public is “through local or regional exposure.”

That would mean contributions from U.S. sources to global air pollution, no matter how demonstrable or how much they endanger Americans, are not covered by the Clean Air Act.

Second, the Trump EPA says that reducing greenhouse gas emissions from motor vehicles and engines would be “futile.” It points to global climate modeling suggesting that these reductions would not meaningfully reduce the harm to public health and welfare.

What that argument fails to mention is that actions by people around the world to reduce emissions across different sectors add up. Motor vehicles are the No. 1 contributor to U.S. emissions. If this sector is too small to regulate, then nothing is big enough.

Each of these interpretations is contrary to positions that the EPA took in the original endangerment finding, which the D.C. Circuit Court of Appeals upheld in 2012.

Allowing more toxic air pollutants

A second example involves the EPA’s proposal on March 17, 2026, to weaken pollution restrictions on businesses that sterilize medical equipment using ethylene oxide, a known carcinogen.

In that proposal, the EPA is also changing a legal interpretation in a way that would constrain the agency’s ability to protect human health into the future, this time from emissions of toxic air pollutants.

The Clean Air Act, under Section 112, establishes a methodical program for the EPA to regulate industries that emit significant quantities of air pollutants that can cause cancer, birth defects, genetic mutations or neurological harm, or harm reproductive health.

The EPA reviews how facilities control their emissions and sets standards that require all facilities to meet what the best-controlled sources are doing. But Section 112 has an important provision called “residual risk” review: Eight years after the EPA sets the first technology-based standards, it must determine whether the public health risk posed by emissions from the facilities after controls are added is acceptable.

In 2024, the EPA updated its hazardous air pollution rule for facilities that use ethylene oxide to sterilize medical equipment sensitive to steam heat, such as devices containing plastic, rubber or electronic components. Because recent research showed that ethylene oxide posed a much higher risk of cancer than previously thought, the EPA also updated its 2006 residual risk finding and required additional safeguards.

The Trump EPA is now arguing that the agency can assess residual risk only once, even if more recent information shows that the health risk is unacceptably high.

By constraining its own authority, the EPA is withholding standards that would protect thousands of people from a higher risk of cancer. It is also creating a legal precedent that will justify weakening other standards. Those include standards for chemical manufacturing facilities that the Biden EPA updated in 2024 through residual risk review.

That precedent would also prohibit the EPA in the future from taking into account new information about the health effects of any regulated hazardous air pollutant from any type of industry the EPA regulates under Section 112 of the Clean Air Act, including petroleum refineries, chemical manufacturing and paper mills.

Arguing the law

These rules are just two examples of the administration’s “if you don’t have the facts, argue the law” approach.

If the administration’s strategy works, the American public may be living, and dying, with the consequences of these industry-friendly regulations for years to come.

The Conversation

Janet McCabe is a volunteer with the Environmental Protection Network and has held several appointed positions at the United States Environmental Protection Agency. Consistent with the Indiana University Statement of Policy on Institutional Neutrality, the comments contained in this communication are solely my views and are not intended to be construed, and shall not be construed, as the views of Indiana University or comments made on behalf of or by Indiana University.

We studied what happened when financially struggling artists received $1,000 a month, no strings attached, for 18 months

A few commissions, contracts or cancellations can dramatically change an artist's annual earnings. Hyoung Chang/The Denver Post via Getty Images

Though artificial intelligence is making it easier than ever to produce images, music and text, the technology is also making it harder for the people who have traditionally produced this work to earn a living.

A photographer who once was commissioned to make art for an advertising campaign is now competing with graphics produced by the AI image generator Midjourney. A novelist who used to make money on the side as a technical writer is seeing that work replaced by a series of prompts in ChatGPT.

The extent to which AI will upend creative work remains unsettled. But that uncertainty has made guaranteeing income for creatives a more viable policy idea.

In fact, creatives in New York recently participated in the largest basic income program for artists in U.S. history, the Guaranteed Income for Artists initiative.

Spearheaded by Creatives Rebuild New York and primarily funded by the Andrew W. Mellon Foundation, the program gave 2,400 artists across New York state US$1,000 a month beginning in June 2022. There were no work requirements and no restrictions on how the money could be spent. The program sought to improve the financial stability of artists and encourage the public to see them as workers who deserve a stable income and social support.

As researchers who study artists, cultural work and public policy, we evaluated this program to see whether it achieved its stated goals. Our main finding was simple: Artists did not stop working. Instead, they changed the kind of work they did.

Cash buys time

Artists often make choices that look strange in standard economic models, which typically assume workers will prioritize higher wages while balancing work against leisure time.

Artists, on the other hand, may stay in poorly paid, unstable arts work, even when other work pays more. Economists have long described this as a “work-preference” model. Put plainly, they argue that artists get value from the work itself, not just from the paycheck.

The guaranteed-income program, which was geared toward low-income artists, offered a rare chance to see how a financial cushion would influence the kind of work they focused on, along with their overall earnings.

The program selected artists through a weighted lottery. It adopted an expansive definition of “artist.” Anyone engaged in artistic, cultural or community-centered creative practices – such as musicians, storytellers or muralists – was eligible to apply. However, it excluded commercial workers like wedding photographers or food caterers.

Our analysis, which is forthcoming in the Journal of Cultural Economics, compared artists who received payments with applicants who hadn’t been selected.

For purposes of the study, artists broke down their work time into “artistic/cultural practice(s),” “other arts work” and “non-arts work.” The work didn’t necessarily have to involve a paycheck or stipend; it could simply mean time spent on a personal artistic pursuit. However, it’s safe to assume that “non-arts work” usually involved some sort of side job to earn extra money.

The results lined up almost exactly with what the work-preference model predicts. Artists who received the payments spent about 3.9 more hours per week on arts work than comparable artists who did not receive the payments. They also spent about 2.4 fewer hours per week on non-arts work.

Opponents of basic income programs often argue that recipients will become less motivated to do any work whatsoever. That isn’t what happened, though. The money helped artists move time out of work they were doing mainly to survive, and into the creative work they preferred.

Earnings told a messier story

The earnings results were more complicated.

Artists receiving the monthly payments earned significantly less from non-arts work. That makes sense, given that many of them switched away from non-arts work. But total earnings from all work also fell by about $11,600 a year on average, close to the $12,000 annual value of the cash payments.

But we cannot confidently say that the basic income program reduced total earnings by that amount. That’s because artists’ incomes are so volatile: A few commissions, contracts, sales or cancellations can dramatically change what artists earn in a given year. Income varied widely among both the artists who received monthly payments and the applicants who hadn’t been selected, which made it hard to see the precise cause and effect of the program on total earnings.

A young man folding his arms is visible through a mirror, which shows a room with walls filled with photographs and other imagery.
A New York City artist participates in Bushwick Open Studios, an annual event when hundreds of neighborhood artists open their workspaces to the public. Andrew Lichtenstein/Corbis via Getty Images

The program may have given artists enough financial room to stop chasing some non-arts income, but it did not raise their overall income above where it had been.

That is a very different policy effect than “more cash equals more income.” It is closer to “more cash equals more control over time.”

A lesson beyond the arts

The findings do not mean that guaranteed income is the right policy for everyone. Artists are unique. Many have strong reasons to keep doing creative work even when it pays poorly.

The study also took place after the COVID-19 pandemic, while the arts and entertainment sector was still recovering. And Creatives Rebuild New York’s guaranteed-income program was a temporary, one-time opportunity.

A longer-term follow-up could show whether these shifts lasted. Did artists keep making more art after the payments ended? Did the extra time they spent on their own artistic pursuits lead to new work, new income or more stable careers? Those questions remain open.

But to us, the most important lesson may be that work is not one thing.

A monthly cash transfer can reduce one kind of work while increasing another. It can lower earnings from gigs people take mainly to pay the bills, while freeing up time to spend on work that is meaningful, socially valuable or personally sustaining.

For artists in this program, $1,000 a month did not buy a vacation or a chance to slack off. It bought time for work they valued more.

That distinction matters, particularly as debates over the use of basic income policies grow alongside advances in AI and automation. The question is not only whether people work when they receive cash with no strings attached. It is what kind of work becomes possible when financial pressures ease.

The Conversation

Joanna was previously funded by Creatives Rebuild New York to conduct an independent evaluation of their Guaranteed Income for Artists program.

Doug Noonan receives funding from the National Endowment for the Arts. He previously was funded by Creatives Rebuild New York to conduct an independent evaluation of their Guaranteed Income for Artists program.

How does your brain decide between the road not taken and the same old route? Resolving conflicting memories is key to navigation

Which route should you take? The familiar or the unknown? francescoch/iStock via Getty Images Plus

When was the last time you paid attention to your commute? And I don’t mean a couple of feet in front of you, at the car merging into your lane without a blinker. I mean really paid attention to the route you take.

Did you see the landmarks in the distance that make up the city skyline? Did you drive right past the grocery store you promised to stop at, on the corner of this Peachtree Street or that Peachtree Street – a struggle Atlanta locals know well?

“Oops! Force of habit,” you might say to yourself as you miss your turn and begin to think about when and where you can turn around.

Relying on familiarity can either facilitate or impede daily navigation. As a researcher studying memory and navigation, I aim to understand how the brain supports spatial navigation and what happens if the cognitive mechanisms for choosing the best route home begin to decline, such as during stress or with aging.

Humans are creatures of habit – at least that’s what people tell themselves when wary of trying something new. But what if a new route is faster or safer than the one you usually take? Would you try it?

Research from my team suggests that people balance between exploration and habit – that is, trying something new or sticking with the familiar – when deciding what route to take. Which navigation strategy someone chooses depends not only on their spatial abilities but on their network of brain regions that support navigation.

Close-up of side view mirror reflecting city skyline and other cars on the road
When was the last time you paid attention to the scenery of your usual commute? Boonchai Wedmakawand/Moment via Getty Images

A spatial blueprint

Spatial navigation refers to the cognitive ability that helps you travel from one location to another. It may sound simple, but it requires using cognitive functions such as memory, attention, decision-making and assessing potential rewards – never mind the ability to simply perceive the environment itself.

Spatial navigation uses memories of things you consciously experienced. Two types of memory relevant to navigation are what scientists call episodic and semantic.

For example, you might retrieve an episodic memory about a specific event: remembering a detour you took a week ago to drop a package off at the post office, including the traffic and weather that day.

You might also retrieve a semantic memory that’s more factual and knowledge-based: remembering how many blocks away the post office is from the park and the turns you need to make to get there.

Together, these kinds of memory inform your spatial memory, which allows you to retrieve location information. This could be where buildings are in relation to each other or where objects are situated in your house. Spatial memories help form your cognitive map, which is essential for getting around in the world.

Often, these different ways of remembering interact, and you can use one type of memory to inform the other. For example, you’ve become accustomed to your commute to work and know it’s relatively short (semantic memory), but over the past three days you’ve been arriving late due to heavy traffic (episodic memory), so you choose to take a different route next time.

Research from my team has found that disagreements in your brain over possible routes can happen. Different types of memory can come up with different solutions for what route you can take, and this conflict is a big factor in how hard your brain needs to work when navigating an environment.

Responding to new and familiar memories

Habits stem from what researchers call stimulus-response memories. These include the knee-jerk reaction you might have to familiar landmarks – when you perceive these places, your brain signals you to make a turn along your commute without needing to consciously think about it.

Habits are rigid, but they can also be beneficial: By taking care of the navigation for you, habit frees up your brain to have a conversation with someone or plan what to make for dinner when you get home.

When navigating less familiar routes or environments, where habit doesn’t kick in automatically, you rely on brain regions such as the hippocampus to call on detailed memories from recent experiences to help guide the way.

Aerial view of a busy intersection in a city, crowds of people milling about and buildings lit with animated billboards
When visiting a new city, you might rely on your existing mental map of urban environments. Francesco Riccardo Iacomino/Moment via Getty Images

But let’s say you’re shopping at a new grocery store where most things are where you expect them to be, even though you’ve never been in this particular store before. What happens when your brain experiences both something new and something familiar about an environment?

Researchers have shown that when something about an environment is familiar and aligns with your prior experiences, the prefrontal regions of your brain – those responsible for executive functions such as decision-making – become more active. They can bypass or even inhibit your hippocampus’s ability to form new memories about specific events.

In other words, your brain can weave information about a new experience into your database of existing knowledge, rather than storing it as completely new information with little relation to the past. This process may help fast-track your understanding about new experiences.

Updating cognitive maps

Researchers know that cognitive maps of the environment depend on the hippocampus and its database of memories about specific events. However, I and other researchers argue these maps can also function as a schema – a collection of memories made up of associations between environmental details. You can add new information to these collections and use it to infer new relationships.

Say a new pedestrian bridge is built between the park and the post office. Your brain can more easily weave this new route information into your existing memories compared with learning a new environment from scratch. Similarly, if you just moved to a new town and know very little about the spatial layout, you might rely on your past experiences of towns to infer where something is.

Schemas help you interpret and incorporate new information more quickly.

Using neuroimaging techniques as well as virtual reality programs designed to test a participant’s ability to navigate different routes, my team found that there is likely an interdependent relationship between the brain areas that store memories of specific events and areas that store related information across memories when planning to navigate less familiar places.

New routes are more difficult to follow when they differ from your prior experiences. Thus, a stronger schema helps integrate your knowledge of the spatial relationships between locations and landmarks (such as the distance between the post office and the park) with more general knowledge (such as prior route difficulty). This all informs how you choose to navigate.

Navigating daily life

These memory principles help explain why inconsistencies with your previous experiences can make it so difficult to navigate many aspects of daily life.

Imagine you woke up tomorrow and the GPS on your smartphone was no longer available. How will you plan your route to get to your destination?

You might be used to navigating north from your home to the grocery store – but have you ever tried to navigate to that grocery store from a different location? It’s much harder!

Factors such as stress, aging and general cognitive decline can affect brain function and human behavior. Imagine how much harder that new route to the grocery store is for an older adult.

Relating new information to your prior experiences may help strengthen your schema and make navigation easier. And understanding what processes the brain needs to go through to solve these navigation problems can help you understand why getting around can be challenging.

The Conversation

This work was supported in part by grants from the National Institute on Aging of the NIH.

How AI can lead to false arrests and wrongful convictions

AI algorithms such as facial recognition systems produce probabilities, not facts. Matthew Horwood/Getty Images

In Baltimore on Oct. 20, 2025, a 17-year-old student named Taki Allen was sitting outside his high school after football practice when an artificial intelligence-enhanced surveillance camera falsely identified the Doritos bag in his pocket as a gun. Within moments police cars arrived, officers drew their weapons and Allen was forced to his knees and handcuffed while they searched him. All they found was a crumpled bag of chips. The AI’s misidentification and the human decisions that followed turned a normal evening into a traumatic confrontation.

On Dec. 24, 2025, Angela Lipps, a Tennessee grandmother, was released after spending five months in jail because facial recognition software had incorrectly connected her to fraud crimes in North Dakota, a state she had never visited. Police had arrested her at gunpoint while she was babysitting her four grandchildren.

These are unfortunate examples of how AI can lead to mistreatment of people because of technical flaws as well as misplaced human faith in the technology’s supposed objectivity. These cases involve different tools, but the underlying issue is the same. AI systems produce probabilities, and people treat them as certainties.

We are researchers who study the intersection of technology, law and public administration. In researching how police departments use AI and how digital technologies operate in a democratic society, we have seen how quickly the shift from probabilistic prediction to operational certainty happens in practice.

AI policing tools are used in dozens of U.S. cities, although no public registry tracks the full footprint. The tools ingest historical crime data and score neighborhoods on predicted risk so officers can be routed toward the resulting hot spots. The mechanism is straightforward, but its consequence is not. Once a system signals a possible threat, the question is no longer how certain the prediction is but what to do about it. A statistical output turns into a deployment decision, and the uncertainty that produced it gets lost on the way.

A matter of probabilities

When generative AI models such as ChatGPT or Claude respond to human requests, they are not searching a database and pulling out facts. They are predicting the most likely answer based on patterns in data they have been trained on. When asked, “Who invented the light bulb?” the models do not go to a source or fact-check a finding. They generate a statistically probable answer, which in this case is “Thomas Edison.” The reply might be right, but it might not capture the full story – such as Joseph Swan’s parallel invention at the same time as Edison’s. The danger arises when people believe that the model is retrieving truth rather than generating likelihoods.

This distinction matters. The most probable response is not the same as a factually verified answer, complete with context.

Police handcuffed teenager Taki Allen at gunpoint after an AI camera system incorrectly indicated he had a gun.

This reality can be highly problematic for policing and law. For example, when law enforcement agencies use AI systems trained on geographical data to estimate where criminal activity is likely to occur, the algorithms analyze historical crime data and geographic patterns. These systems generate statistical risk scores or heat maps for locations based on prior incidents. But such predictions may have little bearing on who was involved in a new crime in the area, even if an algorithm generates information that sounds authoritative.

Some researchers have argued that predictive policing systems do not increase the likelihood that racial minorities will be arrested more often relative to traditional policing practices. The broader concern, however, is not limited to measurable disparities in arrest outcomes alone. It is about how probabilistic predictions can become standardized operational decisions absent further verification.

Artificial intelligence researchers caution against using these models in isolation for crime and legal proceedings or decision-making. Research at the University of Virginia’s Digital Technology for Democracy Lab with police chiefs shows that some law enforcement groups follow strict policies that dictate when technology is used in tandem with, or in place of, human discretion, while others have no such policy.

What most users do not realize is that AI systems rarely produce binary answers: yes or no, a positive identification or a negative one. They generate probabilities. Some systems assign scores that assess the system’s confidence in a prediction. In those cases, engineers set a confidence threshold, a level of certainty that determines when the system should trigger an alert about a possible threat. You can think of this threshold as settings on a control knob. A 95% confidence level, for example, indicates that the model considers its interpretation to be highly likely.

A low threshold catches more potential threats but increases false alarms. A high threshold reduces mistakes but risks missing real dangers. Either way, these algorithmic thresholds are often invisible to the public and are set quietly by vendors or agencies, even though they shape when police action begins.
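The mechanics of that trade-off can be shown with a minimal sketch. Every number below is invented for illustration – real detection systems score far more events with far more complex models – but the arithmetic of moving the threshold is the same.

```python
# A minimal sketch of how a confidence threshold turns probabilistic
# scores into alerts. All scores and labels here are hypothetical.

def count_errors(scores, is_real_threat, threshold):
    """Count false alarms and missed threats at one threshold setting."""
    false_alarms = sum(1 for s, real in zip(scores, is_real_threat)
                       if s >= threshold and not real)
    missed = sum(1 for s, real in zip(scores, is_real_threat)
                 if s < threshold and real)
    return false_alarms, missed

# Hypothetical detector outputs: a confidence score for each event,
# and whether a real threat was actually present.
scores         = [0.55, 0.62, 0.71, 0.80, 0.91, 0.97]
is_real_threat = [False, False, True, False, True, True]

print(count_errors(scores, is_real_threat, 0.60))  # permissive: (2, 0)
print(count_errors(scores, is_real_threat, 0.90))  # strict: (0, 1)
```

The permissive setting produces two false alarms but misses nothing; the strict setting eliminates false alarms but lets one real threat through. Which pair of numbers is acceptable is a policy choice, not a property of the code.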

Angela Lipps was unjustly jailed for more than five months based on a mistake by a facial recognition system.

Where to draw the line

In medicine, these kinds of trade-offs are explicit. Diagnostic tools are calibrated on the relative harm of different errors. In infectious disease settings, for instance, systems that detect infections are often designed to accept more false positives to avoid missing contagious individuals. Then medical professionals look into the human cases. And the algorithm-based decisions are subject to professional standards, ethics reviews and regulatory oversight.

In policing, an AI system must balance false positives, where the system flags a threat that does not exist, and false negatives, where it fails to detect a real danger. The trade-off carries significant consequences. A lower threshold may generate more alerts and allow officers to intervene earlier, but it also increases the risk of mistaken identifications, which happened to Angela Lipps, or escalated encounters like the one Taki Allen experienced. A higher threshold may reduce wrongful interventions but could allow legitimate threats to go undetected.

Some law enforcement agencies argue that acting on imperfect signals is preferable to missing serious risks. But lowering the bar for algorithmic alerts based on probabilistic estimates effectively expands the number of people subjected to police attention. It is important to realize that these thresholds are not neutral features of the technology; they are choices embedded by the creators in the model’s code. Decisions about where to draw the line determine when an algorithmic suspicion becomes a real-world police action, even though the public rarely sees or debates how those thresholds are set.

Limits of optimization

Developers often use several methods to determine where to set a confidence threshold. Techniques such as “receiver operating characteristic curve analysis” examine how changing the threshold for an alert alters the balance between correctly identifying real events and mistakenly flagging harmless ones. Precision–recall analysis examines a similar trade-off, asking how accurate the system’s alerts are relative to the number of incidents it successfully detects.

These approaches could help calibrate systems more responsibly by testing how often an algorithm wrongly flags people or locations. Fine-tuning can improve system performance. But the techniques cannot resolve the underlying question of how much algorithmic uncertainty society is willing to tolerate.
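The idea behind a precision-recall sweep can be illustrated with a toy calculation. The scores and labels below are hypothetical; the point is only how the two measures pull against each other as the threshold shifts.

```python
# Toy illustration of a precision-recall sweep over thresholds.
# All numbers are hypothetical; real systems score thousands of events.

def precision_recall(scores, labels, threshold):
    """Precision and recall of the alerts fired at a given threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    precision = tp / (tp + fp) if (tp + fp) else 1.0  # how often alerts are right
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # how many threats are caught
    return precision, recall

scores = [0.20, 0.40, 0.60, 0.80, 0.95]    # detector confidence per event
labels = [False, True, False, True, True]  # was a real threat present?

for t in (0.30, 0.50, 0.90):
    p, r = precision_recall(scores, labels, t)
    print(f"threshold {t:.2f}: precision {p:.2f}, recall {r:.2f}")
```

In this toy data, the low threshold catches every real threat but one alert in four is wrong, while the high threshold is always right but catches only one threat in three. No setting on the dial removes the trade-off; it only relocates it.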

In law, legal standards of proof determine how convincing evidence must be before a judge or jury can rule in favor of a plaintiff or defendant. Courts use formal standards of proof depending on the stakes, such as probable cause, preponderance of the evidence and beyond a reasonable doubt. These standards reflect a societal judgment about how much uncertainty is acceptable before exercising legal authority. A court does not accept a guess or a prediction; it follows a process to weigh evidence. Unlike humans, an AI model does not usually say, “I’m not sure.” A model typically has confidence in its reply, even when the answer is incorrect.

Stakes are rising as AI enters the courtroom, law enforcement, the classroom, the doctor’s office and the public sector. It is important for people to understand that AI does not know things the way many assume it does. It does not distinguish between “maybe” and “definitely.” That is up to us. We believe that technologists should design systems that admit uncertainty and need to educate users about how to interpret AI outputs responsibly.

The Conversation

Maria Lungu is affiliated with the Digital Technology for Democracy Lab at the University of Virginia, Kennesaw State University, and the Center for AI and Digital Policy (CAIDP).

Steven L. Johnson is affiliated with the Digital Technology for Democracy Lab at the University of Virginia.

Delta-8, delta-9, THCA? What sets the different THC forms available in regulated cannabis products apart

Commercially available THC products are displayed at a dispensary in New York. AP Photo/Angelina Katsanis

Hemp products have exploded across the United States, even in the majority of states where recreational marijuana remains illegal. This surge came after the 2018 Farm Bill removed hemp from the Controlled Substances Act and made cannabis products derived from hemp, defined as those containing less than 0.3% delta-9 tetrahydrocannabinol – commonly known as THC – legal. But the types of THC products available and the regulations around them, which vary by state, can be confusing.

A common question I get as a chemist is about the differences between the various delta THCs, and about the actual amounts of THC in the available products. There’s delta-8, delta-9, delta-10 and THCA. The amounts of THC in legally infused drinks and edibles also vary, with products most often containing 5 or 10 milligrams.

Knowing the difference between these compounds, and how much THC is in what you’re buying, goes a long way toward making informed choices as a consumer.

THCA and delta-9 THC

THC compounds are a subset of cannabinoids, which include any compound that interacts with the cannabinoid receptors in your body. THC is technically a family of compounds including delta-8, delta-9 and delta-10 THC, which all have similar chemical structures and are psychoactive – meaning they can alter your mood and perception and produce a “high.”

However, not all cannabinoids are psychoactive. For example, cannabidiol, or CBD, interacts with the same receptors, but through different mechanisms, so it does not produce a high.

Delta-9-tetrahydrocannabinolic acid, or THCA, is the major cannabinoid found in the cannabis plant. THCA itself does not produce a high, however. It first needs to undergo a chemical reaction that generates a psychoactive compound: delta-9 THC.

These two compounds have different chemical structures. THCA has an extra group of atoms attached that must be removed to produce delta-9 THC. Under heat, this group breaks away from the rest of the compound, creating delta-9 THC. So, when the plant is burned or cooked, THCA transforms into delta-9 THC.

The 2018 Farm Bill measured only the delta-9 THC – not THCA – present in a hemp plant. So a hemp plant could have, say, 25% THCA and only 0.2% delta-9 THC and still be legal, as it has less than 0.3% delta-9 THC. But as soon as you heat it, the THCA will convert to psychoactive delta-9 THC.

However, in November 2025, the Agriculture Appropriations Act redefined hemp by limiting the total THC, including THCA, to 0.3% on a dry weight basis.

Changing regulations

This new rule will go into effect in November 2026 and significantly affect the potency of smokable hemp products. In the plant itself, the cannabinoids make up a large percentage of the flower’s dry weight. High-potency cannabis strains have THCA concentrations from 20% to 30% by dry weight – far above the 0.3% total THC threshold. This redefinition would effectively render the majority of these products illegal under federal law.

The math for edibles like gummies and seltzers is different, so the dry weight rule alone does not affect these products.

Consider a 12-ounce THC-infused drink: The total dry weight of the product would only need to be about 3.3 grams per 10 milligrams of delta-9 THC – a common higher-end dosage – to fall at exactly the 0.3% threshold. A 12-ounce can of seltzer weighs around 355 grams, so 10 milligrams of delta-9 THC in a 12-ounce drink easily passes the weight threshold.

Even a very small edible like a gummy easily meets this weight threshold. For instance, a single Starburst candy weighs 5 grams, well above the 3.3-gram minimum needed for a 10-milligram dose to be under the 0.3% limit.
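The arithmetic behind these examples is simple enough to check directly. The short calculation below uses only the figures given above – the 0.3% dry-weight limit, a 10-milligram dose and a 355-gram can.

```python
# Checking the dry-weight arithmetic from the examples above.
# The 0.3% limit and the 10-milligram dose come from the text.

THC_LIMIT = 0.003  # 0.3% total THC by dry weight

def min_legal_weight_grams(dose_mg):
    """Smallest product weight at which a given THC dose sits at 0.3%."""
    return (dose_mg / 1000) / THC_LIMIT  # milligrams to grams, then divide

print(round(min_legal_weight_grams(10), 2))  # 3.33 grams

# A 355-gram can of seltzer is roughly 100 times that minimum,
# so a 10-milligram dose is far below the dry-weight limit:
thc_fraction = (10 / 1000) / 355
print(f"{thc_fraction:.4%}")  # about 0.0028%
```

Any product heavier than about 3.3 grams keeps a 10-milligram dose under the dry-weight threshold, which is why a percentage rule alone cannot constrain drinks and edibles.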

To close this loophole, the new law adds a separate rule: Any final hemp-derived product containing more than 0.4 milligrams of THC per container is no longer legal. That’s well below a single dose of any commercially marketed THC beverage or edible.

However, the debate isn’t over. Lawmakers introduced amended legislation in April 2026 that would give states autonomy in hemp regulation as opposed to a blanket federal ban.

What about delta-8 and delta-10 THC?

Delta-8 and delta-10 THC are what chemists call isomers of the delta-9 THC. They have the same chemical formula but different chemical structures. It’s hard to even tell the difference looking at the molecules. One of the double bonds just shifts its position by one spot in the ring.

Like delta-9, delta-8 and delta-10 THC are also psychoactive and bind cannabinoid receptors in the body in a similar way.

While they do occur naturally in cannabis plants, the concentrations are far lower than those of THCA and delta-9 THC. For commercial products, they must be produced synthetically, which has raised concerns about chemical contamination from manufacturing.

Some evidence suggests that these alternate forms are less potent than delta-9, but scientists will need to conduct more research to determine whether that’s true.

These compounds fell outside the original calculation in the 2018 Farm Bill, which limited only delta-9 – effectively acting as another loophole. But the recently proposed total THC standard closes it by accounting for all types of THC. State legislation still varies substantially when it comes to hemp-derived products.

In April 2026, the Trump administration rescheduled medical marijuana from Schedule I to Schedule III. This move could potentially add to the regulatory confusion, but it will lower research barriers and help scientists address basic questions about THC’s potency, how the body metabolizes it and its therapeutic potential.

Underlying all these complex debates around the legality of hemp versus marijuana and recreational versus medical uses at the state and federal levels lies a single molecule: delta-9 THC.

The Conversation

Aaron W. Harrison does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Why did ‘Tyrannosaurus rex’ have such short arms?

Teeth? Big. Arms? Not so much. William_Potter/iStock via Getty Images Plus

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to CuriousKidsUS@theconversation.com.


What did the T. rex use its little arms for? – Aurora, age 11, Pemberton Township, New Jersey


One of the most famous dinosaurs to ever roam across Earth, Tyrannosaurus rex, has filled people’s minds with wonder since the first skeleton was discovered in the early 1900s.

Scientists believe T. rex, or King of the Tyrant Lizards, as its name translates, was a fearsome predator. An adult T. rex was massive – approximately 40 feet (12 meters) long and 20 feet (6 meters) tall, weighing as much as an African elephant. Each of its enormous sharp teeth could be nearly a foot (0.3 meters) long from root to tip.

I’m a paleontologist, and I use fossils to study how animals lived and evolved over long periods of time. One of the coolest things about being a paleontologist is that there are always new questions to ask and new things to learn – even about a super-well-known dino like T. rex, which went extinct just over 65 million years ago.

One T. rex mystery has to do with this giant predator’s relatively tiny arms. Why would it have arms so short that it couldn’t even reach its own mouth? How did it use them?

How ‘short’ is short?

First, let’s define what we mean by “short.”

The biggest T. rex could measure 45 feet (14 meters) from the snout to the tip of the tail, but their arms were only about 3 feet (1 meter) long. On average, a T. rex’s arms were just about 30% of the length of its legs.

In comparison, humans have, on average, arms around 66% of the length of their legs. If people had the same arm proportions as a T. rex, a 6-foot (1.8 meters) tall person would have arms only 10 or 12 inches (25 to 30 centimeters) long!

T. rex isn’t the only dinosaur with such short arms. The evolutionary trend toward shorter arms in theropods – the larger group of meat-eating, two-legged dinosaurs that T. rex belongs to – happened multiple times. Similar to how wings separately evolved in different animals – like birds and bats – traits can emerge many times in evolutionary history.

You can see the shortening of T. rex arms as a pattern in its family tree, as earlier relatives had proportionally longer arms.

Lots of schoolchildren gathered around a T. rex skeleton on display in a museum
Fossil skeletons of Tyrannosaurus rex make clear that the dinosaur itself was very big, even if its arms were proportionally small. John Zich/AFP via Getty Images

How did they use their mini-arms?

Short arms don’t seem to have been a problem for these mighty dinosaurs. T. rex was a successful carnivorous species that existed for over a million years. They went extinct only when an asteroid hit the Earth, causing a global mass extinction.

Scientists have suggested a few ideas that might explain how T. rex used their arms. Maybe the arms were used as some kind of social display that could impress other T. rex – kind of like the bright feathers of a peacock that can attract potential mates.

But male and female T. rex skeletons don’t show the major differences that paleontologists would take as clues that they relied on social displays to attract mates. And while animal behavior can sometimes be preserved, such as in bite marks or fossilized footprints, it’s rare to have enough fossil data to draw clear conclusions.

Maybe T. rex used their arms as weapons to attack or hold down prey. But these options seem unlikely since T. rex’s huge jaws would have made contact with an enemy or prey before the short arms would have been able to reach it.

Some scientists have recently hypothesized that T. rex’s short arms were an adaptation to competition with other carnivores. If multiple predators were feeding on a carcass, one could get hurt by accidental bites or even intentional warning bites for getting too close. Shorter arms would be less likely to get chomped. Similar things occur today with territorial carnivores, like Komodo dragons.

Two Tyrannosaurus dinosaurs face off over a downed prey carcass
Scientists have suggested that in a feeding frenzy, shorter arms would potentially be easier to keep out of the way of chomps from other T. rex. Mark Garlick/Science Photo Library via Getty Images

Maybe the arms didn’t have a purpose

Another possibility is that the arms served little or no purpose at all, so over time, they became vestigial. That’s the scientific term for body parts that don’t have clear purposes anymore, but are still passed down through evolution.

One example is a whale’s hindlimbs. Whales evolved from land-dwelling mammals that had large legs to move around. The bones are still present in today’s whales, but they have gotten much smaller over millions of years and have no function.

Some scientists have suggested a different idea: T. rex’s arms may have evolved to be smaller as another body part grew larger. The fossil record reveals that arms got shorter as theropod skulls got larger across many different dinosaur groups, including T. rex. Larger skulls likely would have made it easier to hunt and eat larger prey.

Researchers can use mathematical equations to accurately predict theropod arm length if they know the animal’s skull size and the length of its upper leg bone, the femur. It turns out that larger skulls are strongly linked to shorter arms in theropods.

The reason for the change in arms, however, isn’t as clear. Some scientists have argued that the smaller arms could have helped with balance as the head got larger, but others aren’t so sure. In evolution, there isn’t always a reason why a change occurs – sometimes, changes just happen. In this case, we don’t yet know if there was a benefit for the arms to get smaller as heads got larger.

Artist's rendition of a T. rex in a misty forest.
However they got that way, small arms don’t seem to have been an issue for these big predators. Orla/iStock via Getty Images Plus

So for now, we don’t really know how T. rex used its arms or why they evolved to be so small, proportionally. As scientists find new data, we will continue to test hypotheses to better understand why this tiny-arm trend occurred so many times in theropod evolution. That’s what makes science so exciting – a future fossil discovery could be the missing puzzle piece that helps us answer these questions.

Sarah Sheffield describes – and her students act out – some of scientists’ hypotheses about T. rex arms.

Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

The Conversation

Sarah Sheffield does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

What to do if someone you know in Philadelphia or elsewhere is detained by ICE

A handout photo provided by U.S. Immigration and Customs Enforcement of a worksite enforcement operation at a car wash in Philadelphia on Jan. 28, 2025. U.S. Immigration and Customs Enforcement via Getty Images

If someone you know is detained by U.S. Immigration and Customs Enforcement, it can be incredibly challenging to find and communicate with them.

For example, it can take several days just to confirm where they are. Even after locating a loved one, it is possible to lose track of them again, as ICE regularly moves people between facilities without notice.

I’m a law professor at Temple University in Philadelphia, where I work with immigrant rights organizations on issues of ICE arrest and detention.

Here’s what we know about how and where ICE is holding people as of May 2026.

A confusing web of detention facilities

When a person is arrested by ICE, the lack of a centralized immigration detention system makes it hard to figure out where they are.

For ICE detention, the federal government can contract with counties for county jail space or execute service agreements with private prison companies. ICE also contracts with the Federal Bureau of Prisons to hold immigrants in its facilities.

Pennsylvania is no exception to this patchwork system. Four county jails – in Pike, Clinton, Cambria and Franklin counties – have contracts with the federal government to detain immigrants for ICE. Pike County, for example, received US$16 million from ICE in 2024 and 2025 for use of its jail.

Further, ICE contracts with Centre County so the county can serve as a pass-through for payment to the private prison company, the Geo Group, which runs the Moshannon Valley Processing Center. Moshannon is the largest detention center in the Northeast with 1,876 beds. This pass-through system allows the federal government to avoid the burdensome Federal Acquisition System for contractors. That purchasing system is governed by uniform policies that apply to all federal agencies that enter into contracts for services to ensure that business is conducted with integrity, fairness and transparency.

ICE pays millions of dollars each month to operate the Moshannon Valley facility.

Most recently, ICE set up contracts with two Bureau of Prisons facilities in Pennsylvania to hold immigrants: the federal detention center in Philadelphia and the federal prison FCI Lewisburg.

Over 2,000 immigrants in detention in PA

After a person has been arrested by ICE, major federal policy changes that are intended to keep people locked up or have them deported make it difficult to get that person released.

For example, ICE has issued new guidance that expands who is subject to mandatory detention without access to a bond hearing to include anyone who entered the U.S. without a visa. This policy is currently being challenged in court by the ACLU and other groups.

Additionally, ICE releases far fewer people. Under federal law, ICE has the discretion to release most people, unless they fall into a specialized category of “criminal aliens.” Previously, people were released on parole or on their own recognizance, sometimes with an order of supervision or bond.

As a result, immigration detention has reached unprecedented levels. Over 70,000 people were held in immigration detention in January 2026. As of April 2, 2026, over 2,000 people were held in immigration detention in Pennsylvania.

Crowd of people with one holding a sign that reads 'Sergio is one of us' and another holding a sign that reads 'We stand with Sergio'
Residents of Danville, Pa., hold a candlelight vigil for local business owner Sergio Chavez Jimenez after he was arrested by ICE on Dec. 27, 2025, and detained at the Clinton County Correctional Facility. Paul Weaver/SOPA Images/LightRocket via Getty Images

Isolated from family and legal advice

Once arrested, ICE detainees have a hard time contacting the outside world.

Upon arrival at a facility, they are stripped of their belongings, including their cellphone. They must pay for telephone calls to their family or get others to pay by putting money in their commissary account.

Further, ICE detention facilities are often outside of major urban areas and far from legal services and community support. Moshannon, for example, is over 100 miles from any nonprofit immigration attorneys who provide representation to people in immigration removal proceedings.

Previously, the federal government funded a Legal Orientation Program where nongovernmental legal services offered information, referrals and representation to those in detention. In 2025, the Department of Justice ended the program, justifying its termination based on the executive order entitled “Protecting the American People Against Invasion.” Section 19 of that executive order relates to reviewing, pausing or terminating contracts, grants or other agreements with nongovernmental organizations that support or provide services “to removable or illegal aliens.”

Out-of-state transfers are common

ICE’s movement of people without notice across different facilities is a long-standing practice. However, a recent UCLA study found that out-of-state transfers of noncriminal Latino detainees jumped from 18% to 55% after President Donald Trump’s reelection in 2024.

Transfers are mostly driven by ICE’s own efficiency goals in filling available bed space. Some advocacy organizations have alleged that transfers are conducted in retaliation against people who make requests or complain. Transfers are not only disorienting for the person involved but also impede communication with family and access to counsel.

How to find someone in ICE detention

Several online guides provide information about how to locate someone after an ICE arrest and how to prepare their family in case of future arrest.

Here are some key tips.

1. Use the ICE online detainee locator.

The locator requires either a person’s country of birth and alien registration number – called an “A number” – or their full name and date of birth. A person might have an A number if they have a past or present case with the government, including having applied for a green card or asylum. It can take 48 hours for ICE to enter information about the person into its database so it can be picked up by the online locator. The name must be an exact match with what was entered into the system.

Webpage of U.S. Immigration and Customs Enforcement
This online search tool can help locate an adult detainee in ICE or Customs and Border Protection custody. U.S. Immigration and Customs Enforcement

2. Contact the ICE field office.

The Philadelphia field office covers Delaware, Pennsylvania and West Virginia. If you are a noncitizen, you might want a U.S. citizen to do this for you out of an abundance of caution, because ICE records information about the person calling. Call 215-656-7164 or email Philadelphia.Outreach@ice.dhs.gov.

3. Contact the consulate.

In many instances, ICE is supposed to notify the consulate of the arrested person’s home country within 72 hours.

4. Reach out to community groups, attorneys and elected officials.

In Philadelphia, community groups such as Asian Americans United, Juntos and New Sanctuary Movement, or the statewide Pennsylvania Immigration Coalition, might be able to help you. An attorney might also be able to help you. Here is a list of nonprofit legal service providers in Pennsylvania.

Further, you can ask for help from your federal elected officials, such as your congressional representative or Sens. John Fetterman or Dave McCormick. If you have a more direct relationship with a local elected official, such as your city council member, it cannot hurt to see whether they can also help you.

How to prepare in advance

If you know someone who is at risk of arrest by ICE, you can help them prepare in advance. Tell them to:

1. Keep copies of their documents in a secure space.

This includes their A number as well as immigration documents, passport, birth certificate, marriage certificate, tax returns and any employment and medical records. If they have children, make sure to include their passports, birth certificates and medical records.

2. Memorize important phone numbers.

They should know the numbers of family members and their attorney in case their cellphone is taken from them.

3. Have an emergency plan.

A family preparedness plan includes designating a caregiver for children in case a parent or guardian is arrested. They should also consider filling out documents that may help a family member or friend to care for their children if they are unavailable because of detention or deportation. These include forms that provide temporary guardianship or custody of minor children, consent for medical care of minor children and information for the Philadelphia School District.

Philadelphia Legal Assistance provides free downloadable packets in English and in Spanish to build a family preparedness plan.

Read more of our stories about Philadelphia and Pennsylvania, or sign up for our Philadelphia newsletter on Substack.

The Conversation

Jennifer J. Lee does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Pensions for Botswana’s elderly are growing, but care services are lacking – study tracks 20 years

Botswana’s economy is projected to contract by 0.4% in 2026, driven largely by a slowdown in the diamond sector. Diamonds account for a third of fiscal revenues and a quarter of GDP. This means the government has less money to spend, even before making any policy choices.

At the same time, the government has set about reducing debt as a share of GDP by cutting expenditure to stabilise the economy. This combination is forcing difficult decisions about public spending.

A key one is investment in social protection for older people. Over the past two decades, the number of older persons aged 60+ has doubled to 279,111 people (roughly 8% of the population). In coming decades, that number is set to rise even more sharply. While this reflects important gains in life expectancy, it also presents a policy challenge: how to support an ageing population in a context of tightening public finances.

Between us, we have expertise in long-term care systems, public financing and budget analysis. Our recent study sought to tackle this question by examining how the Botswana government has funded elder care over the last 20 years.

We also obtained government data to examine how state spending on older people has evolved over time under various social protection measures. These included the old age pension, destitute programme, disability allowance and war veteran’s allowance, as well as care provision through the home-based care programme.


Read more: Botswana’s hike of old age pensions hasn’t fixed the problem of who cares for the elderly – new study


Our final report looks at how spending in 2005 compares to spending in 2024-2025, adjusted for inflation to reflect real changes in today’s value, and how these trends correspond with the growth of the older person population.

The key insight of the new report is that while Botswana has significantly expanded its old age pension system, investment in care services for older people has not kept pace.


The result is a system that provides income support but leaves many without the care they need, reflecting an underinvestment in Botswana’s care economy.

A pension success story: at a cost?

Botswana’s old age pension has long been one of the country’s most important social protection programmes. It is universal, meaning all citizens above a certain age qualify, and it has achieved broad reach across both urban and rural areas.

In 2025, the government made two major changes: it lowered the eligibility age from 65 to 60 and increased the monthly benefit.

These reforms have been widely welcomed. For many older people, the pension provides a crucial lifeline, helping to cover food, transport and other basic needs. In a country without unemployment benefits, it often supports entire households, not just individuals.

But this success comes with trade-offs.

The rapid expansion of the pension has absorbed a growing share of the broader social protection budget. This has left less room for other forms of public support, particularly those related to care.

A hidden crisis of care

Ageing is not just about income; it is also about health, disability and the need for care. As people live longer, they are more likely to experience chronic illnesses and multiple health conditions at once. This often leads to increased levels of disability and dependence.

Yet Botswana’s spending patterns suggest that these realities are not being fully addressed.

Pension coverage has expanded. But access to other support programmes has stagnated or even declined. The proportion of older persons receiving the destitute allowance has fallen significantly over the past decade, and disability support reaches only a small fraction of those who need it. And while total spending has increased, real spending per person has not.

At the same time, spending on community home-based care, a key service that supports older persons in their homes, has decreased in real terms. This is happening despite clear evidence that demand for such services is rising.

Families under pressure

Care for older people in Botswana has traditionally been provided by families. This model is under increasing strain. A previous report on caregiving indicated how the long-term impact of HIV/Aids, combined with migration and rising female employment, has reduced the availability of family caregivers.

Moreover, between 2012 and 2023, female labour force participation increased from 54.9% to 63.4%, meaning fewer women are available to provide full-time care at home.

At the same time, many households face significant economic and infrastructural challenges. Households with older people are often large and multigenerational, yet resources are limited. Nearly half report experiencing food insecurity, and many lack access to basic services such as piped water and sanitation.

In a few isolated cases there are “voluntary” carers supporting older persons. But serious questions remain about their long-term sustainability.

In rural areas, where most older persons live, these challenges are even more pronounced.

Poverty persists despite pensions

Poverty among older people remains a serious concern. Around 11.9% live in extreme poverty, and they are more likely to be poor than any other age group. One reason is that the pension is often stretched across entire households.

At the same time, access to additional assistance is limited. Programmes such as the destitute allowance and disability grant often rely on discretionary assessments by social workers. Many older persons report that these programmes are difficult to access or simply unavailable.

This points to a broader issue: Botswana’s social protection system for older people is becoming increasingly narrow, centred on a single programme while other forms of support fall away.

These challenges are unfolding in a context of fiscal austerity. As the government seeks to reduce deficits and stabilise the economy, public spending is under pressure. But cuts to social services come with risks. Botswana is already one of the most unequal countries in the world. Reductions in social protection and care services are likely to exacerbate these inequalities.

Public services are also under strain. The country faces shortages of healthcare workers and infrastructure. In this context, reducing investment in care could have long-term consequences for both social and economic development.

Rethinking social protection

The current moment calls for a shift in how social protection is understood. Rather than focusing narrowly on pensions, policymakers need to take a broader view, one that includes care as a central component. Investing in care services is not just about meeting immediate needs. It can also create jobs, support households, and contribute to economic growth. Community-based care programmes, disability support, and partnerships with local organisations all offer pathways to strengthen the system.

Across Botswana, community initiatives are already stepping in to fill the gaps. But without stronger public support, these efforts cannot meet the scale of need.

What’s needed is a more balanced approach to spending priorities, one that protects income security while also investing in the public services that enable people to age with dignity.

The Conversation

Elena Moore receives funding from the Wellcome Trust (Grant No. 225910/Z/22/Z) and the International Development Research Centre (Grant No. 110536-001).

Thokozile Madonko does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Can the assisted dying bill be brought back? It’s possible – but supporters face four challenges

Despite MPs backing proposals last year to legalise assisted dying in England and Wales, the plan did not become law. The bill failed to complete its passage through the House of Lords – not because peers voted against it, but because a relatively small number proposed an unprecedentedly large list of amendments. As a result, the bill ran out of time.

But this is unlikely to be the end of the story for assisted dying. MPs who support the change have called for the bill to be brought back in the new parliamentary session, which begins on May 13. They have reportedly been joined in their demands by almost 200 peers in the Lords.

Their strategy will be for MPs to pass the bill again in an identical form. If they do so, it could become law even if the Lords fails to pass it. This is possible because of special powers to override the Lords under what are known as the Parliament Acts. But achieving this will not be straightforward – to succeed, supporters will need to overcome four key challenges.

Challenge 1: Winning the lottery

The first challenge will be for a supporter to be drawn high in the private members’ bill ballot. At the start of each session, 20 MPs are selected from this random draw to receive priority access to the very limited Commons time available for private members’ bills. Those drawn highest pick their slots first, giving them the best chance of success.

A supporter would need to be drawn among the top seven places to guarantee a full day’s debate (and therefore a vote) on their bill – a key requirement to prevent it being “talked out”. But in reality, they probably need to be drawn in the top three. Supporters say they have around 200 MPs willing to reintroduce the bill if selected.

If advocates do not win this legislative lottery, they have other options. One is to introduce a different form of private members’ bill, known as a presentation bill, but this will struggle unless ministers grant it time. Less likely is a government bill. Either way, ministers would need to provide assistance publicly in ways they have so far been reluctant to do.

Challenge 2: Maintaining support from MPs

The next task will be to maintain a coalition of MPs behind the bill, which would again be subject to a free vote. Although MPs backed the bill last time, supporters may be concerned that the margin of victory more than halved during its Commons passage – from 55 at the initial second reading vote to 23 at the final third reading. Fourteen MPs switched from support to opposition, while just one made the opposite journey. If this trend continued, the majority behind the bill could evaporate.

But the reverse is also possible, especially if some opponents choose to back it as a point of democratic principle. Liberal Democrat leader Ed Davey, who voted against the bill’s first iteration, has criticised as “undemocratic” the Lords’ failure to complete its scrutiny. It is possible that he and others could switch their votes.

Challenge 3: Avoiding new amendments

To be able to use the Parliament Acts to override the Lords, MPs would need to back the new bill in essentially an identical form to the first time. During the first bill’s passage, MPs made more than 200 amendments. Supporters will want to avoid doing so again.

MPs can amend bills at their committee and report stages. On the few occasions when the Parliament Acts have been used, ministers have usually moved a motion to effectively cancel these stages – preventing MPs from making changes. But the Parliament Acts have never before been used on a private members’ bill, and it is unclear how these stages could be avoided without government assistance.

Otherwise, these stages would proceed as normal. This would not only slow the bill’s passage through the Commons but would also risk the bill being amended – which would of course prevent the Parliament Acts being used. As such, any amendment passed in the Commons could effectively scupper the bill. This could provide cover for opponents who would prefer not to be seen blocking the bill outright.

Challenge 4: Incorporating amendments they do want

There is another snag for supporters of the assisted dying bill. The version of the bill passed last year by MPs is not the version they would ideally like to see on the statute book. Supporters cannot include any changes when they ask MPs to vote again for the bill, but they will want to add some later.

For instance, Labour peer Charlie Falconer, the bill’s sponsor in the Lords, proposed almost 80 amendments last time – typically implementing changes requested by government lawyers, or responding to parliamentary pressure including from influential Lords committees. The slow pace of Lords scrutiny meant that most of these were never reached.

The Parliament Acts provide a mechanism to deal with this: an unusual “suggested amendments” process, enabling MPs to send the amendments to the Lords alongside an otherwise-identical bill. But this process would probably require ministers to provide Commons time.

Over to the Lords (part two)

Making it around these obstacles would require a combination of luck, tactical nous and sustained popular support. It is also likely that it will require a more overt helping hand from ministers on the process – though the government will remain neutral on the policy. Such assistance seems more doubtful if Prime Minister Keir Starmer is ousted.

If the bill makes it to the Lords a second time, and in an identical form, it would then be up to peers to scrutinise it again. But whereas last time opponents in the Lords had incentives to drag out scrutiny, this time their best interests would be served by reaching agreement on safeguards before the session ends, because if they fail to do so, the bill could be passed into law regardless.

Yet just because MPs could override the Lords, it does not mean they necessarily will: some form of compromise seems more likely. If peers amend the bill a second time around, MPs could still accept these changes. And we shouldn’t forget that, given the breadth of expertise in the Lords, doing so could also make for a better law.

The Conversation

Daniel Gover does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
