
The bias in medical research: Africa carries a huge disease burden but is missing from clinical trials

Modern medicine prides itself on being a universal science, built on evidence from clinical trials.

But there’s a bias in medical research. While Africa accounts for roughly 25% of the global disease burden and 19% of the global population, the continent’s people are largely invisible in clinical trials.

The scale of the erasure is revealed in a landmark study of 2,472 randomised controlled trials published globally between 2019 and 2024.

I led the team of researchers behind this study. We scrutinised the world’s most influential medical publications to quantify African representation. These included the New England Journal of Medicine, The Lancet, the Journal of the American Medical Association, Nature Medicine and the British Medical Journal. Three leading cardiovascular journals were also in the study: Circulation, the European Heart Journal and the Journal of the American College of Cardiology.

I am a physician-scientist working at the intersection of cardiometabolic epidemiology and biomedical data science. I also focus on large-scale population studies in Africa and data-driven cardiovascular prevention.

Randomised controlled trials are a cornerstone of evidence-based medicine. Introduced in the mid-20th century, they rigorously evaluate the safety and effectiveness of treatments by randomly assigning participants to different groups. This is done to minimise bias. Trials like these have been central to major medical breakthroughs, from cardiovascular therapies to vaccines. They continue to guide clinical decisions and the development of new treatments worldwide.


Read more: African countries are signing bilateral health deals with the US: virologist identifies the ‘red flags’


What we discovered

Our findings show a profound imbalance in the global clinical research landscape. Across the five most prestigious general medical journals, only 3.9% of trials were conducted exclusively in Africa. In cardiovascular health, the numbers drop to a statistical whisper. Of the major trials published in leading cardiology journals, just two studies (0.6%) were conducted solely on African soil.

This is a crisis of scientific accuracy. When clinical trials exclude African populations, they produce evidence that lacks “external validity”. This refers to how well the results of a study can be generalised beyond the participants. It asks whether findings from a clinical trial will still hold true when applied to different populations, settings, or real-world conditions.

Without that validity, doctors are essentially conducting unmonitored experiments on millions of patients every day.

Modern medicine cannot claim to be universal if entire populations remain invisible in the evidence base. Biology, health systems and disease patterns are not identical across the world.


Read more: Africa is losing health workers when it can least afford to – a pattern rooted in colonial history


The gap and why it matters

Many treatments used across the continent are based on evidence generated in non-African populations, raising concerns about their applicability.

Moreover, most Africa-based trials still focus on infectious diseases, despite the rising burden of non-communicable diseases such as cardiovascular disease.

Emerging evidence shows that genetics, environment and diet can radically alter how a body responds to a drug. It therefore makes no medical sense that an entire continent is left out of the trial net.

There’s also evidence showing that certain treatments have different safety profiles in Black patients. Treatments for diabetes and gout are just two examples. So are certain common blood pressure medications, such as angiotensin-converting enzyme (ACE) inhibitors. Research shows that these carry a three- to four-fold higher risk of severe, life-threatening side effects in people of African descent compared to other populations.

When clinical trials exclude populations, doctors are forced to extrapolate findings from one population and apply them to another.

The study also highlights a dangerous lag between global research funding and the evolving reality of African health. The new data show that nearly 76% of trials conducted exclusively in Africa focused on infectious diseases. But the continent is undergoing a massive epidemiological shift. Non-communicable diseases – heart disease, stroke, and diabetes – now account for about 38% of all deaths in many African nations.

The middle class in Africa has tripled to 300 million people from roughly 100 million people in the early 2000s. More people are now living long enough with lifestyles that increase the risk of chronic conditions such as heart disease, diabetes and hypertension. Consequently, there is a growing need and market for long-term treatments that manage these diseases, rather than short-term therapies for infections. Yet cardiovascular trials on the continent continue to be neglected.

Even within the continent, the data show deep “black holes” of information. South Africa accounted for over 62% of all trials conducted on the continent. Central Africa, a region that’s home to more than 180 million people, was virtually non-existent in the global research record. It contributed less than 3% of the continent’s limited trial output. Possible reasons include South Africa’s decades of cumulative investment, seen in stronger academic hubs, research governance, experienced trial units, and more established sponsor relationships. Other regions face barriers like fewer resourced research institutions, less access to trial platforms, and sometimes language and publication issues that can reduce visibility in top-tier journals.

The inequity extends into the hierarchy of science itself. Even when African sites are included in large, multicontinental trials, they are often relegated to the role of “recruitment hubs” rather than scientific partners. Our study found that African scientists led only 3.6% of multicontinental trials that included an African site.


Read more: Africa needs to speed up research excellence: here’s how


Towards a new era of African science

Africa should not simply be a location where studies are conducted.

It must be a place where research is conceived, led and interpreted. The current model creates a cycle of external dependence where international institutions manage the funding and the data. This leaves local research systems fragile and unable to translate evidence into national policy.

There is a need for “ring-fenced” funding for African-led research, the development of regional trial networks, and a mandate for medical journals to report on the diversity of trial populations.

There are signs of rising momentum. Organisations like the Alliance for Medical Research in Africa are working to equip a new generation of African investigators. Africa must create a research ecosystem that is too important for the global community to ignore.

The Conversation

Bamba Gaye does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

80% of Africa’s fertiliser is imported: how food systems can adapt to the Iran shock

Conflict in the Persian Gulf is disrupting fertiliser supplies, and Africa’s food systems stand to lose.

Agrifood systems (the activities that connect the people, investments and decisions involved in producing and delivering food and agricultural goods) rely on a steady flow of inputs like fertiliser, along with markets, infrastructure and policy and trade decisions.

These food systems can absorb shocks and find new ways to keep supplies flowing under pressure. But they are also sensitive. A disruption in one part of the system has an impact on others, as the conflict in Iran that erupted in late February 2026 shows clearly.


Read more: Iran has a powerful new tool in the Strait of Hormuz that it can leverage long after the war


This is how the war on Iran affects sub-Saharan African farmers and food systems: the Gulf countries (which include Iran) are collectively the world’s biggest exporters of fertiliser ingredients. Iran alone is the fourth-biggest global exporter of urea, a key ingredient in fertiliser – and one of the cheapest suppliers. Nigeria, Ghana, Togo, Kenya, Tanzania and North Africa all buy urea from Iran.

Qatar is another key urea producer and exporter, but it stopped making urea in early March 2026 because it needs gas to do so – and its gas plants were hit by Iranian missiles.

Shipping through the narrow Strait of Hormuz, next to Iran, is down by 95% since the start of the war. This means the fertiliser that is still being made in Gulf countries cannot leave the region.

This is bad news for sub-Saharan Africa, which imports about 80% of the fertiliser it uses. Supplies come from Russia, Europe, Ukraine, India, China and the Gulf states. Malawi, for example, imports 52% of its fertiliser from the Gulf. Morocco, Nigeria and South Africa also import ingredients from the Gulf states and use them to make fertiliser that they export.


Read more: Has the Strait of Hormuz emerged as Iran’s most powerful form of deterrence?


Fertiliser prices have already increased. And, unlike oil, there is no internationally coordinated strategic reserve for fertiliser. When the supply is disrupted, it stays disrupted.

I am a researcher and practitioner who looks at how evidence and policy can be used to make better decisions in food systems and agriculture. Recently, I was part of a team that investigated how to end hunger and all forms of malnutrition by changing the agrifood system so that nutritious food becomes more available, affordable or accessible to poor and often rural communities.

We are especially interested in the kinds of interventions that attract investment from both the private and public sectors.


Read more: Africa’s superfood heroes – from teff to insects – deserve more attention


Our research found that food in Africa is often available but not affordable, safe, or diverse enough to make up healthy diets. For example, over the past 50 years government policies have pushed subsidies, price incentives and procurement programmes towards growing staple crops (maize, wheat, rice). But on their own, these crops are not very nutrient-dense. Focusing mainly on them means that more nutrient-dense foods have been crowded out.

Our research found a number of ways that Africa’s agrifood systems can provide more nutritious foods in future. This can also happen when fertiliser supplies are limited. We highlight some of them below.

From pandemic to war to Hormuz: Africa’s fertiliser shocks

Fertiliser disruptions and the damage to agrifood systems in Africa have happened before.

Between 2020 and 2024, fertiliser supply chains were strained by COVID-19 and then the war in Ukraine. African farmers absorbed those shocks by reducing the amount of fertiliser they used on their crops. But this led to lower yields, lower earnings and tighter household budgets.


Read more: Russia’s war with Ukraine risks fresh pressure on fertiliser prices


It’s important to remember that fertiliser supplies are entangled with decades of subsidy policy, public investment and debates about what kind of agriculture African governments should be promoting. They’re highly contested and politicised, shaped by history and power as much as by agronomic evidence and household economic choices.

The current threat of shortages is only part of the picture.

Ten ways for African countries to cope using less fertiliser

The food systems in Africa that survive the fertiliser crisis linked to the Iran war will be those that put in place nutrition-focused programmes and continue investing in innovations that reduce dependence on fertiliser.

Our report identifies ten high-impact interventions that improve nutrition and dietary outcomes. Several are particularly relevant right now:

  • Farmers should start growing fruit, vegetables and pulses, and farming with trees (agroforestry). This improves the health of the soil and produces nutrient-dense food.

Read more: Indigenous trees might be the secret to climate resilient dairy farming in Benin, says this new study


  • Home gardens can improve diets and household food security, if people get training and nutrition education.

  • Sustainable aquaculture (fish) and livestock farming, including poultry, boosts production and protein consumption.

  • Bio-fortified crops, such as high-iron beans grown in Rwanda and vitamin A-rich, orange-fleshed sweet potatoes in Mozambique, build nutrition directly into the crop during production. Because the nutrients are built into the crop itself, they improve diets without requiring extra inputs such as fertiliser.

  • Storage and distribution infrastructure reduces spoilage of food. It also improves the quality of food.


Read more: Scientists are breeding super-nutritious crops to help solve global hunger


  • Foods can be fortified (have essential vitamins and minerals added) when they are being processed. This improves nutrition without requiring any changes in how food is grown.

  • Food and agricultural handling practices must be introduced to keep crops safe to eat.

  • Nutrition education helps people make better everyday food choices so that, when food is available, people eat more varied and nutritious diets.

  • Social protection programmes, such as cash transfers and food vouchers, help families during times when prices rise.

  • Providing school meals specially designed to be nutritious offers a high return on investment.

What needs to happen next

Our research emphasises that these interventions can only work as a bundle or package of support. Gender matters too; our research found that women don’t always get to eat nutrient-dense food even when more is available at home.

These interventions represent what we know works today. But governments and researchers should look beyond these too. For example, scientists at the Centre for Research on Programmable Plant Systems (including scientists at Cornell University) are engineering specialised plants known as “reporter” plants. A reporter plant is typically placed strategically in a field of crops to act as an early warning system.

They have developed a tomato plant, for example, that turns vivid red when soil nitrogen levels drop to critically low levels. This plant gives farmers precise, real-time information about what their fields need.

Tools like these could transform the relationship farmers have with fertiliser: reducing waste, cutting costs, and building a form of fertiliser intelligence into the farming system itself.

The Conversation

Jaron Porciello receives funding from BMZ Germany and Gates Foundation.

India’s Horn of Africa strategy has shifted: what it’s trying to do and how it could work

India’s engagement in the Horn of Africa and Red Sea basin was, until recently, largely limited to UN peacekeeping operations and anti-piracy patrols.

Since the second half of the 1990s, India has participated in nearly all peacekeeping operations in Africa.

Anti-piracy efforts emerged between 2008 and 2014 as piracy off Somalia and in the Gulf of Aden spread across a vast maritime space. This spanned east Africa and the wider Indian Ocean, bringing threats close to India’s shores.

Indian trade routes were exposed to new security risks, so a more sustained maritime posture was needed.

From the mid-2010s, therefore, India expanded its engagement in the Horn of Africa and the Red Sea basin to secure shipping lanes linking it to global markets. At the same time, it sought to counter China’s growing naval presence along the western Indian Ocean coast, protect its diaspora and investments, and position itself as a regional security provider.

When Prime Minister Narendra Modi took office in 2014, this shift accelerated. India placed greater emphasis on proactive diplomacy, expanding high-level engagement, and trade and infrastructure links. It also pursued strategic coordination through bilateral agreements and naval exercises across west Asia and the adjoining African coastline.

India, the Horn of Africa and the Red Sea basin

This evolution reflects India’s transition from a post-colonial, non-aligned actor to a more assertive power with ambitions outside the region. It is now Africa’s third-largest trading partner. Economic interdependence is growing alongside geostrategic interests.

Drawing on our work on international security in the western Indian Ocean and sub-Saharan Africa, we argue that over the past decade New Delhi has redefined the Indian Ocean as a protective buffer and a primary theatre of influence linking the Indo-Pacific to the Red Sea. The Horn of Africa lies at the heart of this connective space.

In 2023, India declared itself the Indian Ocean’s “net security provider”. It introduced a framework to strengthen regional security, deepen economic cooperation and address shared maritime challenges.

Today, with shipping routes being recalculated and governments reconsidering their strategic partnerships, India’s position is being put to an operational test.

The Horn is a space where legitimacy, delivery and endurance determine who remains relevant after the headlines fade. For the first time, India’s quiet advance is visible. Next, it will have to solidify its presence.

Why the Horn of Africa is important for India

An initiative called the 2025 Africa-India Key Maritime Engagement, co-hosted with Tanzania, positions India as a security partner for African nations, particularly those along the Indian Ocean rim.

India is also involved in development and investment projects in the region. These include agricultural efforts to improve food security, infrastructure projects, and technical assistance in education and health. It also provides humanitarian assistance in Somalia, Kenya and Djibouti.

What distinguishes the past decade is the effort to align these activities within a broader strategic narrative – one that presents India as a partner offering technology and development without debt concerns or political conditions.

This narrative is attractive to local governments in the Horn. But it also creates a test: India must show that it can deliver consistently.

Ethiopia has an important role for India. It hosts the African Union, functions as a diplomatic centre and offers an entry point into African multilateral politics.

Somalia also matters. It sits close to critical sea lanes and is central to the security of the Gulf of Aden. External actors there can convert security assistance into political access.


Read more: China’s military support for Somalia is on the rise – what Taiwan and Somaliland have to do with it


India’s interest in Somalia and Somaliland has taken on a geo-economic dimension. Indian firms are focusing on gold and mineral resources, particularly in eastern Somaliland.

Although still limited in scale, this shift signals that India’s footprint in the Horn is no longer confined to security and development assistance. It is intersecting with resource access and supply chain strategies.

The competition

The corridor of the Red Sea, Gulf of Aden and western Indian Ocean has become a crowded arena for external powers over the past two decades.

Great powers have seen countries in the region as a platform for counterterrorism and naval reach. Small and middle powers (like Turkey, Iran and Gulf states) have sought to secure influence through ports, training missions, arms transfers, commercial access and selective mediation.

The result is a dense environment. Almost every external actor offers a package of security, finance, technology and diplomacy. Fragile local governments hedge among them.

India’s challenge is to deliver consistently through:

  • creating defence and security training pipelines

  • project delivery

  • stable financing instruments

  • sustained bureaucratic attention.

If India’s Africa policy is maritime-led, then things like naval exercises, information-sharing, coast guard cooperation and institutional training must become regular and visible.

If the strategy is also developmental and technological, then India must deliver flagship projects in digital infrastructure, health and agriculture.

From quiet influence to lasting power

India faces three constraints in growing its influence in the Horn of Africa.

1. Limited military capacity

India’s naval capabilities do not match the scale of China’s fleet or America’s technological edge and operational depth. This gap is not fatal if India’s aim is durable influence through partnership. It does mean that India’s leverage will depend on institutional cooperation and coalition-building.

2. Competitive density

The Horn’s architecture is made of foreign bases, port diplomacy and overlapping rivalries. India’s advantage is that it’s not overwhelmingly intrusive. But it could become just one more actor among many.

3. Institutionalisation

If India’s engagement depends too heavily on leader-level attention, it will remain vulnerable to distraction. Durable influence requires bureaucratic routines and financing mechanisms. It must survive political cycles and shifting crises. Ethiopia is a test case. High-level roadmaps will have to turn into visible digital infrastructure, health systems and agricultural support.

The broader point is that the Horn is not an empty theatre waiting for India to arrive.

The Conversation

Federico Donelli is affiliated with the Italian Institute for International Political Studies (ISPI), the Nordic Africa Institute (NAI), and the Orion Policy Institute (OPI).

Riccardo Gasco is affiliated with IstanPol Institute.

Chiara Boldrini does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Extreme heat is a growing threat to health, jobs and food security in southern Africa – study looks for practical solutions

Extreme heat is not just uncomfortable weather – it is becoming a serious threat to health, jobs and food security across southern Africa, especially for those least able to cope.

Unlike floods, cyclones, wildfires or storms, extreme heat rarely leaves dramatic images of destruction. But it builds without relief, putting strain on people’s bodies, homes and health systems.

In many cases, the danger is intensified when temperatures stay high overnight, leaving little chance to recover.


Read more: Heat with no end: climate model sets out an unbearable future for parts of Africa


Even temperatures that seem manageable can be dangerous, depending on where people live and how well they can adapt.

We are members of a group of researchers and practitioners from across southern Africa working on climate, health and policy.

We recently conducted a regional consensus study for the Academy of Science of South Africa (ASSAf) to assess how extreme heat affects health and daily life across the region. Our aim was to determine what practical steps are needed to reduce the harm caused by extreme heat.

We worked with a team of independent experts from across disciplines to review scientific evidence, regional data and policies, and to develop a shared, evidence-based view of how extreme heat is affecting the region.

Our study was unique because it brought together evidence from across health, labour, food systems and infrastructure to show how heat affects everyday life, analysing heat not just as a weather event, but as a system-wide risk.


Read more: Heat extremes in southern Africa might continue even if net-zero emissions are achieved


We found that extreme heat is already a defining climate and health threat in southern Africa.

One of the biggest mistakes in public discussion is to treat heat as simply a weather event. It is much more than that. Heat immediately increases the risk of dehydration, heat exhaustion and heat stroke. Heat can also worsen existing conditions such as cardiovascular, respiratory and renal (kidney) disease.

Heat needs to be treated as a major public health and development priority across the Southern African Development Community.

Heat is a health issue – not just a weather issue

The Southern African Development Community has 16 member states, home to more than 400 million people. Yet collectively, these countries contribute less than 1.3% of global greenhouse gas emissions.

Despite this, southern Africa is already heating up fast. Average surface temperatures across the region have risen by 1.0-1.5°C since 1961. A further 4.5-5°C increase is projected by 2050 under high-emission scenarios (where fossil fuel companies continue to pollute at the same rate as they are now).


Read more: Climate change has doubled the world’s heatwaves: how Africa is affected


In our report, we describe extreme heat as an “integrator hazard” (a multiplier). This means it is not a single, isolated risk: it makes existing problems worse all at once.

For example, extreme heat can reduce crop yields and nutrient quality, increase water stress, worsen air quality through dust and wildfire smoke, and disrupt livelihoods that depend on safe outdoor work – all at the same time. That is what makes heat so dangerous.


Read more: South African study finds 4 low-income communities can’t cope with global warming: what needs to change


It can also make already hot environments – especially informal settlements with limited shade, ventilation or cooling – far more dangerous. Extreme heat can place added strain on electricity systems. This increases the risk of power outages just when cooling, water supply and health services are most needed.

In many communities, heat also shortens the safe life of perishable food – including food sold informally that isn’t stored in fridges. This too increases the risk of food-borne illness. That matters in a region like southern Africa where street food and informal food economies are part of everyday life.

The burden is deeply unequal

Extreme heat does not affect everyone equally. One of our study’s central findings is that the people and communities most exposed to heat are often those with the fewest resources to adapt. This includes people living in informal settlements, those without reliable electricity or cooling, communities facing water scarcity, and workers who must work outside all day.

Across much of southern Africa, many people work outdoors or in poorly ventilated environments – from subsistence farms and construction sites to factories, markets and transport hubs. Being forced by heat to slow down, stop work, or continue working under dangerous conditions affects both health and livelihoods.


Read more: Zambia’s farmers are working in dangerous heat – how they can protect themselves


Heat exposure affects daily life: children may walk long distances to school or spend hours outdoors. It affects pregnancy and newborn health, causing risks such as premature birth, low birth weight and pregnancy complications.

For this reason, extreme heat is also an ethical and justice issue. The people who contribute least to climate change are often the ones most exposed to its effects – simply because of where they live, the work they do, and the resources available to them.

What governments should do now

Extreme heat is not a problem that can be solved simply by telling people to “drink more water” or “stay indoors” – especially where safe housing, water, electricity and cooling are not guaranteed. But there are practical measures that governments and institutions can take.

These include:

  • improving locally appropriate early warning systems

  • tracking heat-related illness and deaths to guide response and planning

  • making clinics and hospitals more climate-resilient, through reliable electricity, cooling, water supply and backup systems

  • protecting workers through rest breaks, shaded areas, access to water and adjusted working hours

  • improving urban design and housing so that buildings and neighbourhoods stay cooler

  • integrating heat into national climate and health planning.

Governments can also establish public cooling spaces – such as community centres, schools or clinics – where people can safely rest during extreme heat.


Read more: Climate change: the effects of extreme heat on health in Africa – 4 essential reads


There are already promising examples in the region. South Africa has begun strengthening heat-health early warning and surveillance systems. Malawi is helping farmers adapt to rising temperatures through climate-smart agricultural planning.

Namibia has supported community-level water and resource management in heat-prone areas. These examples show that progress is possible, but they need to be expanded and sustained.


Read more: Climate information is useful at local level if people get it in good time: how African countries can build systems to share it


Heat does not respect borders, and coordinated action within countries and across borders can better prepare countries for heat disasters. National meteorological services, health departments, local governments, labour authorities and emergency services should work together so that heat warnings lead to clear, coordinated action on the ground.

For too long, extreme heat has been treated as a secondary climate risk. That is no longer tenable. Heat now needs to move to the centre of climate policy. The question is no longer whether southern Africa can afford to act. It is whether it can afford not to.

The Conversation

Jerome Amir Singh has received funding from the Academy of Science of South Africa (ASSAf). ASSAf is a statutory body that is funded primarily through a parliamentary grant allocated by the South African government's Department of Science, Innovation, and Technology.

Caradee Yael Wright receives funding from the South African Medical Research Council.


UK terror threat is raised – counter-terror expert explains how official prevention strategies work


The UK has raised its terror threat level from “substantial” to “severe”, meaning an attack within the next six months is considered highly likely. The change means the threat level is at severe for the first time in four years. It came with a warning from the Home Office of an increased threat from individuals and small groups based in the UK.

Counter-terrorism in the UK centres on a strategy known as Contest – a key part of this is the Prevent programme. The primary objective of Prevent, as the name suggests, is to stop people becoming involved in terrorism or from supporting extremist ideologies.

As such, it is designed to deliver tailored early intervention aimed at addressing the risks of radicalisation, extremism and terrorism.

In my experience as a researcher in counter-terrorism studies, I have engaged extensively with Prevent practitioners. I have gained an insight into their work, which is carried out with commitment and dedication despite constraints in funding and resources.

A Prevent referral does not imply that someone has committed, or is suspected of committing, a criminal offence. Rather, it indicates that a professional with a statutory “Prevent duty” has raised concerns about someone’s behaviour, expressions or vulnerabilities. These professionals include teachers, healthcare workers and social workers.

Recent data illustrates the scale of the programme: between April 2024 and March 2025, 8,778 people were referred to Prevent. This is the highest figure recorded since data collection began in 2015.

Notably, a significant proportion of these referrals involve young people, with 36% (3,192 cases) concerning children aged 11 to 15. A further 1,178 cases involved those aged 16 and 17.

More than a third of referrals to Prevent involved children. Aoy_Charin/Shutterstock

The criteria for referral are broad, and they can include observable changes in behaviour, expressing extremist views, or signs of vulnerability that could make someone more susceptible to being radicalised.

Referrals are assessed by a panel that often includes representatives from education, social services and law enforcement. In many cases, referrals do not progress beyond this initial stage. Instead, individuals may be signposted to other teams, such as social services, for more support.

But where there are still concerns, individuals may be offered support through the Channel programme. This is Prevent’s primary de-radicalisation mechanism. Channel is voluntary and confidential, and referrals can come from anyone and from any context, though they are most often made by the police and the education sector.

The support is tailored to the specific needs in each case and may include mentoring and mental health support. Participation is voluntary, and individuals can refuse to be involved or stop participating at any point.

Stigma and polarisation

Prevent is designed to be pre-emptive – intervening before criminal activity occurs. However, this breadth is also a source of ongoing controversy.

Critics argue that it can result in referrals based on ambiguous or misinterpreted behaviour. As such it has the potential to reinforce polarisation, particularly in relation to specific groups such as Muslims. This can clearly leave people feeling stigmatised, under surveillance or unfairly judged, particularly if they believe the referral was based on cultural, religious or political misunderstandings.

The scheme also has significant structural limitations as it functions as a time-limited intervention rather than a system of ongoing monitoring. Once a case is closed, the person is not subject to continuous oversight. This means that a previous referral does not eliminate the possibility of future risk – it simply means that the threshold for intervention was not met at that time.

It is important to remember that positive stories rarely get widespread attention. Every year, thousands of people in the UK receive early support and successfully disengage from radical and extremist ideologies.

Prevent operates in a difficult space between safeguarding and security, where risk is assessed before an offence occurs. While high-profile cases can amplify perceptions of failure, they do not reflect the full picture. Most referrals do not lead to further action, and many result in early, voluntary support that helps people disengage from harmful pathways.

At the same time, concerns around ambiguity and stigma remain valid. In short, rather than seeing Prevent as a measure of guilt, it’s important to recognise its limitations and its role as an early intervention tool.

The Conversation

Elisa Orofino does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

The king’s state visit was a success – but there is still a chasm to bridge between UK and US outlooks

As King Charles concludes his transatlantic travels with a visit to Bermuda, a British Overseas Territory, he can take pride in a job well done. His four-day state visit to the US – which concluded with a wreath-laying at Arlington National Cemetery and a block party in Virginia – appears to have been a success.

Amid a period of heightened tension between President Donald Trump and Prime Minister Keir Starmer, the king’s carefully calibrated speech to a joint session of Congress has secured praise on both sides of the Atlantic (and on both sides of the Congressional aisle).

It was a remarkable performance: careful, diplomatic, occasionally pointed and at times both charming and witty. Perhaps we should not be that surprised. The king is a highly experienced diplomat, and while this was his first address to Congress, it was his 20th visit to Washington – as he himself noted.


Read more: How King Charles charmed the US while taking digs at Trump


But what does the undoubtedly warm American response to the king’s visit mean for the future of the US-UK “special relationship”?

On its own, no act of royal diplomacy, however well executed, can deliver an instant reset in US-UK relations. Nor can it force an American president of any stripe – let alone the current incumbent – to change tack or alter approach.

On this, history offers a salutary lesson. For all the positive impact of King George VI’s June 1939 visit to the US, it did not lead to instant American intervention when war broke out just a few months later.

The queen’s visit in 1957 was similarly well received. But the work of rebuilding transatlantic trust – so damaged by the Suez crisis – remained ongoing in the months that followed.

This latest state visit will similarly offer no quick fix. But like its predecessors, it might shift the dial on the current US-UK dialogue – and perhaps help to temper the tone, at least for a while.

For the UK, there has already been one immediate win: Trump has decided to remove whisky tariffs in honour of the king’s visit. This will be very much welcomed by the Scottish whisky industry.

The potential long-term impact of the king’s visit is harder to ascertain. Not least because this will be determined by variables well beyond either his or Starmer’s control: contemporary geopolitics, especially as affected by the wars in Iran and Ukraine. Despite this week’s mutual exchange of praise and platitudes, the distance between London and Washington on these matters remains substantive.

Worlds apart

There was a brief glimpse of these differences in two of the speeches: the king’s to Congress and, on the same day, the president’s at the White House.

The king’s speech lingered on the shared ties of history and, especially, on democratic values and ideals. It included references to the importance of compassion and interfaith dialogue, and celebrated the strength of what the king called “our vibrant, diverse and free societies”.

President Trump’s speech at the White House a few hours earlier similarly featured references to the transatlantic connections born of history. But elsewhere, his tone and intent seemed rather different.

Trump celebrated “the blood and noble spirit of the British” – qualities which had provided American revolutionaries with a “majestic inheritance”. At one point, he even declared that the patriots of the American Revolution had been animated by the “Anglo-Saxon courage” in their veins. This is a claim on American history that one commentator has suggested “walks up to the edge of white nationalism”.

There are indeed echoes here of an early 20th-century diplomatic discourse known as Anglo-Saxonism. Once popular and pervasive among transatlantic writers and diplomats (including those of a nativist bent), it largely fell out of favour in the years between the world wars – its racial assumptions increasingly untenable.

As long ago as December 1918, President Woodrow Wilson explicitly told an audience of British dignitaries – including King Charles’s great-grandfather, George V – that they must not “think of us as Anglo-Saxons, for that term can no longer be rightly applied to the people of the United States”.

Wilson was the first serving US president to visit Britain, and, like Trump, had a British mother (Trump’s was from Tong on the Isle of Lewis, Scotland). Wilson’s point – made in advance of the Paris Peace Conference – was that the US was a melting pot, not a monoculture.

King Charles’s speech hailed the importance of alliances (Nato), of multilateral institutions (the UN), and of the rule of law. These, he argued, are what has delivered eight decades of transatlantic peace and prosperity.

From the president, meanwhile, came a celebration of Anglo-American blood brotherhood. It was at times reminiscent of thinking (and language) which his predecessor Wilson – no “woke” radical – had called into question well over a century ago.

These two visions of the US-UK relationship, distinct in their underlying assumptions, reflect a broader geopolitical shift – one which has increasingly strained the transatlantic relationship in recent months.

The king’s vision, indicative of widespread sentiment in Europe, represents an affirmation of the post-1945 world order. The other, with its echoes of the early 20th century, is disruptive. Between them lies a chasm of significant proportions, and bridging it will be the task of today’s transatlantic diplomats.

The Conversation

Sam Edwards has previously received funding from the ESRC and the US-UK Fulbright Commission. Sam is a Governor of The American Library (Norwich) and a Trustee of Sulgrave Manor (Northamptonshire).

Why the 60-day War Powers Resolution deadline doesn’t actually constrain presidents

A TV displays U.S. President Donald Trump's prime-time address on the war in Iran inside a Cheesecake Factory on April 1, 2026, in Washington, D.C. Anna Moneymaker/Getty Images

May 1, 2026, marks the 60th day of Operation Epic Fury in Iran – a symbolically significant date by which a president who has mounted unilateral military operations must receive congressional approval or wind them down.

However, the complex history of the War Powers Resolution clock demonstrates it is a toothless milestone.

The Trump administration signaled on April 30, 2026, that it would ignore that deadline, set by the War Powers Resolution. Secretary of Defense Pete Hegseth testified before the Senate Armed Services Committee that “we are in a cease-fire right now, which my understanding is that the 60-day clock pauses or stops in a cease-fire. That’s our understanding, so you know.”

Sen. Tim Kaine of Virginia, a Democrat, responded that the 60-day threshold poses a “legal question” and “constitutional concerns.”

This is not the first time presidents and members of Congress have sparred on the meaning of the War Powers Resolution. What happens next will play out through regular politics, because the conflict is not a matter of simple legal interpretation.

War: Collective judgment

In the U.S. Constitution, Congress and the president share war powers.

In the shadow of political struggles in the final years of the Vietnam War, Congress passed the War Powers Resolution in 1973 to “insure that the collective judgment of both the Congress and the President will apply to the introduction of United States Armed Forces into hostilities.”

A crucial section of the resolution reasserts legislators’ role, and makes clear that the constitutional power of the president to make war is subject to, or exercised with, the following conditions: a Congressional declaration of war; specific statutory authorization; or a national emergency created by attack upon the United States, its territories or possessions or its armed forces.

For new military campaigns that do not meet these criteria, the resolution included a 60-day clock that begins when a president reports the action to congressional leadership within 48 hours of the action beginning.

The clock can be extended to a maximum of 90 days upon presidential determination and certification of “unavoidable military necessity respecting the safety of United States Armed Forces” related to the removal of troops.

After 60 to 90 days, the resolution originally said this type of unilateral military action would be terminated automatically unless both chambers of Congress approved some form of legislative authorization.

Congress could also choose to terminate an unauthorized military operation any time before the 60 days with a concurrent resolution, which doesn’t require a president’s signature – essentially, a “legislative veto.”

And to make sure the president couldn’t stretch the definition of congressional approval, the resolution said neither existing treaties nor new budget appropriations could substitute for legislative authorization of a military action.

Since 1973, actions by all three branches, across a variety of political and policy landscapes, have undermined the resolution’s intent and procedures.

Veto vetoed

In 1983, the Supreme Court declared legislative vetoes unconstitutional. That led Congress to reinterpret its War Powers Resolution procedures and powers, effectively amending its processes to expedite any joint resolution or bill that “requires the removal of U.S. armed forces from hostilities outside the United States.”

Now, if members want to stop a presidential military campaign already in progress, they must act affirmatively and pass a disapproval resolution, which a president could veto like any other bill. Congress has sent only one such disapproval – to President Donald Trump in his first term – which he vetoed. Congress did not have the two-thirds required in the Constitution to override.

Both chambers of Congress now have to vote twice, once to disapprove a military action and then again to overcome a likely veto, to stop something it never approved in the first place.

House Speaker Mike Johnson explains on March 4, 2026, why his party rejects a Democratic-led measure to assert Congress’ war powers and stop the Iran military action.

The 60-day mark for the current Iran operation has therefore loomed as more of a politically charged symbol of this longstanding imbalance on war powers than a real deadline for action by either branch.

Parallels to Kosovo and Libya

The House and Senate have tried to pass legislation to stop military operations against Iran six times since operations began. All attempts have failed, including the most recent vote on April 30. Democrats are considering filing suit against President Trump if operations go beyond 60 days without authorization.

Yet federal courts have long expressed disinterest in getting involved in constitutional questions related to the War Powers Resolution, especially if members of Congress are the plaintiffs.

Although most presidents from Richard Nixon onward have claimed that the War Powers Resolution is an unconstitutional check on their institutional powers, they have usually filed the required reports within 48 hours of new military actions beginning.

While the current Iran conflict is different in many ways, presidential unilateralism, inconclusive chamber actions and even member lawsuits all echo controversies over U.S. military action in Kosovo in 1999 and Libya in 2011.

Where the Trump administration may lean on Clinton

Operation Epic Fury against Iran began Feb. 28, 2026, and President Trump sent the required report to Congress on March 2, 2026.

After detailing the rationale for military action, Trump added: “Although the United States desires a quick and enduring peace, it is not possible at this time to know the full scope and duration of military operations that may be necessary.”

He concluded the memo with his interpretation of constitutional power to act unilaterally.

“I directed this military action consistent with my responsibility to protect Americans and United States interests both at home and abroad and in furtherance of United States national security and foreign policy interests,” the president wrote. He acted, he said, “pursuant to my constitutional authority as Commander in Chief and Chief Executive to conduct United States foreign relations.” He said he made the report “consistent with the War Powers Resolution. I appreciate the support of the Congress in these actions.”

Similarly, on March 26, 1999, President Bill Clinton sent a War Powers Resolution letter explaining his decision two days earlier to take part in a NATO-led operation against the Federal Republic of Yugoslavia, known as FRY.

Clinton wrote to Congress using mostly the same words and phrases Trump did in his 2026 letter. Clinton also said that he took the action “in response to the FRY government’s continued campaign of violence and repression against the ethnic Albanian population of Kosovo.”

President Bill Clinton after his television address to the nation on the NATO bombing of Serbian forces in Kosovo, March 24, 1999. Pool/Getty Images

Clinton explained his authority in virtually the same language as Trump and, like Trump, said it was hard to predict how long the operations would continue.

The House and Senate repeatedly failed to either approve or disapprove of Clinton’s actions through a series of votes across March and April 1999. But lawmakers did send him supplemental appropriations for the operations in May.

NATO suspended the operation after 78 days. Almost a year later, a federal appellate court upheld a district court’s decision rejecting a lawsuit led by Rep. Tom Campbell, a California Republican, alleging Clinton violated the War Powers Resolution. Rather than deciding on the merits, the decision rejected the lawmakers’ claims of injury as not reviewable by the court.

Obama did it, too

In a very different context, a similar rhythm played out during President Barack Obama’s presidency.

During the “Arab Spring” revolts of 2010-2011, the U.N. Security Council passed two resolutions condemning violence against Libyan civilians by security forces under the direction of Colonel Moammar Gadhafi.

On March 21, 2011, two days after NATO operations (which included American air support) began against Gadhafi’s forces, Obama sent his War Powers Resolution letter to the Republican House and Democratic Senate. Obama had not received prior legislative authority from Congress.

Obama’s letter included language almost identical to Clinton’s earlier letter and Trump’s later one.

As with Kosovo, the House and Senate did not ultimately agree to either approve or disapprove of the president’s actions in support of the UN and NATO over the operation’s 222 days. In addition, Democratic Rep. Dennis Kucinich of Ohio led a group of mostly Republican House members in a failed War Powers Resolution lawsuit to stop the president.

Unilateral action endures

The Office of Legal Counsel in the Department of Justice has published legal opinions that explain and defend presidential war powers, including with Kosovo and Libya. In December 2025, that office published a memo defending the imminent January 2026 capture of Nicolás Maduro. On April 21, 2026, the State Department published a defense of ongoing U.S. actions in Iran.

Within the current dynamics of the War Powers Resolution, until Congress musters bipartisan supermajorities to connect its own institutional ambition with constitutional power, presidents from either party will decide alone if, and when, the country goes to war. Instead of Congress, presidents may heed public opinion and economic indicators, especially in election years.

The Conversation

Jasmine Farrier is affiliated with the American Political Science Association.

How Britain’s housing crisis contributes to its declining healthy life expectancy


People in the UK are now spending fewer years in good health than they did a decade ago, according to a new analysis by the Health Foundation. The UK now sits near the bottom of a 21-country comparison, ahead only of the US.

A drop in healthy life expectancy has many causes: obesity, alcohol, drugs, suicide, chronic disease, poverty and widening inequality. But one of the most powerful causes underpins them all: housing. Where and how people live is one of the main factors shaping how health risks are created and distributed across society.

The UK Housing Review is an annual independent review of housing policy and evidence, written by housing experts and published by the Chartered Institute of Housing. Its latest edition, which we contributed to, identifies several interrelated ways that housing affects health.

A key one is affordability – housing costs shape where people can live, whether they can heat their homes, whether they can afford food and transport, whether they can move for work, whether they can leave unsafe or unsuitable housing and whether they live with chronic financial stress.

In the UK, housing costs are high by historical standards and poor housing remains widespread. The review notes that private rents are now at their highest recorded share of earnings, while millions of homes in England still contain serious health and safety hazards.

When housing is unaffordable, people are forced to make tradeoffs – for example, accepting damp or overcrowded homes because they cost less. They cut back on heating, food, medication, transport and social participation. They move further from public services, work and support networks. Affordability problems also force many people into cheaper, less secure tenancies.

Poor housing quality directly shapes health. Cold, damp, mould, disrepair, poor ventilation and unsafe homes are directly linked to respiratory illness, cardiovascular risk, mental health problems and reduced wellbeing.


Read more: Cold homes increase the risk of severe mental health problems – new study


The Building Research Establishment, an independent research organisation, has estimated that poor housing costs the NHS in England £1.4 billion each year. More than half of this is attributed to cold homes, which increase the risk of respiratory illness, cardiovascular problems and poor mental health. They are especially dangerous for older people, babies and people with existing health conditions.

But the wider costs are even greater. Poor sleep, stress, disrupted schooling, insecure work, social isolation and caring strain all affect mental and physical health. They increase pressure on families and, over time, on health, education and social care systems.

Cold homes can cause serious and widespread health problems. Jelena Stanojkovic

Historically in the UK, social housing has provided some protection to people unable to access good quality affordable housing in the open market. But the stock of social rented housing in the UK has declined. This means that people are increasingly dependent on (often expensive) market rental, where the quality, size and location of housing depend much more directly on income.

The rise of the private rented sector this century has meant that more households are exposed, not just to higher housing costs, but also to shorter tenancies and fewer protections than social housing traditionally provided.

The Renters’ Rights Act increases security, but does not remove “no fault” evictions altogether and does little to protect tenants from economic pressures that can result in eviction. The cognitive burden of worrying about eviction, arrears, repairs or the next rent increase is a direct health risk.

Recent evidence also suggests that insecure housing can result in measurably faster biological ageing, equivalent to the effects of more traditional health concerns like smoking.

Additional weeks of biological ageing per year from different factors: private renting adds 2.4 weeks, compared with unemployment (1.4 weeks), having no qualifications (1.1 weeks) and being a former smoker (1.1 weeks). Amy Clair

The number of people living in temporary accommodation has risen dramatically, reaching over 130,000 households at the beginning of 2025. This is a 156% increase compared with 2010, largely driven by the poor affordability and insecurity of the private rented sector and lack of social housing. Temporary accommodation is inadequate housing, particularly for children. Living in temporary accommodation was a contributing factor in the deaths of at least 104 children in England between 2019 and 2025, 76 of whom were under one year of age.

This is not about housing quality alone. Temporary accommodation reflects multiple risks brought together: poverty, overcrowding, poor conditions, instability, lack of space for safe infant sleep, poor access to services and wider racial and social inequality. The National Child Mortality Database identifies temporary accommodation as a contributing factor to vulnerability, ill health or death, not necessarily as the sole cause. Emerging evidence also links temporary accommodation with stillbirth and neonatal death.


Read more: Insecure renting ages you faster than owning a home, unemployment or obesity. Better housing policy can change this


Housing health inequality

ONS data shows a very large difference in healthy life expectancy between the most and least deprived areas. In 2022-24, healthy life expectancy in the most deprived areas of England was just 49.8 years for men and 48.2 years for women, compared with 69.2 and 68.5 years in the least deprived areas.

Housing contributes to this difference, determining whether people live in homes that support recovery or deepen stress, whether children grow up in stable and safe environments, and whether older people can remain warm and independent.

If the government is serious about its stated aim to “halve the gap in healthy life expectancy between the richest and poorest regions”, housing policy must become health policy.

That means investing in social housing, enforcing decent standards in the private rented sector, making homes warmer, safer and more accessible, and recognising temporary accommodation, overcrowding and insecurity as public health failures, not just housing management problems.

It also means changing the way that success is measured. Housing policy is too often judged by supply numbers, prices or tenure outcomes. These matter, but they are incomplete. A healthy housing system should also be judged by whether people can live in homes that are affordable, secure, decent, suitable and resilient to climate change.

The decline in healthy life expectancy is a warning light. It tells us that the UK is not only failing to keep people well for longer, but also failing to provide the foundations of health.

The Conversation

Emma Baker receives funding from the Economic and Social Research Council, the Australian Research Council, The National Health and Medical Research Council, and the Australian Housing and Urban Research Institute.

Amy Clair receives funding from the Australian Research Council and the Australian Housing and Urban Research Institute.

Mark Stephens receives funding from ESRC, the EU/Innovate UK and the Australian Housing and Urban Research Institute (AHURI).

Ten compelling poems about climate change – chosen by our experts

Three Reading Women in a Summer Landscape by Johan Krouthén (1908). WikiCommons

We asked ten literary experts to recommend the climate poem that has spoken to them most powerfully. Their answers span more than 200 years and a range of emotions, from sorrow and anger to fear and hope.

This article is part of Climate Storytelling, a series exploring how arts and science can join forces to spark understanding, hope and action.

1. Death of a Field by Paula Meehan (2005)

Written at the height of Ireland’s Celtic Tiger boom, Paula Meehan’s Death of a Field critiques the environmental impact of that era’s rapid development.

The poem anticipates the destruction of the titular field by property developers with little regard for native ecologies: “The end of the field as we know it is the start of the estate.”

Death of a Field read by Paula Meehan.

The global effects of the climate crisis are seen from a uniquely local perspective as the displacement of Irish wildlife mirrors the effect of colonial violence. “Some architect’s screen” is simply the latest iteration of imperial technologies that seek to plunder Irish landscapes. The poem gains further strength by refusing to replicate a hierarchical relationship to nature by preserving its many mysteries:

Who can know the yearning of yarrow

Or the plight of the scarlet pimpernel

Whose true colour is orange?

Jack Reid is a PhD Candidate in Irish literature

2. Darkness by Lord Byron (1816)

Darkness imagines the end of a world in which the sun has been extinguished. The “dream” that the poem describes was inspired by genuine weather conditions during the “year without a summer” in 1816, caused by the eruption of Mount Tambora in Indonesia the previous year.

Darkness by Lord Byron.

Sulphur in the atmosphere caused darkness and low temperatures across Europe. At Lake Geneva, Lord Byron experienced the infamous “haunted summer” of darkness.

Byron’s depiction of climate catastrophe is bleak, with words like “crackling”, “blazing” and “consum’d” bearing resemblance to contemporary reports of wildfires caused by climate change. After a famine, all elements of Byron’s Earth, from the clouds to the tide, eventually cease to exist: “Seasonless, herbless, treeless, manless, lifeless– / A lump of death – a chaos of hard clay.” Read as a portent of the Anthropocene, Byron’s poem urges readers to seriously consider the future of mankind.

Katie MacLean is a PhD candidate in English Literature

3. Mont Blanc by Percy Bysshe Shelley (1817)

Byron’s close friend Percy Bysshe Shelley was also inspired by the “year without a summer”. He witnessed temperatures drop, volcanic ash hanging heavy in the air and crops failing. While his wife Mary used the gloomy climatic event to inform her novel Frankenstein (1818), Shelley channelled these experiences into his poem Mont Blanc.

A reading of Mont Blanc.

In his ode, Shelley describes a timeless “wall impregnable of beaming ice”. By drawing on his scientific reading, he then explains his fears regarding global cooling and the possibility of vast glaciers eventually covering the alpine valleys.

He imagines “the dwelling-place / Of insects, beasts, and birds” being obliterated and mankind forced to flee. While Shelley saw this process as “destin’d” and inevitable, it is clear that Mont Blanc is a poem with catastrophic climate change at its heart. In 2026, it is difficult to read in any other way.

Amy Wilcockson is a research fellow in Romantic literature

4. Characteristics of Life by Camille T. Dungy (2012)

There’s something gloriously elastic about invertebrates: the spinelessness of a worm, the pulsing of the jellyfish, the curling of an octopus. Spiders, snails and bees, too, with their exoskeletons on display, invite us to see things “inside-out”.

These are the thoughts I have when I read Characteristics of Life by Camille T. Dungy, which opens with a snippet from a BBC news report claiming that “a fifth of animals without backbones could be at risk of extinction”. What would a world be without the “underneathedness” of the snail beneath its shell beneath the terracotta pot in the garden? Or “the impossible hope of the firefly” whose adult lives span only a handful of human weeks?

Camille T. Dungy speaks about nature and poetry.

Dungy speaks from a “time before spinelessness was frowned upon”, and from a world where to dismiss a being as “mindless” (jellyfish have no brains) or even “wordless” would be “missing the point” entirely. As I think of these creatures that dwell beyond our usual line of vision – flying, crawling, tunnelling and swimming – I find my perspective on our beautiful world turning and shifting.

Janine Bradbury is a poet and a senior lecturer in contemporary writing and culture

5. Prayer at Seventy by Vicki Feaver (2019)

One of my favourite poems about climate change is Vicki Feaver’s Prayer at Seventy from her 2019 collection I Want! I Want!.

The speaker’s plea to pass her “last years with less anxiety” appears to be denied by a god who first responds by changing her into “a tiny spider / launching into the unknown / on a thread of gossamer” and who, when she begs to “be a bigger / fiercer creature”, turns her into “a polar bear / leaping between / melting ice floes”.

A reading of Prayer at Seventy by Vicki Feaver followed by an explanation by the poet.

Both images present creatures who are in precarious positions, their futures uncertain, reflecting the state of a person contemplating the unknowns of old age and death. But the poem moves beyond the personal. The reference to the melting ice floes is not solely metaphorical: it reminds us that the planet itself is in danger and every living thing is therefore vulnerable – and will be increasingly so.

Julie Gardner is a PhD candidate in literature


Read more: How poetry can sustain us through illness, bereavement and change


6. Walrus by Jessica Traynor (2022)

Walrus, from Jessica Traynor’s 2022 collection Pit Lullabies, expresses the quiet anxiety a mother feels for her child in a world of climate breakdown.

While stripping wallpaper from the box room of her house, the poet discovers a mural of the Walrus and the Carpenter from Lewis Carroll’s Through the Looking-Glass. Traynor takes part of Carroll’s poem about the Walrus and the Carpenter walking along the beach, eating the vulnerable oysters, and weaves it into her own poem.

Jessica Traynor reading poems from her collection Pit Lullabies.

Carroll’s absurd verse includes what must, at the time, have seemed an impossible image: a “boiling hot” sea. In the 21st century, this is no longer an absurdity, as Traynor knows. She makes a connection with Carroll’s poem, imploring her child:

Sleep as the sun rises and ice melts

and for want of the freeze a walrus

pushes further up a cliff-face.

It’s a complex poem that reimagines a key work of children’s literature, connecting it with the reality of the changing world. All the while the mother keeps her fears at bay for the sake of her child, “brows[ing] washing machines” with a “ball of tears” in her throat.

Ellen Howley is an assistant professor of English

7. Ocean Forest, co-created by the We Are the Possible programme

Ocean Forest is woven out of words, research, ideas and stories shared by scientists, educators, health professionals, youth leaders, writers and artists. They took part in creative writing workshops to co-create the anthology Planet Forest – 12 Poems for 12 Days for the UN Climate Conference in Brazil in 2025.

In the shallows, alert to change,

the minuscule, overlooked creatures

weave between seagrass, and weed –

live their shortened lives.

When ships pass overhead, when sands shift,

fish navigate swell, migrate beyond

where coral’s been bleached, through schools

of silenced whales and barely rooted mangroves

struggling to thrive in darkening water.

Deeper down,

pressure builds, species exist, unaware,

undisturbed. As heat and waves rise there’s hope

the unfound, the unnamed, the unpolluted

in the remotest ocean forests will survive.

By uniting disciplines and voices, the poem takes unexpected turns. It demonstrates that climate change erodes the habitats that lie beneath the ocean’s surface, and that urgent action is needed to protect disappearing species.

Yet, there is also a glimmer of hope – that in the deepest, darkest parts of the ocean, where temperatures are near freezing and there are bone-crushing pressures, maybe there are creatures that will survive human interference and pollution.

Sally Flint is a lecturer in creative writing and programme lead on the We Are the Possible programme

8. Di Baladna (Our Land) by Emi Mahmoud (2021)

Emtithal “Emi” Mahmoud is a Sudanese poet and activist who has won multiple awards for her slam poetry performances. Mahmoud performed Di Baladna at the United Nations Climate Change Conference in 2021.

Poetry – especially spoken word – helps people connect emotionally with the human side of climate-driven displacement, a topic often explained only through technical language. Talk of emissions targets, temperature thresholds or policy frameworks can distance people from the consequences of climate change. Yet poetry can cut through this abstraction.

Di Baladna (Our Land) read by Emi Mahmoud.

Mahmoud’s performance gave voice to those forced from their homes by environmental collapse, reminding listeners that climate change is not only an environmental crisis but a deeply human one, with profound effects on individuals, families and communities.

By merging vivid natural imagery with the rhythms of displacement and lived testimony, the poem urges listeners to replace passive awareness with empathy. Mahmoud implores us to feel the loss, fear and resilience of displaced communities, looking beyond news headlines and images of victimisation. Engaging with such work helps transform climate refugees from statistics into people.

Clodagh Philippa Guerin is a PhD candidate in refugee world literature

9. Flowers by Jay Bernard (2019)

At first glance, Jay Bernard’s Flowers is a circular poem (one that begins and ends in the same place), but you soon realise that the circle isn’t going to close. It opens:

Will anybody speak of this

the way the flowers do,

the way the common speaks

of the fearless dying leaves?

And closes:

Will anybody speak of this

the fire we beheld

the garlands at the gate

the way the flowers do?

And the answer seems to be, no: no one will speak of these things – the “coming cold” and the “quiet” it will bring – only the things themselves as they die. With the songs Where Have All the Flowers Gone? by Pete Seeger and Blowin’ in the Wind by Bob Dylan in its DNA, Flowers has the eternal power of a folk-lyric – prophetic and unignorable.

Kate McLoughlin is a professor of English literature

10. Place by W.S. Merwin (1987)

Climate change poetry – should it be a thing? How do poets avoid the oracular pomp it threatens? Browsing my small library, I’m shocked anew to realise that most poets lived and died blissfully innocent of our condition.

OK, what about the late John Burnside’s lyric Weather Report (“this is the weather, today / and the weather to come”)? It poignantly extrapolates from a sodden summer to his sons’ futures: “a life they never bargained for / and cannot alter”. Heartbreaking. Or the odd dread of spring in Fiona Benson’s Almond Blossom, a season characterised as Earth’s “slow incline … inch by ruined inch”. Ditto.

W.S. Merwin reads Place.

But then I reach back to the great American poet W.S. Merwin’s short prayer Place to find that grace-note of hope which surely needs to thread through all poems, whether they speak of climate change, mortality or love: “On the last day of the world / I would want to plant a tree.” Me too.

Steve Waters is a playwright and professor of scriptwriting at the University of East Anglia

This article features references to books that have been included for editorial reasons, and may contain links to bookshop.org. If you click on one of the links and go on to buy something, The Conversation UK may earn a commission.

The Conversation

Amy Wilcockson receives funding from Modern Humanities Research Association as Research Fellow for the Percy Bysshe Shelley Letters project.

Steve Waters receives funding from the AHRC.

Clodagh Philippa Guerin, Ellen Howley, Jack Reid, Janine Bradbury, Julie Meril Gardner, Kate McLoughlin, Katie MacLean, and Sally Flint do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

The Iran war has brought many old Gulf faultlines to the fore – and is creating new ones

Dubai's port and airport plus some residential buildings and hotels were struck by Iranian missiles and drones. Maria_Usp / Shutterstock

The United Arab Emirates (UAE) announced on April 28 that it would leave the global oil producers’ cartel Opec. The decision is the latest sign that the war in the Middle East has deepened animosities not only between Iran and its Gulf neighbours, but among the Gulf states themselves.

Founded in 1960, Opec is a rare success story among multilateral organisations in the region. Its policies paved the way for Gulf oil producers to have enough funds to buy back or renationalise their oil resources, and finance the spectacular development of their states.

The organisation has survived all major revolutions and wars in the region thus far – though Qatar left in 2019, during the blockade imposed on it by its Gulf neighbours.

Saudi Arabia, the largest oil producer in Opec, holds substantial leverage within the group. This has led to tension with the UAE, which has for some time pushed for higher production quotas for itself, given its spare capacity. These efforts have been to no avail.

However, its decision to leave Opec is about more than mere frustration with the organisation.

Though it was very close to Saudi Arabia in the mid-2010s, the UAE has in recent years drifted apart from its larger neighbour. This has been driven by a number of regional issues including the countries’ diverging strategies in wars in Yemen and Sudan, and their respective relations with Israel.

The UAE normalised relations with Israel in 2020, while the Saudis say they will only normalise once a Palestinian state is established.

The two countries have also recently become serious economic competitors. And although both states have been hit hard by Iran in the current war, the conflict seems to have accelerated their rivalry.

A map of the Gulf region.
Iran responded to US-Israeli attacks in February by launching strikes on countries around the Gulf and blockading the Strait of Hormuz. Peter Hermes Furian / Shutterstock

Saudi Arabia is the largest and richest country in the Gulf. But many of its transformative economic projects require political stability and a high oil price to succeed. The war has exposed the limits of its policy of tentative outreach to Iran, and of its partnership with a US that is so closely allied with Israel. So, the Saudis have strengthened defence ties with nuclear-armed Pakistan.

These deepening ties have been met with dismay in the UAE, which has close links to India. The Emiratis have been critical of Pakistan during the war, calling on Islamabad to condemn the Iranians more forcefully – something that is not possible given Pakistan’s role as a mediator in peace negotiations.

At least partly in frustration at its response to the war, the UAE recently demanded that Pakistan repay a US$3.5 billion (£2.6 billion) loan. Saudi Arabia immediately came to the rescue by providing Pakistan with financial support.

The UAE’s announcement that it would leave Opec coincided with a meeting of the Gulf Cooperation Council in the Saudi capital, Riyadh, where members sought common ground on the Iran war. This was a major affront to the Saudis.

Other Gulf frictions

The war has sparked other frictions in the Gulf, including reviving old tensions between the UAE and Iran over three islands – Abu Musa, Greater Tunb and Lesser Tunb – that Iran occupied at the time of Emirati independence from Britain in 1971. These islands strengthen Iran’s strategic position along Gulf shipping lanes.

The UAE has long claimed sovereignty over the islands, while Iran claims they were always part of its territory. Iran’s control of the three islands is thought to be part of a secret deal between Britain and the Shah of Iran around 1970, whereby the shah would renounce a claim Iran maintained to Bahrain in return for the islands.

This and other historic border disputes in the region, including between the UAE, Saudi Arabia and Oman, remain some of the most sensitive topics in modern Gulf history. For a forthcoming book on the rise of the Gulf states, I have tried to access relevant UK Foreign Office documents, but numerous freedom-of-information requests for closed material dating back to the 1960s and earlier have been denied.

Damaged buildings on Kuwait's Failaka Island.
Failaka Island off Kuwait’s coast remains partially abandoned due to the heavy damage that was inflicted during the 1990 Iraqi invasion. Sebastian Castelier / Shutterstock

The northern Gulf state of Kuwait has also been hit hard during the conflict. Here, many attacks seem to have come from Shia militias based in Iraq. These attacks have revived traumatic memories of Iran-linked political violence in the 1980s, and Iraq’s invasion in 1990.

States that cannot bypass the closed Strait of Hormuz – such as Bahrain, Kuwait and Qatar – have experienced the most economic damage from the war. To balance its budget, Bahrain is already dependent on aid from wealthier Gulf states. The UAE, Saudi Arabia and Oman, on the other hand, have the geographical means to bypass Hormuz.

Oman, which controls one side of the strait, may well benefit in the long run. This could either be through a new arrangement with Iran to charge vessels a toll, or because its ports on the Arabian Sea will increase in significance – perhaps even resurrecting some of Oman’s former glory, when it was a major regional power. This is not something neighbouring UAE and Saudi Arabia would like to see.

The reckless US-Israeli attack on Iran has thus opened up old faultlines, and could create new ones between states around the Gulf. It is also undermining the few avenues of regional cooperation that remain. This makes a fragmented and dangerous region even more so.

The Conversation

Toby Matthiesen does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Buffy the exercise slayer: Sarah Michelle Gellar’s EMS workout trend explained

The actor performs pilates moves while wearing an EMS suit. StudioLab Images/ Shutterstock

Actor Sarah Michelle Gellar, best known for her role as teenage demon slayer Buffy Summers, recently shared in an interview that she uses an “EMS suit” during workouts to stay fit. And she’s not alone in making this form of exercise a trend, with celebrities from Tom Holland to Cindy Crawford also using EMS workouts.

EMS, short for electromyostimulation, uses electrical impulses to support muscle contraction. The idea is that by stimulating your muscles to work harder, the machine helps you get more out of your workout without lifting heavy weights.

Some companies even claim that a 20-minute EMS session (roughly half an episode of Buffy the Vampire Slayer) can deliver the same benefits as hours in the gym. For people who are short on time, dislike traditional exercise or want a novel way to stay motivated, this sounds very tempting.

But while EMS does have some evidence-based benefits, particularly in rehabilitation settings, it’s far from a miracle shortcut to getting fit.

In clinical contexts, EMS works by sending small, electrical impulses through pads placed on the skin. Just like with regular workouts, these impulses stimulate nerves, triggering muscles to contract. Physiotherapists have used EMS for decades to help patients recovering from injury or surgery, especially when regular movement is difficult.

It has even been used in spaceflight simulations, in which participants have to lie in a bed tilted slightly downwards for extended periods to replicate the effects of being in space on the body. This can cause muscles to weaken, and research has explored EMS as a countermeasure to this loss, particularly when combined with resistance exercise.

What is new is the rise of “whole body EMS” in the fitness industry. Instead of placing electrodes on a single muscle group, users wear a suit or vest containing multiple electrodes that target the arms, legs, glutes, back and core. During a session, people perform squats, lunges, arm raises and more, while the suit pulses to intensify muscle activation.

In practice, the benefits depend heavily on who you are and how you train.

Does it work?

Research suggests that five to six weeks of EMS treatment can help maintain strength and muscle mass, with results comparable to a conventional exercise programme. A 2023 meta-analysis supports this, finding that one to three whole-body EMS sessions per week, over six to 12 weeks, can produce modest improvements in muscle mass, strength and power.

Another separate study also reported strength gains after a similar frequency of use in non-athletic, sedentary adults.

For people who are sedentary or have joint pain, EMS may offer an alternative way to stimulate muscles without the stress of conventional exercise.

However, it is not a substitute for the broad, well established, whole-body health benefits of regular exercise, which extend beyond muscles to the cardiovascular and metabolic systems, among others.

This distinction becomes clearer when we look at regular exercisers. A recent study, which examined EMS use in athletes and trained sportspeople, found little to no benefit on performance measures such as jumping, sprinting or agility.

A woman performs a bodyweight squat while wearing an EMS suit.
EMS suits may not be as beneficial for regular exercisers. Chester-Alive/ Shutterstock

Furthermore, studies examining strength outcomes report inconsistent findings, with results varying widely depending on the EMS protocol used and how it’s combined with conventional training.

Taken together, these findings suggest that for people who are already active, EMS probably won’t provide a meaningful edge, as conventional exercise is already very effective. Lifting weights, sprinting or doing bodyweight exercises all produce strong, natural muscle contractions without the need for electrical stimulation.

Should you try it?

Overall, the research on EMS is promising but far from definitive. Many studies are small, short term, or use differing protocols, making comparisons difficult.

Some studies combine EMS with exercise, while others compare it with doing nothing at all. This makes it difficult to determine whether improvements come from EMS alone, from its combination with exercise, or simply because participants are being more active.

Because EMS can produce strong, involuntary muscle contractions, overuse can also lead to severe muscle soreness or, in rare cases, a condition called rhabdomyolysis. This occurs when muscle tissue breaks down rapidly and releases proteins into the bloodstream, harming the kidneys.


Read more: High-intensity workouts may put regular gym goers at risk of rhabdomyolysis, a rare but dangerous condition


Several cases of rhabdomyolysis have been reported after intense EMS sessions, even after a single workout. For this reason, it is recommended to start slowly, stay hydrated and use EMS under professional supervision.

Cost is another factor. Whole body EMS sessions can be expensive, and purchasing a suit for home use can be even more costly. For many people, that money might be better spent on evidence-based personal training or structured exercise programmes.

For those who can afford it, EMS should be viewed as a supplement to, not a substitute for, regular exercise. The strongest evidence for improving health, fitness and body composition still comes from simple, consistent habits: lifting weights a few times a week, walking more, cycling, swimming, jogging or following a gym programme.

There’s no shortcut around the basics. EMS may add a spark, but it can’t replace the benefits of real exercise.

The Conversation

John Noone does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

The four-day week won’t happen overnight, but it could transform how we live and work

buritora/Shutterstock

A century ago, the five-day working week helped reshape society. It was introduced at scale by industrial pioneers to address not only worker wellbeing but also economic pressures.

US industrialist Henry Ford was among the first to give workers two full days off per week, 100 years ago this month. Ford suspected that giving workers a “weekend” would increase overall productivity – and he was correct.

Today, as advances in artificial intelligence accelerate and concerns about job security grow, a similar question is emerging. Could reducing working time again help societies adapt to these seismic changes?

The evidence increasingly suggests it can, but not in the simplistic way that is often portrayed. The four-day week is not just a workplace benefit. It is a potential tool to improve wellbeing, support families and rethink how work is distributed in society.

Research across multiple countries, including large-scale pilots in the UK and Portugal, shows that reducing working time can deliver meaningful benefits for both employees and organisations.

In a 2025 study of four-day week adoption, my colleagues and I found improvements in sleep, exercise and quality of working life. There were positive implications for both the mental and physical health of employees.

Our research showed productivity at work can also increase, alongside reductions in absenteeism and staff turnover. And it can be beneficial for an employer’s social image.

However, the most important insight is not about productivity but what happens outside work. After all, time is a social resource, not just an economic one.

When people move to a four-day week, they do not simply rest more. They reallocate time in ways that have broader implications for society.

Across our research, participants said they spent more time with family and friends, engaged in community activities, and invested in their physical and mental health through exercise, hobbies and self-care.

These are not trivial changes. Over time, they contribute to stronger social ties, better mental health and more resilient communities.

There are also important gender implications. Early findings suggest that reduced working time can lead to fathers being more involved in caring for their children and other domestic responsibilities. While this does not automatically solve gender inequality, it creates conditions that make more equal divisions of labour possible.

In this sense, the four-day week is not just about work. It is about how societies organise care, relationships and everyday life.

The challenge in service sectors

Critics of a four-day week often point out that it is harder to implement in service sectors such as healthcare, childcare, manufacturing, hospitality or retail. This is true, but it is not a reason to dismiss the idea.

In these sectors, work is tied to time, presence and staffing levels. Reducing working hours often requires more complex redesign, including changes to rotas, additional hiring or upfront investment. Colleagues and I have highlighted these challenges in relation to the NHS in the UK.

But these challenges should be seen as design problems, not impossibilities. In fact, the potential benefits to society may be even greater in these sectors. Improved wellbeing and reduced burnout among healthcare staff and care workers can translate into better quality of service and fewer mistakes.

A female healthcare worker on a break outside on a hospital balcony, with a coffee and her phone in her hand.
Reduced working hours for healthcare staff could lead to fewer clinical mistakes. Iryna Inshyna/Shutterstock

A more important concern is inequality. If working time reductions are adopted unevenly, there is a risk that some workers will be excluded – often those in lower-paid or frontline roles. This is a valid concern, but not an argument against the four-day week. Rather, it is an argument for implementing it more thoughtfully.

Instead of asking whether all jobs can adopt the same model, the focus should be on how different forms of reduced work time can be adapted across sectors. This could include shorter daily hours, staggered schedules or phased time reductions.

The future of work

The renewed interest in reducing the amount of time we spend working is not happening in isolation. It is closely linked to broader debates about automation, productivity and the future of work.

If technological advances continue to increase productivity, a fundamental question arises: who benefits from these gains?

Historically – during the Great Depression, for example – working time reductions have been one way of redistributing those benefits. Compared with more radical proposals such as universal basic income, the four-day week offers a more direct and socially embedded way of sharing gains in productivity.

The four-day week is not a universal solution, and it will not look the same everywhere. But the evidence shows working less can go hand-in-hand with maintaining productivity.

It can also support a shift towards a society where time is valued not only as an economic input, but as a foundation for wellbeing, relationships and participation in community life.

A century after the five-day week helped define modern work, there may be another turning point on the horizon. This time, the real question is not whether we can afford to reduce working time, but whether we can afford not to.

The Conversation

Rita Fontinha’s employer, the University of Reading, has received funding from the Portuguese Government and the Azores Regional Government to conduct academic research on four-day working week pilots.
