Your browsing history could soon set your grocery bill — and Canada isn’t ready for it

Parliament voted down a motion on April 15 to ban a practice most Canadians have never heard of, but that retailers are already rolling out: surveillance pricing.

Also called algorithmic personalized pricing, the practice uses personal data to estimate how much consumers are willing to pay, then adjusts the price accordingly. Two shoppers, same store, same item: two different prices, generated by data neither of them can see.

The NDP motion urges the government to prohibit surveillance pricing both in stores and online. The Liberals and Conservatives voted it down. NDP leader Avi Lewis had called the practice “unfair” and “downright creepy” at a news conference days earlier.

A poll by Abacus Data conducted in March found that while most Canadians are not familiar with the term, when the practice was explained to them, 52 per cent said it should be banned. Another 31 per cent of the Canadians surveyed said it should be allowed but more strictly regulated.

That matters for Canadians struggling with cost-of-living pressures: the practice is spreading among retailers, and the laws meant to protect consumers were not designed to catch it.

Not the same as surge pricing

A useful distinction first. Dynamic pricing, the kind used by airlines, hotels and rideshare companies, adjusts based on conditions like demand, the time of day or weather, and applies the same algorithm to every customer equally.

Uber’s surge pricing is the textbook example of dynamic pricing: every rider in the same area at the same moment sees the same multiplier. Annoying? Perhaps. Personalized? No.

Surveillance pricing is different. Where dynamic pricing responds to market conditions, surveillance pricing responds to the individual. It draws on browsing history, device, postal code, purchase frequency and inferred income to predict a person’s willingness to pay.

Dynamic pricing seems to ask: “What are the conditions right now?” Surveillance pricing asks: “Who are you, and how much can we extract from you?”
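The distinction can be sketched in a few lines of code. This is a purely hypothetical illustration: the function names, profile fields and multipliers are invented for clarity, and no real retailer's algorithm is shown.

```python
# Hypothetical sketch contrasting the two pricing models.
# All inputs and multipliers are invented for illustration only.

def dynamic_price(base_price: float, demand_ratio: float) -> float:
    """Dynamic pricing: responds to market conditions only.
    Every customer facing the same conditions sees the same price."""
    surge = max(1.0, demand_ratio)  # e.g. riders per available driver
    return round(base_price * surge, 2)

def surveillance_price(base_price: float, shopper_profile: dict) -> float:
    """Surveillance pricing: responds to the individual.
    The adjustment depends on data about who the shopper is."""
    multiplier = 1.0
    if shopper_profile.get("inferred_income") == "high":
        multiplier += 0.15  # predicted higher willingness to pay
    if shopper_profile.get("purchase_frequency", 0) > 10:
        multiplier += 0.05  # habitual buyers are less price-sensitive
    return round(base_price * multiplier, 2)

# Two shoppers, same store, same item, same moment:
print(dynamic_price(10.00, 1.2))                                 # 12.0 for both
print(surveillance_price(10.00, {"inferred_income": "high"}))    # 11.5
print(surveillance_price(10.00, {"inferred_income": "modest"}))  # 10.0
```

The point of the sketch is the function signature: the dynamic version never sees who the customer is, while the surveillance version takes the shopper's profile as an input.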

How much is happening in Canada?

It’s difficult to know how much surveillance pricing is happening in Canada, if at all. So far, there has been no confirmed Canadian case, and the practice is opaque by design.

The Competition Bureau’s discussion paper, published in 2025, reported that more than 60 companies in Canada offer services that use algorithms to optimize pricing across retail, hospitality, transportation and ticketing.

The bureau’s What We Heard report, published in January after a public consultation on algorithmic pricing, identified transparency as Canadians’ chief concern. Shoppers do not know whether the price in front of them has been personalized to them specifically.

The most prominent real-world example came from south of the border. An investigation by Consumer Reports and Groundwork Collaborative documented Instacart customers in the U.S. being charged up to 23 per cent more than other shoppers for the same items, at the same store, at the same time.

Nearly three-quarters of grocery items tested were offered to shoppers at multiple price points simultaneously.

Instacart disputed the characterization, but halted the program in December 2025 following public backlash. New York Attorney General Letitia James has since demanded that Instacart share information about its price-testing experiments.

Canadian retailers, meanwhile, are assembling the same underlying toolkit: digital shelf labels that allow prices to be changed remotely in seconds, AI-driven pricing engines and the loyalty card data that feeds them.

Where Canadian law runs out

Most Canadians assume that if something feels deceptive at checkout, the law catches it. For some familiar problems, that is true.

Recent amendments to the Competition Act introduced an explicit ban on drip pricing — the practice of advertising a low price and then adding unavoidable fees at checkout.

The Cineplex case is the most prominent recent example of that law in action. The Competition Tribunal levied a record $38.9 million penalty against the cinema chain for concealing online booking fees, a ruling the Federal Court of Appeal upheld in January. Cineplex has since sought leave to appeal to the Supreme Court of Canada.


Read more: Cineplex’s $38.9 million fine is a wake-up call about corporate sustainability practices


But surveillance pricing slips past this framework entirely. The price displayed is technically accurate. No fee is buried and no phantom “regular price” is invented. What is hidden is the process.

Deceptive marketing rules assume everyone is offered the same price and someone is misrepresenting it. Surveillance pricing inverts the premise: everyone is offered a different price, and almost no one knows it’s happening.

The Competition Bureau’s mandate is to protect and promote competition, not consumer fairness. Its tools were built to catch anti-competitive behaviour between companies, not price discrimination between individual shoppers.

Similarly, provincial consumer protection laws like Ontario’s Consumer Protection Act are designed to deal with misleading or unfair practices in one-on-one transactions — not large-scale, automated differences in how millions of consumers are treated.

Privacy law, in turn, governs consent to data collection, not consent to how that data is used to shape what you pay. Three legal regimes circle the problem; none quite covers it.

What other jurisdictions have done

In November 2025, New York’s Algorithmic Pricing Disclosure Act took effect, requiring any business that uses personalized pricing to display a notice reading “this price was set by an algorithm using your personal data,” with civil penalties of up to US$1,000 per violation.

The European Union has required disclosure of personalized pricing since its 2019 consumer rights overhaul. Manitoba’s Bill 49, introduced March 17 by the NDP government of Premier Wab Kinew, would go further than either of those measures and prohibit surveillance pricing outright, making it an unfair business practice.

When asked if he would follow suit, Ontario Premier Doug Ford said he would not, telling reporters he believes in a “free market” and a “capitalist society.”

Federal AI Minister Evan Solomon said the federal government is “looking into” the issue, but that it would fall under the purview of the Competition Bureau.

What real protection would require

In the short term, shoppers can use private browsing mode, turn off location services and log out of loyalty apps before they shop.

These, however, are only workarounds. They place the burden of navigating an opaque system on the least-informed party in the transaction and they require a level of digital awareness some shoppers don’t have.

Real protection means either a federal disclosure mandate along New York’s lines, or an outright prohibition like the one Manitoba is pursuing. The Competition Bureau can keep monitoring, but monitoring is not enforcement, and competition law wasn’t designed to police unfairness on its own.

Until Parliament or the provinces close the gap, Canadian consumers have no reliable way of knowing whether the price they see is the price everyone else sees.

The Conversation

Jake Okechukwu Effoduh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

How should schools teach AI? 3 models to consider

Students across Canada are exposed to artificial intelligence (AI) whether through search engines, writing assistants, automated recommendation systems or social media.

That everyday exposure raises a first, fundamental question: What should students learn about AI? This goal is often described as AI literacy, which combines conceptual understanding with responsible use and critical judgment about AI.

A second, more practical, question is: Where should learning about AI sit in the curriculum? Since education is a provincial responsibility, Canada has no single approach.

Teaching AI literacy in schools builds on what provinces already require students to learn about digital technologies. How provinces do this determines how much time students get, what can be assessed and how teachers must be prepared.

In practice, these different curriculum models, plus the supports to ensure teachers can effectively teach them, will shape whether AI education becomes a set of tips for using apps — or a form of digital citizenship grounded in concepts, ethics and critical thinking.

What AI literacy implies for schools

Several provinces and educator associations have developed, or are developing, frameworks pertaining to AI in K-12 education. Other organizations have proposed similar frameworks that specify the concepts and competencies students should develop, or that guide what meaningful AI education would require in schools.

The United Nations Educational, Scientific and Cultural Organization (UNESCO) sees AI literacy as spanning technical understanding and ethical awareness, and articulates a vision of students as AI co-creators and responsible citizens.

A U.S.-based framework, AI4K12, outlines what students should learn about AI across grade levels, and identifies five “big ideas” about AI: perception, representation and reasoning, learning, natural interaction and societal impact.

Two students work on a robot.
AI frameworks guide what meaningful AI education might look like in schools. (Allison Shelley/The Verbatim Agency/EDUimages), CC BY-NC

The U.S.-based International Society for Technology in Education (ISTE) proposes standards that engage students as empowered learners, computational thinkers, innovative designers and digital citizens.

Digital learning in provincial curricula

Across Canada, provinces integrate digital learning through different models — but note that these models are ideal types. Several provinces combine them. Each model can support AI literacy, but each creates different conditions for time, assessment and teacher preparation.

1. A dedicated subject or domain, where digital skills or computer science have their own courses. In many systems, teachers have been specifically trained for the subject. This configuration typically supports clearer sequencing across grades and more consistent assessment.

For example, from kindergarten to Grade 9, British Columbia teaches technological learning within its applied design, skills and technologies curriculum, with Grade 8 requiring the equivalent of a full-year course that schools can deliver through modules.

Newfoundland and Labrador frames technology education as a hands-on area that can include programming and controlling physical devices, and offers two dedicated computer science courses in Grades 9 and 10.

Ontario’s computer studies curriculum creates dedicated course space for learning computing concepts. Ontario also illustrates how systems can shift emphasis over time: coding and digital competencies can be embedded within compulsory subjects, while a separate computer studies curriculum expands opportunities for sustained progression.

A dedicated subject provides protected classroom time to teach related core ideas (for example, data, algorithms and modelling) and to assess learning beyond using tools, while still making possible cross-curriculum learning.

It also creates clearer conditions for implementing ambitious AI literacy frameworks such as AI4K12 and UNESCO’s guidance. This is because a teacher trained to translate specialized concepts for non-specialists leads instruction and can support sustained, project-based learning.

However, in many provinces, this “dedicated subject” exposure remains intermittent across K–12, often concentrated in a small number of courses, or sometimes a single year-long course with limited weekly time. This constrains cumulative progression and makes outcomes sensitive to local staffing capacity and teacher qualification.

2. Digital learning embedded in existing subjects. In New Brunswick, digital learning in Grades 6 to 8 is organized through the Middle Block, where Technology is one learning area among others. Teachers must address digital learning alongside a much wider set of practical and developmental goals, rather than teaching it as a fully separate subject with protected time.

Two teachers at a table in discussion.
How AI-related professional development will help teachers depends partly on learning expectations relevant to their work. (Allison Shelley/The Verbatim Agency/ EDUimages), CC BY-NC

This approach can make learning more connected to real problems and to other subjects. But it can also limit how much time is devoted to AI-related concepts, and how effective that learning is, when many other objectives must be covered within the same program structure. The trade-off is generally capacity: teachers are asked to carry new conceptual content without necessarily having the time, training or materials.

3. A “transversal” framework, where competencies that underpin digital technology are meant to be integrated across subjects.

For example, Manitoba teaches literacy with information and communication technology (ICT) across the curriculum, related to thinking critically and creatively about information and communication, “as citizens of the global community, while using ICT safely, responsibly and ethically.” Alberta’s information and communication technology program of studies states that it is “not intended to stand alone” but should be infused within core courses.

Québec has a province-wide digital competency framework describing 12 dimensions of confident, critical and creative uses of digital technology.

When competencies related to digital learning are integrated across subjects, every student can be reached, not only those who choose electives.

However, without clear accountability tying underlying competencies to particular digital media uses, this approach can potentially yield uneven learning experiences from school to school. Every teacher must also receive sufficient professional development on the subject.

What ‘AI-ready’ could mean

Each model requires different policy supports. Dedicated subjects need staffing and teacher preparation pipelines. Embedded approaches need sustained professional learning and realistic expectations for non-specialist teachers. Transversal frameworks need clear markers for student progression and assessment strategies, otherwise implementation depends on local enthusiasm.

For many provinces, the path forward is likely not choosing one model, but combining the strengths of all three.

Two students work on robot models.
The path forward for teaching AI literacy is likely combining the strengths of different curricular models. (Allison Shelley/The Verbatim Agency/EDUimages), CC BY-NC

This requires grounding in foundational knowledge of AI, as well as developing both discipline-specific and transdisciplinary competencies. UNESCO’s AI competency framework for teachers makes a similar point: governments should anchor AI learning in curriculum policy, build collaboratively with educators and invest in teacher preparation and resources.

Canada’s provincial diversity creates conditions for comparative analysis. If researchers study student learning associated with different models, this could help identify which policy arrangements, supports and implementation strategies are associated with stronger and more equitable forms of AI education.

Comparison may become even more salient with the OECD’s planned PISA 2029 media and artificial intelligence literacy assessment, which will be designed to examine whether students have had opportunities to learn to engage critically and responsibly with digital and AI systems.

The Conversation

Hugo G. Lapierre receives funding from the Fonds de recherche du Québec (FRQSC), the Social Sciences and Humanities Research Council (SSHRC) and IVADO.

Normand Roy receives funding from Fonds de recherche du Québec (FRQ), le ministère de l'Éducation du Québec (MÉQ), Social Sciences and Humanities Research Council (SSHRC).

Patrick Charland receives funding from the Fonds de recherche du Québec (FRQSC), the Social Sciences and Humanities Research Council (SSHRC) and UNESCO.

How wildlife conservancies perpetuate green colonialism in Kenya

The story of wildlife conservation in East Africa is often told through spectacular images of beautiful scenery and the region’s charismatic animals. But a question seldom asked is how those efforts include and affect the communities that live alongside wildlife.

At the core of Africa’s rich biodiversity are Indigenous communities, which include pastoralists and forest peoples whose ways of life and knowledge are critical to conservation.

a giraffe standing in a grassy area
A giraffe in the Maasai Mara National Reserve in southern Kenya. (Kariũki Kĩrigia)

However, these communities have historically been blamed for biodiversity loss. Pastoralists such as the Maasai are often accused of keeping “excessive” numbers of livestock, overgrazing and degrading the land.

Such tropes against African Indigenous communities linger and continue to shape conservation, which has led to strict and often punitive regulations.

My ongoing research in the Maasai Mara region of southern Kenya looks into wildlife conservancies. The region is home to the Maasai, as well as other Indigenous Peoples, and rich biodiversity. My research examines how conservancies impact local communities on whose land conservation is practised.


Read more: Tanzania’s Maasai are being forced off their ancestral land – the tactics the government uses


What are wildlife conservancies?

The decline in wildlife in Kenya led to the birth of wildlife conservancies on both community and private lands. Kenya’s 2013 Wildlife Conservation and Management Act defines a wildlife conservancy as “land set aside by an individual landowner, body corporate, group of owners or a community for purposes of wildlife conservation.”

Organizations like the Kenya Wildlife Conservation Association (KWCA) view them differently. They see conservancies as land that is not set aside, but rather managed for the well-being of wildlife and communities.

In essence, the government maintains the view of fortress conservation that entails separating humans from nature, while the KWCA imagines communities co-existing with wildlife.

At the core of wildlife conservancies is land. Land ownership largely determines the type of conservancy established: private, community, group or co-managed.

Private conservancies

Kariũki Kĩrigia explains his research into wildlife conservancies in Kenya. (University of Toronto Black Research Network)

In northern Kenya, private conservancies have largely been established in the highlands that were settled by white farmers during the colonial period.

These private conservancies have been criticized as “settler ecologies” built on a “big conservation lie” because they obscure the history of violent, colonial land dispossession, the criminalization of Indigenous pastoralist livelihoods and the exploitation of land and biodiversity to profit from conservation.

Additionally, the normalization of militarized violence in conservation, the appropriation and control of conservation revenues meant for communities, and the restriction of pastoralists’ access to scarce water and pasture even during droughts amount to what is known as green colonialism.

The contradiction is that it was British colonial rule in Kenya that created the need for wildlife conservation starting in the 1940s. Extensive devastation of wildlife through sport hunting, wildlife trade and culling meant animals needed greater protection from humans, primarily through state-protected national parks and reserves.


Read more: Operation Legacy: How Britain covered up its colonial crimes


Group conservancies

Group conservancies are mostly found in southern Kenya, where individual plots are amalgamated through long-term land leases to conservation investors who, in turn, establish wildlife conservancies.

In the Maasai Mara, local communities typically lease their land for conservancies in exchange for lease payments, regular access to pasture and investment in initiatives such as school bursaries and infrastructure development.

One such example is the Nashulai Maasai Conservancy, established in July 2016. It’s the first Maasai conservancy in the Maasai Mara created by Maasai peoples.

Wildlife conservancies in Kenya are an important way to enhance land security and conservation built around communities. Community and group conservancies are based on the idea of using the land, water and pastures in ways that support humans, livestock and wildlife.

As part of my research, I interviewed community members who told me about some benefits brought by the conservancy. These included access to post-secondary education through a community college, women’s empowerment projects such as soap made from elephant dung, river restoration for household water access and food aid during the COVID-19 pandemic.

Challenges faced by group conservancies

Many group conservancies employ strict access rules and hefty fines against human and livestock presence. These practices often agitate communities as they echo fortress conservation’s tactics of separating humans and wildlife.

Land lease agreements between conservancies and landowners are often crafted in complex legal language that only a few community members can comprehend. It is critical that communities are provided with a detailed explanation of what leasing land to a conservancy entails beyond the benefits promised.

In addition, community benefits are undermined by local elites who dispossess others of land during land subdivision and then benefit unfairly from leasing the unjustly acquired land to conservancies.

Biodiversity conservation in East Africa and the Global South more broadly depends significantly on external funding from organizations in the West, especially non-governmental organizations, which British conservation scholar George Holmes calls “conservation’s friends in high places.”

However, Indigenous communities face onerous requirements and processes to access funding for conservation and climate change initiatives.

In a recent guest lecture at the University of Toronto, Kimaren Ole Riamit, the director of the Indigenous Livelihoods Enhancement Partners (ILEPA), explained how African Indigenous communities experience the negative impacts of climate change despite being the least responsible for global warming, lose land to conservation and carbon projects and face significant hurdles in accessing resources to address climate-related challenges.

Initiatives meant to empower communities are often captured by local elites and corporate interests that appropriate and control resources and benefits expected to flow to communities.

Carbon offsetting

Wildlife conservancies have also gained the attention of carbon offset markets, which are expanding fast in Kenya. The Northern Kenya Rangelands Carbon Project and the One Mara Carbon Project are some of the main carbon projects in the country’s northern and southern rangelands.

Kenya’s rangelands sequester atmospheric carbon dioxide, which is then measured and verified by certification bodies such as Verra, and converted into tradeable carbon credits. These are sold to organizations seeking to offset their carbon emissions.

Carbon projects enter into long-term contracts with landowners, typically around 40 years, and spell out how the landowners should utilize the land to ensure adequate carbon sequestration and storage. Landowners receive expert knowledge that employs technologies and measurements of carbon that are foreign to local communities.

a zebra in a grassland area
A zebra in the Maasai Mara National Reserve in southern Kenya. (Kariũki Kĩrigia)

Yet the same communities that have long managed lands and ecosystems sustainably are treated as lacking the ecological knowledge necessary for biodiversity conservation and carbon sequestration.

The outcome is that the owners of the technologies and what is deemed “expert” knowledge become the owners of the value generated from the land owned by communities.

While such initiatives generate millions of dollars in revenue, it has been shown that less than two per cent of climate finance reaches Indigenous Peoples, smallholder farmers and local communities in developing countries.

To create genuinely sustainable ecological conservation and improved quality of life for local communities, the government must focus on empowering communities through meaningful participation in initiatives.

Organizations like ILEPA and the Nashulai Maasai Conservancy are working to empower Indigenous communities in Kenya. These kinds of community-led efforts exemplify how conservation can, and must, include the people who call East Africa’s rich biodiverse landscapes home.

The Conversation

Kariuki Kirigia has received funding from the Black Research Network at the University of Toronto, the Ryoichi Sasakawa Young Leaders Fellowship Fund, and SSHRC-IDRC through the Institutional Canopy of Conservation research project.

Here’s why Canada needs to ditch age-based immigration points

Canada’s Comprehensive Ranking System (CRS) was established in 1967 to respond to historic racism and nationality bias in Canada’s immigration system. Granting points for age, education, official language skills, Canadian work experience and family ties, the CRS ranks applicants for permanent residency.

The federal government recently proposed changes to CRS points, including the elimination of some point categories. While family-related points are proposed for removal, age-based criteria are not.

My research delves into the legal, ethical and policy reasons why Canada should ditch age-based immigration points.

Age-based points are Charter violations

The Canadian Charter of Rights and Freedoms explicitly prohibits age discrimination in the equality clause of Section 15(1). According to the Supreme Court’s Singh v. Minister of Employment and Immigration decision, the Charter applies to anyone who is physically present in Canada, including non-citizens.

Many people who apply for permanent residence do so from within Canada. In fact, the federal government has introduced a two-year initiative — in 2026 and 2027 — to fast-track permanent residence for skilled workers who are already in Canada in specific high-demand sectors.

According to the lawyers I interviewed for my book, Age and Immigration Policy in Canada, such individuals would have solid legal grounds to launch a Charter challenge. They could claim that the points system constitutes age discrimination in violation of Canadian law.

Ageist immigration policies

Age discrimination embedded in the points system also contradicts Canadian values. Currently, a person gets zero points for age if they are under 18 or over 45.

Imagine the public outcry if a person received zero points for being a woman, or for being a racialized person. Many Canadians would rightly call out such overtly sexist and racist policies.

Similarly, points for age undermine the merit-based foundations of the CRS. They contradict rights-based hiring practices that prohibit asking candidates their age and stereotyping older workers.

My archival research suggests the architect of the CRS, then-Deputy Immigration Minister Tom Kent, did not have a clear policy rationale for the initial age-based points. One historian has argued: “The points system, as it was originally conceived, has as much to do with politics as with labour markets.”

There is also an internal contradiction within the points system between the decreasing points for age and the increasing points for education and work experience. The latter reward the passage of chronological time; the former penalize it.
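The cliff this rule creates can be made concrete with a minimal sketch. Only the zero-point cut-offs (under 18, over 45) come from the description above; the function name, peak band and taper are invented placeholders, not the actual CRS point table.

```python
def age_points_sketch(age: int, max_points: int = 100) -> int:
    """Illustrative age-points rule. Only the zero-point cut-offs
    (under 18, over 45) come from the article; the peak band and
    taper below are invented, not the real CRS values."""
    if age < 18 or age > 45:
        return 0                         # the cliff: zero points outside the band
    if age <= 29:
        return max_points                # hypothetical peak band
    return max_points - (age - 29) * 6   # hypothetical taper toward 45

print(age_points_sketch(25))   # 100
print(age_points_sketch(45))   # 4: almost nothing just inside the band
print(age_points_sketch(46))   # 0: the cliff the article criticizes
```

Whatever the exact numbers, the structure is the same: one birthday can erase a candidate's entire age score.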

Age-based points are bad policy

Policymakers and public commentators sometimes justify age discrimination in the points system by claiming that older immigrants are likely to take more from Canada than they are to give. But research shows that this is empirically incorrect.

First, the Canada and Québec pension plans are contributory: benefits are calculated from lifetime earnings and contributions in Canada. For Old Age Security, people must have resided in Canada for at least 10 years to qualify, and for at least 40 years to receive the maximum benefit.
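The Old Age Security rule described above amounts to a simple proration, sketched below. This is a simplification: the real rules include further conditions (residence after age 18, international agreements, deferral), so treat it as an illustration only.

```python
def oas_fraction(years_resident: int) -> float:
    """Simplified Old Age Security proration: no benefit under 10
    years of Canadian residence, full benefit at 40 years, and a
    prorated share of 1/40 per year in between. Real rules include
    further conditions not modelled here."""
    if years_resident < 10:
        return 0.0
    return min(years_resident, 40) / 40

# An immigrant who arrives at 45 and retires at 65 has 20 years:
print(oas_fraction(20))   # 0.5, half the maximum benefit
print(oas_fraction(40))   # 1.0
```

Under this proration, an immigrant who arrives mid-career can never reach the full benefit, which is the mechanism behind the poverty gap described next.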

As a result, immigrants to Canada accrue fewer contributions and are more likely than any other group of Canadians to be poor when they retire.

Second, while some may assume older immigrants will be a burden on the health-care system, the “healthy immigrant effect” is well-documented.

Newcomers also tend to under-use health services. What’s more, there’s a waiting period for universal health coverage. Some immigrants actually return to their home countries to access time-sensitive or culturally appropriate care.


Read more: Why is Canada snubbing internationally trained doctors during a health-care crisis?


Third, people over the age of 45 contribute indirectly to the Canadian economy in ways that are not captured in formal economic data. For example, they undertake unpaid work in family businesses or provide free child care to enable their adult children to work outside the home.

Given these legal, ethical and empirical concerns about age-based points, the time has come to eliminate them altogether. Ongoing public consultations on the CRS are a historic opportunity for Canadians to oppose the age discrimination that has been normalized in our immigration system for too long.

The Conversation

Christina Clark-Kazak receives funding from the Social Sciences and Humanities Research Council of Canada.

The bias in medical research: Africa carries a huge disease burden but is missing from clinical trials

Modern medicine prides itself on being a universal science, built on evidence from clinical trials.

But there’s a bias in medical research. While Africa accounts for roughly 25% of the global disease burden and 19% of the global population, the continent’s people are largely invisible in clinical trials.

The scale of the erasure is revealed in a landmark study of 2,472 randomised controlled trials published globally between 2019 and 2024.

I led this team of researchers, who scrutinised the world’s most influential medical publications to quantify African representation. They included the New England Journal of Medicine, The Lancet, the Journal of the American Medical Association, Nature Medicine, and the British Medical Journal. There were also three leading cardiovascular journals in the study: Circulation, the European Heart Journal and the Journal of the American College of Cardiology.

I am a physician-scientist working at the intersection of cardiometabolic epidemiology and biomedical data science. I also focus on large-scale population studies in Africa and data-driven cardiovascular prevention.

Randomised controlled trials are a cornerstone of evidence-based medicine. Introduced in the mid-20th century, they rigorously evaluate the safety and effectiveness of treatments by randomly assigning participants to different groups. This is done to minimise bias. Trials like these have been central to major medical breakthroughs, from cardiovascular therapies to vaccines. They continue to guide clinical decisions and the development of new treatments worldwide.


Read more: African countries are signing bilateral health deals with the US: virologist identifies the ‘red flags’


What we discovered

Our findings show a profound imbalance in the global clinical research landscape. Across the five most prestigious general medical journals, only 3.9% of trials were conducted exclusively in Africa. In cardiovascular health, the numbers drop to a statistical whisper. Of the major trials published in leading cardiology journals, just two studies (0.6%) were conducted solely on African soil.

This is a crisis of scientific accuracy. When clinical trials exclude African populations, they produce evidence that lacks “external validity”. This refers to how well the results of a study can be generalised beyond the participants. It asks whether findings from a clinical trial will still hold true when applied to different populations, settings, or real-world conditions.

Without that validity, doctors are essentially conducting unmonitored experiments on millions of patients every day.

Modern medicine cannot claim to be universal if entire populations remain invisible in the evidence base. Biology, health systems and disease patterns are not identical across the world.


Read more: Africa is losing health workers when it can least afford to – a pattern rooted in colonial history


The gap and why it matters

Many treatments used across the continent are based on evidence generated in non-African populations, raising concerns about their applicability.

Moreover, most Africa-based trials still focus on infectious diseases, despite the rising burden of non-communicable diseases such as cardiovascular disease.

Emerging evidence shows that genetics, environment and diet can radically alter how a body responds to a drug. It therefore makes no medical sense that an entire continent is left out of the trial net.

There’s also evidence showing that certain treatments have different safety profiles in Black patients. Treatments for diabetes and gout are two examples. So are certain common blood pressure medications, such as angiotensin-converting enzyme (ACE) inhibitors. Research shows that they carry a three- to four-fold higher risk of severe, life-threatening side effects in people of African descent compared to other populations.

When clinical trials exclude populations, doctors are forced to extrapolate findings from one population and apply them to another.

The study also highlights a dangerous lag between global research funding and the evolving reality of African health. The new data show that nearly 76% of trials conducted exclusively in Africa focused on infectious diseases. But the continent is undergoing a massive epidemiological shift. Non-communicable diseases – heart disease, stroke, and diabetes – now account for about 38% of all deaths in many African nations.

The middle class in Africa has tripled to 300 million people from roughly 100 million people in the early 2000s. More people are now living long enough with lifestyles that increase the risk of chronic conditions such as heart disease, diabetes, and hypertension. Consequently, there is a growing need and market for long-term treatments that manage these diseases, rather than short-term therapies for infections. Yet cardiovascular trials on the continent remain scarce.

Even within the continent, the data show deep “black holes” of information. South Africa accounted for over 62% of all trials conducted on the continent. Central Africa, a region that’s home to more than 180 million people, was virtually non-existent in the global research record. It contributed less than 3% of the continent’s limited trial output. Possible reasons include South Africa’s decades of cumulative investment, seen in stronger academic hubs, research governance, experienced trial units, and more established sponsor relationships. Other regions face barriers like fewer resourced research institutions, less access to trial platforms, and sometimes language and publication issues that can reduce visibility in top-tier journals.

The inequity extends into the hierarchy of science itself. Even when African sites are included in large, multicontinental trials, they are often relegated to the role of “recruitment hubs” rather than scientific partners. Our study found that African scientists led only 3.6% of multicontinental trials that included an African site.


Read more: Africa needs to speed up research excellence: here’s how


Towards a new era of African science

Africa should not simply be a location where studies are conducted.

It must be a place where research is conceived, led and interpreted. The current model creates a cycle of external dependence where international institutions manage the funding and the data. This leaves local research systems fragile and unable to translate evidence into national policy.

There is a need for “ring-fenced” funding for African-led research, the development of regional trial networks, and a mandate for medical journals to report on the diversity of trial populations.

There are signs of rising momentum. Organisations like the Alliance for Medical Research in Africa are working to equip a new generation of African investigators. Africa must create a research ecosystem that is too important for the global community to ignore.

The Conversation

Bamba Gaye does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

India’s Horn of Africa strategy has shifted: what it’s trying to do and how it could work

India’s engagement in the Horn of Africa and Red Sea basin was, until recently, largely limited to UN peacekeeping operations and anti-piracy patrols.

Since the second half of the 1990s, India has participated in nearly all peacekeeping operations in Africa.

Anti-piracy efforts emerged between 2008 and 2014 as piracy off Somalia and the Gulf of Aden spread across a vast maritime space. This spanned east Africa and the wider Indian Ocean, bringing threats close to India’s shores.

Indian trade routes were exposed to new security risks, so a more sustained maritime posture was needed.

From the mid-2010s, therefore, India expanded its engagement in the Horn of Africa and the Red Sea basin to secure shipping lanes linking it to global markets. At the same time, it sought to counter China’s growing naval presence along the western Indian Ocean coast, protect its diaspora and investments, and position itself as a regional security provider.

When Prime Minister Narendra Modi took office in 2014, this shift accelerated. India placed greater emphasis on proactive diplomacy, expanding high-level engagement, and trade and infrastructure links. It also pursued strategic coordination through bilateral agreements and naval exercises across west Asia and the adjoining African coastline.

India, the Horn of Africa and the Red Sea basin

This evolution reflects India’s transition from a post-colonial, non-aligned actor to a more assertive power with ambitions outside the region. It is now Africa’s third-largest trading partner. Economic interdependence is growing alongside geostrategic interests.

Drawing on our work on international security in the western Indian Ocean and sub-Saharan Africa, we argue that over the past decade New Delhi has redefined the Indian Ocean as a protective buffer and a primary theatre of influence linking the Indo-Pacific to the Red Sea. The Horn of Africa lies at the heart of this connective space.

In 2023, India declared itself the Indian Ocean’s “net security provider”. It introduced a framework to strengthen regional security, deepen economic cooperation and address shared maritime challenges.

Today, with shipping routes being recalculated and governments reconsidering their strategic partnerships, India’s position is being put to an operational test.

The Horn is a space where legitimacy, delivery and endurance determine who remains relevant after the headlines fade. For the first time, India’s quiet advance is visible. Next, it will have to solidify its presence.

Why the Horn of Africa is important for India

An initiative called the 2025 Africa-India Key Maritime Engagement, co-hosted with Tanzania, positions India as a security partner for African nations, particularly those along the Indian Ocean rim.

India is also involved in development and investment projects in the region. These include agricultural efforts to improve food security, infrastructure projects, and technical assistance in education and health. It also provides humanitarian assistance in Somalia, Kenya and Djibouti.

What distinguishes the past decade is the effort to align these activities within a broader strategic narrative – one that presents India as a partner offering technology and development without debt concerns or political conditions.

This narrative is attractive to local governments in the Horn. But it also creates a test: India must show that it can deliver consistently.

Ethiopia has an important role for India. It hosts the African Union, functions as a diplomatic centre and offers an entry point into African multilateral politics.

Somalia also matters. It sits close to critical sea lanes and is central to the security of the Gulf of Aden. External actors there can convert security assistance into political access.


Read more: China’s military support for Somalia is on the rise – what Taiwan and Somaliland have to do with it


India’s interest in Somalia and Somaliland has taken on a geo-economic dimension. Indian firms are focusing on gold and mineral resources, particularly in eastern Somaliland.

Although still limited in scale, this shift signals that India’s footprint in the Horn is no longer confined to security and development assistance. It is intersecting with resource access and supply chain strategies.

The competition

The corridor of the Red Sea, Gulf of Aden and western Indian Ocean has become a crowded arena for external powers over the past two decades.

Great powers have seen countries in the region as a platform for counterterrorism and naval reach. Small and middle powers (like Turkey, Iran and Gulf states) have sought to secure influence through ports, training missions, arms transfers, commercial access and selective mediation.

The result is a dense environment. Almost every external actor offers a package of security, finance, technology and diplomacy. Fragile local governments hedge among them.

India’s challenge is to deliver consistently through:

  • defence and security training pipelines

  • project delivery

  • stable financing instruments

  • sustained bureaucratic attention.

If India’s Africa policy is maritime-led, then things like naval exercises, information-sharing, coast guard cooperation and institutional training must become regular and visible.

If the strategy is also developmental and technological, then India must deliver flagship projects in digital infrastructure, health and agriculture.

From quiet influence to lasting power

India faces three constraints in growing its influence in the Horn of Africa.

1. Limited military capacity

India’s naval capabilities do not match the scale of China’s fleet or America’s technological edge and operational depth. This gap is not fatal if India’s aim is durable influence through partnership. It does mean that India’s leverage will depend on institutional cooperation and coalition-building.

2. Competitive density

The Horn’s architecture is made of foreign bases, port diplomacy and overlapping rivalries. India’s advantage is that it’s not overwhelmingly intrusive. But it could become just one more actor among many.

3. Institutionalisation

If India’s engagement depends too heavily on leader-level attention, it will remain vulnerable to distraction. Durable influence requires bureaucratic routines and financing mechanisms. It must survive political cycles and shifting crises. Ethiopia is a test case. High-level roadmaps will have to turn into visible digital infrastructure, health systems and agricultural support.

The broader point is that the Horn is not an empty theatre waiting for India to arrive.

The Conversation

Federico Donelli is affiliated with the Italian Institute for International Political Studies (ISPI), the Nordic Africa Institute (NAI), and the Orion Policy Institute (OPI).

Riccardo Gasco is affiliated with IstanPol Institute.

Chiara Boldrini does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Extreme heat is a growing threat to health, jobs and food security in southern Africa – study looks for practical solutions

Extreme heat is not just uncomfortable weather – it is becoming a serious threat to health, jobs and food security across southern Africa, especially for those least able to cope.

Unlike floods, cyclones, wildfires or storms, extreme heat rarely leaves dramatic images of destruction. But it builds without relief, putting strain on people’s bodies, homes and health systems.

In many cases, the danger is intensified when temperatures stay high overnight, leaving little chance to recover.


Read more: Heat with no end: climate model sets out an unbearable future for parts of Africa


Even temperatures that seem manageable can be dangerous, depending on where people live and how well they can adapt.

We are members of a group of researchers and practitioners from across southern Africa working on climate, health and policy.

We recently conducted a regional consensus study for the Academy of Science of South Africa (ASSAf) to assess how extreme heat affects health and daily life across the region. Our aim was to determine what practical steps are needed to reduce the harm caused by extreme heat.

We worked with a team of independent experts from across disciplines to review scientific evidence, regional data and policies, and to develop a shared, evidence-based view of how extreme heat is affecting the region.

Our study was unique because it brought together evidence from across health, labour, food systems and infrastructure to show how heat affects everyday life, analysing heat not just as a weather event, but as a system-wide risk.


Read more: Heat extremes in southern Africa might continue even if net-zero emissions are achieved


We found that extreme heat is already a defining climate and health threat in southern Africa.

One of the biggest mistakes in public discussion is to treat heat as simply a weather event. It is much more than that. Heat immediately increases the risk of dehydration, heat exhaustion and heat stroke. Heat can also worsen existing conditions such as cardiovascular, respiratory and renal (kidney) disease.

Heat needs to be treated as a major public health and development priority across the Southern African Development Community.

Heat is a health issue – not just a weather issue

The Southern African Development Community has 16 member states, home to more than 400 million people. Yet collectively, these countries contribute less than 1.3% of global greenhouse gas emissions.

Despite this, southern Africa is already heating up fast. Average surface temperatures across the region have risen by 1.0-1.5°C since 1961. A further 4.5-5°C increase is projected by 2050 under high-emission scenarios (where fossil fuel companies continue to pollute at the same rate as they are now).


Read more: Climate change has doubled the world’s heatwaves: how Africa is affected


In our report, we describe extreme heat as an “integrator hazard” (a multiplier). This means it is not a single risk but one that makes existing problems worse all at once.

For example, extreme heat can reduce crop yields and nutrient quality, increase water stress, worsen air quality through dust and wildfire smoke, and disrupt livelihoods that depend on safe outdoor work – all at the same time. That is what makes heat so dangerous.


Read more: South African study finds 4 low-income communities can’t cope with global warming: what needs to change


It can also make already hot environments – especially informal settlements with limited shade, ventilation or cooling – far more dangerous. Extreme heat can place added strain on electricity systems. This increases the risk of power outages just when cooling, water supply and health services are most needed.

In many communities, heat also shortens the safe life of perishable food – including food sold informally that isn’t stored in fridges. This too increases the risk of food-borne illness. That matters in a region like southern Africa where street food and informal food economies are part of everyday life.

The burden is deeply unequal

Extreme heat does not affect everyone equally. One of our study’s central findings is that the people and communities most exposed to heat are often those with the fewest resources to adapt. This includes people living in informal settlements, those without reliable electricity or cooling, communities facing water scarcity, and workers who must work outside all day.

Across much of southern Africa, many people work outdoors or in poorly ventilated environments – from subsistence farms and construction sites to factories, markets and transport hubs. Being forced by heat to slow down, stop work, or continue working under dangerous conditions affects both health and livelihoods.


Read more: Zambia’s farmers are working in dangerous heat – how they can protect themselves


Heat exposure affects daily life: children may walk long distances to school or spend hours outdoors. It affects pregnancy and newborn health, causing risks such as premature birth, low birth weight and pregnancy complications.

For this reason, extreme heat is also an ethical and justice issue. The people who contribute least to climate change are often the ones most exposed to its effects – simply because of where they live, the work they do, and the resources available to them.

What governments should do now

Extreme heat is not a problem that can be solved simply by telling people to “drink more water” or “stay indoors” – especially where safe housing, water, electricity and cooling are not guaranteed. But there are practical measures that governments and institutions can take.

These include:

  • improving locally appropriate early warning systems

  • tracking heat-related illness and deaths to guide response and planning

  • making clinics and hospitals more climate-resilient, through reliable electricity, cooling, water supply and backup systems

  • protecting workers through rest breaks, shaded areas, access to water and adjusted working hours

  • improving urban design and housing so that buildings and neighbourhoods stay cooler

  • integrating heat into national climate and health planning.

Governments can also establish public cooling spaces – such as community centres, schools or clinics – where people can safely rest during extreme heat.


Read more: Climate change: the effects of extreme heat on health in Africa – 4 essential reads


There are already promising examples in the region. South Africa has begun strengthening heat-health early warning and surveillance systems. Malawi is helping farmers adapt to rising temperatures through climate-smart agricultural planning.

Namibia has supported community-level water and resource management in heat-prone areas. These examples show that progress is possible, but they need to be expanded and sustained.


Read more: Climate information is useful at local level if people get it in good time: how African countries can build systems to share it


Heat does not respect borders, and coordinated action within countries and across borders can better prepare countries for heat disasters. National meteorological services, health departments, local governments, labour authorities and emergency services should work together so that heat warnings lead to clear, coordinated action on the ground.

For too long, extreme heat has been treated as a secondary climate risk. That is no longer tenable. Heat now needs to move to the centre of climate policy. The question is no longer whether southern Africa can afford to act. It is whether it can afford not to.

The Conversation

Jerome Amir Singh has received funding from the Academy of Science of South Africa (ASSAf). ASSAf is a statutory body that is funded primarily through a parliamentary grant allocated by the South African government's Department of Science, Innovation, and Technology.

Caradee Yael Wright receives funding from the South African Medical Research Council.


Why the 60-day War Powers Resolution deadline doesn’t actually constrain presidents

A TV displays U.S. President Donald Trump's prime-time address on the war in Iran inside a Cheesecake Factory on April 1, 2026, in Washington, D.C. Anna Moneymaker/Getty Images

May 1, 2026, marks the 60th day of Operation Epic Fury in Iran – the symbolically significant date by which a president who has mounted a unilateral military operation must receive Congressional approval or wind it down.

However, the complex history of the War Powers Resolution clock demonstrates it is a toothless milestone.

The Trump administration signaled on April 30, 2026, that it would ignore that deadline, set by the War Powers Resolution. Secretary of Defense Pete Hegseth testified before the Senate Armed Services Committee that “we are in a cease-fire right now, which my understanding is that the 60-day clock pauses or stops in a cease-fire. That’s our understanding, so you know.”

Sen. Tim Kaine of Virginia, a Democrat, responded that the 60-day threshold poses a “legal question” and “constitutional concerns.”

This is not the first time presidents and members of Congress have sparred on the meaning of the War Powers Resolution. What happens next will play out through regular politics, because the conflict is not a matter of simple legal interpretation.

War: Collective judgment

In the U.S. Constitution, Congress and the president share war powers.

In the shadow of political struggles in the final years of the Vietnam War, Congress passed the War Powers Resolution in 1973 to “insure that the collective judgment of both the Congress and the President will apply to the introduction of United States Armed Forces into hostilities.”

A crucial section of the resolution reasserts legislators’ role, and makes clear that the constitutional power of the president to make war is subject to, or exercised with, the following conditions: a Congressional declaration of war; specific statutory authorization; or a national emergency created by attack upon the United States, its territories or possessions or its armed forces.

For new military campaigns that do not meet these criteria, the resolution included a 60-day clock that begins when a president reports the action to congressional leadership within 48 hours of the action beginning.

The clock can be extended to up to 90 days upon presidential determination and certification of “unavoidable military necessity respecting the safety of United States Armed Forces” related to removal of troops.

After 60 to 90 days, the resolution originally said this type of unilateral military action would be terminated automatically unless both chambers of Congress approved some form of legislative authorization.

Congress could also choose to terminate an unauthorized military operation any time before the 60 days with a concurrent resolution, which doesn’t require a president’s signature – essentially, a “legislative veto.”

And to make sure the president couldn’t stretch the definition of congressional approval, the resolution said neither existing treaties nor new budget appropriations could substitute for legislative authorization of a military action.

Since 1973, actions by all three branches, across a variety of political and policy landscapes, have undermined the resolution’s intent and procedures.

Veto vetoed

In 1983, the Supreme Court declared various kinds of legislative vetoes unconstitutional. Congress responded by reinterpreting its War Powers Resolution procedures, effectively amending them to expedite any joint resolution or bill that “requires the removal of U.S. armed forces from hostilities outside the United States.”

Now, if members want to stop a presidential military campaign already in progress, they must act affirmatively and pass a disapproval resolution, which a president could veto like any other bill. Congress has sent only one such disapproval – to President Donald Trump in his first term – which he vetoed. Congress did not have the two-thirds required in the Constitution to override.

Both chambers of Congress now have to vote twice, once to disapprove a military action and then again to overcome a likely veto, to stop something Congress never approved in the first place.

House Majority Leader Mike Johnson explains on March 4, 2026, why his party rejects a Democratic-led measure to assert Congress’ war powers and stop the Iran military action.

The 60-day mark for the current Iran operation has therefore loomed as more of a politically charged symbol of this longstanding imbalance on war powers than a real deadline for action by either branch.

Parallels to Kosovo and Libya

The House and Senate have tried to pass legislation to stop military operations against Iran six times since operations began. All attempts have failed, including the most recent vote on April 30. Democrats are considering filing suit against President Trump if operations go beyond 60 days without authorization.

Yet federal courts have long expressed disinterest in getting involved in constitutional questions related to the War Powers Resolution, especially if members of Congress are the plaintiffs.

Although most presidents from Richard Nixon onward have claimed that the War Powers Resolution is an unconstitutional check on their institutional powers, they have usually filed the required reports on new military actions within 48 hours of their start.

While the current Iran conflict is different in many ways, presidential unilateralism, inconclusive chamber actions and even member lawsuits all echo controversies over U.S. military action in Kosovo in 1999 and Libya in 2011.

Where the Trump administration may lean on Clinton

Operation Epic Fury against Iran began Feb. 28, 2026, and President Trump sent the required report to Congress on March 2, 2026.

After detailing the rationale for military action, Trump added: “Although the United States desires a quick and enduring peace, it is not possible at this time to know the full scope and duration of military operations that may be necessary.”

He concluded the memo with his interpretation of constitutional power to act unilaterally.

“I directed this military action consistent with my responsibility to protect Americans and United States interests both at home and abroad and in furtherance of United States national security and foreign policy interests,” the president wrote. He acted, he said, “pursuant to my constitutional authority as Commander in Chief and Chief Executive to conduct United States foreign relations.” He said he made the report “consistent with the War Powers Resolution. I appreciate the support of the Congress in these actions.”

Similarly, on March 26, 1999, President Bill Clinton sent a War Powers Resolution letter explaining his decision two days earlier to take part in a NATO-led operation against the Federal Republic of Yugoslavia, known as FRY.

Clinton wrote to Congress using mostly the same words and phrases Trump did in his 2026 letter. Clinton also said that he took the action “in response to the FRY government’s continued campaign of violence and repression against the ethnic Albanian population of Kosovo.”

President Bill Clinton after his television address to the nation on the NATO bombing of Serbian forces in Kosovo, March 24, 1999. Pool/Getty Images

Clinton explained his authority in virtually the same language as Trump and, like Trump, said it was hard to predict how long the operations would continue.

The House and Senate repeatedly failed to either approve or disapprove of Clinton’s actions through a series of votes across March and April 1999. But lawmakers did send him supplemental appropriations for the operations in May.

NATO suspended the operation after 78 days. Almost a year later, a federal appellate court upheld a district court’s decision rejecting a lawsuit led by Rep. Tom Campbell, a California Republican, alleging Clinton violated the War Powers Resolution. Rather than deciding on the merits, the decision rejected the lawmakers’ claims of injury as not reviewable by the court.

Obama did it, too

In a very different context, a similar rhythm played out during President Barack Obama’s presidency.

During the “Arab Spring” revolts of 2010-2011, the U.N. Security Council passed two resolutions condemning violence against Libyan civilians by security forces under the direction of Colonel Moammar Gadhafi.

On March 21, 2011, two days after NATO operations began against Gadhafi’s forces, which included American air support, Obama sent his War Powers Resolution letter to the Republican House and Democratic Senate. Obama had not received prior legislative authority from Congress.

Obama’s letter included language almost identical to Clinton’s earlier letter and Trump’s later one.

As with Kosovo, the House and Senate did not ultimately agree to either approve or disapprove of the president’s actions in support of the U.N. and NATO over the operation’s 222 days. In addition, Democratic Rep. Dennis Kucinich of Ohio led a group of mostly Republican House members in a failed War Powers Resolution lawsuit to stop the president.

Unilateral action endures

The Office of Legal Counsel in the Department of Justice has published legal opinions that explain and defend presidential war powers, including with Kosovo and Libya. In December 2025, that office published a memo defending the imminent January 2026 capture of Nicolás Maduro. On April 21, 2026, the State Department published a defense of ongoing U.S. actions in Iran.

Within the current dynamics of the War Powers Resolution, until Congress musters bipartisan supermajorities to connect its own institutional ambition with constitutional power, presidents from either party will decide alone if, and when, the country goes to war. Instead of Congress, presidents may heed public opinion and economic indicators, especially in election years.

The Conversation

Jasmine Farrier is affiliated with the American Political Science Association.

How Britain’s housing crisis contributes to its declining healthy life expectancy

I Wei Huang/Shutterstock

People in the UK are now spending fewer years in good health than they did a decade ago, according to a new analysis by the Health Foundation. The UK now sits near the bottom of a 21-country comparison, ahead only of the US.

A drop in healthy life expectancy is explained through many causes: obesity, alcohol, drugs, suicide, chronic disease, poverty and widening inequality. But one of the most powerful causes sits atop them all: housing. Where and how people live is one of the main factors explaining how health risks are created and distributed across society.

The UK Housing Review is an annual independent review of housing policy and evidence, written by housing experts and published by the Chartered Institute of Housing. Its latest edition, which we contributed to, identifies several interrelated ways that housing affects health.

A key one is affordability – housing costs shape where people can live, whether they can heat their homes, whether they can afford food and transport, whether they can move for work, whether they can leave unsafe or unsuitable housing and whether they live with chronic financial stress.

In the UK, housing costs are high by historical standards and poor housing remains widespread. The review notes that private rents are now at their highest recorded share of earnings, while millions of homes in England still contain serious health and safety hazards.

When housing is unaffordable, people are forced to make trade-offs, such as accepting damp or overcrowded homes in exchange for lower costs. They cut back on heating, food, medication, transport and social participation. They move further from public services, work and support networks. Affordability problems also force many people into cheaper, less secure tenancies.

Poor housing quality directly shapes health. Cold, damp, mould, disrepair, poor ventilation and unsafe homes are directly linked to respiratory illness, cardiovascular risk, mental health problems and reduced wellbeing.


Read more: Cold homes increase the risk of severe mental health problems – new study


The Building Research Establishment, an independent research organisation, has estimated that poor housing costs the NHS in England £1.4 billion each year. More than half of this is attributed to cold homes, which increase the risk of respiratory illness, cardiovascular problems and poor mental health. They are especially dangerous for older people, babies and people with existing health conditions.

But the wider costs are even greater. Poor sleep, stress, disrupted schooling, insecure work, social isolation and caring strain all affect mental and physical health. They increase pressure on families and, over time, on health, education and social care systems.

Close up of someone resting their hands and hot drink on a radiator
Cold homes can cause serious and widespread health problems. Jelena Stanojkovic

Historically in the UK, social housing has provided some protection to people unable to access good quality affordable housing in the open market. But the stock of social rented housing in the UK has declined. This means that people are increasingly dependent on (often expensive) market rental, where the quality, size and location of housing depend much more directly on income.

The rise of the private rented sector this century has meant that more households are exposed, not just to higher housing costs, but also to shorter tenancies and fewer protections than social housing traditionally provided.

The Renters’ Rights Act increases security, but does not remove “no fault” evictions altogether and does little to protect tenants from economic pressures that can result in eviction. The cognitive burden of worrying about eviction, arrears, repairs or the next rent increase is a direct health risk.

Recent evidence also suggests that insecure housing can result in measurably faster biological ageing, equivalent to the effects of more traditional health concerns like smoking.

Additional weeks of biological ageing per year from different factors

Bar chart showing additional weeks per year for private renting (2.4 weeks) compared to other social determinants of health including unemployment (1.4 weeks), having no qualifications (1.1 weeks) and being a former smoker (1.1 weeks)
Amy Clair

The number of people living in temporary accommodation has risen dramatically, reaching over 130,000 households at the beginning of 2025. This is a 156% increase compared with 2010, largely driven by the poor affordability and insecurity of the private rented sector and lack of social housing. Temporary accommodation is inadequate housing, particularly for children. Living in temporary accommodation was a contributing factor in the deaths of at least 104 children in England between 2019 and 2025, 76 of whom were under one year of age.

This is not about housing quality alone. Temporary accommodation reflects multiple risks brought together: poverty, overcrowding, poor conditions, instability, lack of space for safe infant sleep, poor access to services and wider racial and social inequality. The National Child Mortality Database identifies temporary accommodation as a contributing factor to vulnerability, ill health or death, not necessarily as the sole cause. Emerging evidence also links temporary accommodation with stillbirth and neonatal death.


Read more: Insecure renting ages you faster than owning a home, unemployment or obesity. Better housing policy can change this


Housing health inequality

ONS data shows a very large difference in healthy life expectancy between the most and least deprived areas. In 2022-24, healthy life expectancy in the most deprived areas of England was just 49.8 years for men and 48.2 years for women, compared with 69.2 and 68.5 years in the least deprived areas.

Housing contributes to this difference, determining whether people live in homes that support recovery or deepen stress, whether children grow up in stable and safe environments, and whether older people can remain warm and independent.

If the government is serious about its stated aim to “halve the gap in healthy life expectancy between the richest and poorest regions”, housing policy must become health policy.

That means investing in social housing, enforcing decent standards in the private rented sector, making homes warmer, safer and more accessible, and recognising temporary accommodation, overcrowding and insecurity as public health failures, not just housing management problems.

It also means changing the way that success is measured. Housing policy is too often judged by supply numbers, prices or tenure outcomes. These matter, but they are incomplete. A healthy housing system should also be judged by whether people can live in homes that are affordable, secure, decent, suitable and resilient to climate change.

The decline in healthy life expectancy is a warning light. It tells us that the UK is not only failing to keep people well for longer, it is failing to provide the foundations of health.

The Conversation

Emma Baker receives funding from the Economic and Social Research Council, the Australian Research Council, The National Health and Medical Research Council, and the Australian Housing and Urban Research Institute.

Amy Clair receives funding from the Australian Research Council and the Australian Housing and Urban Research Institute.

Mark Stephens receives funding from ESRC, the EU/Innovate UK and the Australian Housing and Urban Research Institute (AHURI).

Ten compelling poems about climate change – chosen by our experts

Three Reading Women in a Summer Landscape by Johan Krouthén (1908). WikiCommons

We asked ten literary experts to recommend the climate poem that has spoken to them most powerfully. Their answers span over 200 years and a range of emotions from sorrow, to anger, fear and hope.

This article is part of Climate Storytelling, a series exploring how arts and science can join forces to spark understanding, hope and action.

1. Death of a Field by Paula Meehan (2005)

Published in 2005, at the height of the Celtic Tiger boom, Paula Meehan’s Death of a Field critiqued the environmental impact of that economy in Ireland.

The poem anticipates the destruction of the titular field by property developers with little regard for native ecologies: “The end of the field as we know it is the start of the estate.”

Death of a Field read by Paula Meehan.

The global effects of the climate crisis are seen from a uniquely local perspective as the displacement of Irish wildlife mirrors the effect of colonial violence. “Some architect’s screen” is simply the latest iteration of imperial technologies that seek to plunder Irish landscapes. The poem gains further strength by refusing to replicate a hierarchical relationship to nature by preserving its many mysteries:

Who can know the yearning of yarrow

Or the plight of the scarlet pimpernel

Whose true colour is orange?

Jack Reid is a PhD Candidate in Irish literature

2. Darkness by Lord Byron (1816)

Darkness imagines the fallout of a volcanic eruption that has destroyed the Earth. The “dream” that the poem mentions was inspired by genuine weather conditions during the “year without a summer” in 1816, caused by the eruption of Mount Tambora in Indonesia the previous year.

Darkness by Lord Byron.

Sulphur in the atmosphere caused darkness and low temperatures across Europe. At Lake Geneva, Lord Byron experienced the infamous “haunted summer” of darkness.

Byron’s depiction of climate catastrophe is bleak, with words like “crackling”, “blazing” and “consum’d” bearing resemblance to contemporary reports of wildfires caused by climate change. After a famine, all elements of Byron’s Earth, from the clouds to the tide, eventually cease to exist: “Seasonless, herbless, treeless, manless, lifeless– / A lump of death – a chaos of hard clay.” Read as a portent of the Anthropocene, Byron’s poem urges readers to seriously consider the future of mankind.

Katie MacLean is a PhD candidate in English Literature

3. Mont Blanc by Percy Bysshe Shelley (1817)

Byron’s close friend Percy Bysshe Shelley was also inspired by the “year without a summer”. He witnessed temperatures dropping, volcanic ash hanging heavy in the air and crops failing. While his wife Mary used the gloomy climatic event to inform her novel Frankenstein (1818), Shelley channelled these experiences into his poem Mont Blanc.

A reading of Mont Blanc.

In his ode, Shelley describes a timeless “wall impregnable of beaming ice”. By drawing on his scientific reading, he then explains his fears regarding global cooling and the possibility of vast glaciers eventually covering the alpine valleys.

He imagines “the dwelling-place / Of insects, beasts, and birds” being obliterated and mankind forced to flee. While Shelley saw this process as “destin’d” and inevitable, it is clear that Mont Blanc is a poem with catastrophic climate change at its heart. In 2026, it is difficult to read in any other way.

Amy Wilcockson is a research fellow in Romantic literature

4. Characteristics of Life by Camille T. Dungy (2012)

There’s something gloriously elastic about invertebrates: the spinelessness of a worm, the pulsing of the jellyfish, the curling of an octopus. Spiders, snails and bees, too, with their exoskeletons on display, invite us to see things “inside-out”.

These are the thoughts I have when I read Characteristics of Life by Camille T. Dungy, which opens with a snippet from a BBC news report claiming that “a fifth of animals without backbones could be at risk of extinction”. What would a world be without the “underneathedness” of the snail beneath its shell beneath the terracotta pot in the garden? Or “the impossible hope of the firefly” whose adult lives span only a handful of human weeks?

Camille T. Dungy speaks about nature and poetry.

Dungy speaks from a “time before spinelessness was frowned upon”, and from a world where to dismiss a being as “mindless” (jellyfish have no brains) or even “wordless” would be “missing the point” entirely. As I think of these creatures that dwell beyond our usual line of vision – flying, crawling, tunnelling and swimming – I find my perspective on our beautiful world turning and shifting.

Janine Bradbury is a poet and a senior lecturer in contemporary writing and culture

5. Prayer at Seventy by Vicki Feaver (2019)

One of my favourite poems about climate change is Vicki Feaver’s Prayer at Seventy from her 2019 collection I Want! I Want!.

The speaker’s request of passing her “last years with less anxiety” appears to be denied by a god who first responds by changing her into “a tiny spider / launching into the unknown / on a thread of gossamer” and who, when she begs to “be a bigger / fiercer creature”, turns her into “a polar bear / leaping between / melting ice floes”.

A reading of Prayer at Seventy by Vicki Feaver followed by an explanation by the poet.

Both images present creatures who are in precarious positions, their futures uncertain, reflecting the state of a person contemplating the unknowns of old age and death. But the poem moves beyond the personal. The reference to the melting ice floes is not solely metaphorical: it reminds us that the planet itself is in danger and every living thing is therefore vulnerable – and will be increasingly so.

Julie Gardner is a PhD candidate in literature


Read more: How poetry can sustain us through illness, bereavement and change


6. Walrus by Jessica Traynor (2022)

Walrus, from Jessica Traynor’s 2022 collection Pit Lullabies, expresses the quiet anxiety a mother has for her child in the world of climate breakdown.

While stripping wallpaper from the box room of her house, the poet discovers a mural of the Walrus and the Carpenter from Alice’s Adventures in Wonderland. Traynor takes part of Lewis Carroll’s poem about the Walrus and the Carpenter walking along the beach, eating the vulnerable oysters, and weaves it into her own poem.

Jessica Traynor reading poems from her collection Pit Lullabies.

Carroll’s absurd verse includes what no doubt seemed, at the time, like an impossible image of a “boiling hot” sea. In the 21st century, this is no longer an absurdity, as Traynor knows. She makes a connection with Carroll’s poem, imploring her child:

Sleep as the sun rises and ice melts

and for want of the freeze a walrus

pushes further up a cliff-face.

It’s a complex poem that reimagines a key work of children’s literature, connecting it with the reality of the changing world. All the while the mother keeps her fears at bay for the sake of her child, “brows[ing] washing machines” with a “ball of tears” in her throat.

Ellen Howley is an assistant professor of English

7. Ocean Forest, co-created by the We Are the Possible programme

Ocean Forest is woven out of words, research, ideas and stories shared by scientists, educators, health professionals, youth leaders, writers and artists. They took part in creative writing workshops to co-create the anthology Planet Forest – 12 Poems for 12 Days for the UN Climate Conference in Brazil in 2025.

In the shallows, alert to change,

the minuscule, overlooked creatures

weave between seagrass, and weed –

live their shortened lives.

When ships pass overhead, when sands shift,

fish navigate swell, migrate beyond

where coral’s been bleached, through schools

of silenced whales and barely rooted mangroves

struggling to thrive in darkening water.

Deeper down,

pressure builds, species exist, unaware,

undisturbed. As heat and waves rise there’s hope

the unfound, the unnamed, the unpolluted

in the remotest ocean forests will survive.

By uniting disciplines and voices, the poem takes unexpected turns. It demonstrates that climate change affects and erodes the habitats that lie beneath the surface and that urgent action is needed to protect disappearing species.

Yet, there is also a glimmer of hope – that in the deepest, darkest parts of the ocean, where temperatures are near freezing and there are bone-crushing pressures, maybe there are creatures that will survive human interference and pollution.

Sally Flint is a lecturer in creative writing and programme lead on the We Are the Possible programme

8. Di Baladna (Our Land) by Emi Mahmoud (2021)

Emtithal “Emi” Mahmoud is a Sudanese poet and activist, who has won multiple awards for her slam poetry performances. Mahmoud performed Di Baladna at the United Nations Climate Change Conference in 2021.

Poetry – especially spoken word – helps people connect emotionally with the human side of climate-driven displacement, a topic that’s often explained only through technical language. The language of emissions targets, temperature thresholds, or policy frameworks can distance people emotionally from its consequences. Yet poetry can cut through this abstraction.

Di Baladna (Our Land) read by Emi Mahmoud.

Mahmoud’s performance gave voice to those forced from their homes by environmental collapse, reminding listeners that climate change is not only an environmental crisis but a deeply human one, with profound effects on individuals, families and communities.

By merging vivid natural imagery with the rhythms of displacement and lived testimony, the poem urges listeners to replace passive awareness with empathy. Mahmoud implores us to feel the loss, fear and resilience of displaced communities, looking beyond news headlines and images of victimisation. Engaging with such work helps transform climate refugees from statistics into people.

Clodagh Philippa Guerin is a PhD candidate in refugee world literature

9. Flowers by Jay Bernard (2019)

At first glance, Jay Bernard’s Flowers is a circular poem (one that begins and ends in the same place), but you soon realise that the circle isn’t going to close. It opens:

Will anybody speak of this

the way the flowers do,

the way the common speaks

of the fearless dying leaves?

And closes:

Will anybody speak of this

the fire we beheld

the garlands at the gate

the way the flowers do?

And the answer seems to be, no: no one will speak of these things – the “coming cold” and the “quiet” it will bring – only the things themselves as they die. With the songs Where Have All the Flowers Gone? by Pete Seeger and Blowin’ in the Wind by Bob Dylan in its DNA, Flowers has the eternal power of a folk-lyric – prophetic and unignorable.

Kate McLoughlin is a professor of English literature

10. Place by W.S. Merwin (1987)

Climate change poetry – should it be a thing? How do poets avoid the oracular pomp it threatens? Browsing my small library I’m shocked anew to realise most poets lived and died blissfully innocent of our condition.

OK, what about the late John Burnside’s lyric Weather Report (“this is the weather, today / and the weather to come”). It poignantly extrapolates from a sodden summer to his sons’ futures: “a life they never bargained for / and cannot alter”. Heartbreaking. Or the odd dread of spring in Fiona Benson’s Almond Blossom, a season characterised as Earth’s, “slow incline … inch by ruined inch”. Ditto.

W.S. Merwin reads Place.

But then I reach back to the great American poet W.S. Merwin’s short prayer Place to find that grace-note of hope which surely needs to thread through all poems, whether they speak of climate change, mortality or love: “On the last day of the world / I would want to plant a tree.” Me too.

Steve Waters is a playwright and professor of scriptwriting at the University of East Anglia

This article features references to books that have been included for editorial reasons, and may contain links to bookshop.org. If you click on one of the links and go on to buy something, The Conversation UK may earn a commission.

The Conversation

Amy Wilcockson receives funding from Modern Humanities Research Association as Research Fellow for the Percy Bysshe Shelley Letters project.

Steve Waters receives funding from the AHRC.

Clodagh Philippa Guerin, Ellen Howley, Jack Reid, Janine Bradbury, Julie Meril Gardner, Kate McLoughlin, Katie MacLean, and Sally Flint do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

Buffy the exercise slayer: Sarah Michelle Gellar’s EMS workout trend explained

The actor performs pilates moves while wearing an EMS suit. StudioLab Images/ Shutterstock

Actor Sarah Michelle Gellar, best known for her role as teenage demon slayer Buffy Summers, recently shared in an interview that she uses an “EMS suit” during workouts to stay fit. And she’s not the only one who has made this form of exercising a trend – with celebrities from Tom Holland to Cindy Crawford all using EMS workouts to get fit.

EMS, short for electromyostimulation, uses electrical impulses to support muscle contraction. The idea is that the machine uses electricity to stimulate your muscles to work harder, to help you get more out of your workout without lifting heavy weights.

Some companies even claim that a 20-minute EMS session (roughly half an episode of Buffy the Vampire Slayer), can deliver the same benefits as hours in the gym. For people who are short on time, dislike traditional exercise or want a novel way to stay motivated, this sounds very tempting.

But while EMS does have some evidence-based benefits, particularly in rehabilitation settings, it’s far from a miracle shortcut to getting fit.

In clinical contexts, EMS works by sending small electrical impulses through pads placed on the skin. As in regular workouts, these impulses stimulate nerves, triggering muscles to contract. Physiotherapists have used EMS for decades to help patients recovering from injury or surgery, especially when regular movement is difficult.

It has even been used in spaceflight simulations, in which participants have to lie in a bed tilted slightly downwards for extended periods to replicate the effects of being in space on the body. This can cause muscles to weaken, and research has explored EMS as a countermeasure to muscle loss in these conditions, particularly when combined with resistance exercise.

What is new is the rise of “whole body EMS” in the fitness industry. Instead of placing electrodes on a single muscle group, users wear a suit or vest containing multiple electrodes that target the arms, legs, glutes, back and core. During a session, people perform squats, lunges, arm raises and more, while the suit pulses to intensify muscle activation.

In practice, the benefits depend heavily on who you are and how you train.

Does it work?

Research suggests that five to six weeks of EMS treatment can help maintain strength and muscle mass, with results comparable to a conventional exercise programme. A 2023 meta-analysis supports this, finding that one to three whole-body EMS sessions per week, over six to 12 weeks, can result in modest improvements in muscle mass, strength and power.

Another separate study also reported strength gains after a similar frequency of use in non-athletic, sedentary adults.

For people who are sedentary or have joint pain, EMS may offer an alternative way to stimulate muscles without the strain of conventional exercise.

However, it is not a substitute for the broad, well-established whole-body health benefits of regular exercise, which extend beyond muscles to the cardiovascular and metabolic systems, among others.

This distinction becomes clearer when we look at regular exercisers. A recent study, which examined EMS use in athletes and trained sportspeople, found little to no benefit on performance measures such as jumping, sprinting or agility.

A woman performs a bodyweight squat while wearing an EMS suit.
EMS suits may not be as beneficial for regular exercisers. Chester-Alive/ Shutterstock

Furthermore, studies examining strength outcomes report inconsistent findings, with results varying widely depending on the EMS protocol used and how it’s combined with conventional training.

Taken together, these findings suggest that for people who are already active, EMS probably won’t provide a meaningful edge, as conventional exercise is already very effective. Lifting weights, sprinting or doing bodyweight exercises all produce strong, natural muscle contractions without the need for electrical stimulation.

Should you try it?

Overall, the research on EMS is promising but far from definitive. Many studies are small, short term, or use differing protocols, making comparisons difficult.

Some combine EMS with exercise, while others compare it to doing nothing at all. This makes it challenging to determine whether improvements come from EMS alone, its combination with exercise or because participants are just being more active.

Because EMS can produce strong, involuntary muscle contractions, overuse can also lead to severe muscle soreness or, in rare cases, a condition called rhabdomyolysis. This occurs when muscle tissue breaks down rapidly and releases proteins into the bloodstream, harming the kidneys.


Read more: High-intensity workouts may put regular gym goers at risk of rhabdomyolysis, a rare but dangerous condition


Several cases of rhabdomyolysis have been reported after intense EMS sessions, even after a single workout. For this reason, it is recommended to start slowly, stay hydrated and use EMS under professional supervision.

Cost is another factor. Whole body EMS sessions can be expensive, and purchasing a suit for home use can be even more costly. For many people, that money might be better spent on evidence-based personal training or structured exercise programmes.

For those who can afford it, EMS should be viewed as a supplement to, not a substitute for, regular exercise. The strongest evidence for improving health, fitness and body composition still comes from simple, consistent habits: lifting weights a few times a week, walking more, cycling, swimming, jogging or following a gym programme.

There’s no shortcut around the basics. EMS may add a spark, but it can’t replace the benefits of real exercise.

The Conversation

John Noone does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

The four-day week won’t happen overnight, but it could transform how we live and work

buritora/Shutterstock

A century ago, the five-day working week helped reshape society. It was introduced at scale by industrial pioneers to address not only worker wellbeing but also economic pressures.

US industrialist Henry Ford was among the first to give workers two full days off per week, 100 years ago this month. Ford suspected that giving workers a “weekend” would increase overall productivity – and he was correct.

Today, as advances in artificial intelligence accelerate and concerns about job security grow, a similar question is emerging. Could reducing working time again help societies adapt to these seismic changes?

The evidence increasingly suggests it can, but not in the simplistic way that is often portrayed. The four-day week is not just a workplace benefit. It is a potential tool to improve wellbeing, support families and rethink how work is distributed in society.

Research across multiple countries, including large-scale pilots in the UK and Portugal, shows that reducing working time can deliver meaningful benefits for both employees and organisations.

In a 2025 study of four-day week adoption, my colleagues and I found improvements in sleep, exercise and quality of working life. There were positive implications for both the mental and physical health of employees.

Our research showed productivity at work can also increase, alongside reductions in absenteeism and staff turnover. And it can be beneficial for an employer’s social image.

However, the most important insight is not about productivity but what happens outside work. After all, time is a social resource, not just an economic one.

When people move to a four-day week, they do not simply rest more. They reallocate time in ways that have broader implications for society.

Across our research, participants said they spend more time with family and friends, engaging in community activities and investing in their physical and mental health by exercising and practising hobbies and self-care activities.

These are not trivial changes. Over time, they contribute to stronger social ties, better mental health and more resilient communities.

There are also important gender implications. Early findings suggest that reduced working time can lead to fathers being more involved in caring for their children and other domestic responsibilities. While this does not automatically solve gender inequality, it creates conditions that make more equal divisions of labour possible.

In this sense, the four-day week is not just about work. It is about how societies organise care, relationships and everyday life.

The challenge in service sectors

Critics of a four-day week often point out that it is harder to implement in sectors such as healthcare, childcare, manufacturing, hospitality or retail. This is true, but it is not a reason to dismiss the idea.

In these sectors, work is tied to time, presence and staffing levels. Reducing working hours often requires more complex redesign, including changes to rotas, additional hiring or upfront investment. Colleagues and I have highlighted this when addressing the UK case of the NHS.

But these challenges should be seen as design problems, not impossibilities. In fact, the potential benefits to society may be even greater in these sectors. Improved wellbeing and reduced burnout among healthcare staff and care workers can translate into better quality of service and fewer mistakes.

female healthcare worker on a break outside on a hospital balcony with a coffee and her phone in her hand.
Reduced working hours for healthcare staff could lead to fewer clinical mistakes. Iryna Inshyna/Shutterstock

A more important concern is inequality. If working time reductions are adopted unevenly, there is a risk that some workers will be excluded – often those in lower-paid or frontline roles. This is a valid concern, but not an argument against the four-day week. Rather, it is an argument for implementing it more thoughtfully.

Instead of asking whether all jobs can adopt the same model, the focus should be on how different forms of reduced work time can be adapted across sectors. This could include shorter daily hours, staggered schedules or phased time reductions.

The future of work

The renewed interest in reducing the amount of time we spend working is not happening in isolation. It is closely linked to broader debates about automation, productivity and the future of work.

If technological advances continue to increase productivity, a fundamental question arises: who benefits from these gains?

Historically – during the Great Depression, for example – working time reductions have been one way of redistributing those benefits. Compared with more radical proposals such as universal basic income, the four-day week offers a more direct and socially embedded way of sharing gains in productivity.

The four-day week is not a universal solution, and it will not look the same everywhere. But the evidence shows working less can go hand-in-hand with maintaining productivity.

It can also support a shift towards a society where time is valued not only as an economic input, but as a foundation for wellbeing, relationships and participation in community life.

A century after the five-day week helped define modern work, there may be another turning point on the horizon. This time, the real question is not whether we can afford to reduce working time, but whether we can afford not to.

The Conversation

Rita Fontinha’s employer, the University of Reading, has received funding from the Portuguese Government and the Azores Regional Government to conduct academic research on four-day working week pilots.
