
How to make public spaces accessible, safe and attractive for an aging population

To be truly inclusive, public outdoor spaces must meet the needs of the entire population, regardless of age, physical ability or mobility.

Although many cities have adopted universal accessibility policies in recent years, it’s important to consider whether these policies have actually improved accessibility and the experiences of citizens who live there.

Public spaces can become a source of fatigue and stress for older people if their features are not properly designed.

Several fields of research in urban design, urban planning, and architecture offer valuable tools for understanding the level of accessibility in public spaces. Three dimensions are particularly relevant, since they directly concern the way a built environment meets the needs of people with motor, visual or cognitive impairments. These three dimensions — comfort, legibility, and geometric clarity — enable us to assess whether a space is truly designed for everyone.

As an architect, urban planner, and full professor at the Université du Québec à Montréal, I study the universal accessibility of public environments by identifying the physical and spatial dimensions that promote their equitable use.


This article is part of our ongoing series The Grey Revolution. The Conversation Canada and La Conversation are exploring the impact of the aging boomer generation on Canadian society, including housing, working, culture, nutrition, travelling and health care. The series explores the upheavals already underway and those looming ahead.


The importance of comfort

Environmental studies focus on how people live and use public spaces. According to Jan Gehl, a Danish architect and urban planner, a space suitable for pedestrians must provide protection, comfort and appeal.

  • Protection ensures safety, for example through pavements separated from vehicle traffic or clearly marked pedestrian crossings.

  • Comfort facilitates movement through features like flat, continuous surfaces, the absence of obstacles, benches, handrails and adapted access.

  • Appeal is based on a combination of physical and sensory elements, such as greenery, light and the presence of activities, which promote a pleasant experience for users.

These criteria benefit everyone, but are especially essential for older people or those with reduced mobility. Pleasant and comfortable spaces encourage people to walk more and take advantage of the city. That, in turn, promotes social inclusion and enhances well-being.

The pedestrian route in Parc Safari in Hemmingford, south of Montréal, is an example of a tourist development that prioritizes comfort.

Flat, paved surfaces and the absence of ground-level obstacles, such as uneven steps or steep slopes, ensure comfortable and unimpeded movement. To provide a pleasant and safe experience, it is essential to maintain uniform surfaces and consistent levels, which facilitate the passage of pushchairs and wheelchairs as well as the movement of people with mobility challenges.

5 critical urban elements

Urban planning studies on the “image of the city” focus on how people perceive and navigate their environment. Kevin Lynch, an American urban planner who taught at the Massachusetts Institute of Technology (MIT) and Harvard, profoundly influenced urban design with his work on how cities are perceived. His research identified five elements that help people find their way around the city:

  • Pathways (streets, pavements or footpaths).

  • Boundaries (walls, rivers or railway lines) that demarcate a space that may be difficult, or even impossible, to cross.

  • Neighbourhoods recognizable by their atmosphere, function or consistent architecture.

  • Nodes (places of passage or gathering, such as a public square, a crossroads or a station).

  • Landmarks (visible features that help people orient themselves), such as a tower, a bell tower, a sign, or a distinctive tree.

Montréal’s Esplanade Place Ville-Marie is a good example of a place with these qualities.

The design, organized around steps that incorporate a ramp clearly visible from the pedestrian’s line of sight, reduces confusion and makes it easier to understand the connections among the Esplanade’s different levels.

That makes it possible for pedestrians to anticipate the continuity of their route, making movement more reassuring and pleasant. The clarity of this layout ensures that the Esplanade Place Ville-Marie is accessible to all.

When boundaries and landmarks are clearly defined, the city becomes more welcoming and easier to navigate, particularly for people who have difficulty with orientation or trouble following directions. This reduces the anxiety associated with walking in complex environments and enhances the sense of security.

For example, as part of the Bristol Legible City project in the United Kingdom, 97 per cent of visitors highlighted the tangible impact of clear and consistent urban design on the walking experience and user comfort.

Geometrically clear urban layouts

Studies of spatiality analyze the form and geometry of urban spaces to understand how their organization influences human movement and behaviour.

Bill Hillier, a British architect and professor at University College London, is known for his space syntax approach, a method of analyzing urban and architectural spaces.

His work shows that people naturally move along clear, direct axes. Certain cognitive disorders, such as Alzheimer’s disease or mild age-related cognitive impairment, can affect memory, attention and orientation. A geometrically clear urban layout eases orientation for these people and enables them to mentally visualize the spatial layout of the area where they’re walking.

Another important factor is the spatial enclosure effect created by the continuity of façades, fences or building lines, which fosters a sense of containment and security.

The most accessible public spaces are, therefore, often those with simple, linear routes that offer a smooth and predictable path. A well-organized layout makes it easier for elderly people and visitors to plan their upcoming trips, maximizing their enjoyment of a city.

In Montréal’s Old Port, spaces are clearly defined. Along Saint-Paul Street, a continuous row of building façades shapes the street and guides movement, with the view shifting as you walk. A low curb adds to this sense of order and makes the route easy to follow.

Accessibility for all

The elements of comfort, legibility and geometric clarity can guide urban designers, including architects, urban planners, landscape architects and engineers, in creating public spaces that are accessible to all.

Adhering to these criteria from the design stage helps avoid costly late-stage adjustments while ensuring optimal comfort and safety for all users.

When high-quality public spaces are designed from the outset, it is possible to meet the needs relating to mobility, vision and cognition without designing the space for a single type of user. A thoughtful and inclusive design makes the city more comfortable, accessible and safe for everyone, particularly for an aging population.

La Conversation Canada

François Racine has received funding from the Friends of the Parc Safari Foundation.

Your browsing history could soon set your grocery bill — and Canada isn’t ready for it

Parliament voted down a motion on April 15 to ban a practice most Canadians have never heard of, but that retailers are already rolling out: surveillance pricing.

Also called algorithmic personalized pricing, the practice uses personal data to estimate how much consumers are willing to pay, then adjusts the price accordingly. Two shoppers, same store, same item: two different prices, generated by data neither of them can see.

The NDP motion urged the government to prohibit surveillance pricing both in stores and online. The Liberals and Conservatives voted it down. NDP leader Avi Lewis had called the practice “unfair” and “downright creepy” at a news conference days earlier.

A poll by Abacus Data conducted in March found that while most Canadians are not familiar with the term, when the practice was explained to them, 52 per cent said it should be banned. Another 31 per cent of the Canadians surveyed said it should be allowed but more strictly regulated.

The practice is spreading among retailers just as Canadians struggle with cost-of-living pressures, and the laws meant to protect consumers were not designed to catch it.

Not the same as surge pricing

A useful distinction first. Dynamic pricing, the kind used by airlines, hotels and rideshare companies, adjusts based on conditions like demand, the time of day or weather, and applies the same algorithm to every customer equally.

Uber’s surge pricing is the textbook example of dynamic pricing: every rider in the same area at the same moment sees the same multiplier. Annoying? Perhaps. Personalized? No.

Surveillance pricing is different. Where dynamic pricing responds to market conditions, surveillance pricing responds to the individual. It draws on browsing history, device, postal code, purchase frequency and inferred income to predict a person’s willingness to pay.

Dynamic pricing seems to ask: “What are the conditions right now?” Surveillance pricing asks: “Who are you, and how much can we extract from you?”
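
To make the distinction concrete, here is a minimal sketch in Python. Everything in it (the profile fields, the weights, the numbers) is invented for illustration; no retailer’s actual model is known or implied.

    def dynamic_price(base: float, demand_multiplier: float) -> float:
        """Dynamic pricing: one multiplier, derived from market conditions,
        applied identically to every customer."""
        return base * demand_multiplier

    def surveillance_price(base: float, profile: dict) -> float:
        """Surveillance pricing: the multiplier is predicted from the
        individual's own data (all weights here are hypothetical)."""
        willingness = (0.5 * profile.get("inferred_income_signal", 0.0)
                       + 0.3 * profile.get("purchase_frequency", 0.0)
                       + 0.2 * profile.get("browsing_signal", 0.0))
        return round(base * (1.0 + willingness), 2)

    # Two shoppers, same store, same item, two different prices:
    print(surveillance_price(5.00, {"inferred_income_signal": 0.4,
                                    "purchase_frequency": 0.2,
                                    "browsing_signal": 0.1}))  # 6.4
    print(surveillance_price(5.00, {}))                        # 5.0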

How much is happening in Canada?

It’s difficult to know how much surveillance pricing is happening in Canada, if at all. So far, there has been no confirmed Canadian case, and the practice is opaque by design.

The Competition Bureau’s discussion paper, published in 2025, reported that more than 60 companies in Canada offer services that use algorithms to optimize pricing across retail, hospitality, transportation and ticketing.

The bureau’s What We Heard report, published in January after a public consultation on algorithmic pricing, identified transparency as Canadians’ chief concern. Shoppers do not know whether the price in front of them has been personalized to them specifically.

The most prominent real-world example came from south of the border. An investigation by Consumer Reports and Groundwork Collaborative documented Instacart customers in the U.S. being charged up to 23 per cent more than other shoppers for the same items, at the same store, at the same time.

Nearly three-quarters of grocery items tested were offered to shoppers at multiple price points simultaneously.

Instacart disputed the characterization, but halted the program in December 2025 following public backlash. New York Attorney General Letitia James has since demanded that Instacart share information about its price-testing experiments.

Canadian retailers, meanwhile, are assembling the same underlying toolkit: digital shelf labels that allow prices to be changed remotely in seconds, AI-driven pricing engines and the loyalty card data that feeds them.

Where Canadian law runs out

Most Canadians assume that if something feels deceptive at checkout, the law catches it. For some familiar problems, that is true.

Recent amendments to the Competition Act introduced an explicit ban on drip pricing — the practice of advertising a low price and then adding unavoidable fees at checkout.

The Cineplex case is the most prominent recent example of that law in action. The Competition Tribunal levied a record $38.9 million penalty against the cinema chain for concealing online booking fees, a ruling the Federal Court of Appeal upheld in January. Cineplex has since sought leave to appeal to the Supreme Court of Canada.


Read more: Cineplex’s $38.9 million fine is a wake-up call about corporate sustainability practices


But surveillance pricing slips past this framework entirely. The price displayed is technically accurate. No fee is buried and no phantom “regular price” is invented. What is hidden is the process.

Deceptive marketing rules assume everyone is offered the same price and someone is misrepresenting it. Surveillance pricing inverts the premise: everyone is offered a different price, and almost no one knows it’s happening.

The Competition Bureau’s mandate is to protect and promote competition, not consumer fairness. Its tools were built to catch anti-competitive behaviour between companies, not price discrimination between individual shoppers.

Similarly, provincial consumer protection laws like Ontario’s Consumer Protection Act are designed to deal with misleading or unfair practices in one-on-one transactions — not large-scale, automated differences in how millions of consumers are treated.

Privacy law, in turn, governs consent to data collection, not consent to how that data is used to shape what you pay. Three legal regimes circle the problem; none quite covers it.

What other jurisdictions have done

In November 2025, New York’s Algorithmic Pricing Disclosure Act took effect, requiring any business that uses personalized pricing to display a notice reading “this price was set by an algorithm using your personal data,” with civil penalties of up to US$1,000 per violation.

The European Union has required disclosure of personalized pricing since its 2019 consumer rights overhaul. Manitoba’s Bill 49, introduced March 17 by the NDP government of Premier Wab Kinew, would go further than either of those measures and prohibit surveillance pricing outright, making it an unfair business practice.

When asked if he would follow suit, Ontario Premier Doug Ford said he would not, telling reporters he believes in a “free market” and a “capitalist society.”

Federal AI Minister Evan Solomon said the federal government is “looking into” the issue, but that it would fall under the purview of the Competition Bureau.

What real protection would require

In the short term, shoppers can use private browsing mode, turn off location services and log out of loyalty apps before they shop.

These, however, are only workarounds. They place the burden of navigating an opaque system on the least-informed party in the transaction and they require a level of digital awareness some shoppers don’t have.

Real protection means either a federal disclosure mandate along New York’s lines, or an outright prohibition like the one Manitoba is pursuing. The Competition Bureau can keep monitoring, but monitoring is not enforcement, and competition law wasn’t designed to police unfairness on its own.

Until Parliament or the provinces close the gap, Canadian consumers have no reliable way of knowing whether the price they see is the price everyone else sees.

The Conversation

Jake Okechukwu Effoduh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

How should schools teach AI? 3 models to consider

Students across Canada are exposed to artificial intelligence (AI) whether through search engines, writing assistants, automated recommendation systems or social media.

That everyday exposure raises a first, fundamental question: What should students learn about AI? This goal is often described as AI literacy, which combines conceptual understanding with responsible use and critical judgment about AI.

A second, more practical, question is: Where should learning about AI sit in the curriculum? Since education is a provincial responsibility, Canada has no single approach.

Teaching AI literacy in schools builds on what provinces already require students to learn about digital technologies. How provinces do this determines how much time students get, what can be assessed and how teachers must be prepared.

In practice, these different curriculum models, plus the supports to ensure teachers can effectively teach them, will shape whether AI education becomes a set of tips for using apps — or a form of digital citizenship grounded in concepts, ethics and critical thinking.

What AI literacy implies for schools

Several provinces and educator associations have developed, or are developing, frameworks pertaining to AI in K-12 education. Other organizations have proposed similar frameworks that specify the concepts and competencies students should develop, or that guide what meaningful AI education would require in schools.

The United Nations Educational, Scientific and Cultural Organization (UNESCO) sees AI literacy as spanning technical understanding and ethical awareness, and names a vision of students as AI co-creators and responsible citizens.

A U.S.-based framework, AI4K12, outlines what students should learn about AI across grade levels, and identifies five “big ideas” about AI: perception, representation and reasoning, learning, natural interaction and societal impact.

AI frameworks guide what meaningful AI education might look like in schools. (Allison Shelley/The Verbatim Agency/EDUimages), CC BY-NC

The U.S.-based International Society for Technology in Education (ISTE) proposes standards that engage students as empowered learners, computational thinkers, innovative designers and digital citizens.

Digital learning in provincial curricula

Across Canada, provinces integrate digital learning through different models — but note that these models are ideal types. Several provinces combine them. Each model can support AI literacy, but each creates different conditions for time, assessment and teacher preparation.

1. A dedicated subject or domain, where digital skills or computer science have their own courses. In many systems, teachers have been specifically trained for the subject. This configuration typically supports clearer sequencing across grades and more consistent assessment.

For example, from kindergarten to Grade 9, British Columbia teaches technological learning within its applied design, skills and technologies curriculum, with Grade 8 requiring the equivalent of a full-year course that schools can deliver through modules.

Newfoundland and Labrador frames technology education as a hands-on area that can include programming and controlling physical devices, taught through two dedicated computer science courses in Grades 9 and 10.

Ontario’s computer studies curriculum creates dedicated course space for learning computing concepts. Ontario also illustrates how systems can shift emphasis over time: coding and digital competencies can be embedded within compulsory subjects, while a separate computer studies curriculum expands opportunities for sustained progression.

A dedicated subject provides protected classroom time to teach related core ideas (for example, data, algorithms and modelling) and to assess learning beyond using tools, while still making possible cross-curriculum learning.

It also creates clearer conditions for implementing ambitious AI literacy frameworks such as AI4K12 and UNESCO’s guidance. This is because a teacher trained to translate specialized concepts for non-specialists leads instruction and can support sustained, project-based learning.

However, in many provinces, this “dedicated subject” exposure remains intermittent across K–12, often concentrated in a small number of courses, or sometimes a single year-long course with limited weekly time. This constrains cumulative progression and makes outcomes sensitive to local staffing capacity and teacher qualification.

2. Digital learning embedded in existing subjects. In New Brunswick, digital learning in Grades 6 to 8 is organized through the Middle Block, where Technology is one learning area among others. Teachers must address digital learning alongside a much wider set of practical and developmental goals, rather than teaching it as a fully separate subject with protected time.

How AI-related professional development will help teachers depends partly on learning expectations relevant to their work. (Allison Shelley/The Verbatim Agency/EDUimages), CC BY-NC

This approach can make learning more connected to real problems and other learning. But it can also limit how much time can be devoted to AI-related concepts, and whether this learning is effective, when many other objectives must be covered within the same program structure. The trade-off is generally capacity: teachers are asked to carry new conceptual content without necessarily having time, training or materials.

3. A “transversal” framework, where competencies that underpin digital technology are meant to be integrated across subjects.

For example, Manitoba teaches literacy with information and communication technology (ICT) across the curriculum, tied to thinking critically and creatively about information and communication, “as citizens of the global community, while using ICT safely, responsibly and ethically.” Alberta’s information and communication technology program of studies states that it is “not intended to stand alone” but should be infused within core courses.

Québec has a province-wide digital competency framework describing 12 dimensions of confident, critical and creative uses of digital technology.

When competencies related to digital learning are integrated across subjects, every student can be reached, not only those who choose electives.

However, without clear accountability tying underlying competencies to particular digital media uses, this approach can potentially yield uneven learning experiences from school to school. Every teacher must also receive sufficient professional development on the subject.

What ‘AI-ready’ could mean

Each model requires different policy supports. Dedicated subjects need staffing and teacher preparation pipelines. Embedded approaches need sustained professional learning and realistic expectations for non-specialist teachers. Transversal frameworks need clear markers for student progression and assessment strategies, otherwise implementation depends on local enthusiasm.

For many provinces, the path forward is likely not choosing one model, but combining the strengths of all three.

The path forward for teaching AI literacy is likely combining the strengths of different curricular models. (Allison Shelley/The Verbatim Agency/EDUimages), CC BY-NC

This requires grounding in foundational knowledge of AI, as well as developing both discipline-specific and transdisciplinary competencies. UNESCO’s AI competency framework for teachers makes a similar point: governments should anchor AI learning in curriculum policy, build collaboratively with educators and invest in teacher preparation and resources.

Canada’s provincial diversity creates conditions for comparative analysis. If researchers study student learning associated with different models, this could help identify which policy arrangements, supports and implementation strategies are associated with stronger and more equitable forms of AI education.

Comparison may become even more salient with the OECD’s planned PISA 2029 media and artificial intelligence literacy assessment, which will be designed to examine whether students have had opportunities to learn to engage critically and responsibly with digital and AI systems.

The Conversation

Hugo G. Lapierre receives funding from the Fonds de recherche du Québec (FRQSC), the Social Sciences and Humanities Research Council (SSHRC) and IVADO.

Normand Roy receives funding from Fonds de recherche du Québec (FRQ), le ministère de l'Éducation du Québec (MÉQ), Social Sciences and Humanities Research Council (SSHRC).

Patrick Charland receives funding from the Fonds de recherche du Québec (FRQSC), the Social Sciences and Humanities Research Council (SSHRC) and UNESCO.

How wildlife conservancies perpetuate green colonialism in Kenya

The story of wildlife conservation in East Africa is often told through spectacular images of beautiful scenery and the region’s charismatic animals. But a question is seldom asked: how do those efforts include and affect the communities that live alongside wildlife?

At the core of Africa’s rich biodiversity are Indigenous communities, which include pastoralists and forest peoples whose ways of life and knowledge are critical to conservation.

A giraffe in the Maasai Mara National Reserve in southern Kenya. (Kariũki Kĩrigia)

However, these communities have historically been blamed for biodiversity loss. Pastoralists such as the Maasai are often accused of keeping “excessive” amounts of livestock, of overgrazing and of degrading the land.

Such tropes against African Indigenous communities linger and continue to shape conservation, which has led to strict and often punitive regulations.

My ongoing research in the Maasai Mara region of southern Kenya looks into wildlife conservancies. The region is home to the Maasai, as well as other Indigenous Peoples, and rich biodiversity. My research examines how conservancies impact local communities on whose land conservation is practised.


Read more: Tanzania’s Maasai are being forced off their ancestral land – the tactics the government uses


What are wildlife conservancies?

The decline in wildlife in Kenya led to the birth of wildlife conservancies on both community and private lands. Kenya’s 2013 Wildlife Conservation and Management Act defines a wildlife conservancy as “land set aside by an individual landowner, body corporate, group of owners or a community for purposes of wildlife conservation.”

Organizations like the Kenya Wildlife Conservation Association (KWCA) view them differently. They see conservancies as land that is not set aside, but rather managed for the well-being of wildlife and communities.

In essence, the government maintains the view of fortress conservation that entails separating humans from nature, while the KWCA imagines communities co-existing with wildlife.

At the core of wildlife conservancies is land. Land ownership largely determines the type of conservancy that is established: private, community, group or co-managed.

Private conservancies

Kariũki Kĩrigia explains his research into wildlife conservancies in Kenya. (University of Toronto Black Research Network)

In northern Kenya, private conservancies have largely been established in the highlands that were settled by white farmers during the colonial period.

These private conservancies have been criticized as “settler ecologies” built on a “big conservation lie” because they obscure the history of violent, colonial land dispossession, the criminalization of Indigenous pastoralist livelihoods and the exploitation of land and biodiversity to profit from conservation.

Additionally, the normalization of militarized violence in conservation, the appropriation and control of conservation revenues meant for communities, and the restriction of pastoralists’ access to scarce water and pasture, even during droughts, amount to what is known as green colonialism.

The contradiction is that it was British colonial rule in Kenya that created the need for wildlife conservation starting in the 1940s. Extensive devastation of wildlife through sport hunting, wildlife trade and culling meant animals needed greater protection from humans, primarily through state-protected national parks and reserves.


Read more: Operation Legacy: How Britain covered up its colonial crimes


Group conservancies

Group conservancies are mostly found in southern Kenya, where individual plots are amalgamated through long-term land leases to conservation investors who, in turn, establish wildlife conservancies.

In the Maasai Mara, local communities typically lease their land for conservancies in exchange for lease payments, regular access to pasture and investment in initiatives such as school bursaries and infrastructure development.

One such example is the Nashulai Maasai Conservancy, established in July 2016. It’s the first Maasai conservancy in the Maasai Mara created by Maasai peoples.

Wildlife conservancies in Kenya are an important way to enhance land security and conservation built around communities. Community and group conservancies are based on the idea of using the land, water and pastures in ways that support humans, livestock and wildlife.

As part of my research, I interviewed community members who told me about some benefits brought by the conservancy. These included access to post-secondary education through a community college, women’s empowerment projects such as making soap from elephant dung, river restoration for household water access, and food aid during the COVID-19 pandemic.

Challenges faced by group conservancies

Many group conservancies employ strict access rules and hefty fines against human and livestock presence. These practices often agitate communities as they echo fortress conservation’s tactics of separating humans and wildlife.

Land lease agreements between conservancies and landowners are often crafted in complex legal language that only a few community members can comprehend. It is critical that communities are provided with a detailed explanation of what leasing land to a conservancy entails beyond the benefits promised.

In addition, community benefits are undermined when local elites dispossess community members of land during subdivision and then benefit unfairly by leasing the unjustly acquired land to conservancies.

Biodiversity conservation in East Africa and the Global South more broadly depends significantly on external funding from organizations in the West, especially non-governmental organizations, which British conservation scholar George Holmes calls “conservation’s friends in high places.”

However, Indigenous communities face onerous requirements and processes to access funding for conservation and climate change initiatives.

In a recent guest lecture at the University of Toronto, Kimaren Ole Riamit, the director of the Indigenous Livelihoods Enhancement Partners (ILEPA), explained how African Indigenous communities experience the negative impacts of climate change despite being the least responsible for global warming, lose land to conservation and carbon projects and face significant hurdles in accessing resources to address climate-related challenges.

Initiatives meant to empower communities are often captured by local elites and corporate interests that appropriate and control resources and benefits expected to flow to communities.

Carbon offsetting

Wildlife conservancies have also gained the attention of carbon offset markets, which are expanding fast in Kenya. The Northern Kenya Rangelands Carbon Project and the One Mara Carbon Project are some of the main carbon projects in the country’s northern and southern rangelands.

Kenya’s rangelands sequester atmospheric carbon dioxide, which is then measured and verified by certification bodies such as Verra, and converted into tradeable carbon credits. These are sold to organizations seeking to offset their carbon emissions.

Carbon projects enter into long-term contracts with landowners, typically around 40 years, that spell out how the landowners should use the land to ensure adequate carbon sequestration and storage. Landowners are handed expert knowledge built on technologies and carbon measurements that are foreign to local communities.
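
As a rough illustration of where the value flows, here is a back-of-the-envelope calculation in Python. All figures are hypothetical and are not drawn from the projects named above; the two-per-cent community share mirrors the climate-finance statistic cited below.

    # Hypothetical figures, for illustration only.
    verified_tonnes_co2e = 50_000   # sequestration verified by a certifier
    price_per_credit_usd = 10.0     # one credit typically equals one tonne of CO2e
    community_share = 0.02          # cf. the under-two-per-cent figure cited below

    revenue = verified_tonnes_co2e * price_per_credit_usd
    to_communities = revenue * community_share
    print(f"Project revenue: ${revenue:,.0f}")              # $500,000
    print(f"Reaching communities: ${to_communities:,.0f}")  # $10,000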

A zebra in the Maasai Mara National Reserve in southern Kenya. (Kariũki Kĩrigia)

Yet the same communities that have long managed lands and ecosystems sustainably are treated as lacking the ecological knowledge necessary for biodiversity conservation and carbon sequestration.

The outcome is that the owners of the technologies and what is deemed “expert” knowledge become the owners of the value generated from the land owned by communities.

While such initiatives generate millions of dollars in revenue, it has been shown that less than two per cent of climate finance reaches Indigenous Peoples, smallholder farmers and local communities in developing countries.

To create genuinely sustainable ecological conservation and improved quality of life for local communities, the government must focus on empowering communities through meaningful participation in initiatives.

Organizations like ILEPA and the Nashulai Maasai Conservancy are working to empower Indigenous communities in Kenya. These kinds of community-led efforts exemplify how conservation can, and must, include the people who call East Africa’s rich biodiverse landscapes home.

The Conversation

Kariuki Kirigia has received funding from the Black Research Network at the University of Toronto, the Ryoichi Sasakawa Young Leaders Fellowship Fund, and SSHRC-IDRC through the Institutional Canopy of Conservation research project.

Here’s why Canada needs to ditch age-based immigration points

Canada’s federal points system was established in 1967 to respond to historic racism and nationality bias in Canada’s immigration system. Its current incarnation, the Comprehensive Ranking System (CRS), ranks applicants for permanent residency by granting points for age, education, official language skills, Canadian work experience and family ties.

The federal government recently proposed changes to CRS points, including the elimination of some point categories. While family-related points are proposed for removal, age-based criteria are not.

My research delves into the legal, ethical and policy reasons why Canada should ditch age-based immigration points.

Age-based points are Charter violations

The Canadian Charter of Rights and Freedoms explicitly prohibits age discrimination in the equality clause of Section 15(1). According to the Supreme Court’s Singh v. Minister of Employment and Immigration decision, the Charter applies to anyone who is physically present in Canada, including non-citizens.

Many people who apply for permanent residence do so from within Canada. In fact, the federal government has introduced a two-year initiative — in 2026 and 2027 — to fast-track permanent residence for skilled workers who are already in Canada in specific high-demand sectors.

According to the lawyers I interviewed for my book, Age and Immigration Policy in Canada, such individuals would have solid legal grounds to launch a Charter challenge. They could claim that the points system constitutes age discrimination in violation of Canadian law.

Ageist immigration policies

Age discrimination embedded in the points system also contradicts Canadian values. Currently, a person gets zero points for age if they are under 18 or over 45.
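
As a sketch of how such a rule operates in code, the zero thresholds below are the ones stated above; the plateau and taper in between are invented for illustration and are not the actual CRS table.

    def age_points(age: int) -> int:
        """Illustrative age-points rule: zero under 18 or over 45, as stated
        above; the intermediate values are hypothetical, not CRS figures."""
        if age < 18 or age > 45:
            return 0
        if age <= 29:
            return 100                       # hypothetical plateau
        return max(0, 100 - (age - 29) * 6)  # hypothetical taper toward 45

    print(age_points(25), age_points(40), age_points(46))  # 100 34 0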

Imagine the public outcry if a person received zero points for being a woman, or for being a racialized person. Many Canadians would rightly call out such overtly sexist and racist policies.

Similarly, points for age undermine the merit-based foundations of the CRS. They contradict rights-based hiring practices that prohibit asking candidates their age and stereotyping older workers.

My archival research suggests the architect of the CRS, then-Deputy Immigration Minister Tom Kent, did not have a clear policy rationale for the initial age-based points. One historian has argued: “The points system, as it was originally conceived, has as much to do with politics as with labour markets.”

There is also an internal contradiction within the points system between the decreasing points for age and the increasing points for education and work experience. The latter reward the passage of chronological time, while the former penalizes it.

Age-based points are bad policy

Policymakers and public commentators sometimes justify age discrimination in the points system by claiming that older immigrants are likely to take more from Canada than they are to give. But research shows that this is empirically incorrect.

First, Canadian and Québec pension plans are contributory — benefits are calculated from lifetime earnings in Canada. For Old Age Security, people must have been residents of Canada for at least 10 years to qualify, and must have resided here for at least 40 years to receive the maximum benefit.
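
A minimal sketch of that residence rule, assuming the standard proration of Old Age Security in fortieths:

    def oas_fraction(years_resident: int) -> float:
        """Fraction of the full OAS pension under the residence rules above:
        at least 10 years to qualify, prorated in fortieths up to 40 years."""
        if years_resident < 10:
            return 0.0
        return min(years_resident, 40) / 40

    # An immigrant arriving at 40 and retiring at 65 has 25 years of
    # residence, so qualifies for 25/40 of the full pension:
    print(oas_fraction(25))  # 0.625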

As a result, immigrants have fewer years of contributions and are more likely to be poor in retirement than any other group of Canadians.

Second, while some may assume older immigrants will be a burden on the health-care system, the “healthy immigrant effect” is well-documented.

Newcomers also tend to under-use health services. What’s more, there’s a waiting period for universal health coverage. Some immigrants actually return to their home countries to access time-sensitive or culturally appropriate care.


Read more: Why is Canada snubbing internationally trained doctors during a health-care crisis?


Third, people over the age of 45 contribute indirectly to the Canadian economy in ways that are not captured in formal economic data. For example, they undertake unpaid work in family businesses or provide free child care to enable their adult children to work outside the home.

Given these legal, ethical and empirical concerns about age-based points, the time has come to eliminate them altogether. Ongoing public consultations on the CRS are a historic opportunity for Canadians to oppose the age discrimination that has been normalized in our immigration system for too long.

The Conversation

Christina Clark-Kazak receives funding from the Social Sciences and Humanities Research Council of Canada.

The bias in medical research: Africa carries a huge disease burden but is missing from clinical trials

Modern medicine prides itself on being a universal science, built on evidence from clinical trials.

But there’s a bias in medical research. While Africa accounts for roughly 25% of the global disease burden and 19% of the global population, the continent’s people are largely invisible in some clinical trials.

The scale of the erasure is revealed in a landmark study of 2,472 randomised controlled trials published globally between 2019 and 2024.

I led this team of researchers, who scrutinised the world’s most influential medical publications to quantify African representation. They included the New England Journal of Medicine, The Lancet, the Journal of the American Medical Association, Nature Medicine, and the British Medical Journal. There were also three leading cardiovascular journals in the study: Circulation, the European Heart Journal and the Journal of the American College of Cardiology.

I am a physician-scientist working at the intersection of cardiometabolic epidemiology and biomedical data science. I also focus on large-scale population studies in Africa and data-driven cardiovascular prevention.

Randomised controlled trials are a cornerstone of evidence-based medicine. Introduced in the mid-20th century, they rigorously evaluate the safety and effectiveness of treatments by randomly assigning participants to different groups. This is done to minimise bias. Trials like these have been central to major medical breakthroughs, from cardiovascular therapies to vaccines. They continue to guide clinical decisions and the development of new treatments worldwide.


Read more: African countries are signing bilateral health deals with the US: virologist identifies the ‘red flags’


What we discovered

Our findings show a profound imbalance in the global clinical research landscape. Across the five most prestigious general medical journals, only 3.9% of trials were conducted exclusively in Africa. In cardiovascular health, the numbers drop to a statistical whisper. Of the major trials published in leading cardiology journals, just two studies (0.6%) were conducted solely on African soil.

This is a crisis of scientific accuracy. When clinical trials exclude African populations, they produce evidence that lacks “external validity”. This refers to how well the results of a study can be generalised beyond the participants. It asks whether findings from a clinical trial will still hold true when applied to different populations, settings, or real-world conditions.

Without that validity, doctors are essentially conducting unmonitored experiments on millions of patients every day.

Modern medicine cannot claim to be universal if entire populations remain invisible in the evidence base. Biology, health systems and disease patterns are not identical across the world.


Read more: Africa is losing health workers when it can least afford to – a pattern rooted in colonial history


The gap and why it matters

Many treatments used across the continent are based on evidence generated in non-African populations, raising concerns about their applicability.

Moreover, most Africa-based trials still focus on infectious diseases, despite the rising burden of non-communicable diseases such as cardiovascular disease.

Emerging evidence shows that genetics, environment and diet can radically alter how a body responds to a drug. It therefore makes no medical sense that an entire continent is left out of the trial net.

There’s also evidence showing that certain treatments have different safety profiles in Black patients. Diabetes and gout are just two examples. So are certain common blood pressure medications, such as angiotensin-converting enzyme (ACE) inhibitors. Research shows that they carry a three- to four-fold higher risk of severe, life-threatening side effects in people of African descent compared to other populations.

When clinical trials exclude populations, doctors are forced to extrapolate findings from one population and apply them to another.

The study also highlights a dangerous lag between global research funding and the evolving reality of African health. The new data show that nearly 76% of trials conducted exclusively in Africa focused on infectious diseases. But the continent is undergoing a massive epidemiological shift. Non-communicable diseases – heart disease, stroke, and diabetes – now account for about 38% of all deaths in many African nations.

The middle class in Africa has tripled to 300 million people from roughly 100 million in the early 2000s. More people are now living long enough with lifestyles that increase the risk of chronic conditions such as heart disease, diabetes, and hypertension. Consequently, there is a growing need and market for long-term treatments that manage these diseases, rather than short-term therapies for infections. Yet cardiovascular trials on the continent remain scarce.

Even within the continent, the data show deep “black holes” of information. South Africa accounted for over 62% of all trials conducted on the continent. Central Africa, a region that’s home to more than 180 million people, was virtually non-existent in the global research record. It contributed less than 3% of the continent’s limited trial output. Possible reasons include South Africa’s decades of cumulative investment, seen in stronger academic hubs, research governance, experienced trial units, and more established sponsor relationships. Other regions face barriers like fewer resourced research institutions, less access to trial platforms, and sometimes language and publication issues that can reduce visibility in top-tier journals.

The inequity extends into the hierarchy of science itself. Even when African sites are included in large, multicontinental trials, they are often relegated to the role of “recruitment hubs” rather than scientific partners. Our study found that African scientists led only 3.6% of multicontinental trials that included an African site.


Read more: Africa needs to speed up research excellence: here’s how


Towards a new era of African science

Africa should not simply be a location where studies are conducted.

It must be a place where research is conceived, led and interpreted. The current model creates a cycle of external dependence where international institutions manage the funding and the data. This leaves local research systems fragile and unable to translate evidence into national policy.

There is a need for “ring-fenced” funding for African-led research, the development of regional trial networks, and a mandate for medical journals to report on the diversity of trial populations.

There are signs of a rising momentum. Organisations like Alliance for Medical Research in Africa are working to equip a new generation of African investigators. Africa must create a research ecosystem that is too important for the global community to ignore.

The Conversation

Bamba Gaye does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

80% of Africa’s fertiliser is imported: how food systems can adapt to the Iran shock

Conflict in the Persian Gulf is disrupting fertiliser supplies, and Africa’s food systems stand to lose.

Agrifood systems (the activities that connect the people, investments and decisions involved in producing and delivering food and agricultural goods) rely on a steady flow of inputs like fertiliser, along with markets, infrastructure and policy and trade decisions.

These food systems can absorb shocks and find new ways to keep supplies flowing under pressure. But they are also sensitive. A disruption in one part of the system has an impact on others, as the conflict in Iran that erupted in late February 2026 shows clearly.


Read more: Iran has a powerful new tool in the Strait of Hormuz that it can leverage long after the war


This is how the war on Iran affects sub-Saharan African farmers and food systems: the Gulf countries (which include Iran) are collectively the world’s biggest exporters of fertiliser ingredients. Iran alone is the fourth-biggest global exporter of urea, a key fertiliser ingredient, and one of the cheapest suppliers. Nigeria, Ghana, Togo, Kenya, Tanzania and North Africa all buy urea from Iran.

Qatar is another key urea producer and exporter, but it stopped making urea in early March 2026 because production requires gas and its gas plants were hit by Iranian missiles.

Shipping through the narrow Strait of Hormuz, next to Iran, is down by 95% since the start of the war. This means the fertiliser that is still being made in Gulf countries has been prevented from leaving the region.

This is bad news for sub-Saharan Africa, which imports about 80% of the fertiliser it uses. This comes from countries including Russia, Ukraine, India and China, as well as Europe and the Gulf states. Malawi, for example, imports 52% of its fertiliser from the Gulf. Morocco, Nigeria and South Africa also import ingredients from the Gulf states and use them to make fertiliser that they export.


Read more: Has the Strait of Hormuz emerged as Iran’s most powerful form of deterrence?


Fertiliser prices have already increased. And, unlike oil, there is no internationally coordinated strategic reserve for fertiliser. When the supply is disrupted, it stays disrupted.

I am a researcher and practitioner who looks at how evidence and policy can be used to make better decisions in food systems and agriculture. Recently, I was part of a team that investigated how to end hunger and all forms of malnutrition through changing the agrifood system so that nutritious food becomes more available, affordable, or accessible to poor and often rural communities.

We are especially interested in the kinds of interventions that attract investment from both the private and public sectors.


Read more: Africa’s superfood heroes – from teff to insects – deserve more attention


Our research found that food in Africa is often available but not affordable, safe, or diverse enough to make up healthy diets. For example, over the past 50 years government policies have pushed subsidies, price incentives and procurement programmes towards growing staple crops (maize, wheat, rice). But on their own, these crops are not very nutrient-dense. Focusing mainly on them means that more nutrient-dense foods have been crowded out.

Our research found a number of ways that Africa’s agrifood systems can provide more nutritious foods in future. This can also happen when fertiliser supplies are limited. We highlight some of them below.

From pandemic to war to Hormuz: Africa’s fertiliser shocks

Fertiliser disruptions and the damage to agrifood systems in Africa have happened before.

Between 2020 and 2024, fertiliser supply chains were strained by COVID-19 and then the war in Ukraine. African farmers absorbed those shocks by reducing the amount of fertiliser they used on their crops. But this led to lower yields, lower earnings and tighter household budgets.


Read more: Russia’s war with Ukraine risks fresh pressure on fertiliser prices


It’s important to remember that fertiliser supplies are entangled with decades of subsidy policy, public investment and debates about what kind of agriculture African governments should be promoting. They’re highly contested and politicised, shaped by history and power as much as by agronomic evidence and household economic choices.

The current threat of shortages is only part of the picture.

Ten ways for African countries to cope using less fertiliser

The food systems in Africa that survive the fertiliser crisis linked to the Iran war will be those that put in place nutrition-focused programmes and continue investing in innovations that reduce dependence on fertiliser.

Our report identifies ten high-impact interventions that improve nutrition and dietary outcomes. Several are particularly relevant right now:

  • Farmers should start growing fruit, vegetables and pulses, and farming with trees (agroforestry). This improves the health of the soil and produces nutrient-dense food.

Read more: Indigenous trees might be the secret to climate resilient dairy farming in Benin, says this new study


  • Home gardens can improve diets and household food security, if people get training and nutrition education.

  • Sustainable aquaculture (fish) and livestock farming, including poultry, boosts production and protein consumption.

  • Bio-fortified crops, such as high-iron beans grown in Rwanda and vitamin A-rich, orange-fleshed sweet potatoes in Mozambique, build nutrition directly into the crop during production. Because the nutrition is bred into the crop itself, they deliver more nutritional value from the same fertiliser inputs.

  • Storage and distribution infrastructure reduces spoilage of food. It also improves the quality of food.


Read more: Scientists are breeding super-nutritious crops to help solve global hunger


  • Foods can be fortified (have essential vitamins and minerals added) when they are being processed. These improve nutrition without requiring any changes in how food is grown.

  • Food and agricultural handling practices must be introduced to keep crops safe to eat.

  • Nutrition education helps people make better everyday food choices so that, when food is available, people eat more varied and nutritious diets.

  • Social protection programmes, such as cash transfers and food vouchers, help families during times when prices rise.

  • Providing school meals specially designed to be nutritious offers a high return on investment.

What needs to happen next

Our research emphasises that these interventions can only work as a bundle or package of support. Gender matters too; our research found that women don’t always get to eat nutrient-dense food even when more is available at home.

These interventions represent what we know works today. But governments and researchers should look beyond these too. For example, scientists at the Centre for Research on Programmable Plant Systems (including scientists at Cornell University) are engineering specialised plants known as “reporter” plants. A reporter plant is typically placed strategically in a field of crops to act as an early warning system.

They have developed a tomato plant, for example, that turns vivid red when soil nitrogen levels drop to critically low levels. This plant gives farmers precise, real-time information about what their fields need.

Tools like these could transform the relationship farmers have with fertiliser: reducing waste, cutting costs, and building a form of fertiliser intelligence into the farming system itself.

The Conversation

Jaron Porciello receives funding from BMZ Germany and Gates Foundation.

India’s Horn of Africa strategy has shifted: what it’s trying to do and how it could work

India’s engagement in the Horn of Africa and Red Sea basin was, until recently, largely limited to UN peacekeeping operations and anti-piracy patrols.

Since the second half of the 1990s, India has participated in nearly all peacekeeping operations in Africa.

Anti-piracy efforts emerged between 2008 and 2014 as piracy off Somalia and the Gulf of Aden spread across a vast maritime space. This spanned east Africa and the wider Indian Ocean, bringing threats close to India’s shores.

Indian trade routes were exposed to new security risks, so a more sustained maritime posture was needed.

From the mid-2010s, therefore, India expanded its engagement in the Horn of Africa and the Red Sea basin to secure shipping lanes linking it to global markets. At the same time, it sought to counter China’s growing naval presence along the western Indian Ocean coast, protect its diaspora and investments, and position itself as a regional security provider.

When Prime Minister Narendra Modi took office in 2014, this shift accelerated. India placed greater emphasis on proactive diplomacy, expanding high-level engagement, and trade and infrastructure links. It also pursued strategic coordination through bilateral agreements and naval exercises across west Asia and the adjoining African coastline.

India, the Horn of Africa and the Red Sea basin

This evolution reflects India’s transition from a post-colonial, non-aligned actor to a more assertive power with ambitions outside the region. It is now Africa’s third-largest trading partner. Economic interdependence is growing alongside geostrategic interests.

Drawing on our work on international security in the western Indian Ocean and sub-Saharan Africa, we argue that over the past decade New Delhi has redefined the Indian Ocean as a protective buffer and a primary theatre of influence linking the Indo-Pacific to the Red Sea. The Horn of Africa lies at the heart of this connective space.

In 2023, India declared itself the Indian Ocean’s “net security provider”. It introduced a framework to strengthen regional security, deepen economic cooperation and address shared maritime challenges.

Today, with shipping routes being recalculated and governments reconsidering their strategic partnerships, India’s position is being put to an operational test.

The Horn is a space where legitimacy, delivery and endurance determine who remains relevant after the headlines fade. For the first time, India’s quiet advance is visible. Next, it will have to solidify its presence.

Why the Horn of Africa is important for India

An initiative called the 2025 Africa-India Key Maritime Engagement, co-hosted with Tanzania, positions India as a security partner for African nations, particularly those along the Indian Ocean rim.

India is also involved in development and investment projects in the region. These include agricultural efforts to improve food security, infrastructure projects, and technical assistance in education and health. It also provides humanitarian assistance in Somalia, Kenya and Djibouti.

What distinguishes the past decade is the effort to align these activities within a broader strategic narrative – one that presents India as a partner offering technology and development without debt concerns or political conditions.

This narrative is attractive to local governments in the Horn. But it also creates a test: India must show that it can deliver consistently.

Ethiopia has an important role for India. It hosts the African Union, functions as a diplomatic centre and offers an entry point into African multilateral politics.

Somalia also matters. It sits close to critical sea lanes and is central to the security of the Gulf of Aden. External actors there can convert security assistance into political access.


Read more: China’s military support for Somalia is on the rise – what Taiwan and Somaliland have to do with it


India’s interest in Somalia and Somaliland has taken on a geo-economic dimension. Indian firms are focusing on gold and mineral resources, particularly in eastern Somaliland.

Although still limited in scale, this shift signals that India’s footprint in the Horn is no longer confined to security and development assistance. It is intersecting with resource access and supply chain strategies.

The competition

The corridor of the Red Sea, Gulf of Aden and western Indian Ocean has become a crowded arena for external powers over the past two decades.

Great powers have seen countries in the region as platforms for counterterrorism and naval reach. Small and middle powers (like Turkey, Iran and the Gulf states) have sought to secure influence through ports, training missions, arms transfers, commercial access and selective mediation.

The result is a dense environment. Almost every external actor offers a package of security, finance, technology and diplomacy. Fragile local governments hedge among them.

India’s challenge is to deliver consistently through:

  • defence and security training pipelines

  • project delivery

  • stable financing instruments

  • sustained bureaucratic attention.

If India’s Africa policy is maritime-led, then things like naval exercises, information-sharing, coast guard cooperation and institutional training must become regular and visible.

If the strategy is also developmental and technological, then India must deliver flagship projects in digital infrastructure, health and agriculture.

From quiet influence to lasting power

India faces three constraints in growing its influence in the Horn of Africa.

1. Limited military capacity

India’s naval capabilities do not match the scale of China’s fleet or America’s technological edge and operational depth. This gap is not fatal if India’s aim is durable influence through partnership. It does mean that India’s leverage will depend on institutional cooperation and coalition-building.

2. Competitive density

The Horn’s security architecture is built on foreign bases, port diplomacy and overlapping rivalries. India’s advantage is that it is not overwhelmingly intrusive. But it could become just one more actor among many.

3. Institutionalisation

If India’s engagement depends too heavily on leader-level attention, it will remain vulnerable to distraction. Durable influence requires bureaucratic routines and financing mechanisms. It must survive political cycles and shifting crises. Ethiopia is a test case. High-level roadmaps will have to turn into visible digital infrastructure, health systems and agricultural support.

The broader point is that the Horn is not an empty theatre waiting for India to arrive.

The Conversation

Federico Donelli is affiliated with the Italian Institute for International Political Studies (ISPI), the Nordic Africa Institute (NAI), and the Orion Policy Institute (OPI).

Riccardo Gasco is affiliated with IstanPol Institute.

Chiara Boldrini does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Extreme heat is a growing threat to health, jobs and food security in southern Africa – study looks for practical solutions

Extreme heat is not just uncomfortable weather – it is becoming a serious threat to health, jobs and food security across southern Africa, especially for those least able to cope.

Unlike floods, cyclones, wildfires or storms, extreme heat rarely leaves dramatic images of destruction. But it builds without relief, putting strain on people’s bodies, homes and health systems.

In many cases, the danger is intensified when temperatures stay high overnight, leaving little chance to recover.


Read more: Heat with no end: climate model sets out an unbearable future for parts of Africa


Even temperatures that seem manageable can be dangerous, depending on where people live and how well they can adapt.

We are members of a group of researchers and practitioners from across southern Africa working on climate, health and policy.

We recently conducted a regional consensus study for the Academy of Science of South Africa (ASSAf) to assess how extreme heat affects health and daily life across the region. Our aim was to determine what practical steps are needed to reduce the harm caused by extreme heat.

We worked with a team of independent experts from across disciplines to review scientific evidence, regional data and policies, and to develop a shared, evidence-based view of how extreme heat is affecting the region.

Our study was unique because it brought together evidence from across health, labour, food systems and infrastructure to show how heat affects everyday life, analysing heat not just as a weather event, but as a system-wide risk.


Read more: Heat extremes in southern Africa might continue even if net-zero emissions are achieved


We found that extreme heat is already a defining climate and health threat in southern Africa.

One of the biggest mistakes in public discussion is to treat heat as simply a weather event. It is much more than that. Heat immediately increases the risk of dehydration, heat exhaustion and heat stroke. Heat can also worsen existing conditions such as cardiovascular, respiratory and renal (kidney) disease.

Heat needs to be treated as a major public health and development priority across the Southern African Development Community.

Heat is a health issue – not just a weather issue

The Southern African Development Community has 16 member states, home to more than 400 million people. Yet collectively, these countries contribute less than 1.3% of global greenhouse gas emissions.

Despite this, southern Africa is already heating up fast. Average surface temperatures across the region have risen by 1.0-1.5°C since 1961. A further 4.5-5°C increase is projected by 2050 under high-emission scenarios (in which greenhouse gas emissions continue at their current rate).


Read more: Climate change has doubled the world’s heatwaves: how Africa is affected


In our report, we describe extreme heat as an “integrator hazard” (a multiplier). This means it is not only one risk but makes existing problems worse all at once.

For example, extreme heat can reduce crop yields and nutrient quality, increase water stress, worsen air quality through dust and wildfire smoke, and disrupt livelihoods that depend on safe outdoor work – all at the same time. That is what makes heat so dangerous.


Read more: South African study finds 4 low-income communities can’t cope with global warming: what needs to change


It can also make already hot environments – especially informal settlements with limited shade, ventilation or cooling – far more dangerous. Extreme heat can place added strain on electricity systems. This increases the risk of power outages just when cooling, water supply and health services are most needed.

In many communities, heat also shortens the safe life of perishable food – including food sold informally that isn’t stored in fridges. This too increases the risk of food-borne illness. That matters in a region like southern Africa where street food and informal food economies are part of everyday life.

The burden is deeply unequal

Extreme heat does not affect everyone equally. One of our study’s central findings is that the people and communities most exposed to heat are often those with the fewest resources to adapt. This includes people living in informal settlements, those without reliable electricity or cooling, communities facing water scarcity, and workers who must work outside all day.

Across much of southern Africa, many people work outdoors or in poorly ventilated environments – from subsistence farms and construction sites to factories, markets and transport hubs. Being forced by heat to slow down, stop work, or continue working under dangerous conditions affects both health and livelihoods.


Read more: Zambia’s farmers are working in dangerous heat – how they can protect themselves


Heat exposure affects daily life: children may walk long distances to school or spend hours outdoors. It also affects pregnancy and newborn health, raising the risk of premature birth, low birth weight and other pregnancy complications.

For this reason, extreme heat is also an ethical and justice issue. The people who contribute least to climate change are often the ones most exposed to its effects – simply because of where they live, the work they do, and the resources available to them.

What governments should do now

Extreme heat is not a problem that can be solved simply by telling people to “drink more water” or “stay indoors” – especially where safe housing, water, electricity and cooling are not guaranteed. But there are practical measures that governments and institutions can take.

These include:

  • improving locally appropriate early warning systems

  • tracking heat-related illness and deaths to guide response and planning

  • making clinics and hospitals more climate-resilient, through reliable electricity, cooling, water supply and backup systems

  • protecting workers through rest breaks, shaded areas, access to water and adjusted working hours

  • improving urban design and housing so that buildings and neighbourhoods stay cooler

  • integrating heat into national climate and health planning.

Governments can also establish public cooling spaces – such as community centres, schools or clinics – where people can safely rest during extreme heat.


Read more: Climate change: the effects of extreme heat on health in Africa – 4 essential reads


There are already promising examples in the region. South Africa has begun strengthening heat-health early warning and surveillance systems. Malawi is helping farmers adapt to rising temperatures through climate-smart agricultural planning.

Namibia has supported community-level water and resource management in heat-prone areas. These examples show that progress is possible, but they need to be expanded and sustained.


Read more: Climate information is useful at local level if people get it in good time: how African countries can build systems to share it


Heat does not respect borders, and coordinated action within and between countries can better prepare the region for heat disasters. National meteorological services, health departments, local governments, labour authorities and emergency services should work together so that heat warnings lead to clear, coordinated action on the ground.

For too long, extreme heat has been treated as a secondary climate risk. That is no longer tenable. Heat now needs to move to the centre of climate policy. The question is no longer whether southern Africa can afford to act. It is whether it can afford not to.

The Conversation

Jerome Amir Singh has received funding from the Academy of Science of South Africa (ASSAf). ASSAf is a statutory body that is funded primarily through a parliamentary grant allocated by the South African government's Department of Science, Innovation, and Technology.

Caradee Yael Wright receives funding from the South African Medical Research Council.


UK terror threat is raised – counter-terror expert explains how official prevention strategies work


The UK has raised its terror threat level from “substantial” to “severe”, meaning an attack within the next six months is considered highly likely. The change means the threat level is at severe for the first time in four years. It came with a warning from the Home Office of an increased threat from individuals and small groups based in the UK.

Counter-terrorism in the UK centres on a strategy known as Contest – a key part of this is the Prevent programme. The primary objective of Prevent, as the name suggests, is to stop people becoming involved in terrorism or from supporting extremist ideologies.

As such, it is designed to deliver tailored early intervention aimed at addressing risks of radicalisation, extremism and terrorism.

In my experience as a researcher in counter-terrorism studies, I have engaged extensively with Prevent practitioners. I have gained an insight into their work, which is carried out with commitment and dedication despite constraints in funding and resources.

A Prevent referral does not imply that someone has committed, or is suspected of committing, a criminal offence. Rather, it indicates that a professional with a statutory “Prevent duty” has raised concerns about someone’s behaviour, expressions or vulnerabilities. These professionals include teachers, healthcare workers and social workers.

Recent data illustrates the scale of the programme: between April 2024 and March 2025, 8,778 people were referred to Prevent. This is the highest figure recorded since data collection began in 2015.

Notably, a significant proportion of these referrals involve young people, with 36% (3,192 cases) concerning children aged 11 to 15. A further 1,178 cases involved those aged 16 and 17.

More than a third of referrals to Prevent involved children. Aoy_Charin/Shutterstock

The criteria for referral are broad, and they can include observable changes in behaviour, expressing extremist views, or signs of vulnerability that could make someone more susceptible to being radicalised.

Referrals are assessed by a panel that often includes representatives from education, social services and law enforcement. In many cases, referrals do not progress beyond this initial stage. Instead, the person may be signposted to other teams, such as social services, for more support.

But where there are still concerns, individuals may be offered support through the Channel programme. This is Prevent’s primary de-radicalisation mechanism. Channel is voluntary and confidential; referrals can come from anyone and from any context, though most are made by the police and the education sector.

The support is tailored to the specific needs of each case and may include mentoring and mental health support. Individuals can decline to take part or stop participating at any point.

Stigma and polarisation

Prevent is designed to be pre-emptive – intervening before criminal activity occurs. However, the breadth of its referral criteria is also a source of ongoing controversy.

Critics argue that it can result in referrals based on ambiguous or misinterpreted behaviour. As such, it has the potential to reinforce polarisation, particularly in relation to specific groups such as Muslims. This can leave people feeling stigmatised, under surveillance or unfairly judged, particularly if they believe the referral was based on cultural, religious or political misunderstandings.

The scheme also has significant structural limitations as it functions as a time-limited intervention rather than a system of ongoing monitoring. Once a case is closed, the person is not subject to continuous oversight. This means that a previous referral does not eliminate the possibility of future risk – it simply means that the threshold for intervention was not met at that time.

It is important to remember that positive stories rarely get widespread attention. Every year, thousands of people in the UK receive early support and successfully disengage from radical and extremist ideologies.

Prevent operates in a difficult space between safeguarding and security, where risk is assessed before an offence occurs. While high-profile cases can amplify perceptions of failure, they do not reflect the full picture. Most referrals do not lead to further action, and many result in early, voluntary support that helps people disengage from harmful pathways.

At the same time, concerns around ambiguity and stigma remain valid. In short, rather than seeing Prevent as a measure of guilt, it’s important to recognise its limitations and its role as an early intervention tool.

The Conversation

Elisa Orofino does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

The king’s state visit was a success – but there is still a chasm to bridge between UK and US outlooks

As King Charles concludes his transatlantic travels with a visit to Bermuda, a British Overseas Territory, he can take pride in a job well done. His four-day state visit to the US – which concluded with a wreath-laying at Arlington National Cemetery and a block party in Virginia – appears to have been a success.

Amid a period of heightened tension between President Donald Trump and Prime Minister Keir Starmer, the king’s carefully calibrated speech to a joint session of Congress has secured praise on both sides of the Atlantic (and on both sides of the Congressional aisle).

It was a remarkable performance: careful, diplomatic, occasionally pointed and at times both charming and witty. Perhaps we should not be that surprised. The king is a highly experienced diplomat, and while this was his first address to Congress, it was his 20th visit to Washington – as he himself noted.


Read more: How King Charles charmed the US while taking digs at Trump


But what does the undoubtedly warm American response to the king’s visit mean for the future of the US-UK “special relationship”?

On its own, no act of royal diplomacy, however well executed, can deliver an instant reset in US-UK relations. Nor can it force an American president of any stripe – let alone the current incumbent – to change tack or alter approach.

On this, history offers a salutary lesson. For all the positive impact of King George VI’s June 1939 visit to the US, it did not lead to instant American intervention when war broke out just a few months later.

The queen’s visit in 1957 was similarly well received. But the work of rebuilding transatlantic trust – so damaged by the Suez crisis – remained ongoing in the months that followed.

This latest state visit will similarly offer no quick fix. But like its predecessors, it might shift the dial on the current US-UK dialogue – and perhaps help to temper the tone, at least for a while.

For the UK, there has already been one immediate win: Trump has decided to remove whisky tariffs in honour of the king’s visit. This will be very much welcomed by the Scottish whisky industry.

The potential long-term impact of the king’s visit is harder to ascertain, not least because it will be determined by variables well beyond either his or Starmer’s control: contemporary geopolitics, especially as shaped by the wars in Iran and Ukraine. Despite this week’s mutual exchange of praise and platitudes, the distance between London and Washington on these matters remains substantive.

Worlds apart

There was a brief glimpse of these differences in two of the speeches: the king’s to Congress and, on the same day, the president’s at the White House.

The king’s speech lingered on the shared ties of history and, especially, on democratic values and ideals. It included references to the importance of compassion and interfaith dialogue, and celebrated the strength of what the king called “our vibrant, diverse and free societies”.

President Trump’s speech at the White House a few hours earlier similarly featured references to the transatlantic connections born of history. But elsewhere, his tone and intent seemed rather different.

Trump celebrated “the blood and noble spirit of the British” – qualities which had provided American revolutionaries with a “majestic inheritance”. At one point, he even declared that the patriots of the American Revolution had been animated by the “Anglo-Saxon courage” in their veins. This is a claim on American history that one commentator has suggested “walks up to the edge of white nationalism”.

There are indeed echoes here of an early 20th-century diplomatic discourse known as Anglo-Saxonism. Once popular and pervasive among transatlantic writers and diplomats (including those of a nativist bent), it largely fell out of favour in the years between the world wars – its racial assumptions increasingly untenable.

As long ago as December 1918, President Woodrow Wilson explicitly told an audience of British dignitaries – including King Charles’s great-grandfather, George V – that they must not “think of us as Anglo-Saxons, for that term can no longer be rightly applied to the people of the United States”.

Wilson was the first serving US president to visit Britain, and, like Trump, had a British mother (Trump’s was from Tong on the Isle of Lewis, Scotland). Wilson’s point – made in advance of the Paris Peace Conference – was that the US was a melting pot, not a monoculture.

King Charles’s speech hailed the importance of alliances (Nato), of multilateral institutions (the UN), and of the rule of law. These, he argued, are what have delivered eight decades of transatlantic peace and prosperity.

From the president, meanwhile, came a celebration of Anglo-American blood brotherhood. It was at times reminiscent of thinking (and language) which his predecessor Wilson – no “woke” radical – had called into question well over a century ago.

These two visions of the US-UK relationship, distinct in their underlying assumptions, reflect a broader geopolitical shift – one which has increasingly strained the transatlantic relationship in recent months.

The king’s vision, indicative of widespread sentiment in Europe, represents an affirmation of the post-1945 world order. The other, with its echoes of the early 20th century, is disruptive. Between them lies a chasm of significant proportions, and bridging it will be the task of today’s transatlantic diplomats.

The Conversation

Sam Edwards has previously received funding from the ESRC and the US-UK Fulbright Commission. Sam is a Governor of The American Library (Norwich) and a Trustee of Sulgrave Manor (Northamptonshire).

Why the 60-day War Powers Resolution deadline doesn’t actually constrain presidents

A TV displays U.S. President Donald Trump's prime-time address on the war in Iran inside a Cheesecake Factory on April 1, 2026, in Washington, D.C. Anna Moneymaker/Getty Images

May 1, 2026, marks the 60th day of Operation Epic Fury in Iran – the symbolically significant date by which a president who has mounted unilateral military operations must receive Congressional approval or wind them down.

However, the complex history of the War Powers Resolution clock demonstrates that it is a toothless milestone.

The Trump administration signaled on April 30, 2026, that it would ignore that deadline, set by the War Powers Resolution. Secretary of Defense Pete Hegseth testified before the Senate Armed Services Committee that “we are in a cease-fire right now, which my understanding is that the 60-day clock pauses or stops in a cease-fire. That’s our understanding, so you know.”

Sen. Tim Kaine of Virginia, a Democrat, responded that the 60-day threshold poses a “legal question” and “constitutional concerns.”

This is not the first time presidents and members of Congress have sparred on the meaning of the War Powers Resolution. What happens next will play out through regular politics, because the conflict is not a matter of simple legal interpretation.

War: Collective judgment

In the U.S. Constitution, Congress and the president share war powers.

In the shadow of political struggles in the final years of the Vietnam War, Congress passed the War Powers Resolution in 1973 to “insure that the collective judgment of both the Congress and the President will apply to the introduction of United States Armed Forces into hostilities.”

A crucial section of the resolution reasserts legislators’ role, making clear that the president’s constitutional power to make war is to be exercised only under the following conditions: a Congressional declaration of war; specific statutory authorization; or a national emergency created by attack upon the United States, its territories or possessions, or its armed forces.

For new military campaigns that do not meet these criteria, the resolution included a 60-day clock that begins when a president reports the action to congressional leadership within 48 hours of the action beginning.

The clock can be extended to as many as 90 days upon presidential determination and certification of “unavoidable military necessity respecting the safety of United States Armed Forces” related to the removal of troops.

After 60 to 90 days, the resolution originally said this type of unilateral military action would be terminated automatically unless both chambers of Congress approved some form of legislative authorization.

Congress could also choose to terminate an unauthorized military operation any time before the 60 days with a concurrent resolution, which doesn’t require a president’s signature – essentially, a “legislative veto.”

And to make sure the president couldn’t stretch the definition of congressional approval, the resolution said neither existing treaties nor new budget appropriations could substitute for legislative authorization of a military action.

Since 1973, actions by all three branches, across a variety of political and policy landscapes, have undermined the resolution’s intent and procedures.

Veto vetoed

In 1983, the Supreme Court declared various kinds of legislative vetoes unconstitutional. This led Congress to reinterpret its War Powers Resolution procedures and powers and effectively amend its processes, expediting any joint resolution or bill that “requires the removal of U.S. armed forces from hostilities outside the United States.”

Now, if members want to stop a presidential military campaign already in progress, they must act affirmatively and pass a disapproval resolution, which a president can veto like any other bill. Congress has sent only one such disapproval – to President Donald Trump in his first term – which he vetoed. Congress did not have the two-thirds majority the Constitution requires to override a veto.

Both chambers of Congress now have to vote twice, once to disapprove a military action and then again to overcome a likely veto, to stop something they never approved in the first place.

House Speaker Mike Johnson explains on March 4, 2026, why his party rejects a Democratic-led measure to assert Congress’ war powers and stop the Iran military action.

The 60-day mark for the current Iran operation has therefore loomed as more of a politically charged symbol of this longstanding imbalance on war powers than a real deadline for action by either branch.

Parallels to Kosovo and Libya

The House and Senate have tried to pass legislation to stop military operations against Iran six times since operations began. All attempts have failed, including the most recent vote on April 30. Democrats are considering filing suit against President Trump if operations go beyond 60 days without authorization.

Yet federal courts have long been reluctant to get involved in constitutional questions related to the War Powers Resolution, especially when members of Congress are the plaintiffs.

Although most presidents from Richard Nixon onward have claimed that the War Powers Resolution is an unconstitutional check on their institutional powers, they have usually filed the required reports on new military actions within 48 hours of their start.

While the current Iran conflict is different in many ways, presidential unilateralism, inconclusive chamber actions and even member lawsuits all echo controversies over U.S. military action in Kosovo in 1999 and Libya in 2011.

Where the Trump administration may lean on Clinton

Operation Epic Fury against Iran began Feb. 28, 2026, and President Trump sent the required report to Congress on March 2, 2026.

After detailing the rationale for military action, Trump added: “Although the United States desires a quick and enduring peace, it is not possible at this time to know the full scope and duration of military operations that may be necessary.”

He concluded the memo with his interpretation of constitutional power to act unilaterally.

“I directed this military action consistent with my responsibility to protect Americans and United States interests both at home and abroad and in furtherance of United States national security and foreign policy interests,” the president wrote. He acted, he said, “pursuant to my constitutional authority as Commander in Chief and Chief Executive to conduct United States foreign relations.” He said he made the report “consistent with the War Powers Resolution. I appreciate the support of the Congress in these actions.”

Similarly, on March 26, 1999, President Bill Clinton sent a War Powers Resolution letter explaining his decision two days earlier to take part in a NATO-led operation against the Federal Republic of Yugoslavia, known as FRY.

Clinton wrote to Congress using mostly the same words and phrases Trump did in his 2026 letter. Clinton also said that he took the action “in response to the FRY government’s continued campaign of violence and repression against the ethnic Albanian population of Kosovo.”

President Bill Clinton after his television address to the nation on the NATO bombing of Serbian forces in Kosovo, March 24, 1999. Pool/Getty Images

Clinton explained his authority in virtually the same language as Trump and, like Trump, said it was hard to predict how long the operations would continue.

The House and Senate repeatedly failed to either approve or disapprove of Clinton’s actions through a series of votes across March and April 1999. But lawmakers did send him supplemental appropriations for the operations in May.

NATO suspended the operation after 78 days. Almost a year later, a federal appellate court upheld a district court’s decision rejecting a lawsuit led by Rep. Tom Campbell, a California Republican, alleging Clinton violated the War Powers Resolution. Rather than deciding on the merits, the court held that the lawmakers’ claims of injury were not reviewable.

Obama did it, too

In a very different context, a similar rhythm played out during Barack Obama’s presidency.

During the “Arab Spring” revolts of 2010-2011, the U.N. Security Council passed two resolutions condemning violence against Libyan civilians by security forces under the direction of Colonel Moammar Gadhafi.

On March 21, 2011, two days after NATO operations, including American air support, began against Gadhafi’s forces, Obama sent his War Powers Resolution letter to the Republican House and Democratic Senate. He had not received prior legislative authority from Congress.

Obama’s letter included language almost identical to Clinton’s earlier letter and Trump’s later one.

As with Kosovo, the House and Senate ultimately neither approved nor disapproved of the president’s actions in support of the U.N. and NATO over the operation’s 222 days. In addition, Democratic Rep. Dennis Kucinich of Ohio led a group of mostly Republican House members in a failed War Powers Resolution lawsuit to stop the president.

Unilateral action endures

The Office of Legal Counsel in the Department of Justice has published legal opinions that explain and defend presidential war powers, including with Kosovo and Libya. In December 2025, that office published a memo defending the imminent January 2026 capture of Nicolás Maduro. On April 21, 2026, the State Department published a defense of ongoing U.S. actions in Iran.

Within the current dynamics of the War Powers Resolution, until Congress musters bipartisan supermajorities to connect its own institutional ambition with constitutional power, presidents from either party will decide alone if, and when, the country goes to war. Instead of Congress, presidents may heed public opinion and economic indicators, especially in election years.

The Conversation

Jasmine Farrier is affiliated with the American Political Science Association.
