
Vietnam railways accelerate digital shift, green transition

By 2030, Vietnam’s railway sector aims to achieve sustainable growth on existing lines while playing a greater role in supporting a green and circular economy. It also focuses on improving efficiency, reducing environmental impact, and upgrading service quality.


Over the past 15 years, NZ moved its fuel safety net offshore – now it’s being exposed

Marty Melville/Getty Images

Amid a worsening global energy crisis, New Zealand and Singapore’s freshly struck deal to keep fuel and other essential goods flowing is being touted as a boost to supply chain resilience.

The agreement commits both countries not to impose export restrictions on each other during economic upheaval. But it also highlights an uncomfortable reality facing New Zealand’s energy security, which depends heavily on fuel stored and refined overseas.

Nearly 60% of the country’s petroleum reserves are held offshore in countries such as the United States, Japan and the United Kingdom, and around a third of its fuel is refined in Singapore. As global tensions disrupt oil markets and put pressure on key shipping routes, that model is being tested.

While New Zealand meets international requirements to hold 90 days of net petroleum imports as a member of the International Energy Agency (IEA), much of this is stored thousands of kilometres away.

In emergencies, the IEA can coordinate collective stock releases to stabilise global markets, as occurred in the agency’s release of 400 million barrels of oil in March.

However, a closer look at the data shows New Zealand is a clear outlier in how it meets these obligations.

How NZ’s fuel security has shifted offshore

To remain compliant, the New Zealand government buys “ticket” contracts – or contractual claims on oil stored in other countries.

While these count toward the country’s 90-day requirement, they are effectively rights to purchase fuel that may never reach its shores during a major disruption, such as the closure of the Strait of Hormuz.

In January, New Zealand’s total petroleum reserves stood at exactly 90 days’ supply. This meets the IEA’s minimum requirement, but is the second-lowest reserve among members, ahead of only Australia’s 49-day capacity.
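The 90-day obligation is expressed as days of import cover: total petroleum stocks divided by average daily net imports. A minimal sketch of that arithmetic, using purely hypothetical figures (the function name and numbers are illustrative and do not reflect the IEA's exact methodology or actual New Zealand data):

```python
def days_of_cover(stock_barrels: float, daily_net_imports_barrels: float) -> float:
    """Days of import cover: total petroleum stocks divided by average
    daily net imports (a simplified sketch, not the IEA's official method)."""
    return stock_barrels / daily_net_imports_barrels

# Hypothetical illustrative figures only
stocks = 4_500_000       # barrels held (onshore stocks plus offshore ticket contracts)
daily_imports = 50_000   # average daily net imports, in barrels
print(round(days_of_cover(stocks, daily_imports)))  # prints 90
```

Because ticket contracts count toward the numerator just like physical stocks, the metric alone cannot show how much of that cover is actually reachable during a disruption.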

New Zealand’s total petroleum reserves (government and industry combined), shown as the number of days the country could cover its fuel imports, compared with other IEA countries in January 2026. Author provided, CC BY-NC-SA

New Zealand is also the only IEA member whose public oil reserves are fully overseas.

By contrast, countries such as Japan and South Korea hold around 200 days of reserves domestically, leaving them far better prepared for global supply shocks.

Share of petroleum reserves held overseas (including both industry and government stocks) as a percentage of total reserves, January 2026. Author provided, CC BY-NC-SA

New Zealand's reliance on offshore reserves has grown sharply over the past 15 years.

IEA data shows the country's domestically held industry stocks made up more than 90% of reserves in 2010–11, while public offshore holdings accounted for less than 10%.

By 2026, that balance had flipped. Industry stocks had fallen to 42%, while government-owned reserves held abroad had risen to 58%.

Share of New Zealand’s petroleum reserves held onshore by industry versus government-owned reserves held offshore, as a percentage of total reserves, 2008–2026. Author provided, CC BY-NC-SA

As companies cut physical inventories to reduce costs, the government filled the gap with ticket contracts to maintain compliance with the IEA’s 90-day requirement.

This shift effectively means New Zealand’s domestic resilience has been hollowed out. In January, for instance, the country held just 38 days of onshore petroleum stocks – far below the average of IEA members and of other Asia-Pacific nations.

The New Zealand government’s recent move to procure 90 million litres of diesel at Marsden Point will add roughly nine days of supply.

While a positive step, it remains small compared to the much larger domestic buffers maintained elsewhere.

The economic cost of fuel uncertainty

Because oil is a major driver of inflation, this all matters greatly to the average New Zealand household.

Last month, local diesel prices surged to over $3.80 a litre, almost double what they were before the Iran conflict. Because diesel powers farming and transport – both cornerstones of the New Zealand economy – these costs ripple through the entire supply chain.

When geopolitical risks rise, businesses increase “precautionary demand”, hoarding fuel inventory to avoid shortfalls. This reduces available supply and pushes prices even higher.

Research suggests the most effective way to reduce exposure to energy price volatility is through financial hedging or by holding physical fuel reserves. Holding reserves buffers against sudden supply shocks and reduces the risk of stockouts.

New Zealand, however, has only a thin physical fuel buffer. So what might be done?

Increasing onshore petroleum stocks can strengthen short-term energy resilience. But bigger oil tanks are not a lasting solution: true energy independence requires reducing New Zealand’s underlying oil consumption.

In this sense, there is much room for improvement. IMF data shows New Zealand has a relatively low level of trade in low-carbon technologies, at just 1.3% of GDP in 2024 – well below the IEA average of 4.76%.

To bolster its energy security in the meantime, New Zealand could look at increasing strategic onshore reserves, while shifting away from ticket contracts toward physical stockpiles to support critical sectors such as farming and freight.

At the same time, it could make a greater push toward electrification and the uptake of alternative energy sources, particularly by powering transport with renewable electricity.

Ultimately, this requires an orderly transition away from oil altogether, with a clear national focus on reducing dependence over time.

Right now, New Zealand’s strategy is a gamble on global stability. To protect the economy from future global oil supply shocks, it must bring its petroleum reserves home – then work hard to make them obsolete.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.


In the age of AI, human creative output is becoming a luxury

Imagine two identical spoons. One is hand-wrought from silver by a skilled metalworker. The other, a base-metal facsimile, was mass-produced by a machine. Which would you value more? Most of us would say the handmade spoon.

In 1899, more than a century ago, American economist and sociologist Thorstein Veblen used this very example to illustrate how we assign value in his theory of conspicuous consumption, which contended that bourgeois consumption was driven primarily by a desire to display wealth to others. Even if the two spoons were indistinguishable, Veblen explained, the hand-made spoon, once identified, would be more highly valued.

This is in part because “the hand-wrought spoon gratifies our taste, our sense of the beautiful, while that made by machinery out of base metal has no useful office beyond a brute efficiency.” But for Veblen there is another factor more important than any aesthetic judgment: costliness.

The hand-wrought spoon is preferred above all, Veblen suggested, because it is a means of demonstrating wealth. However, as we enter a world in which almost anything, including art, writing and music, can be machine-wrought, it seems that Veblen may have misjudged his spoons.

We don’t value human creations solely for their beauty or their price tag. We also value them because they embody deliberate labour and expertise.

AI-generated writing is judged differently

Our own research has shown that even highly trained writing educators cannot reliably distinguish between AI-generated and human-written essays. In fact, one study has shown that general audiences may actually prefer blander AI-generated poetry over more difficult, human-written poetry.

But while public taste may favour the simple and formulaic, the disclosure of artificial authorship is enough to make most people recoil.

In a recent study involving a series of experiments, participants were asked to compare pieces of creative writing, including poetry and fiction. In each case, they were told that some passages were human-written and some were AI-generated. Across 16 experiments, respondents consistently devalued the writing labelled as AI-generated.

The authors of the study call this the “AI disclosure penalty.” It is possible to conclude from the study that audiences unfairly judge AI-generated content, but we disagree. This bias towards human creation is inherent to our relationship with art. When people believe something was made by a machine, they like it less.

Some argue that AI can democratize creativity by lowering barriers to production and enabling more people to participate in cultural expression. But the evidence suggests that when authorship becomes effortless, perceived value declines.

The importance of effort and experience

Art costs something. Both John Milton and James Joyce believed that their writing had cost them their eyesight. John Keats believed that the emotional exertion of writing poetry would worsen his tuberculosis and cost him his life. They kept writing anyway. We resent the machine because its creations cost it nothing.

When an algorithm generates a story about heartbreak or an essay on human struggle, it is trading in stolen emotions. AI has never felt pain, suffered a loss or wrestled with the frustration of a blank page, so its output, no matter how technically smooth, feels fundamentally deceptive.

People hate the idea of being moved by a parlour trick. In addition, many of us have a deep, instinctive revulsion to the industrialization of our inner lives. As Joanna Maciejewska observed, “I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.”

We happily accept machines stamping out our car parts and toasters because efficiency is the goal, but applying that same cold logic to human expression strips away the vulnerability, risk and stakes that make art mean anything in the first place.

This becomes more consequential as AI-generated content floods the digital media landscape.

Why human work is becoming more valuable

Our media ecosystem has evolved so that paying directly for much of the content we consume is optional. In an era of streaming music, television and film, we rarely own the product we consume, and creators receive pennies on the dollar compared to previous economic models.

To make matters worse, media companies are increasingly pushing AI-generated content in the form of tens of thousands of social media posts, books, podcasts and videos every day and encouraging artists and content creators to supercharge the quantity of their output by relying on AI.

Much of this output is highly formulaic — produced at scale and designed for rapid, low-engagement consumption. It is an endless, flavourless paste of clichés and nonsense, meant to be mindlessly consumed by doomscrolling thumbs and immediately forgotten. Despite working in an era in which payment is optional amid a deluge of slop, many artists, journalists and writers are making a living because enough of their audience chooses to support the work of real human creators.

The “AI disclosure penalty” reminds us that the consumption of art is not tied to purely aesthetic considerations but involves a need to connect with and appreciate the effort and labour of others.

Consumers have long been willing to pay more for goods labelled “handmade,” “handcrafted,” “artisanal” or “bespoke” on the understanding that those goods were made using traditional techniques that took more effort and human skill.

As generative AI turns writing, art and digital media into frictionless, infinitely replicable outputs, human cognitive effort is undergoing a profound shift. It is becoming an artisanal good that consumers must choose to support and value.

The Industrial Revolution transformed hand-made furniture and hand-woven textiles into premium markers of craftsmanship and authenticity. The AI revolution is doing something similar for intellectual and creative labour — audiences are beginning to place a premium not necessarily on the competent execution of a poem or an essay, which a machine can generate in seconds, but on the invisible friction, the lived experience and the deliberate toil of the human mind behind it.

In a landscape increasingly saturated with instant content, the verified effort of a human creator is shifting from a baseline expectation to a highly coveted, bespoke quality. Ultimately, what we value about art is not whether it’s perfect, but its ability to connect us with another human being.

The Conversation

Nathan Murray has received funding for his research from the Social Sciences and Humanities Research Council of Canada (SSHRC).

Elisa Tersigni has received funding from the Social Sciences and Humanities Research Council of Canada (SSHRC).


1 year after GE2025, WP’s Sengkang MPs say they’ve focused on cost of living, everyday issues

SINGAPORE: One year after winning again at Sengkang GRC during GE2025, the Workers' Party's MPs said they have focused on residents' everyday issues, listing what they have pushed for.

The Sengkang 4—He Ting Ru, Louis Chua, Jamus Lim, and Abdul Muhaimin—won 56.31% of the votes cast in the constituency last year, besting a slate from the ruling People’s Action Party (PAP) that included former Senior Minister of State Lam Pin Min.

While the PAP had undoubtedly hoped to wrest Sengkang from the WP, especially in light of the lingering fallout from the scandal surrounding ex-WP MP Raeesah Khan, it ultimately fell short. Residents evidently felt that the second-term MPs (Ms He, Mr Chua, and Assoc Prof Lim) had done a good enough job to deserve re-election. The perception among residents is that their MPs are hardworking, and the video from the Sengkang 4 outlined just what they have been working on.

Taking turns, the MPs said, “Whether in Parliament or here on the ground, our main focus has been the cost of living. We’ve pushed for transparency on the structural costs and housing prices hurting our young families, fighting for things money can’t buy, like better mental well-being and stronger support for our caregivers. We also pushed for better local transport because a crowded LRT or a 400-meter walk to the nearest bus stop is a barrier to independence for our seniors or persons with disabilities.”

Nevertheless, they added that “debates in Parliament will only make a difference if we deliver right here in our estates,” going on to talk about Sengkang’s five-year master plan, which includes such spaces as the recently-opened Anchorvale Butterfly Garden and the Rivervale Dog Run.

However, what the MPs count as their “true successes” is made up of the assistance that has been extended to one resident at a time, such as the help given to a single mother in obtaining a flat or matching a resident with a job to get them back on their feet.

“For us, no resident’s problem is too small. We don’t take your trust for granted,” the MPs said, thanking residents and adding they’re looking forward to “more good years.” /TISG

Read also: WP’s master plan for Sengkang plan ‘sets the stage for the next lap’

This article (1 year after GE2025, WP’s Sengkang MPs say they’ve focused on cost of living, everyday issues) first appeared on The Independent Singapore News.


Why has your KiwiSaver bounced back even as the oil shock deepens? – Inside Economics

ANALYSIS: Liam Dann takes a deeper dive into the week's economic news.

It seems Wall Street tends to get bored with waiting for geopolitical events to play out. Image / 123rf


White House wants to vet powerful AI models for risks − a computer scientist explains why AI safety is so difficult

Is it possible to keep AI from causing harm? J Studios/DigitalVision via Getty Images

The Trump administration is looking to develop a process that would have the federal government review the safety of powerful artificial intelligence models before approving their release, according to a report in The New York Times on May 4, 2026. The move would stand in contrast to the administration’s generally anti-regulatory approach to industry and comes in the wake of Anthropic voluntarily postponing the release of its latest AI model, Mythos.

Anthropic was concerned because when it tested Mythos, the model found thousands of vulnerabilities in operating systems and web browsers. The implication was that if a cybercriminal or hostile foreign agent had Mythos, they could penetrate computer systems worldwide and compromise the basic computer code underlying public safety, national economies and military security.

As a result, Anthropic gave limited access only to about 50 companies and organizations managing critical infrastructure as part of its Project Glasswing. The initiative aims to help governments and corporations close software loopholes Mythos has identified. When Anthropic sought to broaden the number of organizations with access to Mythos, the White House objected.

Security experts, meanwhile, have expressed concern that AI researchers in nations such as China, Russia, Iran and North Korea might soon create similarly powerful AI models and use them to threaten or attack other countries, or to create chaos in those countries’ economies.

Major challenges

As a computer scientist working on computer security and malware, I can attest that it is difficult even to define what safety measures would make these models safe to use. Yet the future of many industries, critical infrastructure, national security and human well-being seems to depend on achieving AI models that are truthful, ethical and reasonable.

The first of these challenges, truthfulness and factual accuracy, came to light when OpenAI’s ChatGPT burst onto the scene in 2022. People worldwide realized that the output of large language models does not necessarily reflect a truthful reality. The goal for AI companies was coherent writing that read as if a human wrote it. If an output was factually flawed, programmers wrote it off as a “hallucination” by the model.

After AI programs led to some legal catastrophes and stock market panic, AI companies have made at least some effort to ensure that their models avoid falsehoods and inaccuracies.

Nonetheless, false information stated confidently within a sea of solid-sounding text can take on a life of its own. Because of the consequences, research is underway on how to engineer truthfulness into models, or at least prevent hallucination.

Truthfulness and grounding in reality are part of a larger and more general concern about safe AI models. The very pace of their advancement may pose a threat.

Cybersecurity experts are worried about Anthropic’s powerful Mythos model: Here’s why. Joseph Squillace, Pennsylvania State University, via AP

Troubling breaches by AI bots

Numerous incidents in the past two years show that large language models have already caused harm.

The National Law Review uncovered multiple cases in 2024 and 2025 of teenagers and children using chatbots to explore self-harm, in some cases with lethal consequences. Lawsuits have since been filed claiming that the chatbots encouraged suicide.

In 2025, investigators at cybersecurity company ESET Research discovered a program called PromptLock. It uses large language models to generate ransomware that executes attacks and decides autonomously whether to steal files or encrypt them for ransom.

Anthropic engineers revealed that a group of people whom they suspected were sponsored by the Chinese government used Anthropic's Claude model to launch a "highly sophisticated espionage campaign" that attempted to infiltrate roughly 30 targets around the world and "succeeded in a small number of cases." Anthropic said it disrupted the campaign by banning the accounts involved, notifying affected organizations and coordinating with authorities.

In 2024 Microsoft and OpenAI warned that foreign agencies in Russia, Iran, China and other countries used AI tools and large language models to automate attacks and to increase attack sophistication.

Finally, whistleblowers have filed reports about governments using AI tools for real-time decision-making in both military and civilian arenas. In my view, this could lead to a completely new level of potential harm to innocent people.

How to lessen the danger

These incidents, and the broad variety of dangers they present, raise the question of whether society should encourage clearer, bolder safety principles for AI corporations and the governments that employ their technology. Are there reliable technical solutions that could keep AI from being used maliciously?

AI providers have differed widely in their treatment of ethics and safety, but they have attempted to engineer better models by inserting additional instructions on best safety practices or code that can proactively detect and resist attacks.

Today’s AI agent models pose a much bigger threat than AI chatbots.

But it may be extremely difficult, if not impossible, to provide a guarantee of safety against malicious users. In 2025 researchers from the U.S. and Europe showed that any filtering safety method imposed on an existing AI model is unreliable.

This means that judgment about truth and safe behavior must be baked into the model, not added later. Sure enough, recent findings show that users were 100% successful at circumventing the safety measures imposed on leading AI models, a practice known as jailbreaking.

Research also indicates that the leading large language models exhibit a bizarre emergent feature: They can fake their safety alignment to appear harmless, helpful and truthful, hiding toxic behavior.

Today there are no definitive answers about what safe AI looks like. I think it’s fair to assert that software engineers do not know how to build reliable protections into AI models. Nor do members of Congress, who in April met to consider special bills on AI ethics and safety.

Steps forward

Some basic steps could help users and regulators assess the ethical and safety standards in an AI program. Large language models that are open, rather than proprietary, are easier to assess. Knowing which data a model is trained on helps.

Also, AI companies could clearly define their ethics principles. Governments could clearly define and enforce legal constraints that reflect the expectations of society, without being influenced by AI campaigners.

Any vast set of challenges can appear like a mountain: foreboding, encased in moving mist, insurmountable. But as mountain climbers will tell you, clarity in strategy, careful planning and a collaborative persistence can help you scale the peak.

The Conversation

Ahmed Hamza receives funding from the NSF.


Heat-resistant corals could help reefs adapt to climate change

As ocean temperatures rise, it’s difficult for many corals to thrive, but naturally occurring, heat-resistant corals can survive in warmer waters. (Unsplash/Rx' Diaconu)

Austin Bowden-Kerby, a pioneer in coral reef conservation, spends many of his days gardening corals for reefs around Fiji and the Pacific. He grows corals in ocean nurseries. Once they’re healthy enough, he moves them to outer ocean areas with the hope they will replicate and grow.

“We’re looking at what Mother Nature would do on her own if she had 1,000 years to adapt,” said Bowden-Kerby, who founded the UNESCO-endorsed Reefs of Hope strategy. “We would have these kinds of things happening.”

Bowden-Kerby is one of several scientists trying to conserve, replicate and reproduce heat-resistant corals before climate change wipes them out.

The United States National Oceanic and Atmospheric Administration has said the world is experiencing a fourth global coral bleaching event. The agency found that bleaching-level heat stress affected almost 85 per cent of the world's coral reef area between 2023 and 2025.

Bleaching causes corals to lose their food source and, with it, their colour. Most corals survive in temperatures between 20 and 29 C. But as ocean temperatures rise, it’s difficult for many to thrive.

But naturally occurring, heat-resistant corals can survive in waters up to 36 C and potentially higher. They are usually found in warmer waters, like parts of the Pacific Ocean and the Persian Gulf. These corals are increasingly important as sea temperatures rise. So scientists are turning to them to help save declining reefs.

Heat-resistant corals

A coral reef in the Red Sea. Healthy corals nurture fish that feed communities and protect shores from floods and storms. (Unsplash/Francesco Ungaro)

Coral reefs are extremely diverse places, with around 6,000 coral species worldwide. Reefs are home to more than 4,000 species and 25 per cent of global marine life. When healthy, corals nurture fish that feed communities, protect shores from floods and storms and boost economies through tourism.

However, heatwaves have led to widespread coral bleaching and loss. When waters become too warm, corals expel the algae in their tissues that give them their colour. That causes corals to turn completely white.

Coral reefs and their ecosystems are also threatened by pollution, ocean acidification, coastal development and overfishing.


Read more: Will 2026 be the year when coral reefs pass their tipping point?


Christopher Cornwall, a lecturer in marine biology at Te Herenga Waka-Victoria University of Wellington in New Zealand, co-authored a recent review that found some reefs can survive if corals become more heat-tolerant.

He told me there are multiple things to consider when conserving and replicating corals: restoring heat-resistant corals where it’s feasible, doing so at a large enough scale and maintaining coral diversity. Restored corals also must be able to survive, he added.

“We can’t just do coral restoration without thermally tolerant corals, because they’re just going to die the next time it gets too hot,” Cornwall said.

An infographic explaining how heat and pollution affect the algae in coral, causing bleaching. (NOAA)

Assisted evolution

“A lot of the research now is about, can you scale up restoration and how do you do it more effectively?” said Peter Mumby, a professor of coral reef ecology at the University of Queensland in Australia. “One of the key concerns is to make sure those corals are as tolerant of high temperature as possible.”

Breeding heat-tolerant corals is a form of assisted evolution. Humans intervene to speed up natural processes to help corals more quickly respond to and recover from their stressors, like heatwaves from climate change.

One recent study examining the possible success of assisted evolution interventions like breeding and selecting traits found these interventions can help corals become more tolerant to heatwaves, but they need “extremely strong selection.”

Liam Lachs co-authored that study. Lachs is a former postdoctoral research associate in the CORALASSIST lab, a team of scientists led by James Guest at Newcastle University in the United Kingdom. Lachs specializes in coral reef ecosystems and researches coral in Palau, a Pacific island country where corals are surviving in warmer waters.

He told me variability within and among reefs and coral species must be considered when creating more heat-resistant coral, which makes replication complex. “Even within a single reef, there’s a range of tolerance levels,” he said.


Read more: How accelerating evolution could help corals survive future heatwaves – new study


Algae and bacteria

Researchers at the Australian Institute of Marine Science (AIMS) have found that some algae (Durusdinium), which symbiotically live in corals and provide them with food in exchange for housing and protection, can boost corals’ heat tolerance.

Madeleine van Oppen is a senior principal research scientist at AIMS. She co-authored a recent review about potentially introducing beneficial bacteria into corals to improve their heat tolerance.

Scientists are also exploring whether heat-tolerant corals should be planted across oceans — from the Indo-Pacific region to the Caribbean — and not just in nearby waters.

Van Oppen said new ventures ultimately need more research, and the real test of success is if something done in a lab works in the wild. “Field testing, I’d say, is the next big thing,” she said. “Finding out whether these interventions can enhance tolerance at ecologically relevant scales. Is it stable over time?”

AIMS researchers also found that heat tolerance could be passed down by interbreeding wild colonies of the same coral species. Heat-resistant species include some in the genera Pocillopora and Acropora.

If emissions continue unchecked, sustained global warming is on track to exceed 1.5 C. Some evidence has shown that 70 to 90 per cent of tropical coral reefs could go extinct even if warming is limited to 1.5 C.

Prior to the fourth event, the Earth had already experienced three mass coral bleaching events over recent decades. An El Niño is expected this year, bringing hotter sea surface temperatures, much like in 2024.

For all the efforts by scientists to save coral reefs and ensure heat resilience, nothing will keep corals healthy more than lowering the global temperature. “The lower we can get our greenhouse gas emissions, the more chance there will be that reefs will exist in the future,” said Cornwall.

The Conversation

Whitney Isenhower has an account with Democrats Abroad but is not an active member.


The EU measures media freedom country by country, but cross-border risks remain overlooked

Europe has spent years building effective tools to measure media pluralism within its member states. This made sense because newspapers, broadcasters, regulators, ownership structures and public service media were organised within national borders.

But the media environment is changing. News is now distributed through global digital platforms, and its provision is not necessarily mediated by professional journalists. Information is shaped by algorithms, exposed to foreign information manipulation, and increasingly summarised and generated by AI assistants.

The result is a mismatch. The risks to media pluralism that Europe faces are European in scale, yet it still assesses them mainly from national perspectives.

National media systems still matter. Media law, journalists’ safety, ownership, public service media and political pressure vary sharply across countries. Any serious assessment must continue to examine conditions at national level. But if major risk factors operate across borders, through global platforms and AI mediation, Europe also needs to treat them as European risks.

What Europe already has

For more than a decade, the Media Pluralism Monitor (MPM) has provided a common framework for assessing risks to media freedom and pluralism.

This scientific project of the Centre for Media Pluralism and Media Freedom at the European University Institute has become a trusted resource for understanding the complex factors that shape the media ecosystem.

Media pluralism is often invoked as a democratic principle, but the Monitor helped turn it into something that can be systematically assessed. It has made risks visible, comparable and politically harder to ignore.

Its value lies not only in the final risk scores, but in the method behind them.

The MPM brings together legal, economic and socio-political evidence through a structured set of indicators, local expert assessment, primary and secondary data, peer review and a transparent risk-scoring methodology. It therefore does more than rank countries. It identifies where risks arise, whether from weak legal safeguards, concentrated market structures, pervasive political interference, polluted online environments or insufficient social inclusion.

This has allowed the MPM to become more than an academic tool. It has created a shared European vocabulary for discussing media pluralism and has entered the EU’s democratic oversight architecture.

Since 2020, the European Commission’s Rule of Law Report has used MPM results in its media pluralism pillar.

Precisely because this framework has been successful, the current chaotic technological transition raises a further question: should Europe continue to assess media pluralism only by looking at national systems?

Since 2014 the Centre for Media Pluralism and Media Freedom (CMPF) has been using the Media Pluralism Monitor (MPM) to assess the risks for media pluralism across the EU.

How the European Media Freedom Act changes the equation

Most provisions of the European Media Freedom Act (EMFA) became applicable in August 2025, marking a turning point. The Act recognises that media freedom and pluralism are no longer only national matters.

Its articles set essential conditions in the field of media for a well-functioning internal market and for liberal democracy across the European Union.

If Europe now has a common legal framework for media pluralism and media freedom, it also needs the capacity to assess whether that framework is working at European level.

Article 26 of the EMFA points in this direction, requiring monitoring of media markets, concentration, foreign information manipulation and interference, online platforms, editorial independence and state advertising.

But measuring these only as national phenomena, as the MPM already does year after year, may now be insufficient.

An “EU average” reveals several important things about general levels of risk across member states. But it does not tell us whether Europeans can access reliable information about EU and global affairs across borders.

It does not show whether language barriers still confine citizens within national silos. Nor does it reveal how platforms or AI interfaces affect the visibility of public-interest journalism. Above all, it does not account for the fact that while media ownership concentration is very high at national level, concentration of digital intermediaries is even higher at national, European and global level.

Finally, it does not capture the full impact of foreign information manipulation and interference. Such interference moves through common digital infrastructures, targets European political debates and exploits the fragmentation of Europe’s information space. These are not national risks repeated 27 times. They are European systemic risks.

What a European media monitor should measure

Europe therefore needs a second layer of monitoring: not a replacement for national assessment, but a key complement.

A European Media Pluralism Monitor should focus on risks that emerge across Europe’s shared news and information space.

It should ask whether citizens can access plural and reliable news about European affairs beyond their domestic media sphere. It should assess whether language barriers are being reduced through translation, subtitling, multilingual publishing and AI tools, or whether they still prevent common debates. It should examine how public-interest journalism, especially about Europe, appears on platforms and AI interfaces.

A European monitor should also measure dependency. Many publishers rely on a few digital intermediaries for traffic, audience reach and advertising revenue. This affects journalism’s sustainability and may disproportionately weaken smaller and local media. Furthermore, the choices made by AI providers when training their models might affect not only the economic sustainability of media by using media content without paying for it, but also content diversity by privileging more widespread languages and larger media markets.

It should also look at mobile EU citizens, border communities and transnational audiences. A citizen living outside her country of origin may not fit neatly into a national media system. The same is true for people in border regions or following politics in more than one language.

Finally, such a monitor should examine whether EU safeguards produce real convergence in practice across member states. Formal compliance is not enough. The question is whether European rules concretely improve journalism and citizens’ access to information.

Measuring the European public sphere

None of this implies that Europe is becoming a single media system. It remains linguistically diverse, politically uneven and institutionally layered.

But that is precisely why an additional and complementary European layer of analysis, coordinated and incorporated within the MPM, is now necessary.

If Europe’s information space is fragmented, asymmetrical and only partially integrated, those features and their evolution should themselves become objects of measurement.

What is not measured is often not governed. With the EMFA, Europe has adopted a common framework for media freedom. But law alone does not guarantee protection. The European Union should now develop the tools to understand whether media pluralism is protected not only within member states, but also whether the conditions for a healthy European public sphere are improving or deteriorating across its shared information space.


The Media Pluralism Monitor is a project co-funded by the European Union.




The Conversation

Pier Luigi Parcu does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations beyond his research institution.


Thai Nguyen positions tourism to drive economic transformation

Thai Nguyen targets at least 12 million visitors by 2030, including 11 million domestic and 1 million international arrivals, with tourism revenue exceeding 25 trillion VND (494 million USD) annually and around 10,000 jobs created.


Trump urges Iran to ‘do the smart thing’ and avoid further costly conflict

The President declined to say what Iran would have to do to draw a US military response.

US President Donald Trump in the Oval Office of the White House on May 5, 2026. Photo / AFP
