
Will weakening Treaty provisions in NZ law create more problems than it solves?

On the face of it, the government’s desire to make references to te Tiriti o Waitangi consistent across all legislation sounds reasonable.

As Justice Minister Paul Goldsmith argued, current laws variously require decision-makers to “give effect to”, “recognise and provide for”, “honour” or “have particular regard to” the Treaty and its principles.

The cabinet quietly agreed to advance the policy in February, after a ministerial advisory group suggested it might be helpful to promote consistent wording for each standard of obligation to the Treaty in legislation.

But the group did not recommend reducing those clauses to a single (low) standard of obligation, merely to “take into account” the Treaty principles.

Concerns had already been raised about this review of the law, including by the Waitangi Tribunal and the United Nations Committee on the Elimination of Racial Discrimination.

With legislation confirming the changes due to be introduced before this year’s general election, one of the National-led coalition’s most controversial policies may again ignite the campaign trail.

Predetermined policy?

The origins of the issue lie in the coalition agreement between National and New Zealand First which sought to “reverse measures taken in recent years which have eroded the principle of equal citizenship”. Specifically, it committed the government to:

Conduct a comprehensive review of all legislation (except when it is related to, or substantive to, existing full and final Treaty settlements) that includes “The Principles of the Treaty of Waitangi” and replace all such references with specific words relating to the relevance and application of the Treaty, or repeal the references.

The normal process to achieve such a policy outcome would begin with defining the problem that exists. Officials can then develop a range of policy options to address that problem.

The relative merits and risks of different approaches can be assessed to inform a ministerial decision. During the Waitangi Tribunal hearing, however, officials acknowledged the normal policy development process has not happened.

As the Waitangi Tribunal noted, the outcome of replacing or removing legislative references to “the Principles of the Treaty of Waitangi” was predetermined by the coalition agreement. The existing problem wasn’t defined, nor was there any consideration of how best to achieve the policy objectives.

As described in Cabinet papers, the policy objective is:

to ensure that where it is appropriate to encapsulate the Treaty or the Treaty relationship in legislation, the provisions are clear as to how the Treaty applies in the context of each legislative regime, to reduce uncertainty and support better compliance.


Read more: What is happening with the government’s contentious review of the Waitangi Tribunal?


Clarifying statutory obligations seems like a sound objective. But as the Waitangi Tribunal also pointed out, this does not appear to reflect the stated purpose in the coalition agreement to “reverse measures taken in recent years which have eroded the principle of equal citizenship”.

Nor does it explain why it has been determined that all Treaty principles clauses should be replaced or removed before any analysis of how clear or unclear those provisions are.

In fact, many provisions describe quite specifically how they will give effect to Treaty rights and obligations.

For example, section 3A of the Climate Change Response Act 2002 sets out a detailed list of actions which must be done “to recognise and respect the Crown’s responsibility to give effect to the principles of the Treaty of Waitangi”.

These actions include seeking nominations from iwi for appointment to the Climate Change Commission, ensuring Māori are consulted on emissions reduction plans, and taking into account the effects of climate change on Māori in the preparation of national adaptation plans.

It is difficult to see how replacing or removing a provision like this would reduce uncertainty.

‘Significant risk’

There are also Treaty principles clauses which have much broader wording. For example, section 9 of the State-owned Enterprises Act 1986 states: “Nothing in this Act shall permit the Crown to act in a manner that is inconsistent with the principles of the Treaty of Waitangi.”

These types of clauses are referred to as “operative provisions” as opposed to the more detailed “descriptive provisions” such as those in the Climate Change Response Act.

Operative provisions allow greater discretion for the courts to determine the precise obligations they create in specific circumstances.

It could be argued such clauses might benefit from greater clarity or elaboration. But there may well be situations where greater flexibility and discretion is appropriate – and exactly what parliament intended.

Either way, the Waitangi Tribunal noted that the case law and official guidance built up over several decades make the requirements of Treaty principles “easily discoverable”.

In their Regulatory Impact Statement, Paul Goldsmith’s own officials advised the proposed measure “has no apparent benefits and carries significant risk to the Māori-Crown relationship”. Regional hui with Māori were also reportedly removed from the Treaty clause review plans.

Māori have again raised concerns about the policy at the UN, and there is now an application for an urgent hearing before the Waitangi Tribunal. Further legal challenges are likely.

Little wonder, perhaps, that some are now suggesting the policy could generate opposition on the scale of the failed Treaty Principles Bill, which inspired one of the country’s largest ever protests.

The Conversation

Carwyn Jones does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


What we lose when artificial intelligence does our shopping

Amazon's AI shopping assistant, Rufus, on a computer monitor on Dec. 1, 2024, in New York. Company apps, including Rufus, may make it easier to shop, but consumers might balk at giving up too much of the shopping experience. AP Photo/Peter Morgan

Americans spend a remarkable amount of time shopping – more than on education, volunteering or even talking on the phone. But the way they shop is shifting dramatically, as major platforms and retailers are racing to automate commercial decision-making.

Artificial intelligence agents can already search for products, recommend options and even complete purchases on a consumer’s behalf. Yet many shoppers remain uneasy about handing over control. Although many consumers report using some AI assistance, most currently say they wouldn’t want an AI agent to autonomously complete a shopping transaction, according to a recent survey from the consultancy firm Bain & Company.

As scholars studying the intersection of law and technology, we have watched AI-assisted commerce expand rapidly. Our research finds that without updated legal measures, this shift toward automated commerce could quietly erode the economic, psychological and social benefits that people receive from shopping on their own terms.

Caveat emptor

Part of shoppers’ hesitation is about privacy. Many are unwilling to share sensitive personal or financial information with AI platforms. But more profoundly, people want to feel in control of their shopping choices. When users can’t understand the reasoning behind AI-driven product recommendations, their trust and satisfaction decline.

Shoppers are also reluctant to give away their autonomy. In one study involving people booking travel plans, participants deliberately chose trip options that were misaligned with their stated preferences once they were told their choices could be predicted – a way of reasserting independence.

Other experiments confirm that the more customers perceive their shopping choices being taken away from them, the more reluctant they are to accept AI purchasing assistance.

Although the technology is expected to get better, there have been some well-publicized missteps reported in financial and tech media. The Wall Street Journal wrote about an AI-powered vending machine that lost money and stocked itself with a live fish. The tech publication Wired cataloged design flaws, like an AI agent taking a full 45 seconds to add eggs to a customer’s shopping cart.

The business case for AI shopping

Consumers have good reason to be cautious. AI agents aren’t just designed to assist; they’re designed to influence. Research shows that these systems can shape preferences, steer choices, increase spending and even reduce the likelihood that consumers return products.

And companies are hyping these capabilities. The business platform Salesforce promotes AI agents that can “effortlessly upsell,” while payments giant Mastercard reports that its AI assistant, Shopping Muse, generates 15% to 20% higher conversion rates than traditional search – that is, pushing shoppers from browsing to completing a purchase.

To retailers, AI tools are one way to convert searches into actual purchases. Rupixen on Unsplash, CC BY

For companies, the appeal is obvious. From Amazon’s Rufus app and Walmart’s customer support to AI-enabled grocery carts, companies are rapidly integrating these tools into the shopping experience.

Assistants with names like Sparky and Ralph are being promoted as the future of retail, while technologists are calling on companies to prepare their brands for the era of agentic AI shopping.

The real concern is not that these systems might fail, but that they may succeed all too well.

The human side to shopping

AI shopping agents do offer considerable benefits.

For example, they can scan numerous products in seconds, compare prices across sellers, track discounts over time, sift through thousands of product reviews, and tailor recommendations to the user’s preferences and needs. They can even read through terms of service and privacy policies, helping consumers detect unfavorable fine print.

But there’s more at stake than these considerations.

While consumers have reason to focus on privacy and control, AI shopping agents carry some overlooked emotional risks, such as squashing the joy of anticipation. Psychologists have shown that the period between choosing a purchase and receiving it generates substantial happiness – sometimes more than the product or experience itself. We daydream about the vacation we booked, the outfit we ordered, the meal we planned. Automated buying threatens to drain this anticipatory pleasure.

Consumers still value the social connection that shopping in real life fosters. Vitaly Gariev on Unsplash, CC BY

This anticipation connects to another value: a sense of personal and ethical authorship. Even mundane shopping decisions allow people to exercise choice and express judgment. Many consumers deliberately buy fair-trade coffee, cruelty-free cosmetics or environmentally responsible products. The brands and products we choose, from Patagonia and Harley-Davidson to a Taylor Swift tour shirt, help shape who we are.

Shopping, moreover, has a communal dimension. We browse stores with friends, chat with salespeople and shop for the people we love. These everyday interactions contribute considerably to our well-being.

The same is true of gift-giving. Choosing a gift involves anticipating another person’s preferences, investing effort in the search and recognizing that the gesture matters as much as the object itself. When this process is outsourced to an autonomous system, the gift risks becoming a delivery rather than a meaningful gesture of attention and care.

Keeping human agency alive

AI shopping agents are likely to become part of everyday life, and the regulatory conversation is beginning to catch up, albeit unevenly.

Transparency has emerged as a central concern. Past experience with recommendation engines shows that undisclosed conflicts of interest are a real risk. The European Union has proposed a disclosure framework around automated decision-making, although its implementation was recently delayed. In Congress, U.S. lawmakers are considering bills to require companies to reveal how their AI models were trained.

So far, consumers seem to want to choose their own level of engagement – a signal that shopping, for many people, is more than just the efficient satisfaction of preferences. Perhaps the least-settled, yet most crucial question is whether AI shopping tools will be designed and regulated to serve users’ interests and human flourishing – or optimized, as so many digital tools before them, primarily for corporate profit.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
