Catherine Arrow
Jul 10

An empty boat in rough seas – new AI 'strategy' casts us all adrift

A plan is not a strategy. An overview laced with opinion is not a plan. Seemingly years in the making, the newly published New Zealand’s Strategy for Artificial Intelligence: Investing with Confidence has turned out to be a disappointing waste of digital dots.

What we’ve received this week is not a strategy in any meaningful sense. It is a promotional overview framed by political aspiration, padded with a few case studies and awash with speculative economic promises. It avoids the essential work of strategy and fails to provide the leadership needed at a critical moment in our technological and societal evolution.

As someone who has worked for years at the intersection of AI, communication, societal relationships and strategy, I truly hoped for something meaningful - something useful. Something that made sense. What we’ve received falls far short. It lacks vision, direction, measurable outcomes, coherent guidance and ethical substance. What's even worse is that it shows little understanding of the technologies it’s meant to govern. Let’s look at what’s gone wrong.

Vision – or the absence of one

As I’ve said many times before, general use and availability of artificial intelligence brings with it the greatest societal shift we have seen since the advent of the web. It affects every aspect of our lives. A national AI strategy should articulate a long-term vision for the country’s role in a changing technological world. It should ask: what kind of future do we want to share with AI? Who will benefit? How will it improve - or harm - the lives of people across Aotearoa New Zealand and the world? Can we bridge the rapidly widening digital chasm or is it already too late?

The document avoids those questions. Its central message is simply “invest with confidence.” It positions AI as an economic lever rather than a force continually shaping and reshaping society, knowledge, relationships and power structures. Even where public sector use is mentioned, the framing remains productivity-based, not human-centred.

There is no grounding in Te Tiriti o Waitangi, no cross-sector articulation of future societal benefit and no real acknowledgment of the civic, democratic or cultural consequences of accelerating AI deployment. So, no vision, just omission.

Direction – vague and unstructured

A credible strategy provides a sense of direction. An effective strategy takes us from one state to another. It defines a pathway, prioritises actions, sets a course. Here, we get an enthusiasm for uptake but no real sequencing, no sense of priority sectors beyond a few nods to agriculture, tourism and health, and no clear roadmap for implementation. The document gestures vaguely to companion guidance and supporting documents but does not synthesise them into a directional approach. Where is the transitional guidance for workforce reskilling? The governance frameworks for AI deployment in education? The safeguards for its use in justice, housing or social services? These aren’t present — not even as ambitions.

The key supporting document referenced — Responsible AI Guidance for Businesses — does little to strengthen the strategy. While presented as a companion, it aligns more in tone than in substance. It gestures toward values like fairness, human oversight and transparency - and includes a passing mention of agentic AI - but offers no roadmap, no mechanisms for enforcement and no new protections. Most notably, it frames AI risk as something for businesses to navigate in purely reputational terms, omitting any serious treatment of civic responsibility or public good. The hard questions of harm, inequity and democratic integrity are left unasked. In doing so, the document reinforces the strategy’s central failure — an apparent government retreat from responsibility, leaving decisions of public consequence to private actors.

Outcomes – undefined and unmeasurable

For a strategy to succeed, it needs targets, milestones, measures. This document offers none. Instead, it repeats a company's projection that generative AI could add $76 billion to GDP by 2038, a speculative figure drawn from corporate modelling rather than grounded national analysis. There are no benchmarks, no policy impact assessments, no timeframe for delivery — just general sentiment and economic extrapolation.

Could we not have had something like Australia’s Voluntary AI Safety Standard or Safe and Responsible AI framework, each of which outlines specific areas for regulation, workforce readiness and public engagement? Their work is underpinned by a recognition of risk, rights, and the need for transparency and founded on the ethical principles agreed in the last decade. Likewise, the EU’s AI Act created a structured classification of AI risk tiers, linking legal obligations to real-world impact. These are strategies. Ours is a digital fluffy bunny.

Guidance – thin and fragmented

Instead of providing robust, consolidated guidance, the strategy defers to companion documents, future updates and existing legislation. Businesses are told that New Zealand’s legal system is already “principles-based and technology-neutral,” which, while technically accurate, sidesteps the real issue: AI introduces new dynamics of harm, autonomy and scale that existing laws were never designed to handle. We are in a giant game of leapfrog and the technology - mostly held by private companies - can jump higher and faster than society's ability to legislate and regulate. The apparent 'she'll be right' attitude of the government does not display the strength and leadership we need right now.

There is no guidance on agentic AI or embedded systems. No mention of synthetic content governance, large model auditing, digital identity protections or AI explainability in critical domains like health, justice or education. No mention at all of human-AI teaming, digital dependency or autonomous decision-making — all glaring omissions in 2025 for a strategy that should be looking beyond the next decade.

This lack of foundational understanding is perhaps the most shocking element. For a government document produced in consultation with multiple departments (and with AI assistance), there is no apparent grasp of the diversity of AI types or the specific implications of generative, predictive or agentic AI systems.

Ethics – mentioned, not embedded

Ethics are not an afterthought in AI strategy. They are the foundation. In this document, they are treated as incidental reputational risks to be managed. There are fleeting references to fairness, transparency and human rights but no operational ethical framework and no institutional commitment to safeguard those values.

The Māori data sovereignty section is commendable but disconnected. There is no meaningful attempt to integrate te ao Māori into the strategy’s core assumptions, governance models or accountability mechanisms.

Meanwhile, international peers are setting standards. Australia’s guidance includes proposed prohibitions on high-risk use cases such as biometric surveillance, predictive policing and emotion recognition (Australian Government, 2024). The UK’s Office for AI (2023) supports sector-specific regulatory sandboxes. And Canada has offered practical tools for accountability, such as its mandatory Algorithmic Impact Assessment system.

Our new government strategy does none of this. It talks about enabling innovation but forgets that trust is what makes innovation possible - and trust comes from protection, accountability and inclusion.

A missed opportunity

This document was a chance to lead. Instead, it follows a dated script that frames AI as economic uplift with a few risks tacked on. It confuses opinion for planning and aspiration for action. It misunderstands the technology it seeks to champion and underestimates the ethical and societal depth required for a genuinely responsible strategy. New Zealand has done some excellent work in this space over the years - check the research and guidance published between 2017 and 2021, stalled, as with so many things, by Covid.

We are home to world-leading companies deploying AI in all manner of constructive ways, but on the flip side there will also be those, here and overseas, who deploy AI in decision-making with no thought or care for the consequences, chasing only the illusion of productivity and profit and sacrificing societal cohesion in the process. Where once New Zealand could be seen waving encouragingly as AI came over the horizon, this latest offering is adrift, doused and drowned by its own ineptitude.

A plan is not a strategy. And - to all our detriment - this isn’t even a plan.

Useful links and references for you

https://www.mbie.govt.nz/about/news/artificial-intelligence-strategy-and-business-guidance-now-available - you can download copies of both documents at this link

https://www.industry.gov.au/publications/voluntary-ai-safety-standard

https://www.finance.gov.au/government/public-data/data-and-digital-ministers-meeting/national-framework-assurance-artificial-intelligence-government

https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-principles/australias-ai-ethics-principles

https://www.apsc.gov.au/initiatives-and-programs/workforce-information/research-analysis-and-publications/state-service/state-service-report-2023-24/fit-future/supporting-safe-and-responsible-use-artificial-intelligence

https://artificialintelligenceact.eu

https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html