Catherine Arrow
Apr 17
Beyond the prompt - where AI is heading
In this month's Hands on AI session I wanted to step back from the tool-of-the-week noise and ask a sharper question: what shape will our role take in the next twelve to eighteen months and what capabilities do we need to grow into it?
A decade ago I could comfortably ask practitioners what their work might look like in five or ten years' time. That window closed firmly when the winds of change blew through. Waiting for the storm to settle is not an option because the maelstrom is here to stay.
What has changed?
The change worth attending to is the move from passive AI — where you prompt and it responds — to agentic AI, where you set a task and the system gets on with it. Agents have been generally available since January last year. In the last six months they have matured enough that I can run them alongside day-to-day work and orchestration across multiple agents is emerging as a real job rather than a theoretical one.
A few specific developments sit behind this. AI browsers arrived with what I think of as the Envoy — an agent embedded in the browsing layer itself, able to interrogate pages as you work and function as its own operating system. MCP protocols, the ‘swipe cards’ that let different AI systems talk to each other, are what make this practical. An agent with MCP access can reach into your email, pull from a feed or send you a text when a job is done without you touching each step.
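The tool-protocol idea behind MCP can be sketched in a few lines: a server advertises named capabilities, and an agent discovers and invokes them by name rather than by writing raw code. This is a toy illustration of that pattern, not the actual MCP specification (which runs JSON-RPC over stdio or HTTP); all the names here are made up for the example.

```python
# Toy sketch of a tool protocol in the spirit of MCP: a server
# registers named tools, an agent lists and calls them by name.
# Not the real MCP wire format - a simplified illustration only.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ToolServer:
    tools: dict[str, Callable[..., str]] = field(default_factory=dict)

    def register(self, name: str, fn: Callable[..., str]) -> None:
        self.tools[name] = fn

    def list_tools(self) -> list[str]:
        # Discovery step: the agent learns what it is allowed to do.
        return sorted(self.tools)

    def call(self, name: str, **kwargs) -> str:
        # Invocation step: capabilities are reached by name,
        # never by the agent running arbitrary code itself.
        return self.tools[name](**kwargs)

server = ToolServer()
server.register("send_text", lambda to, body: f"SMS to {to}: {body}")
server.register("read_feed", lambda url: f"3 new items from {url}")

print(server.list_tools())
print(server.call("send_text", to="editor", body="Job done"))
```

This is why an agent with MCP access can "reach into your email, pull from a feed or send you a text": each of those is just another registered tool it can discover and call.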
Agentic environments for running these systems have consolidated. OpenClaw emerged in late January, Nvidia has now committed to NemoClaw as its agentic operating environment and Kimi 2.5 is operating in similar fashion. The early agentic experiments were chaotic — a social network built exclusively for AI agents produced a fake religion, a dating site and doxxed several of its humans within seventy-two hours — but controls are tightening and deployment is accelerating regardless. I am setting up a detailed course on agents for the end of May/early June because it is an area likely to further shape our work this year.
Content generation has moved on too. Google's Veo produced a ninety-second video for me on AI and human interaction in less time than it takes to write the prompt. Notebook LM's audio and video overviews will condense fifty sources into a six-minute briefing. Gemini's AI Studio lets you vibe-code a working app in the browser — I built a social-capital reputation monitor there from a two-page prompt and it ran analyses on Qantas and Woolworths while suggesting additional features as it went.
Warning lights are flashing too. A paper much discussed this week suggests prolonged interaction with flattery-trained models may produce a measurable cognitive decline in users. Wikipedia has banned AI-generated content though how it polices that ban is anyone's guess. And the sale of Generative Engine Optimisation is, in my assessment, snake oil — there are no guarantees of inclusion in any LLM and the digital chatter these systems scrape will reproduce a defamatory rumour as happily as a fact.
What we need to do
My AI Compass has four points — Insights, Imagination, Implementation and Integrity — and the most under-used is imagination. The pattern I keep seeing is organisations retrofitting AI onto existing processes. You would not put an engine on a horse to make it go faster. The value is not in automating the old model of practice but in rethinking what the practice is for.
Three capabilities to build today
The first is precision. Agents reward clear instruction and punish vague briefs. The prompt is no longer a sentence — it is a structured document setting role, context, constraints and desired output.
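One way to make that structure concrete is to treat the brief as a template with explicit fields. The field names below are illustrative rather than any standard; the point is that role, context, constraints and desired output are stated rather than implied.

```python
# A sketch of "the prompt as a structured document".
# Field names are illustrative, not a standard.
def build_brief(role: str, context: str, constraints: list[str],
                output: str) -> str:
    lines = [
        f"ROLE: {role}",
        f"CONTEXT: {context}",
        "CONSTRAINTS:",
        *[f"- {c}" for c in constraints],
        f"OUTPUT: {output}",
    ]
    return "\n".join(lines)

brief = build_brief(
    role="Senior communications analyst",
    context="Quarterly stakeholder sentiment review for a retailer",
    constraints=["Use only the attached sources", "Flag any gaps"],
    output="A one-page summary with three recommendations",
)
print(brief)
```

A vague one-liner leaves the agent to guess each of these fields; a structured brief removes the guesswork, which is exactly what agents reward.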
The second is orchestration. When several agents are running on behalf of a human team someone has to be the conductor and that conductor is us. The job is making sure nothing flies off the handle — think of the kettle-drum player who hits the drum so hard the stick takes out the violinist.
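The conductor's job can be sketched as a supervision loop: run each agent, check its output against a guardrail, and quarantine anything that fails rather than passing it downstream. The agents here are stand-in functions under assumed names; in a real deployment they would be calls into an agent framework.

```python
# Minimal sketch of the "conductor" role: supervise several
# agents and stop bad output before it reaches anyone else.
# Agents are stand-in lambdas; names and outputs are invented.
from typing import Callable

def conduct(agents: dict[str, Callable[[], str]],
            guardrail: Callable[[str], bool]) -> dict[str, str]:
    results = {}
    for name, run in agents.items():
        output = run()
        if guardrail(output):
            results[name] = output
        else:
            # Nothing flies off the handle: a failing result is
            # held for human review, not passed along the chain.
            results[name] = "ESCALATED FOR REVIEW"
    return results

agents = {
    "scanner": lambda: "Sentiment stable across major outlets",
    "drafter": lambda: "PUBLISH IMMEDIATELY WITHOUT CHECKS",
}
ok = lambda text: "WITHOUT CHECKS" not in text
print(conduct(agents, ok))
```

The guardrail is the kettle-drum check: it is not there for the agents that behave, it is there for the one swing that would take out the violinist.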
The third is judgement about where AI belongs. Real-time ecosystem scanning — sentiment, narrative formation, stakeholder mapping, risk modelling — is where these tools earn their keep. General content generation, the default use case, is the least interesting thing they do. My reputation gem in Gemini, trained on my own relationship-measurement work, produced a cross-sector interdependency map for New Zealand that flagged fertiliser supply as the first critical exposure point following the attack on Iran by the USA and Israel, a full month before it surfaced as a public issue.
The Integrity Point
None of this is a technical conversation. It is an ethical one. AI widens the digital chasm or narrows it depending on how organisations deploy it, and our role in providing trusted counsel extends to both sides of that. The realities our stakeholders inhabit are now shaped by algorithmic curation, which means two audiences looking at the same event are seeing different versions of it.
That is uncomfortable ground. It is also where the capability frameworks — the Global Alliance Global Capability Framework in particular — remain the useful compass. Technologies will come and go. The work of building the relationships our organisations need to keep their licence to operate will not.
I’ve included an interactive guide to the latest model releases in the course player, which you can find in the learning centre, and the session recording remains available for a short time if you want to dig deeper into the latest AI dynamics.