When we talk about artificial intelligence, it’s tempting to think of it as something almost otherworldly: a genie suddenly out of a bottle we stumbled upon by the wayside. But strip it back and what you find is algorithms. Not a single algorithm, but thousands upon thousands of them, stitched together and fed with data. And it’s here that our first ethical question arises, because those algorithms are written by people, and people bring bias, prejudice, fallibility and discrimination to the table.
You’ll frequently hear me say that whoever controls the algorithm controls the game. Algorithms decide what is seen and what is hidden, what is amplified and what is erased. They are not neutral. Nor is the data they feed on. Data is a reflection of the society it comes from and, sadly, our societies are neither equal nor fair. If the building blocks are compromised, then so too is whatever we build with them.
This is where algorithmic ethics comes in. Are the rules transparent? Are they fair? Can they be challenged? At present, the answer is often no.
Beyond the building blocks
Once those algorithms are bundled together into systems, we arrive at AI ethics. The question shifts from the rules themselves to the consequences of those rules at scale. What happens when algorithmic bias shapes decisions about hiring, credit, health or justice? What happens when millions of people rely on outputs that are based on skewed, synthetic or incomplete data?
For a brief moment, some of the large companies tried to answer these questions. They formed advisory councils and ethics boards. But when advice clashed with commercial goals, those groups were dismantled. Development continued at speed, driven by profit rather than progress.
That’s where we are today. AI is embedded in our lives, shaping decisions large and small, but the ethics around it are left largely to chance. Different regions are developing their own models, a kind of “continental drift” as I call it, yet none is inherently more ethical than another. They are all built on human motivation, and at present the strongest motivation is profit, closely followed by power.
Our own responsibilities
So where does that leave practitioners? It leaves us in a place of heightened responsibility. Our work has always been about sustaining the relationships organisations need for their licence to operate. That doesn’t change with AI — in fact, it becomes more urgent.
Even if you never use an AI tool directly, your world — and your stakeholders’ world — is already shaped by AI-driven decisions. If your organisation adopts these systems without regard for ethics, then sooner or later those decisions will harm people. And when that happens, trust is lost, reputations fracture, and licence to operate is jeopardised.
This responsibility sits alongside the wider pressures of our time. Organisations are already struggling with geopolitical tensions, disinformation and polarisation. Too often we see corporate cowardice: leaders bowing to external forces instead of standing on principle. AI doesn’t ease these pressures; it multiplies them.
That’s why practitioner ethics matters more than ever. It is not about producing content faster. It is about guiding behaviour under strain.
Literacy and realities
To do that, we need AI literacy. And by literacy, I don’t just mean the ability to push the right buttons in ChatGPT or Gemini. Yes, we should understand what the tools do and where they might help. But true literacy goes deeper. It’s about understanding where the data comes from, how it’s cleaned and what flaws it carries. It’s about asking whether synthetic data — data generated by machines to train machines — is valid or simply compounding distortions.
It’s also about recognising the realities that AI creates. There are synthetic realities, spun out by machines. But there are also competing realities, driven by political or profit agendas. For those living inside them, these realities are absolutely real, even if they look false from the outside.
For us, that means two things. First, we must ensure we don’t construct false realities on behalf of our organisations. Second, we must be able to spot and address the emergence of competing realities in society. Literacy is about more than technology — it is about critical thought, identity and human behaviour.
Because the stark reality is that digital dependence is already growing. People outsource their thinking to the machine, lose the habit of reflection and risk losing part of their own identity. If we don’t recognise that, we cannot hope to guide organisations responsibly.
The road ahead
So what does the future hold? At its heart, the real challenge is AI–human relations. How do we preserve trust, identity, and dignity when decisions are increasingly mediated by machines? How do we make sure AI agency serves human agency instead of eroding it?
This is where public relations professionals are uniquely placed. We are mediators of relationships. In the future, that means mediating first between AI and humans — helping organisations deploy systems responsibly — and then back between humans, making sure those machine-driven decisions do not corrode trust or cohesion.
Training and identity
To do this, we need training and professional development designed for our field. It is not enough to be taught how to operate the shiny tools created by others. We need education that is rooted in public relations, that understands our practice and that helps us deliver on the Global Capability Framework’s call to provide wise counsel and sound advice.
And here identity is crucial. If we define ourselves as content producers, then we will be outpaced by the machines that can generate content at scale. If we embrace our true identity as strategic advisors — grounded in ethics, relationships, and human understanding — then we can lead.
AI is not an abstract force; it is a mirror of us. The choice now is whether we allow it to magnify our worst behaviours or guide it to reflect our best. For practitioners, the responsibility is clear: stay literate, stay ethical, stay strategic, and never lose sight of what it means for organisations, and individuals, to behave well.