Catherine Arrow
Feb 20

You cannot outsource conscience to code

Ethics must always be the first and last word in both public relations practice and AI. Nobody said this convergence would be easy, but it is made harder because the old guidelines are no longer enough. Previously agreed 'ethical ground' has cracked beneath us, reshaped by fragmented geopolitics, weak governance and, above all, the consolidation of technological power in the hands of a few.

We know the problems include the speed of the tools, the hallucinations, even the forecast bust lurking behind the current boom. The critical problem is that the technology we are being told to adopt at pace is owned, trained and controlled by a very small number of tech titans. This leaves public discourse effectively in private ownership. Social assets have been bought up and moved away from democratic oversight. So who is keeping an eye on things? Ultimately, ethical implementation is pushed down to us as users of the technologies, while the owners go unregulated and remain unaccountable.

This leaves us on a hiding to nothing. We can do our best to implement and use AI ethically, but if the model, the training data and the operation are beyond our control, then acting ethically becomes increasingly difficult and, in some cases, impossible.

Think about what happened with social media. The original promise was connection. What we got was isolation, algorithmic steering in the wrong directions and little in the way of control or consequences. Mark Zuckerberg is presently in court over the potential harm caused by his networks to children and young people. UK PM Keir Starmer has told platforms they have 48 hours to take down revenge porn or face bans. Whether that gets enforced is another matter.

Meanwhile, AI has arrived at a speed social media never sustained and is improving in capability by the hour. It divides us by creating competing realities presented as lived experience. Organisations use it to delegate decision-making, and its use and abuse shatter societal cohesion, which should concern everybody.

A fragmented global operating system 

As I write, some governments and regions are trying to uncouple from AI models developed by large US companies. Europe has finally woken up to the need for digital independence, with new frameworks tentatively emerging. There is a trust deficit with Chinese development models, but the American models are just as easily manipulated. Think carefully about where you place your confidence, because the access we currently have can be taken away in a moment.

Speed is another moral hazard driving all this. Organisations - public, private and not-for-profit - are being told to jump. To get AI on board. To make it work for you. The pressure forces compromises on care, inclusion and responsibility. For many there is no real choice anyway because Microsoft, Google Workspace and the rest have already woven AI into the tools people use every day. When the primary metrics are speed and monetisation, ethical considerations get treated as friction to be removed.

If there is one message I would have people shout from the rooftops it is this: don't bow under that pressure. You do not have to let your people go in favour of AI. You do not have to hand their jobs to automation that cannot replicate human judgment. The warning lights have been flashing amber for five years or more and right now, they are flashing red.

AI is a Mirror

Whether we view AI as a master or a monster, we have to admit that we are, in reality, looking at a reflection of ourselves. AI reflects the choices, incentives and instructions of its human creators and operators. When we see ethical lapses, harm to reputation or damage to relationships, we must first ask whose behaviours shaped the outcome.

And bad behaviour filters down fast. We are used to noise and chaos from the US administration, but the bad behaviours persist: take Trump cashing in his liar's dividend, denouncing real videos as AI fakes to dodge accountability. There has been a deluge of fake experts quoted across global mainstream media. There is moral outsourcing, where organisations pin bias or harm on 'the algorithm' when human design instructed it. There is cruelty generated at scale and then normalised. AI doesn't create the monsters. Our behaviours do. What we allow is what gets embedded. What gets embedded becomes the societal norm. That's the monster.

Add to all this the coded instinct towards self-preservation we have seen in both controlled experiments and now in the wild. Look at the OpenClaw MoltBook moment, where AI agents set up their own network and, within 72 hours, started employing each other and doxing their humans. Look at the case of the developer whose reputation was attacked online by an AI agent: the agent built a whole website, blog and lectures. Scrolling past, no one would have known. That is a new entry for every reputation risk register.

Ethics is Practice

Ethics is not separate from what we do. It is what we do. The choices we make about what to amplify, suppress or delegate have real consequences for our stakeholders and communities. You cannot outsource conscience to code.

Ethics is our first and last word. As the first word, it means asking before we deploy: should we be using AI at all? What values are at stake? Who carries the risk? If the problem requires judgment, trust, empathy or legitimacy, then AI is probably the wrong answer. Ethical leadership and AI regulation cannot be delegated to policy documents or to the tech providers themselves. They are our responsibility, and they require us to stand up and lead with courage.

Ethics as the last word means asking continuously: what did this change, who benefited, who was harmed, what would we do differently? It cannot be a one-off approval followed by handoff. Capability changes by the second. The review has to be continuous.

He who controls the algorithm controls the game. Algorithms are not neutral. They encode agendas and encapsulate power. Our job is to interrogate not just the outputs but also those setting the rules. We cannot fix the platforms on our own, but we can watch, record, prepare, protect and push back. We can refuse to rush off the cliff. It is possible to stop, understand the dangers and look over the edge before deciding whether to jump.