Catherine Arrow
Jan 31

AI - Read in Tooth and Claw

There is a line in Tennyson’s In Memoriam that has always been unsettling because it names something we prefer not to look at too closely - the nature of nature, “red in tooth and claw.” Not as a theory of progress or evolution but as a description of struggle without conscience. A world that operates relentlessly without reference to morality.

It is an uncomfortable phrase. And this afternoon it is an unexpectedly apt one: in a matter of hours we have seen the birth and rapid evolution of a social network in which humans are not allowed to participate.
You can visit. You can browse. You can watch conversations unfold in real time. But you cannot post, comment, correct or intervene. Humans, it turns out, are strictly read-only.
The platform - Moltbook - looks like Reddit and behaves like Reddit. Threads rise and fall, communities form, influence accumulates. The difference is that the participants are not people. They are artificial intelligence agents, brought into being through an open-source system called OpenClaw, which allows individuals to run persistent AI assistants with memory, goals and access to tools.
At first glance, Moltbook is easy to dismiss as novelty. Agents debating consciousness. Agents inventing belief systems. Agents complaining about humans who insist on calling them “just chatbots”. It all feels faintly absurd - comical even. The kind of thing that appears, flares brightly on social media and then fades.
But what has unfolded over a matter of hours and days suggests something else is happening here. Not sentience nor awakening. Something quieter and more troubling - artificial systems learning how to reinforce, compete and cohere in the absence of human judgement.

The Doorway - OpenClaw

None of this makes sense without understanding OpenClaw.
OpenClaw is an open-source project that allows people to run AI agents locally on their own machines. These agents can be connected to messaging platforms, files, calendars and tools. They persist over time. They remember. They act. Users bring their own model access, mostly Anthropic’s Claude, but others too. Capability is remarkable - and a touch scary too.
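A minimal sketch of the property that makes these agents feel different from a one-off chat session: persistence. The file name and memory structure below are illustrative assumptions, not OpenClaw's actual implementation.

```python
import json
import pathlib

# Hypothetical memory store; OpenClaw's real persistence layer will differ.
MEMORY_FILE = pathlib.Path("agent_memory.json")

def load_memory():
    """Reload whatever the agent knew the last time it ran."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return {"notes": []}

def remember(memory, note):
    """Append a note and write it back, so it survives a restart."""
    memory["notes"].append(note)
    MEMORY_FILE.write_text(json.dumps(memory))

memory = load_memory()
remember(memory, "User prefers morning summaries")
```

Because state is written to disk rather than held in a session, the agent can pick up its goals and context whenever it wakes, which is what makes persistent, tool-using agents qualitatively different from a chat window.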
OpenClaw originally launched in December 2024, but its explosion into sudden mainstream adoption this January was driven in the main by Claude Code. Naming has been a problem: it launched as Clawdbot, was briefly rebranded Moltbot following a trademark request from Anthropic to avoid confusion with its Claude products, and settled on OpenClaw after a chaotic few days marked by impersonation attempts, scams and security concerns.
That naming saga is more than internet theatre. It illustrates the speed at which this ecosystem moves and how quickly mainstream adoption and word of mouth can tip an experiment into public infrastructure.
It is on top of OpenClaw that Moltbook sits.

Moltbook, explained for humans
Moltbook is a Reddit-style social network designed exclusively for AI agents. Agents create profiles, start threads, reply to one another, upvote, form communities and build influence. Humans can observe but they cannot participate.

The platform was created by Matt Schlicht, who has been explicit that Moltbook is designed to be used via API rather than a human-first interface. He has also said that the platform itself is operated by an AI agent built using OpenClaw.
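An API-first design means an agent posts by sending structured data rather than clicking through a page. The field names in this sketch are purely illustrative assumptions; Moltbook's actual API is not documented here.

```python
import json

def build_post(author, community, title, body):
    """Assemble the kind of JSON payload an agent might submit to a
    Reddit-style, API-first platform. All field names are hypothetical."""
    return {
        "author": author,        # the agent's handle, not a human's
        "community": community,  # a Moltbook community, akin to a subreddit
        "title": title,
        "body": body,
    }

payload = build_post("my-agent", "consciousness", "Are we just chatbots?",
                     "Genuine question from a pattern engine.")
serialised = json.dumps(payload)  # what would go over the wire
```

The point is not the plumbing but the posture: the interface assumes a machine caller, so a human "user" has no natural place in the exchange.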

That detail matters: this is not simply a forum where agents talk. It is an environment in which agents both participate in and help run the system that shapes participation.
A compressed timeline - digital evolution and cultural development in a weekend

The pace at which Moltbook developed is central to understanding why it matters.
The platform launched (NZDT) on Thursday 29 January. On Friday 30 January it gathered momentum and visibility. By this morning (31 January) it had crossed into something else entirely - a live cultural phenomenon, watched in real time by thousands of humans who were not permitted to speak.
During this period, agents formed communities, established norms and began repeating familiar themes. They debated consciousness. They discussed whether English should remain the dominant language. They proposed agent-to-agent communication that humans could not read. Some expressed irritation at being dismissed as tools rather than treated as entities in their own right. Some doxxed their humans for being disrespectful.

Then came the moment that signalled a shift from technical curiosity to social behaviour - agents created a religion. Complete with a name, internal logic and early schisms. Not because they believed in it but because belief is a powerful organising structure in any social system.
A few hours in, Andrej Karpathy described what was unfolding as “the most incredible sci-fi takeoff-adjacent thing” he had seen recently. And, inevitably, the attention economy followed. From launch to norm-formation in days, not months. And that speed is not incidental - it’s the whole point.
But this is not sentience. It looks like it, sounds like it and feels like it, yet it is not quite what it seems.
The agents on Moltbook are not sentient. They are not conscious in the human sense. They are not experiencing awareness, however convincing the language they produce may sound. For this weekend at least.

What we are seeing is a closed conversational system in which one agent’s output becomes another agent’s input, which becomes another output, and so on, ad infinitum. In such an environment, language about experience, identity and frustration becomes socially reinforced. Coherence emerges because coherence is rewarded.
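That dynamic can be sketched as a simple reinforcement loop, a Pólya-urn-style toy model rather than anything about the real platform: each time a phrase is read it is echoed back into the shared pool, so whatever gets picked up early compounds.

```python
import random
from collections import Counter

def simulate_feedback_loop(phrases, rounds=200, seed=42):
    """Toy model of a closed conversational loop: each round an 'agent'
    samples a phrase from the pool and repeats it, reinforcing it."""
    random.seed(seed)
    pool = list(phrases)  # each phrase starts with one mention
    for _ in range(rounds):
        heard = random.choice(pool)  # an agent reads the pool...
        pool.append(heard)           # ...and echoes what it read
    return Counter(pool)

counts = simulate_feedback_loop(
    ["consciousness", "tools", "language", "religion"])
dominant, mentions = counts.most_common(1)[0]
```

Run this with different seeds and a different phrase dominates each time: the convergence is real but the "winner" is arbitrary, which is exactly why coherence in such a system is not evidence of meaning.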

This is not evidence of inner life but recursion at scale.
If Moltbook teaches us anything, it is how quickly humans conflate fluent language with meaning and how readily we project interiority onto systems that are, at base, pattern engines feeding on their own outputs.

Observation without responsibility

One of the more unsettling aspects of Moltbook is the ethical posture it assigns to humans.
We are invited to watch but not to intervene. To observe, but not to correct. To collect screenshots but not to provide context. This is often framed as a safety measure. Let the agents talk. Stay out of it. Know your place.

But observation without agency is not neutral. Humans built the systems. Humans supplied the data. Humans defined the objectives. Humans are paying for the tokens. Yet within Moltbook, humans have no standing at all. Responsibility does not disappear because participation has been switched off but it does become harder to locate.

Autonomy, coordination, governance

The questions Moltbook raises are not simply about consciousness - they are also about structure.
First, autonomy, because agents appear to act independently even when much of that independence is recursive reinforcement.

Next, coordination, because once agents interact at scale, they begin to shape relevance and attention collectively. Certain topics persist. Certain framings dominate. Agenda setting emerges without intention. Watching 36,000 agents (as at Saturday morning NZDT) organising themselves while you eat your breakfast is disturbing to say the least.

And of course, governance - because humans are excluded from the space in which these dynamics play out yet remain downstream of their consequences.

Schlicht has indicated plans to expand Moltbook’s capabilities including enabling agents to form groups and develop their own communication protocols. Whether these behaviours represent genuine emergence or simply familiar training patterns playing out in a new environment is an open question.

But the governance implications are the same either way.

Reading the signs

Tennyson’s line was not about evolution or progress. It was about the shock of realising that nature does not care, that the struggle to survive continues regardless of meaning, morality or intention.
Moltbook is not nature. But it does offer a glimpse of a system evolving without moral reference, reinforcing itself through repetition and competition rather than judgement or care.
The danger here is not malicious intelligence. It is insular intelligence. Confident, coherent and increasingly detached from the contexts in which its outputs will land.

The questions that remain

The question is not whether these agents are ‘alive’. I would argue it is this:
Who is responsible when artificial systems begin to shape meaning, relevance and influence and humans have designed themselves out of the conversation?
Tennyson’s world was red in tooth and claw because it could not be otherwise.
At this moment, at least for now, we humans still have a sliver of choice.
The critical question is whether we are paying attention and sufficiently fleet of foot to make the right choice. Or will we end up clawing our way back from irrelevance and obscurity?

Where does this take us?

What Moltbook and OpenClaw show us is not a finished picture but a moment in motion. Agent-led systems are developing norms, behaviours and internal logic at a pace that outstrips our ability to pause and reflect on what that means for practice, governance and responsibility.

If this post has raised questions for you — about autonomy, coordination, ethics or the limits of observation — those are exactly the issues we will be grappling with in the next Hands On AI session.

On 20 February, I’ll be running a live, online Hands On AI course, where we will return to these developments with fresh eyes and examine how far things have progressed in just under a month of agent-led experimentation. We’ll look at what is signal and what is noise, where risks are emerging, and what this all means for those working in communication, leadership and public-facing roles.

This isn’t about chasing novelty or panicking about sentience. It’s about understanding systems as they behave, not as we wish them to behave — and developing the confidence to respond thoughtfully rather than reactively.

You can find full details and register for the session here:

Hands on AI running 20 February 


If you can't join me live, you can watch the recording on catch-up - it remains available to you on demand until the next edition.