From the time I was a little girl, I always wanted to ride a dragon. In my imagination it was the greatest adventure I could ever have – soaring high, seeing the world from above, a friendly creature by my side. This week, thanks to OpenAI and its new Sora app, I can do exactly that. I can drop myself into any reality I desire. More intriguingly, my friends can drop me into theirs too – and, if there are any out there, so can my enemies.
With the launch of OpenAI’s Sora app, we are handing over far more than we realise. It promises astonishing things but arrives carrying significant risk to our individual identities and the threat of immense societal harm. The technology is being rolled out at speed, with minimal guardrails, and the effects are already visible. Our faces, our voices, our gestures no longer truly belong to us. They are captured and replicated by AI systems across multiple platforms, under terms we rarely read, on assumptions we rarely challenge. That alone creates new and shifting realities.
We live with competing realities, with no reliable way to distinguish a genuine recording from an artificial one. There is little difference to the viewer between a real video of me speaking and one in which I swoop past on a dragon – it all depends on the intent of the storyteller. The capacity for deepfaking was already at our fingertips in 2019.
I wrote then that our realities were about to shift. In the few short years since, the technology has exceeded even those early predictions. Yet the human condition – polarisation, authoritarian pressure, discrimination, greed – has remained constant. Technologies built and marketed for profit rather than progress are now accelerating societal fragmentation at a scale we have not experienced before.
OpenAI’s Sora 2 is packaged inside what is, in effect, a social media app. On the surface it is fun, shareable, humorous. Beneath that, it is a powerful content engine emerging into a global atmosphere already charged with tension, polarisation and bias. The original promise of social media – connection, discovery, openness – has long been corroded. What was once a place to find community has also become a venue for manipulation and harm.
Within hours of Sora’s public debut, fabricated clips appeared placing real people – public figures and celebrities – into absurd or compromising contexts. One widely shared clip depicted OpenAI’s own CEO Sam Altman apparently “stealing graphics cards”, a wholly synthetic scene that nonetheless sparked debate about whether it was real.
As these clips are shared beyond the Sora app onto other platforms, the reality distortion spreads. People will wake to images of themselves that they never made and never consented to. The ability to trust one’s own image is slipping away; we have less faith in our own faces. Without new norms, safeguards and avenues for redress, our reflection will belong to others. Like the warped mirrors of old fairgrounds, our image will be distorted and unsettling and, while briefly amusing, will fundamentally undermine the physical lives we are living. Seeing, in this new world, makes us blind to reality, blinkers us to that which is living and breathing, and consigns us instead to the trap of illusion. This is not a life we can live – instead, the titans of tech trap us in a fantasy of servitude – as we ride our dragons, they capture and retrain our minds.
For decades I have been an advocate for technology, for the marvellous things it can do, for the ways it can improve lives and create opportunities. I have celebrated and amplified its promise because I believed – and still believe – such tools can serve humanity. But what confronts us now is technology that reshapes our human world beyond recognition and is wielded not by disinterested stewards but by people driven by profit, not progress. The titans of tech invest in one another at unimaginable scale – billions and trillions changing hands – and in their race towards private gain they give little thought to the public good. They are not concerned with you or with me. Their eyes are fixed on horizons of their own making, goals they define as valuable but which do not serve us all.
We must also reckon with the hidden price of our synthetic realities. Every dragon ride conjured by a generative engine feeds on data centres spreading across the globe, vast complexes draining water and electricity from communities that need those resources simply to cook dinner, heat homes or irrigate fields. The slop of synthetic content is built on altered landscapes carved up, acre after acre after acre, each server hall a monument to our appetite for novelty. There is no long-term thought or apparent care for the impact these changes will have – environmental, social and cognitive costs are externalised while the profits are internalised.
Individually, the question of reality has to be addressed – how we understand and recognise ourselves and our likeness when it can be captured, remade and broadcast at will. But collectively, the cost of our altered states and warped realities must be calculated and confronted. The rollout must be halted until the corporate intent behind this technological destruction is examined and understood. Until then, as we roam the world on dragons conjured from code, we scorch the earth beneath us with our imaginations.
If we as individuals now face acute identity risk, then organisations face the prospect of seeing their reputations burnt to a crisp. Deepfakes do not only target people; they target institutions, corrode internal narratives and destroy stakeholder trust. We have already seen fabricated videos of CEOs, of leadership figures, of politicians. There is something out there for everyone – a synthetic statement of support for a controversial policy, or a manufactured “employee” levelling accusations of misconduct in a workplace that never existed.
The velocity of synthetic video ensures that damage begins before legal or technical teams can even react. Crisis escalation is no longer hypothetical. Already, so-called apocalyptic influencers are producing disaster videos and fabricated news reports using tools like Google’s Veo, blending alarmism with believability. Even if your internal teams flag content as fake, many decision-makers and members of the public will not wait. Social media cycles, newsrooms and regulators may treat the clip as truth before you have had a second to respond. The reflex remains powerful – if we can see it and hear it, it must be real, even if it blinds us to the truth.
Our regulatory environment is dangerously light. Synthetic content ought to fall under transparency obligations, with clear labelling and provenance, but legislation and regulation lag far behind the technological changes. Worse, the lack of coordination across jurisdictions means the dangers for organisations and individuals remain acute. Most internal policies, contracts and governance frameworks do not contemplate the scale of forgery that is upon us. They were written for cloned emails, for phishing sites, for still images tweaked at the margins. They were not written for a world in which your chief executive can appear in a synthetic film in minutes, saying words they never said in a place they never stood.
This new paradigm demands contracts that set out licensing terms, audit rights, takedown obligations, revocation clauses and provenance requirements. Yet the technology titans will not offer that willingly. We all know the difficulty of reaching anyone inside Facebook, Google or the other major platforms when help is needed. Requests vanish into automated forms. Frustration mounts. In the meantime, falsehoods and fabrications propagate unchecked and the harm multiplies.
If misinformation was already corrosive, this latest layer of synthetic identity risk is lethal. Organisations need to treat it as such. It must sit on the risk register, not buried in the footnotes but written in bold. Scenario planning and crisis preparation must now include synthetic crisis drills. Rehearse how you will respond when a video appears, how you will publish verifiable originals, how you will reclaim your narrative. Without that preparation, your reputation will fracture irreparably before you can even attempt a rebuttal.
In politics and civic life, video has become the highest form of persuasive currency. Until now, because video was hard to fake, we held a shared baseline. If a leader said something on camera, we debated the meaning. These days we debate whether the video ever happened at all.
This week, we have seen Donald Trump’s team circulate manipulated video and content to demean opponents and once again present a version of reality that is false, misleading and harmful. Because it is packaged in a familiar style and tone, it appeals to the basest instincts of his follower base. Thanks to the ownership model of social and other media channels, the harm seeps in at speed, corrupting understanding and fuelling fears. Mainstream media reporting on manipulated Trump campaign videos highlights how these tactics are evolving to blend authentic and synthetic clips into a seamless, viral whole.
Deepfake video can be weaponised for elegant disinformation – synthetic speeches, protest footage, scripted events that never occurred. With generative video tools in the hands of many, scale and sophistication increase exponentially.
Sora’s rise matters here, not simply as a novelty but as a powerful tool for state or non-state actors seeking to distort truth. It blends a feed model – viral content – with deepfake generation. That fusion creates a vector for political content to spread before fact-checkers, journalists or authenticity defenders can respond.
Regulators are struggling to keep pace. Many election laws and media regulations predate synthetic video of this realism and scale. When claims about political video arise, authorities often lack both the legal mandate and the forensic capacity to intervene. Synthetic video becomes a weapon of plausible deniability – any damaging real footage can be dismissed as fake, any fabricated footage passed off as real.
We may be entering an epoch where political legitimacy is fought in holograms and counter-holograms, where truth is no longer about facts but trust in sources. And in that space, the first to flood the field with synthetic content often wins.
And so we come full circle to the future, to our children and grandchildren, who will find themselves not immersed in books but immersed in synthetic realities. When I was a girl, dreaming of dragons, I found them in the golden treasure of stories. One that springs to mind is The Magician’s Nephew by C.S. Lewis, where Digory and Polly stumble into the wood between the worlds, each pool leading to another reality, another existence. For them, stepping through was a choice of imagination. For our children, their jumps between realities will be made not through fiction but through technologies presented to them, technologies engineered and manipulated by the titans of tech.
How will that affect the world they grow into? How will their lives be altered by the environmental destruction wrought by the sprawling data centres that feed these illusions? Buildings consuming rivers of water and gulps of power while communities struggle to grow food, keep the lights on or simply breathe unpolluted air? What does it mean for them when the landscapes they inherit are already scarred by today’s hunger for artificial visions?
The youngest generation will grow up not only on social media but on synthetic reality platforms – many of which will escape scrutiny because they are not labelled as “social networks”. The danger is not merely that they will consume or create such content but that they will be immersed in it. From their early teens, children are vulnerable. We already know from past misuse – deepfake nudes, manipulated images circulating on Snapchat and elsewhere – that cruelty among peers is amplified online. AI video “social apps” raise the stakes further. Now classmates can deepfake your likeness, mock you, shame you, or reach out to you through synthetic personas.
At scale this creates a profound dislocation from shared reality. When a generation can live simultaneously across multiple, constructed worlds, how do they anchor themselves to any one truth? How do they trust institutions, facts or even each other? The social glue, our common narrative, our humanity, begins to crack.
Regulations aimed at children and social media – minimum ages, bans, filters – may not apply to AI tools presented as “art apps” or “creative platforms”. Loopholes abound. Services can claim to be story builders or learning companions while carrying, beneath the surface, the full social features of a network. Already, AI chatbots and avatars are becoming default companions for teens. The US Federal Trade Commission last month launched an inquiry into AI chatbots acting as companions, questioning what steps companies are taking to limit risks for children and adolescents. If a young person confides in a synthetic “friend” after hours, entrusts it with feelings, or is manipulated by its responses, this sits largely outside parental controls or existing child-protection laws.
We are likely to see emotional, social and cognitive harms emerge not simply from exposure to distressing material but from living partially in constructed realities. The line between self and substitute will blur. What will that mean for them? For the lives unlived or the lives lived entirely in synthetic space? And for their children, and their children’s children?
Too much is made of generational divides – Generation This, Generation That – labels that are marketing and media shorthand more than social truth. In reality, most parents and grandparents want the same thing: that the next generation – their loved ones – experience a life better than their own. That hope is what binds families. Yet those who hold power and profit have already done much to undermine that hope. They strip away security, stability and sustainability, replacing them with stories sold for engagement. Like the Snow Queen scattering shards of her evil mirror, they embed distortions that freeze our hearts bit by bit.
Synthetic realities risk becoming those shards – tiny splinters of falsehood that change how the young see themselves, each other and the world around them. Without resistance, they will be drawn further into frozen fakes, an endless winter of illusion. What melts those shards is not more technology but our determination to hold on to truth, to love and to human connection. In this moment, with the launch of apps like Sora, that determination is placed in jeopardy once again.
It falls to all of us to resist. To push back against the authoritarians, against the technocrats, against those who hoard wealth and power at the expense of human progress. To refuse the illusions that tempt us away from truth and into fantasy. We must demand that the technology available to us serves people, not profit. That it helps us build a world better than those we have seen before – better than the fractured realities we are offered today, better than manufactured worlds – a reality we can be proud to pass on.