At an event focused on the latest technological trends in the video camera industry, a speaker asks an audience of around 200 people how many of them are protecting the integrity of the content captured by their devices well enough for it to be admissible as evidence in court. Every hand in the room goes up.
He then asks a second question: how many believe that everyone else in the room is protecting their video content to the same standard? Not a single hand is raised. "That is the reality of the industry," explains Jason Crawforth, founder and CEO of the start-up SWEAR, winner of multiple innovation awards, including honours at CES in Las Vegas and SXSW, the Edison Awards in 2024 and 2025, the CIO Award and the Security Industry Association's NPS Award.
Camera technology itself has changed little over the past two decades, “but everything outside the sector has transformed dramatically,” Crawforth says, particularly with the rise of generative artificial intelligence and, to a lesser extent, quantum algorithms. In his view, video recorded today can still be used as courtroom evidence, “but in three years’ time that same video will no longer be accepted.”
The story does not end there. Crawforth asks the audience how many are using AI and “between 70% and 80% of the room raises a hand”. He then asks how many are developing technologies to protect themselves from AI. “Zero.” If nothing is done, he warns, AI will become “a weapon. It will cause wars and conflicts, destroy reputations, bring down companies and change elections.”
The question has never been more pressing. At the start of 2025, estimates suggested that more than four million deepfakes were already circulating online — video, audio and images manipulated or generated using AI. That number is now significantly higher. Around 25,000 new deepfakes are created every day, with annual growth running at 900%.
The impact on trust is already visible. According to YouGov, 81% of respondents doubt the reliability of online content, while the World Economic Forum’s Global Risks Report 2025 ranked misinformation and disinformation as the single greatest risk facing societies over the next two years.
All this comes at a time of what the Harvard Kennedy School has described as a “friendship recession”. Almost 40% of Americans report having friendships that exist exclusively online — but is there really anyone on the other side? Solo dining has increased by 29% over the past two years, and teenagers now spend just 40 minutes a day with friends in person outside school hours, compared with 140 minutes a day almost two decades ago.
The crisis of trust may be the defining feature of our era. According to Gallup, public trust in the media has fallen even more sharply than trust in other institutions.
SWEAR’s technology is one example of how the tech sector is trying to respond. Integrated directly into cameras and recording devices, it assigns a unique digital “DNA” to every single video frame, which is then written to an independent, immutable blockchain ledger. Potential clients range from airports and city authorities to retailers, law firms and surveillance providers.
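SWEAR's exact scheme is proprietary, but the general idea of giving each frame a tamper-evident "DNA" can be sketched as a hash chain: every frame's digest incorporates the digest of the frame before it, so altering any single frame invalidates everything that follows. The sketch below is a minimal illustration of that principle, not SWEAR's implementation; the frame bytes, genesis value and in-memory "ledger" list are all stand-ins.

```python
import hashlib

def frame_fingerprint(frame_bytes: bytes, prev_digest: bytes) -> bytes:
    """Digest of this frame chained to the previous frame's digest."""
    return hashlib.sha256(prev_digest + frame_bytes).digest()

# Simulate a short clip: each "frame" is just raw bytes here.
frames = [b"frame-0", b"frame-1", b"frame-2"]

ledger = []                   # stand-in for entries written to an immutable ledger
digest = b"\x00" * 32         # illustrative genesis value
for f in frames:
    digest = frame_fingerprint(f, digest)
    ledger.append(digest.hex())

def verify(frames, ledger) -> bool:
    """Replay the chain and compare each digest against the ledger."""
    d = b"\x00" * 32
    for f, recorded in zip(frames, ledger):
        d = frame_fingerprint(f, d)
        if d.hex() != recorded:
            return False
    return True

print(verify(frames, ledger))                                  # untouched footage
print(verify([b"frame-0", b"TAMPERED", b"frame-2"], ledger))   # one frame altered
```

Because each digest depends on its predecessor, a forger cannot splice, drop or replace frames without re-computing the rest of the chain, which the independent ledger makes detectable.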
Three years ago, the company Axis launched an algorithm to cryptographically sign every frame and image produced by its cameras and made it freely available. It even approached media players such as VLC and Media Player. The initiative gained little traction.
“The European Union’s Artificial Intelligence Act requires AI-generated content to be labelled as such, but anyone creating content with the intention of passing it off as real is not going to label it,” explains Alberto Alonso, Axis’s Director of Engineering for Europe. “The only real solution would be the opposite: to label all legitimately generated content, whether it is artificial or not.”
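The approach Alonso describes, authenticating legitimate content at the moment of capture rather than labelling fakes, can be illustrated with a toy example. Real cameras would use asymmetric signatures so that anyone can verify without holding the secret; the symmetric HMAC below is only a simplified stand-in, and the key and frame bytes are invented for the sketch.

```python
import hashlib
import hmac

CAMERA_KEY = b"per-device secret"   # real devices would hold an asymmetric key pair

def sign_frame(frame: bytes) -> str:
    """Authentication tag attached to a frame as it leaves the sensor."""
    return hmac.new(CAMERA_KEY, frame, hashlib.sha256).hexdigest()

def verify_frame(frame: bytes, tag: str) -> bool:
    """Constant-time check that the frame still matches its tag."""
    return hmac.compare_digest(sign_frame(frame), tag)

frame = b"jpeg bytes of one frame"
tag = sign_frame(frame)
print(verify_frame(frame, tag))          # genuine frame
print(verify_frame(frame + b"!", tag))   # any modification breaks the tag
```

The design point is the one Alonso makes: a forger will never attach a valid tag voluntarily, so the only robust signal is the presence of a verifiable tag on legitimate content, not a label on fake content.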
Verifying identity has also become a strategic issue online, and a major cybersecurity challenge. The problem is no longer whether attackers are human users — identifiable through a wide range of authentication systems — or machines that follow predictable rules. At Identiverse 2025, attention focused on the emerging risks posed by so-called non-human identities (NHIs).
The NHI Mgmt Group has been analysing this phenomenon for some time and estimates that the number of non-human identities within a large organisation’s IT environment could be 25 or even 50 times greater than the number of human identities. In some global programmes within the financial sector, more than 500,000 NHIs have already been identified.
Particularly significant are AI agents that operate with human-like intelligence at machine speed. This has prompted debate over whether they should be subject to the same rights and obligations as the humans they represent; after all, these agents make decisions on those humans' behalf.
Identity remains a challenge even for humans. The World Bank estimates that 850 million people worldwide lack any form of official identification, although some sources put the figure closer to 1.1 billion. The European Union aims to provide digital identity to all citizens by 2030 and is promoting adaptive authentication systems, in which users are assigned dynamic risk profiles. AI will analyse behaviour in real time.
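A dynamic risk profile of the kind such adaptive systems build can be sketched as a weighted score over behavioural signals, with the required authentication factor escalating as the score rises. The signals, weights and thresholds below are purely illustrative assumptions, not any specific product's logic.

```python
from dataclasses import dataclass

@dataclass
class LoginSignal:
    known_device: bool
    usual_country: bool
    typing_deviation: float   # 0.0 = matches the user's profile, 1.0 = very unlike them
    failed_attempts: int

def risk_score(s: LoginSignal) -> float:
    """Weighted sum of behavioural signals; weights are illustrative only."""
    score = 0.0
    if not s.known_device:
        score += 0.3
    if not s.usual_country:
        score += 0.3
    score += 0.3 * min(s.typing_deviation, 1.0)
    score += 0.1 * min(s.failed_attempts, 3) / 3
    return round(score, 2)

def required_factor(score: float) -> str:
    """Escalate authentication as the risk profile worsens."""
    if score < 0.3:
        return "password only"
    if score < 0.6:
        return "password + one-time code"
    return "step-up: biometric check"

low = LoginSignal(known_device=True, usual_country=True, typing_deviation=0.1, failed_attempts=0)
high = LoginSignal(known_device=False, usual_country=False, typing_deviation=0.9, failed_attempts=2)
print(risk_score(low), required_factor(risk_score(low)))
print(risk_score(high), required_factor(risk_score(high)))
```

In a real deployment the weights would be learned from behaviour over time rather than hand-set, which is precisely where the AI-driven, real-time analysis mentioned above comes in.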
So how will we know what is real in five years’ time? “We are one or two years away from not being able to believe anything,” Crawforth insists. His technology also records when something was captured, where it was captured, how it was captured and even who captured it. “That matters, because truth depends on time, location and content. You need to know whether that was the real place where it was filmed, or whether the scene is real but happened ten years ago.”
In the United States, several major technology companies formed the Coalition for Content Provenance and Authenticity (C2PA) a few years ago, led by Microsoft and Adobe, with the participation of media organisations and agencies such as the BBC and Publicis. In recent months, familiar names such as Meta, Google, OpenAI and Amazon have joined, alongside Chinese companies including Huawei and TikTok. Its impact so far has been limited. Its main achievement has been agreement on the use of a watermark to indicate AI-generated content.
Regulation will also need to be rethought in the face of the coming wave of ambient intelligence. Europe remains highly restrictive when it comes to the use of images for facial recognition. Although the recent Digital Omnibus softens some edges, it remains far removed from China’s excesses and from the more permissive stance of the United States and the United Kingdom.
The industry does have anonymisation tools at its disposal, even if Spain is less proactive than Germany or Italy in mandating their use. In practice, systems only ever see an avatar of us. There is no one sitting behind the cameras watching.
Ultimately, identity is the central issue in the technological future now taking shape. The question is who will control it: users, companies or states. David Barrera, a researcher at Carleton University in Canada, believes computers will be embedded wherever possible — from hearing aids to coffee cups, washing machines and toasters.
“For a very simple reason: once you put a computer into something, you can collect data on how it is used and better understand user behaviour. That data and the analytics built on top of it can then be sold. If someone knows how many times you lift your coffee cup, someone else will find a way to monetise that information.”
Eugenio Mallol is a journalist specializing in technological innovation. He created the INNOVADORES supplement in El Mundo and La Razón, which he directed for 11 years. He is currently Director of Strategy and Communications at Atlas Tecnológico, as well as analyst and coordinator of the Science and Society Chair at the Rafael del Pino Foundation. He is a columnist for Forbes Spain and contributes to digital outlets such as InnovaSpain and Valencia Plaza. He is also the author of books and reports on technological innovation and a frequent speaker.