The systems humans and institutions use to verify what is real online are failing to keep pace with the technology being used to fabricate or obscure it, according to a report from Wired.

For decades, digital verification relied on a combination of metadata, reverse image search, cross-referencing satellite imagery, and basic visual literacy. Each of those pillars is now under strain — not from a single breakthrough, but from the convergence of generative AI, platform policy changes, and the increasing restriction of open-source data that investigators once took for granted.
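
To make the first of those pillars concrete, here is a minimal sketch of a metadata check in Python, using the Pillow library; the filename and the choice of tags are illustrative. Camera photos usually carry capture time and device details, while AI-generated images typically carry none of them, and all of them can be stripped or forged, which is why metadata alone was never conclusive.

    # Minimal metadata check: dump the EXIF tags of an image file.
    # Requires the Pillow library ("pip install Pillow").
    from PIL import Image
    from PIL.ExifTags import TAGS

    def read_exif(path):
        """Return a {tag_name: value} dict of EXIF metadata, empty if none."""
        with Image.open(path) as img:
            exif = img.getexif()
            return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

    tags = read_exif("photo.jpg")  # hypothetical file
    for key in ("DateTime", "Make", "Model", "Software"):
        print(key, "->", tags.get(key, "<absent>"))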

The Collapse of Visual Trust

AI-generated images have moved from novelty to noise at industrial scale. Tools built on diffusion models can now produce photorealistic scenes — crowds, conflict zones, political figures — that defeat casual scrutiny and increasingly challenge forensic analysis too. The volume alone is part of the problem: when fabricated content is cheap to produce and expensive to debunk, the asymmetry favours the deceiver.

Visual verification experts have long taught a set of heuristics — check hands, check backgrounds, look for lighting inconsistencies. Those heuristics are eroding. The latest generation of image synthesis models produces hands with the correct number of fingers and shadows that fall at plausible angles. The tells are disappearing.
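
What forensic analysis looks like in practice is worth a brief illustration. One signal researchers have probed is an image's Fourier spectrum, where early GAN and diffusion outputs left unusual high-frequency energy patterns. The sketch below, with an illustrative filename, shows the general approach rather than a working detector; as the paragraph above notes, this tell is eroding too.

    # Toy forensic signal: how much of an image's spectral energy sits
    # outside the central low-frequency region. Early generative models
    # left statistical fingerprints here; current ones often do not.
    import numpy as np
    from PIL import Image

    def high_frequency_ratio(path):
        """Fraction of spectral energy outside the central low-frequency block."""
        gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
        h, w = spectrum.shape
        ch, cw = h // 2, w // 2
        low = spectrum[ch - h // 4 : ch + h // 4, cw - w // 4 : cw + w // 4].sum()
        return float((spectrum.sum() - low) / spectrum.sum())

    print(high_frequency_ratio("suspect.png"))  # hypothetical file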

The problem extends beyond still images. Video synthesis, while still more detectable than static imagery, is closing the gap. And audio cloning — the ability to replicate a specific person's voice from a short sample — has already been used in fraud cases affecting thousands of individuals across multiple countries.

The Satellite Data Problem

Open-source investigators — journalists, researchers, and conflict monitors — built a powerful verification ecosystem over the past decade, using freely available satellite imagery to geolocate events, track troop movements, and confirm or refute official accounts. That ecosystem now faces a structural threat.

Commercial satellite operators have begun restricting access to high-resolution imagery in conflict zones, citing legal pressure and government contracts. The platforms that aggregated and democratised this data have tightened their terms of service. What was once a common good for accountability journalism is becoming a tiered resource, accessible primarily to state actors and well-funded institutions.

This matters because the resulting verification gap is not neutral. Governments and military actors retain access to the same data that independent monitors are losing. The information asymmetry tilts against accountability.

What Platforms Broke and Won't Fix

Social media platforms were once pressure-tested by fact-checkers and researchers who could access post-level data through APIs and third-party tools. Meta, X (formerly Twitter), and others have progressively dismantled that access. Researchers who tracked misinformation at scale — measuring spread, identifying coordinated behaviour, mapping amplification networks — have lost the infrastructure their work depended on.
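
As an illustration of what that lost infrastructure enabled, here is a sketch of one common analysis: flagging possible coordinated behaviour by finding clusters of distinct accounts posting near-identical text within a short window. The Post structure and the thresholds are illustrative assumptions, not any platform's schema.

    # Toy coordinated-behaviour check: group posts by normalised text and
    # flag any text posted by many distinct accounts within a short window.
    import re
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Post:
        account: str
        text: str
        timestamp: float  # seconds since epoch

    def normalise(text):
        """Lowercase and collapse whitespace so trivial edits still match."""
        return re.sub(r"\s+", " ", text.lower()).strip()

    def coordinated_clusters(posts, min_accounts=5, window=600.0):
        """Texts posted by >= min_accounts distinct accounts within `window` seconds."""
        by_text = defaultdict(list)
        for p in posts:
            by_text[normalise(p.text)].append(p)
        clusters = []
        for text, group in by_text.items():
            group.sort(key=lambda p: p.timestamp)
            for anchor in group:
                accounts = {p.account for p in group
                            if 0 <= p.timestamp - anchor.timestamp <= window}
                if len(accounts) >= min_accounts:
                    clusters.append((text, sorted(accounts)))
                    break
        return clusters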

The practical consequence is that the independent verification layer that once acted as a check on viral falsehoods has thinned considerably. Platform-native fact-checking programmes have been scaled back or eliminated entirely. Meta announced in January 2025 that it was ending its third-party fact-checking programme in the United States.

The human cost is concrete. A 2023 study of 1,600 participants conducted by researchers at the University of Regina found that repeated exposure to AI-generated misinformation significantly degraded participants' ability to identify false content over time — a phenomenon the researchers termed "epistemic fatigue." When correction mechanisms weaken and fabrication accelerates, ordinary people bear the cognitive burden.

The Verification Community's Response

Some organisations are fighting the trend directly. Bellingcat, the open-source investigation collective, has continued to develop and share geolocation methodologies. The Duke Reporters' Lab tracks fact-checking organisations globally and reported in 2024 that while the number of active fact-checking outlets has grown to over 400 worldwide, their collective capacity remains dwarfed by the volume of false content circulating daily.
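
One example of those methodologies is shadow-based chronolocation: checking whether the shadows in a photo are consistent with the sun's position at a claimed place and time. The sketch below assumes the third-party pysolar library and uses illustrative coordinates and a made-up timestamp.

    # Expected shadow direction and length for a vertical object at a
    # given place and UTC time. Requires pysolar ("pip install pysolar").
    import math
    from datetime import datetime, timezone
    from pysolar.solar import get_altitude, get_azimuth

    def expected_shadow(lat, lon, when):
        """Return (shadow_azimuth_deg, shadow_length_per_unit_height)."""
        altitude = get_altitude(lat, lon, when)  # sun elevation above horizon
        azimuth = get_azimuth(lat, lon, when)    # degrees clockwise from north
        if altitude <= 0:
            raise ValueError("sun below horizon at this time")
        shadow_azimuth = (azimuth + 180.0) % 360.0  # shadow falls away from the sun
        shadow_length = 1.0 / math.tan(math.radians(altitude))
        return shadow_azimuth, shadow_length

    # Do the shadows match a photo claimed to show Kyiv at 14:00 UTC on 1 June 2024?
    az, length = expected_shadow(50.45, 30.52,
                                 datetime(2024, 6, 1, 14, 0, tzinfo=timezone.utc))
    print(f"shadows should point ~{az:.0f} deg at {length:.2f}x object height")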

Technology companies are developing AI-based detection tools, but those tools face the same adversarial dynamic as every previous verification method: as detection improves, generation adapts. Google's SynthID watermarking system, which embeds imperceptible signals in AI-generated content, represents an attempt at provenance tracking — but it only applies to content created with Google's own tools and depends on voluntary adoption across the industry.
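
SynthID's actual scheme is proprietary and woven into the generation process itself, so the sketch below is only a toy illustration of the general idea of an imperceptible provenance mark: a least-significant-bit watermark. Real systems are engineered to survive cropping, compression, and re-encoding; this one survives none of them.

    # Toy provenance watermark: hide a bit string in the least significant
    # bits of an image's first pixels, then read it back. Illustrative only.
    import numpy as np

    def embed(pixels, bits):
        """Write each bit into the lowest bit of consecutive pixel values."""
        flat = pixels.flatten()  # flatten() returns a copy, so the original is untouched
        for i, bit in enumerate(bits):
            flat[i] = (flat[i] & 0xFE) | bit
        return flat.reshape(pixels.shape)

    def extract(pixels, n):
        """Read the first n hidden bits back out."""
        return [int(v & 1) for v in pixels.flatten()[:n]]

    mark = [1, 0, 1, 1, 0, 0, 1, 0]                        # illustrative provenance tag
    image = np.random.randint(0, 256, (64, 64), np.uint8)  # stand-in image
    assert extract(embed(image, mark), len(mark)) == mark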

Media literacy programmes are expanding in schools across Europe and parts of North America, with curricula designed to teach source evaluation and lateral reading. The evidence for their effectiveness is cautiously positive but limited: a 2022 randomised trial of 3,000 students across six countries, conducted by the News Literacy Project, found that structured media literacy training improved source-evaluation scores by 26% on average. The challenge is scale and durability.

What This Means

The verification crisis is not a temporary gap waiting to be closed by the next tool — it is a structural condition produced by the intersection of accessible AI, platform retreat from transparency, and the commodification of data that was once open. Anyone consuming information online — which is to say, nearly everyone — is operating with weakened instruments for determining what is real.