OpenAI has released a safety overview for Sora 2, its next-generation AI video model, and the Sora app, a new social creation platform, claiming that safety measures were built into both products from the ground up.

The announcement, published on the OpenAI Blog, arrives as regulators and civil society groups worldwide are intensifying scrutiny of AI-generated video — a medium increasingly associated with synthetic media risks including non-consensual deepfakes, election-related disinformation, and the potential for large-scale visual misinformation. OpenAI's post describes its approach as "anchored in concrete protections," but stops short of publishing detailed technical specifications for those measures.

What OpenAI Says About Sora 2's Safety Design

According to the company, Sora 2 and the Sora app were designed with safety considerations integrated at the development stage rather than applied after deployment. OpenAI frames this as a response to what it calls "novel safety challenges" posed by a state-of-the-art video model and a social platform built around user-generated AI content.

The distinction between a standalone model and a social creation platform matters here. A platform that enables users to publish and share AI-generated video introduces content moderation obligations that a pure API product does not, including questions about who bears responsibility for harmful outputs — the developer, the platform operator, or the individual user.

OpenAI has not specified in this post the technical mechanisms underpinning these protections — for example, whether Sora 2 employs classifier-based content filtering, watermarking of generated video, or human review pipelines. The company has also not detailed the appeals process available to users whose content is restricted, or published an independent audit of the system's effectiveness.
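
To make that vocabulary concrete, the following is a minimal, purely hypothetical sketch of one such mechanism: classifier-based filtering with a tiered allow/review/block decision. Every category name, threshold, and function here is an assumption for illustration; OpenAI has not disclosed whether Sora 2 uses anything resembling it.

```python
# Hypothetical illustration of classifier-based content filtering for a
# video-generation pipeline. Nothing here reflects OpenAI's actual system:
# the categories, thresholds, and stub classifier are all assumptions.
from dataclasses import dataclass

# Policy categories a safety classifier might score against (illustrative).
POLICY_CATEGORIES = ("sexual_minors", "non_consensual_likeness", "graphic_violence")
BLOCK_THRESHOLD = 0.85   # auto-reject above this score
REVIEW_THRESHOLD = 0.50  # queue for human review between the two thresholds

@dataclass
class ModerationDecision:
    action: str            # "allow", "review", or "block"
    category: str | None   # highest-scoring category, if flagged
    score: float

def classify(text: str) -> dict[str, float]:
    """Stand-in for a trained safety classifier.

    A real system would call a model; this stub flags one keyword so the
    example runs end to end.
    """
    return {c: (0.9 if "deepfake" in text.lower() else 0.1) for c in POLICY_CATEGORIES}

def moderate_prompt(prompt: str) -> ModerationDecision:
    scores = classify(prompt)
    category, score = max(scores.items(), key=lambda kv: kv[1])
    if score >= BLOCK_THRESHOLD:
        return ModerationDecision("block", category, score)
    if score >= REVIEW_THRESHOLD:
        return ModerationDecision("review", category, score)
    return ModerationDecision("allow", None, score)

if __name__ == "__main__":
    print(moderate_prompt("a deepfake of a public figure"))   # blocked
    print(moderate_prompt("a timelapse of a city at night"))  # allowed
```

The tiered pattern, automatic blocking at high confidence with human review in the uncertain middle band, is common in production moderation systems; the post does not say whether Sora 2 follows it.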

The Regulatory and Jurisdictional Context

OpenAI's safety claims are voluntary and self-reported, not verified by a third-party auditor or mandated by a binding regulatory framework in any jurisdiction at the time of publication. In the United States, there is no comprehensive federal law governing synthetic video; the TAKE IT DOWN Act, signed in May 2025, criminalises the non-consensual publication of intimate imagery, including AI-generated depictions, but leaves broader synthetic-media questions to a patchwork of state statutes. The European Union's AI Act, which entered into force in August 2024, imposes transparency obligations on general-purpose AI models and on providers of systems that generate deepfakes, but enforcement of those provisions is still being phased in, and the Act's specific requirements for video generation platforms remain subject to ongoing regulatory guidance.

In the United Kingdom, the Online Safety Act places duties on platforms to address illegal content including non-consensual intimate images, which could encompass AI-generated video. Whether the Sora app falls within the scope of UK regulation would depend on its availability and user base in that jurisdiction.

The gap between a company's stated commitments and binding legal obligations is a recurring tension in AI governance. OpenAI's framing of safety as "at the foundation" is a design philosophy claim, not a compliance certification.

Industry Pattern: Safety Announcements Ahead of Product Launches

The blog post fits a now-familiar industry pattern: safety documentation published alongside or shortly before a major product release. Critics, including researchers at organisations such as the AI Now Institute and the Center for AI Safety, have argued that such announcements can function as reputational management as much as genuine transparency, particularly when they lack quantitative benchmarks, red-team findings, or commitments to external review.

Proponents of this approach counter that publishing safety reasoning publicly, even without full technical disclosure, creates a record against which future failures can be measured and establishes norms that competitors may feel pressure to match.

OpenAI has previously published more detailed system cards for models including GPT-4 and DALL-E 3. Whether a comparable level of documentation will accompany Sora 2's full release is not addressed in this post.

What Happens Next

The Sora app's launch as a social platform will test OpenAI's safety claims at scale. Content moderation on user-generated AI video is a materially harder problem than restricting outputs from a closed API — the volume, speed, and creativity of user attempts to circumvent restrictions are substantially higher when a product is publicly accessible.
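
The circumvention point is easy to illustrate. The toy filter below, again purely hypothetical and not a description of Sora's safeguards, shows how a literal blocklist catches an obvious phrasing while a trivial rephrase passes straight through; at platform scale, users produce such rephrasings constantly.

```python
# Hypothetical demonstration of why naive filtering fails at platform scale:
# a keyword blocklist catches the literal phrase but not a simple rephrase.
BLOCKLIST = {"deepfake"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt should be blocked (blocklist hit)."""
    return any(term in prompt.lower() for term in BLOCKLIST)

print(naive_filter("make a deepfake of the mayor"))                  # True: caught
print(naive_filter("an ultra-realistic video of the mayor's face"))  # False: evades
```

Robust systems therefore layer semantic classifiers, output-side checks, and human review on top of anything keyword-based, at correspondingly higher operational cost.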

Policymakers in the EU are likely to scrutinise how OpenAI classifies Sora 2 under the AI Act's risk tiers, particularly given the model's capacity to generate realistic human likenesses. Separately, the US Copyright Office is actively examining questions around AI-generated content and authorship, which could affect how Sora-generated video is treated legally.

OpenAI has said it will share more details about its approach, but has not committed to a specific timeline for further disclosure.

What This Means

Until OpenAI publishes independently verifiable safety benchmarks or submits to third-party auditing, its claims about Sora 2's safety architecture remain the company's own assessment — meaningful as a statement of intent, but not a substitute for enforceable accountability.