OpenAI has testified in favour of an Illinois state bill that would restrict when AI companies can be held legally liable for harms caused by their models — including scenarios involving mass deaths or large-scale financial disasters.
The bill, introduced in the Illinois state legislature, would limit the circumstances under which AI laboratories face lawsuits, even in cases the legislation itself classifies as 'critical harm.' According to Wired, OpenAI — maker of ChatGPT — appeared before lawmakers to support the measure, marking a significant example of an AI company lobbying directly on liability law at the state level.
An AI company lobbying to limit its legal exposure even in mass-casualty scenarios reframes the central question of AI governance: not just what the technology can do, but who pays when it goes wrong.
What the Illinois Bill Would Actually Do
The legislation would establish a liability shield for AI model developers, narrowing the legal pathways through which individuals, businesses, or governments could sue AI companies for damages. The protections would apply even in instances of 'critical harm' — a term that, under the bill's own framing, covers outcomes as severe as mass casualties and systemic financial collapse.
The bill operates at the state level, meaning it would be binding only within Illinois jurisdiction and enforceable through Illinois civil courts. It is not a federal statute and would not preempt federal litigation or regulation. However, Illinois is a significant commercial and technology hub, and state-level liability law has historically shaped corporate behaviour well beyond state borders.
The precise mechanism of the liability limit — whether it caps damages, requires proof of gross negligence, or narrows the class of eligible plaintiffs — was not fully detailed in available reporting at the time of publication, according to Wired's account.
Why OpenAI Is Lobbying at the State Level
The decision to engage state legislatures reflects a broader strategic calculation by AI companies. With no comprehensive federal AI liability framework currently in force in the United States, state legislatures have become the primary venue for legal accountability questions. Several states, including California, Colorado, and Texas, have introduced or passed AI-related legislation in recent sessions.
For OpenAI, securing favourable liability terms now — before a federal standard is set — could significantly reduce legal exposure as its models are deployed at scale across healthcare, finance, legal services, and critical infrastructure. The company has argued publicly that overly broad liability rules could stifle innovation and make AI development economically unviable in the US.
Critics of that position contend that liability is precisely the mechanism that incentivises companies to invest in safety. Without meaningful legal consequences, they argue, AI developers have weakened incentives to prevent foreseeable harms.
The 'Critical Harm' Classification and What It Covers
The use of the term 'critical harm' in the bill's own text is significant. Legislative language that explicitly acknowledges catastrophic risk categories — mass death, large-scale financial damage — while simultaneously limiting liability for them is unusual. To critics, it suggests lawmakers and lobbyists are aware of worst-case scenarios but are nonetheless prioritising commercial protection over victim recourse.
Legal scholars have noted that existing tort frameworks in the US struggle to handle AI-related harms cleanly. Questions of causation — whether a model's output directly caused a harm, or whether a human intermediary broke the causal chain — are genuinely complex. However, the approach of limiting liability broadly has drawn concern from consumer advocates and safety researchers who argue it removes the legal system as a check on reckless deployment.
No independent legal experts were quoted in the available source material, but the structure of the bill as reported places it firmly in the category of binding state civil law, not an advisory standard or voluntary industry code.
What Happens Next
The bill's current status in the Illinois legislative process was not confirmed in the available reporting. If passed, it would take effect within Illinois and could face constitutional challenges or conflict with future federal legislation. The US Congress has not passed comprehensive AI liability legislation, though several proposals are under discussion in both chambers.
OpenAI's testimony is a notable public commitment. The company has now placed itself on record as supporting liability limits even in catastrophic-harm scenarios — a position that will feature in future policy debates, regulatory proceedings, and, potentially, litigation.
Other major AI developers, including Google, Meta, and Microsoft, have not publicly stated positions on the Illinois bill, according to current reporting. Whether they follow OpenAI's lead in state-level lobbying on liability will be an important signal of how the industry intends to engage with accountability law.
What This Means
If the Illinois bill passes, it sets a precedent that AI companies can shape their own liability exposure through state legislatures — potentially before federal regulators or Congress define the rules — leaving victims of AI-caused catastrophic harm with significantly narrowed legal options.
