OpenAI Distances From Illinois Liability Shield Proposal
Why It Matters
The shift highlights a growing tension between AI labs seeking 'safe harbors' and the public demand for legal accountability for high-risk systems. It signals that legislative liability remains a primary battleground for the future of AI governance.
Key Points
- OpenAI clarified it does not support the liability immunity provision for AI catastrophes in Illinois SB 3444.
- The company has shifted its focus to supporting SB 315, which requires mandatory third-party audits for advanced AI.
- Internal documents and previous statements by OpenAI spokespeople suggest the company initially favored liability protections.
- Critics argue OpenAI is retroactively changing its narrative following public and media backlash.
OpenAI has officially clarified its position on Illinois Senate Bill 3444, stating that it does not support a provision that would grant developers immunity from liability for catastrophic AI outcomes. This represents a significant pivot from earlier statements in which the company appeared to endorse the bill as a way to avoid a patchwork of state-level regulations. While OpenAI now supports SB 315, which mandates third-party audits for advanced AI systems, critics have pointed out discrepancies between the company's current stance and its previous public comments. Nathan Calvin and other observers suggest the reversal is a response to public backlash rather than an original policy position. The controversy stems from a 2025 AI Action Plan submission authored by OpenAI's Chris Lehane, which explicitly sought liability safe harbors. Despite skepticism about its motivations, OpenAI's move toward supporting mandatory audits and rejecting blanket immunity marks a notable shift in its legislative engagement strategy.
OpenAI is trying to clear the air after getting caught in a bit of a flip-flop over a new law in Illinois. Originally, the company seemed to be backing a bill that would have given it a 'get out of jail free' card if its AI caused a major disaster. Now, after people started complaining, it says it never actually wanted that legal immunity and instead supports a different bill that requires independent audits. It is like a car company saying it loves safety rules only after someone pointed out it had been lobbying to be immune from lawsuits over brake failures.
Sides
Critics
Skeptical of OpenAI's timeline, arguing the company only retreated from the liability shield due to public backlash.
Defenders
OpenAI claims it supports risk reduction and audits while denying it ever truly sought immunity from catastrophic liability.
Jamie Radice, the OpenAI spokesperson who initially provided a statement supporting the approach of the Illinois bill without distancing the company from the immunity clause.
Chris Lehane, the OpenAI executive who authored the 2025 AI Action Plan, which explicitly discussed liability safe harbors.
Forecast
OpenAI will likely face increased scrutiny over its 2025 AI Action Plan as other states propose similar liability shields. Expect a push for standardized federal liability frameworks to replace the current 'patchwork' of state bills.
Based on current signals. Events may develop differently.
Timeline
OpenAI Clarifies Position
The company publicly distances itself from the immunity provision in SB 3444 and backs SB 315.
WIRED Inquiry
OpenAI spokesperson Jamie Radice supports the general approach of the Illinois bill in an email to WIRED.
AI Action Plan Proposed
Chris Lehane authors OpenAI's 2025 plan including requests for liability safe harbors.