On March 20, 2026, the White House released its National Policy Framework for Artificial Intelligence. Every major tech publication ran headlines about the US finally taking AI regulation seriously.
They got the story exactly backwards.
This framework does not regulate AI companies. It protects AI companies from regulation — by gutting the only laws that were actually working, blocking states from passing new ones, and replacing meaningful oversight with a system the industry itself will control.
What the Framework Actually Does
The most important provision — buried in the technical language — is federal preemption of state AI laws. In plain English: if your state passed a law to protect you from AI systems, this framework can make that law unenforceable.
California’s Transparency in Frontier Artificial Intelligence Act. Texas’s Responsible AI Governance Act. Both went into effect on January 1, 2026. Both are now under threat of being wiped out by federal preemption before a single enforcement action takes place.
The framework doesn’t replace these state protections with stronger federal ones. It replaces them with industry-led standards — meaning the AI companies write the rules for AI companies.
This is regulatory capture so complete it doesn’t even try to hide itself.
Who Wrote This Policy?
The AI industry is pouring millions of dollars into the 2026 midterms. Interest groups funded by AI industry leaders, the same people whose companies would benefit from deregulation, are bankrolling congressional races specifically to influence how AI gets governed.
The framework was not written by safety researchers. It was not written by labor economists studying displacement. It was not written by civil rights organizations concerned about algorithmic discrimination.
It was written in an environment where the people with the most to gain from weak regulation have unprecedented access to the people writing the regulations. The result looks exactly like what you’d expect.
The One Thing It Does Protect
To its credit, the framework does contain a provision limiting the federal government’s ability to “coerce AI providers to restrict or alter content for partisan or ideological reasons.”
This is the one protection in the entire document that is genuinely valuable — and it’s there because the tech companies wanted it, not because consumer advocates demanded it. Silicon Valley has been furious at content moderation pressure from both parties for years. They got their carve-out.
The government can’t pressure AI companies to censor speech. But those same companies can still slash their workforces, discriminate through algorithmic hiring systems, and deploy surveillance technology, all under a framework that explicitly relies on the industry to regulate itself.
Why States Were the Last Line of Defense
Federal inaction on tech regulation is not new. Congress spent a decade failing to pass meaningful privacy legislation while states like California, Virginia, and Colorado built their own frameworks. The same pattern was emerging with AI.
States move faster. They’re closer to constituents. They face real political consequences when residents get harmed by technology that has no accountability structure. State AGs have been the most aggressive enforcers of consumer protection in the tech space for twenty years.
Preempting state AI laws doesn’t just slow regulation. It eliminates the only regulatory experiments that were actually producing results — and does so at exactly the moment when AI systems are making consequential decisions about your job application, your loan, your medical care, and your bail hearing.
What Comes Next
More investment. More deployment. More “industry-led standards” that protect companies from liability. More congressional hearings where senators ask CEOs to explain what a large language model is while those same CEOs’ lobbyists draft the follow-up legislation.
The AI industry has now successfully captured the regulatory process at the federal level. The window for meaningful oversight is closing.
Call it what it is: not a framework for governing AI, but a framework for protecting the people who profit from it.