The Trump administration on Friday laid out a legislative framework for a single national AI policy in the United States. The framework would centralize power in Washington by preempting state AI laws, potentially undercutting the current surge of efforts from states to regulate the use and development of the technology.
“This framework can only succeed if it is applied uniformly across the United States,” reads a White House statement on the framework. “A patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race.”
The framework outlines seven key objectives that prioritize innovation and scaling AI, and proposes a centralized federal approach that would override stricter state-level regulations. It places significant responsibility on parents for issues like child safety, and lays out relatively soft, nonbinding expectations for platform accountability.
For example, it says Congress should require AI companies to implement features that “reduce the risks of sexual exploitation and harm to minors,” but doesn’t lay out any clear, enforceable requirements.
Trump’s framework comes three months after he signed an executive order directing federal agencies to challenge state AI laws. The order gave the Commerce Department 90 days to compile a list of “onerous” state AI laws, potentially putting at risk states’ eligibility for federal funds like broadband grants. The agency has yet to publish that list.
The order also directed the administration to work with Congress on a uniform AI law. That vision is now coming into focus, and it mirrors Trump’s earlier AI strategy, which focused less on guardrails and more on promoting companies’ growth.
The new framework proposes a “minimally burdensome national standard,” echoing the administration’s broader push to “remove outdated or unnecessary barriers to innovation” and accelerate AI adoption across industries. It’s a pro-growth, light-touch regulatory approach championed by “accelerationists,” one of whom is White House AI czar and venture capitalist David Sacks.
While the framework nods to federalism, the carve-outs for states are relatively narrow, preserving only their authority over general laws like fraud and child protection, zoning, and state use of AI. It draws a hard line against states regulating AI development itself, which it says is an “inherently interstate” issue tied to national security and foreign policy.
The framework also seeks to prevent states from “penaliz[ing] AI developers for a third party’s unlawful conduct involving their models,” a key liability shield for developers.
Missing from that framework are any gestures toward liability regimes, independent oversight, or enforcement mechanisms for potential novel harms caused by AI. In effect, the framework would centralize AI policymaking in Washington while narrowing the space for states to act as early regulators of emerging risks.
Critics say states are the laboratories of democracy and have been quicker to pass laws around emerging risks. Notably, New York’s RAISE Act and California’s SB-53 seek to ensure large AI companies have, and adhere to, safety protocols that are publicly documented.
“White House AI czar David Sacks continues to do the bidding of Big Tech at the expense of normal, hardworking Americans,” said Brendan Steinhauser, CEO of The Alliance for Secure AI. “This federal AI framework seeks to prevent states from legislating on AI and provides no path to accountability for AI developers for the harms caused by their products.”
Many in the AI industry are celebrating this direction because it gives them broader latitude to “innovate” without the specter of regulation.
“This framework is exactly what startups have been asking for: a clear national standard so they can build fast and scale,” Teresa Carlson, president of General Catalyst Institute, told TechCrunch. “Founders shouldn’t have to navigate a patchwork of conflicting state AI laws that impede innovation.”
Child safety, copyright, and free speech
The framework was issued at a moment when child safety has emerged as a central flashpoint in the debate over AI. Certain states have moved aggressively to pass laws aimed at protecting minors and placing more responsibility on tech companies. The administration’s proposal points in a different direction, placing greater emphasis on parental control than on platform accountability.
“Parents are best equipped to manage their children’s digital environment and upbringing,” the framework reads. “The Administration is calling on Congress to give parents tools to effectively do that, such as account controls to protect their children’s privacy and manage their device use.”
The framework also says the administration “believes” that AI platforms should “implement features to reduce potential sexual exploitation of children and encouragement of self-harm.” While it calls on Congress to require such safeguards and affirms that existing laws, including those banning child sexual abuse material, should apply to AI systems, the proposal employs qualifiers like “commercially reasonable” and stops short of laying out clear stipulations.
On the subject of copyright, the framework attempts to find a middle ground between protecting creators and allowing AI systems to be trained on existing works, citing the need for “fair use.” That kind of language mirrors arguments AI companies have made as they face a growing number of copyright lawsuits over their training data.
The main guardrails Trump’s AI framework does outline involve ensuring “AI can pursue truth and accuracy without limitation.” Specifically, it focuses on preventing government-driven censorship, rather than platform moderation itself.
“Congress should prevent the United States government from coercing technology providers, including AI providers, to ban, compel, or alter content based on partisan or ideological agendas,” the framework reads. It also instructs Congress to provide a way for Americans to pursue legal redress against government agencies that seek to censor expression on AI platforms or dictate information provided by an AI platform.
The framework comes as Anthropic is suing the federal government for allegedly infringing on its First Amendment rights after the Department of Defense (DOD) labeled it a supply-chain risk. Anthropic argues that the DOD is designating it as such in retaliation for not allowing the military to use its AI products for mass surveillance of Americans or for making targeting and firing decisions in autonomous lethal weapons. Trump has referred to Anthropic and its CEO Dario Amodei as “woke” and a “radical leftist.”
The framework’s language, which emphasizes protecting “lawful political expression or dissent,” appears to build on Trump’s earlier executive order targeting “woke AI,” which pushed federal agencies to adopt systems deemed ideologically neutral.
It’s unclear what qualifies as censorship versus standard content moderation, so such language could make it difficult for regulators to coordinate with platforms on issues like misinformation, election interference, or public safety risks.
Samir Jain, vice president of policy at the Center for Democracy &amp; Technology, pointed out: “[The framework] rightly says that the government shouldn’t coerce AI companies to ban or alter content based on ‘partisan or ideological agendas,’ yet the Administration’s ‘woke AI’ Executive Order this summer does exactly that.”
