The White House issued a new executive order on December 11, 2025, calling for a unified federal policy framework for artificial intelligence (AI) and directing multiple agencies to challenge state-level AI laws viewed as burdensome or inconsistent with national priorities. The executive order represents a significant executive branch-led shift in U.S. AI governance and signals increased federal involvement in areas that have largely been shaped by state legislatures over the past several years.
The move comes as businesses continue to face fragmented AI and data privacy requirements across the country. The new executive order seeks to reduce that patchwork of state regulation and ease compliance constraints, or as the president framed it, "you can’t expect a company to get 50 approvals every time they want to do something."
A Move Toward a Single National AI Standard
The executive order attempts to promote U.S. leadership in AI through a "minimally burdensome" national regulatory framework. The Trump administration asserts that inconsistent state statutes — particularly those focused on algorithmic discrimination, transparency, and model disclosures — raise compliance costs, threaten innovation, and may conflict with constitutional protections.
Consistent with trends highlighted in our earlier alerts, the order indicates renewed focus on federal preemption and a growing belief that divergent state requirements complicate governance strategies for companies operating in multiple jurisdictions. The order focuses primarily on limiting state authority rather than creating new prescriptive obligations at the federal level, and it could prompt court challenges from states questioning the administration’s authority to curtail state regulation.
The executive order draws parallels to the early internet era, when the Clinton administration argued that the government should avoid regulating the internet in a way that would stifle innovation, touting the principle of "do no harm." While the U.S. has clearly dominated the internet era, many critics argue that the lack of regulation has contributed to many of the social harms experienced today. One distinct difference between the two eras is whether the technology is viewed as distributing or concentrating power. The internet was viewed as a tool of political democratization, with the Clinton administration famously saying that trying to regulate the internet was like "trying to nail Jell-O to the wall." AI, by contrast, is framed as a national security priority, with countries asserting a right to AI sovereignty.
Federal Agency Activity: Expanded Roles in Shaping AI Governance
The executive order assigns new responsibilities across several federal agencies aimed at curbing what it characterizes as onerous state regulation:
- Department of Justice — AI Litigation Task Force: The Attorney General must establish an AI Litigation Task Force within 30 days, with the sole mandate to challenge state AI laws that may impede interstate commerce, conflict with federal law, or otherwise run counter to the national AI policy articulated in the executive order. This represents a notable escalation in federal intervention in state-level AI policymaking.
- Department of Commerce — Evaluation of State AI Laws; BEAD and Grant Conditions: Within 90 days, Commerce must publish an evaluation identifying state AI laws that require AI systems to alter "truthful outputs," compel disclosures that raise constitutional concerns, or otherwise burden innovation. Commerce must also issue a policy notice tying access to certain Broadband Equity, Access, and Deployment (BEAD) program funds to a state’s compliance with the executive order’s national AI policy; states with laws deemed inconsistent may be ineligible for non-deployment portions of the program. Other federal discretionary grants may also be conditioned on states agreeing not to enforce AI laws deemed conflicting.
- Federal Communications Commission — Potential Federal Reporting/Disclosure Standard with Preemption: The FCC must consider establishing a federal reporting and disclosure standard for AI models that would expressly preempt inconsistent state reporting and disclosure requirements.
- Federal Trade Commission — Policy Statement on State AI Mandates and UDAP: The FTC must issue a policy statement addressing whether state AI regulations — particularly those that restrict or mandate AI model outputs and related disclosures — implicate or conflict with federal standards governing unfair or deceptive acts or practices, including whether such state requirements are inconsistent with the national policy direction in the executive order.
Federal Legislative Direction: Framework for a Preemptive AI Statute
The order instructs the special advisor for AI and crypto and the assistant to the president for science and technology to develop legislative recommendations for a national AI framework. The proposal must include federal preemption of state AI laws that conflict with the order’s policy goals.
At the same time, the administration indicates that some areas should remain within state purview, including:
- Child safety protections.
- AI compute and data center infrastructure (with limited exceptions).
- State government procurement and use of AI.
This signals that any future federal AI legislation may take a hybrid approach, preempting certain operational or commercial requirements while preserving state authority in limited domains.
What This Means for Businesses
The executive order reinforces this administration’s trend toward re-evaluating regulatory structures to promote innovation, which is consistent with signals from both the U.S. and the European Union. For companies, several practical implications emerge:
- Short-term uncertainty is likely as federal agencies develop reports, initiate proceedings, and begin evaluating whether particular state laws should be challenged.
- State AI laws, particularly those addressing algorithmic discrimination, bias audits, or model transparency, may face increased legal scrutiny or become unenforceable during federal grant periods.
- Compliance strategies may need to adapt as the federal government signals a preference for centralized AI governance, even though substantive federal safety or accountability standards remain limited.
- Existing privacy and cybersecurity rules remain in effect. The order does not modify planned federal activity on consumer privacy, commercial surveillance, financial data rights, or sector-specific data-security updates.
Organizations should continue monitoring state AI developments, ongoing litigation, and federal activity as the landscape continues to shift. Building agile governance frameworks and conducting regular reviews of automated decision-making tools can help reduce compliance risk amid rapid regulatory and administration changes.
It remains to be seen how states will react to the new order, and companies should watch for states challenging it in court.
For more information, please contact us or your regular Parker Poe contact.