An attempt to prevent states from regulating AI may be rising from the grave. Now that states are enacting more rules around the fledgling technology, AI industry boosters are again prodding the federal government to step in and preempt them.
Republican leaders in Congress are reportedly considering a measure in the National Defense Authorization Act (NDAA) that would preempt state laws on AI. (They are considering folding the push into rules to keep children safe online, Tech Detour reported.)
And the Trump administration also floated a draft executive order that would pressure states not to regulate AI, both through litigation and by withholding federal dollars, although the White House has reportedly shelved it for now.
Last summer, the Senate defeated a similar moratorium, which would have barred state-level AI legislation for a decade, in an overwhelming 99-1 vote.
That loss, however, hasn’t deterred factions of the AI industry from continuing to advocate for a revived effort as industry-backed lobbies increase spending in individual states. They say that a piecemeal set of requirements across states would stifle innovation.
State legislators are just as predictably against the idea of federal preemption; some 300 state lawmakers from both parties signed a letter opposing inclusion of the ban in the NDAA. A website created by the advocacy group Americans for Responsible Innovation has been gathering an ever-growing list of other statements against preemption, some from Republican governors and national figures on the party’s more populist wing.
What’s at stake: Not much AI regulation is coming out of Washington these days, so statehouses have taken the lead.
On New Year’s Day alone, 17 new state measures related to AI are expected to take effect. The California Legislature recently approved the country’s first law specifically regulating frontier models, and New York may take a similar step by the end of the year.
“I’m not one to say that a hodgepodge of state laws at the end of the day is ideal,” said New York Assemblyman Alex Bores, who authored an AI safety bill now sitting on the governor’s desk and, more recently, became the subject of some heavy-hitter pro-industry super PAC spending. But state lawmakers have collaborated to standardize these laws, at least somewhat.
“What stays with me is, this is such a weird framing of, do we want to preempt the states or not? You would have to then ask: Is Congress solving the problem or not? And if they’re not fixing the problem, it’s obviously something that states should take on,” Bores told Tech Brew last week. “What’s going on right now is not a push for a federal standard, it’s a push to ban states from doing anything and leave us in this spot of hoping that Congress does something,” he said.
A coming showdown: Regardless of the fate of these specific efforts, a battle over AI policy driven by big money is taking shape for next year’s midterm elections. Deep-pocketed AI companies are starting to launch enormous lobbying operations, like the $100 million super PAC Leading the Future — which is currently aimed at Bores’s congressional bid — with backing from Andreessen Horowitz, Perplexity, and OpenAI president Greg Brockman.
In the meantime, The New York Times reported last week that AI safety advocates are raising money for their own network of super PACs — political action committees set up to receive unlimited donations — and plan to support candidates who have consulted on AI regulation priorities. The activist network is attempting to raise $50 million in order to balance out Leading the Future’s reach, and could find support from execs at Anthropic, OpenAI’s more safety-friendly competitor, the NYT said.