As Congress continues to battle out the details of President Donald Trump’s “One Big, Beautiful Bill,” a provision tucked inside the reconciliation package may provide a sigh of relief for the NAB and many of the television and radio operators it serves when it comes to individual state regulation of artificial intelligence – particularly in political ads.
The language, included in both House and Senate drafts of the budget, would establish a 10-year federal moratorium on state-level enforcement of AI regulations. Under the proposal, states could continue passing AI-related laws, but any attempt to enforce those rules would be blocked, effectively consolidating authority within federal agencies and leaving private industry with broad discretion over how AI is deployed.
“This is about whether states can protect consumers from emerging AI risks,” Tennessee Attorney General Jonathan Skrmetti said at a recent press event alongside Washington Attorney General Nick Brown and Senators Maria Cantwell (D., Wash.) and Marsha Blackburn (R., Tenn.). “We want America to be AI-dominant. We want to make sure that our adversaries don’t get ahead of us, but we need to make sure that in the process, we’re not leaving American consumers behind. If there’s a 10-year moratorium on state enforcement, that effectively means 10 years where we are at the mercy of the judgment of big tech.”
Several states have already moved aggressively to regulate how AI intersects with broadcasting. The debate arrives at a moment when public skepticism over the integrity of media content is mounting. According to a 2024 report by the Reuters Institute for the Study of Journalism, 72% of Americans now express concern about their ability to distinguish real from fake content, up three percentage points from the previous year.
Tennessee’s ELVIS Act, cited by Sen. Blackburn, criminalizes unauthorized AI-generated voice cloning, including applications that could affect radio hosts, commercial voiceovers, and music licensing. In New York, broadcasters must disclose when AI-generated content is used in political ads – a rule that looms especially large ahead of the 2026 midterm elections.
Other states, including California, Texas, Minnesota, New Jersey, Idaho, Indiana, New Mexico, Utah, Wisconsin, and Washington, have passed laws targeting deepfakes, requiring stations to either reject or prominently label deceptive AI-generated material. Oregon’s law goes further still, mandating disclosures for AI use across all campaign communications, not just broadcast ads.
Yet for broadcasters, the issue isn’t entirely black and white, and the NAB has walked a careful line. While supporting efforts to curb misleading content, the trade group has warned that a fragmented patchwork of state rules could overwhelm small and mid-sized operators with compliance costs and legal ambiguity.
If Congress enacts the moratorium, these state measures could be rendered effectively moot until 2035, leaving broadcasters without the legal backing many have come to rely on as they navigate a growing volume of AI-driven political and commercial content.