NAB’s Top Lawyer To FCC: Leave AI Ad Rules To Congress


WASHINGTON, D.C. — The Chief Legal Officer and EVP of Legal and Regulatory Affairs at the NAB has shared his thoughts in a new blog post about the FCC’s proposed rulemaking that would require disclaimers on political ads that use AI.


In short, Rick Kaplan has told Jessica Rosenworcel and the Commission to leave that task up to Congress.

With AI reshaping the political landscape, its rise brings serious risks, Kaplan writes. These include “deepfakes,” which the Commission seeks to rein in through its regulatory oversight of broadcast media.

The NAB supports government efforts to curtail deepfakes. But Kaplan says Chairwoman Rosenworcel isn’t the one who should be leading those efforts.

An NPRM the Commissioners will consider would require broadcasters to insert a disclaimer on political ads that use AI in any form. Kaplan is not a fan, labeling the disclaimer “generic” and lacking meaningful insight for audiences. “AI is often used for routine tasks like improving sound or video quality, which has nothing to do with deception,” he says. “By requiring this blanket disclaimer for all uses of AI, the public would likely be misled into thinking every ad is suspicious, making it harder to identify genuinely misleading content.”

Worse yet, Kaplan argues that if the rule only applies to broadcasters, “political advertisers may decide to move their ads to digital platforms where these rules don’t exist.”

That is why Kaplan believes that to “truly tackle the issue of deepfakes and AI-driven misinformation, we need a solution that addresses all platforms, not just broadcast TV and radio.”

That solution sits on Capitol Hill. “Congress is the right body to create consistent rules that hold those who create and share misleading content accountable across both digital and broadcast platforms,” he says. “Instead of the FCC attempting to shoehorn new rules that burden only broadcasters into a legal framework that doesn’t support the effort, Congress can develop fair and effective standards that apply to everyone and benefit the American public.”

Meanwhile, Kaplan asserts that “deepfakes” and AI-generated misleading ads are not prevalent on broadcast TV or radio. “These deceptive practices thrive on digital platforms, however, where content can be shared quickly with little recourse,” he writes. “The FCC’s proposal places unnecessary burdens on broadcasters while the government ignores the platforms posing the most acute threat. This approach leaves much to be desired.”

Kaplan concludes, “Unfortunately, due to the FCC’s limited regulatory authority, this rule risks doing more harm than good. While the intent of the rule is to improve transparency, it instead risks confusing audiences while driving political ads away from trusted local stations and onto social media and other digital platforms, where misinformation runs rampant.”
