In August, the Federal Election Commission joined “the AI regulation bandwagon” when it began a process that could lead to regulation of AI-generated deepfakes in political advertisements ahead of the 2024 election, notes Bronwyn Howell, a nonresident senior fellow at the American Enterprise Institute, a Washington, D.C., think tank. Advocates have welcomed the move, Howell says, claiming it will “safeguard voters against a particularly insidious form of election disinformation.”
To address similar concerns and “up the ante” on responsible advertising, Howell notes, Google announced plans earlier this month to require verified election advertisers, starting in November, to make “clear and conspicuous” disclosures whenever their advertisements contain “synthetic content that inauthentically depicts real or realistic-looking people or events.”
That’s a big deal: the disclosure must appear where users are likely to notice it. “This policy will apply to image, video, and audio content in advertising, but not to unpaid content uploaded to the site,” Howell says. In her view, the policy is tied to broader efforts to reduce the risks artificial intelligence poses in political advertising, efforts that she argues expose deeper flaws in how electoral advertising is regulated.