In August, the Federal Election Commission joined “the AI regulation bandwagon” when it began a process to potentially regulate AI-generated deepfakes in political advertisements ahead of the 2024 election, notes Bronwyn Howell, a nonresident senior fellow at the American Enterprise Institute, a Washington, D.C. think tank. Advocates have welcomed the move, Howell notes, claiming it would “safeguard voters against a particularly insidious form of election disinformation.”
Howell notes that to address similar concerns—and “up the ante” in responsible advertising—Google announced plans earlier this month to require verified election advertisers to make “clear and conspicuous” disclosures when their advertisements contain “synthetic content that inauthentically depicts real or realistic-looking people or events” starting in November.
That’s a big deal, as the disclosure must be placed where users are likely to notice it. “This policy will apply to image, video, and audio content in advertising, but not to unpaid content uploaded to the site,” Howell says. In her view, the requirement is tied to broader efforts to reduce the risks posed by artificial intelligence in political advertising, efforts she argues expose deeper flaws in how electoral advertising is regulated.
In a thought piece released in late September by AEI, Howell notes that Google has led the field in responsible online political advertising since 2020, requiring all political advertisers to undergo an identity verification process before their advertisements are accepted for circulation.
The verification requirement now applies to political advertising in the United States and several other jurisdictions, including European Union member states and the U.K. “While so far rival firms such as Meta and X (formerly Twitter) have not responded, it seems likely that they too will voluntarily subject their election advertising to similar obligations if they wish to take a share of this lucrative market in the future,” Howell says.
However, she adds, the success of both the FEC’s and Google’s endeavors depends on being able to reliably detect when an advertisement has been created by AI in the first place. “It also begs the question of what the objective of election advertising is, and whether other important matters regarding the communication of election ‘misinformation’ are being overlooked by homing in on AI content specifically,” Howell says.
“First, the perennial problem with AI-generated content is that it is now arguably so good that it is difficult to distinguish it from truly original content, even using AI detection apps,” she adds, noting that monitoring and enforcement may be possible only “after the fact,” once the advertisement has already circulated and the “wronged” individuals have alerted the relevant authorities.
“In the absence of clear definitions, it may be difficult to draw the boundaries between what is acceptable (or requiring declaration) and what is not,” Howell says. “Google is not banning AI use in political advertisements outright.”
Exceptions include “synthetic content altered or generated in a way that’s inconsequential to the claims made in the ad.”
For Howell, AI editing techniques such as image resizing, cropping, color or defect correction, and background edits are acceptable. “So is it necessary to declare the manipulation of a video of (say) an octogenarian electoral candidate to look and sound 25 years younger when the advertisement discusses a policy point unrelated to the candidate’s age or fitness for office? Or only when the candidate is making claims of the ability to fulfill the elected role for the duration of the term?” she asks.
Second, Howell wants to know what actually constitutes “fake” electoral advertising in the first place. “Society has long accepted that what is presented in advertising of all sorts is not necessarily true,” she says. “If the intention is to remove all forms of disinformation from political advertising, not just that associated with AI, then perhaps requiring other declarations may be useful. For example, maybe political advertisers should declare whether the people featured in advertisements are actual voters expressing their views (or displaying their apparent happiness with the state of affairs created or promised by the candidate) or actors paid for their roles. Similarly, it would be valuable for voters to know the source of all audio or video content used in the advertisement, and that appropriate rights to use it have been obtained (e.g. explicit permissions obtained, use gifted, or royalties paid).”
Howell concludes that, based on past experience, “breaches of these intellectual property norms in political advertising are only detectable after the fact when harmed parties raise concerns.” In her view, “Ex-ante consideration would avoid subsequent embarrassment and associated bad publicity if things haven’t quite gone according to plan.”
She points to New Zealand Minister of Finance Grant Robertson, who in 2019 released publicity photographs of himself holding a copy of his first “Wellbeing Budget.” The cover photograph was a stock photo for which usage rights had been paid. Alas, the “happy family” it depicted, dissatisfied with their prospects in New Zealand, had already emigrated to Australia by the time the Wellbeing Budget was announced.
“Rather than just focusing on AI, requiring truth and disclosure of all political advertising sources should be the objective,” Howell says.
Dr. Bronwyn Howell is a faculty member of the Wellington School of Business and Government at Victoria University of Wellington in New Zealand, a senior research fellow at the Public Utilities Research Center at the University of Florida, a board member and secretary of the International Telecommunications Society, an associate editor of the journal Telecommunications Policy, and a research principal at the Institute for Technology and Network Economics.