Facebook Criticized For Nixing Political Ad Censorship … But Why?


By David Oxenford


WASHINGTON, D.C. — In recent weeks, Facebook has been criticized for adopting a policy of not censoring advertising and other content posted on its platforms by political candidates. While Facebook apparently will review content whose veracity is challenged when posted by anyone else, it made an exception for posts by political candidates – and has received much heat from many of those candidates, including some who are currently in Congress.

In some cases, these criticisms have suggested that broadcasters have taken a different position and made content-based decisions on candidate ads. In fact, Congress itself long ago imposed, in Section 315(a) of the Communications Act, a “no censorship” requirement on broadcasters for ads by federal, state, and local candidates. Once a candidate is legally qualified and once a station decides to accept advertising for a political race, it cannot reject candidate ads based on their content. And under the reasonable access rules, which apply only to federal candidates, broadcasters must accept those candidates’ ads once a political campaign has started.

In fact, broadcasters are immune from any legal claims that may arise from the content of over-the-air candidate ads, based on Supreme Court decisions. Since broadcasters cannot censor ads placed by candidates, the Court has ruled, broadcasters cannot be held responsible for the content of those ads. If a candidate’s ad is defamatory, or if it infringes on someone’s copyright, the aggrieved party has a remedy against the candidate who sponsored the ad, but that party has no remedy against the broadcaster. (In contrast, when a broadcaster receives an ad from a non-candidate group that is claimed to be false, it can reject the ad based on its content, so it has potential liability if it does not pull the ad once it is aware of its falsity – see our article here for more information about what to do when confronted with issues about the truth of a third-party ad).

This immunity from liability for statements made in candidate ads absolves the broadcaster from having to referee the truth or falsity of political ads, which, as is evident in today’s politically fragmented world, may well be perceived differently by different people. So, even though Facebook is taking the same position in not censoring candidate ads that Congress has required broadcasters to take, should it be held to a different standard?

That decision is well above my pay grade.

Perhaps, given the differences in the nature of the platforms, different regulation is justified. On a broadcast station, if an ad is false, the station’s news department may point out the falsity of the ad, or the candidate being attacked can buy responsive ads to refute the false claims. Presumably, these corrections will reach an audience similar to the one that heard the initial ad. With online platforms, given the algorithms used to target ads at fragmented and sometimes unique audiences, responsive ads may not reach the same people. But the government, and the FCC specifically, has thus far taken a hands-off approach to Internet content regulation, reasoning that because there is no spectrum scarcity online (the scarcity rationale has always been used to justify broadcast regulation), there is no need for government content regulation. Moreover, such regulation would be suspect under First Amendment principles.

In fact, the adoption of Section 230 of the Communications Decency Act, which generally absolves Internet services from liability for content posted on their platforms by others, seems to reflect Congressional recognition that the government should be reluctant to regulate online content.

Of course, these arguments about censoring political speech may well be a reflection of our polarized political times and of the unique strains currently being put on political discourse. But, as the old maxim warns, hard cases make bad law. We don’t want the current difficult times, in the name of preserving democracy and civil political discourse, to put more strain on that discourse by making private companies (or some government agency) the arbiter of truth or falsity in political debates. That judgment has traditionally been left to the marketplace, backstopped by the defamation laws that can be employed by someone who can prove that they have been wronged by a political attack.

That process has worked well for many years, and the government should be very cautious in urging changes to that system now.


David Oxenford is a partner at Wilkinson Barker Knauer. This column originally appeared in the law firm’s Broadcast Law Blog.