Facebook's long-awaited content 'supreme court' has arrived. It's a clever sham | Jeremy Lewin

Guardian Technology 17 Mar 2021 10:23

In October, Facebook unveiled its long-awaited “oversight board” – a special, semi-independent body, staffed mainly by experts on free speech and constitutional law, with the authority to make decisions about controversial content posted on Facebook’s platform.

Sometimes described as Facebook’s “supreme court”, the oversight board has been met, in the legal and academic worlds, mostly with wonder, excitement and praise. Giving predominantly legal scholars input on the content moderation of the world’s largest social media platform seems like a positive step for social media governance.

But behind the gloss, Facebook’s experiment is intended to foster anything but genuine accountability. It is a clever obfuscation offering Facebook cover to engage in socially irresponsible profit-seeking that would be publicly reviled were it more transparent.

The trick is simple. Facebook faces a problem of two-sided economic incentives: dangerous and socially objectionable content is genuinely valuable to its bottom line, but so is the public perception that it is proactively committed to maintaining a socially responsible and safe community. It designed the oversight board to escape this double-bind. Oversight by a legalistic body with the appearance of neutrality earns Facebook public goodwill and deflects blame for lax content moderation. But in designing the structure of the body itself, Facebook has virtually ensured certain financially beneficial outcomes: maximum content, even the dangerous and harmful, left online. The result is a win-win for Facebook. The platform keeps valuable content while heaping social culpability on an external body. Already, the board is showing its true colors.

The public already recognizes the harm of lax moderation and is demanding stricter enforcement. Even before the 2020 election "Big Lie" and the ensuing violent insurrection, 78% of American adults held platforms solely or partially responsible for the spread of false and offensive content on their sites, and 87% thought platforms at least sometimes (65% "always") have a duty to take down false content. Given that mandate, a clear majority considers platforms "not tough enough" in content moderation. Most critically, this is not cheap talk by the public; it has already begun to affect Facebook's bottom line through widespread advertiser boycotts, user defections, and regulatory and legal scrutiny.

The board is designed to mirror an Anglo-American appellate court and imports public law principles. Almost all of its members are constitutional or human rights lawyers. Three of its four co-chairs are constitutional lawyers; two are from America, currently home to the most speech-protective jurisprudence in the history of the world. Conspicuously absent are scientists or economists; Facebook wants the benefit of speech-protective legal doctrines, not a quantification of the externalities of harmful speech.

The effects of this asymmetry go beyond the obvious. Many observers are tracking how often the board disagrees with Facebook, treating this as a critical indicator of independence. But if the board considers only content Facebook has already removed, it can assert its "independence" only by forcing Facebook to restore that content, which dovetails with Facebook's financial interests. So we shouldn't be terribly surprised that Facebook has been receptive to the board's early assertions of independence. Doing so is the ultimate expression of the win-win for Facebook: it restores valuable content while simultaneously bolstering the narrative that it is committed to independent oversight.

The decisions, particularly the medical misinformation opinion, read like a caricature of American constitutional law. That opinion offers two principal justifications, both referencing foundational first amendment principles: vagueness in the relevant Facebook policy, and insufficiently "imminent" risk posed by the misinformation. To nobody's surprise, constitutional lawyers placed in a court-like institution are applying familiar legal norms.

But their invocation here is specious. "Imminence" is a fluid standard developed to prevent political critics from being jailed for harmlessly criticizing the government; that notion has been ham-handedly applied here even though the FDA has already attributed foreseeable death and serious injury to off-label Covid-19 treatment with exactly the same drug. And vagueness challenges, which guard against disparate enforcement, can be so broad that the supreme court recently affirmed that speakers whose conduct can constitutionally be regulated, like the poster here, cannot raise them. Nor does the board place any value on public sentiment; 85% of Americans believe platforms should never permit misleading medical information.

Accordingly, particularly in its most contemporary form, this jurisprudence has evolved to become remarkably protective of speech and constraining of the state. If applied to Facebook as though it were a state actor, the familiar constitutional categories will inevitably produce only one result: markedly less content moderation. Although it may have chosen a neutral arbiter in the strictest sense of the term, by choosing Anglo-American free speech public law as its framework, Facebook has all but selected the outcomes itself.
