Last week we learned more details about the Biden administration’s efforts to strong-arm Facebook and other social media companies into doing more to combat the spread of extremism and disinformation online. But what if those efforts are doomed to make the problem worse?
You wouldn’t think so from perusing the response, which comes from two broad directions. Most observers writing from the center (liberals, progressives, and the few remaining Trump-critical conservatives) have been generally supportive of the move. While there’s some uncertainty about whether the administration is going about it the right way, many in this group agree that something needs to be done to curb extremism and disinformation, and so they tend to conclude that any attempt to tackle the problem should be applauded.
Then there are those on the Trumpian right and their allies of convenience on the absolutist libertarian left (Glenn Greenwald, Matt Taibbi, and their imitators), who insist that the administration leaning on social media companies is the leading edge of authoritarianism, and perhaps even a sign that we already live in an authoritarian state where the government and big business collude in trampling basic freedoms in order to suppress dissent.
Both camps get this wrong. The Biden administration’s efforts to police extremism and disinformation online aren’t going to be effective, but neither are they evidence of incipient authoritarianism. What they are is fuel for even more extremism. That’s the paradox of politics in a culture lacking consensus: Anything the powers that be do to impose uniform standards ends up generating even more disunity.
We have a hard time seeing this because our political imaginations have been decisively shaped by visions of totalitarian control in which an all-powerful and all-pervasive government thoroughly penetrates civil society, bending it entirely to the will of the state. The reality of America in 2021 is very different.
We are a nation deeply and sharply divided — and one in which the government needs to appeal to public opinion for support to justify its actions and lend them legitimacy. Put those two facts together and we’re left with a vision of a country lacking in the consensus necessary to justify and render legitimate attempts to police the boundaries of public discussion and debate. We don’t at all agree on what counts as “extreme” or as “disinformation.” So attempts to rule some speech out of bounds only reinforce our dividedness, generating an instantaneous backlash and new spasms of extremism in response.
Do you doubt it? Consider some examples.
The easiest case would be the decisions of Twitter, Facebook, and other social media companies to ban Donald Trump’s accounts in the immediate aftermath of the Jan. 6 insurrection against congressional certification of the 2020 election results. Given the high stakes — nothing less than the peaceful transfer of power to the next president on Jan. 20 — the decision to muzzle the outgoing president was justifiable, even though millions of Trump supporters dissented from it. But of course many of those supporters were muzzled, too, when the social media platform Parler was effectively shut down during this same period by Amazon, Apple, and Google. Given that the Trump-supporting right was using Parler to organize additional acts of unrest to disrupt the presidential inauguration, this, too, may have been a justifiable emergency measure. Though it was also one undertaken by private companies acting independently. They weren’t doing the bidding of a Democratic president and the regulatory agencies he controls.
That’s why the events of this past January are the easiest case — because of the political emergency that took place that month, and because private companies were acting on their own. What the Biden administration is aiming for now is different in both respects: It is seeking to influence directly what kinds of material and people get banned from social media platforms, and it wants those standards enforced in normal times, when no acute political emergency is ongoing. On top of plans announced in June for fighting extremism, online and elsewhere, more recent White House efforts include tracking and flagging problematic social media posts, sharing that information with Facebook, and suggesting that (in the words of White House Press Secretary Jen Psaki) offenders “shouldn’t be banned from one platform and not others for providing misinformation out there.”
What counts as extremism and disinformation? Should a Trump supporter be prevented from saying on social media that he thinks Trump is the rightful president and that the Biden administration is illegitimate? That strikes me as pretty extreme. But didn’t plenty of Hillary Clinton supporters say online in the aftermath of the 2016 election that she was the rightful president and the Trump administration was illegitimate? Would this now be ruled out of line by Facebook and Twitter in the name of stamping out extremism? Consistency would seem to demand it, but wouldn’t plenty of Democrats balk at such a move? And if the social media companies didn’t apply the same standard to both sides, wouldn’t this serve as powerful evidence in favor of the right’s claims about a system rigged against it?
And how about expressions of skepticism with regard to the COVID-19 vaccines? Isn’t denying their efficacy or safety an example of spreading disinformation that could prolong the pandemic, thereby (in Joe Biden’s words) “killing people”? That seems obvious — at least until you recall that the FDA won’t be issuing full approval for the Pfizer-BioNTech vaccine until Jan. 2022, more than a year after it was authorized for emergency use and after many millions of doses have already been administered to Americans. Either those vaccinated Americans are in no danger and well protected from COVID-19, in which case the vaccine should be approved now and those raising objections to it shown to be peddling lies that social media companies might be justified in suppressing — or the FDA isn’t yet absolutely sure the vaccine is safe, in which case the skeptics seem to have a point. Which is it?
And what about political positions slightly less extreme than those expressed by the most die-hard Trump supporters and vaccine skeptics? According to a recent NPR report, conservative pundit Ben Shapiro and his website The Daily Wire are massively popular on Facebook. How popular? So popular that over the past year, “The Daily Wire received more likes, shares, and comments on Facebook than any other news publisher by a wide margin.” Moreover, “in May, The Daily Wire generated more Facebook engagement on its articles than The New York Times, The Washington Post, NBC News, and CNN combined.”
As the NPR article also notes, right-wing websites like The Daily Wire do quite well on Facebook by deliberately provoking outrage in their readers and “by only covering specific stories that bolster the conservative agenda (like negative stories about socialist countries, and polarizing stories about race and sexuality issues).” As a result, readers “come away from The Daily Wire’s content with the impression that Republican politicians can do little wrong and cancel culture is among the nation’s greatest threats.”
Is this an example of extremism or disinformation that should be policed, discouraged, or even banned? Should the Biden administration lean on Facebook to get it to make Shapiro’s content less popular through tweaks to its algorithms? I suspect some Democrats would favor this. I’m quite sure the overwhelming majority of conservatives would not — and that they would view the effort as an example of political extremism from the left, treating it as further evidence of the need to embrace more extreme positions of their own.
Paranoia? Perhaps. But then the right can point to evidence of transgender activists and their allies pressuring Amazon and other tech companies to stop selling books by authors Ryan Anderson and Abigail Shrier, both staunch critics of the positions advocated by those activists. Does the Biden administration consider these authors beyond the pale? What about banning critics of Critical Race Theory on the grounds that they spread racism, bigotry, and hate, causing harm to millions of vulnerable people? If the Biden administration doesn’t favor doing so at present, can we be sure it won’t change its mind six months, a year, or two years from now?
And what about the fate of such a system of social credit and sanction when it falls, as it inevitably would, into the hands of the opposing party after a future election? What happens when a conservative Republican begins twisting the arms of Facebook, Twitter, and other platforms to penalize the left for its own forms of extremism and disinformation?
The danger, once again, is less that we’ll see one-sided authoritarianism take hold. It’s that the effort to combat extremism and disinformation will accelerate our country’s centrifugal tendencies, generating new rounds of ever-more-intense extremism (and the disinformation that furthers its aims) in response to the very effort to stamp it out.
It might sound like a tautology, but sometimes that’s the way politics works: Enforcing a consensus against extremism requires the presence of a strong consensus about what counts as extremism. Americans lack any such consensus today. Which means the Biden administration’s efforts to police extremism are bound to backfire.