The online abuse the England football team received after the Euro 2020 final has pushed many people towards a drastic proposed remedy to stop it happening again: requiring users to give social media companies their legal identification.
Commentators like Piers Morgan, industry professionals, and ordinary users have advocated for this in the belief that, if people were forced to take public responsibility for what they write online, our experiences would be better.
Sociologists and digital rights campaigners have pushed back against this, however, for a number of reasons: many platforms already enforce real-name policies, the risk of data breaches by malicious actors is too great, and racism is a problem embedded in society but encouraged by social media algorithms.
“This focus on ‘real-name policies’ is obscuring the reality that racism is not a byproduct of some people’s access to anonymity online. In fact, many people post racist and abusive material on the internet from accounts that feature their name and images of them, so clearly this issue is much bigger and more complex than some of the conversations about online anonymity suggest”, Dr Francesca Sobande, a lecturer at the University of Cardiff who specialises in digital culture and Black identity, told The Independent.
Facebook, the biggest social network in the world, has used a ‘real-name’ policy since 2015. Initially, it required users to hand over government IDs, but it had to adapt its rules after LGBTQ and Native American activists said that using their real names put them in danger. Facebook now says that users can verify their identity with “many forms of non-legal identification”, such as bills or a library card.
“It’s vital to acknowledge the fact that some people seek to participate in digital spaces in anonymous ways because they are at great risk as survivors of violence and abuse, and as people who face dangers related to racism, sexism, misogyny, xenophobia, homophobia, transphobia, and other forms of oppression”, Dr Sobande adds. “This begs the bigger question: why do people feel emboldened to perpetuate such abuse?”
For Katherine Cross, a sociologist and PhD candidate at the University of Washington, it is not anonymity on social media but rather deindividuation and disassociation that help promote online abuse.
An oft-repeated mantra, albeit one that’s being challenged, is that social media is not real life. On Facebook, Instagram, TikTok, and Twitter, we all perform a certain type of behaviour based on the algorithmic requirements of that platform – and that performance creates a distance and disassociation between what is said online and offline.
“We may perceive ourselves as lost in a crowd when we harass someone online, and thus the ready visibility of our legal identity is of no consequence. Or we may see our actions as harmless; it’s just words, just venting, just a joke, et cetera. We may fail to see the humanity of our interlocutor as well, viewing them as mere pixels on a screen, the avatar of everything we despise, or a distant celebrity”, Cross told The Independent.
“Adding bigotry into the mix makes things even more toxic, and outright abuse more inevitable. The English footballers are targets of a harassment campaign; individual participants feel emboldened because of the online crowd they’re surrounded by, less because of their anonymity.”
First Amendment protections for anonymity originated in Jim Crow-era court cases involving southern attempts to force the NAACP to disclose their membership lists. They had *very* good reasons to want to stay anonymous.
— Jeff Kosseff (@jkosseff) July 13, 2021
But the internet is not truly anonymous anyway; Facebook, for instance, gathers a vast amount of data about you from its own app, as well as from Instagram and WhatsApp. It already holds enough information to identify anyone who posts racist messages – and much more besides.
The company knows what you are interested in; it knows your location; it knows all the posts you’ve shared, the messages you’ve typed and then deleted, and many more facts hidden behind its algorithm. Twitter, too, can identify users with near-100 per cent accuracy from their metadata alone – the data that describes other data, such as the geolocation of a tweet.
That is not to mention the amount of personal information that can be revealed by the infrastructure of the internet itself, even by something as small as an email or IP address – and adding more data into the mix only increases the severity of privacy breaches, such as the one that exposed the personal information of 530 million Facebook users, and about which Facebook refused to notify those affected.
Advocates of online identification, however, point to the ability of users to hide themselves digitally. “While social media platforms – and the law enforcement agencies they are increasingly working with – will have access to user IP addresses, the widespread availability of VPN software that masks a computer’s IP address means this is far from a silver bullet”, said Yuelin Li, the Vice President of Strategy for Onfido, an identity verification and security company.
“Depending on how social media platforms want to embrace identity verification, or indeed retain some level of anonymity … [there could be] different tiers of accounts, from fully anonymous, to real person, to verified real person. Each user can then decide what level of access they want to the platform using these classifications.”
Allan Dunlavy, a privacy lawyer and partner at Schillings, is also an advocate for the move. “A requirement that social media companies obtain and hold real proof of identification for their users would allow the police to hold people responsible for criminal acts online”, he said.
“In autocratic states we have seen a number of instances where anonymous speech has played a role in advancing movements for democracy and this is an important function. There is though no general right to free anonymous speech and our courts are more than capable of reviewing requests for user identification data and ensuring that these requests are proper and serve a legitimate purpose”.
But for critics, the risks far outweigh the rewards. “When people say to me, ‘It’s no big deal, I have nothing to hide’, my response is: ‘hand me your phone, pin code, password to [your] email and a copy of your passport and let me do what I want’,” says Yassmin Abdel-Magied, a writer and broadcaster who trained as a mechanical engineer, and has been subject to torrents of abuse on social media.
“If you don’t feel comfortable doing that, why do you feel comfortable handing everything over to a private company who has not earned our trust … and what happens when someone who wants to destroy your life – or is just a bad faith actor – gets their hands on all your info? It only takes one bad apple to really screw things up.”
If social media companies prove unable to moderate themselves effectively – something critics have often argued – the government plans to step in with legislation, such as the Online Harms bill, to push them further, with the threat of huge fines. Some companies have already attempted to make changes to encourage better behaviour on their platforms.
“I think something that’s less controversial [than real name policies] is that social media companies should concentrate more on promoting the quality of discourse on their platforms, rather than just the quantity. I like that Twitter encourages people to read an article before tweeting it, and that Instagram has tried algorithms that encourage people to think twice before sending bullying messages,” David Stillwell, a reader in computational social science at Cambridge University, told The Independent.
Even as they promote healthier conversations with new features, however, other more hidden parts of social networks could encourage racism and other problematic content. The algorithms themselves, many people have pointed out, can also be used to promote extremism – unless the companies themselves intervene to diminish it. It has been alleged that Facebook knew its platform encouraged polarising content but deemed proposals to change it “antigrowth”, so the research was reportedly shelved.
Whether such legislation could push social media companies to tackle this problem properly remains to be seen, but given the criticism surrounding how governments in the UK and US are dealing with racism, experts are not hopeful.
“It seems almost impossible to imagine an effective legislative solution emerging from Westminster or Washington right now, certainly not one that meaningfully addressed racism”, Cross says.
“Whatever else may be said of individual politicians, it is abundantly clear that governments on both sides of the Atlantic are reluctant to confront racism and other forms of prejudice throughout society, especially with so many of their members having built their careers on that prejudice … that should be borne in mind considering the intimate relationship between government and big business in both the US and UK”.