Football revealed how hopeless content moderation still is
Yesterday's nail-biting penalty shootout that handed victory to the Italians over the English in the Euro 2020 final triggered a torrent of hate speech on social media.
In doing so, it exposed lingering deficiencies in social media content moderation: tweets and newsfeed posts containing blatantly racist language, emojis and images had to be flagged for removal by other users rather than by automated systems.
The vitriol centered on Marcus Rashford, Jadon Sancho and Bukayo Saka, the three Black players whose penalty kicks in the shootout were missed or saved by Italian goalkeeper Gianluigi Donnarumma.
As Wembley Stadium emptied of disappointed English football fans, the reaction spread through social media networks. Twitter reported that it had removed around 1,000 tweets flagged as inappropriate by its automated systems or its human content moderation team.
But Twitter users complained that racist tweets they had reported were left up for everyone to see. One Instagram user repeatedly reported accounts for posting strings of monkey emojis on the profiles of Black players, only for Instagram's automated systems to respond that the comment "probably doesn't go against our community guidelines".
All of the major social media platforms have community use policies that prohibit hate speech. But their policing of it, particularly around sensitive events likely to bring out the haters, has repeatedly been shown to be lacking. The United Kingdom has a particularly entrenched problem with racism in football culture. But visiting cricket teams have also faced racist abuse from New Zealand fans, which in some cases has earned fans lifetime bans from venues. It seems that sport can bring out the worst in people - particularly if they are on the losing side.
New Zealand knows the frustration of lackluster content moderation from the live-streaming of the 2019 Christchurch mosque shooting - content moderation gone wrong in the extreme. The Government is now in the process of passing hate speech laws, amending sections of the Human Rights Act to more specifically prohibit speech likely to incite hatred of a particular group or stir up racial disharmony.
Hate speech law reform
Our Harmful Digital Communications Act 2015 already covers the types of offensive tweets that appeared on social media yesterday, had they been sent by local users. The debate has raged over whether changes to the Human Rights Act to tighten up on hate speech will effectively criminalise "micro-aggressions" and strongly worded opinion, essentially eroding free speech.
But whatever the final legislation looks like, it probably won't seek to hold social media companies responsible for lapses in policing their platforms. In April, numerous top athletes and sports teams boycotted the major social media platforms over what they see as repeated failures to stop racist attacks on players.
Now the UK Government is seeking to crack down on tech companies hosting offensive content with its Online Safety Bill. It would impose large fines - up to £18 million or 10% of a company's global annual revenue, whichever is higher - for breaches of the legislation.
That would certainly motivate the likes of Facebook and Twitter to invest in their systems to remain compliant. But with those intent on venting their hatred turning to seemingly innocuous emojis and images to make their point, social media companies face an escalating fight to stay ahead of rogue users.
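To see why emoji-based abuse slips past moderation, consider a minimal sketch of a keyword-based filter. This is a hypothetical illustration, not any platform's real system: the blocklist entries, function names and the three-emoji threshold are all assumptions made for the example.

```python
# Hypothetical sketch: a naive keyword blocklist versus emoji-based abuse.
# Real moderation systems are far more sophisticated; this only illustrates
# why text-only filters miss abuse expressed purely through emojis.

BLOCKLIST = {"exampleslur"}  # placeholder entry, not a real blocklist

def naive_filter(text: str) -> bool:
    """Flag a comment only if it contains a blocklisted keyword."""
    return any(word in BLOCKLIST for word in text.lower().split())

def emoji_aware_filter(text: str) -> bool:
    """Also flag runs of emojis used as coded racist abuse, such as
    monkey emojis posted on a player's profile. Counting emojis with no
    context is crude and prone to false positives - real systems would
    need context signals (target, history, reports) as well."""
    targeted = {"\U0001F412", "\U0001F435", "\U0001F98D"}  # monkey, monkey face, gorilla
    emoji_run = sum(1 for ch in text if ch in targeted)
    return naive_filter(text) or emoji_run >= 3

comment = "\U0001F412\U0001F412\U0001F412"  # three monkey emojis, no keywords
print(naive_filter(comment))       # the keyword filter misses it entirely
print(emoji_aware_filter(comment)) # the emoji-aware check flags it
```

The gap between the two functions is the moving target the article describes: each time platforms encode a new abuse pattern, rogue users shift to another symbol or image the filter doesn't yet recognise.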
Ultimately, those who put their real names to such abuse tend to pay the price. Two cases emerged in the UK over the last 24 hours of English football fans facing serious repercussions in their professional lives for their racist responses to England's loss.
Their response? Both claimed their accounts had been hacked and the tweets sent without their knowledge.