
ITP Techblog

Brought to you by IT Professionals NZ

Tech giants fail to stop spread of Christchurch video

Sarah Putt, Contributor. 21 March 2019, 7:04 am

Social media giants have been strongly condemned in New Zealand and around the world for failing to prevent the video taken by the alleged gunman during Friday's Christchurch mosque attacks from going viral.

The alleged gunman, whose acts of terror led to the deaths of 50 people in Christchurch, live-streamed the events for 17 minutes on Facebook. According to Facebook, fewer than 200 people watched it as it was live-streamed, and by the time it was removed about an hour later, fewer than 4,000 people had seen it.

But that was enough time for the video to be copied and to spread virally on YouTube (owned by Google), Twitter, Facebook and other platforms. Within 24 hours there were 1.5 million attempts to upload it to Facebook alone; 1.2 million were blocked at upload, meaning 300,000 copies made it onto a platform that boasts over two billion monthly users. At the same time YouTube had reportedly assembled a war room and was removing tens of thousands of videos, with uploaders evading detection by making small modifications such as adding watermarks and logos or altering the size of the clips.

New Zealand telcos were quick to act on Friday and, in an unprecedented move, worked together to block sites hosting the video. Spark, Vodafone and 2degrees have since sent an open letter to the tech giants calling on them to find better ways to stop the spread of violent and objectionable videos. At the same time, advertisers and fund managers in New Zealand are rethinking their spend on, and investment in, social media networks. Yesterday the New Zealand Superannuation Fund, ACC, the Government Superannuation Fund Authority, the National Provident Fund and Kiwi Wealth put out a joint statement saying the social media companies should "fulfil their duty of care to prevent harm to their users and to society."

On Tuesday Chief Censor David Shanks officially banned the video, meaning anyone who shares the clip can be fined up to $10,000 or jailed for up to 14 years, and one man has since appeared in court on charges of distributing footage of one of the mosque shootings. Privacy Commissioner John Edwards says Facebook should notify police of the account names of people who have shared the content.

Justice Minister Andrew Little says that Facebook has contacted the New Zealand Government directly and he expects talks with senior members of Facebook shortly. But what can, or should, be done to prevent this video, and videos like it, from being shared?

Facebook, Twitter and YouTube maintain a combined database of terrorist content in which each item is assigned a digital fingerprint, called a "hash", that detects visual similarities and automatically prevents matching content from being uploaded. But that did not seem to make a difference on Friday.
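The article does not say which fingerprinting algorithm the shared database uses, but a minimal sketch of one common perceptual hashing technique, the "average hash", illustrates the general idea. Everything below is an assumption for illustration only: the function names, the matching threshold and the use of the Pillow imaging library are not a real platform API.

```python
from PIL import Image  # assumes the Pillow library is installed


def average_hash(image: Image.Image, hash_size: int = 8) -> int:
    """Compute a simple perceptual 'average hash' of an image.

    Shrinks the image to hash_size x hash_size in grayscale, then sets
    one bit per pixel: 1 if the pixel is brighter than the mean, else 0.
    Visually similar frames produce hashes that differ in few bits.
    """
    small = image.convert("L").resize((hash_size, hash_size))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two hashes."""
    return bin(a ^ b).count("1")


# Hypothetical matching step: compare an upload's hash against a
# database of hashes of known banned content. The threshold is an
# illustrative value, not a real platform setting.
BLOCK_THRESHOLD = 10


def should_block(upload_hash: int, known_hashes: list[int]) -> bool:
    return any(hamming_distance(upload_hash, h) <= BLOCK_THRESHOLD
               for h in known_hashes)
```

Because the hash captures coarse visual structure rather than exact file bytes, a straight re-upload matches even after re-encoding, while heavier edits such as added watermarks, logos or resized frames can push the distance past the threshold, which is consistent with how modified copies reportedly evaded detection.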

Twitter and Facebook provide livestreaming functions, which means users can broadcast straight to the web, with no delay and no filter. Should these companies suspend this function until they can work out how to instantly detect and shut down damaging content?

Facebook CEO Mark Zuckerberg told the US Senate last year that 99% of ISIS and Al Qaeda content was flagged by its AI systems before any humans saw it, but that was not the case for the Christchurch terrorist attacks, which were motivated by far-right extremism.

In addition to AI systems, the companies rely on human moderators, usually employed by third parties, to remove criminal and terrorist content. Facebook is estimated to employ 15,000 people and YouTube 10,000 to review content flagged as potentially banned. This is reported to be an incredibly difficult job in which moderators are expected to view the very worst of humanity and are compensated with low wages and poor conditions. An article by The Verge last month noted that Facebook moderators in Arizona earn US$28,000 a year, compared with the average Facebook employee, who earns US$240,000.

Europe is the region that has imposed the most effective checks on the tech giants, and the example may lie in Germany. Its law imposes fines of up to $80 million on any site that fails to delete posts featuring hate speech or fake news. The prospect of financial penalties could be the motivation required to change the way social platforms deal with harmful content in the future.

