ITP Techblog

Brought to you by IT Professionals NZ

Harmful content vs Free speech online: Who should decide?


To what extent should Facebook, Twitter and others be censoring content and people, and how do you balance harmful content and free speech? 

We don't generally comment much on these sorts of things for two reasons. Firstly, ITP is a broad church and it's outside the areas that we - as an organisation - generally take a position on. Secondly, Techblog's editor Paul Brislen usually covers this pretty well!

But there's a hugely important principle at stake and I think our community, as the largest tech community in New Zealand, should be thinking about all of this very carefully.

Let's start by thinking about three fundamental questions:

  1. What is the Internet?
  2. Who owns it?
  3. Who is responsible for regulating it?

At this point our Network Admins might start talking about networks and routers and root DNS servers and so on, and that's certainly true from a technical or structural perspective. But the reality is that the Internet is more than this: no person, company, or government owns it. As one website puts it, it's owned by humanity itself.

But increasingly, what geeks like us think of as "the Internet" differs from how the public - especially the younger public, who never saw that side - think of it.

Some of us see ones and zeroes, some see infrastructure, and some see immense possibility - both commercially and socially. But for many (most?) people, the Internet means Facebook, Instagram, YouTube, TikTok, Twitter and Google. And maybe Netflix and a few others as well.

In short, the mindset has shifted as Internet access has become commoditised - just like we think of Electricity as the stuff that makes our devices and lights work (rather than the vast network of electricity generation and distribution), the Internet - for most people - has become what we do with it. And a huge part of that is on social media networks.

Now, if we accept that at least some regulation of "the Internet" is necessary to prevent the worst of humanity creating victims online, just as we expect Society's behaviour to be regulated to prevent people becoming victims offline, whose job is it to do so?

Putting aside some very significant jurisdictional challenges*, the answer has to be exactly the same as the offline world: the Government, via legally-formed laws and regulations.

Which raises a fairly crucial and fundamental conundrum.

Once a particular "service" such as Facebook has reached the point where it's basically part of the fabric of the Internet - and by extension, Society itself - as many of the services above have done, who is responsible for regulation of content?

To put it another way, we don't expect the power company to regulate what we do with electricity. We also don't expect "products and services" that use electricity to regulate our use - it would be daft to expect either the power company or heat-lamp providers, for example, to detect or prevent an individual setting up an illegal growing operation in their basement - or to cut off electricity because of it.

And it would certainly be daft for power companies or lines companies to start making up their own rules about what you can do with their power - over and above what's illegal or unsafe - and take action if you breach those rules. Quite rightly, there'd be a huge outcry.

Yet this is exactly what's happening more and more with "the Internet".

Let's take illegality out of the equation and accept that - arguably - mainstream social media and other companies have some sort of obligation to at least attempt to block illegal content.

It's not unreasonable to conclude that it's for the Government to regulate to the extent that content or behaviour is illegal regardless of whether it's online or offline. Unless you're an anarchist, that's a reasonable position to take and it's not too big a jump to then expect providers to have a role in preventing the publishing of content that is clearly illegal.

Some will argue with that, but let's put that aside for a moment and think about immoral, outrageous, offensive or unpopular - but not illegal - content.

Increasingly - and especially over the last few weeks since the brief occupation of the US Capitol building - social media companies like Twitter and Facebook have appointed themselves judge, jury and executioner on what content is acceptable on their platforms - both in "public" and "private" groups - and taken action against hundreds of thousands of people who have expressed views that differ from what they see as acceptable.

And there's that conundrum.

On one hand, it's their network so their rules. On the other, these companies have grown to the point that they're now a core part of the Internet itself - and their actions have a significant impact on everyone. They're now in a position to - and arguably do - shape public opinion by shadow-banning or outright banning views they don't approve of.

Given the scale of these services, this means it's now up to a very small group of extremely rich and powerful unelected men (mostly) to decide what is acceptable speech and content for Society as a whole, above and beyond that deemed in law.

Surely we all have to agree that that's a problem.

And we're not just talking about speech. For example, these services have the proven ability to massively influence election outcomes and much more. At the other end of the spectrum, their algorithms can actively push content to you that, research [pdf] suggests, doesn't just shape opinion but can fully radicalise large groups of people.

So what's the answer?

It surely has to start with recognising that once a "service" reaches a large enough scale, its obligations have to expand, at least partially, beyond just its shareholders to society as a whole. And looked at in this context, you could certainly argue that this might include obligations around not censoring content and users except where content is explicitly illegal - as doing so infringes their right to free speech (remember, we're thinking about societal obligations here).

Secondly, as with every other facet of life, surely it should be solely up to Governments to deem what is illegal when it comes to speech and content.

Facebook especially has been calling for Governments to step up and devise rules around harmful content, a different model for platforms' legal liability and a "new type of regulator" to oversee enforcement. But so far, many Governments have shirked this responsibility in a way they never would in the offline world.

New Zealand, on the other hand, has been fairly proactive in this area and already has specific laws regarding harmful content online - primarily the Harmful Digital Communications Act 2015 - complete with 10 communications principles and an Approved Agency (Netsafe) with a role in resolving matters before they reach Court. This law strikes a careful balance, focuses on intent to cause harm, and even includes a process for services to resolve complaints made to the Agency about content - albeit one many seem to ignore.

Again, jurisdiction* aside, surely this - the law of the land - is what large social media networks should be following when it comes to policing content, rather than their own views?

Should that not be an obligation, especially once a certain scale is reached and their editorial decisions are impacting a significant portion of society?

This problem isn't going to go away, especially as more and more of "The Internet" falls into private hands. It's up to all of us to push for a solution that protects victims while ensuring fundamental rights are maintained.

What do you think? Keen to read your views in the comments below.

Paul Matthews is Chief Executive of IT Professionals NZ, the professional body of the digital technology industry.

 

---

* And yes, I appreciate that jurisdiction is one of the biggest issues in this debate and I've conveniently skirted it above. Personally, I think this is resolvable technically, but either way, the article was already too long :). I'll address it in a future piece if there's interest.


Comments


John Maindonald 18 January 2021, 5:50 pm

Once what one user does with their electric power starts to affect other users (not a common event), the power company will act. With social media, individuals (Trump, and many more) frequently do start waves that spread widely to other users, and may impact the wider society. There's no equivalent of a bot, or of a human influence peddler, in the electricity network. Not just Trump, but Chinese, Russian and US government agencies can use the internet to spread their message. Snowden was right to be concerned.

A better analogy is with the print media, at least as a starting point for discussion. Whereas a few rich people have, in large parts of that enterprise, been able to use it to peddle their influence, social media have spread that ability more widely. Public figures with time on their hands, and a Trump-like ability to whip up crowds, get to spread their message far and wide in ways that ordinary mortals cannot. They are the ones with readiest access to "free" speech.

It is a minefield. The practicalities of influence peddling (politicians have their own message-spreading interests, and are too often prey to the rich and powerful) mean that it is pretty much left to the social media companies to act. If it is food companies selling sugary drinks, they'll not act, and govt (in NZ at least) is afraid to act. A risk to the whole order of society, and the consequent risk to them, is what in the present instance forced social media to act. The US govt was too much part of the problem, and in any case the wheels of govt turn too slowly to deal effectively with such a crisis.

John Maindonald 18 January 2021, 6:52 pm

As I understand the matter, freedom of speech has never meant that any newspaper editor, or book publisher, or other commercial entity, was required to allow me a channel for what I have to say. Rather, if I could somehow get my message out, perhaps by standing and shouting it at the street corner (providing I did not create a 'public nuisance'), I was protected.

What precedent is there for requiring or expecting that

'once a "service" reaches a large enough scale, their obligation has to at least partially expand from just their shareholders to society as a whole. And surely this has to include obligations around not censoring content and users except where content is explicitly illegal'?

Why not, rather, obligations to censor content that is judged to harm the public good?

Ray Delany 18 January 2021, 10:01 pm

Free speech is not an absolute right.

It has always been implicitly connected with responsibility. The principle as it first evolved during the Age of Enlightenment could never have anticipated a situation where a person could reach millions of people in all corners of the world in seconds. Back in the day, if you said something offensive you ran the risk of immediate sanction from those within the sound of your voice, up to and including a punch on the nose. There was natural corrective moderation. This meant that there was little need to regulate responsibility, and thus the emphasis on preserving free speech.

When newspapers first emerged they suffered from a similar challenge of responsible behaviour to the one social media platforms now face, and this led to consensus on accuracy and balance in publication, which was gradually reinforced in libel and other laws. The press has long been accountable for what it says and can be taken to task for what it puts on its platform. It seems to me that the likes of Twitter and Facebook have had a free pass in this respect, and this clearly needs to be examined.

The counterculture, and its modern evolution - or perhaps devolution - which created the Internet in the first place, has been steadily undermining its own argument in favour of absolute freedom of the Internet by refusing to regulate itself in the way it would have to in a face-to-face conversation.

The power system analogy doesn't work well for me in this context. The service provided by power companies is ridiculously simple in comparison to information and is largely self-correcting. Try plugging a device designed for 110 volts and 60 Hz into your Kiwi three-pin socket if you like and you'll find out pretty quickly.

In the end a choice between self-regulation or government regulation is inevitable. The first has clearly failed so far.

Russell Clarke 19 January 2021, 6:11 pm

There are so many issues to talk about. It's probably worth having a video call to go through them because text just loses the nuance. Happy to get involved, and on a regular basis. Do we have a forum for speech issues? Perhaps we need one.

But in short:

1. Define "harm" - because if you can't do that, you can't base a reasonable, repeatable law on it.

2. Having seen what Margaret Thatcher did in the UK with Clause 28 (which essentially banned the "promotion" of homosexuality in schools) in the 80s, quis custodiet ipsos custodes?

Mike Carroll 25 January 2021, 10:40 am

Paul I'm concerned by the underlying message here. You agree that tech companies with massive reach must be responsible to their wider community and stakeholders, as opposed to only shareholders, yet you seem to be implying that Facebook and Twitter should not have blocked or removed users posting messages of misinformation, racism, and inciting violence simply because they have not yet been deemed illegal, in the guise of protecting "free speech"?

Paul Matthews 25 January 2021, 9:13 pm

My intention was not to imply anything in particular, and in fact I tried to steer away from specific people, topics or politics. Because otherwise it's easy to use the extreme (eg Trump) to argue the point; whereas it's the grey areas that really matter.

It's the underlying principle that I think we, as society, need to be thinking about.

Are we happy with these companies deciding what is acceptable for us, when the outcome has a material impact on people's ability to participate in society?

And if not, what's the alternative? And can we really diverge online from the principles we apply offline - i.e. that the Government has the responsibility of determining what is acceptable speech via laws and regulations?

