ITP Techblog

Brought to you by IT Professionals NZ

Should social media platforms have a duty to care?

I reckon most of us would answer yes to this question these days, even tech libertarians who have stood up for a free and open Internet.

The reality is there is a lot of stuff people do "on" the Internet that is not good for society and can cause serious harm. Misinformation, terrorist and violent extremist content, bullying and hateful and discriminatory speech are just some of the concerns. This is one of the reasons InternetNZ has started talking about an Internet for Good, alongside an Internet for All.

Governments around the world are thinking about how they regulate the activities of social media platforms and whether a stronger hand is needed. I say "yes please". In New Zealand, some of these questions will be asked as part of the work the Department of Internal Affairs is doing on media and online content regulation. The jury is still out on the scope of this work and some consultations are underway.

One place doing work that fascinates me is the UK, where they are talking about platforms having a "duty of care". On the face of it, it sounds pretty sensible. We do so much on these platforms. They should take care in the services they offer and how they go about their business. It's not just about the content that gets served up, but how we are directed to it, what platforms do when bad stuff happens, and how transparent they are about it.

Duty of care?

The UK wants to be "the safest place in the world to go online, and the best place to start and grow a digital business". They want to provide clarity to social media companies on what is expected of them, rather than just leaving it to their terms and conditions and policies on acceptable use. I'm down with that.

But this is easier said than done. It's also a little confusing for lawyers who think about duty of care as a common-law thing. Let's take a look (any legal beagles can check out the Draft Online Safety Bill).

A big part of the UK plan is for the platforms to show they are fulfilling their duty of care. What exactly is required will be set out in codes of practice written by Ofcom, which will be able to take action if companies don't comply.

There is a hook that allows alternative approaches to be used if it would have the same impact. This sounds like a bit of an advantage for the incumbents with the large legal departments.

Transparency reporting is on the agenda (yay), along with access to data for independent research, and a user complaints function that is overseen by the regulator.

Getting this duty of care thing right is going to be tricky for a number of reasons:

- It might be easy to put the principle of duty of care in law, but the hard work on the codes of practice that set the expectations is still getting going.

- Deciding what content is in scope is not easy. The UK plan is to include content that is illegal and some content that is harmful but not currently illegal. The responsible minister will come up with a list of the harmful things, which sounds like quite a hornet's nest. The UK lawyer Graeme Smith has explored some of the complications here: Harm Version 3.0: the draft Online Safety Bill.

- The fun with definitions continues in deciding which providers of Internet services are in scope. They are talking about user-generated content sharing providers and search engines. This is a broad church.

- Enforcement of the duty of care probably won't be a magic wand to make platforms do the things we want. Ofcom as the regulator won't have unlimited resources and will need to make choices about the things they enforce or pursue. I can't imagine being able to call them up about a takedown request that is taking too long to action.

- All of these open questions make it hard for people and online services to know what the rules will be, and how they will work in practice.

A lot of these issues relate to a question we'll need to grapple with in New Zealand too: what do we mean when we're talking about safety and harm online?

The Internet is global, and so are many of the problems, and the big services that governments and regulators around the world are responding to. In Aotearoa, we need to grapple with these same questions, but we don't need to accept the same answers as everyone else. We have the chance to figure out what makes sense for us. So when you get the chance to have your say in the review of online content regulation, do it.

Kim Connolly-Stone is Policy Director at InternetNZ



Hamish MacEwan 17 October 2021, 10:30 am

"Should social media platforms have a duty to care?"

"To care," or "of care?" As, in this topic particularly, the words matter.

Certainly we all care, and the outcome is worthy and desirable. Still there remains the option to disagree, one hopes, with the notion that iron-fisted moderation conducted by private companies with a fiduciary duty to their shareholders is the best agent for this change. Already there are concerns that in addition to the "bad speech," "good speech" is being impaired as collateral, or intentional, damage. Perhaps to obtain favourable assessment, these companies are nudged to be more vigilant for speech that damages groups or activities that are favoured by the ruling party, or just for their own benefit, banning vehement criticism, so difficult to distinguish from "hate speech."

I'm not an expert, but it seems unlikely that it obliges anyone to make it impossible for harm to occur. The "guard rails" beloved of the regulatory mindset do not reach to the sky. As Douglas Adams noted, "A common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools."

Even as the struggle with the very definition of this proposal is acknowledged, this Sisyphean task will be followed by questions of who it is to apply to and how it might be maintained and enforced as times and circumstances change. There have been concerns that live audio chat is a risk as it can't be successfully monitored for misinformation and "hate speech." "Hate speech" is anti-money laundering for words, and likely to be even less successful.

It's time to stop clutching pearls at the words, and trying to stop them, which is futile, and time to look at the sources and what can be done about them. If you can't stop criminals, better stop making them. In a way, this proposal to make rules for the network to enforce for the legislators, to make them unbadged law enforcement agents, acknowledges that harmful digital communications rules applied to individuals have failed. Time to swallow a horse to catch a fly.

In conclusion, given the importance attached to this outcome, it seems inappropriate to put this in the hands of the private sector, whose motivations have never been entirely focussed on the public good; that's something we reasonably *expect* from government. It would make more sense to continue coercing the actors, the "bad users," than another futile attempt to pick gnat shit out of pepper, to find the "bad words," "bad money," or other contextless values that, with or without AI pixie dust, are apparently all that is required to assess "harm."
