ITP Techblog

Brought to you by IT Professionals NZ

Apple's controversial plan to detect child abuse images has merit - and it's totally legal

Rick Shera, Tech Law Contributor. 10 September 2021, 10:45 am

Apple has emphasised its privacy-protective position for many years. It has done this both in the way it has developed its devices and in its attitude to those who seek personal information from it about its users' activities, whether for law enforcement or other purposes, sometimes even to the extent of defending against information disclosure applications in court.

Its recent announcement, however, that it would start accessing users' photos on their iOS devices (iPhones and iPads) to detect child sexual abuse material (CSAM), and then provide CSAM images to law enforcement, has been taken by many as an abdication of that position.

In the face of that antipathy, Apple has announced that it is "pausing" its plan. That, in turn, earned it a vitriolic rebuke from Julie Inman-Grant, Australia's eSafety Commissioner, who said Apple had "totally caved".


However, the expectation is that it may be rolled out in future, perhaps with less fanfare, so it is still important to review what is involved, particularly from a New Zealand perspective.

What is Apple proposing?

Apple actually proposed three things: an opt-in feature that would send alerts to parents if children under 13 try to send nude pictures; a warning message system if a user tries to search or ask questions about CSAM, which would explain the harm caused by this material and direct the user to resources for help; and the on-device CSAM detection system referred to above. It is this detection system that caused the furore, and it is the one we'll look at in more detail.

The CSAM detection system would apply to all photos on an iOS device (iPhone or iPad) that a user has synced with Apple's iCloud storage facility. It would automatically derive a unique hash, a kind of digital fingerprint, from each image. That hash would then be compared with hashes created by NCMEC, the US National Center for Missing and Exploited Children, which identify known CSAM images circulating online.
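To make the mechanics concrete, here is a minimal sketch in Python of what on-device hash matching looks like. It is illustrative only: the function names and the sample hash are invented, and Apple's actual design uses a perceptual hash ("NeuralHash") combined with cryptographic blinding and private set intersection rather than a plain SHA-256 lookup.

```python
import hashlib
from pathlib import Path

# Hypothetical database of known-image hashes. In Apple's design these come
# from NCMEC in a blinded, perceptual-hash form, not as plain hex digests.
KNOWN_CSAM_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def image_hash(path: Path) -> str:
    """Fingerprint a photo's contents.

    A real implementation would use a perceptual hash so that resizing or
    re-encoding still matches; SHA-256 is used here only to keep the
    sketch self-contained.
    """
    return hashlib.sha256(path.read_bytes()).hexdigest()


def matches_known_csam(path: Path) -> bool:
    """Check a single on-device photo against the known-hash database."""
    return image_hash(path) in KNOWN_CSAM_HASHES
```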

CSAM is a huge and abhorrent problem, and NCMEC is at the forefront of efforts to track it and the children it exploits. Apple has said that if a sufficiently large number of hashes match, it would then decrypt the matched photos and a human would check that they are indeed matches. If they are, information would be sent to law enforcement.
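Conceptually, the threshold step works like the sketch below. Again, this is only an illustration under assumptions: Apple's design counts matches through cryptographic "safety vouchers" rather than a simple counter, and while the threshold was reported to be around 30 images, the figure and function names here are placeholders.

```python
from pathlib import Path
from typing import Callable

# Assumed threshold: only once an account accumulates this many matches
# is anything surfaced for human review.
MATCH_THRESHOLD = 30


def photos_to_review(photos: list[Path],
                     is_match: Callable[[Path], bool]) -> list[Path]:
    """Collect matching photos, but release them for human review only
    once the per-account match count crosses the threshold."""
    matched = [p for p in photos if is_match(p)]
    if len(matched) >= MATCH_THRESHOLD:
        return matched  # escalate for human confirmation, then reporting
    return []           # below the threshold, nothing is decrypted or disclosed
```

Here `is_match` could be the `matches_known_csam` check sketched above; the point is simply that no individual photo is disclosed until the threshold is crossed and a human has confirmed the matches.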

Why not do it in the cloud?

The first thing to note about this is that it is done on the device. As things stand at present, photos uploaded to iCloud are not end-to-end encrypted (Apple holds the keys), so it would have been far easier for Apple to have implemented CSAM detection there. While there would no doubt still have been complaints about Apple doing this, to my mind this would have been easier for Apple for a number of reasons. Apple may well have thought that by doing this on the device it was being more transparent and privacy-protective.

First off, it would not have involved Apple breaking its own encryption. One of the big dangers of giving law enforcement a back door into encrypted systems is that it weakens the overall encryption and that may, in turn, be exploited by criminals or others. Apple's CSAM detection system does not give law enforcement a direct back door but creating any weakness in an encryption system is a problem.

Conducting CSAM detection on its own iCloud system would also have effectively given users the chance to continue using their devices for taking photos, while opting in to the new regime by deciding whether or not to upload their photos to iCloud.

Also, iCloud is quite clearly a repository that belongs to Apple. Sure, we retain copyright in the material we upload, but we grant Apple an extensive licence, and uploading illegal material to a repository is a breach of every cloud storage provider's standard terms of use.

In fact, both Google and Microsoft already scan for CSAM images in their cloud storage repositories.

It is also ironic that the main criticism levelled at Apple, particularly by civil society groups such as the EFF, is that this creates a slippery slope and that Apple will inevitably succumb to law enforcement and government pressure to expand the detection system beyond CSAM to other material.

The slippery slope argument

There are three arguments against this that seem to be ignored. First, if Apple had implemented the system at the iCloud level rather than on the device, it would have been far easier for it to be pressured into expanding to other areas, and users would not even necessarily know. Expanding an on-device system would not be as simple.

Second, the proposed CSAM detection system relies on matching hashes provided by NCMEC. While hash matching has been used for other material (hashes were used by platforms to automatically detect and remove the Christchurch terrorist video), there is no recognised, internationally available database for non-CSAM material, and it is far harder to decide conclusively whether something is illegal in other spheres and across jurisdictions. The illegality of CSAM is clear, obvious, and accepted in every country.

Finally, the slippery slope argument assumes that just because Apple has created this system, it will be more amenable to its expansion. There is no evidence of this. CSAM is recognised as a significantly growing, hugely harmful, worldwide problem. It stands alone in its clear illegality, predation on the most vulnerable, and its obviousness. Apple has recognised this and has clearly singled it out as worthy of special attention. It is hard to see why Apple would simply roll over and adopt the same approach with other forms of illegal material given these differences and its historical approach to privacy.


But is this lawful for Apple to do in New Zealand?

Apple was proposing to roll out CSAM detection in the US only, but presumably other countries would follow.

The first thing I find interesting, on a more general front, is that we have become almost blasé about providers like Apple, Microsoft and Google, whose systems we rely on in our business and personal lives, simply making changes by way of software updates that can dramatically change the way our devices operate, without our specific permission for the change.

This model is spreading to all forms of equipment, with cars being the latest growth area. At some stage I would expect pushback against this on consumer contract grounds, arguing that the device, or car, or whatever it is, is no longer the same as the one the consumer paid for. But that is fodder for a future post.

So, would Apple's proposed CSAM detection system cause any legal issues in New Zealand under our privacy law? Simply put, the answer is no. 

There are clear exceptions in our Privacy Act that would allow Apple to detect and report CSAM. Its terms of use also allow this and its clear warning of the introduction of this system adequately discloses how and why it is doing so.  You can't use privacy as a shield behind which to break the law.

As in many areas of law, privacy balances personal rights against public good, but here there is no question where the legal balance lies. Your mileage may vary on whether you agree with it or not.

Rick Shera is a partner at Lowndes Jordan, one of New Zealand's leading business, information technology and media law firms. He is the first lawyer to obtain the IT Professionals NZ CITPNZ certification and is a chartered member of the Institute of Directors.
