Apple Wants to Protect Children. But It’s Creating Serious Privacy Risks.


Apple last week announced a plan to introduce new tools that will allow it to scan iPhones for images related to the sexual abuse and exploitation of children. Apple is billing these innovations as part of a child safety initiative, and indeed they may help make the online world a safer place for children, which could not be a worthier goal.

But these tools, which are scheduled to become operational in the coming months, also open the door to troubling forms of surveillance. Apple should refrain from using these technologies until we can better study them and understand their risks.

Apple’s plan has two main prongs. First, parents can opt to have their children’s iMessage accounts scanned for nude images sent or received and, if the child is under 13, to be notified when this occurs. All children will receive warnings if they seek to view or share a sexually explicit image.

Second, the company will scan the photos you store on your iPhone and check them against records corresponding with known child sexual abuse material provided by organizations like the National Center for Missing and Exploited Children. Apple says it will do this only if you also upload your photos to iCloud Photos, but that is a policy decision, not an essential technological requirement.
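In rough terms, the matching Apple describes reduces each photo to a compact fingerprint and checks that fingerprint against a list derived from images already identified as abuse material. The short Python sketch below illustrates only that general idea; the folder name, the plain cryptographic hash and the simple set lookup are assumptions made for illustration, not Apple’s actual design, which relies on a perceptual hash and additional cryptographic protections.

```python
# Hypothetical sketch: the file layout, the use of SHA-256, and the set
# lookup are illustrative assumptions, not Apple's actual design.
import hashlib
from pathlib import Path


def fingerprint(photo: Path) -> str:
    """Reduce a photo to a fixed-size fingerprint (here, a hash of its bytes)."""
    return hashlib.sha256(photo.read_bytes()).hexdigest()


def flag_matches(photo_dir: Path, known_fingerprints: set[str]) -> list[Path]:
    """Return the photos whose fingerprints appear in the known-bad list."""
    return [
        photo
        for photo in photo_dir.glob("*.jpg")
        if fingerprint(photo) in known_fingerprints
    ]


if __name__ == "__main__":
    known = {"0" * 64}  # placeholder fingerprint, not a real database entry
    for match in flag_matches(Path("Photos"), known):
        print("possible match:", match)
```

The detail worth noticing is that the list of fingerprints, not the phone’s owner, decides what counts as a match; swap in a different list and the same machinery flags entirely different content, which is why the question of who controls that list runs through the rest of this argument.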

The technology involved in this plan is fundamentally new. While Facebook and Google have long scanned the photos that people share on their platforms, their systems do not process files on your own computer or phone. Because Apple’s new tools do have the power to process files stored on your phone, they pose a novel threat to privacy.

In the case of the iMessage child safety service, the privacy intrusion is not especially grave. At no time is Apple or law enforcement informed of a nude image sent or received by a child (again, only the parents of children under 13 are informed), and children are given the ability to pull back from a potentially serious mistake without informing their parents.

But the other technology, which allows Apple to scan the photos on your phone, is more alarming. While Apple has vowed to use this technology to search only for child sexual abuse material, and only if your photos are uploaded to iCloud Photos, nothing in principle prevents this sort of technology from being used for other purposes and without your consent. It is reasonable to wonder if law enforcement in the United States could compel Apple (or any other company that develops such capacities) to use this technology to detect other kinds of images or documents stored on people’s computers or phones.

While Apple is introducing the child sexual abuse detection feature only in the United States for now, it is not hard to imagine that foreign governments will be eager to use this sort of tool to monitor other aspects of their citizens’ lives — and might pressure Apple to comply. Apple does not have a good record of resisting such pressure in China, for example, having moved Chinese citizens’ data to Chinese government servers. Even some democracies criminalize broad categories of hate speech and blasphemy. Would Apple be able to resist the demands of legitimately elected governments to use this technology to help enforce those laws?

Another worry is that the new technology has not been sufficiently tested. The tool relies on a new algorithm designed to recognize known child sexual abuse images, even if they have been slightly altered. Apple says this algorithm is extremely unlikely to accidentally flag legitimate content, and it has added some safeguards, including having Apple employees review images before forwarding them to the National Center for Missing and Exploited Children. But Apple has allowed few if any independent computer scientists to test its algorithm.
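To see why recognizing “slightly altered” copies is harder than exact matching, and why false positives are the worry, it helps to look at how perceptual hashes are typically compared: two images are treated as the same if their hash bits differ in no more than some small number of positions. The sketch below is a generic illustration with made-up hash values and an arbitrary threshold; it is not Apple’s NeuralHash, whose design and matching parameters have not been independently vetted.

```python
# Generic perceptual-hash comparison; the hash values and threshold are
# invented and do not correspond to NeuralHash or any real database.

def hamming_distance(a: int, b: int) -> int:
    """Count how many bits differ between two hashes of equal length."""
    return bin(a ^ b).count("1")


def is_match(photo_hash: int, known_hash: int, threshold: int = 8) -> bool:
    """Treat two images as the same if their hashes are merely 'close'."""
    return hamming_distance(photo_hash, known_hash) <= threshold


original = 0b1011_0110_1100_0011_1010_0101_1110_0001
# A re-saved or lightly edited copy typically flips only a few bits...
recompressed = original ^ 0b0000_0000_0000_0001_0000_0100_0000_0000
# ...while an unrelated image should differ in many more.
unrelated = 0b0100_1001_0011_1100_0101_1010_0001_1110

print(is_match(recompressed, original))  # True: small alterations still match
print(is_match(unrelated, original))     # False: a looser threshold risks false positives
```

The looser the threshold, the more altered copies are caught, but also the greater the chance that an unrelated image lands within range; it is precisely this trade-off that outside researchers have so far had little opportunity to examine.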

The computer science and policymaking communities have spent years considering the kinds of problems raised by this sort of technology, trying to find a proper balance between public safety and individual privacy. The Apple plan upends all of that deliberation. Apple has more than one billion devices in the world, so its decisions affect the security plans of every government and every other technology company. Apple has now sent a clear message that it is safe to build and use systems that directly scan people’s personal phones for prohibited content.

Protecting children from harm is an urgent and crucial goal. But Apple has created a model for attaining it that may well be abused for decades to come.

Matthew D. Green is a professor of computer science at Johns Hopkins University. Alex Stamos is the director of the Stanford Internet Observatory, which studies the potential abuses of information technologies.
