Mark, who works remotely from his San Francisco home, noticed an infection in his son’s groin region. His wife used his phone to take a few high-quality pictures and uploaded them to the health care provider’s messaging system ahead of a consultation with a doctor. Thankfully, it was a routine infection that cleared up after a course of prescribed antibiotics.

Unfortunately for Mark, Google flagged the images on his phone as potential child sexual abuse material (CSAM) and locked his account just two days after the pictures were taken, stating that the account contained harmful content that might be illegal under Google’s policies.

Once an account has been flagged for CSAM, there is little recourse for anyone. These incidents are immediately reported to the National Center for Missing and Exploited Children (NCMEC), which then reviews the material. Meanwhile, Mark was having a hard time, as he had lost access to all his Google services, including Gmail, his Google Fi mobile service and more.

These days, a person’s online profile is a big part of their identity. People usually have all their data backed up to the cloud, including their contacts and photos. Banning an account is routine work for these companies, but it can upend an individual’s life. Mark hoped a human would be part of the review process somewhere, and that his account would be unbanned once someone looked at the form he submitted requesting a review of Google’s decision, in which he explained the whole ordeal.

Big Tech’s Broken Review System

Below is an excerpt from Apple’s whitepaper on CSAM detection. The system relies on a database of known material provided by NCMEC, and any match against that database is instantly reported. The system can also use AI to detect potentially abusive pictures that are not in the database; such material is usually escalated, since it can point to a new victim who has not yet been identified.

But as the NYT report rightly states, “not all photos of naked children are pornographic, exploitative or abusive“. Context matters a great deal, and trigger-happy tech companies that ban first and ask questions later don’t help with the problem. That said, CSAM detection is a brilliant tool and has likely saved thousands of children from potential abuse. The problem lies with the Silicon Valley giants, whose review processes are highly automated and dysfunctional; even after proving your innocence, it’s hard to get your account restored.

The hashing technology, called NeuralHash, analyzes an image and converts it to a unique number specific to that image. Only another image that appears nearly identical can produce the same number; for example, images that differ in size or transcoded quality will still have the same NeuralHash value. – Apple CSAM Technical Summary

(A short perceptual-hash sketch at the end of this article illustrates the general idea behind this kind of hashing.)

As for Mark, he faced something similar. In December 2021, he received an envelope from the San Francisco Police Department containing the details of the investigation, including all the data Google had provided to law enforcement. The investigator, after reviewing that data, exonerated Mark of any wrongdoing, and the case was closed. But that wasn’t enough for Google: Mark still couldn’t access any of his Google accounts. He even thought of suing Google at one point, but figured it wasn’t worth the $7,000 in legal fees.
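Apple’s NeuralHash itself is proprietary, so as a rough illustration only, here is a minimal “average hash” sketch in Python, assuming the Pillow imaging library is installed and using hypothetical file names. It shows the general property the excerpt describes: visually near-identical images (resized or re-encoded copies) produce the same or nearly the same hash, unlike a cryptographic hash, which changes completely with any pixel difference.

# Illustrative only: a simple average hash (aHash), not Apple's NeuralHash.
# Perceptual hashes map visually similar images to similar values, so a
# resized or re-encoded copy of a photo still produces (nearly) the same hash.
from PIL import Image  # requires: pip install Pillow

def average_hash(path: str, hash_size: int = 8) -> int:
    # Shrink the image and drop colour so small edits don't change the result.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    # Each bit records whether a pixel is brighter than the average.
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    # Number of differing bits; a small distance means visually similar images.
    return bin(a ^ b).count("1")

if __name__ == "__main__":
    # Hypothetical file names for illustration.
    h1 = average_hash("photo_original.jpg")
    h2 = average_hash("photo_resized.jpg")
    print(f"distance = {hamming_distance(h1, h2)}")  # near 0 for near-identical images

In a real matching system, a hash computed this way (or by a far more robust method like NeuralHash) would be compared against a database of hashes of known abusive material; only distances at or near zero count as a match, which is why ordinary family photos are not supposed to trigger it.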
