New AI tool detects child sexual abuse material with ‘99% precision’
Child sexual abuse material on the web has grown exponentially in recent years. In 2019, there were 69.1 million files reported to the National Center for Missing and Exploited Children in the US — triple the levels of 2017 and a 15,000% increase over the previous 15 years.
A new AI-powered tool called Safer aims to stem the flow of abusive content, find the victims, and identify the perpetrators.
The system uses machine learning to detect new and unreported child sexual abuse material (CSAM). Thorn, the non-profit behind Safer, says it spots the content with greater than 99% precision.
Thorn built the tool for businesses that don’t have their own content filtering systems.
“What we realized was that the [smaller] organizations just didn’t have the resources to identify, remove, and report this type of material,” Caitlin Borgman, Safer’s VP of business development, told TNW.
“There’s inconsistency across the industry as to what to do when you come across it and how to handle it. There’s no record of best practices. It’s costly to build — it can cost over half a million dollars, even for a midsize organization just to stand it up. And then you have to maintain it. There are a lot of gaps in what you can detect. And there’s no real centralized database of CSAM. It’s very siloed and distributed.”
Safer is an attempt to resolve these issues and help tech firms rid their platforms of abusive material.
How Safer works
The system detects abusive content by computing digital fingerprints, called hashes, of files on a platform.
It compares these hashes to a dataset of millions of known images and videos of abusive material. If the content hasn't been previously reported, algorithms trained on abusive material determine whether it's likely to be CSAM. It's then queued up for review by the platform's moderation teams, and reported to the National Center for Missing and Exploited Children, which adds the content to its database.
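The two-stage pipeline described above can be sketched as follows. This is a minimal illustration, not Thorn's actual implementation: the hash set, file contents, and `classify_unknown` stub are hypothetical, and a cryptographic hash stands in for the perceptual hashes (such as Microsoft's PhotoDNA) that real systems use so that matches survive resizing and re-encoding.

```python
import hashlib

# Hypothetical set of hashes of previously reported material.
# Real deployments use perceptual hashes that tolerate edits;
# SHA-256 is used here only to keep the sketch self-contained.
KNOWN_HASHES = {
    hashlib.sha256(b"previously-reported-file").hexdigest(),
}

def classify_unknown(data: bytes) -> bool:
    """Stand-in for the ML classifier that scores unreported content.

    A real model would return a likelihood learned from labeled
    material; this stub always returns False.
    """
    return False

def check_upload(data: bytes) -> str:
    """Route an uploaded file through the two-stage pipeline."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in KNOWN_HASHES:
        return "match"    # known material: report immediately
    if classify_unknown(data):
        return "flagged"  # likely new material: queue for human review
    return "clean"

print(check_upload(b"previously-reported-file"))  # match
print(check_upload(b"holiday-photo"))             # clean
```

The design mirrors the article's description: exact matches against the shared database are caught cheaply by hash lookup, and only unmatched content is passed to the more expensive classifier before human review.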
Among Safer’s early clients is photo-sharing service Flickr. The company recently used the tool to detect a single image of abuse on its platform. A law enforcement investigation that followed led to the identification and recovery of 21 children, ranging from 18 months to 14 years old. The perpetrator is now in federal prison.
In total, Thorn says it took down nearly 100,000 CSAM files during its beta phase, for customers including Imgur, Slack, Medium, and Vimeo.
The tool is now available to any company operating in the US. Next year, Thorn plans to offer the product to overseas firms, after integrating it with their reporting requirements.
The non-profit’s ultimate goal is to eliminate child sexual abuse material from the open web.
“We see a future where every child can just be a child, the internet is a safe environment, and victims who may have been victimized 10 years ago … will not be re-victimized when their images are reshared or resurfaced,” said Borgman. “It’s really just disrupting the supply chain. It’s stopping it in its tracks.”
Published July 31, 2020 — 12:18 UTC