Thorn’s ‘Safer’ uses perceptual hashing and machine learning algorithms to identify child sexual abuse material (CSAM)


Most of us invest heavily in protecting our homes, workplaces, children, and communities. So why should we accept anything less than a safe online environment? 

Safer, a recent product from the non-profit organization Thorn, brings new hope for children's online safety, detecting child abuse material with 99% precision.

Safer uses a machine-learning classification model to flag files suspected of containing CSAM (child sexual abuse material). 


The recognition services of the product include: 

  • Image and Video Hash Matching: Safer generates perceptual hashes for images and videos and compares them against a database of known CSAM hashes. 
  • CSAM Image Classifier: a machine-learning classification model, trained on datasets containing thousands of images (including adult pornography, which the model must distinguish from CSAM), classifies suspected images as CSAM. 
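To make the hash-matching idea concrete, here is a minimal sketch of perceptual hashing in Python. This is not Thorn's actual algorithm (which is not public); it uses a simple "average hash" over a downscaled grayscale grid, and treats an image as a match if its fingerprint is within a few bits (Hamming distance) of a known hash. The function names and the threshold value are illustrative assumptions.

```python
# Illustrative perceptual-hash matching, NOT Thorn's proprietary method.
# Unlike a cryptographic hash, a perceptual hash changes only slightly when
# the image is slightly altered (re-encoded, resized), so near-duplicates
# can be found by comparing fingerprints bit by bit.

def average_hash(pixels):
    """pixels: an 8x8 grid of grayscale values (0-255), e.g. a downscaled image.
    Returns a 64-bit fingerprint: one bit per pixel, set if above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def matches_known_hash(h, known_hashes, threshold=5):
    """True if h is within `threshold` bits of any known fingerprint.
    The threshold of 5 is an illustrative assumption."""
    return any(hamming(h, k) <= threshold for k in known_hashes)

# Usage: hash a toy image, perturb one pixel, and check it still matches.
image = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
h1 = average_hash(image)
image[3][7] += 40  # small change, as from re-encoding or light editing
h2 = average_hash(image)
print(hamming(h1, h2))                 # only a few bits differ
print(matches_known_hash(h2, {h1}))    # still matches the known hash
```

An exact (cryptographic) hash would miss the perturbed copy entirely, which is why near-duplicate detection of this kind relies on distance thresholds rather than equality.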

Content flagged by Safer can be reviewed and reported through a single interface, enabling a faster response in identifying victimized children and bringing them to a safe environment. 

Safer is currently available to any organization operating in the US. Thorn plans to expand to other countries next year, after adapting the product to each country's public reporting requirements.


Note: This is a guest post, and the opinions in this article are the guest writer's. If you have any issues with any of the articles posted here, please contact [email protected]


