Image moderation system in minutes
Ido Shamun
Posted on March 5, 2019
Every user-generated content platform needs some sort of content moderation system to make sure the content is appropriate and respectful; otherwise you might get some serious negative feedback from your users (talking from experience 😵).
In this post I would like to talk specifically about image moderation and how easy it is to build a system which rejects NSFW images from your application. 🙈
Google Cloud Vision
Enables developers to understand the content of an image by encapsulating powerful machine learning models in an easy-to-use REST API.
I will be using the Cloud Vision API to automatically detect inappropriate images, powered by SafeSearch. SafeSearch rates your image by the likelihood of each of the following: adult, spoof, medical, violence and racy. In our case (NSFW), adult, violence and racy are probably the metrics we are looking for. You can try the API for free to see what it's like here.
Of course, there are many alternatives to Cloud Vision, but this is my favorite.
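To give a sense of what SafeSearch returns, here is an illustrative safeSearchAnnotation object; the values are made up, but the fields are the five categories mentioned above:

// Illustrative shape of the safeSearchAnnotation part of a SafeSearch response.
// Each field holds one of the likelihood ratings discussed later in this post.
{
  adult: 'VERY_UNLIKELY',
  spoof: 'POSSIBLE',
  medical: 'UNLIKELY',
  violence: 'VERY_UNLIKELY',
  racy: 'UNLIKELY'
}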
Server-side
We will be using Node to write our moderation code and the @google-cloud/vision package.
First, we have to initialize our annotator client so we can use it later on:
// Create a Cloud Vision client (authenticates with your Google Cloud credentials)
const vision = require('@google-cloud/vision');
const client = new vision.ImageAnnotatorClient();
Next, let's say a user wants to upload an image to our server and we would like to reject the image if it is NSFW.
// True only for the highest likelihood rating
const veryLikely = detection => detection === 'VERY_LIKELY';

// True for LIKELY or VERY_LIKELY
const likelyOrGreater = detection =>
  detection === 'LIKELY' || veryLikely(detection);

// Resolves to true if the image should be rejected
const moderateContent = url =>
  client.safeSearchDetection(url)
    .then(([result]) => {
      const detections = result.safeSearchAnnotation;
      return likelyOrGreater(detections.adult) ||
        likelyOrGreater(detections.violence) ||
        veryLikely(detections.racy);
    });
Our moderateContent function gets a url as a parameter (it can actually receive a buffer as well); this url points to a local image file or a remote one. The function returns a Promise which resolves to true if the content has to be rejected, or false otherwise. It makes only one third-party call, to the Cloud Vision API, to run a SafeSearch detection on the provided image. SafeSearch ranks the image with the following likelihood ratings: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, and VERY_LIKELY.
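As a quick sanity check, here is a minimal sketch of calling the function directly; the file path and URL are just placeholders:

// Works with a local file path...
moderateContent('./uploads/avatar.jpg')
  .then(rejected => console.log('Rejected?', rejected));

// ...or a remote image URL
moderateContent('https://example.com/image.jpg')
  .then(rejected => console.log('Rejected?', rejected));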
I set the threshold to a rating of "likely" or greater for adult and violence, and "very likely" for racy; obviously you can set your thresholds to whatever fits your application.
Using the moderateContent function, our server can decide whether to proceed with the provided image or to reject it, with error code 400 for example.
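Here is a minimal sketch of how that could look in an upload handler; Express, multer and the route itself are my assumptions, not part of the original setup:

const express = require('express');
const multer = require('multer');

const app = express();
// Hypothetical setup: multer stores the uploaded image on disk
const upload = multer({ dest: 'uploads/' });

app.post('/upload', upload.single('image'), (req, res, next) => {
  // req.file.path points to the locally stored image
  moderateContent(req.file.path)
    .then(rejected => {
      if (rejected) {
        // Reject NSFW content with a 400, as suggested above
        return res.status(400).json({ error: 'Image rejected by moderation' });
      }
      // Otherwise continue with the normal upload flow
      return res.status(200).json({ ok: true });
    })
    .catch(next);
});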
I hope you now understand how easy it is to implement a content moderation system; all you need is a few lines of code and a Google Cloud account.
Good luck, and let me know how it goes in the comments below :)