
Bodyguard, the app that protects from toxic content on social networks

Once a user accumulates several thousand followers on social networks, they are almost certain to be exposed to unpleasant comments, insults and toxic followers who devote themselves to leaving negative comments on their posts: the so-called trolls.

Bodyguard allows users to block negative and toxic comments to maintain peace of mind on social media.

Although this type of hate speech is, unfortunately, increasingly common on social networks, the French startup Bodyguard has created an application capable of detecting malicious comments and blocking them before they reach the user, shielding them from toxic people online. The application has been running in France for several years, focused on French-language content, and has now also launched for the English-speaking market.

Bodyguard has more than 50,000 users in France, and thanks to its Artificial Intelligence technology it is able to detect negative and abusive comments with an accuracy of between 90 and 95%, with only 2% false positives. The app connects with Twitter, YouTube, Instagram and Twitch, but has no API integration for TikTok or Snapchat at the moment.

The app has a simple mechanism: you download the application, connect your social network accounts and choose a level of moderation. The application offers several moderation categories, such as insults, “body shaming” (mocking users because of their bodies), bullying, sexual harassment, racism and homophobia, and for each of these categories the user can set the moderation level to high, medium or low.
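To make the setup concrete, here is a minimal sketch of how such per-category settings might be modeled. The category names and the `ModerationSettings` class are illustrative assumptions, not Bodyguard's actual API:

```python
from dataclasses import dataclass, field
from enum import Enum


class Level(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


# Categories mentioned in the article; identifiers here are invented for illustration.
CATEGORIES = ["insults", "body_shaming", "bullying",
              "sexual_harassment", "racism", "homophobia"]


@dataclass
class ModerationSettings:
    """Per-category moderation levels, defaulting everything to MEDIUM."""
    levels: dict = field(
        default_factory=lambda: {c: Level.MEDIUM for c in CATEGORIES}
    )

    def set_level(self, category: str, level: Level) -> None:
        if category not in self.levels:
            raise ValueError(f"unknown category: {category}")
        self.levels[category] = level


# A user who wants strict filtering of racism but is more tolerant of insults:
settings = ModerationSettings()
settings.set_level("racism", Level.HIGH)
settings.set_level("insults", Level.LOW)
```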


Once moderation is configured, the user no longer needs to open the application unless they want to change one of the moderation levels. The app works in the background, scanning comments and social media posts to decide whether they are toxic. When it finds negative content, it can block the offending user, mute them or simply hide the comment, so that when the user next opens their social networks, it is as if the negative content never existed.
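A background loop like this one could tie the pieces together: classify each incoming comment, compare the result against the user's chosen level, then pick an action. The thresholds, escalation rules and the `classify` callback are assumptions made for the sketch; they are not Bodyguard's real logic:

```python
from typing import Callable, Optional, Tuple

# Hypothetical thresholds: a stricter moderation level flags at lower severity.
THRESHOLDS = {"low": 0.9, "medium": 0.7, "high": 0.5}


def moderate(comment: str,
             classify: Callable[[str], Tuple[Optional[str], float]],
             levels: dict) -> str:
    """Return the action for one comment: 'none', 'hide', 'mute' or 'block'.

    `classify` stands in for the detector described in the article: it
    returns the offending category (or None) and a severity in [0, 1].
    """
    category, severity = classify(comment)
    if category is None or severity < THRESHOLDS[levels.get(category, "medium")]:
        return "none"   # clean, or below the user's chosen threshold
    if severity >= 0.95:
        return "block"  # worst offenders: block the author outright
    if severity >= 0.85:
        return "mute"   # serious: mute the author
    return "hide"       # otherwise just hide the single comment


# Example with a stub classifier that flags any comment containing "idiot":
stub = lambda c: ("insults", 0.9) if "idiot" in c.lower() else (None, 0.0)
print(moderate("what an idiot", stub, {"insults": "medium"}))  # -> mute
```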

Unlike other comment moderation applications, Bodyguard does not rely on machine learning. Instead, it uses a set of algorithms that analyze the content of a comment within its context to determine whether it is offensive.
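As a toy illustration of the idea of context-aware rules rather than a trained model, the snippet below only flags an offensive word when nearby context does not neutralize it. The lexicon, negation list and window size are all invented for the example and are far simpler than what a production system would use:

```python
import re

# Tiny illustrative lexicon; a real system would be far larger and multilingual.
OFFENSIVE = {"idiot", "loser"}
# Context cues that flip the meaning of a nearby offensive word.
NEGATIONS = {"not", "never", "no"}


def is_offensive(comment: str) -> bool:
    """Rule-based check: an offensive word only counts if the surrounding
    context doesn't neutralize it, e.g. "you are an idiot" is flagged but
    "I would never call you an idiot" is not."""
    tokens = re.findall(r"[a-z']+", comment.lower())
    for i, tok in enumerate(tokens):
        if tok in OFFENSIVE:
            window = tokens[max(0, i - 5):i]  # look a few words back
            if not NEGATIONS & set(window):
                return True
    return False


assert is_offensive("You are such an idiot")
assert not is_offensive("I would never call you an idiot")
```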
