
Meta fights weapons, drug, and terrorism content on Facebook and Instagram

  • The parent company of Facebook and Instagram is committed to enforcing its content policies.

  • In non-advertising content, that is, content that appears organically in users' feeds, Meta fights weapons, drug, and terrorism content.

  • As part of its effort to eliminate harmful content, since the last quarter of 2021 it has also analyzed content related to bullying and harassment.

After Meta (formerly Facebook) was accused last October of prioritizing its profits over user safety, even while aware of the harm Instagram caused among its adolescent users, and of refocusing its business on the metaverse, the parent company is showing that it is committed to enforcing its content policies.

According to data from the Q4 2021 report, released today, Meta is fighting non-advertising content in the United States (US): weapons and drug content on Facebook and terrorism content on Instagram.

In compliance with its policies, the content and pages that are removed relate to inauthentic behavior, phishing, and harassment.

It should be noted that during the third quarter of 2021, the content most often removed from Facebook was that which posed a real risk of harm, such as physical violence, hate speech, or false cures for Covid-19, but in the fourth quarter the balance of harmful content shifted.

The content that prevails in Facebook and Instagram feeds

The most common experience in the Facebook feed is seeing posts and links from friends or from groups you belong to.

In the last quarter of 2021, 86.8 percent of feed views came from posts shared by friends; 10.7 percent came from Feed recommendations, also known as "unconnected posts"; and the remaining 2.5 percent came from products such as events.

In the same period, the prevalence of adult nudity and sexual activity on Facebook was 0.03 percent, remaining constant across the third and fourth quarters.

For its part, on Instagram content about adult nudity and sexual activity increased by 227,08.

This, despite the fact that Facebook has 14 content policies and Instagram 12, among which the display of adult nudity and sexual activity is restricted; on Instagram such content is removed to prevent the sharing of non-consensual imagery or content involving minors. However, content posted in protest or for educational reasons is still allowed on the platforms.

According to Meta's transparency page, the prevalence of harmful content on Facebook and Instagram remained consistent.

One of the metrics that remained constant was bullying and harassment. It is worth noting that Meta only began including these in its Community Standards Enforcement Report last year, as harassment is one of the most complex problems to address and the platforms are meant to be a safe place to connect and interact with the community.

In the third quarter of last year, the prevalence of bullying and harassment was 0.14-0.15 percent on Facebook and 0.05-0.06 percent on Instagram; that is, bullying and harassment appeared between 14 and 15 times per 10,000 views of content on Facebook and between 5 and 6 times per 10,000 views of content on Instagram.
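The conversion between a prevalence percentage and the "views per 10,000" figure above is simple arithmetic; here is a minimal Python sketch (ours, not Meta's methodology) showing the mapping:

```python
# Minimal sketch (not Meta's code): convert a prevalence percentage
# into the "violating views per 10,000 content views" figure used above.
def views_per_10k(prevalence_pct: float) -> float:
    # 0.14% of all views -> 0.0014 * 10,000 = 14 views per 10,000
    return prevalence_pct / 100 * 10_000

print(views_per_10k(0.14))  # 14.0 (Facebook, lower bound of 0.14-0.15%)
print(views_per_10k(0.15))  # 15.0 (Facebook, upper bound)
print(views_per_10k(0.05))  # 5.0  (Instagram, lower bound of 0.05-0.06%)
```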

To reduce the presence of bullying and harassment, people who engage in it are penalized on the social networks; certain content and accounts linked to that behavior are removed; sexualizing attacks on public figures are eliminated; and warning screens have been added to educate users and discourage practices considered bullying or harassment.

What kind of content is Meta striving to remove from Facebook and Instagram?

In the fourth quarter of 2021, Meta took action on Facebook against content related to drugs and firearms.

Drug-related content actioned on Facebook increased by 2.7 million pieces, with the total acted upon standing at 4 million pieces.

Likewise, firearms-related content increased by 1.1 million pieces since the third quarter of the same year, with the content acted upon standing at 1.5 million pieces.

A smaller effort, though no less important, went to spam, where action was taken on 1.2 billion pieces of spam content.

As for Instagram, in the fourth quarter Meta took action on 905 thousand pieces of terrorism-related content, an increase of 220 thousand compared to the third quarter.

It also took action on 195 thousand pieces of firearms-related content, an increase of 154 thousand compared to the third quarter.

“The prevalence of harmful content on Facebook and Instagram remained relatively constant and decreased in some areas from Q3 to Q4, in part due to improvements and expansions in our proactive detection technologies,” Meta said in a statement.

Meta is at a decisive moment to prove to its users that the content distributed in their feeds is filtered for what it considers harmful, dispelling the idea that its platforms optimize for content that generates discussion, hate, division, and polarization.

This, even though in its May 2021 report "Our Commitment to Safety" it stated that in the fourth quarter of 2020 it removed 97 percent of hate speech content.

It is well known that Facebook has a global team that monitors the quality of content; however, it does not have a system dedicated to validating the safety of users' feeds, only one for harmful content. That is where the tech giant's technology must show its potential.


