It’s no secret that Facebook has ongoing misinformation and disinformation issues. These rub shoulders with other problems, such as those relating to deepfakes. And, of course, the company has made strides to combat these issues, including somewhat unofficial measures aimed at fighting fake or paid product reviews on the Facebook platform. Now parent company Meta has announced that it is making those measures more official for the Facebook platform in the United States.
How does the policy change help Facebook combat fake reviews?
Many aspects of the review policy will remain similar to what is effectively already in place. But now, Meta’s community feedback policy for Facebook and its other apps and platforms is explicit, prohibiting fake or paid reviews outright.
To begin with, that means users who post fake or paid reviews are more likely to face consequences for those activities. Users also can’t leave “irrelevant” or spam reviews, or reviews written in exchange for rewards. And reviews containing content that violates other policies, such as graphic or offensive content, are not allowed either.
Facebook will continue to evaluate reviews that may violate its policy on a case-by-case basis. That evaluation may be automated or handled by a real person. The company will give users found to be in violation the option to request a secondary content review, which will likely be performed by a real person rather than being run through automation again.
However, Meta also indicates that infringing posts will be deleted. In addition, repeat offenders can face much more serious consequences, such as long-term suspensions or permanent personal account bans. Companies involved in the activity could also lose access to product listings and branding.
Since the Meta-owned company has already been working to fight fake reviews, most end users are unlikely to see a big change on their end. But the move underscores how important it is for users of the ever-popular social platform to share only legitimate reviews.