A new report released today by the Anti-Defamation League (ADL) finds that five major social media platforms – Facebook, Instagram, TikTok, YouTube and X – routinely fail to take action on anti-Semitic hate reported through regular user channels.
The ADL’s Center for Technology and Society (CTS) evaluated both the platforms’ anti-Semitism policies and how those policies are enforced, assigning each of the five platforms a grade based on an established set of criteria.
X (formerly Twitter) scored the worst, receiving an “F”: although the ADL acknowledged that X, unlike some other platforms, did take action, most of that action consisted of limiting the visibility of problematic content rather than removing it. Facebook and TikTok each received a “C”; YouTube and Instagram each received a “C-”.
“Social media platforms continue to fall far short of expectations when it comes to moderating anti-Semitic and anti-Israel content,” said Jonathan A. Greenblatt, CEO and national director of the ADL. “Following Hamas’s unprecedented attack on Israel on October 7, Jewish users on social media are facing more anti-Semitic insults and harassment than ever before. It’s not hard to recognize this hate, but it takes leadership to consistently enforce the rules.”
ADL researchers analyzed the extent to which the five platforms enforced their policies against hateful content in two specific areas: anti-Semitic conspiracy theories and the term “Zionist” used as a slur. They concluded that, while all five platforms have adequate policies in place to respond to this type of hateful rhetoric (with the exception of
The report also found that most platforms only took action when the ADL escalated complaints through direct channels that were not available to regular users. And even then, the platforms only responded to some of the hate content.
In the year since October 7, the ADL has documented a dramatic increase in anti-Semitic hate online. The ADL’s annual report on online hate and harassment found that 47% of Jews saw anti-Semitic content or conspiracy theories related to the war between Israel and Hamas, compared to 29% of Americans overall. Hate speech has also changed in many cases: alongside the resurgence of old anti-Semitic conspiracy theories, new ones have taken shape.
“We have published several studies examining how platforms respond to anti-Semitism when it is reported by average users. Disturbingly, this report card shows the worst response rate to user complaints that we have ever seen,” said Daniel Kelley, acting director of the ADL Center for Technology and Society. “While we are encouraged by the platforms’ actions on reports made through our trusted flagger channels, we call on technology companies to conduct comprehensive audits of how their platforms moderate anti-Semitic content, to understand why this deeply troubling gap exists between their policies and their response to user complaints.”
Methodology
To test each platform’s enforcement of its anti-Semitism policies, the ADL first reported individual pieces of content using the tools available to a typical user. After a week, researchers checked whether any action had been taken. For content where no action or only partial action was taken (e.g., limiting the content’s visibility), the ADL then reported the content to direct contact points within the companies, usually through “trusted flagger” programs.
The ADL also evaluated each platform’s policies on conspiracy theories and anti-Semitic slurs. Comparing policies with actual content often reveals nuances, requiring extensive case-by-case analysis of potentially violating content. ADL platform policy experts assessed every piece of content sent to the platforms through trusted flagger channels.
After the remaining content had been reviewed through those trusted flagger channels, the ADL graded each platform on its response rate to content reported as a regular user and its cumulative action rate following trusted flagger reports.
Recommendations for platforms
- Improve the user complaint system. While we recognize the challenges of moderation at scale, these platforms should be able to process complaints, review content, and respond to hateful content in less than a week.
- Close the gap between policy and enforcement. Companies should conduct an internal audit to assess 1) whether employees are prepared to recognize anti-Semitic content and take appropriate action; 2) whether current policies account for the multidimensional nature of anti-Semitism; 3) their capacity to enforce these policies effectively; and 4) support mechanisms for users exposed to or affected by anti-Semitic hate and harassment.
- Provide independent researchers with access to data, including data on moderated content.
- Review reported content in context. Multiple complaints or other signals about the same content can provide context for content that might not otherwise appear to be infringing.
- Track emerging trends and shifts in hateful narratives as they change over time. In the case of anti-Semitic hate, these changes have accelerated since October 7.