Neighbors in Levittown, Pennsylvania, were left in shock after a horrifying incident inside a local home. Justin Mohn, a 32-year-old man, allegedly beheaded his father, Michael, and shared a 14-minute video of the gruesome act on YouTube. The video, which circulated for several hours before being removed, has once again highlighted gaps in social media companies’ ability to prevent the spread of violent and horrific content.
YouTube says it maintains strict policies against graphic violence and violent extremism. A spokesperson stated that the video was removed for violating the platform’s graphic violence policy and that Justin Mohn’s channel was terminated in line with its violent extremism policies. YouTube uses a combination of artificial intelligence and human moderators to monitor content, and in the third quarter of 2023 alone it took down 8.1 million videos for policy violations.
Still, questions remain about how the video stayed online for several hours before being taken down. YouTube did not say how the video was caught or why it was not removed sooner. The incident comes at a time when social media companies are under scrutiny for their handling of violent and harmful content, particularly with regard to child safety online.
The incident in Levittown drew comparisons to beheading videos previously posted online by Islamic State militants. This clip, along with other horrific footage shared on social media in recent years, underscores the urgent need for better moderation practices. Domestic mass shootings and acts of violence filmed abroad have been broadcast on social media, raising concerns about the effectiveness of current moderation efforts.
Experts argue that while automated systems can help detect prohibited content, they sometimes fall short with new or unusually violent and graphic footage. This is where human moderators play a crucial role. The Global Internet Forum to Counter Terrorism (GIFCT), a group established by tech companies to prevent the spread of such material, also alerted its members about the video. By then, however, it had already spread to other platforms, such as X (formerly known as Twitter), where it remained for at least seven hours.
The incident in Levittown further highlights the role of social media in radicalization and the ease with which individuals can access extremist ideologies. The internet provides a platform for like-minded individuals to connect and reinforce violent ideas. While social media platforms have policies in place to remove violent and extremist content, the emergence of newer, less closely moderated sites has allowed hateful ideas to fester unchecked.
Despite the challenges, social media companies must be more vigilant in regulating violent content. Social media has become a front line for extremism and terrorism, and efforts to push back must be more serious and committed. Reforms should include greater transparency about the impact of layoffs on these efforts and more investment in trust and safety workers.
The Levittown incident serves as a reminder that social media companies have a responsibility to protect users from violent and harmful content. As the debate surrounding online safety continues, it is crucial for these platforms to improve their moderation practices and take decisive action against graphic and extremist content.