Meta, the parent company of Facebook and Instagram, is facing criticism for failing to address child predators on its platforms. Recent investigations by The New York Times and The Wall Street Journal revealed that Meta allows parents to exploit their children for financial gain on Facebook and Instagram, some even through Meta’s paid subscription tools.
The reports describe a disturbing trend: parents running social media accounts for underage children, mostly girls, who do not meet the platforms’ minimum age requirements. These “parent-managed minor accounts” have been found selling material to adult men, including photos of the children in revealing attire, exclusive chat sessions, and even the children’s used leotards and cheer outfits.
While these accounts may not feature illegal content or nudity, Meta staff discovered that some parents were knowingly producing material that would be sexually gratifying to pedophiles. Shockingly, some of these parents engaged in sexually charged conversations about their own children and had them respond to sexual messages sent by subscribers. Meta’s algorithms also promoted subscriptions for accounts featuring child models to suspected pedophiles, and some parents sold additional content of their children on other platforms.
Meta’s response to these findings has been inadequate. Its own staff recommended concrete measures, such as requiring accounts that sell child-focused subscriptions to register for monitoring, or banning subscriptions to such accounts entirely. Instead, the company focused on building an automated system to prevent suspected pedophiles from subscribing, a system that proved unreliable and was easily evaded by simply creating new accounts.
The design of Meta’s recommendation algorithms compounds the problem. Even accounts that are not intentionally insidious, such as those for child models, athletes, and performers, benefit from attracting large audiences of adult men: the visibility boost translates into financial incentives and brand partnerships. Some companies reportedly pay child influencers $3,000 for a single post, and some accounts earn six-figure incomes through monthly subscriptions.
Meta’s failure to address child predators on its platforms is not new. The company already has a poor reputation on child protection and has been accused of creating a “marketplace for predators in search of children.” A lawsuit filed by the New Mexico attorney general in December accused Instagram and Facebook of promoting sexually explicit or suggestive material featuring children to pedophiles.
Meta’s moderation record is equally troubling. The Times reported that Meta acted on just one of the 50 reports the publication filed over an eight-month period about questionable content featuring children. An internal Meta study from 2020 found that 500,000 child Instagram accounts had “inappropriate” interactions every day.
By comparison, TikTok has taken a stronger stance: the Journal reported that it bans the sale of underage modeling content both on its marketplace and through its creator monetization services.
The revelations about Meta’s failure to address child predators on Facebook and Instagram are deeply disturbing. The company’s inaction and ineffective moderation raise serious questions about the safety and well-being of children on its platforms, and each new report makes it clearer that Meta must take immediate, decisive action to protect its young users from exploitation and harm.