
Instagram Linked to Rise in Self-Harm Among Teens, Study Warns

Instagram Accused of Fostering Self-Harm Networks

A new study has ignited serious concerns about Meta’s handling of harmful content on Instagram, alleging that the platform’s algorithms are not only failing to remove explicit self-harm images but are actively promoting connections between users engaging in such behavior.

The Danish organization Digitalt Ansvar, dedicated to responsible digital development, conducted the study by deliberately creating a private Instagram group and sharing increasingly graphic self-harm content within it. Shockingly, not a single image was removed by Meta’s AI-powered moderation systems over the month-long experiment, even though a simple AI tool the organization used identified 38% of the harmful images and 88% of the most severe ones.

"We thought that when we did this gradually, we would hit the threshold where AI or other tools would recognize or identify these images," said Ask Hesby Holm, CEO of Digitalt Ansvar. "But big surprise – they didn’t."

The findings suggest that Instagram’s algorithms are not just failing to address the issue, but may be contributing to the expansion of self-harm networks. The study revealed that teenagers connected with all members of the self-harm group after being introduced to just one member.

"This suggests that Instagram’s algorithm actively contributes to the formation and spread of self-harm networks," the study concluded.

The research raises alarming questions about Instagram’s compliance with the EU’s Digital Services Act, which requires large online platforms to identify and mitigate systemic risks, including those affecting mental well-being.

"Failing to moderate self-harm images can result in ‘severe consequences’," warned Holm. "This is highly associated with suicide. If there’s nobody flagging or doing anything about these groups, they go unknown to parents, authorities, those who can help support."

Meta claims to remove about 99% of harmful content before it is reported, relying heavily on AI-driven moderation. The company also pointed to its recent launch of "Instagram Teen Accounts," which places stricter content controls on teenagers’ feeds.

However, Dr. Lotte Rubæk, a leading psychologist who left Meta’s global expert group on suicide prevention earlier this year, expressed profound disappointment with the findings.

"I wouldn’t have thought it would be zero out of 85 posts that they removed," she said. " I hoped that it would be better. They have repeatedly said in the media that all the time they are improving their technology and that they have the best engineers in the world. This proves, even though on a small scale, that this is not true.”

Rubæk argued that Meta’s inaction on self-harm content is "triggering" vulnerable young people and contributing to rising suicide rates. She said the situation has worsened since her departure from the expert group, a deterioration she sees reflected in the struggles of her patients.

"The issue of self-harm on Instagram is a matter of life and death for young children and teenagers," she stated. "And somehow that’s just collateral damage to them on the way to making money and profit on their platforms.”

Seeking Help:

If you or someone you know is struggling with self-harm or suicidal thoughts, please reach out for help. In the U.S., call or text 988 for the Suicide & Crisis Lifeline or chat at 988lifeline.org. You are not alone.

## The Dark Side of the Algorithm: An Interview with Digitalt Ansvar on Instagram and Self-Harm

**World Today News Exclusive**

A recent study by the Danish non-profit Digitalt Ansvar has sent shockwaves through the tech world, raising serious questions about Meta’s responsibility for safeguarding users on Instagram. The research alleges that Instagram’s algorithms, designed to connect users with similar interests, may be inadvertently fostering networks of self-harm and endangering vulnerable individuals.

We sat down with **Anne-Sofie Risager, Head of Research at Digitalt Ansvar**, to delve into the findings and their potential ramifications.

**WTN:** Your study paints a troubling picture of Instagram’s role in the spread of self-harm content. Can you elaborate on your key findings?

**Risager:** We intentionally created a test account and followed a small number of accounts openly promoting self-harm. Within days, Instagram’s algorithm began suggesting a deluge of similar accounts, hashtags, and groups, effectively building a network focused on self-injury. This suggests the platform’s algorithms are not only failing to identify and remove harmful content but are actually amplifying its reach.

**WTN:** What were the most disturbing aspects of the content uncovered by your research?

**Risager:** The explicit nature of the images and the sense of community fostered within these networks was chilling. Users openly shared graphic self-harm content, encouraged each other to engage in risky behaviours, and even provided tips on how to conceal such acts from concerned parties. It’s a deeply disturbing environment for anyone, let alone vulnerable individuals struggling with mental health issues.

**WTN:** This research raises serious ethical concerns about Meta’s responsibility for content moderation. What should be their immediate response?

**Risager:** Meta needs to acknowledge the severity of the issue and take concrete steps to address it. This includes investing in more robust content moderation tools, hiring trained experts to evaluate flagged content, and ensuring greater transparency in its algorithms.

**WTN:** Some argue that online platforms like Instagram should not be held responsible for the actions of individual users. How do you respond to this argument?

**Risager:** While platforms cannot be expected to monitor every post in real time, they have a duty of care to their users. Their algorithms actively shape the online experience, and when they fail to address known harms, they contribute to the problem. It’s not about censorship, but about creating a safer online environment for everyone.

**WTN:** What are the potential consequences of inaction from Meta and other social media companies?

**Risager:** The consequences could be devastating. The normalization of self-harm online can have real-world repercussions, leading to increased rates of self-injury and suicide. We need decisive action from tech companies to break the cycle of harm and protect vulnerable individuals.

**WTN:** Thank you for sharing your insights, Ms. Risager. This is clearly an issue that demands immediate attention.

**Note:** We reached out to Meta for comment, but they did not respond by the time of publication.

This interview highlights the urgent need for a conversation about the responsibility of social media platforms in protecting their users from harmful content.
