
Instagram actively helping spread of self-harm among teenagers, study finds


Meta is actively helping self-harm content to thrive on Instagram by failing to remove explicit images and by encouraging those who engage with such content to befriend one another, according to a damning new study that found its moderation “grossly inadequate.”

Danish researchers created a private self-harm network on the social media platform, featuring fake profiles of people as young as 13, in which they shared 85 pieces of self-harm-related content that gradually increased in severity, including blood, razor blades and encouragement of self-harm.

The purpose of the study was to test Meta’s claim that it has significantly improved its processes for removing harmful content, which it says now uses artificial intelligence (AI). The technology company claims to remove around 99% of harmful content before it is reported.

But Digitalt Ansvar (Digital Accountability), an organization that promotes responsible digital development, found that not a single image was removed in the month-long experiment.

A Meta spokesperson said: “Content that promotes self-harm is against our policies.” Photo: Algi Febri Sugita/Zuma Press Wire/Rex/Shutterstock

When Digitalt Ansvar developed its own simple AI tool to analyse the content, it was able to automatically identify 38% of the self-harm images and 88% of the most severe ones. This, the organisation said, showed that Instagram had access to technology capable of addressing the problem but “chose not to implement it effectively.”

The platform’s inadequate moderation, Digitalt Ansvar said, suggests that it is not complying with EU law.

The Digital Services Act requires large digital services to identify systemic risks, including foreseeable negative consequences for physical and mental wellbeing.

A Meta spokesperson said: “Content that promotes self-harm is against our policies and we remove this content when we find it. In the first half of 2024 we’ve removed more than 12 million pieces related to suicide and self-harm on Instagram, 99% of which we proactively removed.

“Earlier this year we launched Instagram Teen Accounts, which will put teenagers into the strictest setting of our sensitive content controls, so they are even less likely to be recommended sensitive content, and in many cases we hide that content entirely.”

However, the Danish study found that instead of trying to shut down the self-harm network, Instagram’s algorithm actively helped it expand. The research suggests that 13-year-olds were prompted to become friends with all members of the self-harm group after connecting with one of its members.

This, according to the study, “suggests that Instagram’s algorithm actively contributes to the formation and spread of self-harm networks.”

Speaking to the Observer, Ask Hesby Holm, chief executive of Digitalt Ansvar, said the company was shocked by the results, having expected that, as the severity of the shared images increased, they would set off alarms on the platform.

“We thought that when we did this incrementally, we would reach a threshold where AI or other tools would recognize or identify these images,” he said. “But big surprise – they didn’t.”

He added: “It was worrying because we thought they had some kind of machine trying to understand and identify that content.”


Failing to moderate images of self-harm can lead to “severe consequences,” he said. “It is strongly associated with suicide. So if nobody flags or does anything about these groups, they remain unknown to parents, authorities and those who could help support them.”

Meta, he said, does not moderate small private groups, such as the one his company created, in order to keep traffic and engagement high. “We don’t know if they’re moderating larger groups, but the problem is that self-harm groups are small,” he said.

Psychologist Lotte Rubæk quit Meta’s global expert panel after accusing the company of “turning a blind eye” to harmful content on Instagram. Photo: Linda Kastrup/Ritzau Scanpix/Alamy

Lotte Rubæk, a leading psychologist who left Meta’s global expert group on suicide prevention in March after accusing it of “turning a blind eye” to harmful content on Instagram, said that while she was not surprised by the overall findings, she was shocked to see that the most egregious content had not been removed.

“I wouldn’t have thought it would be zero out of 85 posts that they took down,” she said. “I was hoping it would be better.

“They have repeatedly said in the media that they are improving their technology all the time and that they have the best engineers in the world. This proves, albeit on a small scale, that it is not true.”

Rubæk said Meta’s failure to remove images of self-harm from its platforms “enables” vulnerable young women and girls to further self-harm and contributes to rising suicide figures.

Since leaving the global expert group, she said, the situation on the platform has only worsened, and the impact is clearly visible in her patients.

The issue of self-harm on Instagram, she said, is a matter of life and death for young children and teenagers. “And in a way it’s just collateral damage for them on the way to monetizing and profiting from their platforms.”
