
Exclusive: Psychologist accuses Meta of ignoring dangerous posts

A leading psychologist advising Meta on suicide prevention and self-harm has resigned, accusing the tech giant of disregarding harmful content on Instagram, repeatedly ignoring expert advice, and prioritizing profit over lives.

Lotte Rubæk, a member of Meta’s global expert group for over three years, stated that the company’s ongoing failure to remove self-harm images from its platforms is triggering vulnerable young women and girls to harm themselves further, contributing to a rise in suicide rates.

Rubæk’s disillusionment with Meta and its apparent lack of willingness to change led her to resign from the group. She believes Meta is indifferent to its users’ well-being and safety and instead uses harmful content to keep vulnerable young people glued to their screens for the sake of profit.

In her resignation letter, she stated, “I can no longer be part of Meta’s SSI expert panel, as I no longer believe that our voice has a real positive impact on the safety of children and young people on your platforms.”

In an interview with the Observer, Rubæk remarked, “On the surface it seems like they care, they have these expert groups and so on, but behind the scenes there’s another agenda that is a higher priority for them.”

According to her, this agenda involves “how to keep their users’ interaction and earn their money by keeping them in this tight grip on the screen, collecting data from them, selling the data and so on.”

A spokesperson from Meta stated, “Suicide and self-harm are complex issues, and we take them incredibly seriously. We’ve consulted with safety experts, including those in our suicide and self-harm advisory group, for many years, and their feedback has helped us continue to make significant progress in this space.

“Most recently, we announced that we’ll hide content discussing suicide and self-harm from teens, even if shared by someone they follow. This is one of many updates we’ve made after thoughtful discussions with our advisers.”

Rubæk’s warning comes as research published last week by Ofcom found that violent online content is “unavoidable” for children in the UK, with many first encountering it while still in primary school. Instagram was among the main apps cited by those interviewed.

Rubæk, who leads the self-injury team in child and adolescent psychiatry in the Capital Region of Denmark, was first approached about joining the exclusive group of experts, which publicly lists 24 members, in December 2020. The invitation came after she had publicly criticized Meta, then known as Facebook, over an Instagram network linked to the suicides of young women in Norway and Denmark, exposed in a documentary by Danish broadcaster DR.

She initially accepted the invitation in the hope of making the platform safer for young people. But after several years of seeing her recommendations disregarded, with the network she had originally criticized still in existence, she concluded that the panel was merely for show.

She now suspects that the invitation may have been an attempt to silence her. “Perhaps they wanted me to join them so I would be less critical of them in the future.”

Emails reviewed by the Observer show that in October 2021, Rubæk raised concerns about the difficulty users faced when trying to report potentially triggering images to Meta. In correspondence with Martin Ruby, Meta’s head of public policy in the Nordics, she described attempting to report an image of an emaciated female, only to receive a message from Instagram saying it did not have enough moderators to review the image, which remained on the platform.

In November 2021, Ruby responded, stating, “Our team is reviewing the situation, but it’s not straightforward.” He also referenced the secret Instagram network that Rubæk had initially criticized, noting that Meta was “investigating further.”

Rubæk asserts that the network, despite its well-documented association with suicides, is still operational today.

Rubæk’s patients report attempting to flag self-harm images on Instagram, but the images often remain visible. One patient mentioned that after she reported an image, it disappeared, only to reappear later on a friend’s account, indicating that it had simply been hidden from her view.

Rubæk remarked that Meta employs various tactics to avoid removing content. She stated, “The AI is very sophisticated, capable of identifying even the smallest detail like a nipple in a photo.” However, when it comes to graphic self-harm images known to incite harm in others, she added, it seems to be a different story.