Researchers state that extreme content is being promoted to young people and is becoming normalized

According to a new report, algorithms employed by social media platforms are rapidly amplifying extreme misogynistic content. This content is spreading from teenagers’ screens to school playgrounds, where it has become normalized.

Researchers observed a four-fold increase in the level of misogynistic content suggested by TikTok over a five-day monitoring period. The algorithm promoted more extreme videos, often focusing on anger and blame directed at women.

Although this study focused on TikTok, researchers believe their findings likely apply to other social media platforms. They advocate a “healthy digital diet” approach to addressing the issue rather than outright bans on phones or social media, which they argue would likely be ineffective.

The study, conducted by teams from University College London and the University of Kent, coincides with growing concerns about the impact of social media on young individuals. Recent research indicates that young men from Generation Z, many of whom admire social media influencer Andrew Tate, are more inclined than baby boomers to believe that feminism has had a negative rather than positive impact.

In a separate development, the mother of Brianna Ghey, a murdered teenager, called for social media apps to be prohibited on smartphones for individuals under the age of 16 after learning about the online activities of her daughter’s killers.

The Safer Scrolling study by UCL and Kent argues that harmful content is presented as entertainment through social media’s algorithmic processes. It notes that toxic, hateful, or misogynistic material is actively promoted to young individuals, particularly impacting boys who are experiencing anxiety and poor mental health.

Principal investigator Dr. Kaitlyn Regehr from UCL Information Studies stated, “Harmful views and tropes are increasingly being normalized among young people. Online consumption is influencing the offline behavior of young individuals, as we observe these ideologies transitioning from screens to schoolyards.”

Researchers interviewed young people who were consuming or creating radical online content, and used these interviews to construct several archetypes of teenage boys who could be susceptible to radicalization. For each archetype, they set up a TikTok account with distinct interests, such as content related to masculinity or loneliness. They then analyzed more than 1,000 videos suggested by TikTok on its “For You” page over a seven-day period.

Initially, the suggested content aligned with the stated interests of each archetype. After five days, however, researchers found that the TikTok algorithm was presenting four times as many videos containing misogynistic content, including objectification, sexual harassment, and the disparagement of women: the share of such videos among the recommendations rose from 13% to 56%.
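To make the arithmetic behind that figure concrete, here is a minimal, purely illustrative Python sketch. The day labels and counts are hypothetical stand-ins chosen to match the published percentages; this is not the study's actual data or classification pipeline.

```python
# Illustrative sketch only: hypothetical counts, not the study's data pipeline.
# Each entry maps a monitoring day to (flagged videos, total videos recommended).
daily_counts = {
    1: (13, 100),  # hypothetical day-1 tally: 13% of recommendations flagged
    5: (56, 100),  # hypothetical day-5 tally: 56% of recommendations flagged
}

def share(flagged: int, total: int) -> float:
    """Return the proportion of recommended videos flagged as misogynistic."""
    return flagged / total

day1 = share(*daily_counts[1])
day5 = share(*daily_counts[5])

print(f"Day 1 share: {day1:.0%}")            # -> 13%
print(f"Day 5 share: {day5:.0%}")            # -> 56%
print(f"Fold increase: {day5 / day1:.1f}x")  # -> 4.3x, i.e. roughly fourfold
```

Dividing the day-five share by the day-one share (56% / 13% ≈ 4.3) is what yields the “four-fold increase” reported by the researchers.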

“Algorithmic processes on TikTok and other social media platforms target individuals’ vulnerabilities, such as loneliness or feelings of a loss of control, and gamify harmful content,” stated Regehr. “As young individuals engage with topics like self-harm or extremism in small doses, it can feel like entertainment to them.”

In addition to these observations, researchers interviewed young individuals and school leaders to understand the impact of social media. They discovered that hateful ideologies and misogynistic tropes have transitioned from screens to schools and have become ingrained in mainstream youth cultures.

Geoff Barton, the general secretary of the Association of School and College Leaders, which collaborated on the research, commented, “UCL’s findings highlight how algorithms, which are often poorly understood by most people, have a snowball effect, progressively delivering more extreme content in the guise of entertainment.”

“This is concerning on many levels, especially regarding the dissemination of messages related to toxic masculinity and its effects on young individuals who are in the crucial stage of growing up and developing their understanding of the world. They should not be exposed to such appalling content.”

“We urge TikTok and other social media platforms to immediately review their algorithms and enhance safeguards to prevent the dissemination of such harmful content. We also call upon the government and Ofcom to examine the implications of this issue within the framework of the new Online Safety Act.”

Andy Burrows, an advisor to the Molly Rose Foundation, established in memory of Molly Russell, who tragically took her own life after being exposed to distressing content on social media, commented, “This study underscores how TikTok’s algorithms aggressively target and inundate young people with harmful content. Within a short period, teens can be subjected to a continuous stream of unhealthy and sometimes perilous videos.”

“It is evident that the regulatory body Ofcom must take decisive and courageous steps to address algorithms that pose significant risks, prioritizing the safety and well-being of teenagers over the profits of social media companies.”

While visiting Northern Ireland, Prime Minister Rishi Sunak stated, “As a parent, I am constantly concerned about the impact of social media on my young daughters. That’s why I am pleased that we have enacted the Online Safety Act in the past year, granting the regulator strong new powers to regulate children’s online exposure.”

He continued, “If the major social media companies fail to comply with these regulations, the regulator can impose significant fines on them. Our current focus is on ensuring the effective implementation of this act.”

In response, a TikTok spokesperson stated, “Misogyny has always been prohibited on TikTok, and we proactively identify 93% of the content we remove for violating our hate speech policies. The methodology employed in this report does not accurately reflect the real user experience on TikTok.”

An Ofcom spokesperson commented, “Addressing violence against women and girls online is a top priority for us. Our research indicates that women are less confident about their personal online safety and are more affected by harmful content, such as trolling.”

The Online Safety Act requires online services, such as social media platforms and search engines, to protect users’ safety and rights. A key aspect of this is identifying and addressing content that disproportionately affects women and girls online.