TikTok Reportedly Leads Children's Profiles to Explicit Material Within a Few Clicks

A new investigation has found that the widely used social media app can direct children's accounts to pornographic content within a handful of clicks.

How the Study Was Conducted

A campaign organization set up fake accounts registered with a minor's date of birth and turned on the app's "restricted mode", which is intended to limit exposure to inappropriate content.

Investigators found that TikTok suggested sexualised and adult-themed search terms to the test accounts, which were set up on unused smartphones with no search history.

Alarming Recommendation Features

The terms suggested under the "you may like" feature included "very very rude skimpy outfits" and "inappropriate female imagery" – and then escalated to phrases such as "explicit adult videos".

For three of the accounts, the adult-oriented search terms were suggested immediately.

Quick Path to Pornography

After just a few taps, the researchers found pornographic content ranging from women flashing to penetrative sex.

Global Witness said the content appeared designed to evade moderation, often by embedding the clip within an otherwise innocuous picture or video.

In one instance, the method took two clicks after logging on: one tap on the search bar and then one on the recommended term.

Regulatory Context

The research organization, whose remit includes investigating digital platforms' effects on public safety, said it carried out multiple phases of testing.

One phase took place before child protection rules under the United Kingdom's digital protection law came into force on 25 July, and a second after the rules took effect.

Serious Findings

Investigators added that some of the content featured someone who appeared to be underage, and said it had been reported to the child protection organization, which monitors online child sexual abuse material.

The research organization alleged that TikTok was in breach of the digital protection law, which requires tech companies to prevent children from encountering harmful material such as explicit content.

Regulatory Response

A spokesperson for Britain's media watchdog, which is responsible for enforcing the act, said: "We acknowledge the research behind this study and will review its findings."

The regulator's guidance on complying with the act states that tech companies posing a significant risk of showing inappropriate videos must "configure their algorithms to remove dangerous material from minors' content streams".

The platform's rules ban explicit material.

Platform Response

The social media company said that after being alerted by Global Witness, it removed the violating content and made changes to its search suggestion feature.

"Immediately after notification of these claims, we responded quickly to look into the matter, delete material that contravened our rules, and introduce upgrades to our search prompt functionality," an official spokesperson commented.

Melissa Sheppard

A passionate writer and life coach dedicated to helping others achieve their dreams through storytelling and actionable advice.