Internal TikTok Files Reveal Content Moderation Failures and Mounting Concerns

Confidential TikTok documents, revealed in a U.S. court filing seen by RTÉ, show that senior staff within the company were seriously concerned about the app's harmful effects on young users as a result of compulsive use.

The documents also shed light on the volume of content breaching TikTok's own guidelines that goes unmoderated. Among the statistics presented in the court filings: 35.71% of material categorized as 'Normalisation of Pedophilia' was found to be unmoderated, along with 33.33% of content relating to 'Minor Sexual Solicitation' and 39.13% relating to 'Minor Physical Abuse'. Half of the content glorifying 'Minor Sexual Assault' and 100% of material labeled 'Fetishising Minors' also evaded moderation.

The information came to light inadvertently, as a result of a flawed digitization and redaction process, after documents from a case initiated by the Kentucky Attorney General's Office were publicly released.

Kentucky is one of 14 U.S. states currently pursuing legal action against TikTok, alleging that the app was designed with features intended to foster addiction and cause harm to minors.

While not all of the documents are dated, some of the material is as recent as May 2022. Internal TikTok communications, documents, and research findings were disclosed in these lawsuits, with confidentiality agreements stipulating that certain details be redacted.

In a statement to Prime Time, TikTok claimed that the states’ “complaint selectively uses outdated documents and presents misleading quotes to mischaracterize our dedication to community safety.”

WATCH: Internal TikTok documents unveil failures in content moderation and rising concerns.


Meanwhile, TikTok, which has its European headquarters situated in Dublin, is facing scrutiny from the European Commission regarding the “protection of minors” and the “management of risks associated with addictive design and harmful content.”

Among the alarming revelations in the U.S. filings is data from an internal TikTok study on the moderation of content related to suicide and self-harm, referred to internally as 'SSH'. The analysis showed that videos on these sensitive topics can pass through the first two stages of moderation, designated 'R1' and 'R2', with "overlooked or improperly moderated videos gaining traction before being flagged."

According to the referenced study, “the SSH videos that slipped past R1 and R2 garnered an average of 75,370 views on TikTok before being identified for removal.”

Responding to inquiries about their practices, TikTok stated that “99% of the content we remove for policy violations has fewer than 10,000 views at the time of its removal.”

The court documents also cite an internal presentation from TikTok’s Trust and Safety team, noting that around “42% of users are ‘comment-only’ participants,” while the human review of comments is “disproportionately low.” “Human moderation for comment review stands at just 0.25%,” the presentation highlighted, meaning most concerning comments aren’t subject to human oversight.

Content moderation related to unhealthy eating habits and weight loss has also raised alarm bells within TikTok. An internal document states that rather than banning harmful content outright, certain posts are simply labeled as ‘not recommended.’ Consequently, these posts don’t surface in user feeds but remain accessible through the search function.

Users’ feeds on TikTok are shaped by an algorithm that curates content based on their previous interactions rather than what they actively seek. Regulators and policymakers are increasingly apprehensive about the potential of TikTok’s potent algorithm luring younger audiences into radical or extreme content through their feeds.

The platform is also criticized for its addictive nature. This phenomenon is sometimes referred to as “the rabbit hole effect,” denoting the tendency of users to fall deeper into negative content through extended usage. According to communications referenced in the court documents, TikTok executives were aware of these implications.

“The reason kids gravitate toward TikTok is the algorithm’s effectiveness,” one executive said in internal discussions. They added, “However, we ought to remain aware of the ramifications for various aspects of life such as sleep, eating, roaming around, or even making eye contact with others.”

In light of concerns regarding the rabbit hole effect and its mental health implications, TikTok embarked on various internal studies, where employees created new accounts to experience the app as a user would. One employee shared their experience, stating, “After following several accounts focused on pain and sadness, I found myself trapped in a ‘negative filter bubble’ within just 20 minutes. The overwhelming volume of negative content affected my mood, despite my otherwise positive mindset.”

Furthermore, the TikTank Report, another internal document, revealed that “compulsive engagement links to a range of negative mental health outcomes, including diminished analytical abilities, memory retention, contextual reasoning, conversational richness, empathy, and heightened anxiety.”

TikTok assured Prime Time that “safety remains a fundamental concern central to our mission.”

In the public eye, TikTok has responded to the criticism regarding the app’s addictive attributes by asserting that it implements ‘well-being measures’ and increasingly presents users with a broader array of content.

This ‘dispersion of content’ strategy entails showing users a more varied selection of topics within their feeds. TikTok announced that it would bolster such measures following a June Prime Time report highlighting these concerns.

READ: 13 on TikTok: Shocking self-harm and suicide content prompts experts’ alarm READ: TikTok concludes review of harmful content post-RTÉ coverage

Nevertheless, the Kentucky court filings assert that the experts TikTok consulted over the years “unanimously” advised against dispersion and in favor of a different approach to combatting dangerous rabbit holes: developing user-centric algorithms that let individuals discover other engaging content that steers them away from specific harmful directions.

Additional methods TikTok claims to have introduced to mitigate addictive tendencies among younger users include ‘Screen Time Management’ tools. These features prompt users to take breaks after an hour of usage.

The Kentucky AG court documents assert that this measure had “negligible impact.” According to the filings, after conducting an assessment, the platform found that these default screen time prompts marginally reduced the average daily usage of TikTok among teenagers—dropping from approximately 108.5 minutes to around 107 minutes.

Insights from internal discussions regarding this measure are also noted in the filings. A senior employee expressed skepticism about its potential impact on user engagement, stating, “After weighing these trade-offs with [a senior executive], it was indicated we could tolerate a 5% drop in usage with Screen Time Management for specific user groups like minors and heavy users.” They stressed, “However, this shouldn’t undermine retention—a major focus, given we do not anticipate a significant change in usage duration due to this merely increasing awareness rather than prompting substantive action.”

Another employee said, “Our objective isn’t primarily to reduce screen time but to enhance user satisfaction, ultimately boosting our daily active user rates and retention figures.”

In its defense, TikTok stated to Prime Time that it employs “robust safeguards, including proactive removal of suspected underage users.”

The documents filed by the Kentucky Attorney General accuse TikTok of evaluating its measures’ success using “three unrelated success metrics, one of which was ‘enhancing public trust in the TikTok platform through media coverage,’ rather than genuinely addressing time spent by teens on the platform.”

In response to Prime Time’s questions about the contents of the court filings, TikTok reiterated that it has instituted “robust safeguards, including proactive measures for removing suspected underage users,” and said it has voluntarily launched a range of safety features such as default screen-time limits and Family Pairing. The company, which faces significant legal challenges in both the U.S. and Europe over the societal impact of the content it hosts, also said it was “highly irresponsible” for media outlets to publish information that is under a court seal.

For insight into TikTok’s practices, don’t miss Kate McDonald’s report airing on the October 17 edition of Prime Time at 9:35 PM on RTÉ One.

Edited by: Ali Musa

alimusa@axadletimes.com

Axadle international–Monitoring
