Social media is messy, often ugly, and sometimes unsafe. Yet it fuels our lives and businesses. This is why apps ranging from YouTube to Instagram remain in the crosshairs of public scrutiny as their global bases of users and brands continue to grow. TikTok is now the most visited site in the world. But with great power comes great responsibility. Social apps such as TikTok have been under fire for being harmful to people, especially more vulnerable populations such as teens. That's why TikTok has announced measures to make the app safer.
TikTok announced new features and technologies that are intended to help viewers customize their viewing preferences and continue to have a safe and entertaining experience on TikTok. For example:
Users can apply a tool to automatically filter out videos with words or hashtags they don't want to see from their For You or Following feeds. The filter can apply to any kind of content – for example, someone who wants to see fewer meat recipes in their feed and more vegan recipes can filter accordingly. But the filter can also be used to restrict terms that a person finds inappropriate or harmful.
Content Levels ranks content based on “thematic maturity.” It is designed to keep mature content of all types from being shown to young users. As TikTok noted in a post, “Many people will be familiar with similar systems from their use in the film industry, television, or gaming and we are creating with these in mind while also knowing we need to develop an approach unique to TikTok. In the coming weeks, we’ll begin to introduce an early version to help prevent content with overtly mature themes from reaching audiences between ages 13-17. When we detect that a video contains mature or complex themes, for example, fictional scenes that may be too frightening or intense for younger audiences, a maturity score will be allocated to the video to help prevent those under 18 from viewing it across the TikTok experience.”

Source: TikTok
TikTok is also changing its algorithm to avoid recommending a series of similar content on topics that may be fine as a single video but potentially problematic if viewed repeatedly, such as topics related to dieting, extreme fitness, sadness, and other well-being topics. “We’ve also been testing ways to recognize if our system may inadvertently be recommending a narrower range of content to a viewer,” according to TikTok. “So, users’ For You feeds are less likely to be flooded with content that could be depressing or stressful if seen repeatedly.”
The changes are happening at a crucial time for TikTok. The most popular app in the world is facing a huge backlash related to teen safety, and it is under renewed scrutiny for data safety and integrity. But TikTok is not the only app under fire. Facebook, Instagram, and YouTube are among the many that have endured heavy criticism for user safety standards. Apps need to act.
And there is no easy fix. For instance, YouTube attempted to use artificial intelligence to clamp down on the proliferation of inappropriate content, but the effort backfired when legitimate LGBTQ+ content was unfairly demonetized. YouTube ended up changing its approach by involving more human judgment to moderate content.
For businesses that operate on social sites, brand safety is a legitimate problem, too. No business wants its brand appearing alongside inappropriate content. At the same time, businesses need to accept a higher tolerance for risk on social media: when you build your brand on social media, you play by someone else's rules. We recommend a carefully considered Connected Content approach with owned media (such as your website) at the center of the user experience and social media complementing owned media as your users' journeys dictate. Contact Investis Digital to learn how.