The new features were launched on the popular social media platform on Wednesday, and are expected to have a positive impact on discourse taking place on the app. Similar strategies have been used on other popular social media platforms in recent years.
The first new feature launched by TikTok is called "Filter all comments." It allows users to choose whether comments posted on their uploaded videos are shown or hidden. When the feature is turned on, comments will not appear automatically, but only after being approved by the video's creator - allowing users to limit hurtful or irrelevant comments on their videos.
The second feature, "Consider before you comment," will encourage TikTok users to be nicer to one another by making them pause before posting hateful messages. Whenever a user is about to post a comment that includes violent words or an aggressive undertone, a pop-up message will appear asking: “Would you like to reconsider posting this?” before encouraging the user to edit the comment or post it anyway.
TikTok has recently announced a zero-tolerance policy toward violent content or discourse on its app.
"That's why the company has gone to great lengths to block violent content from appearing on the platform and is now adding technological solutions on the app, aimed at combating violent discourse that might arise from controversial content being shared on the platform," a statement released by TikTok read.
TikTok's announcement reflects increasing instances of violence and hate speech being shared on the platform and reported by its users, many of whom are underage. In some cases, such as the violent crackdown on protesters in Myanmar, TikTok was used in a political context by soldiers spreading fear among protesters.
Users have also reported antisemitism directed at Jewish or Israeli content on the platform, indicating a growing need for tools that let creators turn away uninvited commenters looking only to spread hate.
Other social media platforms have taken similar steps to combat online bullying and promote a more respectful discourse. In late 2019, Instagram released a feature powered by "AI that notifies people when their comment may be considered offensive before it’s posted."