YouTube will ask commenters to rethink posting if their message seems offensive

An attempt to weed out offensive comments

Illustration by Alex Castro / The Verge

YouTube is trying to combat offensive comments under videos by following in the footsteps of other social media companies and asking people, before they post something potentially offensive: “Is this something you really want to share?”

The company is launching a new feature that will warn people when a comment they’re about to post “may be offensive to others,” giving them “the option to reflect before posting,” according to a new blog post. The tool won’t actually stop anyone from posting the comment. Prompts won’t appear before every comment, only before ones that YouTube’s system deems offensive, a determination based on content that’s been repeatedly reported. Once the prompt appears, people can post the comment as they originally wrote it or take the extra time to edit it.

For creators, the company is also rolling out better content filtering in YouTube Studio, the backend where creators manage their channels. The new filter will catch inappropriate or hurtful comments that were automatically flagged and held for review, and remove them from the queue so creators don’t have to read them. The comment warning feature will roll out first on Android and in English before expanding to other platforms and languages.

There’s no question that YouTube has a problem with hurtful comments, and one of the bigger issues is outright hate speech. YouTube says that, thanks to automatic filtering, it now removes 46 times more hateful comments each day than it did in early 2019. Then there are videos: of the 1.8 million channels terminated last quarter, YouTube says more than 54,000 were banned for hate speech. That’s the most hate speech terminations the company has seen in a single quarter, and three times higher than in early 2019, when its updated hate speech policies went into effect.

Hateful speech and hurtful comments are just some of the problems that greatly affect the creator community

YouTube is also trying to combat other issues affecting creators, including monetization, bias, burnout, and concerns about channel growth. To better understand how different communities are affected, the company will begin asking YouTubers in 2021 to voluntarily share information about their gender, sexual orientation, race, and ethnicity.

The goal is to use the data to pinpoint how different communities are treated, both in how their videos are surfaced on the platform and in how they’re monetized. LGBTQ creators have consistently said that YouTube’s systems automatically demonetize or hide their videos, and they have publicly fought against that treatment. YouTube’s teams also want to use the data to find “possible patterns of hate, harassment, and discrimination.”

One of the biggest questions is how that data will be used and stored once it’s collected. YouTube’s blog post says the survey itself will explain how the information will be used in the company’s research and what control creators retain over their data, but the post doesn’t spell out those details now. For the moment, the company says only that the information won’t be used for advertising purposes and that people will be able to opt out and delete their data at any time.

“If we find any issues in our systems that impact specific communities, we’re committed to working to fix them,” the blog post reads.

There’s no current timeline for when the surveys will roll out, but more information about the project will be released in early 2021.