YouTube Removes More Videos, but Still Misses a Lot of Hate

On Tuesday, YouTube said that it removed more than 17,000 channels and over 100,000 videos between April and June for violating its hate speech rules. In a blog post, the company pointed to the figures—which are five times as high as the previous period’s total—as evidence of YouTube’s commitment to policing hate speech and its improved ability to detect it. But experts warn that YouTube may be missing the forest for the trees.

“It’s giving us the numbers without focusing on the story behind those numbers,” says Rebecca Lewis, an online extremism researcher at Data & Society whose work primarily focuses on YouTube. “Hate speech has been growing on YouTube, but the announcement is devoid of context, and is missing [data on] the money makers actually pushing hate speech.”

Lewis says that while YouTube reports removing more videos, the figures lack the context needed to assess YouTube’s policing efforts. That’s particularly problematic, she says, because YouTube’s hate speech problem isn’t necessarily about quantity. Her research has found that users who encounter hate speech are most likely to see it on a prominent, high-profile channel, rather than from a random user with a small following.

A study of over 60 popular far-right YouTubers conducted by Lewis last fall found that the platform was “built to incentivize” polarizing political creators and shocking content. “YouTube monetizes influence for everyone, regardless of how harmful their belief systems are,” the report found. “The platform, and its parent company, have allowed racist, misogynist, and harassing content to remain online—and in many cases, to generate advertising revenue—as long as it does not explicitly include slurs.”

A YouTube spokesperson said changes in how the platform identifies and reviews content that may violate its rules likely contributed to the dramatic jump in removals. YouTube began cracking down on so-called borderline content and misinformation in January; in June, it revamped its policies prohibiting hateful conduct in an attempt to more actively police extremist content, like that produced by the neo-Nazis, conspiracy theorists, and other hate mongers that have long used the platform to spread their toxic views. The update prohibited content that promoted the superiority of one group or person over another based on their age, gender, race, caste, religion, sexual orientation, or veteran status. It also banned videos that espouse or glorify Nazi ideology, and those that promote conspiracy theories about mass shootings or other so-called “well-documented violent events,” like the Holocaust.


It makes sense that the broadening of YouTube’s hate speech policies would result in a larger number of videos and channels being removed. But the YouTube spokesperson said the full effects of the changes weren’t felt in the second quarter. That’s because YouTube relies on an automated flagging system that takes a couple of months to get up to speed when a new policy is introduced, the spokesperson said.

After YouTube introduces a new policy, human moderators work to train YouTube’s automated flagging system to spot videos that violate the new rule. After providing the system with an initial dataset, the human moderators are sent a stream of videos that have been flagged by YouTube’s detection systems as potentially violating those rules and asked to confirm or deny the accuracy of the flag. The setup helps train YouTube’s detection system to make more accurate calls on permissible and impermissible content, but it takes a while—often months—to ramp up, the spokesperson explained.

Once the system has been properly trained, it can automatically detect whether a video is likely to violate YouTube’s hate speech policies based on a scan of images, plus keywords, title, description, watermarks, and other metadata. If the detection system finds that some aspects of a video are highly similar to other videos that have been removed, it will flag it for review by a human moderator, who will make the final call on whether to take it down, the spokesperson said.
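The feedback loop described above — an automated detector flags videos that resemble previously removed ones, a human moderator makes the final call, and confirmed removals feed back into the detector's examples — can be sketched roughly as follows. This is a hypothetical illustration, not YouTube's actual system: the `Video` and `Detector` types, the metadata-overlap similarity measure, and the flagging threshold are all invented for the sake of the example.

```python
# Hypothetical sketch of a human-in-the-loop moderation pipeline.
# All names, thresholds, and the similarity measure are assumptions.

from dataclasses import dataclass, field

@dataclass
class Video:
    title: str
    metadata: set  # stand-in for keywords, description terms, watermarks, etc.

@dataclass
class Detector:
    removed_examples: list = field(default_factory=list)  # past removals

    def similarity(self, video: Video, removed: Video) -> float:
        # Crude proxy for real feature comparison: Jaccard overlap of metadata.
        union = video.metadata | removed.metadata
        return len(video.metadata & removed.metadata) / len(union) if union else 0.0

    def flag(self, video: Video, threshold: float = 0.5) -> bool:
        # Flag for human review if highly similar to any removed video.
        return any(self.similarity(video, r) >= threshold
                   for r in self.removed_examples)

def moderate(detector: Detector, video: Video, human_says_violates: bool) -> str:
    """A human moderator makes the final call on flagged videos;
    confirmed removals grow the detector's example set."""
    if detector.flag(video) and human_says_violates:
        detector.removed_examples.append(video)
        return "removed"
    return "kept"
```

The loop captures why a new policy takes months to ramp up: until moderators have confirmed enough removals under the new rule, the detector has too few examples to flag similar videos reliably.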

Lewis says this approach can be effective at policing spam or scams, but can be gamed by users, including far-right influencers who generate income from YouTube ads. “These types of influencers are very savvy at avoiding the sort of signals that an automated system would catch,” Lewis explained. “As a human, if you watch [many of] these videos from beginning to end, you can see they do involve targeted harassment and are absolutely in violation of YouTube’s policies.” But, she said, the videos often use coded language “to obscure the context.”
