Matt Halprin, YouTube’s Global Head of Trust and Safety, said the new policy will “…not only prohibit explicit threats, but also veiled or implied threats. This includes content simulating violence toward an individual or language suggesting physical violence may occur.” He also added, “We will no longer allow content that maliciously insults someone based on protected attributes such as their race, gender expression, or sexual orientation. This applies to everyone, from private individuals, to YouTube creators, to public officials.” You’d have thought that wouldn’t need to be added, but here we are.

Halprin specifically mentions an incident “earlier this year” that led the company to re-examine its policy. The incident in question is the alleged harassment of Vox journalist Carlos Maza, which YouTube dismissed at the time as merely a difference of opinion: “Opinions can be deeply offensive, but if they don’t violate our policies, they’ll remain on our site.” It later demonetized the YouTuber offering the opinion, but, as Maza pointed out, that did virtually nothing to stop him.

Needless to say, the response to this wasn’t pretty. It was exacerbated by the fact that Maza was targeted in part because of his sexuality, and that he spoke out during Pride Month, while YouTube was running a big, splashy promotion celebrating its LGBTQ+ creators and how it was helping them. Within days, YouTube issued a statement saying it would reexamine its harassment and hate speech policies “in consultation with experts, creators, journalists and those who have, themselves, been victims of harassment.”
— YouTube (@YouTube) May 29, 2019

The company also intends to examine whole channels, since it’s often no single thing in one video that qualifies as harassment or hate speech. Content creators found to “brush up against our harassment policy” too often can be removed from the partner program, have their videos taken down or, in some cases, have their entire channel removed.

This is something that’s going to take a lot of work to enforce — by their very nature, veiled threats are hard to quantify. YouTube is also going to have to walk a very fine line lest it do too much or too little. Neal Mohan, the company’s chief product officer, told the New York Times, “There’s a lot of nuance and context that’s important here, but it is really something we want to get right on our platform. We don’t want this to be a place where individuals are harassed. We want to take a clear line about that.”

It feels as though YouTube’s 2019 has been one long clean-up operation. It’s had to reckon with extremist content and the lack of protection for children, in addition to its toxicity problem.