YouTube Quietly Eased Moderation For Trump-era Content

YouTube quietly relaxed its content moderation policies in December, doubling the share of rule-breaking material allowed in videos deemed to be in the “public interest” from a quarter to half of their runtime, according to internal training materials reviewed by The New York Times.

The world’s largest video platform made the changes without any public announcement, instructing moderators to prioritize “freedom of expression” over removing content that violates its policies on hate speech, misinformation, and harassment. The shift came one month after Donald Trump’s election victory.

“Recognizing that the definition of ‘public interest’ is always evolving, we update our guidance for these exceptions to reflect the new types of discussion we see on the platform today,” YouTube spokesperson Nicole Bell told the Times. “Our goal remains the same: to protect free expression on YouTube while mitigating egregious harm.”

The policy applies to videos discussing elections, race, gender, immigration, and other politically sensitive topics. Moderators received instructions to “err against restricting content” when weighing free speech against potential harm.

The training materials included real-world examples of the new approach in action. Moderators were told to leave up a video titled “RFK Jr. Delivers SLEDGEHAMMER Blows to Gene-Altering JABS” despite its medical misinformation claiming that COVID vaccines alter human genes. YouTube determined that the public interest “outweighs the harm risk.”

Another approved video contained anti-transgender slurs in a 43-minute discussion of Trump cabinet appointees. The platform ruled that the video could stay because it contained only a “single violation” of the harassment rules.

Most striking was a South Korean video featuring commentators fantasizing about former President Yoon Suk Yeol’s execution by guillotine. YouTube approved it, reasoning that “execution by guillotine is not feasible.”

The changes follow similar moves by other major platforms after Trump’s victory. Meta ended its fact-checking program in January, while X gutted its content moderation operations after Elon Musk’s 2022 purchase. YouTube stands apart in making its changes without public disclosure.

Critics argue the shifts represent a “rapid race to the bottom” driven by political pressure rather than user safety. Imran Ahmed, CEO of the Center for Countering Digital Hate, said the moves prioritize corporate profits over safety.

“This is not about free speech,” Ahmed told the Times. “It’s about advertising, amplification, and ultimately profits.”

YouTube removed 192,586 videos for hateful content in the first quarter of 2025, a 22% increase from the previous year. The platform hasn’t disclosed how many additional videos would have been removed under previous guidelines.

The timing reflects tech companies’ concerns about potential government retaliation under the Trump administration. Google, YouTube’s parent company, currently faces two Department of Justice antitrust lawsuits that could force a breakup of its services.

Republican lawmakers have long criticized content moderation as censorship, pressuring platforms to reduce oversight of user-generated content.
