For years, YouTube has removed videos with derogatory slurs, misinformation about Covid vaccines and election falsehoods, saying the content violated the platform’s rules.

But since President Trump’s return to the White House, YouTube has encouraged its content moderators to leave up videos that may break the platform’s rules rather than remove them, as long as the videos are considered to be in the public interest. That category includes discussions of political, social and cultural issues.

The policy shift, which hasn’t been publicly disclosed, made YouTube the latest social media platform to back off efforts to police online speech in the wake of Republican pressure to stop moderating content. In January, Meta made a similar move, ending a fact-checking program on social media posts. Meta, which owns Facebook and Instagram, followed in the footsteps of X, Elon Musk’s social media platform, and turned responsibility for policing content over to users.

But unlike Meta and X, YouTube has not made public statements about relaxing its content moderation. The online video service introduced its new policy in mid-December in training material that was reviewed by The New York Times.