YouTube is ordering workers to review thousands of hours of its most popular content and setting new limits on which videos can run ads, its latest moves to ease advertisers’ worries that their brands are showing up alongside offensive or controversial videos.
YouTube said Tuesday that human reviewers would watch every second of video in its curated lineup of top content, dubbed Google Preferred, which brands pay a premium to advertise on.
Human reviewers also will have to approve new videos uploaded by Google Preferred channels before the videos can begin running the premium ads.
YouTube, a unit of Alphabet Inc.’s Google, says Google Preferred comprises the most popular 5% of channels, as determined by their likes, comments and shares, among other factors. The company didn’t say how many hours of content that entails.
But YouTube has said since 2015 that users upload 400 hours of video to the site every minute, or 65 years of footage a day, meaning that even a small slice of that total would likely take tens of thousands of hours to review.
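The scale of that figure is easy to check. The following back-of-envelope sketch uses only YouTube's publicly stated upload rate; the 5% slice at the end is purely illustrative, since YouTube hasn't said what share of total footage Google Preferred channels account for.

```python
# Back-of-envelope arithmetic behind the figures in this story.
# The upload rate is YouTube's own stated number; the 5% review
# estimate is an illustrative assumption, not a company figure.

UPLOAD_HOURS_PER_MINUTE = 400  # YouTube's stated rate since 2015

hours_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24    # 576,000 hours uploaded daily
years_per_day = hours_per_day / 24 / 365.25          # ~65.7 years of footage per day

# Hypothetical: if the reviewed content tracked the "top 5% of channels"
# share of daily uploads, one day's worth alone would run to tens of
# thousands of review-hours.
review_hours_per_day = hours_per_day * 0.05          # ~28,800 hours

print(f"{hours_per_day:,.0f} hours/day ≈ {years_per_day:.0f} years of footage daily")
print(f"Illustrative 5% slice: {review_hours_per_day:,.0f} review-hours per day")
```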
The company expects to have the full review completed by the end of March, then continue to review new videos as they are posted.
YouTube is also raising the bar for channels that want to carry ads. Channels must now have at least 1,000 subscribers and have accumulated at least 4,000 hours of watch time over the past 12 months, up from the single threshold of 10,000 lifetime views that YouTube set last year. YouTube said a “significant” number of channels would be affected but declined to provide more details. The company said nearly all affected channels now make less than $100 a year in ad revenue.
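For clarity, here is a minimal sketch of how the old and new eligibility rules compare. The `Channel` fields and function names are hypothetical, invented for illustration; YouTube has not published anything resembling this interface.

```python
# Hypothetical illustration of the monetization-threshold change.
# None of these names correspond to a real YouTube API.

from dataclasses import dataclass

@dataclass
class Channel:
    subscribers: int
    watch_hours_past_12_months: float
    lifetime_views: int

def eligible_old(ch: Channel) -> bool:
    # Rule YouTube set last year: 10,000 cumulative (lifetime) views.
    return ch.lifetime_views >= 10_000

def eligible_new(ch: Channel) -> bool:
    # New rule: 1,000 subscribers AND 4,000 watch hours in the past 12 months.
    return ch.subscribers >= 1_000 and ch.watch_hours_past_12_months >= 4_000

# Example: a small channel that cleared the old bar but misses the new one.
ch = Channel(subscribers=300, watch_hours_past_12_months=900, lifetime_views=25_000)
print(eligible_old(ch), eligible_new(ch))  # True False
```

Note that the new test conjoins two requirements, which is why small channels with old but well-viewed uploads lose eligibility, consistent with YouTube's statement that nearly all affected channels earn under $100 a year.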
The steps show YouTube yielding to advertisers’ demands for more oversight of the videos it sells as ad space, even though such policies are likely to upset its network of video creators, who are crucial to the site’s reach and popularity.
Google has long touted YouTube to advertisers as a better alternative to television, with unprecedented scale and diversity of content. But those traits have also made the site difficult to police. Human reviewers could never watch all of YouTube’s content, while software often doesn’t understand what could be offensive.
As a result, news organizations and advertisers over the past year have discovered YouTube running their ads before extremist, racist and hateful videos. Many top brands pulled spending from the site in response, prompting the company to adopt stricter ad policies, hire more human reviewers and give brands more control over where their ads appear.
YouTube has faced other controversies over some of its most popular stars. Last year the company expelled Felix Kjellberg, also known as PewDiePie and the most popular creator on its platform, from the Google Preferred program after The Wall Street Journal reported on anti-Semitic jokes and Nazi imagery in some of his videos. Mr. Kjellberg later apologized for some of the jokes and said the Journal had taken some of them out of context.
This month, YouTube pulled another popular creator, Logan Paul, from Google Preferred after he posted a video that included footage of a person who had apparently committed suicide in Japan. Mr. Paul apologized for the video and deleted it. YouTube said in a tweet, “Suicide is not a joke, nor should it ever be a driving force for views.”
YouTube on Tuesday also said it is improving the ad-placement controls it gives brands, offering advertisers more data on how changes in where their ads run affect their reach.
Last month, the company said it plans to have more than 10,000 people reviewing content by the end of this year, though it declined to say how many people it has in that role today.
Reviewing content manually has become increasingly important at tech companies that sell advertising alongside user-generated content, including YouTube, Facebook Inc. and Twitter Inc.
Those companies often outsource that work to contractors, who typically review thousands of posts a day, many of them disturbing. YouTube said its reviewers are a mix of employees and contractors.