Back in June, when the #StopHateForProfit Facebook ad boycott was gaining momentum, I posited that this advertiser protest against the social network might be different from the ghosts of advertiser protests past.
In some ways, I still think I was right. This pressure campaign was certainly larger, better organized and louder than previous social media boycotts: Stop Hate For Profit said more than 1,200 companies got involved (though not all advertisers directly associated their Facebook ad spending pauses with the specific campaign). Plus, thanks to the involvement of civil rights groups, the protest aimed to hold Facebook to account for fundamental ethical and user-safety issues, beyond marketer myopia like metrics and ad adjacency.
But in other ways, the movie has played out much as it has before. Facebook has committed to change, yet even though most of those changes haven’t actually been implemented, advertisers have returned anyway. Digiday reported in early August that of Facebook’s top 20 advertisers in the U.S., just five were still keeping their dollars away from the social network, according to Pathmatics. Data from the measurement company covering August 1 through September 26 showed that of those five holdouts, CVS, Unilever and Diageo had since returned to spending on Facebook. (Pathmatics found Microsoft and Coca-Cola had both spent a negligible amount, $200, on Facebook ads in that time.)
The Stop Hate for Profit coalition said Facebook should hire a civil rights expert to its C-suite. Facebook has said it will appoint a civil rights leader at “vp level,” but a hire has not yet been announced. Facebook also said in July that it would undergo a Media Rating Council audit assessing the brand safety controls it offers advertisers as well as its partner and content monetization policies. Those discussions are still ongoing (in July I reported the MRC was pushing for the audit to have a wider scope); Facebook said it would provide an update on the scope and timing of the audit by Sept. 30. In August, Facebook released a request for proposals for a third-party audit of its community standards enforcement report, which it expects will be conducted in 2021.
To be sure, these sorts of important changes don’t happen overnight. But tell that to the many advertisers who switched the advertising spigot back on at lightning speed the second the platforms assured them they’d do better next time.
“The digital economy is so engineered around the business model that the protests around it are symbolic public-relations exercises, awareness campaigns — a way for brands to engage in the political debate of the day,” said David Carroll, associate professor of media design at Parsons School of Design. “Businesses can’t succeed at being businesses and also choke their revenue off: Both Facebook and … advertisers are limited in how much they can be purely civically minded because purely civically minded enterprises are not-for-profit, volunteer, or taxpayer funded.”
Carroll is a member of a newly formed group, dubbed “the Real Facebook Oversight Board,” comprising some of Facebook’s most vocal critics and designed to draw more attention to issues at the company. Facebook first proposed its own independent oversight board in 2018; that board is set to launch in October. Facebook’s oversight board, which includes 20 members at launch, will start by hearing appeals from users whose content was taken down by Facebook. It will also be able to make decisions on content that Facebook decided to leave up, but initially only if those cases are directly referred to the board by the company itself.
Carroll, whose mission to discover exactly what data Cambridge Analytica had stored on him was documented in the Netflix film “The Great Hack,” describes Facebook’s official oversight board as “kneecapped to do anything of value.”
“[Facebook’s oversight board doesn’t] address the fundamental underlying issues of which content disputes are a second-order effect, a byproduct, a symptom of the disease,” said Carroll. “There are no mechanics in the company to prevent bad things happening, it’s a totally reactive system to ‘clean up the mess on aisle six! Clean up the mess on aisle six!’ — that’s all they do.”
Last week, the World Federation of Advertisers claimed “major progress” in its work, as part of the Global Alliance for Responsible Media (GARM), to push platforms to improve how they define and handle harmful content. That progress was praised by marketing and media leaders at Mastercard, Unilever and Mars, according to the press release. WFA CEO Stephan Loerke said that for advertisers, these types of conversations have escalated “from the engine room to the board room.”
“These discussions, until very recently, were led by heads of media in companies and were all about effectiveness, efficiency and brand safety,” said Loerke. “We have seen this escalate to the boardroom because there’s a sense that where a brand appears tells you something about a brand’s value.”
Again, we’ll have to wait to see how these commitments play out in practice.
Facebook, YouTube and Twitter have said they will adopt GARM’s common definitions for harmful content by November. Those tech platforms will also move to a more standardized way of measuring and reporting the occurrence of harmful content, though that isn’t expected to come into force until the second half of 2021. Platforms are also expected to develop “advertising adjacency solutions” that give advertisers more control over their ads appearing next to unsuitable content, using either their own tech or that of third parties, “by year-end.” YouTube’s adjacency mechanism already satisfies GARM’s demand, Loerke said.
Over the past couple of years, the voluntary coalition the Conscious Advertising Network has been in discussions with the United Nations about how digital advertising inadvertently funds the proliferation of online hate material. Pia Oberoi, senior advisor on migration and human rights at the office of the UN High Commissioner for Human Rights, said the UN “would acknowledge there has been some progress” in the way tech platforms address this issue, but the work is in no way complete as online hate continues to escalate into real-world violence and discrimination.
“There’s a lot more to be done to make this a global conversation … [and] looking into the context of places apart from the epicenters of the financial world,” said Oberoi. “I would suggest that the platforms go out and speak to the groups that are affected to understand what kinds of recommendations [they] have.”
“This is very much a part of the human rights based approach,” Oberoi continued. “Participation and empowerment of communities on the ground is the way you reach a solution that is then going to be operational and actionable on the ground.”
As I wrote back in July, advertiser boycotters, many of them now highly dependent on Facebook’s scale and its efficient targeting machine, faced a Faustian bargain in deciding when the right time to return to spending on the platform would be. QAnon isn’t the first conspiracy theory/cult to spread rapidly across the globe before content moderation teams could figure out a way to get it under control, and it won’t be the last.
The feature, an algorithm designed to boost engagement by serving you exactly the content and ads most likely to resonate with you, is also the bug.