For a global partnership between the top social media platforms and brands to move ahead on creating brand safety measurement standards, the side holding the purse strings wants a lie detector test.
Nearly two years after its formation, the World Federation of Advertisers’ Global Alliance for Responsible Media, or GARM, has made progress toward creating a standardized set of brand safety measures agreed upon by platforms and advertisers. In April, the group — whose members include YouTube, Facebook, Instagram, Twitter, TikTok, Snap and Pinterest as well as big-spending global brands like Anheuser-Busch InBev and Unilever — published its first Aggregated Measurement Report, which featured some entirely new brand safety metrics. However, the milestone is marred by the participating platforms supplying their own, unverified measurements.
GARM aims to change that.
The next step is getting the platforms on board to allow an independent auditing firm to sign off on the transparency reporting data supplied to GARM by the platforms. Thus far, only one — Facebook — has committed; the rest remain holdouts, with no indication yet that they will agree to third-party auditing of the information they provide to the brand safety transparency group.
“These are ongoing conversations that the GARM Steer Team is having with each of the platforms to make sure that they follow through with their MRC accreditation in a way that is sustainable and appropriate,” said Rob Rakowitz, initiative lead at GARM.
He said that some platforms have fewer resources, such as staff or budget, to fulfill audit requirements. “It would be in everybody’s best interest that there is third-party verification on these numbers,” he said, adding that GARM members expect independent audits eventually to be an integral part of the reporting process.
“They will not be dodged,” said Rakowitz of the potential audits.
The state of platforms playing ball
Facebook publicly stated in July 2020 that it would allow the industry’s go-to measurement verification body, the Media Rating Council, to evaluate its compliance with GARM’s brand suitability framework, in hopes of earning accreditation under the MRC’s brand safety guidelines for monetized content. However, those words have yet to produce tangible results. Although Facebook is the only platform that has publicly agreed to let MRC audit its brand safety transparency reporting for GARM, and it plans an independent audit of its self-published content enforcement and standards reports, no MRC brand safety auditing specifically related to GARM reporting had been done at Facebook as of this article’s publication.
Separately, but on related trajectories, both Facebook and YouTube are working with MRC on brand safety-related audits that are not GARM-specific. Facebook is set to commence an MRC audit related to brand safety metrics in June. And Google-owned YouTube has received accreditation from MRC for its brand safety processes that evaluate content on the platform at the individual video level for ads bought through YouTube’s reservation program or through Google’s ad tech. But it has yet to commit to an audit of the brand safety transparency reporting it supplies for GARM. Last year the video platform did begin working on updating its brand safety processes to align with GARM’s standards.
“YouTube remains committed to partnering with GARM to support its mission to develop an industry-wide approach towards building a more sustainable and healthy digital ecosystem for everyone. We are in discussions with the MRC to explore our next accreditations, but have not committed to an independent audit of our metrics at this time,” a YouTube spokesperson told Digiday.
There’s a bit of bureaucracy adding complexity and slowing the process. MRC cannot conduct an audit to verify data supplied by platforms for the GARM report until GARM’s reporting requirements are finalized and then incorporated into MRC’s brand safety standards and audits. That has yet to happen, according to the MRC.
By contrast, while GARM participant TikTok believes in the accountability and transparency mission, the company isn’t ready to commit to a third-party audit. “We don’t really have a stance on it now,” said Dave Byrne, global head of brand safety and industry relations at TikTok, regarding third-party audits of the brand safety data it provides to GARM. But he added that GARM gives platforms a forum “to be transparent in a way that advertisers can hold them accountable, but it never feels like a conflict; it feels like a collaborative working environment.”
Convincing platform partners to agree to outside audits is “of course, a tender process,” said Luis Di Como, evp global media at Unilever, which is a founding member of GARM. While he said GARM advertiser members demand independent oversight of platforms’ first-party brand safety reporting, Di Como acknowledged, “This cannot be done overnight.”
A sign of progress
Overall, the GARM process aims to reconcile the platforms’ disparate efforts to moderate content and provide brand safety-related measurements. For instance, GARM’s aggregated measurement report translates content violation categories the platforms use internally or in their own branded transparency reporting into standard categories. What Facebook deems “Hate Speech” and “Bullying and Harassment” and Twitter calls “Hateful conduct” are all labeled by GARM as “Hate speech & acts of aggression.”
At this stage, Rakowitz and Di Como both stressed the value of the newly agreed-upon metrics included in the group’s inaugural report. The new Violative View Rate measures the percentage of views that occur on violative content, while another new standard used by YouTube for the report, the Advertising Safety Error Rate, gauges the percentage of total impressions served on content that violates monetization policies aligned with GARM’s standards.
GARM’s report presents a macro-level view of aggregated data showing what’s happening according to brand safety measures across a platform. But the new metrics already appear to be influencing how the platforms and others report at the campaign level. The existence of those newly created standardized metrics — which GARM hopes MRC will eventually verify for all platforms — “is definitely having a knock-on effect on post-campaign reporting,” said Rakowitz. “We are hearing from not only the content verification companies, but platforms themselves that some of these metrics will be introduced into campaign reporting,” he continued.
The knock-on effects may extend further and go beyond advertising. Ultimately, as regulators and legislators lump together big tech platforms and their alleged harmful societal impacts and demand transparent reporting about hate speech and disinformation, GARM standards could help the platforms align on how they report that information to governments, too, suggested Rakowitz.
“Advertisers, CMOs and media leads are not the only stakeholders,” he said.