Tech companies form a consortium for brand safety in the metaverse
Several tech companies, including Agora, Fandom, Kuaishou Technologies, The Meet Group, Pandora, Riot Games, Roblox and Wildlife Studios, along with holding company Dentsu, have joined forces to address internet safety.
The Oasis Consortium formally launched last week to set measurable standards for ethical online behavior as more companies, including Facebook with its planned digital universe, enter the metaverse space.
The consortium’s first objective is to launch user safety standards by Q4 based on the O.A.S.I.S. principles of openness, accountability, security, innovation and sustainability.
“On the platform side, we see the lack of privacy, safety, as well as inclusion, and as much as we want to push for brand safety, as long as platforms don’t make a change, we can’t see real change happen,” said Tiffany Xingyu Wang, president and co-founder at Oasis. “This consortium is meant to bridge the gap and have stakeholders for not only the metaverse universe but also brands, agencies and folks from other nonprofits.”
Prior to the launch, the consortium met weekly to establish its standards, Wang said. Once the guidelines are launched, the organization will call on platforms to pledge to its user safety standards.
From there, the consortium will host periodic meetings to evaluate progress. Wang noted it has not yet decided how frequently progress will be measured, but said each participating entity will be given custom guidelines according to its audience.
For instance, Roblox may be given guidelines for addressing child safety, whereas The Meet Group may be advised on addressing privacy and sexual harassment.
“We push for transparency reporting,” Wang said. “Security is mainly about data and identity protection and you cannot talk about user safety or content moderation without having proper guardrails for data protection. Similarly, if you don’t do community policy right at the beginning or enforce it, you end up in a toxic environment.”
“But if you get people who are accountable and use artificial intelligence to help do the moderation, you stand a chance to proactively prevent your platform from being toxic,” she added.