Questions have surfaced about the safety of the ‘metaverse’, the immersive digital world on which the world’s largest tech companies are betting big. Harassment, assault, bullying and hate speech are already commonplace in virtual reality games, which are part of the ‘metaverse’, researchers say, and there are few mechanisms by which inappropriate behavior can be reported.

Sheera Frenkel and Kellen Browning / New York Times
Chanelle Siggens recently strapped on an Oculus Quest virtual reality headset to play her favorite shooter game, Population One. Once she turned on the game, she maneuvered her avatar into a virtual lobby in the immersive digital world and waited for the action to begin.
But as she waited, another player’s avatar approached hers. The stranger then simulated groping and ejaculating onto her avatar, Ms. Siggens said. Shocked, she asked the player, whose avatar appeared male, to stop.
“He shrugged as if to say: ‘I don’t know what to tell you. It’s the metaverse — I’ll do what I want,’” said Ms. Siggens, a 29-year-old Toronto resident. “Then he walked away.”
The world’s largest tech companies — Microsoft, Google, Apple and others — are hurtling headlong into creating the metaverse, a virtual reality world where people can have their avatars do everything from play video games and attend gym classes to participate in meetings. In October, Mark Zuckerberg, Facebook’s founder and chief executive, said he believed so much in the metaverse that he would invest billions in the effort. He also renamed his company Meta.
Yet even as tech giants bet big on the concept, questions about the metaverse’s safety have surfaced. Harassment, assaults, bullying and hate speech already run rampant in virtual reality games, which are part of the metaverse, and there are few mechanisms to easily report the misbehavior, researchers said. In one popular virtual reality game, VRChat, a violating incident occurs about once every seven minutes, according to the nonprofit Center for Countering Digital Hate.
Bad behavior in the metaverse can be more severe than today’s online harassment and bullying. That’s because virtual reality plunges people into an all-encompassing digital environment where unwanted touches in the digital world can be made to feel real and the sensory experience is heightened.
“When something bad happens, when someone comes up and gropes you, your mind is tricking you into thinking it’s happening in the real world,” Ms. Siggens said. “With the full metaverse, it’s going to be so much more intense.”
Toxic behavior in gaming and in virtual reality is not new. But as Meta and other huge companies make the metaverse their platform of the future, the issues are likely to be magnified by the companies’ reach over billions of people. The companies are encouraging people to join the metaverse, with Meta, which makes the Oculus Quest headsets, cutting prices for the products during the holidays.
Mr. Zuckerberg, who appears aware of questions about the metaverse’s harms, has promised to build it with privacy and safety in mind. Yet even his own lieutenants have wondered whether they can really stem toxic behavior there.
In March, Andrew Bosworth, a Meta executive who will become chief technology officer in 2022, wrote in an employee memo that moderating what people say and how they act in the metaverse “at any meaningful scale is practically impossible.” The memo was reported earlier by The Financial Times.