
17 September 2022

Facebook Misinformation Is Bad Enough. The Metaverse Will Be Worse

Rand Waltzman

Here's a plausible scenario that could soon take place in the metaverse, the online virtual reality environments under rapid development by Mark Zuckerberg and other tech entrepreneurs: A political candidate is giving a speech to millions of people. While each viewer thinks they are seeing the same version of the candidate, in virtual reality they are actually each seeing a slightly different version. For each and every viewer, the candidate's face has been subtly modified to resemble the viewer.

This is done by blending features of each viewer's face into the candidate's face. The viewers are unaware of any manipulation of the image. Yet they are strongly influenced by it: Each audience member is more favorably disposed to the candidate than they would have been without any digital manipulation.

This is not speculation. It has long been known that mimicry can be exploited as a powerful tool for influence. A series of experiments by Stanford researchers showed that subtly changing the features of an unfamiliar political figure to resemble each voter made people rate that candidate more favorably.

The experiments took pictures of study participants and real candidates in a mock-up of an election campaign. The pictures of each candidate were modified to resemble each participant. The studies found that even when 40 percent of a participant's features were blended into the candidate's face, participants were entirely unaware the image had been manipulated.
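To make the mechanism concrete, here is a minimal sketch in Python of the blending step the studies describe: a naive pixel-space cross-dissolve of two pre-aligned face images, with the viewer's features weighted at 40 percent. Production morphing systems also detect and warp facial landmarks before blending; the file names and the blend_faces helper below are hypothetical illustrations, not the researchers' actual code.

import numpy as np
from PIL import Image

def blend_faces(candidate_path, viewer_path, alpha=0.4):
    # Blend `alpha` of the viewer's face into the candidate's face.
    # alpha=0.4 mirrors the 40 percent blend that participants in the
    # Stanford studies failed to notice. Both images must be the same
    # size and roughly aligned (eyes, nose, mouth in matching positions).
    candidate = np.asarray(Image.open(candidate_path).convert("RGB"), dtype=np.float32)
    viewer = np.asarray(Image.open(viewer_path).convert("RGB"), dtype=np.float32)
    blended = (1.0 - alpha) * candidate + alpha * viewer
    return Image.fromarray(blended.astype(np.uint8))

# Hypothetical usage: blend_faces("candidate.png", "viewer.png").save("tailored.png")

A simple cross-dissolve like this is already undetectable to viewers at modest blend weights; a rendering pipeline that applies it per viewer, in real time, is exactly the scaled-up mimicry the scenario above imagines.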

In the metaverse, it's easy to imagine this type of mimicry at a massive scale.

At the heart of all deception is emotional manipulation. Virtual reality environments, such as Facebook's (now Meta's) metaverse, will enable psychological and emotional manipulation of their users at a level unimaginable in today's media.

I have been working on deception, disinformation, and artificial intelligence problems for nearly four decades, including two terms as a program manager at the Defense Advanced Research Projects Agency (DARPA). We are not even close to being able to defend users against the threats posed by this coming new medium. In virtual reality, malicious actors will be able to take the age-old dark arts of deception and influence to new heights—or depths.

The same features that make virtual reality environments so attractive for communication—the sense that you've teleported into a synthetic world—can also harm users. When it comes to emotional manipulation, two features of the metaverse are particularly important: presence and embodiment.

“Presence” means that people feel they are communicating with one another directly without any type of computer interface. “Embodiment” means that the user has the feeling that their avatar or virtual body is their actual body.

Even in virtual reality's current, primitive state, these two sensations are what make VR so powerful. They are also what make emotional manipulation in VR so dangerous.

In VR, body language and nonverbal signals such as eye gaze, gestures, or facial expressions can be used to communicate intentions and emotions. Unlike verbal language, body language is often produced and perceived subconsciously.

Virtual reality environments allow interaction among people that exploits the full range of human communication. Person-to-person interaction at this intensity and scale has not been possible in traditional social media environments.

That is both good news and terrible news. Good, because it will allow for better communication. Terrible, because it will open users to the full range of deceptive influence techniques used in the physical world—and to what might be even more-intense, virtual versions of them.

The metaverse will usher in a new age of mass customization of influence and manipulation. It will provide a powerful set of tools to manipulate us effectively and efficiently. Even more remarkable will be the ability to combine tailored individual and mass manipulation in a way that has never before been possible.

A user's virtual experiences as an avatar are expected to seamlessly meld with his or her experiences, memories, and understanding from the physical world. This will almost certainly change how a person sees the world, understands it, and behaves.

We must not wait until these technologies are fully realized to consider appropriate guardrails for them. We can reap the benefits of the metaverse while minimizing its potential for great harm.

The first step toward designing these guardrails is to do a comprehensive study and evaluation of the existing extensive psychology literature on uses and effects of VR, and consider how it might be used for malicious, manipulative purposes. This study should describe the types of emotional manipulation techniques that are possible today and examine techniques that are likely to be possible in more-sophisticated versions of the metaverse. This has not been done. We cannot guard against something we do not fully understand.

The second step is to develop the technology to detect when these techniques are being applied. For example, we could build a type of emotional canary in a coal mine—an artificial character that could circulate in virtual reality environments, sense a broad range of attempts at emotional manipulation, and send out a warning when one is being deployed.
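What might such a canary look like? Here is a speculative sketch, assuming a monitor that can sample the avatar frames different viewers receive: if the "same" speaker's face diverges across viewers beyond ordinary rendering variation, that divergence is a signal of per-viewer manipulation of the kind described above. The function names and the threshold are hypothetical illustrations, not an existing system.

import numpy as np

def divergence(frame_a, frame_b):
    # Mean absolute per-pixel difference between two same-size RGB frames.
    return float(np.mean(np.abs(frame_a.astype(np.float32) - frame_b.astype(np.float32))))

def canary_check(frames_by_viewer, threshold=5.0):
    # Flag pairs of viewers whose views of the same speaker differ by more
    # than ordinary rendering variation should allow. The threshold is a
    # placeholder: a deployed system would calibrate it against legitimate
    # variation such as lighting, viewing angle, and compression noise.
    flagged = []
    viewers = list(frames_by_viewer)
    for i, a in enumerate(viewers):
        for b in viewers[i + 1:]:
            if divergence(frames_by_viewer[a], frames_by_viewer[b]) > threshold:
                flagged.append((a, b))
    return flagged

Detecting subtler manipulations, such as tailored tone of voice, gaze, or gesture, would require far richer models than a pixel comparison; this sketch only illustrates the monitoring pattern a canary would follow.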

Society did not start paying serious attention to classical social media—meaning Facebook, Twitter, and the like—until things got completely out of hand. Let us not make the same mistake as social media blossoms into the metaverse.
