
18 February 2023

We Missed Social Media’s Dark Side. Let’s Be Smarter about the Metaverse

PRICE FLOYD

When I became the Pentagon’s acting spokesman in 2009, Secretary Robert Gates tapped me to draft the Department’s social media policy. I got it wrong.

I focused on the benefits of these newish technologies, not the dimly perceived problems that would grow into national threats. As today’s tech giants push toward a new vision of social media—an immersive experience sometimes dubbed extended reality or the metaverse—national-security leaders must not make the same mistake.

The potential dangers of these social media platforms are now commonplace realities. Authoritarian and repressive governments (Russia, Iran, China, North Korea) exploit Twitter, Facebook, YouTube, and other platforms to target their own populations and those of democratic countries. This digital axis of evil works hard to divide the populations of democratic nations, attempting to weaken their governments from the inside.

Our policy of allowing unfettered access to and use of these technologies did not lead to greater civic engagement or the spread of democracy. It did just the opposite.

We missed these threats because no one took the time to think about where these technologies could lead us, and we have never held the companies behind them accountable.

We didn’t ask the right questions: How will you protect people’s personal data? Will you allow other nations and non-state actors access to this data? How could this data be exploited by those opposed to free and open societies? Few of us even took a step back to ask: How can this platform be free to use? How are they making money from my participation?

We saw only that an ever-growing audience could be reached through these platforms; we did not understand that they were, in fact, entirely new and unregulated networks.

We were naïve in the extreme.

The question we must ask ourselves now is: will we repeat our mistakes by hoping that these same digital actors will be good corporate stewards? Will they protect our data, our privacy, and our human rights? Are we okay with allowing them to operate virtual worlds without regulation?

We need in-depth research into the impact that extended- and virtual-reality technologies will have on users and communities, and it is imperative that we get in front of these technologies before they are widely adopted. The challenges to examine include the health and physical effects on users, data privacy and security, governance, and diversity and inclusion.

For this research to be effective, it will probably mean requiring the companies developing these technologies to open their data to researchers and academics. Here are just a few of the many questions that should be asked and answered:

How and where will the data gathered from users be secured?
What are the companies doing with the data now?
How are participants' responses to stimuli being used in the design and development phase? (Let's not wait until deployment to find out.)
What is being done in the design phase to ensure that children are safe? How are they ensuring predators are not allowed to prey on children?
Are the creators and designers themselves from diverse backgrounds?
Is unconscious bias understood and addressed?

The answers to these questions, and the many more that are sure to follow, can inform research that leads to specific policy and regulatory recommendations.

While companies may protest the intrusion, perhaps citing risks to their intellectual property, we shouldn’t allow that to stop us from protecting our national security and the health and wellbeing of the very people these companies claim to serve.

There are some nascent efforts to conduct this research at think tanks and non-profits, such as the Center for a New American Security. But the funding for these projects is minuscule compared to the reported $4 billion Meta/Facebook alone spends each quarter on developing its version of the metaverse.

Ultimately, Congress will likely need to step in, but the Defense Department can take initial steps to ensure that troops and employees do not adopt these technologies until this research has been completed. There is a precedent for this: early in the rise of social media, the Pentagon blocked access to MySpace. At the time, that was seen as a heavy-handed tactic by leaders who didn't understand the possibilities of the platform. Now those leaders appear prescient. We simply cannot afford to make the same mistakes again.

The good news is we can get this right. We have the time, but we need to move now. With the insights garnered from researchers, we can create virtual worlds that are innovative, safe, and collaborative, while at the same time ensuring free and open speech.
