Caitlin Chin
Once more, Congress and the American public are captivated by rumors of Chinese government surveillance—first TikTok, then the balloon, and now back to TikTok. Congress blocked the video-sharing app on federal government devices last December in a last-minute addition to the National Defense Authorization Act (NDAA). Since then, a growing number of primarily Republican-led states such as Texas, Georgia, and Alabama have also prohibited the app on government devices, and some public universities like the University of Oklahoma and Auburn University have blocked access on campus Wi-Fi. The European Union and Canada, which had not previously focused on TikTok, joined the United States in recent weeks in banning it on government-owned devices.
These measures do not go far enough for some U.S. politicians; calls for a total TikTok ban are gaining momentum. On February 2, Senator Michael Bennet (D-CO) called on Apple and Google to remove TikTok from their app stores, which would amount to a soft ban. On February 10, Senators Angus King (I-ME) and Marco Rubio (R-FL) reintroduced the ANTI-SOCIAL CCP Act, which would require either an outright ban or a sale of TikTok to a U.S.-based company—and, more broadly, of any social media company based in China and a handful of other countries. On March 1, the House Foreign Affairs Committee voted along party lines to advance the Deterring America’s Technological Adversaries Act, which aims to enable the White House to ban TikTok nationwide. On March 7, Senators Mark Warner (D-VA) and John Thune (R-SD) introduced the RESTRICT Act, similarly motivated by TikTok, which would give the Secretary of Commerce greater powers to act against technology companies based in China.
Proponents of a TikTok ban cite two general concerns. The first is that the Chinese Communist Party (CCP) could potentially access U.S. personal information because TikTok’s parent company, ByteDance, is based in China. BuzzFeed News reported in 2022 that at least some China-based ByteDance employees had accessed non-public U.S. data, although there is no known evidence that either TikTok or ByteDance has shared it with the CCP. From a data protection standpoint, it makes little practical sense to ban TikTok when numerous other U.S. mobile apps collect very similar types of personal information—device identifiers, geolocation, face or voice prints, and more—and face few legal restrictions on transferring it abroad. Foreign governments, including China, could easily purchase Americans’ personal data from intermediaries like data brokers, so a ban on TikTok would not meaningfully improve the privacy or security of U.S. internet users.
Second, some politicians are concerned that the CCP could control TikTok’s content recommendation algorithm to target propaganda or disinformation at U.S. users. Here, too, there is no direct evidence that the CCP has conducted influence operations through TikTok. It is also important to look at the bigger picture: the Chinese government and other actors do not need corporate ownership of a platform to strategically target disinformation or other harmful content. Even if TikTok were owned by a U.S. company, the United States has almost no legal regulations on how social media companies collect and share personal information, build algorithms to promote unpaid content and paid advertisements, and flag harmful or polarizing content. In other words, the infrastructure is in place for disinformation to spread on U.S.-based platforms, too.
Social media companies have economic incentives to maximize user engagement and clicks, and so many have built algorithms that automatically amplify content based on a person’s inferred interests or recent activity. Even if foreign governments like China or Russia do not own a recommendation algorithm, they need only know how to game it. During the 2020 U.S. election cycle, Facebook and Twitter detected a number of fake accounts that strategically used keywords, reshares, account names, and photos to target disinformation toward Black individuals. These accounts likely originated in the United States, Russia, China, Iran, and Romania. Facebook and Twitter removed the accounts, but the posts still attracted tens of thousands of likes, retweets, and shares within hours of being posted—demonstrating how recommendation algorithms allow viral damage to spread rapidly even if some content is later removed. Foreign influence operations are continuously becoming more sophisticated, and it is hard to estimate the full extent of messaging on U.S.-based platforms that has not been detected.
In general, Section 230 of the Communications Decency Act has allowed U.S. platforms to avoid legal responsibility for most content that third-party users generate, and the Supreme Court is currently considering whether this statute also extends to algorithmic recommendation systems in ongoing cases such as Gonzalez v. Google and Twitter v. Taamneh. Extremist U.S.-based platforms like Parler and Gab have prospered in this unregulated environment, even advertising their tolerance of false or harmful rhetoric. Gab founder Andrew Torba has stated that foreign disinformation campaigns “can speak freely on Gab just like anyone else”—an example of how U.S. corporate ownership alone will not inherently prevent the spread of Chinese or Russian government speech. Facing few legal obligations, even some of the largest U.S. social media platforms have recently scaled back content moderation: within the past three months, YouTube reportedly laid off all but a single employee overseeing global disinformation policy, and Twitter abruptly dissolved its Trust and Safety Council and fired numerous Trust and Safety employees.
In addition to user-generated content, foreign governments could potentially sponsor political advertisements on U.S. digital platforms, many of which allow marketers to target users based on demographic or other personal attributes. During the 2016 U.S. presidential election, the Russia-affiliated Internet Research Agency (IRA) ran approximately 3,000 political advertisements and posted 80,000 pieces of content on Facebook—which an estimated 126 million U.S. users may have seen. For both paid advertisements and unpaid content, the IRA disproportionately targeted voters by factors like race and geographic location. Some social media platforms changed their policies after the 2016 election to only allow U.S. individuals or groups to purchase political advertisements for domestic elections, but foreign governments can still outsource operations, fund front companies, or impersonate Americans to covertly carry out influence campaigns. In fact, the IRA did not purchase its 2016 Facebook advertisements directly; it used Social Security numbers, home addresses, and birthdates stolen from real Americans to buy them under false identities.
Several U.S. social media platforms also limited options to microtarget political advertisements in response to the 2016 election—but most still allow buyers to select their audience based on at least some sensitive demographic attributes. In November 2019, Google stopped allowing advertisers to target paid political content by race, but it still allows them to choose zip codes (which could act as a proxy), age, and gender. In March 2022, Facebook removed some ad-targeting options related to political beliefs, race, religion, and sexual orientation, but it still allows buyers to choose an audience’s age, gender, interests, and language. Twitter, under former CEO Jack Dorsey, ended political advertising altogether in October 2019—but resumed it in January 2023 under Elon Musk. TikTok is one of the few platforms that does not currently allow paid political advertisements in any form.
The spread of false or harmful online content is a negative externality of the vastly underregulated social media environment in the United States—U.S. policymakers should not treat corporate ownership as a proxy for safety. Because proposals to ban TikTok focus on the wrong problems, they would create overly broad consequences for Americans with very little upside. For one, many artists, musicians, entrepreneurs, and influencers have built their brands on TikTok and depend on it as a source of income, so even the threat of a ban could put them in a precarious position. On a much broader scale, a ban would cut off a form of expression for approximately 100 million users—especially younger Americans—who see the app as a creative outlet to share and view music, dancing, and communications. The United States should not broadly censor avenues for free speech based on their country of origin, especially when it lacks strong evidence that TikTok’s data practices differ from those of its U.S. competitors or raise specific—not vague—national security concerns.
Instead of banning TikTok, a slightly more useful alternative would be to strengthen accountability mechanisms over the app’s privacy, security, and transparency practices. The Committee on Foreign Investment in the United States (CFIUS) has been engaged in years-long negotiations over a potential agreement that would allow TikTok to continue operating under ByteDance with increased U.S. government oversight. If finalized, this agreement could reportedly give Oracle and third-party auditors insight into TikTok’s content recommendation algorithm, fully separate decision-making for U.S. TikTok operations from ByteDance, and require sensitive U.S. personal information to be stored domestically. These proposed safeguards are significantly more prescriptive than what any other U.S.-based social media platform currently offers its users.
But the strongest approach would be for Congress to establish comprehensive rules across the entire data ecosystem that would limit how all companies—including TikTok—use personal information in ways that could amplify the spread of harmful content. At a minimum, users of any digital platform should be able to opt out of targeted advertisements or algorithmic profiling based on sensitive personal information like race, gender, and religion. Furthermore, all technology companies need legal obligations to prevent harm to their users, including by auditing their algorithms for bias or disparate impact by race, gender, or religion and by improving the transparency of their outcomes. In addition, every U.S. social media platform should implement robust systems that allow users to flag objectionable content and appeal the removal of posts or accounts. Policymakers should treat the popular interest in TikTok as an opportunity to implement industry-wide protections that could benefit all of society, rather than merely as a messaging tool aimed at the CCP.