By Justin Sherman
On March 12, Sen. Josh Hawley introduced a bill into the Senate to ban the downloading and use of TikTok, the Chinese social media app, on federal government devices. Hawley’s bill carves out exceptions for such activities as law enforcement investigations and intelligence collection, but holds that
no employee of the United States, officer of the United States, Member of Congress, congressional employee, or officer or employee of a government corporation may download or use TikTok or any successor application developed by ByteDance or any entity owned by ByteDance on any device issued by the United States or a government corporation.
The Transportation Security Administration and the U.S. Army have already banned the app on employee phones.
But what’s Hawley’s objection to an app used widely for dance challenges and lip-syncing?
The narrative goes something like this: TikTok is a company incorporated within China; the Chinese government pervasively surveils within its borders and can get access to company-held data on a whim; thus, TikTok’s potential collection of information on U.S. citizens is a security risk. Yet also thrown into the discussion are other allegations—TikTok removes political content at Beijing’s behest, for example. The failure to decouple these risks only muddies the waters and makes it harder for policymakers and the general public to understand the threats at play.
In reality, TikTok carries five clear risks. Two pertain directly to national security, and three perhaps relate to it, though not as clearly. All have been conflated or blurred together, at one point or another, by pundits and others commenting on TikTok’s risks. Policymakers and analysts would be wise to make meaningful distinctions among these risks and provide more nuance and detail around each specific threat.
Policymakers will, of course, have many different interpretations of the likelihood and severity of each of these risks. There’s also no clear answer on what policymakers should do about the app. And, in reality, the problems raised by TikTok are much bigger than the app itself—representative of larger questions that must be answered around U.S. data security policy.
Risk 1: TikTok Collecting Data on U.S. Government Employees
The first risk posed by TikTok is the collection of data on U.S. government employees (including those working as contractors). These are people who either have security clearances, could have clearances in the future, or at the very least perform tasks that, if not classified, may still be considered sensitive in an unofficial sense. Data collection on these individuals and their activities can therefore reveal important national security information or be used in a coercive manner (that is, blackmail) to target those individuals.
There are two considerations with this type of data collection risk: the kinds of data that are being or might be collected, and Beijing’s ability to access that data.
The data collected by TikTok, at least on the surface, might seem relatively benign; after all, the app is a social media platform for sharing videos. Even if a U.S. federal government employee has the app, one could argue, that doesn’t mean they’re sharing any videos that somehow compromise their personal or professional activities. And they can use the app without jeopardizing sensitive information.
But where the risk gets more complicated is the reality that most phone apps collect far more information than what the average user would suspect they are handing over to the app. (This might even go beyond that single firm: Charlie Warzel at the New York Times, for example, has a great explanation of how “just by downloading an app, you’re potentially exposing sensitive data to dozens of technology companies, ad networks, data brokers and aggregators.”)
TikTok is reasonably upfront about the high volume of data it collects: its privacy policy for U.S. residents states,
We automatically collect certain information from you when you use the Platform, including internet or other network activity information such as your IP address, geolocation-related data (as described below), unique device identifiers, browsing and search history (including content you have viewed in the Platform), and Cookies (as defined below).
It notes further that “[w]e also collect information you share with us from third-party social network providers, and technical and behavioral information about your use of the Platform,” such as, potentially, contact lists on other social media services. This type of data collection can especially implicate national security—geolocations or internet search histories of federal employees can reveal quite sensitive information, such as the location of secret government facilities, details about events relevant to the government about which those employees are seeking publicly available information, and personal activities that could potentially be used to build files for blackmail.
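To make this more concrete, the short sketch below models the categories of data such a policy describes as a single, hypothetical event record. It is purely illustrative: the field names, structure and values are assumptions for explanation, not TikTok’s actual telemetry schema.

# Hypothetical illustration only: field names and values are assumptions,
# not TikTok's actual data format.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AppEventRecord:
    device_id: str                           # unique device identifier
    ip_address: str                          # network activity information
    latitude: Optional[float] = None         # geolocation-related data, if shared
    longitude: Optional[float] = None
    search_query: Optional[str] = None       # in-app search history
    viewed_content_id: Optional[str] = None  # content viewed in the platform
    cookie_ids: List[str] = field(default_factory=list)

# A single record can look benign...
event = AppEventRecord(
    device_id="a1b2c3d4",
    ip_address="203.0.113.7",
    latitude=38.89,
    longitude=-77.03,
    search_query="carpool options near my office",
)

# ...but many such records, tied to one device over time, sketch where a
# user lives, works and travels -- the "pattern of life" that drives the
# counterintelligence concern described above.

The point of the sketch is simply that individually mundane fields become sensitive in aggregate, particularly when the person behind the device holds a security clearance.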
TikTok is hardly alone in this kind of collection—read the privacy policies of most major social media platforms and you’ll find similar, if not more encompassing, language.
But TikTok has a unique challenge: There are real questions about who beyond TikTok might have access to the collected data. This risk likely exists whether the app is downloaded on a government-owned device used by an employee, or on a personal device used by the employee.
So can the Chinese government compel the company to turn over data?
As Samm Sacks recently wrote, “Nothing is black and white, particularly when it comes to government access to data. Ultimately the Chinese government can compel companies to turn over their data, but this does not always happen.” In some cases, companies can and do push back against government requests, as they “have their own commercial interests to protect.” There are real risks of government access to data, and this does happen, but it’s not as clear-cut in practice as many might assume.
There are also real fears among some U.S. policymakers that data from a company like TikTok could be added into an enormous dataset Beijing continues to compile from incidents such as the Equifax breach and the hack of the Office of Personnel Management. The product of such data-hoarding, in this view, is a massive dossier on U.S. persons that the Chinese government can use for intelligence and security purposes—consisting of everything from communications to credit scores to travel histories.
It is clear that there are national security risks in TikTok’s collection of data on U.S. federal government employees. The question for policymakers comes down to the perceived likelihood of that risk, its severity and what to do about it.
Risk 2: TikTok Collecting Data on U.S. Persons Not Employed by the Government
Second is the risk that TikTok collects data on U.S. persons not working for the federal government in ways that still potentially impact national security. The considerations here mirror those of TikTok’s data collection on federal employees.
Yes, the link between data collection on federal personnel and national security threats (that is, counterintelligence operations) is clearer. One could imagine how a clearance-holding federal employee with an embarrassing internet search history could be blackmailed, or how the GPS movements of a clearance-holding federal employee would likewise be valuable to a foreign intelligence service.
Here, one danger is merely the potential for U.S. persons not currently employed by the government to have clearances or perform other sensitive government tasks in the future. There could also be the potential for collection to target individuals in the private sector working on proprietary and national security-related technologies.
The collection of this data could therefore affect U.S. national security in ways that may give policymakers reason to consider wider action against TikTok. Whether they take that wider action would depend on where and how they interpret specific risk cases. For instance, one could perceive a risk of higher severity for an engineer working on tightly held and cutting-edge satellite imaging technology than for your average person.
It is also possible, in a Cambridge Analytica-style fashion, that such information could be used to develop profiles on Americans in ways that lend themselves to enhanced microtargeting on social media and other platforms.
In terms of the kinds of data being collected, TikTok, like most social media companies, very likely collects the same types of information on all of its users. Collection on non-federal employees, in other words, likely looks the same as collection on federal employees.
The same goes for the legal authorities governing Beijing’s access to TikTok data: The risk remains largely similar to the risk for federal employees. Beijing may have a greater incentive to request access to certain kinds of information when the data concerns U.S. government employees, but that is not necessarily so. TikTok might collect information from private citizens that exposes security-sensitive corporate activities. And what about the microtargeting—could Beijing have an incentive to access the data if it lent itself to, say, pushing advertisements for Chinese Communist Party (CCP)-preferred candidates in a U.S. election?
Risk 3: TikTok Censoring Information in China at Beijing’s Behest
The third risk pertains to Beijing ordering, forcing, coercing or otherwise leading TikTok to remove information on the platform in China. (This could include TikTok preemptively self-censoring content out of concern over possible retribution from the Chinese government.) This is not directly a U.S. national security issue, but it merits attention because of the way it has been roped into conversations about TikTok’s risks.
The Washington Post reported last fall, for example, on the ways in which certain content that the CCP dislikes—such as information on the Hong Kong pro-democracy protests—was strangely absent from TikTok.
Subsequently, amid this and other reports in the media about alleged TikTok censorship, Sens. Chuck Schumer and Tom Cotton sent a letter to the acting director of national intelligence, stating that
TikTok reportedly censors materials deemed politically sensitive to the Chinese Communist Party, including content related to the recent Hong Kong protests, as well as references to Tiananmen Square, Tibetan and Taiwanese independence, and the treatment of Uighurs. The platform is also a potential target of foreign influence campaigns like those carried out during the 2016 election on U.S.-based social media platforms.
In addition to raising concerns about the aforementioned risks of data collection on U.S. persons, the senators asked the intelligence community to investigate allegations that TikTok engages in political censorship at the direction of the Chinese government.
But many of the conversations about this political censorship do not distinguish between TikTok removing content within China’s borders and TikTok removing that same content globally. This might seem like a trivial distinction, but it’s not. In the former case, content would be removed (or perhaps algorithmically downplayed) for those accessing the mobile application from within China’s geographic borders. This “geoblocking” would affect only those physically located within China. If TikTok were censoring content globally, by contrast, the offending content, once flagged, would be deleted from everyone’s TikTok feed.
The former case, geoblocked content within China (that is, this third risk), is mostly a domestic issue in China. It is an issue of free speech and human rights, certainly, but it doesn’t directly impact U.S. national security in the ways it potentially would if content were removed globally at one government’s behest.
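For readers less familiar with the mechanics of content moderation, the minimal sketch below illustrates the difference between the two approaches. The function names and data structure are assumptions made for explanation, not a description of TikTok’s actual systems.

# Illustrative only: these functions are assumptions meant to contrast
# geoblocking with global removal, not TikTok's actual moderation code.

CATALOG = {"video123": {"blocked_regions": set(), "deleted": False}}

def geoblock(video_id: str, region: str) -> None:
    """Risk 3: hide a video only for viewers in one region (e.g., 'CN').
    The content still exists and stays visible everywhere else."""
    CATALOG[video_id]["blocked_regions"].add(region)

def remove_globally(video_id: str) -> None:
    """Risk 4: delete a video from the platform entirely, for every viewer."""
    CATALOG[video_id]["deleted"] = True

def is_visible(video_id: str, viewer_region: str) -> bool:
    entry = CATALOG[video_id]
    return not entry["deleted"] and viewer_region not in entry["blocked_regions"]

geoblock("video123", "CN")
print(is_visible("video123", "CN"))   # False: hidden inside China
print(is_visible("video123", "US"))   # True: still visible elsewhere

remove_globally("video123")
print(is_visible("video123", "US"))   # False: now gone for everyone

The policy stakes track that difference: the first function describes a censorship problem largely internal to China, while the second would export Beijing’s content rules to every user of the platform, which is the subject of the next risk.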
Risk 4: TikTok Censoring Information Beyond China at Beijing’s Behest
So what would the national security risk be if TikTok did not limit its content takedowns to within China?
There is no clear evidence that Beijing has directly told TikTok to remove content around the world. TikTok’s parent company responded to the Post investigation from last September by asserting that the platform’s content moderation policies in the U.S. are handled by an American team and are not influenced by the Chinese government. But policymakers have expressed worries, in light of such observations as the aforementioned lack of Hong Kong protest videos on the platform, that TikTok is in fact (at Beijing’s direct behest or not) removing this kind of content globally. The risk centers on whether and how TikTok could remove, for anyone using the app, a video that is critical of the CCP or that discusses concentration camps in Xinjiang, for example. In that case, nobody in the world would be able to access the content on TikTok once it was removed; the takedowns would be global.
Again, the national security risks here are not as direct as with data collection. Yet there are genuine concerns about the Chinese government exporting its censorship through platforms like TikTok. The worry is that Beijing compels high-demand Chinese-incorporated internet platforms to remove content worldwide. Beijing’s internet censorship practices, otherwise confined within Chinese borders, could hypothetically spread through this tactic.
This certainly presents risks to democracy and free speech. More teenagers in the United States are using TikTok to share political content, so political censorship is not an insignificant issue. The takedown of certain critical videos could, for one thing, subtly influence platform users’ views of Beijing. And these concerns are real, especially in light of investigations such as the Washington Post’s report last November that “former U.S. [TikTok] employees said moderators based in Beijing had the final call on whether flagged videos were approved.”
Risk 5: Disinformation on TikTok
Fifth and finally, there is concern among U.S. policymakers about potential disinformation on TikTok. Large numbers of U.S. teenagers use TikTok and consume political content through the application, so there is a concern that users could amplify disinformation on the platform. This incursion of disinformation into U.S. public discourse is no doubt corrosive to the democratic process. Yet this is not a national security risk that is necessarily specific to TikTok.
Virtually every internet platform deals with disinformation; the fact that TikTok is Chinese-incorporated has, in and of itself, nothing to do with it. But U.S. officials have expressed concern about the potential for disinformation on the platform. (These concerns aren’t unfounded: See the false information that circulated on TikTok about the coronavirus.) One could certainly argue that the platform’s responses to disinformation—in light of political censorship concerns—might impact U.S. interests in undesirable ways. But the presence of disinformation on the platform is in many ways a distinct risk from the preceding four.
Looking Beyond TikTok
These questions, and the policy responses to them, have implications well beyond TikTok. And they have become increasingly urgent as questions about mobile apps, data collection and national security grow more frequent and as more bills like Sen. Hawley’s are introduced in Congress.
The issues here are complex. If the view is that any data collected by a Chinese internet company is a national security risk—because of Beijing’s purportedly easy access to that data, and the ways it could be potentially combined with other datasets (for example, from the Office of Personnel Management hack)—then many applications fall into the bucket of risk. The widely used application WeChat, for example, could certainly be banned under that view.
But the problem is even more complicated. After all, China isn’t the only country about which policymakers are or might be concerned.
Last fall, for example, Sen. Schumer sent a letter to the FBI requesting that it investigate the security risks of Russian mobile apps. The letter cited “the legal mechanisms available to the Government of Russia that permit access to data” as reason for concern.
If Russian-made apps are also considered an unacceptable data collection risk for U.S. government employees, then how should the U.S. approach and maintain a list of countries that fit into that category?
The United States isn’t alone in confronting these questions. And these aren’t entirely novel problems. India’s military, for example, has prohibited personnel from installing the Chinese social platform WeChat due to security concerns. The Australian armed forces have also banned WeChat. The Pentagon banned the military’s use of geolocating fitness trackers in August 2018 after live GPS data was found on the public internet: Researchers were able to track the locations of troops on military bases and spies in safe houses.
This all raises challenging questions about where to draw the line: Is an app that, hypothetically, makes custom emojis and collects only a user’s phone number more of a security risk than one that provides the weather based on current geographic location?
Meanwhile, it’s worth remembering that apps are only one potential way for a government to get access to information on individuals: The highly unregulated data brokerage industry, which sells incredibly intimate information on all kinds of people to whoever is buying, could easily be exploited by foreign governments. Governments could buy information from brokerage firms and ascertain sensitive activities of, say, a U.S. federal employee with a security clearance or a non-government employee who happens to be running for Congress in the next election.
Policymakers might consider crafting legislation based on the people on whom data is being collected—that is, focusing on data collection on government employees, which presents immediate national security concerns, rather than on data collection on all Americans. Targeted bans on app downloads on government phones, as Sen. Hawley proposed in his bill, could be one solution.
More broadly, one could imagine developing a framework of criteria to answer these questions, which will arise again and again. Such a framework would function much like objective criteria for routinely evaluating other elements of digital supply chain security, another much-needed national security tool. For instance, the Committee on Foreign Investment in the United States could explicitly make data privacy and security a more central component of its investment screening process. Agencies like the Cybersecurity and Infrastructure Security Agency could lead an interagency process to determine government recommendations for baseline corporate cybersecurity standards writ large that, as with encryption, could subsequently be used by policymakers to evaluate the protections implemented by firms like TikTok. Federal departments such as the Department of Defense could develop clear and at least semipublic frameworks by which they decide whether to prohibit employee use of mobile apps.
Again, though, even this route leads to more questions. What about American- or European-incorporated companies that collect disturbing amounts of sensitive personal information on U.S. government employees? Do they not fit these categories too? Policymakers need to consider these questions.
Policymakers also must consider whether these mobile app and data security decisions should depend less on the kinds of data collected and on whom, and more on the legal structures in the countries in which these companies are incorporated. Beijing, for instance, engages in unchecked surveillance. While the actual practice of Beijing getting data from private companies isn’t as straightforward as some might imagine, it’s certainly far easier than the U.S. government getting access to American company data. For some policymakers, that difference might settle the question of whether to allow Chinese apps on U.S. government employee phones—forget about details like the kinds of data in question.
And this is all without even getting into the risks of content censorship in China, content censorship globally and disinformation—which pertain more to content management on an app like TikTok than they do directly to national security. This isn’t to say (as clarified above) that no national security linkages exist or could exist to, say, TikTok removing political content worldwide at Beijing’s behest. Rather, the links to a U.S. national security threat from censorship and disinformation are generally not as pronounced as those from, for example, the collection of geolocation data on a U.S. federal employee with an active security clearance.
This isn’t just a laundry list of academic questions.
Some observers might find a TikTok ban to be a relatively narrowly targeted and sensible policy response to a perceived threat of Chinese state access to data. But the reality is that decisions in this sphere of data security and U.S. data protection are not made in a vacuum. They have broader implications—first-order, second-order, and even third- or fourth-order effects. Many countries develop mobile apps, and many of those apps could be perceived as posing security risks in various ways. They, too, must be considered as part of the picture. A cohesive and repeatable strategy for making these decisions is far superior, from economic, national security and rights-protection perspectives, to a whack-a-mole-style approach that might yield a sensible policy but not through a sensible process.
All the while, it is important not to blur and conflate these risks. The national security risks of mobile apps made and managed by foreign-incorporated companies may take different forms and may differ in likelihood, severity and desired response. Blurring the lines makes it hard to develop targeted policies that address actual risks in ways that fully consider costs and benefits.
Many countries worldwide are grappling with these same questions. Many governments, like Washington, are also considering if, where and how they want to “decouple” elements of their technology systems from other countries. Here, Washington should tread very carefully because these broader and global implications demand much more thought.