by Jeffrey Ding
Jeffrey Ding (Oxford University) provides an overview of national and international standards-setting organizations and explains how China, the United States, and other countries are balancing priorities in their pursuit of technical standardization in data governance and artificial intelligence in particular.
The International Organization for Standardization (ISO) is a global network that convenes tens of thousands of experts in 210 technical committees and 2,443 working groups. Every working day of the year, about seven ISO meetings are held to develop technical standards.[1] Because organizations like the ISO operate in technocratic obscurity, standards-setting is often underappreciated, or even ignored, in discussions of technological governance. Only major flashpoints, such as Microsoft’s campaign to have its document format adopted as a global industry standard in 2008, draw attention to what happens behind the scenes in forums like the ISO. Prescient observers, however, realize that much of the substantive work of data governance will be hammered out through setting technical standards.
On the occasions when technical standards are analyzed as instruments of technological governance, the focus is often on corporate competitiveness. Microsoft’s success in establishing its Open XML format as an ISO standard, for instance, greatly boosted its chances at competing for billion-dollar government contracts. But the stakes of standards-setting extend beyond competitiveness. For states, firms, regulators, and other actors, technical standardization is a process that involves balancing many other interests: the health of the innovation system, the protection of consumer interests, and the safe development of technology.
In this essay, I show how China, the United States, and other countries are balancing these interests as they push forward on technical standardization in data governance, with a particular focus on advances in artificial intelligence (AI). Following an overview of national and international standards-setting organizations, I examine how both the United States and China are adopting a strategy that attempts to balance multiple interests, including protecting competitiveness, promoting innovation, and ensuring safe and trustworthy development of new technologies. I then show how these strategies vary across different standards-setting institutions. I conclude by noting some limitations for standards as a vehicle for technology governance and pointing toward key future developments in this space.
THE VARIOUS CONTEXTS OF STANDARDS
Technical standards are a fuzzy concept, so it is important to first clarify what standards mean in different contexts. Domestic policy-oriented standards can function as a form of regulation. In countries with an industry-led standardization process, such as the United States, voluntary consensus standards can serve as the basis for eventual policy regulation. In countries with a government-led standardization process, such as China, the government can push out a new standard as a regulatory tool. For example, the Standardization Administration of China, China’s main standards body, published a personal information security specification in 2018 that provides guidelines to check against misappropriation of personal data.[2]
Other standardization efforts are more outward-facing, with the aim to develop internationally interoperable technical specifications recognized by bodies like the ISO. Amid the constellation of international standards institutions, two private regulatory networks shine the brightest: the ISO and the International Electrotechnical Commission (IEC) have produced around 85% of all known international standards, and they are the leading bodies for standards-setting in digital technologies.[3] The International Telecommunication Union (ITU), a treaty-based organization with member states, also plays a role in international standardization, with ITU standards being especially influential in the developing world.[4] Finally, standards development also occurs through industry associations and consortia. These associations range from more established institutions, such as the Institute of Electrical and Electronics Engineers, to newer open industry associations, such as the Fast Identity Online (FIDO) Alliance, which aims to develop identity authentication standards through non-password methods like facial recognition.
CHINA’S BALANCE OF PRIORITIES AND STANDARDS
Although the landscape for AI standards in China is still emerging, some of the key drivers of the country’s standardization push can be picked out. Along with co-authors Samm Sacks and Paul Triolo, I analyzed several factors in the wake of the Standardization Administration of China’s issuance of the White Paper on Artificial Intelligence Standardization in March 2018.[5] First, the Chinese government wants to strengthen the international competitiveness of the Chinese AI industry by assisting companies to develop intellectual property that becomes an essential part of global technology systems. In other foundational domains such as 5G and cybersecurity, China is also trying to increase its influence in international standards associations.[6] This aligns with the goal of the State Council’s AI development plan, which states: “The AI industry’s competitiveness should have entered the first echelon internationally. China should have established initial AI technology standards, service systems, and industrial ecological system chains. It should have cultivated a number of the world’s leading AI backbone enterprises.”[7]
The development of a competitive AI ecosystem is not just about supporting the top firms but also about building up the overall innovation system. Domestic-facing standardization efforts help by improving both the interoperability and product-quality assessments of AI systems. In terms of interoperability, more standardized data formatting protocols could combat the issue of “data islands,” which prevent AI companies from achieving economies of scale.[8] Furthermore, the 2018 White Paper on Artificial Intelligence Standardization and two other white papers on biometric recognition and AI security standardization all emphasize the need for reliable third-party testing of AI algorithms to help procurers differentiate between sellers.[9] In sum, China’s AI standardization push is also an effort to better integrate its evolving AI ecosystem.
Last, the objectives of setting technical standards extend beyond mere economic concerns. As suggested by the personal information standard mentioned earlier, China’s standardization efforts could also protect privacy and other consumer interests and ensure the safe and secure development of AI systems. Parts of the 2018 white paper, for instance, tackle the effects of AI on privacy issues with an impressive degree of depth. It acknowledges that AI technology could make it easier to derive more private information from public data and information about other people. The drafters argue that “we should begin regulating the use of AI which could possibly be used to derive information which exceeds what citizens initially consented to be disclosed.”[10]
Some trade-offs in this balance of objectives are unavoidable. Strict standards on privacy could limit the ability of standards to facilitate greater data sharing. Another scenario is that China pushes narrow techno-nationalist standards to prop up national champions instead of adopting international standards. For instance, nearly half of China’s key smart manufacturing technology standards do not correlate with international standards, which raises concerns that international firms could be cut off from the Chinese market in these emerging domains.[11] At the same time, limiting strategic alliances and technology transfer opportunities with international firms could hamper the overall development of China’s techno-industrial base.
While the particular balance of interests China strikes with its standards-setting approach is still uncertain, the China Standards 2035 plan, set to be published later in 2020 after two years of preparation, demonstrates that standards will continue to be essential to China’s overall technology strategy.[12] As Naomi Wilson notes, China Standards 2035 will not “swindle the world’s best engineers into adopting voluntary standards that will shape the technological landscape in China’s favor.”[13] Still, it does represent a continuation of China’s commitment to building technical standards expertise and could lead to more Chinese standards that become international ones.
U.S. BALANCE OF PRIORITIES AND STANDARDS
In many respects, the United States is also trying to balance similar objectives with its development of technical AI standards. In contrast to China’s centrally led standardization drive, the United States allows the market to take the lead in standards development. The American National Standards Institute (ANSI), a private nonprofit institution, plays a leading role in representing U.S. industrial interests in nontreaty international standards-setting activities. Thus, ANSI, rather than the National Institute of Standards and Technology (NIST), a federal agency under the Department of Commerce, serves as the U.S. member body to both the ISO and IEC, though the two organizations often collaborate.
The U.S. government’s backseat role in setting technical standards does not mean that it does not value competitiveness in AI. As in all domains of technology policy, where you stand on standards depends on where you sit. Technology leaders have more license to let the free market drive standards, whereas technology laggards have more incentive to adopt a protectionist approach. In testimony to the U.S. House of Representatives, a NIST director pointed out that the institute has led on the development of biometrics standards that have gained widespread international market acceptance—evidence that its work on standards “ensures that United States interests are represented in the international arena.”[14]
U.S. strategy documents also highlight the importance of standards-setting for promoting the diffusion of AI advances through the overall innovation ecosystem. One of the seven key strategies of the National Artificial Intelligence Research and Development Strategic Plan, outlined by the Obama administration in 2016, calls for the development of standards and testbeds for AI technologies to measure and keep pace with progress in these domains.[15] In 2017, NIST switched its facial recognition vendor testing program, which had previously tested algorithms on a three-year basis, to conduct evaluations on an ongoing basis (open indefinitely for developers to submit algorithms whenever ready).[16]
Broader concerns over the ethical and societal implications of AI systems are also bound up in the U.S. standards strategy. One of these is ensuring that AI systems are reliable and trustworthy. Toward that end, in April 2020, 58 co-authors at 30 organizations published a multistakeholder report on improving verifiability in AI development. One of the ten high-level recommendations called for standards-setting bodies to develop audit trail requirements for safety-critical applications of AI systems.[17] This would involve documenting code changes, records of training runs, and data verification plans, among other measures, to prevent accidents and allow for fruitful retrospective analysis in case an accident does occur.
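To make the audit-trail idea more concrete, the following is a minimal sketch of how a tamper-evident audit record for a training pipeline might look. The function names and record fields here are purely illustrative assumptions, not drawn from the report or from any standard; the point is simply that each logged event (a code change, a training run, a data check) can carry a digest that lets a later reviewer detect retroactive edits to the log.

```python
import hashlib
import json
import time


def audit_record(event_type, payload):
    """Build one tamper-evident audit-trail entry (illustrative sketch).

    The entry records an event type (e.g. "code_change", "training_run",
    "data_check"), an arbitrary payload, and a timestamp, plus a SHA-256
    digest of the whole record so alterations can be detected later.
    """
    body = {
        "event": event_type,
        "payload": payload,
        "timestamp": time.time(),
    }
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode("utf-8")
    ).hexdigest()
    return {**body, "sha256": digest}


def verify_record(record):
    """Recompute the digest to confirm the record has not been altered."""
    body = {k: v for k, v in record.items() if k != "sha256"}
    expected = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode("utf-8")
    ).hexdigest()
    return expected == record["sha256"]
```

A deployment might append such records to write-once storage after every training run; the mechanism itself is simple, and the hard part, as the report's recommendation implies, is agreeing on which events safety-critical AI systems must log.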
The U.S. standards strategy also presents difficult trade-offs. Recently, analysts have warned that China’s state-led approach to technical standards development is challenging the United States’ decentralized approach. However, for the objective of cultivating a healthy domestic innovation system, government-directed standardization can be counterproductive. In technological domains where there is a lot of uncertainty about future trajectories, such as AI and big data, governments face a “blind giant’s quandary” when it comes to standards-setting.[18] The period when government efforts can have the most influence in shaping the trajectory of an emerging technology coincides with the period of the least technical expertise about the technology. As a result, government intervention could lock in inferior standards where market-driven processes might have converged on better ones.
CONCLUSION: NOT A PANACEA BUT STILL PROMISING
Setting technical standards is not a panacea. Like all instruments of governance, technical standards are imperfect and must be combined with other mechanisms such as laws, norms, and other institutions. Analysts should be careful not to attribute unique effects to the act of standards-setting, especially if it merely consolidates or reflects developments that would have taken place anyway. In terms of the competitive advantages for particular companies, on some level the best technology usually wins out, and the de facto standard in the market often becomes the codified standard in the international body. Regarding the regulatory effects of standards, observers note that since some standards organizations like the ISO are heavily dependent on industry stakeholders, the process often results in “modest, least-common-denominator” standards.[19]
Too often, the discussion of technical standards devolves into zero-sum concerns about U.S.-China competition and great-power machinations over international influence. These concerns must be balanced with an understanding of how technical standards can also be used to better govern technologies like AI. For instance, data governance in the Indo-Pacific can play a crucial role in preventing accidents associated with AI systems. Existing technical standards in many industries that involve safety-critical applications, such as nuclear power, specify requirements for audit trails (e.g., IEC 61508) as a means to incentivize companies to adhere to safety standards. Enforcement failures in safety-critical domains come with significant consequences, as the nuclear accidents at Chernobyl, Three Mile Island, and Fukushima demonstrate. As AI is incorporated into more safety-critical applications, this verification mechanism will only become more necessary.
In a similar vein, establishing standards on transparency in healthcare data can facilitate international research collaborations and help coordinate responses to public health emergencies such as the Covid-19 pandemic.[20] Thus, the important public goods and regulatory benefits attached to setting technical standards could ensure more sustainable and safe development of AI for all.