JANUARY 06, 2014
BY PETER HAYNES
Technology evolves so quickly that government regulations are outdated from the day they are written. Policymakers should consider the thirty-year-old insights of an obscure British economist as a map to the new approach to regulating technology that we need.
It is a truism that policy always plays catch-up with progress—and nowhere is the disconnect between policymaking and reality starker than in the technology sector. Almost all technology policies today are intended to govern technologies as they are understood on the day those policies are written. This traditional, “steady-state” approach to policymaking implicitly assumes that a technology’s role in society will remain relatively static.
The poster child for this approach is the European Union’s proposed General Data Protection Regulation, unveiled with great fanfare in early 2012: it seeks to regulate uses of data that have long since ceased to be relevant. Apparently drafted by Eurocrats unfamiliar with everything from social media to big data, the regulation was immediately derided by anyone who has been online since, say, 2005. It has been in a state of constant revision ever since.
Technology, meanwhile, has not stood still—and the direction in which it is moving will test policymakers worldwide. Consider just one aspect of this (r)evolution: the increasing degree of digital intelligence and connectedness that is being incorporated into the physical objects we use or encounter every day. Networking firm Cisco estimates that the number of smart, connected objects (from environmental sensors to bathroom scales) already exceeds 15 billion globally. By 2020 the number of devices that make up “the Internet of Everything” (IoE) is forecast to reach between 30 and 75 billion, depending on whose crystal ball is being rubbed.
Whatever the numbers, we are clearly moving into an era where almost every manufactured object will be able to exchange data with other objects, online services, and individuals. Most of this communication will consist of objects measuring or recording aspects of their environment and transmitting those data autonomously. But the IoE ecosystem doesn’t consist merely of objects. It also comprises the end-to-end systems that connect those objects, through either a wired or a wireless interface; the software, systems and services that parse and analyze data the objects have collected; and the adaptive behaviors and actions those objects take as a result. The majority of these communications, insights and actions will be realized without human involvement—for example, wearable sensors autonomously communicating with a home heating system to adjust room temperatures.
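To make that last example concrete, the sketch below is a minimal, hypothetical illustration of such autonomous machine-to-machine exchange: a wearable sensor publishes a reading over a shared message bus and a heating controller acts on it with no human in the loop. All class and method names are invented for this sketch; no real device API or standard is implied.

```python
# Hypothetical sketch of autonomous device-to-device communication in the IoE.
# All names are invented for illustration; no real device API is implied.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Reading:
    device_id: str      # which object produced the measurement
    kind: str           # e.g. "skin_temperature_c"
    value: float


class MessageBus:
    """Minimal publish/subscribe channel connecting objects in the ecosystem."""

    def __init__(self) -> None:
        self._subscribers: List[Callable[[Reading], None]] = []

    def subscribe(self, handler: Callable[[Reading], None]) -> None:
        self._subscribers.append(handler)

    def publish(self, reading: Reading) -> None:
        for handler in self._subscribers:
            handler(reading)          # delivered with no human in the loop


class HeatingController:
    """Adjusts room temperature in response to wearable-sensor readings."""

    def __init__(self, setpoint_c: float = 21.0) -> None:
        self.setpoint_c = setpoint_c

    def on_reading(self, reading: Reading) -> None:
        if reading.kind == "skin_temperature_c" and reading.value < 33.0:
            self.setpoint_c += 0.5    # occupant seems cold: nudge the thermostat up
            print(f"Setpoint raised to {self.setpoint_c} C")


bus = MessageBus()
controller = HeatingController()
bus.subscribe(controller.on_reading)

# A wearable sensor transmits a measurement of its environment autonomously.
bus.publish(Reading(device_id="wearable-01", kind="skin_temperature_c", value=32.4))
```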
That the IoE can communicate in an ad hoc, asynchronous, adaptive, self-configuring and dynamic way—and be equipped with the “intelligence” to operate without human involvement—poses an unprecedented challenge to how technology policy is formulated. Instead of relatively predictable, static systems, policymakers will be faced with technologies whose behaviors are not only unpredictable but essentially unknowable.
Because establishing appropriate, interoperable, system-level policies for such technologies is so challenging, regulators may default to narrow technology-specific regulations, and in the process dampen technological innovation. For example, European Data Protection Supervisor Peter Hustinx spoke in London last year of the need for “explicit consent” for personal data usage (even though the currently ubiquitous practice of notice and consent is unworkable in an IoE world). Hustinx warned that “some of the innovation we’ve seen [is] perhaps not all welcome. We need to push back, scale back to allow space for appropriate innovation.” In other words, in the face of the unknowable, focus on what’s known, revert to past regulatory practices in an effort to recreate the existing world, and lock down.
There are better options. The state of “unknowledge” was first described around 1980 by the late George Shackle, a relatively obscure but influential British economist whose work should be required reading for every regulator. Shackle’s premise was that economic actors (and here he specifically called out policymakers) must assume that the present moment is where the unknown begins—what he called “the void of unknowledge.” [1] No matter how much evidence has been collected about past behaviors, Shackle posited, it will be of limited value in assessments of the future—because the future by definition will always contain uncertainties. Shackle’s aim was to identify the optimal way in which economic actors could cope with “the counter-expected and unexpected”—the point at which “time’s sudden mockery reveals their supposed knowledge to be hollow.” [2] As we move into the IoE age, the challenge of time’s sudden mockery is set to become increasingly familiar to policymakers.
Shackle’s unique insight—the subject of numerous papers, books and conferences in ensuing decades—was to place imagination at the center of economic dynamics. In a less fluid technological age, the traditional steady-state approach to policymaking was for the most part adequate: mandating the use of seatbelts and airbags in cars, for example, was a pretty safe regulatory bet given that cars will always have seats. But today, Shackle’s approach is more useful. He believed that policymakers (and other economic actors) should combine knowledge of the past with the use of imagination to create a “thought map” that envisions all probable and possible future scenarios. They should then base policy decisions on a wide range of such scenarios (even if they appear to conflict), and ensure that those decisions are flexible enough to encompass new scenarios as they emerge over time. [3]
Such an approach to policymaking may sound simplistic, even unscientific. But it may be regulators’ best option if they are to (a) remain relevant in the technology arena, and (b) avoid stifling innovation in a regulatory version of iatrogenesis—a violation of the physicians’ fundamental dictum to “first do no harm.”
What would this entail in practice? Primarily, it would mean focusing policy objectives on intent and outcomes, rather than narrowly regulating specific technologies or actions (e.g., data collection). It also would entail thinking about the future of technology on a broad, holistic level (which in turn means constantly rethinking the impact of related future scenarios). Technology policies today are often shaped at the component level, in silos—devices, cloud services, security, data protection, privacy, and so on. Individually, these are undoubtedly challenging issues, but in the emerging technology ecosystem they are interrelated. Regulating at this level would be inefficient, ineffective and potentially damaging to innovation. Far better to consider the IoE as a holistic system, envision (or, as Shackle would have it, imagine) a wide range of potential scenarios at that level, and then develop a flexible policy framework able to adapt dynamically as the ecosystem evolves.
This in turn will require the use of principles-based policies as the core of ecosystem governance—for example, guiding principles for trustworthy data practices, and agreed industry codes of conduct that will enable a range of implementations of those principles. These might include holistic and contextual privacy principles; fair and appropriate use of data; secure data transmission and storage; transparency of autonomous decision processes, subject to human review and redress; and trackability of data. Trackability is critical: since it is possible to log every event in the ecosystem, technology itself can play a key role in reinforcing the policies that govern it. Adaptive, dynamic policy systems, then, require a novel, holistic approach that could be based on the precepts established by Shackle and those who have built on his legacy. [4]
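As a rough illustration of the trackability point, the following Python sketch shows an append-only, hash-chained event log that records every data-sharing action and flags whether it matches a declared, principles-based purpose, so that autonomous decisions remain open to human review and redress. All names here (ALLOWED_PURPOSES, EventLog and its methods) are hypothetical, invented for this example; no particular standard or code of conduct is implied.

```python
# Hypothetical sketch of event trackability in an IoE ecosystem: every data-sharing
# action is logged and checked against a declared policy purpose. Illustrative only.

import hashlib
import json
import time
from typing import List

ALLOWED_PURPOSES = {"heating_adjustment", "health_alert"}  # assumed code-of-conduct list


class EventLog:
    """Append-only, hash-chained log so that autonomous actions can be audited later."""

    def __init__(self) -> None:
        self._entries: List[dict] = []
        self._last_hash = "0" * 64

    def record(self, device_id: str, data_kind: str, purpose: str) -> bool:
        allowed = purpose in ALLOWED_PURPOSES   # principles-based check, not a tech-specific rule
        entry = {
            "ts": time.time(),
            "device": device_id,
            "data": data_kind,
            "purpose": purpose,
            "allowed": allowed,
            "prev": self._last_hash,
        }
        # Chain each entry to the previous one so the log cannot be silently rewritten.
        self._last_hash = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = self._last_hash
        self._entries.append(entry)
        return allowed

    def audit(self) -> List[dict]:
        """Return every logged event for human review and redress."""
        return list(self._entries)


log = EventLog()
log.record("wearable-01", "skin_temperature_c", "heating_adjustment")   # permitted use
log.record("wearable-01", "skin_temperature_c", "ad_targeting")         # flagged for review
print(json.dumps(log.audit(), indent=2))
```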
There are signs that some policymakers—or at least their advisors—understand the need for a radical approach. A recent European Commission report on the Internet of Things acknowledges the need for new thinking around policymaking if the European Union is to “accelerate and improve the development” of the industry. It advocates a “soft law” approach (something the U.S. Federal Trade Commission also may be considering) that appears to encompass a regulatory framework of principles, codes of conduct, and monitoring—all predicated on “a rich set of scenarios” that are “not limited to past and present information.” Perhaps Shackle already has a few admirers in the regulatory world.
Peter Haynes is a senior fellow with the Strategic Foresight Initiative.
1. Shackle, G.L.S. 1991. Epistemics and Economics: A Critique of Economic Doctrines. Piscataway, N.J.: Transaction Publishers, p. xi.
2. Ibid., p. 447.
3. Further thoughts on long-term forecasting can be found here.
4. For a more nuanced description of what has been called “the imagination economy,” see Shackle, G.L.S. 1983. “The Bounds of Unknowledge.” In J. Wiseman (ed.), Beyond Positive Economics. New York: St. Martin's Press.