BY THOMAS VARTANIAN
As an op-ed in The Hill on Jan. 24 explained, the U.S. has no comprehensive national strategy to address the development and deployment of artificial intelligence.
Since then, the Trump administration has held summits, published reports and even established a select committee of government officials to study it. But experts say that the U.S. is falling behind in the race for dominance in the field. Vladimir Putin has said that the artificial intelligence (AI) winner will be the "ruler of the world."
The way that people and businesses move, save, store and transmit money is being revolutionized by technology, and AI is increasing the velocity and scope of those changes.
Because financial services are part of the country’s critical infrastructure, the highest priority should be given to developing a financial services and capital markets strategy that fosters AI innovation while protecting against its unprecedented threats.
The authors of the January op-ed recommended a national commission on AI. That is a terrific idea, but I would go a step further.
We need a national commission on financial technology. It should comprise representatives of government, business, academia and the public, and its mandate should be to propose alternative financial technology strategies that the administration and Congress can consider and implement.
We are not many years removed from a time when financial systems operated through closed, proprietary networks. The many breaches of corporate security demonstrate the risks of participating in the open architecture of the internet. AI will further complicate this dynamic.
Scientists admit they do not know where artificial general intelligence (AGI) — the technological analogue of the human brain — will lead, or whether it can ultimately be controlled. That uncertainty alone counsels caution about giving intelligent machines access to financial systems without the barriers, firewalls and protocols needed to constrain them.
In his book, "Life 3.0," Massachusetts Institute of Technology Professor Max Tegmark warns that the increasing difference between the relative speed of decision-making by humans and AI may lead to a “superintelligent machine [that] may well use its intellectual superpowers to outwit its human jailers …”.
Recent government publications tout AI’s innovations and do acknowledge challenges involving employment, privacy, online security, bias and intellectual property.
In September, the House Subcommittee on Information Technology concluded that "AI has the potential to disrupt every sector of society in both anticipated and unanticipated ways," but its report went little further.
No one seems to want to be the skunk at the technology garden party, so the possibility of a financial Armageddon is rarely discussed.
About two dozen countries, including the United States, have released papers on the use and development of AI in the past two years. China, however, has laid out a plan for global AI dominance by 2030 and increased its AI spending by 200 percent between 2000 and 2015.
It has already deployed a form of algorithmic governance to monitor its own population through facial recognition, and by 2020 it will impose sweeping social-evaluation profiling tools that reward and punish citizens based on their social scores.
Since global acceptance of AI rules of engagement is not realistic, there must be a cop on that beat. Hostile nations, terrorists and other rogue players will continue to develop and deploy increasingly enhanced forms of AI to attack and undermine financial systems and other critical infrastructure.
One such deployment could disrupt the global economic balance, so any carrot for participation in a global AI accord must be matched by a big stick for aberrant behavior.
Proposed approaches range from a master AI regulator to algorithmic accountability that incentivizes businesses to verify that their AI systems act as intended and to identify and rectify harmful outcomes.
Unless the United States takes the AI lead and creates a coalition of nations to implement such preemptive offensive and defensive strategies, there will be no counterweight short of military power to respond to those who would weaponize AI.
The penalties for unleashing malicious AI must include crippling financial, technological and military sanctions. Indicting perpetrators in Russia, China, Iran or North Korea — people who will never appear in the U.S. for prosecution — won’t begin to deter these malicious actions.
The findings and recommendations of presidential and congressional commissions are often ignored. But the stakes are rarely this high. Someone needs to get the ball rolling to ensure that AI develops in a way that continues to empower benevolent humans and remains under their control.
If the United States does not take the lead, others will, and our future will be left to the winds of chance or the whims of fanatics. Nothing less than the control of global economies is up for grabs.
Thomas Vartanian is the founder and executive director of, and a professor of law at, the Financial Regulation & Technology Institute at the Antonin Scalia Law School at George Mason University.