By Jack Corrigan
Without better standards for measuring the performance and trustworthiness of AI tools, however, officials said, the government could have a tough time striking a balance between fostering AI innovation and building public trust in the technology.
On Monday, the National Institute of Standards and Technology released its much-anticipated guidance on how the government should approach developing technical and ethical standards for artificial intelligence. Though it doesn’t include any specific regulations or policies, the plan outlines multiple initiatives that would help the government promote the responsible use of AI and lists a number of high-level principles that should inform any future standards for the tech.
The strategy also stresses the need to develop technologies that would help agencies better study and assess the quality of AI-powered systems. Such tools, which include standardized testing mechanisms and robust performance metrics, would allow the government to better understand individual systems and determine how to develop effective standards.
“It is important for those participating in AI standards development to be aware of, and to act consistently with, U.S. government policies and principles, including those that address societal and ethical issues, governance and privacy,” NIST officials wrote in the plan. “While there is broad agreement that these issues must factor into AI standards, it is not clear how that should be done and whether there is yet sufficient scientific and technical basis to develop those standards provisions.”
NIST’s plan was born out of a February executive order that called on agencies to ramp up their investments in AI as global competitors like China work to bolster their own AI capabilities. The strategy comes as one of the government’s first and most concrete steps toward placing guardrails on a technology that could have significant negative repercussions if left unchecked.
The AI standards developed in the years ahead should be flexible enough to adapt to new technologies while also minimizing bias and protecting individual privacy, the agency said. While some standards will apply across the broader AI marketplace, NIST advised the government to also examine whether specific applications require more targeted standards and regulations.
“The degree of potential risk presented by particular AI technologies and systems will help to drive decision making about the need for specific AI standards and standards-related tools,” officials said.
As the government begins developing rules for AI, timing is also critical, according to NIST. Standards that come too early could get in the way of innovation, officials said, but standards that come too late will make it harder to win industry’s voluntary buy-in. As such, agencies need to continually look outside of government to gauge the current state of AI and understand when federal action may be needed.
“The government’s meaningful engagement ... is necessary, but not sufficient, for the nation to maintain its leadership in this competitive realm,” NIST said. “Active involvement and leadership by the private sector, as well as academia, is required.”
In the plan, NIST officials said government leaders should work to better coordinate agencies’ efforts to understand AI and develop standards for the tech. To that end, they recommended the White House designate a member of the National Science and Technology Council to oversee AI standards and urged agencies to study the approaches tech companies are taking to steer their own AI development efforts.
NIST also advised the government to invest in research focused on understanding AI trustworthiness and incorporating those metrics into future standards. Expanding public-private partnerships could also help inform federal AI standards, officials said, and increasing cooperation with international partners could help address many of the national security concerns related to the tech.
“Public trust, security, and privacy considerations remain critical components of our approach to setting AI technical standards,” U.S. Chief Technology Officer Michael Kratsios said in a statement. “As put forward by NIST, federal guidance for AI standards development will support reliable, robust and trustworthy systems and ensure AI is created and applied for the benefit of the American people.”