The cybersabotage campaign against Iran’s nuclear facilities didn’t just damage centrifuges. It undermined digital security everywhere.
A few months after President Obama took office in 2009, he announced that securing the nation's critical infrastructure -- its power generators, its dams, its airports, and its trading floors -- was a top priority for his administration. Intruders had already probed the electrical grid, and Obama made it clear that the status quo of unsecured systems was unacceptable. A year later, however, a sophisticated digital weapon was discovered on computers in Iran, built to attack a uranium enrichment plant near the town of Natanz. The worm, dubbed Stuxnet, would eventually be identified by journalists and security experts as a U.S.-engineered attack.
Stuxnet was unprecedented in that it was the first malicious code found in the wild that was built not to steal data, but to physically destroy equipment controlled by the computers it infected—in this case, the cylindrical centrifuges Iran uses to enrich uranium gas.
Much has been said about Stuxnet in the years since its discovery. But little of that talk has focused on how the use of the digital weapon undermined Obama’s stated priority of protecting critical infrastructure, placed that vulnerable infrastructure in the crosshairs of retaliatory attacks, and illuminated our country’s often-contradictory policies on cyberwarfare and critical infrastructure security.
Even less has been said about Stuxnet’s use of five so-called “zero-day” exploits to spread itself, or about the troubling security implications of the government's stockpile of zero-days -- exploit code designed to attack previously unknown vulnerabilities in computer software.
Because a zero-day vulnerability is unknown to the software's maker, no patch exists yet to fix it and no signatures exist to detect exploit code built to attack it. Hackers and cybercriminals uncover these vulnerabilities and develop zero-day exploits to gain entry to susceptible systems and slip a virus or Trojan horse onto them, like a burglar using a crowbar to pry open a window and slip into a house. But organizations like the NSA and the U.S. military also use them to hack into systems for surveillance purposes, and even for sabotage, as was the case with the centrifuges in Iran.
Generally when security researchers uncover zero-day vulnerabilities in software, they disclose them to the vendor to be fixed; to do otherwise would leave critical infrastructure systems and other computers open to attack from criminal hackers, corporate spies and foreign intelligence agencies. But when the NSA uncovers a zero-day vulnerability, it has traditionally kept the information secret in order to exploit the security hole in the systems of adversaries. In doing so, it leaves critical systems in the U.S.—government computers and other systems that control the electric grid and the financial sector—vulnerable to attack.
It's a government model that relies on keeping everyone vulnerable so that a targeted few can be hacked—the equivalent of withholding vaccination from an entire population so that a select few can be infected with a strategic biological virus.
It's also a policy that pits the NSA’s offensive practices against the Department of Homeland Security's defensive ones, since it's the latter's job to help secure critical infrastructure. That’s more than just poor policy. It’s a combination that could someday lead to disaster.
None of this would be so troubling if the use of zero-days in Stuxnet were an isolated event. But the U.S. government has been collecting zero-day vulnerabilities and exploits for about a decade, fueling a flourishing market to meet the demand and a burgeoning arms race with other countries racing to stockpile zero-day tools of their own. The trade in zero-days used to be confined to underground hacker forums, but in the last ten years it has gone commercial: the market is now populated by small boutique firms whose sole business is zero-day bug hunting, and by large defense contractors and staffing agencies that employ teams of professional hackers to find security holes and create exploits for governments to attack them. Today a zero-day exploit can sell for anywhere from $1,000 to $1 million. Thanks to the injection of government dollars, what was once a small and murky underground trade has ballooned into a vast, unregulated cyberweapons bazaar.
When I spoke with former NSA and CIA Director Michael Hayden for a book about Stuxnet, he defended the government's general use of zero-days -- without acknowledging that the U.S. was behind Stuxnet -- by citing the “Nobody But Us” rule the NSA uses for deciding when to withhold information and when to disclose it: the agency keeps a vulnerability secret only if it judges that nobody but the U.S. could exploit it. “How unique is our knowledge of this,” he said, “or our ability to exploit this compared to others?”
At first glance, the argument seems reasonable. But ask anyone in the cybersecurity industry about the principle behind it, known as “security through obscurity”—trusting a system's safety to the assumption that nobody else knows about its vulnerability—and you’ll find it’s widely discredited. In the words of Obama’s former cybersecurity advisor Howard Schmidt: “It’s pretty naive to believe that with a newly discovered zero day, you are the only one in the world that's discovered it. Whether it’s another government, a researcher or someone else who sells exploits, you may have it by yourself for a few hours or for a few days, but you sure are not going to have it alone for long.”
Odds are that while Stuxnet was exploiting the vulnerabilities it used to get into Iran's uranium enrichment plant, a hacker or cyberwarrior from another nation state was exploiting the same vulnerabilities in other systems—possibly even systems in the U.S.
Capitol Hill has largely ignored the issue of zero-days, except to discuss them in a general sense in relation to attacks on companies like the retailer Target.
Last December, however, the President’s Review Group on Intelligence and Communications Technologies, convened in the wake of the Edward Snowden leaks to develop reforms for the intelligence community, concluded that the U.S. government should use zero-day exploits only for “high priority intelligence collection.” In almost all instances involving widely used software, the group wrote, it's in the national interest to patch vulnerabilities rather than use them for intelligence collection. When the government does decide to use a zero-day hole, the period for exploiting it should be limited, after which it should be disclosed. The review board also asserted that decisions about when to use or disclose zero-days should be subject to multi-agency review and oversight, and not left in the hands of the NSA.
A few months later, the New York Times reported that the White House had “reinvigorated” an interagency process for deciding when to share information about zero-day vulnerabilities so they could be patched. “When Federal agencies discover a new vulnerability in commercial and open source software … it is in the national interest to responsibly disclose the vulnerability rather than to hold it for an investigative or intelligence purpose,” a government statement said. Unless there is “a clear national security or law enforcement need,” the bias would lean toward disclosure. That addition of “law enforcement” to the caveat has critics concerned, since this greatly expands the pool of opportunities for using zero-days. The announcement also said nothing at all about a time limitation.
Recently, the NSA director asserted in a speech at Stanford University that his agency is committed to disclosing security holes to vendors. But he also reserved the right to keep certain vulnerabilities close-hold. What the criteria for keeping a zero-day secret were, he wouldn’t say, other than to repeat what Hayden had told me -- that vulnerabilities deemed too difficult for anyone else to exploit are fair game for the NSA to withhold.
Recently I pressed Senator Ron Wyden -- a leading NSA critic -- on why lawmakers, despite their stated concerns about the security of U.S. critical infrastructure, have remained silent on the government’s use of zero-days. He took a few days to respond. When he did, he seemed ready to take a stand. “Even temporary withholding of vulnerabilities should be done only under pressing circumstances where it is truly necessary for American security. In the past, intelligence agencies have too often been allowed to take reckless actions that undermine the internet and harm huge numbers of Americans for little or no security gain.”
To properly ensure a more responsible approach going forward, he said, “will require the executive branch to share information about these decisions not only with members of Congress, but also with specialized staff who possess appropriate legal and technical expertise. If Congress allows the executive branch to withhold information about these decisions from cleared congressional staff, it will represent a failure to perform its oversight duties, to protect the American public and to protect our economy.”
It’s time for lawmakers to get serious about the security of U.S. systems and decide whether our need to defend against digital assaults is greater than our need to attack others.