Is the NSA Doing More Harm Than Good in Not Disclosing Exploits?
Inside the complicated national security calculus behind disclosing zero-day vulnerabilities.
The current debate surrounding the Vulnerabilities Equities Process (VEP), the process by which the U.S. government decides whether to disclose newly discovered software vulnerabilities or keep them secret for possible use, is admittedly rather tedious. One side accuses the NSA of “exploit hoarding” and insists the agency should disclose more of the vulnerabilities it discovers in the interest of public safety. The other side counters that the government retains only a responsible number of so-called zero-day exploits and that it discloses them when reasonable. Both sides, however, often talk past the obvious point that there will always be vulnerabilities the NSA needs to retain for national security reasons. Even those who encourage the NSA to prioritize defense over the retention of vulnerabilities for offensive use should acknowledge that disclosing a vulnerability makes us more secure only if that vulnerability is already in an adversary’s hands or likely to be independently discovered by one.
More difficult and far more interesting dilemmas arise when the NSA discovers an adversary’s capability, either because the adversary stole the NSA’s own tools or because the NSA has compromised the adversary’s infrastructure. In these circumstances there are clear defensive advantages to disclosing the vulnerability so that it can be patched: it is no longer mere speculation whether someone else knows of the flaw or might discover it. At the same time, however, the equities are more complex, because disclosing the vulnerability would risk compromising separate NSA capabilities.
This is known as the Coventry problem. Coventry was a city in Great Britain that suffered devastating bombing raids during World War II. The (perhaps apocryphal) story goes that the British government knew about the impending attacks on Coventry. But to avoid compromising the wartime signals intelligence that the Allies obtained from intercepting and decrypting high-level German communications, Churchill allegedly ordered that nothing be done to protect the city.
This difficult problem — wherein revealing knowledge reveals capabilities — would benefit from public examination in the context of vulnerability disclosure.
2013 was a bad year for the NSA. Some unknown entity, which now calls itself the “ShadowBrokers,” managed to steal three exploitation suites: one targeting routers, one targeting mail servers, and one targeting Microsoft Windows that included five zero-days. This unknown actor also managed to compromise the Internet-facing Windows workstation of a Texas Tailored Access Operations (TAO) analyst responsible for portions of a campaign infiltrating and monitoring SWIFT financial transactions in the Middle East. There has been no public disclosure, or even an anonymous leak, about how the ShadowBrokers could have accomplished this feat.
Prior to their tools being stolen in 2013, it was difficult to claim that the NSA was at all wrong to withhold the Windows exploits. These were the crown jewels for TAO, the NSA’s most elite hacking group, providing a way to walk through almost any important Windows network on the planet; meanwhile, there was no evidence that any other actor had the same or similar capability. It is easy to imagine how the VEP concluded that it was appropriate for the government to retain these kinds of exploits.
It also appears that the NSA’s pre-2013 decision on those zero-days did not negatively impact U.S. security, as only one of the five zero-days was seen elsewhere, and that use could be attributed to either the NSA or the ShadowBrokers. Therefore, it seems unlikely that disclosing these vulnerabilities would have actually deprived an adversary of some capability. The hard VEP problem arose after the tools were stolen.
Let’s assume that the NSA was aware of the ShadowBrokers theft (it might not have known, but if so that’s a separate, and far larger, problem). If, upon learning of the theft, the NSA had disclosed the vulnerabilities to Microsoft, it would have risked alerting the ShadowBrokers that the NSA knew the hackers had NSA tools. Thus, disclosure would potentially compromise the sources and methods through which the NSA learned of the ShadowBrokers’ theft and of what, exactly, had been stolen. If the NSA detected the ShadowBrokers through standard network monitoring of its own systems, then disclosure to Microsoft would not have posed much of an issue, as the agency’s use of such monitoring is inherently obvious. But detection through less conventional means might implicate a significant capability that could be disrupted by notifying Microsoft (and therefore the ShadowBrokers).
Yet by not notifying Microsoft, the NSA’s inaction enabled the ShadowBrokers to use these stolen tools against U.S. and global targets. It is hard to know which decision the NSA should have made in such a scenario: defend U.S. targets (and implicitly disclose its knowledge) or remain silent.
There is an even harder problem to consider. It is the NSA’s job to compromise our adversaries’ network exploitation infrastructure. In this process, the NSA undoubtedly discovers exploits. So what should it do about those?
If the NSA discloses an exploit, it is highly likely to compromise sources and methods. Even the use of a cutout (rather than the “anonymous source” credit the NSA used in the ShadowBrokers case) may not be sufficient, since many modern exploits actually take advantage of chains of vulnerabilities. An adversary might expect one link in a chain to break through independent discovery, but the elimination of an entire chain tells the adversary that someone has compromised its exploit.
Counterbalancing all of this is that disclosure acts to directly protect U.S. systems, including (and especially) the national-security-critical systems the NSA is charged with defending. Even the most modest prioritization of defense would suggest that removing an adversary’s capability to attack our computers is a substantial benefit.
Ultimately, the issue is whether the short-term defensive move of breaking an exploit is sufficient to override the long-term defensive benefit of continued access. How does the calculation change if the adversary detects and removes the NSA’s access — and if the adversary starts using the exploits in a more aggressive manner?
There is no clean line here, only messy questions and complex scenarios. That makes this a far more interesting policy problem than the generic “is disclosure good?” framing. I’m comfortable with the NSA keeping some NOBUS (“NObody But US”) exploits. The situation is much more awkward when the NSA knows of exploits in the hands of others, and those others don’t know that the NSA knows.
Photo credit: PAUL J. RICHARDS/AFP/Getty Images