Responsible Disclosure is Irresponsible

By John Viega
January 23, 2009

I was pretty amused recently when two people I respect went at each other over vulnerability disclosure, with the exchange quickly devolving into name-calling. It's always fun to watch a flame war (nobody got compared to Hitler, but one person did get compared to senile old Grandpa Simpson, walking around with his pants down).

But, to some degree, the two guys seemed to be talking past each other. One was arguing that full disclosure puts end users at risk, and the other was arguing that finding and fixing bugs is an important part of keeping code secure.

I happen to agree with both of them. Yes, if we didn't have good guys finding and fixing problems in code, there would be all the more problems for bad guys to find and leverage in their quest to take over the world. This is particularly true because many development organizations don't invest in finding and fixing problems; the incentives for everyone to invest here just aren't good (plus, there isn't much of a talent pool for this kind of work).

But, most of the problems in software that bad guys leverage are problems that the good guys have found and publicized.

If we hold to these two arguments, it seems we can either live in a world where we hide our security problems but remain at risk from bad guys finding plenty of problems on their own, or live in a world where the good guys hand the bad guys, on a silver platter, a roadmap for how to be bad.

In the "keep it secret" model, how are people protected? First, one hopes that it is difficult for bad guys to find security problems without source code. One also hopes that software vendors try to keep security problems out of their code in the first place. And, one hopes that when the bad guys are leveraging problems in the real world, word will quickly get back to the vendor, and the vendor will want to protect people.

In the "let it all hang out" model, security flaws in software are made public. Usually, the vendor gets a few months of advance notice, so hopefully people are protected by a patch, which we then have to hope they will install in a timely manner.

In the real world, both of these models have their advantages, but both still pretty much suck, because they leave people pretty vulnerable.

In the "let it all hang out" model, the bad guys will prey on the fact that most people don't keep their software up to date. They will then take the flaws that the good guys find, and use them to attack systems that aren't patched. This puts the burden of security on the end-user. And, because there are thousands of security problems disclosed every year (often in important software), people are constantly at risk. Bad guys try to leverage flaws quickly, before people do patch. And they assume that soon enough, there will be more flaws they can exploit, thanks to the good guys.

In the "keep it secret" model, vendors often won't find out about flaws when they're being used in targeted attacks. And, because people don't hear about specific security problems, it's a lot harder to put pressure on vendors to spend money fixing them. In this world, there are a lot more security problems out there (there isn't as much investment in finding and fixing). Yet, the bad guys have to do a lot more (and far more technical) work to find the problems they can exploit, so they are unlikely to be as profitable in leveraging security problems. They will either need to spend a lot more money to find software flaws they can exploit, or they will hold on to the flaws they have and only use them in targeted attacks.

You might say the first scenario looks better, because we should rely on people to keep their systems up to date. However, we know that even people who are well educated on the issues often don't patch in a timely manner. That's just a reality we have to deal with. And, it's a rational thing... there are some good reasons for it:

  1. Users might want to make sure updates are stable before they install them. Nobody likes it when an important program stops functioning properly.
  2. A user might not be entitled to the update, because he or she is using a version that is so old the vendor doesn't support it anymore, and that person doesn't want to pay for something newer, for whatever reason.
  3. It may not be clear that there are any security implications to the update. Certainly, some geeks assume that any update removes security problems (though, if new updates have lots of new code, maybe there are actually more security problems, rather than fewer). But most people don't have the "always patch" mindset.
  4. The risk is perceived to be low. Even I will admit to going for days without installing Apple's OS X security updates, because I feel that I'm not engaging in any risky behavior, and my machine is protected by other measures (e.g., NAT). Of course, I realize there is still some risk (e.g., a malicious ad... which is why I tend to update my browser immediately when it has security problems). Rightly or wrongly, people feel pretty safe on the Internet in general (if that weren't the case, there would be much more demand for more and better security).

The fallacy in comparing and contrasting these two "sides" is in assuming they constitute the only options. In fact, they don't. The "keep it secret" model is the world we lived in 10-15 years ago, and it sucked. The "let it all hang out" model is the world we live in today, which still sucks (though not as much). But, I envision a better world. To figure out what we should be doing better, it's instructive to look at the history of disclosure (at a very high level), and see where it's failed.

Back in the early '90s, not too many people cared about their software having security flaws, mainly because few people were on the Internet. There were some people on local Windows networks in their workplaces, but few people worried about the insider threat coming electronically when there were more direct ways to do harm.

Yet, researchers were starting to figure out that software could have security flaws, and that those flaws could have disastrous consequences... particularly, that bad guys could, if circumstances were right, take over a machine from the other side of the world and remotely run whatever code they liked.

At that time, researchers were generally pretty altruistic (meaning there weren't too many economic incentives tempting them to put their own interests ahead of the greater good). They didn't want these flaws to be used by bad guys. So, they tended to contact the software vendors to tell them about the problems they found, and how to fix them.

Most companies just ignored people reporting these security flaws, or dragged their feet indefinitely, promising fixes that never materialized. Companies typically aren't altruistic. Sure, they want their customers to be safe. But, they didn't want to incur the cost of understanding and fixing the problem (many security researchers vastly underestimate the impact on development costs). From the point of view of the company, customers weren't demanding security. And, they didn't see too much risk, because the good guys were the only ones who knew about the problem. Sure, the bad guys might find out about a problem, but until there was evidence that they had, it seemed reasonable to do nothing. Many people assumed the bad guys would never go looking, or that if they did go looking, they probably wouldn't find the same particular problem (which is an interesting issue I won't discuss further right now).

The good guys didn't like leaving people at risk. And so, by 1993, some people decided that they'd try to force vendors to do the right thing by threatening to disclose the problems to the world if the vendors didn't fix them.

This approach actually worked. Disclosure helped build awareness. In particular, disclosure of flaws in Microsoft products caught the attention of some tech reporters, which not only put pressure on Microsoft to fix its bugs, but eventually gave the company a bad reputation for security, due to the sheer volume of problems.

That doesn't mean everything has gone smoothly. Some vendors have felt blackmailed, believing that disclosure puts their customers at risk. This is particularly the case when vulnerabilities get disclosed before the vendor gets a chance to fix the problem and get the fix into customers' hands.

As a result, most people in the vulnerability research community eventually decided "full disclosure" was probably not the right thing. The shift was toward "responsible disclosure". This term might mean slightly different things to different people. But, in general, it implies that vendors will get advance notice of a problem, and two to three months to fix it and get the fix to their customers.

That sounds a lot more reasonable, but there are still a few problems:


  1. 60 or even 90 days might seem like a lot of time to a vulnerability researcher, or even to some developers. But for those on the business side, who look at all the things that need to happen to get software to consumers, it can often be too little time.

  2. Even if the vendor can move in 90 days, it's unreasonable to think that customers will upgrade in that timeframe.

  3. If the vendor actually fixes the problem, why the heck should the world be told about it anyway?

Let's look at that last point in more detail. The people on the disclosure side would say that, if problems aren't disclosed, fewer people will patch. But, on the other side of the argument, disclosure increases the likelihood of exploit for people who do not patch quickly (because the bad guys have the good guys doing their job for them). Plus, even with disclosure, it's very rare that the average consumer will notice the security risk (it basically needs to be reported in the press, or something similar). And, people who are well versed in IT should already assume that every patch might contain security fixes.

At the end of the day, this question boils down, for me, to "how much does disclosure help the bad guys?" The answer is, "a ton!" In their most recent Global Internet Threat Report, Symantec reported that they detected 15 "zero day" vulnerabilities in 2007, meaning they found 15 vulnerabilities being exploited in the wild before the vulnerabilities were disclosed to the public. But, according to the Computer Emergency Response Team, there were at least 7,236 vulnerabilities disclosed in 2007; the zero days Symantec caught amount to roughly two-tenths of one percent of that total.

Nobody has published explicit numbers that I have seen, but the vast majority of malware that leverages a security flaw (easily above 95%) uses a vulnerability that is public information.

Of course, that doesn't mean that nobody sits on undisclosed security flaws. I know plenty of people who do, including the US Government (which holds on to flaws to use strategically against America's enemies). And the bad guys obviously will. But, such security flaws tend to be used very cautiously, in hopes of keeping the weapon effective for as long as possible.

At the end of the day, if we stopped disclosing problems once vendors had fixed them, the bad guys would find more vulnerabilities themselves, yes. But we'd be making it far more expensive to be bad.

All evidence I've seen indicates that, if a vendor is going to fix a problem, then disclosure is a bad thing for the average software user. So, why does it still happen?

The short answer is that the vulnerability researchers want the fame, fortune and glory. The economic interests of this community are no longer aligned with the interests of the end user. Individual researchers want to get their names out there, so they can make more money. They can also sell vulnerabilities to companies like TippingPoint, who disclose vulnerabilities to publicize their own companies. Those companies aren't just promoting their own technical prowess to enterprise customers; they're also trying to show that they protect people against more stuff, faster, because they find these problems and protect their customers before public disclosure. So, they're making people far less safe in order to market their own companies.

Wasn't the purpose of disclosure to make people safer by forcing vendors to fix problems in their software? Microsoft fixes problems as soon as they can now, and yet people insist on giving the bad guys the keys to the kingdom. We've certainly lost sight of what's important as an industry.

I think it's right to have disclosure as the "nuclear option" to provide an incentive to vendors to do the right thing, instead of never fixing problems. But, at the same time, I think disclosure happens far more often than it should (or at least far earlier than it should), if we as an industry are going to get the best possible result.

I think the industry should move to the following disclosure practices, which I will call "smart disclosure":

  1. The vulnerability finder contacts the vendor through standard means (generally, by mailing security@domainname), and works with the vendor to validate the bug, set a schedule for fixing it, and establish regular communication between the parties.
  2. The vulnerability finder does not disclose while the vendor is behaving reasonably and acting in good faith. The finder should expect that patching can take a few months (especially when a problem requires rearchitecting to fix), and that getting a release out might even take a year or so (mostly dependent on a release schedule).
  3. If, at any point, the vendor is not acting in good faith to protect customers as quickly as is reasonable, the finder should give 30 days' notice for the vendor to correct its behavior. If it doesn't, then the finder is free to disclose publicly.
  4. If a problem is being exploited in the wild, the vendor must acknowledge the problem and provide its schedule to the public within two business days of being notified.
  5. For two years, disclosure is controlled by the software vendor, if it happens at all. After two years, the finder may disclose, but must give the vendor 30 days' notice of the exact date and time. Generally, the vendor should produce documentation that acknowledges the finder's role whenever there is disclosure.
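
To make those timelines concrete, here's a minimal sketch in C of how the clock runs under smart disclosure. Only the 30-day notice and two-year figures come from the list above; the function names, the good-faith flag, and the crude day arithmetic are illustrative assumptions of mine, not part of the proposal itself.

```c
#include <stdio.h>
#include <time.h>

#define NOTICE_DAYS  30        /* notice the finder must give before going public */
#define EMBARGO_DAYS (2 * 365) /* the vendor controls disclosure for two years    */

/* Crude calendar math: add whole days to a time_t (ignores leap years and DST). */
static time_t add_days(time_t t, int days) {
    return t + (time_t)days * 24 * 60 * 60;
}

int main(void) {
    time_t reported = time(NULL);        /* the day the finder reports the bug */
    int vendor_acting_in_good_faith = 1; /* set to 0 if the vendor stalls       */

    time_t earliest_disclosure;
    if (vendor_acting_in_good_faith) {
        /* Cooperative vendor: wait out the two-year embargo, then give 30 days' notice. */
        earliest_disclosure = add_days(reported, EMBARGO_DAYS + NOTICE_DAYS);
    } else {
        /* Stalling vendor: 30 days' notice to correct course, then disclosure is fair game. */
        earliest_disclosure = add_days(reported, NOTICE_DAYS);
    }

    char date[32];
    strftime(date, sizeof date, "%Y-%m-%d", localtime(&earliest_disclosure));
    printf("Earliest permissible disclosure: %s\n", date);
    return 0;
}
```

The point of the sketch is simply that, with a cooperative vendor, the finder's clock runs in years, not weeks.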

Problems in disclosure often happen around scheduling and communication. Communication is usually easy to fix... the vendor and finder should agree to regular status meetings, generally bi-weekly. And, they should try to respect each other, even if they don't understand each other.

As for scheduling, I have found that most vulnerability researchers do not understand the way large software development shops tend to operate, and have unreasonable expectations about when and how fixes can happen.

In the typical large development firm, you have to deal with issues like:


  1. The resources needed for part of the job may not be available. Maybe the only person who knows the code base in question is on vacation, or is working on some other project with its own time-critical issues.

  2. There seem to be more important priorities. Frankly, many customers have higher priorities than security, and companies should try to make their customers as happy as possible. For instance, I've seen many cases where customers knew about a security bug and still gave much more weight to other features, because they didn't expect the bug to be exploited in their environment (for instance, because the affected software is behind the corporate firewall, meaning only people with internal access to the company could pose a risk).

  3. People don't understand the problem. Some of the bugs are extremely complex to understand and reproduce. And, surprisingly often, vulnerability researchers report things as bugs that aren't bugs at all.

  4. It's generally unacceptable to release software changes without some confidence that the whole release will work "well enough". In the free software/open source community, and even in the shareware community, it is pretty common to let your users be your testers. But larger corporations typically have many customers that demand quality. As a result, every release, big or small, will generally go through a couple of months of rigorous testing after all the code has been written.

  5. Enterprise customers do not like frequent releases, and do not always deploy quickly. They often want no more than one release a year, because there's a cost associated with distributing the release, teaching people what's changed, and so on. Many organizations also insist on doing their own testing of a release before they deploy it, which can be expensive.

  6. Most companies don't know how to deal with security vulnerabilities internally. Often, the finder won't reach the "right" person; they'll just contact some guy on the relevant product team, or whoever at the company they can find an email address for, and then expect all the right things to happen. But, employees often don't know how to find the right team to deal with the problem. I've seen many cases where there actually is a team that manages response, but the person who got the initial contact ignored all the emails about it, and then forwarded things to the wrong person (usually, someone on the product team), who also didn't know what to do, and just let stuff fall on the floor. Few companies even have people who are experienced at managing such things, never mind documented internal processes for how to respond.


In short, things are more complex and take longer than the average vulnerability researcher expects.

I've also found that most software vendors don't know anything about the security side, and don't know how to keep the finder happy. Many people assume the vulnerability finder is a "bad guy", which isn't generally the case. Sure, motives usually aren't fully altruistic, but most vulnerability researchers don't want to see bugs exploited in the wild. Anyway, people usually get over the shock when a few minutes on Google confirms that this is, indeed, how the industry works. Generally, if the finder is polite, and points to something that documents the expectations they have (e.g., to something detailing smart disclosure or whatever form of responsible disclosure), then companies want to do the right thing.

I realize that the vulnerability finders are doing a good thing, even though the primary reason they do it is for the publicity. We can't take away their economic incentive to do the job and expect they'll still do it because they're altruistic. This is why smart disclosure has a clause that finders can eventually disclose, no matter what. But, we want that point to be far enough out that people who are reasonable about updating will be protected. We should then encourage software vendors to give security warnings when people are running software that is more than a year behind on updates.

There are a few more objections to my argument that I'd like to address:

  1. Many companies such as Microsoft are supportive of responsible disclosure. The security industry today, as a culture, has already taken for granted the notion that "responsible disclosure" is good. A few people have questioned this notion, but on the whole, people seem to assume that, since it's better than the non-disclosure days, it must be right. But when you get outside of the security community, do you really think that product managers are happy about disclosure? It hurts the reputation of the product and company, while putting the product's users at risk. They might not complain too loudly for fear of looking bad if they get called out by the press for "not caring about security". I think it is mostly irrelevant what companies think, anyhow...
  2. Shouldn't companies be required to let their users know when there is a problem, at least when the patch is issued? As an industry, we have learned to take it as a given that software has security problems. Even if you've removed all the ones people have been able to find, there are probably more waiting to be found. As long as a problem isn't in the hands of a bad guy, it seems to be in the user's best interest to not know about the specific problem, because if he doesn't know, the bad guy is less likely to find out. Note that most large software vendors spend money looking for security problems in their own software. Typically, they silently fix the problems they know about, and it is very rare to see disclosure of such vulnerabilities (though it does occasionally happen... I'd say from experience that it's far fewer than 1 in 100 fixed security problems, and it is almost always the case that the bugs get disclosed years after the patch).
  3. But won't the bad guys just reverse engineer your patch and find the security issues? If security fixes are rolled into an actual release, where there are tons of other changes, generally not. Now, if the release is explicitly a security-enhancing release, the bad guys WILL reverse engineer it and find the problems. They won't be masked by thousands of innocuous code changes. That means, if Microsoft keeps up its "Patch Tuesday" tradition, it absolutely should keep disclosing. (A toy sketch of the first step of that reverse-engineering process appears after this list.)
  4. What if we disclose the problem at a high level, but not in enough detail to reconstruct it? If you tell people there is a problem and give them a general sense of where to look, then you've cut their costs tremendously. Look at what happened with the major bug that Dan Kaminsky found in DNS last year. Once Dan acknowledged there was a bug (while trying to get people to patch in advance of the disclosure), a small segment of the vulnerability research community went off, rediscovered it, and published the details to a blog. The bad guys went off and did the same.
  5. Disclosure is known to dramatically improve the security of software. Why wouldn't you do it? Telling vendors and having them fix problems improves the security of software. Disclosure when it isn't necessary (i.e., when the vendor would have cooperated) doesn't add any security to the software. However, it does tend to take away from the security of the end user, who is far more likely to be at risk.
  6. There are already several markets for buying and selling vulnerabilities. Yes, and if we make it so the bad guys have to go to the market, instead of giving them vulnerabilities to exploit for free, then we will drive up the cost of vulnerabilities, because the demand will go up. Nothing changes in terms of vendors being pressured to fix their security problems in a timely manner.
  7. But aren't you putting people at risk by not trying to strong-arm them into upgrading by telling them there's a security hole? Well, we know that the culture is such that people tend not to patch, and not to even pay attention to whether they might need to patch. Larger companies that buy patch management software tend to pay more attention, but still often have reasons for not patching. By giving away vulnerability information, we're going to get a small percentage of people who do patch, but put a lot of people at risk as a result. That trade-off isn't worth it. We should just teach system administrators to assume that every release might have security fixes.
  8. In cryptography, it's generally accepted that systems should be public, and that people should try to break them publicly; otherwise you don't get a secure system. I agree that the more eyes we have trying to find security problems, the more problems we can find, and we can hopefully use that to make the world safer. But, if people are already using systems, it doesn't follow that everybody ends up safer. In cryptography, we certainly prefer that breaking attempts happen before people deploy systems. That's why standardizing government algorithms such as AES (a block cipher) and SHA (a family of cryptographic hash functions) is a multi-year process. We don't encourage people to use whatever cryptographic primitives they want, and then put lots of people at risk by finding the flaws after the fact. In the crypto world, it makes sense to spend a lot to build a few secure primitives, have the world vet them, and then phase out older primitives that might be near or past the end of their effective lifetime. But, this kind of review is not even close to cost-effective for most systems. There would be too many systems that need review. It would be a nice parallel if we built standard secure programming primitives and had people review those. That absolutely should happen, but note that the cost of migrating code to new primitives is often extraordinarily high (it can require major rewriting).

    By the way, I can point to a few crypto systems that have a good chance of being broken, but nobody in the academic community seems to care enough about those systems to review them (most of the crypto community has moved to proofs of security for most things, and for protocols, if you don't have a security proof, it's not worth anybody's time to look at the system unless it's in heavy use). Putting the details of a possibly broken system out there is bad for customers if the good guys aren't going to find the flaws before the customers start using the system, because you're making it easier for the bad guys to find those flaws. That is, openness only works if people participate. The supply of security flaws is still enormous, whereas the supply of vulnerability researchers who are any good is still very low in comparison.
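
On objection 3 (patch reverse engineering), the first thing an attacker does with a security-only update is locate what changed. The toy program below only illustrates that first step by byte-diffing two builds; the file names are hypothetical, and real attackers use much smarter binary-diffing tools that work over disassembly rather than raw bytes.

```c
#include <stdio.h>

int main(void) {
    FILE *old_bin = fopen("app-1.0.bin", "rb");   /* unpatched build (hypothetical name) */
    FILE *new_bin = fopen("app-1.0.1.bin", "rb"); /* patched build (hypothetical name)   */
    if (!old_bin || !new_bin) {
        fprintf(stderr, "could not open input binaries\n");
        return 1;
    }

    long offset = 0;
    int a, b;
    /* Report every offset where the two builds differ; clusters of differences
       point the analyst at the patched routine(s). Trailing bytes after the
       shorter file ends are ignored in this toy version. */
    while ((a = fgetc(old_bin)) != EOF && (b = fgetc(new_bin)) != EOF) {
        if (a != b)
            printf("files differ at offset 0x%lx\n", offset);
        offset++;
    }

    fclose(old_bin);
    fclose(new_bin);
    return 0;
}
```

Clusters of differing offsets point straight at the patched routine, which is exactly why a fix buried among thousands of unrelated changes in a big feature release is so much harder to pick out.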

While I do think smart disclosure is the right way to go, I think the culture we have today is pretty ingrained, and will be difficult to change. In particular, I don't expect that Microsoft would stop Patch Tuesday. For one thing, it's not in the economic interest of the vulnerability community to delay taking credit for finding vulnerabilities. So even though they're hurting end users, researchers are unlikely to be supportive of any improvements. Since they will be evangelizing to the security community and beyond, there is a good chance that, if Microsoft tried to move from a monthly patch model to practices based around smart disclosure, there would be a backlash. The vulnerability researchers would try to paint Microsoft as NOT caring about security, even though it would be doing the best thing for its customers. Heck, I'm sure there would even be plenty of people within Microsoft who are so indoctrinated in today's security culture that they'd also disapprove of a move away from Patch Tuesday.

Therefore, I don't really expect anything to change. I hope it does, and I'd like to see governments legislate disclosure practices that are in the best interests of their people, or something like that. But, I do want to emphasize to those of you who aren't caught up in the culture of today's security industry how big a disservice the industry is doing you. In particular, the many companies that find vulnerabilities as a way to market their own security products (a list that even includes big names such as IBM) are giving tons of ammo to the bad guys, and making the world a less secure place for the rest of us.

---
If you want to discuss this with me, or just flame me, feel free to message me on twitter, @viega.


3 Comments

Great article, John.

In a future post, you may want to compare/contrast fixing of vulnerabilities in Open Source versus proprietary/closed software. For example, if Apache releases an update to the 2.0.x server, then the bad guys will know the release contains security fixes which are easily discovered in Apache's Subversion repository. The only way that Apache can even attempt to "hide" security fixes is in the latest release of software among a raft of other changes. However, the larger install base (and, thus, the majority of affected users) use prior releases. As a result, we pretty much have to abide by the "responsible disclosure" pattern.

If software continues its trend towards more Open Source, then this could have an interesting effect on the patterns of disclosure possible.

Thanks, Greg! It's certainly true that when you provide public access to the source repository, it becomes easier to pick out the security fixes. It's definitely a difficult spot, because many of your users aren't going to upgrade in a timely manner, whether or not you guys disclose your security flaws.

In the case of open source, I haven't seen enough data to indicate whether the bad guys really are likely to be picking out the security vulnerabilities that get swept under the rug (i.e., found via internal audit and fixed silently). It might still be too time-consuming, with too few positive results, to be worth it to them. If they're not too likely to get ammo from it, or if they're not going to use the ammo too often, then the overall risk to your user base might be lower if you do sweep things under the rug. I think that's likely the right thing to do, but I'm sure data could persuade me that the average impact across the whole user base is better if you can get that early notice to the few people who are likely to upgrade (with closed source software it seems pretty clear that silent fixing is better).

If trying to hide your fixes in a sea of check-ins really is the best strategy for minimizing customer impact, then I don't think you have to abide by "responsible disclosure". But, in the real world, almost everyone who comes to you with a security bug is going to want their credit quickly, and won't like the idea of holding back on info.

I also think that this will probably vary with the complexity of the bugs. If you've got an OSS project with 1,000 bad strcpys, they are easy to find, and that probably shouldn't be swept under the rug. But in something like Apache, with a good track record, where we know lots of people have gone looking and found nothing, I think you can assume the problems that do surface aren't particularly easy to identify (though on the rare occasion people come up with a new vulnerability type, you should reevaluate that).
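
For anyone who hasn't seen one, here's the kind of "bad strcpy" I mean, next to a bounded version. This is a made-up example, not code from any real project:

```c
#include <stdio.h>
#include <string.h>

/* Classic overflow: no length check, so a long, attacker-supplied name
   writes past the end of buf and smashes the stack. */
void greet_unsafe(const char *name) {
    char buf[16];
    strcpy(buf, name);
    printf("hello, %s\n", buf);
}

/* Bounded copy with explicit termination; long input is truncated
   instead of overflowing. */
void greet_safer(const char *name) {
    char buf[16];
    strncpy(buf, name, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';
    printf("hello, %s\n", buf);
}

int main(void) {
    /* Calling greet_unsafe() with this string would corrupt the stack. */
    greet_safer("a string much longer than sixteen characters");
    return 0;
}
```

Flaws of the first kind are trivial to grep for, which is why I don't think a project full of them should try the silent-fix approach.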

So, in summary, I think that, for open source, the best course is probably the same as I recommended for proprietary software, but it's less clear. Since it's less clear though, it's even less likely that practices are going to change.

With regard to objection 3, you may want to look at the Automatic Patch-Based Exploit Generation paper by Brumley et al. at http://www.cs.cmu.edu/~dbrumley/pubs/apeg.pdf
