Thursday, July 19, 2007

Security Reporting Policies That Encourage Responsible Disclosure?

I was reading Jeremiah's piece recently about the CSI working group he was on, which dealt with liability for security researchers, especially those working in the web application space. It got me thinking about creating disclosure policies that serve several purposes:
  • Encourage Responsible Disclosure (subject to interpretation)
  • Provide clear expectations and ground rules
  • Protect researchers who disclose responsibly, i.e., waive liability for researchers who follow the predefined rules
I'm working to contact a few of the people involved in the CSI report to find examples of disclosure policies that achieve the above goals. In my mind I'd want the policy to have roughly these items:

  1. Tell the company first about vulnerabilities
  2. Don't sell the vulnerability or otherwise distribute it until hearing back from the company
  3. Don't exploit the vulnerability beyond what is necessary to demonstrate the weakness.
    1. Example: If there is an authorization issue, use two of your own accounts; don't break into someone else's (see the sketch after this list).
  4. Do these things, and we guarantee we won't go after you for doing vulnerability research on our site.
  5. If you're helpful, we'll try to run a thank-you page listing you. We don't, however, pay for vulnerabilities.
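To make the authorization example in item 3 concrete, here's a minimal sketch of what that kind of test might look like. Everything in it is hypothetical: the site, the /messages endpoint, and the two researcher-owned accounts are invented for illustration and don't come from any real provider's API.

    # Hypothetical sketch: demonstrating an authorization flaw using only
    # accounts the researcher owns. The site, endpoint, and credentials are
    # invented for illustration.
    import requests

    SITE = "https://example-site.invalid"

    def login(username, password):
        """Log in and return an authenticated session."""
        session = requests.Session()
        session.post(f"{SITE}/login", data={"user": username, "pass": password})
        return session

    # Both accounts were created by, and belong to, the researcher.
    session_a = login("researcher_a", "password-a")
    session_b = login("researcher_b", "password-b")

    # Account B creates a private message and notes its ID.
    created = session_b.post(f"{SITE}/messages", data={"body": "private note"})
    message_id = created.json()["id"]

    # Account A tries to read B's message. If this succeeds, the
    # authorization check is broken, and the weakness was demonstrated
    # without ever touching a stranger's data.
    probe = session_a.get(f"{SITE}/messages/{message_id}")
    print("Vulnerable" if probe.status_code == 200 else "Access correctly denied")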
If you have pointers to good disclosure/reporting policies, I'd love to see them. I looked at a number of the major providers and didn't see any policies that really hit the nail on the head.

Google
  • Overall, good page
  • Doesn't include waiver for the researcher
Yahoo
  • Doesn't mention responsible disclosure
  • Doesn't include waiver for researcher
MySpace
  • I couldn't really find their security reporting page/info.
  • http://www.myspace.com/security points to a really odd place
Facebook
  • Not much in the way of reporting a security vulnerability
  • No waiver of liability

Wednesday, July 18, 2007

Pharmaceutical Liability vs. Software Liability

I've written in the past about software security liability and how difficult it is to create high-quality software that is free from defects.

One of the problems, as was pointed out before, is that software and computers don't have a fixed use that can be anticipated during the development cycle, and consequently saying that software isn't "fit for purpose" is a really tough judgment call.

I started thinking of other products where bad outcomes happen even during correct use, where the flaws aren't necessarily the fault of the manufacturer. Pharmaceuticals come to mind as products that have:
  • Large safety concerns
  • Potentially large benefits (antibiotics sure are nice, aren't they?)
  • Per-individual side effects that are tricky to predict
Pharmaceutical companies develop drugs using a lengthy process to try to ensure safety. The list of things they do is extensive:
  • Pre-Approval
    • Computer testing of toxicity
    • Animal testing of toxicity
    • Phase 1 trials in humans (a small group) to test toxicity and effects
    • Phase 2 trials (a larger number of people) to determine drug efficacy
    • Phase 3 clinical trials (hundreds to thousands of people over 1-3 years) to determine efficacy, adverse effects, etc.
    • Drug interaction trials and labeling
    • Extensive documentation trail
    • Get FDA Approval
  • Post Approval
    • Adverse event reporting capability
    • Updates to labeling
    • Constant quality checks
Despite all of these steps, someone sometimes suffers an adverse event from taking a medication. When they do, they blame the pharmaceutical company for a defective product, rightly in some cases and wrongly in others. Sometimes the causes of the problem are:
  • Individual "allergic" reaction
  • Complicated or unforeseen drug interaction
  • Unsafe Product
    • Long-term safety issues that didn't surface during clinical trials.
Depending on the specific cause, it isn't always fair to blame the problem on the pharmaceutical manufacturer.

A pharmaceutical company can no more anticipate individual allergic reactions than a software vendor can guess at how someone is going to use their software. What matters most in determining liability is the level of due diligence and proper process that went into the product development, not the outcome itself.

All of this costs money. Current estimates are that developing a drug and bringing it to market costs approximately $800 million. Individual manufacturing costs are generally low, such that the first pill that comes off the production line costs $800 million and each additional pill costs about 5 cents.
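As a quick back-of-the-envelope illustration of those figures (the $800 million and 5 cents are the rough estimates quoted above, not precise data), here is how the average cost per pill falls with volume while the marginal cost stays flat:

    # Back-of-the-envelope: average vs. marginal cost per pill, using the
    # rough figures from the text ($800M to develop, $0.05 per pill to make).
    DEVELOPMENT_COST = 800_000_000   # fixed cost to bring the drug to market
    MARGINAL_COST = 0.05             # manufacturing cost per additional pill

    for pills in (1, 1_000_000, 100_000_000, 1_000_000_000):
        average = (DEVELOPMENT_COST + MARGINAL_COST * pills) / pills
        print(f"{pills:>13,} pills: about ${average:,.2f} each on average, "
              f"${MARGINAL_COST:.2f} at the margin")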

There is a big discussion going on right now about flaws in the process on the legal, FDA, and pharmaceutical sides. It's tricky to bring smaller, targeted drugs to market because the costs of developing and gaining approval for a new medication are prohibitive. The Economist had a few recent pieces on how drug companies are trying to develop targeted medications and how FDA regulation may be doing more harm than good in some cases.

If you've read this far you may be asking yourself what this has to do with software liability and software security. The points are:
  • Other products are subject to heavy regulation but still manage to turn a profit
  • The quality of the process doesn't always guarantee a quality outcome - especially in the face of uncertain product use
  • If we impose too much liability on software manufacturers we could drastically raise prices and/or reduce the amount of software available
  • Sometimes regulation does more harm than good
Just some points to think about the next time questions of software liability appear simple.