Saturday, April 19, 2008

Metrics and Audience

There has been some chatter recently about a post Pete Lindstrom made about Microsoft's SDL and their publicly disclosed metrics. I chimed in on Pete's blog as well as on the Microsoft SDL blog; here is a little more.

The fundamental confusion here is about the audience for the vulnerability numbers, and for metrics in general.

There are several audiences here:
  1. Microsoft's customers, competitors, and the public at large.
  2. Security folks, especially software security folks that want to improve the quality of their software.
  3. People who want more metrics about all things generally, the costs of security, etc.
Microsoft's vulnerabilities-in-shipped-software metric is really targeted only at audience #1. Like it or not, what customers care about, as Michael Howard rightly points out, is how likely they are to get attacked/hacked and how many patches they have to deploy. Microsoft, for its part, also cares about audience #1 for the reasons above, and because releasing patches is extraordinarily expensive.

Security folks, especially those working on their own software security initiatives, find the vulnerabilities metric mostly useless. It gives us no insight into *how* Microsoft has achieved the reduction in vulnerabilities. What we'd all love to know is how much each element of the SDL contributes to reducing vulnerabilities: a percentage breakout of how effective each element (training, threat modeling, testing) is at reducing vulnerability counts, ideally split between design/architecture defects and implementation defects.
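To make that concrete, here's a rough sketch of the kind of breakout I mean. It assumes, hypothetically, that every defect caught before ship gets tagged with the SDL activity that found it and with its defect class; all the data below is invented.

```python
from collections import Counter

# Hypothetical tagging: (sdl_activity, defect_class) per defect caught pre-ship.
defects = [
    ("threat_modeling", "design"),
    ("static_analysis", "implementation"),
    ("fuzzing", "implementation"),
    ("threat_modeling", "design"),
    ("training", "implementation"),
    ("static_analysis", "implementation"),
]

# Percentage contribution of each SDL activity to the defects caught.
by_activity = Counter(activity for activity, _ in defects)
total = sum(by_activity.values())
for activity, count in by_activity.most_common():
    print(f"{activity}: {count}/{total} = {count / total:.0%} of defects caught")

# The design vs. implementation split the post wishes Microsoft published.
by_class = Counter(dclass for _, dclass in defects)
print(f"defect class split: {dict(by_class)}")
```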

At the same time, I'm willing to acknowledge that developing these metrics is a full-time job for multiple people. And tracking the metrics over time is difficult, since it's hard to normalize the defects between products and across time. New attacks are always surfacing, so how do you track the impact of new attack types over time? How do you track the impact of better code-scanning/static-analysis tools over time? As the tools improve you'll find more defects when you run them, which will skew your metrics somewhat.
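To see why normalization is hard, consider even the first obvious step: dividing raw counts by code size to get a defect density. In the sketch below (every number invented), raw counts and densities tell opposite stories, and even density doesn't correct for a better scanner simply finding more of what was always there.

```python
# name: (vulns_reported, thousands_of_lines_of_code) -- all invented
products = {
    "product_a_2006": (20, 300),
    "product_a_2008": (25, 1200),  # more raw vulns, but 4x the code
    "product_b_2008": (12, 150),
}

for name, (vulns, kloc) in products.items():
    print(f"{name}: {vulns} vulns / {kloc} KLOC = {vulns / kloc:.3f} per KLOC")
# product_a's raw count went *up* from 2006 to 2008, yet its density
# fell by two-thirds -- which story you tell depends on normalization.
```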

The fundamental unanswered question, though, is how we actually measure the security of software. From a general customer standpoint, what you care about is how likely you are to get attacked and compromised running one piece of software versus another, what that software is going to cost to buy and run, etc.

For the security community, what we're looking for is a metric that more closely tracks the "real" security properties of a piece of software: how hard it is for an expert to attack, how it fares in real-world deployments, etc.

Unfortunately, no one metric is going to capture this. As I've previously mentioned, the QoP workshop is a great place to go if you're looking for answers to the "how do we measure software security" question. But if what you want to know is roughly how much it's going to cost to run and implement a piece of software, based on things like the number of required patches and their frequency/severity, then perhaps Microsoft's vulnerability metric is for you.
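If you do want that customer-cost reading of the metric, a back-of-envelope version might look like the sketch below. The severity weights and the per-unit rollout cost are invented purely for illustration.

```python
# Hypothetical weights and costs -- tune to your own patch-deployment reality.
SEVERITY_WEIGHT = {"critical": 3.0, "important": 2.0, "moderate": 1.0}
COST_PER_WEIGHTED_ROLLOUT = 500  # invented dollars per weighted patch unit

# (severity, count) of patches a product required in a year -- made up.
patches_last_year = [("critical", 2), ("important", 5), ("moderate", 9)]

weighted = sum(SEVERITY_WEIGHT[sev] * count for sev, count in patches_last_year)
print(f"weighted patch burden: {weighted}")
print(f"rough annual patching cost: ${weighted * COST_PER_WEIGHTED_ROLLOUT:,.0f}")
```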

Saturday, April 12, 2008

My Favorite RSA Sessions

I spent the whole week up at the RSA conference, including the Monday before, attending a few pre-conference activities. If you didn't get to go but know someone who did, I thought I'd recommend a few of the sessions I found most informative. I attended more sessions than the ones listed here, but these are the talks that resonated most with me.


DEV-201 Implementing a Secure SDLC: From Principle to Practice

This session was a fantastic overview of the SDL practices that EMC has been implementing for the last two years, and a good look at what it takes to roll out the SDL across a bunch of products.



DEV-301 Effective Integration of Fuzzing into Development Life Cycle


A really good overview of what fuzzing is, how to think about the different types of fuzzing, and what types of applications it works best on.
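If you've never seen fuzzing in action, here's a toy sketch of its simplest flavor, mutation-based fuzzing: take a valid input, flip some bytes, and watch for crashes. The parse_header target and its length-field bug are hypothetical stand-ins for whatever you'd actually test.

```python
import random

def mutate(data: bytes, flips: int = 3) -> bytes:
    """Flip a few random bytes in an otherwise valid input."""
    buf = bytearray(data)
    for _ in range(flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def parse_header(data: bytes) -> bytes:
    """Hypothetical target with a classic bug: it trusts a length field."""
    length = data[0]
    if length > len(data) - 1:
        raise IndexError("over-read: length field exceeds buffer")
    return data[1:1 + length]

seed_input = bytes([4]) + b"ABCD"  # well-formed: length byte + 4-byte body
for i in range(1000):
    sample = mutate(seed_input)
    try:
        parse_header(sample)
    except IndexError as exc:  # a crash/exception is a finding to triage
        print(f"iteration {i}: {exc} on input {sample!r}")
        break
```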



AUTH-403 Knowledge-Based Authentication (KBA) in Action at Bank of New York Mellon


An excellent overview of what BNY-Mellon went through in implementing KBA for part of their authentication process. They deployed Verid to help customers sign up for the site. If you're not familiar with KBA, think about how the credit reporting agencies authenticate you before handing over your credit report: they ask you a bunch of questions about your bills, payments, etc. that they figure only you will know. A KBA system such as Verid can do the same, but it pulls data from a lot more sources, so it can ask about former addresses, phone numbers, employers, etc. BNY-Mellon has put together a pretty good program, they're collecting great metrics about its success, and the presenters were also excellent. Probably the best session I saw all around, even though it was one of the least technical.
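In sketch form, the core KBA loop looks something like the following. To be clear, this is not Verid's actual design; every record, field, and function here is hypothetical.

```python
import random

# Records an identity aggregator might hold about one person -- all invented.
records = {
    "credit_bureau":   {"last auto loan lender": "First Example Bank"},
    "address_history": {"street you lived on in 2003": "Oak Street"},
    "employment":      {"employer in 2005": "Acme Corp"},
}

def pick_questions(recs, n=2):
    """Draw n (question, answer) pairs from the pooled data sources."""
    pool = [(f"What was your {field}?", value)
            for source in recs.values()
            for field, value in source.items()]
    return random.sample(pool, n)

def verify(given, expected):
    """Pass only if every answer matches (real systems allow partial credit)."""
    return all(g.strip().lower() == e.strip().lower()
               for g, e in zip(given, expected))

challenge = pick_questions(records)
for question, _ in challenge:
    print(question)

# Simulate a user who answers everything correctly:
answers = [answer for _, answer in challenge]
print(verify(answers, [answer for _, answer in challenge]))  # True
```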



GOV-401 Will Your Web Research Land You in Jail?


Sara Peters, the editor of the 2007 CSI report on web vulnerability research and the law, gave an overview presentation of the report. On the one hand, I was a little disappointed because the material was relatively dated; RSA makes people submit their papers/presentations quite early. On the other hand, it was nice to revisit this topic, since it was this report that prompted the vulnerability disclosure policy I helped author last year.