Saturday, April 07, 2007

Security metrics and developer training/certification

I was reading up on the new SANS Software Security Institute and it's an interesting concept. One debate that has been raging for a long time, though, is the question of what types of certifications we want for certain things...

  • Multiple Choice
  • Multiple Choice + Freeform answer
  • Both of the above plus a hands-on component
In the case of secure programming I'm fairly committed to a hands-on component in any sort of certification. Partly it's a question of basic knowledge versus application. I'll be interested to see whether we have a way of benchmarking how good the code is that certified developers write vs. everyone else.

It occurs to me that the SANS certification might be more useful for those reviewing code for security flaws than for those writing code. We're not asking the test-taker to produce anything, but we are asking them to review code in the test and find/explain flaws. Come hiring time, I'm more likely to consider this a meaningful cert for those who want to do reviews, pen-tests, etc. than for those I want writing secure code. Or, more precisely, it's more of a standalone credential for the tester/reviewer than for the developer.

On to a slightly related topic - Metrics.

One of my current problems is determining how I'm going to measure the success of my application security program. What sorts of metrics do we care about? A few things that spring to mind are:
  • Defect rates (per thousand lines of code, per module, etc.)
  • Individual/group error/defect rates
  • How well the process catches defects early, whether in architecture, design, or implementation
  • Code/site coverage using standard toolkits/frameworks for things like input validation, output filtering, etc.
  • Remediation time for defects
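To make the first and last items in that list concrete, here's a minimal sketch of how two of these metrics might be computed from a defect log. The `Defect` record, the module names, and the sample data are all hypothetical, not from any particular tracker:

```python
from dataclasses import dataclass
from datetime import date
from statistics import mean
from typing import Optional

@dataclass
class Defect:
    module: str
    opened: date
    closed: Optional[date]  # None while the defect is unresolved

def defects_per_kloc(defects: list, loc_by_module: dict) -> dict:
    """Defect density per module, normalized to 1000 lines of code."""
    density = {}
    for module, loc in loc_by_module.items():
        count = sum(1 for d in defects if d.module == module)
        density[module] = count / (loc / 1000)
    return density

def mean_remediation_days(defects: list) -> float:
    """Average days from open to close, over resolved defects only."""
    durations = [(d.closed - d.opened).days for d in defects if d.closed]
    return mean(durations) if durations else 0.0

# Hypothetical sample data
defects = [
    Defect("auth", date(2007, 1, 2), date(2007, 1, 12)),
    Defect("auth", date(2007, 2, 1), None),
    Defect("billing", date(2007, 1, 5), date(2007, 1, 7)),
]
loc = {"auth": 4000, "billing": 2000}

defects_per_kloc(defects, loc)   # {"auth": 0.5, "billing": 0.5}
mean_remediation_days(defects)   # 6.0
```

The point of keeping the computation this simple is the iteration argument below: if the numbers fall out of the bug tracker automatically, you can afford to change the metric when it starts driving the wrong behavior.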
Management speak/philosophy tells us that we get what we measure. What we hold people accountable for, and how we create incentives, determines what people produce. With that in mind, how do we structure our metrics to get the outcome we want?

I think, unfortunately, that it's going to be an experiment: put some metrics in place, see how they influence people's behavior, and then modify them on an ongoing basis to get the results I want. One critical aspect of this approach, of course, is that you need easy-to-gather metrics that don't require a lot of human intervention to generate. Otherwise you can't iterate on them much.

More on this as I think of it. Reading through Microsoft's SDL book right now and hopefully I'll get a few ideas there.
