Monday, June 04, 2007

Questions About Software Testing and Assurance

Measuring assurance in security is difficult. Ideally we'd like to be able to objectively measure the work-factor an attacker must expend to successfully attack something. Unfortunately, it's rare that we have that level of certainty about the work-factor. Only when the attack reduces to breaking a cryptosystem can we make any true estimate of the work required to break our software, and even that relies on the cryptosystem being used properly, both in the code and operationally (key management, etc.).
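
To make that contrast concrete, here's a minimal sketch (my own illustration, in Python, not anything from Whittaker's post) of the kind of objective work-factor estimate a cryptosystem gives us; the attacker speed is an assumed parameter:

```python
def brute_force_work_factor(key_bits: int, guesses_per_second: float) -> float:
    """Expected years to find a key by exhaustive search.

    On average an attacker must search half the keyspace, so the
    expected number of guesses is 2**(key_bits - 1).
    """
    expected_guesses = 2 ** (key_bits - 1)
    seconds = expected_guesses / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)

# A 128-bit key against a (generously fast) 10^12 guesses/sec attacker:
print(f"{brute_force_work_factor(128, 1e12):.2e} years")  # ~5.39e+18 years
```

Nothing else in software gives us a number like that; everywhere else we're guessing.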

James Whittaker's recent piece on the Microsoft SDL blog about "Testing in the SDL" is quite interesting when looked at through the lens of assurance, which is what he says he is describing:

Security testing has been – and will always be – about assurance, that is, assuring that the product as built and shipped has been thoroughly tested for potential vulnerabilities. Bug detection and removal will continue to be important until a time in the future comes when we can deploy provably secure systems. However, we’re likely never to get to such a future without learning the lessons that testing is teaching right now. Anyone can write a system and call it secure – it’s only through watching real systems fail in real ways that we learn to get better. Moral of the story – testing is by far the best way to show us what we’re doing wrong in software development.
What is so unfortunate about this statement is that James is correct. In the vast majority of cases we simply can't reduce any element of the software to a purely objective measure of attacker work-factor. We're generally still so bad at writing software that we design security in up front but, in the end, rely on testing to give us any certainty about the quality of the software we've created.

We lack the ability to analytically evaluate software from a quality and/or security perspective, so we're left with the crude approximation that is testing. It's a bit like needing to do an integral and discovering that the function has to be evaluated numerically: you're left with an approximation rather than a closed-form solution. You know that no matter how hard you try you're only getting an approximation of the answer, and that is the best you can do. The real world is like that sometimes, but oh, to have the certainty that a simple equation would give you.
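
The analogy can be made literal. Here's a small sketch (mine, for illustration) using e^(-x^2), which has no elementary antiderivative, so any evaluation of its integral over a finite grid is necessarily an approximation of the exact answer sqrt(pi):

```python
import math

# Exact value: the integral of e^(-x^2) over the whole real line is sqrt(pi).
closed_form = math.sqrt(math.pi)

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule: approximate the integral of f on [a, b]."""
    h = (b - a) / n
    total = (f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n))
    return total * h

# Truncate the infinite interval to [-10, 10], where the tails are negligible.
approx = trapezoid(lambda x: math.exp(-x * x), -10, 10, 10_000)

print(closed_form)  # 1.7724538509055159
print(approx)       # very close, but only ever an approximation
```

Testing is the trapezoid rule of software quality: more test points buy more confidence, never certainty.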

I think the one saving grace here is that we know reusable components are a good approach to this problem. Robert Auger wrote a nice piece on this titled "The Business Case for Security Frameworks," in which he argues that reusable security frameworks are a great initial defense against insecure coding practices. I agree wholeheartedly, and I'd add that once you start using frameworks like this, you can hopefully reduce your testing burden to new code and new attack vectors rather than going over the same ground again and again. James, I'm sure, knows this all too well.
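
As a toy example of what a framework centralizes (my sketch, using Python's sqlite3 parameter binding as a stand-in for the kind of framework Auger describes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "admin"))

def find_user_unsafe(name: str):
    # Hand-rolled string building: every call site is a potential SQL
    # injection and has to be tested individually, over and over.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_framework(name: str):
    # The driver's parameter binding handles quoting once, centrally;
    # testing can then focus on new code rather than re-proving this path.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_framework("alice"))          # [('admin',)]
print(find_user_unsafe("x' OR '1'='1"))      # the classic injection: every row
```

Fix the escaping once in the framework and the whole class of bugs, and its attendant test burden, largely disappears from application code.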

There has been quite a bit of discussion about exactly this sort of analysis at Microsoft with respect to Vista, and how its newly created and properly engineered code will fare over time against attackers. Despite recent bogus claims to the contrary, Vista seems to be holding up pretty well.

What I'd still love to have, though, is an analytical way of measuring code quality and/or attacker work-factor, so that we had a "real" measure of how resistant to attack a piece of software is. Schneier has been writing forever about an Underwriters Laboratories for security. I wish I could feel more confident that we are getting closer to having an answer to this type of question.
