Wednesday, October 31, 2007

We need InfoSec incident data like NASA got from pilots

You may have seen the recent coverage of a survey NASA did of airline pilots about the frequency of close calls in airline safety. There has been a bit of a scuffle over whether to release the data publicly, because of fears it might erode consumer confidence in airline safety...

Today news reports are out that NASA will be publicly releasing the data. I don't have details on the study yet, but it will be interesting to compare the data from this survey, which hopefully had a sound scientific basis, to InfoSec surveys such as the CSI/FBI survey, which most of us have come to hate because of its poor methodology.

Jeremiah posted the results of his latest web application security survey, and the results aren't great... well, the state of web application security isn't great, anyway. It might be nice to put together a broader survey to see how many incidents we're really having out there.

Tuesday, October 23, 2007

Software Security Metrics and Commentary - Part 2

Part 1 here

In Part 1 of this entry I talked about the first five metrics from the paper "A Metrics Framework to Drive Application Security Improvement".

In Part 2 I'll cover the remaining five metrics and discuss a few thoughts on translating survivability/Quality-of-Protection into upstream SDL metrics.

First, on to the other five metrics from the paper:
  • Injection Flaws
    • Again, I think the metric posited in the paper is tilted too far towards incident discovery rather than prevention. Just like the OutputValidation metric I added for XSS, output validation is really the key to prevention here. Most static analysis tools can detect tainted input and ship with a set of untrusted-input functions (things that read from sockets, stdin, etc.). It should be relatively straightforward to model our own application-specific output functions and detect where we're handing unchecked/unfiltered input to an output routine, particularly one that crosses a trust boundary. If we can model these, we can at least make sure we have good sanitization coverage for each output type (there's a rough sketch of this after the list). We'll want that output filtering anyway, so we might as well combine this with the metrics from our XSS example.
  • Improper Error Handling
    • I think the metric posed in the paper - counting unchecked returns - is a pretty good idea. Unfortunately it isn't going to catch errors at the web-server layer, and won't necessarily detect errors in things like app servers, DB layers, etc. We can test for these, but the best metrics might be those tied to secure configuration guidance such as the CIS benchmarks for the individual web and/or app servers. The CIS benchmark, for example, requires a compliant configuration to handle standard web errors (4xx and 5xx) through rewrites and/or custom handlers. There are cases (SOAP comes to mind) where we need to throw a 5xx error back to a client, but that's the exception rather than the norm. Configuring application and web servers to minimize this sort of data disclosure is certainly a good thing, and at this layer we can check compliance almost as a binary config - you pass the CIS guidance or you don't (the second sketch after the list shows one way to probe for this).
  • Insecure Storage
    • I don't think the percentage of encrypted hard drives is a meaningful metric in this context. If we look at the typical web attacks that fall into this category, we're looking at exploits that leak things like passwords, CC data, etc. that are stored improperly on the web server. Some of this is going to be tied to the implementation in the code, so our best bet is probably a detailed audit of each piece of information in this criticality range to confirm it is being handled appropriately. I struggle to find a concrete metric that measures this, however. PercentCriticalDataCovered by a proper encryption/hashing technique? Still not a very convincing metric, unfortunately (the third sketch after the list shows the simple arithmetic behind it).
  • Application Denial of Service
    • Two metrics spring to mind here:
      • Memory/Resource Allocations Before Authentication
      • Memory Leaks
    • Both of these are far more likely to lead to application denial of service than any other errors I can think of, and both should be minimized. Tracking them and keeping the counts as low as possible is probably a good bet. That doesn't mean we won't have a DoS issue, but these are at least two places to look (the last sketch after the list counts the first of them).
  • Insecure Configuration Management
    • This item probably comes back to the same metrics I posited for Improper Error Handling: the CIS benchmarks for the OS, web server, and app server are our first-pass candidates for measuring it.
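
To make the OutputValidation idea a little more concrete, here's a minimal sketch of what the coverage calculation could look like. It assumes a Python codebase, and the sink and sanitizer names (write_response, render_fragment, html_escape, sql_quote) are made-up stand-ins for whatever your application actually uses; a real static analysis tool would do proper taint tracking rather than this surface-level check.

```python
# Minimal sketch: walk a source file, find calls to our (hypothetical)
# application-specific output routines, and count how many wrap their
# arguments in an approved sanitizer. All function names are assumptions.
import ast

OUTPUT_FUNCS = {"write_response", "render_fragment"}  # app-specific sinks
SANITIZERS = {"html_escape", "sql_quote"}             # approved filters

def output_validation_coverage(source: str) -> float:
    total = sanitized = 0
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
                and node.func.id in OUTPUT_FUNCS):
            total += 1
            # Covered only if every argument is itself a call to an
            # approved sanitizer -- a crude stand-in for taint tracking.
            if node.args and all(
                isinstance(a, ast.Call) and isinstance(a.func, ast.Name)
                and a.func.id in SANITIZERS for a in node.args
            ):
                sanitized += 1
    return sanitized / total if total else 1.0

print(output_validation_coverage(
    "write_response(html_escape(name))\nwrite_response(name)"
))  # -> 0.5
```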
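
For the error-handling item, the binary pass/fail check at the web-server layer could be as simple as probing for stock error pages. This is a rough sketch under obvious assumptions: the staging URL is made up, and the server-signature strings are just a few common examples, not the actual CIS test procedure.

```python
# Rough probe: request a page that can't exist and check that the error
# response doesn't look like a default server page leaking version info.
from urllib.request import urlopen
from urllib.error import HTTPError

DEFAULT_PAGE_SIGNS = (b"Apache/", b"nginx/", b"Microsoft-IIS", b"Tomcat")

def has_custom_error_pages(base_url: str) -> bool:
    try:
        body = urlopen(base_url + "/definitely-not-a-real-page").read()
    except HTTPError as err:  # 4xx/5xx responses still carry a body
        body = err.read()
    return not any(sign in body for sign in DEFAULT_PAGE_SIGNS)

# Binary compliance, as described above: you pass the guidance or you don't.
print("PASS" if has_custom_error_pages("http://staging.example.com") else "FAIL")
```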
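
For the storage metric, here's roughly how a PercentCriticalDataCovered number might fall out of a data inventory. The inventory entries and the approved-technique list below are purely illustrative; building and maintaining that inventory is the hard, manual audit work, and the arithmetic is the easy part.

```python
# Sketch: given an audited inventory of critical data elements and how
# each is stored, report the fraction protected by an approved technique.
APPROVED = {"aes-256", "bcrypt", "sha-256+salt"}  # illustrative list

inventory = [
    {"element": "password",  "protection": "bcrypt"},
    {"element": "cc_number", "protection": "aes-256"},
    {"element": "ssn",       "protection": "plaintext"},  # audit finding
]

covered = sum(1 for item in inventory if item["protection"] in APPROVED)
print(f"PercentCriticalDataCovered: {100 * covered / len(inventory):.0f}%")
# -> PercentCriticalDataCovered: 67%
```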
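
And for the denial-of-service metrics, a crude static count of allocations before authentication might look like the sketch below. The allocator and auth function names are assumptions, and a real tool would follow control flow rather than source order, but it shows the shape of the number we'd want to track release over release.

```python
# Crude sketch: count calls to allocation-like functions that appear in a
# handler before the first authenticate() call. Names are hypothetical.
import ast

ALLOCATORS = {"alloc_buffer", "open_session", "spawn_worker"}
AUTH_FUNC = "authenticate"

def allocations_before_auth(handler_src: str) -> int:
    count, authed = 0, False
    for stmt in ast.parse(handler_src).body:  # statements in source order
        for node in ast.walk(stmt):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                if node.func.id == AUTH_FUNC:
                    authed = True
                elif node.func.id in ALLOCATORS and not authed:
                    count += 1
    return count

print(allocations_before_auth(
    "buf = alloc_buffer(1024)\nauthenticate(user)\nsess = open_session()"
))  # -> 1
```
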
On the question of survivability, I was struck by a talk Steve Bellovin gave on this topic at the first Metricon last year - "On the Brittleness of Software and the Infeasibility of Security Metrics." He published both a paper and the presentation slides.

Steve makes what I believe are two major points in this paper:
  • Software is brittle; it fails catastrophically.
  • Unlike other engineering disciplines, we don't know how to achieve certainty about the strength of a piece of software.
I won't disagree with either of these points, but to an extent you can say this about all new technologies. We've had catastrophic failures in physical engineering as well: old materials sometimes fail in new ways, new materials fail in unpredictable ways, and we still rely on sampling and testing to analyze a batch of materials.

The Quality of Protection workshop at the CCS conference is probably the best place to look for research in this area. Previous papers from the workshop can be found here. This year's conference and workshop start next week; if you're in the DC area and interested in software security metrics, it looks like it's going to be a good event. The accepted-papers list contains a number of papers that I think might shed some light on my speculation above.

I plan to put together a few more thoughts on the brittle failure modes of software in a follow-up to this; I haven't had time to pull all of my thoughts together yet.

Tuesday, October 09, 2007

SQL Injection Humor?

If you're an application security geek at all, then you must read today's xkcd.

I've always said there aren't enough SQL Injection jokes...

Monday, October 08, 2007

Apologies and Data Breaches

I just listened to an NPR piece - "Practice of Hospital Apologies Is Gaining Ground."

There has been quite a bit of research in the last few years showing that the differentiating factor between a doctor who gets sued for malpractice and one who does not is how much time they spend with their patients and how humble they are.

The NPR piece details how at least one hospital now has a practice of apologizing to patients who have adverse outcomes, or where there was a missed diagnosis. It turns out that many patients sue not because of the mistake, but because of how they are treated. Being upfront and honest with the patient about the mistake, and apologizing, seems to have a positive impact.

Makes me wonder if there is a lesson in here for companies that have data breaches. Maybe getting out in front of the issue like TD Ameritrade (not really out in front, given how long the breach was going on, but out ahead of the major press coverage) will help them in the end with respect to how successful the class action suits are, etc.

I guess we'll just have to see.