Wednesday, April 25, 2007

Preventing Software Security Liability Through Development Methodology

After reading the Theories paper by the badsoftware.com folks... I wanted to focus on the "Technological Risk Management" piece of the puzzle. Let's assume for a minute that the "Fault" model of software liability won't work and then evaluate what constitutes effective "Technological Risk Management."

If we take the risk management approach we still need to determine what constitutes effective risk management in the software development lifecycle. We need some metrics to judge which SDLC models work for improving security, and which don't.

Reading through Howard and Lipner's SDL book, I see they make the categorical claim that though the SDL isn't necessarily the only way to improve software security, it is proven to work.

It gets me thinking about how we can create effective ways of measuring software development processes by their security-oriented results. If we can, then we can start to baseline SDLC methodologies according to their security-risk-reduction potential, class them into good/bad/ugly, and start to make decisions about what software we buy based on software development methodology alone.
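As a strawman for what such a baseline might look like, here's a minimal sketch in Python. The methodology labels and escape-rate numbers are entirely invented; the hard part in practice is gathering comparable data across organizations, not the arithmetic.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-project data: (methodology used, security defects that
# escaped to production per KLOC). All names and numbers are invented.
projects = [
    ("SDL",         0.08),
    ("SDL",         0.11),
    ("Touchpoints", 0.09),
    ("ad-hoc",      0.40),
    ("ad-hoc",      0.55),
]

by_method = defaultdict(list)
for method, rate in projects:
    by_method[method].append(rate)

# Rank methodologies by mean escape rate: the good, the bad, and the ugly.
for method, rates in sorted(by_method.items(), key=lambda kv: mean(kv[1])):
    print(f"{method}: {mean(rates):.2f} escaped defects/KLOC across {len(rates)} projects")
```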

That said, in all processes there is a lot of wiggle room. It is possible to write horrible software at CMM level 5 because the specifications aren't useful or the underlying idea is flawed. CMM-5 doesn't guarantee good software; it just guarantees we understand the process that created it.

There are several security-focused SDLC models out there; I think I need to dig into each of them to fully appreciate the differences before I can make a valid comparison.
Gary McGraw of Cigital has a nice blog entry analyzing how their process differs from Microsoft's. I think one of these days I'm going to have to meet with the Cigital folks in person to get their take on appropriate approaches to a proper security development model for large online applications. In his piece, Gary seems not to believe that Microsoft's SDL is necessarily well-suited to the website environment.

I'm inclined to believe that all three of these approaches will yield good results if applied with some discipline and flexibility regardless of the underlying application. The problem is I can't try all three throughout my organization - so again I'm stuck making a non-empirical choice.

Software Engineering Disasters

I was reading Richard Bejtlich's blog piece on engineering disasters (http://taosecurity.blogspot.com/2005/09/engineering-disaster-lessons-for.html) the other day and I started thinking about software engineering disasters. And by disasters I don't mean commercial flops, but rather things that were highly unreliable.

Risks Digest is chock-full of stories of software failures but I'm a little more interested in regular commercial software that is/was a complete failure. Things that simply never worked as desired, crashed more often than you could count, corrupted your data more often than you thought was possible, etc.

With that in mind my two top mentions for this category are:
Windows ME, I think, needs little introduction. The Wikipedia article does a good job of explaining why PC World called it the 4th-worst tech product of all time.

Relatively high on my list, and on a friend of mine's, is Legato Networker. Networker is a system backup program that once rivaled Veritas backup software in the enterprise but has since shrunk. The thing I remember most about Networker was its main catalog daemon, which seemed to crash pretty much every chance it got, and in doing so would completely corrupt the backup catalog, forcing you to restore a previous version from tape. It got so bad we started calling the daemon the "Catalog Corruption Daemon," since that seemed to be its sole purpose in life.

I'd be interested to see what other truly abysmal pieces of software we can come up with that simply never worked as promised.




Sunday, April 22, 2007

Don't Let PCI turn into FISMA

When I attended the San Jose OWASP meeting a week ago, Bernie Weidel gave a briefing on PCI compliance as it relates to Web Application Security.

During his presentation he talked about how to get involved with setting/influencing the PCI standards themselves. I made a remark that even if we all got together and weakened the standard as part of the PCI governance process, the card providers (Visa, MasterCard, etc.) would simply revert to their own standards. Bernie looked shocked that a security person would recommend weakening a standard. To a large extent I stand by my remark, and perhaps it can best be explained in relation to FISMA.

Much has been written on the flaws in the FISMA approach, which focuses too much on paperwork-level compliance and not enough on the effectiveness of controls, or on leeway to implement appropriate controls for a given environment. Richard Bejtlich even has an outline of how things could be improved.

In light of complaints about FISMA I think we can learn what to do and not to do with respect to PCI. The folks over at Ambersail even asked "What would you change?"

For me, the biggest things I'd change would be related to flexibility in implementing a security program, and much more explicit linking of the PCI standard to the auditing guidelines. Nothing is more frustrating than trying to implement a proper security program and having to constantly go to one's auditor to explain a new set of controls being explored, have them turn around and get clarification from Visa, and finally hear back about whether Visa approves.
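To make the "explicit linking" point concrete, here's a sketch of the standard-to-audit mapping I'd like to see published as data rather than scattered across documents. The requirement numbers loosely follow PCI DSS section numbering, but every procedure and compensating control below is an invented illustration, not the actual standard.

```python
# Hypothetical linkage of requirement -> audit procedure -> accepted
# compensating controls, so auditors and implementers read the same data.
pci_mapping = {
    "3.4": {  # render stored cardholder data unreadable
        "audit_procedure": "Sample stored cardholder data; verify it is "
                           "encrypted, truncated, or hashed.",
        "accepted_compensating_controls": [
            "Strong network segmentation plus strict access control, "
            "pre-approved in the standard itself",
        ],
    },
    "6.6": {  # protect public-facing web applications
        "audit_procedure": "Verify an annual application code review or a "
                           "web application firewall in front of the app.",
        "accepted_compensating_controls": [],
    },
}

def audit_checklist(requirement: str) -> str:
    """Produce the auditor's checklist entry for one requirement."""
    entry = pci_mapping[requirement]
    lines = [f"PCI DSS {requirement}: {entry['audit_procedure']}"]
    for control in entry["accepted_compensating_controls"]:
        lines.append(f"  acceptable compensating control: {control}")
    return "\n".join(lines)

print(audit_checklist("3.4"))
```

With something like this in hand, a new control only needs to be checked against the published mapping, instead of a round trip through the auditor to Visa and back.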

Just like in the FISMA case, where folks spend a lot more time documenting than they do on actual security processes, PCI has the potential to fail this way.

I work for a rather large financial services firm. It is in our best interests financially to exceed PCI security requirements in almost all cases. If I do this and create my own documentation, controls, etc. around achieving a level of security I consider appropriate, each new regulation and standard that comes out is simply more overhead for me. It doesn't add to my security; it just forces me to fill out more audit documentation and spend more time and money on auditors, without adding anything to my bottom line from a security perspective.

Unified/universal standards are often the solution to this problem, so that I can pass one audit, and provide those details in the same format to all of my partners to demonstrate compliance with their security requirements. What I don't need are multiple overlapping standards that cost me extra money without improving my security.

You can argue that most merchants and processors aren't going to comply without a stronger standard with lots of mandatory audits and control points. And you may be right. But from the seat I'm in, more mandatory audits simply cost me money that I could better spend on improving my security, not on auditors and paperwork.

More on how companies use weaker standards and federal standards to weaken state-by-state approaches in a later post.

Monday, April 16, 2007

More on Software Liability

So, I finally got around to doing some more research on the topic and there is quite the treasure-trove of information out there.

It's funny how many problems and issues in computing and computer security have been thought of before; you just have to go looking. On the subject of computer security, someone once said to me:
All the good work on computer security was done in the 1960's, everything since then has just been relearning old lessons.


So, as I was saying, I started doing some research on the topic, and besides the more recent Schneier pieces about it I've found some much more formal treatments of the issue. And by "research" I really mean: a Google search for "software liability."

Back to the story. I found a couple of interesting links about it; there are others, but those should be a good start for reading, since the badsoftware.com site alone has a ton of data.

Makes me remember that I ought to register for the Workshop on the Economics of Information Security coming up in June. In the end it comes down to an economics argument, and possibly some finely nuanced questions. One little snippet I'll quote from the badsoftware.com folks relates to something my friend Adam sent me on this subject.

I think this is one of those problems that is inherent in general purpose tooling… screwdrivers, hammers, and the like. Maybe the PC and the general purpose SW on it are like a screwdriver/hammer/pliers kind of kit. And it’s difficult to hold the mfr, the hw store, or anyone else liable for general mayhem you perform with a tool. Only really specific things, like the head of the hammer flies off in the most traditional use, etc. On that analogy you could perhaps establish “traditional use” standards, like a std C lib has been around long enough that is has to protect against certain kinds of buffer overruns. It’s so classic it’s like putting a nail in a wall. But that wouldn’t help you with anything new… ?


The interesting reply comes from the badsoftware folks...

I won't explore the nuances of the definitional discussions here. Instead, here's a simplification that makes the legal problem clear. Suppose we define a defect as failure to meet the specification. What happens when the program does something obviously bad (crashes your hard disk) that was never covered in the spec? Surely, the law shouldn't classify this as non-defective. On the other hand, suppose we define a defect as any aspect of the program that makes it unfit for use. Unfit for who? What use? When? And what is it about the program that makes it unfit? If a customer specified an impossibly complex user interface, and the seller built a program that matches that spec, is it the seller's fault if the program is too hard to use? Under one definition, the law will sometimes fail to compensate buyers of products that are genuinely, seriously defective. Under the other definition, the law will sometimes force sellers to pay buyers even when the product is not defective at all.

This is a classic problem in classification systems. A decision rule that is less complex than the situation being classified will make mistakes. Sometimes buyers will lose when they should win. Sometimes sellers will lose. Both sides will have great stories of unfairness to print in the newspapers.

Second problem with the fault-based approach: We don't know how to define "competence" when we're talking about software development or software testing services. I'll come back to this later, in the discussion of professional liability.

Third problem: I don't know how to make a software product that has zero defects. Despite results that show we can dramatically reduce the number of coding errors (Ferguson, Humphrey, Khajenoori, Macke, & Matuya, 1997; Humphrey, 1997), I don't think anyone else knows how to make zero-defect software either. If we create too much pressure on software developers to make perfect products, they'll all go bankrupt and the industry will go away.

In sum, finding fault has appeal, but it has its limits as a basis for liability.


I hope I haven't quoted Cem Kaner too much here; it seems like this is reasonably fair use...

A few things to think about anyway. More when I've done a little more reading.

Tuesday, April 10, 2007

It can't be done

I feel that I'd be remiss in my duties if I didn't respond to my previous post, "Preventing HTTP response splitting with request/response identifiers?", with a clarification: it can't be done.

It reminds me of a scene from the movie Awakenings...

Dr. Malcolm Sayer: I was to extract one decagram of myelin from four tons of earthworms.
Hospital Director: Really?
Dr. Malcolm Sayer: Yes. I was on that project for five years. I was the only one who believed in it, everyone else said it couldn't be done.
Dr. Kaufman: It can't.
Dr. Malcolm Sayer: I know that now, I proved it.

So, I proposed my little scheme for preventing HTTP Response Splitting, and Amit Klein was nice enough to point out all of the flaws in my argument and scheme. I don't feel like a beaten man though. In all fairness, the HTTP protocol and HTML lack a whole bunch of security features, which makes certain attacks all but inevitable - or at least not preventable through architectural means...
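For anyone who hasn't seen the attack up close, here's a minimal sketch (in Python, with invented function names) of the classic injection vector, and the filtering-level defense that does work even though a clean protocol-level fix doesn't exist:

```python
# The vulnerable pattern: attacker-controlled input flows into a response
# header unfiltered. Both function names here are made up for illustration.

def build_redirect_unsafe(target: str) -> bytes:
    # A 'target' like "/home\r\nSet-Cookie: session=evil" splits the
    # response in two, letting the attacker forge headers and bodies.
    return b"HTTP/1.1 302 Found\r\nLocation: " + target.encode("latin-1") + b"\r\n\r\n"

def build_redirect_safe(target: str) -> bytes:
    # The practical defense: refuse CR/LF before the value hits the wire.
    if "\r" in target or "\n" in target:
        raise ValueError("CR/LF not allowed in header values")
    return b"HTTP/1.1 302 Found\r\nLocation: " + target.encode("latin-1") + b"\r\n\r\n"

if __name__ == "__main__":
    evil = "/home\r\nSet-Cookie: session=attacker-chosen"
    print(build_redirect_unsafe(evil))  # two headers where one was intended
    try:
        build_redirect_safe(evil)
    except ValueError as e:
        print("blocked:", e)
```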

Look for more crackpot security schemes here in the near future.

Saturday, April 07, 2007

Security metrics and developer training/certification

I was reading up on the new SANS Software Security Institute, and it's an interesting concept. One debate that has been raging for a long time, though, is the question of what types of certifications we want for certain things...

  • Multiple Choice
  • Multiple Choice + Freeform answer
  • Both of the above plus a hands-on component
In the case of secure programming I'm fairly committed to a hands-on component in any sort of certification. Partially it's a question of basic knowledge versus application. I'll be interested to see whether we have a way of benchmarking the code certified developers write against what everyone else writes.

It occurs to me that the SANS certification might be more useful for those reviewing code for security flaws than for those writing code. We're not asking the test-taker to produce anything, but we are asking them to review code in the test and find/explain flaws. When it comes to hiring time, I'm more likely to consider this a meaningful cert for those who want to do reviews, pen-tests, etc. than for those I want to write secure code. Or, more precisely, it's more of a standalone credential for the tester/reviewer than it is for the developer.
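To make the review-oriented format concrete, here's an invented exam-style question of the sort I have in mind: a short function with a classic flaw the candidate must find and explain, plus the expected remediation. Everything here is illustrative, not from any actual SANS exam.

```python
import sqlite3

def find_user(db: sqlite3.Connection, username: str):
    # FLAW the candidate should spot: user input is concatenated straight
    # into the SQL text, so username = "x' OR '1'='1" returns every row.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return db.execute(query).fetchall()

def find_user_fixed(db: sqlite3.Connection, username: str):
    # Expected remediation: a bound parameter instead of string building.
    return db.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    db.execute("INSERT INTO users VALUES (1, 'alice', 'a@example.com')")
    print(find_user(db, "x' OR '1'='1"))        # injection: returns alice
    print(find_user_fixed(db, "x' OR '1'='1"))  # safe: returns nothing
```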

On to a slightly related topic: metrics.

One of my current problems is determining how I'm going to measure the success of my application security program. What sorts of metrics do we care about? A few things that spring to mind are:
  • Defect rates (per lines of code, module, etc)
  • Individual/group error/defect rate
  • Success of process at catching defects in either arch/design/implementation early
  • Code/site coverage using standard toolkits/frameworks for things like input validation, output filtering, etc.
  • Remediation time for defects
Management speak/philosophy tells us that we get what we measure. What we hold people accountable for, and how we create incentives, determines what people produce. With that in mind, how do we structure our metrics to get the outcome we want?

I think, unfortunately, it's going to be an experiment: put some metrics in place, see how they influence people's behavior, and then modify them on an ongoing basis to get the results I want. One critical aspect of this approach, of course, is that you need easy-to-gather metrics that don't require a lot of human intervention to generate; otherwise you can't iterate on them much.
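As a sketch of the "easy to gather" requirement, here's roughly what I have in mind, assuming a bug-tracker export with invented field names; any real tracker's schema will differ. Once every defect record carries open/close dates and a phase-found tag, the metrics in the list above fall out of a script instead of a person.

```python
from datetime import date
from statistics import mean

# Hypothetical bug-tracker export: open/close dates plus where each
# defect was caught. Field names and data are invented for illustration.
defects = [
    {"module": "auth",    "opened": date(2007, 1, 3), "closed": date(2007, 1, 10), "found_in": "design"},
    {"module": "auth",    "opened": date(2007, 2, 1), "closed": date(2007, 3, 15), "found_in": "production"},
    {"module": "billing", "opened": date(2007, 2, 7), "closed": date(2007, 2, 9),  "found_in": "implementation"},
]

kloc = {"auth": 12.5, "billing": 40.0}  # assumed module sizes in KLOC

def defect_density(module: str) -> float:
    """Defects per thousand lines of code for one module."""
    return sum(1 for d in defects if d["module"] == module) / kloc[module]

def mean_remediation_days() -> float:
    """Average open-to-close time across all defects."""
    return mean((d["closed"] - d["opened"]).days for d in defects)

def escape_rate() -> float:
    """Fraction of defects found only in production, i.e. missed by the process."""
    return sum(1 for d in defects if d["found_in"] == "production") / len(defects)

for m in kloc:
    print(f"{m}: {defect_density(m):.2f} defects/KLOC")
print(f"mean remediation: {mean_remediation_days():.1f} days")
print(f"escape rate: {escape_rate():.0%}")
```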

More on this as I think of it. Reading through Microsoft's SDL book right now and hopefully I'll get a few ideas there.

Tuesday, April 03, 2007

Identity Theft Protection?

A friend sent me a link today to an Identity Theft Protection service called Lifelock: http://www.lifelock.com.

I'm not sure what to make of the service, and I'm not sure I understand what regulatory regime it operates under.

They claim to aggregate multiple freely available services for protecting yourself from identity theft. It's an interesting idea, though it's unfortunate that the state of affairs in the US for data and personal privacy is such that I have to pay money to protect myself from identity theft, rather than all of the onus being on the people already holding my data not to mess things up.

Since a friend sent this to me, I'll quote his mail and then give what I think are reasonable answers...

This is interesting … do you think their model is credible?

The obvious vulnerabilities to me seemed like

(1) lifelock going out of business

(2) lifelock employees or affiliates compromising you

(3) a criminal learns you are a member and either

(a) also gets your phone physically or (b) gets your number transferred to a new phone account somewhere else, which can be done remotely with the phone acct number and ssn.

Then opens an account and, acting as you via your phone, allows the action.

(4) criminal compromises whatever token you use to identify yourself to lifelock, then calls up (or log in), pretends to be you, changes the contact phone number to a new number, and he’s off to the races

1. Entirely possible. It isn't clear, based on past internet company bankruptcies, exactly how the disposition of their private data would be handled. Since they aren't necessarily regulated as a bank, money transfer agent, phone company, etc., it isn't clear to me exactly what regulations would apply to them, who could buy them, and so on.

2. Entirely possible, and hard to evaluate without knowing a lot about how they run their internal operations. I met some folks who ran security at MBNA and they were pretty over the top: dual control for everything, no one with root on Unix machines, etc. Whether Lifelock does the same would certainly be an interesting question.

3. Stealing phones, etc. is a risk for most people who rely on this sort of thing. As I've written before on this blog regarding 2FA for certain financial service providers, it's a hard problem to solve. Banks are required to eat the losses if they get defrauded this way; I'm not sure what would apply in the Lifelock case. It is a current problem for most folks, though, that anyone with access to your physical mail, for example, can intercept lots of out-of-band communications destined for you, which leads to impersonation.

4. Impersonation via cracking of authentication tokens isn't unique to Lifelock. What isn't clear is what your remedies will be when your information gets stolen. Assuming their service actually works, even if your information is stolen you'll get alerted to misuse of your identity, at least for certain cases.

What most disturbs me about the service, though, is that I need to purchase it at all. In other countries (notably in Europe) there are already services provided by banks and credit firms that notify you every time your report is pulled, every time someone wants to get credit in your name, etc.

In the US we don't have equivalent protections, though they have been discussed at least briefly as part of the whole identity theft and breach notification regulations going round in most states and at the federal level.

Lifelock reminds me of the service the phone companies provide you to block telephone solicitation. They will put a block on your line so someone has to listen to a special recording, put in a code, etc. before they get connected to you. What is amazing is that the telephone company is the one selling your information in the first place, and now you're paying them to stop their customers from calling you. Pretty nice protection racket if you can get it.

At least Lifelock isn't a division of Experian.