It's funny how many of the problems and issues in computing and computer security have been thought of before; you just have to go looking. In the area of computer security, someone once said to me:
All the good work on computer security was done in the 1960s; everything since then has just been relearning old lessons.
So, as I was saying, I started doing some research on the topic, and besides the more recent Schneier pieces about it I've found some much more formal treatments of the issue. And by "research" I really mean a Google search for "software liability."
Back to the story. I found a couple of interesting links about it:
Makes me remember that I ought to register for the Workshop on the Economics of Information Security coming up in June. In the end it comes down to an economics argument, and possibly some finely nuanced questions. One little snippet I'll quote from the badsoftware folks relates to something my friend Adam sent me on this subject.
I think this is one of those problems that is inherent in general purpose tooling… screwdrivers, hammers, and the like. Maybe the PC and the general purpose SW on it are like a screwdriver/hammer/pliers kind of kit. And it’s difficult to hold the mfr, the hw store, or anyone else liable for general mayhem you perform with a tool. Only really specific things, like the head of the hammer flies off in the most traditional use, etc. On that analogy you could perhaps establish “traditional use” standards, like a std C lib has been around long enough that it has to protect against certain kinds of buffer overruns. It’s so classic it’s like putting a nail in a wall. But that wouldn’t help you with anything new… ?
The interesting reply comes from the badsoftware folks...
I won't explore the nuances of the definitional discussions here. Instead, here's a simplification that makes the legal problem clear. Suppose we define a defect as failure to meet the specification. What happens when the program does something obviously bad (crashes your hard disk) that was never covered in the spec? Surely, the law shouldn't classify this as non-defective. On the other hand, suppose we define a defect as any aspect of the program that makes it unfit for use. Unfit for who? What use? When? And what is it about the program that makes it unfit? If a customer specified an impossibly complex user interface, and the seller built a program that matches that spec, is it the seller's fault if the program is too hard to use? Under one definition, the law will sometimes fail to compensate buyers of products that are genuinely, seriously defective. Under the other definition, the law will sometimes force sellers to pay buyers even when the product is not defective at all.
This is a classic problem in classification systems. A decision rule that is less complex than the situation being classified will make mistakes. Sometimes buyers will lose when they should win. Sometimes sellers will lose. Both sides will have great stories of unfairness to print in the newspapers.
Second problem with the fault-based approach: We don't know how to define "competence" when we're talking about software development or software testing services. I'll come back to this later, in the discussion of professional liability.
Third problem: I don't know how to make a software product that has zero defects. Despite results that show we can dramatically reduce the number of coding errors (Ferguson, Humphrey, Khajenoori, Macke, & Matuya, 1997; Humphrey, 1997), I don't think anyone else knows how to make zero-defect software either. If we create too much pressure on software developers to make perfect products, they'll all go bankrupt and the industry will go away.
In sum, finding fault has appeal, but it has its limits as a basis for liability.
I hope I haven't quoted Cem Kaner too much here; it seems like this is reasonably fair use...
A few things to think about anyway. More when I've done a little more reading.