Thursday, May 31, 2007

Analyzing Software Failures

A few months ago I wrote a small piece called "Most Web Security is Like Finding New Ways to Spend the Loot."

I was thinking again about this topic and how best to contribute to the security world, and how much good finding new attacks does versus finding new ways of defending against things.

I'm reminded of accident investigations in the real world. Airplanes rarely crash. When they do, it's something noteworthy. We have an organization, the NTSB, set up to investigate serious transportation accidents, and it investigates every plane crash. Why investigate all airplane crashes and not all car crashes? Because airplane crashes are rare enough that we assume each one is an anomaly with something interesting to teach us. The same cannot be said about car crashes. They are simply too frequent and too "routine" to bother investigating.

When we think about civil engineering and failure analysis we don't generally spend a lot of time on every roof that caves in at a cheap poorly constructed strip mall. We spend a lot of time investigating why bridges fail, why skywalks fail, etc. These are things that were presumably highly engineered to tight tolerances, where a lot of effort was spent or should have been spent to ensure safety and where nevertheless something went wrong. We start with the premise that by examining this anomaly we can learn something that will teach us how to build or design better the next time around.

In software security it's pretty amazing how much time we spend doing the opposite: analyzing applications that were poorly designed and never meant to resist attacks, prevent disasters, etc. We seem to never tire of finding a new security flaw in MySpace, Yahoo, Google Mail, etc.

What do we learn from finding these vulnerabilities? That we don't, in general, design very good software? That we have fundamental flaws in our design, architecture, and tools that cause these failures? We've known that for years. No new analysis is going to tell me it's hard to build a high-assurance application in PHP. We already know that.

The types of vulnerability and attack research that interest me are those that actually show a brand new type of attack against a piece of software that was well written, had good threat analysis done, and whose developers, architects, and designers really were focused on security yet somehow still left themselves vulnerable to an attack.

It isn't necessarily that these are rare situations. Microsoft Vista has had a number of security vulnerabilities since its launch despite Microsoft's best efforts to eradicate them. Analysis like that provided by Michael Howard about the ANI bug in Vista is what we need more of. We need more companies with mature secure software development practices to tell us why bugs occur despite their best efforts. We need more people to be as open as Microsoft is being about how even the best-designed processes fail, and we need to learn from that so we get it right the next time around.

In the transportation world this is the role that the NTSB plays. The NTSB shows us how despite our best efforts we still have accidents, and they do it without partisanship or prejudice. The focus of the investigation is root cause analysis and how we can get it right the next time around.

I read this same idea on someone's blog recently as applied to security breaches. They proposed an NTSB-style arrangement for investigating security breaches so that we get this same sort of learning out of those situations. If anyone can point me to the piece I'll properly attribute it here.

Along these same lines we ought to have an NTSB equivalent for software security so that we can learn from the mistakes of the past. Bryan Cantrill at Sun and I shared a small exchange on his blog about this topic related to pathological systems and I referred to Henry Petroski and his work in failure analysis in civil engineering. Peter Neumann's Risks Digest is the closest we come to a general forum for this sort of thing in the software world and I'm always amazed (and depressed) by how few software engineers have ever heard of Risks, much less read it.

Why is it that so few companies are willing to be as public as Microsoft has been about their SDL, and how do we encourage more companies to participate in failure analysis so that we can learn collectively to develop better software?

Tuesday, May 29, 2007

About the Host?

Mr. Hoff had a thought-provoking piece yesterday, but I think he left out a few concepts that would further strengthen the point he was trying to make.

I think a piece missing from his analysis is one related to threats and the type of security you're trying to achieve.

The question of network-level security controls vs. host-based security controls can be posed in the context of a corporate network or the Internet in general, but it can also be posed in the case of someone like an ASP.

If we take the case of a large ASP of some sort - Google, Yahoo, etc. - we find that firewalls are already just about useless. Except for PCI requiring them, I doubt most people would even bother putting their webservers behind one. They'd probably prefer something lighter weight such as an ACL or whatnot. I'm allowing in only 2 ports (80 and 443) anyway, and if I just don't run anything else on the systems in question, I don't get a lot of benefit out of the firewall.
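As a rough illustration of the "nothing else listening" point, here's a minimal sketch (Python standard library only, hypothetical hostname) that checks which of a handful of ports actually accept connections; a real assessment would obviously use a proper scanner:

```python
import socket

# Hypothetical host; substitute one of your own webservers.
HOST = "www.example.test"
# Ports to probe: the two we expect open plus a few that should be closed.
PORTS = [22, 25, 80, 443, 3306]
EXPECTED_OPEN = {80, 443}

for port in PORTS:
    try:
        sock = socket.create_connection((HOST, port), timeout=2)
        sock.close()
        status = "open"
    except OSError:
        status = "closed/filtered"
    as_expected = (status == "open") == (port in EXPECTED_OPEN)
    print(f"{HOST}:{port} {status}" + ("" if as_expected else "  <-- unexpected"))
```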

When I start looking at my threats, though, I'm left with two primary threats:
  • My users
  • Their machines (and associated malware)

Network security controls don't help me much against either of these when I actually want the users to interact with my web application. And, for both my users and my sanity, we'd better hope they have good host security controls in place while they are accessing my site, their bank account, credit card accounts, etc. If they don't, network security controls aren't going to do a whole lot of good.

Sure, you might ask how they got infected with the malware in the first place, but I'm betting that a firewall or other network security device suitable for the end-user wouldn't have helped a lot in this situation either.

I'm not arguing that network security controls don't have a place, but the higher up the stack the attackers push, the less effective certain network security controls are going to be.

Monday, May 28, 2007

Vulnerability Management for Custom Software

Writing last night's post about security evolution got me thinking a bit about vulnerability management and how best to handle it for custom-written software.

In general vulnerability management, you can get a lot of traction using a network-based scanner along with local scanning using user credentials. This technique, which is not yet prevalent in most mainstream scanners, gives you decent coverage for standard software, configuration settings, etc. It can even give you a heads-up on a vulnerability in a version of a library you're using, especially when the version is easily reported via banners and/or found in standard library locations on a filesystem.
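To make the banner-grabbing part concrete, here's a minimal sketch (Python standard library, hypothetical host) that just pulls the Server header from an HTTP response; real scanners do far more fingerprinting, but the principle is the same:

```python
import http.client

# Hypothetical target; point this at a host you're authorized to scan.
HOST = "www.example.test"

conn = http.client.HTTPConnection(HOST, 80, timeout=5)
conn.request("HEAD", "/")
resp = conn.getresponse()

# The Server banner often discloses product and version, e.g. "Apache/2.2.4".
print("Server banner:", resp.getheader("Server", "<none advertised>"))
conn.close()
```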

Managing vulnerabilities gets substantially trickier, though, when you move to homegrown software. Keeping a proper inventory of every library, toolkit, configuration setting, algorithm, etc. you use, and then being able to watch for vulnerabilities in them, is quite tricky. Network vulnerability scanners like Qualys or nCircle will do a pretty good job of banner grabbing and detecting versions of certain toolkits, connections, etc. What they can't tell you, though, is that you're using a buggy version of libz or libxml inside one of your own applications.

How do you handle situations like these?

In general I resort to managing an inventory of tools I have in use at any given time and using vulnerability alerting services to tell me about new vulnerabilities in those toolkits.

I imagine there has to be a better way to do this. I'd really like to be able to list the toolkits I have in some more generic format that vulnerability alerting services and/or vulnerability scanners are capable of understanding, so they can tell me about vulnerabilities I may have. Rather than going through screen after screen of tools in my vulnerability alert service to configure my alerts, I'd like to be able to publish a list of software/tools in use (including versions) and then have them alert me when they know one of my components has a potential security flaw.

Sure, I'll still have to validate the vulnerability announcement. I'll still have to see whether my implementation, technique, etc. is vulnerable to the specific exploit, but at least I'd get a trimmed down list of things to worry about without going nuts.
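To sketch what I'm after (hypothetical advisory data and component names, not any real service's API): I publish a machine-readable inventory, and the matching step on the alerting side is then trivial.

```python
# Hypothetical machine-readable inventory for one homegrown application.
inventory = {
    "libxml2": "2.6.26",
    "zlib": "1.2.3",
    "openssl": "0.9.8d",
}

# Hypothetical advisories as an alerting service might publish them:
# (component, versions known to be affected, advisory id).
advisories = [
    ("zlib", {"1.2.1", "1.2.2", "1.2.3"}, "EXAMPLE-2007-0001"),
    ("openssl", {"0.9.7"}, "EXAMPLE-2007-0002"),
]

def alerts(inventory, advisories):
    """Yield (component, version, advisory id) for every potential match."""
    for component, affected, ref in advisories:
        version = inventory.get(component)
        if version in affected:
            yield component, version, ref

for component, version, ref in alerts(inventory, advisories):
    print(f"Review {ref}: {component} {version} may be affected")
```

Real matching would need version ranges and consistent component naming, which is exactly the part I'd want a service to standardize rather than every customer reinventing it.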

Anyone ever seen a service for doing this or considered creating a module for any of the standard vulnerability scanners? I think it would be a valuable service and would save me and others a lot of time.

Sunday, May 27, 2007

Security Evolved or Security Ignored?

I was reading a piece by Rudolph Araujo and it struck me how his analysis of the evolution of security is spot-on, and yet it ignores quite a number of factors that I think are important.

Rudolph's point is that security is reactive: we develop security practices and defensive techniques only in reaction to attacks and new threats. I'd argue that warfare isn't all that different. In fact, I'm not sure of too many areas with attackers and defenders where this isn't the case. At the same time, I do take issue with a few of his pronouncements that we're always reacting.

I started doing network and system security in 1994 or so. I was administering a network of SGI Indy systems on a college network and had to secure them against outsiders as well as insiders. Some of the techniques we used to secure the machines and to know our systems were in a good state:
  • Automated configuration management (homegrown tools)
  • File integrity checking (tripwire)
  • Restricted Shells
  • Known-good builds (done by hand)
  • Hardened defaults (services off, extensive logging enabled)
    • Wietse Venema was a god, is probably still a god. Logdaemon and tcpwrappers were some of the best tools ever for securing a network.
  • Network forensics via TAMU netlogger
  • Automated log monitoring tools (home grown)
  • Keeping up with patches
  • Remote port checking (strobe)
Looking at the list above, it's pretty amazing both how far we've progressed and how little, in 13+ years. The number of truly new and useful tools for managing and securing systems just hasn't gone up that much. We knew back in 1994 that we needed automated configuration management, file integrity monitoring, hardened builds, etc. We knew we wanted good logging, network forensics, etc.
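The core of the file integrity checking entry above hasn't really changed either; a minimal sketch (Python standard library, hypothetical watch list) looks like this, and tripwire in 1994 was conceptually doing the same thing with more rigor and at larger scale:

```python
import hashlib
import json
from pathlib import Path

# Hypothetical watch list; a real deployment would cover far more files.
WATCHED = [Path("/etc/passwd"), Path("/etc/ssh/sshd_config")]
BASELINE_FILE = Path("baseline.json")

def digest(path):
    return hashlib.sha256(path.read_bytes()).hexdigest()

def snapshot():
    """Record a known-good baseline of file hashes."""
    baseline = {str(p): digest(p) for p in WATCHED if p.exists()}
    BASELINE_FILE.write_text(json.dumps(baseline, indent=2))

def check():
    """Report any watched file that has changed or disappeared."""
    baseline = json.loads(BASELINE_FILE.read_text())
    for name, old_hash in baseline.items():
        path = Path(name)
        if not path.exists():
            print(f"MISSING  {name}")
        elif digest(path) != old_hash:
            print(f"CHANGED  {name}")

if __name__ == "__main__":
    check() if BASELINE_FILE.exists() else snapshot()
```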
  • How much different than tripwire in 1994 are file integrity monitors today?
  • How much better is system logging than it was then?
  • How much better are network forensics than they were then?
  • Has anything gotten any better?
Maybe one reason we don't evolve is that we keep trying to reinvent the wheel in the security product space rather than just tuning tools we already have. Pretty much every item I list above is a major component of a tool we use today, but now I can buy it from 10 different vendors with a lot of hype, configuration options, and pretty much the exact same functionality you'd have used 13+ years ago. Is that progress? I'm not so sure.

I think the problem actually being highlighted is that in computing, as in car design and fashion, what was old is new again, over and over.

Most of the components above get added together and get called NAC or autonomous systems.
We take network scanning and such, add in a bit of vulnerability data, and we get vulnerability scanners. Sort of useful, but maybe if I just turned off the ports I'm not using and didn't have such a complicated setup, the problem would fix itself.

For the most part we know what we need to do and we know how the attackers are going to attack; we've just spread ourselves so thin that we can't actually defend against them anymore.

We've seen the enemy, and it is us. The fundamental thing that needs to evolve isn't the technology, it's our use of it. As we grow our use of technology, pushing it further and further, we're coming to understand the human limits of running it.

Now that I'm helping manage security for a much larger organization 13+ years later, I'm not worried (all that much) about new and novel attacks. I'm worried about tracking and managing the assets I have and how they are configured, managed, monitored, etc. I'm worried about who is using what data when, and whether they are copying it, sending it, releasing it, etc.

Where we need the evolution is in systems that work better from the beginning. We need to make sure that the same old security problems don't keep coming up again and again and again.
  • I need operating systems that ship without all services turned on.
  • I need operating systems that let me easily set a security policy, and alert me to deviations.
  • I need operating systems that monitor their integrity, tell me about deviations, and let me automate alerts.
  • I need common logging across all of my devices so that I don't need a super complicated SEM to manage and interpret everything.
  • I need a flying car
Ok, I threw the last one in because it's always on everyone's wish list. What I'm getting at is that my wish list hasn't changed in more than a decade, and I'm still waiting.

So, while in the end I don't necessarily disagree with Rudolph on the reactivity point, maybe things are even worse than he thinks. We've spent the last 15 years making pretty much zero progress on anything.

Microsoft's "The Security Development Lifecycle" - Chapter Two

Chapter 2: Current Software Development Methods Fail to Produce Secure Software
Chapter 1 Here

Chapter 2 of the SDL book focuses on other development methodologies and how successful they are at addressing security concerns and actually delivering secure software. The premise is that existing software development methodologies don't produce secure code. The authors stop short of the controversial claim that these methodologies cannot produce secure code, arguing only that in practice they don't.

You could of course extend this claim to argue that pretty much nothing produces secure code, since we can't define secure code and are reasonably sure it doesn't exist anyway. Let's stick to their unstated premise though - current software development methodologies aren't focused enough on security to produce secure code. This seems like a more straightforward and fair reading of their intent.

Four claims are examined:
  • "Given enough eyeballs, all bugs are shallow"
  • Proprietary software development methods
  • Agile software development methods
  • Common Criteria (CC)
One slightly problematic issue with these 4 items is that they aren't all of the same type. The Common Criteria, while it is a method of specifying and testing for requirements, was never positioned as a methodology for developing secure software. It was positioned as a way to specify the security requirements for a system and evaluate whether they have been met.

That said, I can't find any fault with their analysis of the flaws of the "enough eyeballs" approach. We've seen again and again in regular products, open source or not, that the lifetime of a product and the number of people working on it don't necessarily reduce bugs except in the presence of a methodology and a specific requirement/goal to do so.

Other than their potentially flawed analysis of the Common Criteria (and I'm willing to admit I'm not an expert on CC intent/design), I can't say I took much away from this chapter. We know that current software development practices are flawed, and I'm even willing to believe the MS-SDL is better - I wouldn't be reading the book otherwise.

Tuesday, May 22, 2007

.bank TLD - Still not convinced

I read Mikko's response to the .bank TLD criticism and I think I still have to take issue with a few of the things being proposed. Rsnake already had a nice writeup as well; hopefully I won't duplicate too much of it.

Mikko says:

People are stupid and would not notice such a new address scheme.

The main point of such a new TLD would not be that users would suddenly get a clue and would learn to read the web addresses correctly (although for those who do read the URLs, this would obviously be an improvement). The main point is that it would allow the users' software to work better. Security software and browser toolbars would essentially have a "white list" to work with.



My main problem with this argument is that I'm not at all clear on what that software is going to do.
  • Not let you visit non-.bank sites?
  • Not let you enter your banking password on non-.bank sites?
  • Strip links in email that don't say .bank?
I'm not sure that having a new TLD gets us anywhere in actually stopping phishing, people getting fooled by this sort of thing, etc. It doesn't help with email security either, since you'd still have to sign email, you'd still need SPF, etc.

To Rsnake's point:

Now that you’ve read it, here are my thoughts. Yes, .bank will solve some heuristics problems. No, it won’t solve all of them. Banks hiring external marketing departments, regional divisions, loan offices, etc… etc… that all are owned by the parent will not be able to afford their own .bank TLD and will not be protected. Piggybacking off the parent URL is an equally bad idea for XSS phishing attacks. And if the banks allowed external organizations to piggyback how would that solve your problem of extended validation of the site?


I face this issue all of the time. I don't want to host third-parties on my core domains for multiple reasons including cookie security. I do want them to have an EV cert (silly, but policy) and yet I still want people to semi-trust them. None of this is solved by having a .bank domain.

I'm just not sure what type of attacks this new TLD really prevents. If someone can give me a type of attack that it prevents I'll start thinking more seriously about it.

Wednesday, May 16, 2007

Microsoft's "The Security Development Lifecycle" - Chapter One

I started reading Microsoft's "The Security Development Lifecycle" a few weeks ago but got distracted and didn't finish. I figured I'd start the book over from the beginning and write commentary on a chapter-by-chapter basis. Hopefully this is useful if only for my own record keeping.

**********************************
Chapter 1 - Enough is Enough: The Threats Have Changed


Chapter One is an overview of why security is important and why everyone, but especially developers, needs to be concerned about it. I already buy the premise, so the first chapter isn't exactly a hard sell. That said, there are a few sections I find interesting and/or controversial.

Overall I agree with the definitions and distinctions made between Security and Privacy as concepts. Drawing this distinction isn't all that useful for the general SDL practitioner, though. It's kind of a sad commentary on who the assumed audience is that the authors have to give this definition in the book at all. One disconcerting piece of this section, though, is their attempt to justify the SDL based on privacy but not security concerns. Justifying the SDL and software security in general is an extremely lengthy discussion and the subject of much debate about how much security to bake in, how many defects to fix, etc. Privacy is a big buzzword right now, especially in heavily regulated industries. It's a big stick for getting management attention, just like SOX was 2-3 years ago.

In their section on Reliability they mention an OpenBSD security/reliability item in BIND from back in 2004. The OpenBSD team categorized the issue as one of reliability at the time, and the authors appear to agree with this assessment. Little did they know this topic would explode again recently with another OpenBSD defect that the OpenBSD folks tried to categorize as a reliability issue while many people disagreed. I'm clearly in the "reliability issues can be security issues" camp, so I disagree with the authors' position on this topic. I also disagree with their assessment that no major vendors distinguish between reliability and security concerns. Plenty of vendors post plenty of reliability patches for defects that can cause an outage but aren't directly exploitable. If something can be exploited to cause both, it is both a reliability (availability) defect and a security defect.

In the next section, on quality, I'm a little confused. The book is supposed to be about securing software, but it again takes a break and examines security failures in general.

In their section "Why Major Software Vendors Should Create More Secure Software" they take a bit of a jab at Apache based on the progress they finally made in shipping a more secure IIS. While they properly point out (I didn't check it, taking them at their word) that Apache had more security vulnerabilities/defects than did IIS in 2006, they make the leap that these were the cause of more compromises on Apache. Proper configuration management, ease of configuration management, and "secure by default" configurations go a long way towards achieving security and are part of the job of shipping "secure software." But to claim that the differential in Apache vs. IIS attacks is due solely to security defects is perhaps pushing the point a bit much.

Overall, Chapter One is a decent start to the book, though it moves between too many subject areas to cover any of them adequately in the space allotted. I know it gets better from here on in (I read ahead), but I think the intro could have been a little tighter.

Identity Theft Protection the Next Racket?

I saw a post by someone in security the other day asking whether anything could trump NAC as the big thing in security literature, conferences, etc. The only thing that comes close these days is data disclosure protection tools - things like Tablus, Vontu, etc.

In the consumer space things are a little different in terms of how vendors want to get a slice of the consumer's wallet. It used to be anti-virus, then it was personal firewalls, and finally it was spyware detection and prevention. Based on recent trends, I think there is an up-and-comer in the area of identity protection.

I previously blogged about Lifelock and just yesterday about Guardid. Seems like everywhere I look there is someone pitching a new product to protect the average consumer against identity theft. Even the banks and credit card companies are in the picture.

So, as a new thread on the blog, I'm going to try to write about any of these I come across, good or bad, to see what we can find as this topic starts exploding.

If you know of any solutions in this area being marketed to the individual consumer as a protection device or service, let me know.


Follow-up: I started doing a little more research after I wrote the above and came across this article from Money Magazine back in August 2005 - The ID theft protection racket. I'm obviously not the first person to notice this trend. The article actually makes for an interesting read, though it only reviews a few products. I think I will still do product reviews on things that look especially interesting from either a positive or negative perspective.

Tuesday, May 15, 2007

Snakeoil or Legitimate Product?

A friend forwarded me a link for the "ID Vault" asking whether it is bogus or not. After reading through the site I honestly can't tell whether this device is well-intentioned or a complete waste of time.

The ID Vault is a USB token with secure storage for passwords on it. For only $30 you get a token that can store 30 usernames and passwords and automate your logins to major financial sites with a single click and the entry of your self-selected PIN. Quite pricey for the ability to store a few dozen usernames and passwords, but so far so good, right?

As I look closer at the site, though, I start to get a little more disturbed. They keep talking about smartcards and such, and they may actually use something like what Gemplus uses on its cards, but I'm not sure I see the point. All over the Guardid site I see all sorts of claims about this token being two-factor authentication, about how it will prevent identity theft, and about how it's tremendously secure compared to typing in your password. All this is, is a token that auto-populates a web browser with your username and password...

Several facts are clear:
  • The card isn't really a smartcard. It doesn't appear to do crypto operations itself, and even if it does the data it is passing back and forth are usernames and passwords.
  • The card purports to be more secure than typing in your username and password, but the threats it protects against (namely, malware) can read any of its data as well. So at best it's a band-aid, and as soon as it becomes popular the malware writers will target it just like they do other applications.
  • There aren't any documents about how they protect against brute forcing the PIN.
  • This token costs a lot for what is probably not a large increase, if any, in security.
Now, if folks like Citibank and others started actually issuing certificates you could store on a smartcard and authenticating with actual smartcard-type operations such as challenge-response, maybe this sort of thing would catch on. There are already a large number of players in that field, though, and I don't think the Verisign, RSA, and Aladdin folks are sweating Guardid much.
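For contrast, here's a minimal sketch of the challenge-response idea, with the standard library's HMAC standing in for the signing a real smartcard would do with a private key that never leaves the card. The point is that the secret itself is never transmitted:

```python
import hashlib
import hmac
import secrets

# Stand-in for key material that, on a real smartcard, never leaves the card.
card_secret = secrets.token_bytes(32)
bank_copy_of_secret = card_secret  # provisioned by the bank at issuance

# 1. The bank issues a fresh, unpredictable challenge.
challenge = secrets.token_bytes(16)

# 2. The card answers by computing a MAC over the challenge;
#    a real card would sign it with its private key instead.
response = hmac.new(card_secret, challenge, hashlib.sha256).digest()

# 3. The bank verifies the response against its own computation.
expected = hmac.new(bank_copy_of_secret, challenge, hashlib.sha256).digest()
print("authenticated" if hmac.compare_digest(response, expected) else "rejected")
```

Because the challenge changes every time, a captured response can't simply be replayed, which is more than you can say for a token that just types your password for you.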

If I had $30 to spare I suppose I'd buy one of these silly things and do a real evaluation but it just doesn't feel worth it.

Saturday, May 12, 2007

What Kind of Training is Important?

I was reading an article recently by Dave Ladd of Microsoft about security education vs. security training and it got me thinking about a conversation I had with Gary McGraw the other day.

We were discussing whether Ajax really represents a new attack vector. Gary was discussing the flaws as they relate to developers not understanding that they are pushing state to the client and that most of them don't understand the vulnerabilities of the client-side state they are creating.

I argued that the same is true for many web developers who don't have a firm grounding in technology fundamentals and use tools such as Struts for web development. Tools like Struts and other MVC web frameworks abstract developers from how the web actually works and pretty it up for them so that they can pretend they are writing client-only applications. Most of the flaws that something like Ajax presents are just a new twist on "don't trust the client."
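As a minimal sketch of what "don't trust the client" means in practice (hypothetical field names, framework-agnostic Python): any state that round-trips through the browser, whether a hidden form field, a cookie, or Ajax-managed client-side state, has to be re-validated or re-derived on the server:

```python
# Hypothetical server-side catalog: the authoritative source of prices (in cents).
CATALOG = {"sku-1001": 49900, "sku-1002": 1900}

def checkout_naive(form):
    # Trusts client-side state: the browser (or anyone with a proxy)
    # can set price_cents to whatever it likes.
    return int(form["price_cents"]) * int(form["quantity"])

def checkout_safe(form):
    # Re-derives the sensitive value on the server; only the SKU and
    # quantity are accepted from the client, and both are validated.
    sku = form["sku"]
    quantity = int(form["quantity"])
    if sku not in CATALOG or not 0 < quantity <= 10:
        raise ValueError("rejecting tampered or malformed request")
    return CATALOG[sku] * quantity

tampered = {"sku": "sku-1001", "price_cents": "1", "quantity": "1"}
print(checkout_naive(tampered))  # 1     -- the attacker picked the price
print(checkout_safe(tampered))   # 49900 -- the server ignores the client's price
```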

On the one hand, this framework-driven approach is amazingly productive from the developer's point of view. They can develop according to well-known ideas and constructs and let the frameworks handle the fact that we're dealing with the web, HTTP, DNS, etc. Ideally developers shouldn't have to understand every bit of the technology stack they are using to get their job done.

The problem comes when these frameworks and this developer mentality meet the world of security. In security, the fundamentals of the technologies being used actually are important: they determine what a developer can do, and they expose vulnerabilities that a developer has to specifically understand in order to avoid.

When we think of training people on web application security we usually start from security first principles, but we don't usually start from technology first principles. We teach developers threat modeling, security requirements analysis, attack vectors, etc. What we often forget to teach them though are some of the technology fundamentals about the tools they are using.

I can't even begin to count the number of discussions I've had with web developers who don't understand HTTP basics: what the protocol actually looks like, what cookies really are, how browsers handle them, etc. They don't understand TCP/IP, DNS, Ethernet, etc.

Without a basic understanding of things like the HTTP protocol - the fact that it truly is stateless, how cookies help graft state on top of it, and so on - a developer can't possibly hope to do a realistic job of threat analysis on their application. And if they aren't responsible for any of the low-level parsing of that type of data, they are never going to properly frame the discussion in terms of first principles.
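To make the statelessness point concrete, here's a minimal sketch using the Python standard library against a hypothetical application host. The server has no memory of the first request; the only reason the second one looks "logged in" is that the client explicitly replays the cookie it was handed:

```python
import http.client

HOST = "app.example.test"  # hypothetical application host

conn = http.client.HTTPConnection(HOST, 80, timeout=5)

# Request 1: the server knows nothing about us and hands back a
# session identifier in a Set-Cookie header.
conn.request("GET", "/login?user=alice")
resp = conn.getresponse()
session_cookie = resp.getheader("Set-Cookie", "").split(";")[0]
resp.read()  # finish reading so the connection can be reused

# Request 2: HTTP itself carries no memory of request 1. If we don't
# attach the cookie ourselves, the server sees a brand-new anonymous client.
conn.request("GET", "/account", headers={"Cookie": session_cookie})
print(conn.getresponse().status)
conn.close()
```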

When it comes to security education vs. security training, I'd prefer to start with the basics so that everyone is speaking the same language. As such, I think any good training program on security will include either formally, or informally, a number of sessions on technology fundamentals.

At most of my previous employers we started brown-bag lunch sessions to do basic knowledge transfer to everyone. Things like DNS-101, HTTP-101, Crypto-101, TCP/IP-101, and so on. We found that almost everyone with a desire to do the right thing and learn either attended if they didn't know the material, or was interested enough to ask questions afterwards to make sure they were up to speed.

These sorts of informal education sessions are invaluable for building credibility of the security folks, spreading knowledge throughout an organization, and creating a culture where knowledge is valued and respected.

Doing classes like these also helps with the security training you want to do. Once a sufficient number of folks are versed in the same basic vocabulary, it's a lot easier to teach the advanced concepts and expect that they can apply their new background knowledge to more advanced things like threat modeling.

Thursday, May 03, 2007

Under or Over Specify Security Requirements

While having a discussion today with a colleague about security architecture and SDL governance, it struck me how hard writing proper security requirements is.

If you look at a standard such as PCI, it jumps all over the place, from high-level strategic language (prevent intrusions) to exacting specifications of how long passwords must be and which encryption algorithms and wireless security mechanisms must be used.

Compare PCI to another fairly comprehensive standard such as ISO-17799 and you'll be surprised how high-level and imprecise the ISO standard is.

In developing a set of security policies and standards, and then working with teams to specify project-specific requirements, we're often torn between specifying the goals we want to achieve and the exact mechanisms and implementations that we believe will meet those goals.

The larger the organization gets the less often you'll get to be involved in a given project, interact with a given team, etc.

It's a tough balance to strike, however, between the security organization over-specifying and under-specifying.

In general we want to have fewer components, solutions, etc. to audit and have assurance over. At the same time, trying to standardize everything, especially in a development organization, risks killing the exact innovation you're trying to encourage.

There are certain cases where a regulation or auditor is going to force a certain standard on you that seems too specific to be in a high-level standard or policy but which is nevertheless mandatory. This is why you find so many policies that specify exact encryption algorithms, exact wireless security mechanisms, etc. In a regulated arena you really don't have a choice but to make these standards.

Given my choice, though, I'm torn on where to draw the line between specifying the basic security properties that each product/feature/system must meet and writing requirements and standards so specific that only one technology or solution could possibly be judged compliant.