Tuesday, November 27, 2007

Some Comments on PayPal's Security Vulnerability Disclosure Policy

Thanks to the several places that have written about this policy in the last few days.

I was personally involved in crafting the policy, and while I can't make commitments or speak officially for PayPal, I thought I'd take a few minutes to explain our thinking on a few of the items in the policy.

First, a few points. PayPal didn't have a great system for reporting security issues until this new policy came out. Our goals in creating and publishing the policy were several:
  • Improve the security of our site by getting security issues disclosed to us responsibly.
  • Create an easy mechanism for people to report a security vulnerability to us. We chose email since we figured security researchers would like it better than a form.
  • Create incentives for disclosure and remove disincentives (the threat of legal liability).
  • Make clear our expectations in these areas, since this is a new and evolving area of security vulnerability disclosure with more than a little legal uncertainty.
  • Set a standard, through our policy, that we hope others can follow.
We carefully constructed the language in the policy with our privacy lawyers to ensure that we were not over-promising with respect to legal liability. We looked at other disclosure policies, and we settled on the policy you can find here.

A few specific notes are in order:

  • We will revise the policy over time based on user feedback.
  • We are serious in our commitment to rapidly address any discovered security issues with the site. Our language around reasonable timeframe is slightly vague because we don't want to over-promise on how quickly we can resolve an issue.
  • We do expect to get back to researchers quickly with confirmation of a reported issue and tracking data on how we're doing resolving it.
Let me now address a few concerns/comments people have specifically raised.

Chris Shiflett said:
Since data can be anything, how do we know if we view data without authorization? Don't most people assume they're authorized to view something if they're allowed to view it? Does intent matter?
While we don't want users to test the security of the PayPal site, should they do so, they should be careful to minimize the disruption caused by their testing. If you start tinkering with URLs to see whether you can view certain data, do it between two accounts you control; don't try to view other people's data. There is a fine line between testing responsibly and irresponsibly, and we're encouraging people to stay on the responsible side of the line.

From Don's post:
I got a creepy feeling about actually trusting the statement. I will probably never attempt to test the security of PayPal’s site, but for those who do I would hate for the disclosure statement to change suddenly.
As I said earlier, we do believe the policy is a work in progress. We will modify it from time to time to allay concerns, improve its effectiveness, etc. Our goal, however, is to encourage responsible disclosure. I hope the intent behind the policy is enough to ease people's potential fears.

One final note on the statement - "Allow us reasonable time to respond to the issue before disclosing it publicly." We struggled over the wording on this more than any other element of the policy. It is a tricky business to get the right balance between early disclosure, our commitment to protect our customers and their data, and people's desire to know about the security of a given website or service. That said, we're committed to working with researchers when an issue is reported to us and we'll decide reasonable on a case-by-case basis.

We're hoping that this policy strikes a good balance between encouraging responsible disclosure and not discouraging researchers from coming forward.

Again, I'm not a spokesperson for PayPal, so this post naturally represents my personal beliefs about this policy, not a firm, binding statement of company policy. That said, I welcome your comments.

Poll: How Important is a POC When Getting Someone to Fix a Security Issue

Working on security inside a company that takes security seriously sometimes blinds me to how other people work and the challenges they face with getting security issues taken seriously.

I've noticed that many people who work as consultants and/or inside companies have to jump through a lot of hoops to get a security vulnerability taken seriously.

In many cases I see people spending hours and hours crafting a working proof-of-concept exploit for a vulnerability and needing to actually demonstrate that exploit to get the issue taken seriously.

To understand this better, I set up a small poll to get some data about why people need to craft a working POC to demonstrate that a vulnerability exists.

I've only ever had to do this once, and yet every time I read about a penetration test I see people spending lots of time crafting sample exploits rather than finding more vulnerabilities, or identifying similar classes of vulnerabilities and offering solutions to those.

In my experience the only time a POC has been really useful is when I need to make sure that the person fixing the issue has the necessary information/tests to make sure they've closed the issue.

For those who do penetration tests (network or application): how often do you feel that you need to create working POCs for exploits in order for the company's management to take them seriously?

Monday, November 26, 2007


New grammar captcha system :) Quite funny if you ask me. Mostly a joke I suppose, but it's Monday, so what the heck.

Tuesday, November 20, 2007

Data Leakage/Linkage Mystery

I have a mystery that came up tonight that I'm hoping someone can help me figure out.

I have a Yahoo! account that I hardly ever use anymore. I check it once every 6 months or so for email, but it remains unused otherwise. I do have my IM client Adium set to log into the account, but I don't ever use it for chatting. I also don't have the account generally associated with any of my other accounts, and it doesn't even have my real name on it.

Tonight I logged into Yahoo! Mail and checked the mailbox for said account. Delightfully, I found several emails from Jayde.com in my unused Yahoo! mailbox containing information about this blog.

Somehow I received mail to my unused yahoo account mentioning this blog.

I've never linked the two email addresses, I don't ever log into the yahoo email address, and haven't sent/received mail from it in forever.

The messages were dated back in March...

So, now I'm wondering how these two data items got linked.

  • Advertising site that is buying data and/or access logs and linking disparate things together?
  • Malware?
  • Weird CSRF or some-such?
Any ideas? I'm not sweating it too badly I suppose, but it is slightly disconcerting.

Friday, November 09, 2007

Limiting Process Privileges Should Be Easier

I was reading DJB's retrospective on 10 years of qmail security, and while I'll comment on a few of his thoughts in a separate post, one thing that struck me was his discussion of how to create a relatively effective sandbox for a process:

  • Prohibit new files, new sockets, etc., by setting the current and maximum RLIMIT_NOFILE limits to 0.
  • Prohibit filesystem access: chdir and chroot to an empty directory.
  • Choose a uid dedicated to this process ID. This can be as simple as adding the process ID to a base uid, as long as other system-administration tools stay away from the same uid range.
  • Ensure that nothing is running under the uid: fork a child to run setuid(targetuid), kill(-1,SIGKILL), and _exit(0), and then check that the child exited normally.
  • Prohibit kill(), ptrace(), etc., by setting gid and uid to the target uid.
  • Prohibit fork(), by setting the current and maximum RLIMIT_NPROC limits to 0.
  • Set the desired limits on memory allocation and other resource allocation.
  • Run the rest of the program.
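To make the recipe concrete, here is a minimal sketch of those steps in Python on Linux. This is my own illustrative translation of DJB's list, not his code: it has to run as root, `/var/empty` is just a placeholder directory, and real code would need more error handling.

```python
import os
import resource
import signal

def sandbox(target_uid, empty_dir="/var/empty"):
    """Rough sketch of DJB's sandboxing recipe. Must run as root."""
    # Prohibit new files and sockets by zeroing the fd limit.
    resource.setrlimit(resource.RLIMIT_NOFILE, (0, 0))

    # Prohibit filesystem access: chdir and chroot to an empty directory.
    os.chdir(empty_dir)
    os.chroot(empty_dir)

    # Ensure nothing else is running under the uid: fork a child that
    # switches to the target uid, kills everything it can, and exits.
    pid = os.fork()
    if pid == 0:
        os.setuid(target_uid)
        try:
            os.kill(-1, signal.SIGKILL)  # kill(-1, SIGKILL) as target uid
        except ProcessLookupError:
            pass  # no processes under that uid
        os._exit(0)
    _, status = os.waitpid(pid, 0)
    if not (os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0):
        raise RuntimeError("cleanup child did not exit normally")

    # Prohibit kill()/ptrace() of other processes by dropping gid and uid.
    os.setgid(target_uid)
    os.setuid(target_uid)

    # Prohibit fork().
    resource.setrlimit(resource.RLIMIT_NPROC, (0, 0))
    # ...then run the rest of the program.
```

Even in sketch form, it's a lot of ceremony for what ought to be a one-line policy declaration.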

If doing all of the above steps seems like a bit much, then perhaps what you're sensing is that the architectural model in most operating systems is simply wrong: it is too hard for a process to drop privileges, restrict what it can do, etc.

What strikes me about the above example is that it ought to be a lot easier for a developer/administrator to define the policy for a given process and its run environment, without having to know this much arcana about exactly how to do it.

Luckily, there are a few OS-supplied solutions to the problem that, while not perfect and still tricky to implement, are at least a step in the right direction.

Windows Server 2008
  • Microsoft has introduced service hardening and reduced privileges in Server 2008.
  • Based on what I can tell, their new wizard and the SCM in general are structured more around least privilege than some of the other operating systems, at least from an ease-of-use standpoint.
Linux
  • We have several options:
    • SELinux
    • AppArmor
  • I haven't looked extensively at either of them yet, but I'll try to look into whether their policy model is better or worse than the options above.
Mac OS X Leopard
  • Leopard introduces a new process sandboxing mechanism. Unfortunately the details are a bit sketchy. The Matasano guys have a writeup of it, but I haven't seen any details on the exact mechanisms and/or configuration.

Wednesday, November 07, 2007

The Point of Breach Notification Laws

Back in August I wrote a small piece - "Data Breaches and Privacy Violations Aren't Just About Identity Theft". Ben Wright left a comment there that I never responded to. Here goes...

He said:
Peter Huber argues in Forbes that there is no "privacy" in our social security numbers or credit card numbers. The "secrecy" of those things does not really authenticate us. So this business of giving people lots of notices about compromise of their numbers seems pointless.
I hate to rehash all that has been written about breach notification laws, but I don't see a lot written on the public policy reasons for breach disclosure/notification laws. Well, I don't hate rehashing too much. Here goes.

There are several reasonable justifications for breach notification laws:

  1. Accountability of the data custodian
  2. Alerting the data owner of the breach
  3. Collecting public policy data on frequency and manner of breaches so that we can prevent them in the future
Whether or not the data in question has value, the disclosing party certainly didn't uphold their end of the bargain. What we're seeing lately, though, is that there is no shame in having had a data breach, so #1 isn't proving all that useful from a public policy perspective. If breaches don't result in a significant financial loss, then companies won't care much about protecting the data in their custody.

The main public policy value of breach notification laws as written today is probably #3. Interesting in and of itself, but because of the nature of the breaches it isn't clear that the benefits of notification are worth the costs of disclosure. Or, more specifically, it isn't clear that public notice with specifics-per-company is serving us perfectly. An anonymous repository of details and types of incidents would accomplish roughly the same public policy goal without all of the associated costs.

I'm not arguing that companies shouldn't disclose, but I have yet to see an analysis of the costs on both sides of the issue. I'm hoping someone can point me to one.

Part of the argument of course hinges on the responsibility of companies not to disclose data entrusted to them, and the rights that the data owner has. There are costs to our current regime, however, and based on the public's reaction to data breaches (continuing to do business with said firms as if no incident had occurred), perhaps people aren't as interested in breach notification as we thought.

Safety feedback loops and new car safety features

Wired has an article today titled - "Is Car Safety Technology Replacing Common Sense?" The author of the article is concerned that all of the safety features in cars will in the end make them less safe as drivers become less and less accustomed to needing to pay attention while driving.

This argument reminds me a little bit of the snarky Apple ad about Windows UAC. There is a fine line between creating computer systems that try to prevent users from making mistakes and ones that allow the end user the flexibility to actually use the computer they purchased. Witness Leopard's new feature that asks you to confirm that you want to run something you just downloaded from the Net, and how it refuses to run certain programs whose digital signature no longer matches, which is leading to no end of annoyances for Skype and WoW users.

I was struck by one line in the article:

I always thought that as the driver, watching the road ahead for slow-moving vehicles and cars that dart into my lane — not to mention checking left or right to make sure its clear before changing lanes — was my job.
It is humorous to me to hear this same line repeated again and again as new safety features and technologies come out in products.

  • It used to be my job to pump the brakes to stop on a slippery surface. Now ABS helps me do it better in almost all cases.
  • It used to be my job to harden my operating system like a madman. Now most operating systems are shipping with slightly more reasonable defaults for things. Not perfect (witness Leopard's firewall) but getting better.
  • It used to be my job to determine whether a website and an email are real or spoofed. Now I have browser toolbars, email spoofing filters, etc. to help me out so I don't have to do each of them manually.
Sure, there are cases where relying on technology can have disastrous consequences and fail in a brittle fashion.

I don't know that it's anything but an empirical question whether a safety or security technology actually makes things better.

Wednesday, October 31, 2007

We need InfoSec incident data like NASA got from pilots

You may or may not have seen the coverage lately about a survey NASA did of airline pilots about the frequency of close calls in airline safety. There has been a bit of a scuffle about whether to release the data publicly because of fears it might erode consumer confidence in airline safety.

Today news reports are out that NASA will be publicly releasing the data. I don't have details on the study yet. It will be interesting to compare the data from this survey, which hopefully had a scientific basis, to InfoSec surveys such as the CSI/FBI survey, which we've mostly all come to hate because of its poor methodology.

Jeremiah posted the results of his latest web application security survey and the results aren't great... well, the state of security isn't great anyway. It might be nice to put together a broader survey to see how many incidents we're really having out there.

Tuesday, October 23, 2007

Software Security Metrics and Commentary - Part 2

Part 1 here

In Part-1 of this entry I talked about the first 5 metrics from the paper "A Metrics Framework to Drive Application Security Improvement".

In this second part I'll try to cover the remaining 5 metrics, as well as discuss a few thoughts on translating survivability/Quality-of-Protection into upstream SDL metrics.

First, onto the other five metrics from the paper:
  • Injection Flaws
    • Again, I think the metric posited in the paper is too tilted towards incident discovery rather than prevention. Just like the OutputValidation metric I added for XSS, output validation is really the key to prevention here. Most static analysis tools can detect tainted input and have a set of untrusted input functions (things that read from sockets, stdin, etc.). It should be relatively straightforward to model our own application-specific output functions to detect where we're handing unchecked/unfiltered input to an output routine, particularly across a trust boundary. If we can model these, we can at least make sure we have good sanitization coverage for each output type. We'll want this type of output filtering anyway, so we might as well combine metrics with our XSS example.
  • Improper Error Handling
    • I think the metric posed in the paper - counting unchecked returns - is a pretty good idea. This isn't going to catch web-server-layer errors, unfortunately, and won't necessarily detect errors in things like app servers, db layers, etc. We can test for these, but the best metrics might be those related to following secure configuration guidance, such as the CIS guide for individual web servers and/or app servers. The CIS benchmark, for example, requires a compliant configuration to handle standard web errors (4xx and 5xx) through rewrites and/or custom handlers. There are cases (SOAP comes to mind) where we need to throw a 5xx error back to a client, but this is the exception rather than the norm. Configuring application and web servers to minimize this sort of data disclosure is certainly a good thing, and in this sense we can check for compliance at this layer as almost a binary config: you pass the CIS guidance or you don't.
  • Insecure Storage
    • I don't think the metric of percent encrypted hard drives is really a meaningful metric in this context. If we look at typical web attacks that fall into this category we'd be looking at exploits that leak things like passwords, CC-data, etc. that is stored in an improper manner on the webserver. Some of this is going to be related to the implementation in the code, and so our best bet is probably a detailed audit of each piece of information that falls into this criticality range to confirm that it is being handled in an appropriate manner. I struggle to find a concrete metric that helps to measure this however. PercentCriticalDataCovered for proper encryption/hashing technique? Still not a very convincing metric unfortunately.
  • Application Denial of Service
    • Two metrics spring to mind here:
      • Memory/Resource Allocations Before Authentication
      • Memory Leaks
    • Both of these are a lot more likely to lead to application denial of service than any other errors I can think of. Both of these should be minimized. Tracking them and having the absolute fewest of them is probably a good bet. That doesn't mean we're not going to have a DoS issue, but these are at least 2 places to look.
  • Insecure Configuration Management
    • This item probably goes back to the same metrics I posited for Improper Error Handling. Things like the CIS benchmarks for OS, webserver, and appserver are our first pass candidates for measuring this.
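As a toy illustration of the sanitization-coverage idea from the Injection Flaws item above, one could scan source text for calls to the application's output routines and count how many wrap their argument in a required escaping helper. All of the sink and sanitizer function names here are hypothetical, and a real implementation would use a static analyzer's taint tracking rather than regexes:

```python
import re

# Hypothetical names: "sinks" are our application-specific output
# routines, "sanitizers" the escaping helpers we require in front of them.
SINKS = ("render", "write_response")
SANITIZERS = ("escape_html", "escape_sql")

def sanitization_coverage(source):
    """Return the percent of sink calls whose argument is a sanitizer call."""
    pattern = r"(?:%s)\(\s*(\w+)?" % "|".join(SINKS)
    args = re.findall(pattern, source)
    if not args:
        return 100.0  # no sinks at all: vacuously covered
    covered = sum(1 for a in args if a in SANITIZERS)
    return 100.0 * covered / len(args)

sample = """
render(escape_html(comment))
render(raw_comment)
write_response(escape_sql(query))
"""
print(round(sanitization_coverage(sample), 1))  # 2 of 3 sink calls sanitized -> 66.7
```

The point isn't the regex; it's that per-output-type coverage is a number we can track over time, unlike pen-test incident counts.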
On the question of survivability, I was struck by a presentation and paper Steve Bellovin did on this topic last year at the first Metricon: "On the Brittleness of Software and the Infeasibility of Security Metrics."

Steve makes what I believe are two major points in this paper:
  • Software is brittle, it fails catastrophically
  • Unlike other engineering disciplines, we don't know how to get to certainty about the strength of a piece of software.
I won't disagree with either of these points, but to an extent you can say this about all new technologies. We've had catastrophic failures in physical engineering before as well. Old materials sometimes fail in new ways, new materials fail in unpredictable ways, and we still rely on sampling and testing for analysis of a batch of materials.

The Quality of Protection workshop at the CCS conference is probably the best place to look for research in this area. Previous papers from the workshop can be found here. This year's conference and workshop starts next week; if you're in the DC area and interested in software security metrics, it looks like it's going to be a good event. The accepted papers list contains a number of papers that I think might shed some light on my speculation above.

I plan to put together a few more thoughts on brittle failure modes of software in a followup to this; I haven't had time to pull all of my thoughts together yet.

Tuesday, October 09, 2007

SQL Injection Humor?

If you're an application security geek at all, then you must read today's xkcd.

I've always said there aren't enough SQL Injection jokes...

Monday, October 08, 2007

Apologies and Data Breaches

I just listened to an NPR piece - "Practice of Hospital Apologies Is Gaining Ground."

There has been quite a bit of research in the last few years showing that the differentiating factor between a doctor who gets sued for malpractice and one who does not is how much time they spend with their patients, and how humble they are.

The NPR piece details how at least one hospital now has a practice of apologizing to patients who have adverse outcomes, or where there was a missed diagnosis. It turns out that many patients sue not because of the mistake, but because of how they are treated. Being upfront and honest with the patient about the mistake, and apologizing, seems to have a positive impact.

Makes me wonder if there is a lesson in here for companies that have data breaches. Maybe getting out in front of the issue like TD Ameritrade (not really out front given how long it was going on, but out in front of the major press) will help them in the end with respect to how successful the class action suits are, etc.

I guess we'll just have to see.

Monday, September 17, 2007

Software Security Metrics and Commentary on "Metrics Framework" Paper

I was reading the paper "A Metrics Framework to Drive Application Security Improvement" recently and some thoughts started to gel about what types of web application security metrics are meaningful.

This is going to be part-1 of 2 about the paper and software security metrics. In this first installment I comment on the metrics from the paper and provide what I believe are reasonable replacement metrics for 5 of the 10 in the paper. In Part-2 I'll take on the next 5 as well as discuss some other thoughts on what metrics matter for measuring web application security.

The paper is actually a good introduction on how to think about measuring software security, but I think a few of the metrics miss the mark slightly.

In the paper they analyze software metrics in three phases of an application's lifecycle:
  1. Design
  2. Deployment
  3. Runtime
The paper uses the OWASP Top 10 as the basis for measurement and comes up with metrics that will tell us how we're doing against it.

The goal of metrics should be, where possible, to create objective measures of something. Whereas some of the metrics described in the paper are quite objective, others are more than a little fuzzy, and I don't think they represent reasonable ways to measure security.

First, the Top 10 and associated metrics from the paper (and you'll have to bear with me as I try to create tables in Blogger):

OWASP Item | Metric | App Phase | Method
Unvalidated Input | PercentValidatedInput | Design | Manual review
Broken Access Control | AnomalousSessionCount | Runtime? | Audit Trail review?
Broken Authentication / Session Management | BrokenAccountCount | Runtime | Account Review
Cross-Site-Scripting | XsiteVulnCount | Deployment? | Pen Test Tool
Buffer Overflow | OverflowVulnCount | Deployment | Vuln Testing Tools?
Injection Flaws | InjectionFlawCount | Runtime | Pen Testing
Improper Error Handling | NoErrorCheckCount (?) | Design | Static Analysis
Insecure Storage | PercentServersNoDiskEncryption (?) | Runtime | Manual review
Application Denial of Service | ?? | Runtime | Pen Testing?
Insecure Configuration Management | Service Accounts with Weak Passwords | Runtime | Manual review

I think, unfortunately, that this set of metrics misses the mark a little bit. I question whether pen testing for buffer overflows or XSS is really the right way to develop a sustainable metric. A necessary assurance component, to be sure, but not necessarily the first metric I'd focus on if I'm asking the question "How secure is my app?" I'm loath to rely on testing for the bulk of my metrics.

A few of the metrics above are, I think, unmeasurable or inappropriate. It's hard for me to imagine how we'd measure AnomalousSessionCount appropriately; it seems like if we had proper instrumentation for detecting these as described in the paper, we probably wouldn't have any in the first place. I'm not so sure about BrokenAccountCount being representative of issues in authentication and session management either.

As I work on building my web application security metrics, I'm trying to focus on things in the design phase. For the majority of flaws I'd like to develop a design-phase metric that captures how I'm doing against the vulnerability. This gives me the best chance to influence development rather than filing bugs after the fact. It is possible that some of these metrics simply don't exist in a meaningful way; you can't measure configuration management in your design phase, for example.

Rather than just being destructive, here is my modified group of metrics.
  • Unvalidated Input
    • I actually like the metric from the paper. Measuring input validation schemes against the percent of input they cover is a pretty good metric for this. Don't forget that web applications can have inputs other than HTML forms; make sure that any and all user input (cookies, HTTP headers, etc.) is covered.
  • Broken Access Control
    • Unfortunately this one is a tricky metric to get our hands around. Ideally we'd like to be able to say that our data model has proper object ownership and we could simply validate that we call our model appropriately for each access attempt. This is unlikely to be the case in most web applications.
    • I'd really break this metric down into Application-Feature and Data access control. For Application-Feature access control I'd make sure that I have a well-defined authorization framework that maps users and their permissions or roles to application features, and then measure coverage the same way I would for input filtering.
    • For Data access control, I unfortunately don't have a good model right now to create a design-time metric, or any metric for that matter.
  • Broken Authentication and Session Management
    • For a general application I again come back to use of frameworks to handle these common chores. I'd want to make sure that I have a proper authentication and session management scheme/framework that is resistant to all of the threats I think are important. The important metric is coverage of all application entry points against this framework. When implemented at the infrastructure level using a package such as Siteminder or Sun Access Manager, auditing configuration files for protected URLs ought to get me good coverage.
    • From a testing perspective I can also spider the application and/or review my webserver logs, compare accessed URLs against the authentication definition, and make sure everything is covered appropriately.
  • Cross-Site-Scripting
    • From a design perspective there are two things that matter for XSS vulnerability.
      • Input Filtering
      • Output Filtering
    • The best metric, therefore, for measuring XSS vulnerability is a combination of the InputValidation metric and an equivalent OutputValidation metric.
  • Buffer Overflow
    • In general buffer overflows are the result of improperly handled user input. Within a web application we ought to be able to handle most of these issues with our InputValidation metrics, but there are going to be cases where we handle the data in an unsafe way downstream. Our best techniques for detecting and eradicating them are going to be either using a dynamic language where we don't get buffer overflows, or lots of static analysis and strict code reviews of all places we handle static-sized buffers. One partial solution is to simply use an environment that isn't itself susceptible to buffer overflows; this makes analyzing the web application for buffer overflows pretty easy.
    • For those who insist on writing web applications in languages such as C/C++, our best defense is to avoid static buffers and to strictly code-review those places where we do use them, analyzing inputs for proper bounds checking. One useful measure would be PercentBoundsCheckedInput, which we can theoretically catch with a static analyzer; they are currently pretty decent at finding these.
      • One problem with the metric from the paper was a focus not on the web application itself but on its platform. I'm not sure that we're working at the right level when we start considering OS vulnerabilities while reviewing web applications. They are certainly part of the picture and a meaningful vulnerability, however.
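The coverage-style metrics above (PercentValidatedInput and its OutputValidation twin) reduce to a simple calculation: enumerate every input the application accepts and divide the subset covered by the validation scheme by the total. A minimal sketch, with made-up input names for illustration:

```python
# PercentValidatedInput sketch: declared inputs (form fields, cookies,
# HTTP headers, ...) versus those covered by the validation framework.
# All input names below are hypothetical.
def percent_validated(all_inputs, validated_inputs):
    """Share of declared inputs that pass through input validation."""
    if not all_inputs:
        return 100.0  # nothing declared, nothing to validate
    all_inputs = set(all_inputs)
    covered = all_inputs & set(validated_inputs)
    return 100.0 * len(covered) / len(all_inputs)

inputs = {"username", "password", "session_cookie", "user_agent_header"}
validated = {"username", "password", "session_cookie"}
print(percent_validated(inputs, validated))  # 3 of 4 inputs validated -> 75.0
```

The hard part, of course, is the enumeration itself: the metric is only as honest as the inventory of inputs behind it.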
In part-2 of this piece I'll try to cover the remaining 5 metrics as well as discuss a few thoughts on translating survivability/Quality-of-Protection into upstream SDL metrics.

Sunday, September 16, 2007

Why Don't Financial Institutions Have Vulnerability Reporting Policies Online?

You may remember I did a bit on vulnerability reporting policies a little while ago. I was interested in crafting a vulnerability disclosure policy that was responsible both for the company posting it and for security researchers, and that took into account the liability issues surrounding security researchers testing web applications.

In my previous piece I pulled together a quick summary of the public-facing security reporting policies (or lack thereof) for a number of big sites on the web. Recently I started doing the same for financial institutions. I tried finding disclosure policies online for major financial institutions such as Citibank, Wells Fargo, Washington Mutual, Chase, Fidelity, etc. I was unable to find an externally accessible security reporting/disclosure policy for any of the major financial institutions I looked at.

Why is that?
  • Fear that a disclosure policy makes it look like they could have a security issue?
  • Worried about too many people contacting them about bogus issues?
  • They don't want to be the first to publish one?
I'm not suggesting that everyone ought to follow the RSPolicy, but maybe they ought to have something online rather than nothing?

Buffer Overflows are like Hospital-Acquired Infections?

I was listening to NPR a few weeks ago and heard an interesting piece about new policies being implemented related to "Avoidable Errors."

The idea is that certain medical outcomes are always the results of medical negligence rather than inherent issues in medicine such as patient differences, etc. A few things that fall into the avoidable category are:
  • Common hospital-acquired infections
    • Urinary tract infections, for example, are extremely rare when proper protocols are followed.
  • Blatant surgical errors
    • Tools left in the patient, for example. There are easy ways to make 100% sure this doesn't happen.
What I find most interesting is that historically there have been problems with these issues, but we now have good procedures for avoiding them in almost all circumstances. There will be corner cases, as the article points out, but these are by far the exception.

For historical context: we didn't used to understand that we needed to sterilize needles and/or use them only once. Needles used to be expensive and so we reused them, but we discovered infection rates were unacceptably high. We created low-cost disposable needles and we use those now instead because they are safer.

Similarly, we continue to program in languages that make avoiding things like buffer overflows tricky. Not impossible, but tricky. Given the attention paid to buffer overflows, and the fact that we have tools to completely eliminate them from regular code, I'd say they fall into the same category as surgical tools left inside the patient: negligence.

A key quote from Lucien Leape of the Harvard School of Public Health:

Today, he says, dozens of safe practices have been developed to prevent such errors. But he says there hasn't been enough of a push for hospitals to put them into use.

"I think it's fair to say that progress in patient safety up to now has relied on altruism. It's been largely accomplished by good people trying to do the right thing," Leape says. "And what we're saying is that hasn't gotten us far enough, and now we'll go to what really moves things in our society, which is money."

Maybe I should start putting money-back guarantees in my contracts with software vendors so they owe me a partial refund for every buffer overflow that gets exploited/announced in their code?

Tuesday, September 11, 2007

Thoughts on OWASP Day San Jose/San Francisco

Last Thursday 9/6/2007 we had a combination San Jose/San Francisco OWASP day at the eBay campus. Details on the program are at: https://www.owasp.org/index.php/San_Jose

The turnout was great, somewhere between 40 and 50 people (I didn't get an exact count). There were two sessions for the evening:
  • A talk by Tom Stracener of Cenzic on XSS
  • A panel discussion on Privacy with a pretty broad group of security folks and some people in adjacent areas such as Law and Privacy proper.
The panel discussion was really the part of the night I was looking forward to. I think the discussion rambled a bit between several different areas:
  1. What is Privacy?
  2. What are a company's obligations to protect Privacy? Legal, Ethical, Moral, good business sense, etc.
  3. How do companies, especially large ones that operate in multiple states or are multinationals, deal with all of the different privacy regulations?
  4. How do we integrate Privacy concerns into security operations, secure development, etc.
I'll admit that #4 was the topic I was hoping would get a decent amount of coverage, but despite my efforts to prod the panel in that direction we didn't really come up with an answer.

The best discussion of the night, in my mind, came on point #3: how do large companies manage diverse privacy regulations and policies across jurisdictions...

All of the panelists in this area made two points:
  1. Set a baseline policy that encompasses the vast majority of your requirements and implement it across the board. This way you don't have to continuously manage to specific privacy regulations, since you've embodied them in your general policy.
  2. Setting the privacy policies and controls around it is an exercise in risk management. People don't often look at writing policies as managing risk, but that is exactly what policies do.
The good thing about the panel was that there were plenty of people with expertise in Privacy considerations. The bad part was that there was little discussion of how we actually do software development with Privacy in mind. Of the people writing about SDL, the Microsoft people have been most vocal in talking about how to integrate Privacy evaluations into their SDLC. For an example, see this post.

If nothing else was achieved last Thursday, we had great turnout for the local OWASP event - better than I've seen so far. We also got to try out part of the space that will be used for the fall conference. I think it went well, but I guess we'll have to get the other folks present to weigh in with their thoughts since I'm obviously a little biased.

Friday, August 31, 2007

FUD About Ruby on Rails?

James McGovern has a piece, "The Insecurity of Ruby on Rails," that Alex picked up on, and I think the whole idea is a little overblown...

The points raised by James were:
  1. Java has a security manager, Ruby does not.
  2. None of the common static analysis tools cover Ruby
I'll address both of these...

  1. I have yet to come across a single Java application that actually uses Java's security manager to specify security controls, access rights, etc. While the hooks certainly exist, and some tools like Netegrity, Sun Access Mgr, etc. will allow you to override Java's native security manager with their own implementation, this is by far the exception rather than the norm for server-side code.
    1. Note: We're not talking about client sandboxing here, where Java's security manager policy does come into play by default.

  2. No static analysis tools cover Ruby. True, but irrelevant. It is perfectly possible to write secure code without the assistance of a static analysis tool; it's just a lot easier to do so with one. The fact is, there isn't good static analysis capability for many languages, including Ruby, Python, Perl, and so on.
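To make the gap concrete: even without a real static analysis tool, a crude check is easy to sketch. Here's a hypothetical, minimal example in Python that greps Ruby source for an obvious SQL-injection pattern. The pattern and snippet are invented for illustration; real analyzers work on ASTs and data flow, not regexes.

```python
import re

# Hypothetical "static analysis" reduced to a grep: flag Ruby lines that
# interpolate request parameters directly into SQL-looking strings.
# Real analyzers do far more; this only shows the idea.
RISKY = re.compile(r"(SELECT|INSERT|UPDATE|DELETE).*#\{params\[", re.IGNORECASE)

def scan(source):
    """Return (line_number, line) pairs that look like SQL injection risks."""
    return [(n, line.strip())
            for n, line in enumerate(source.splitlines(), start=1)
            if RISKY.search(line)]

ruby_snippet = """
name = params[:name]
rows = db.execute("SELECT * FROM users WHERE name = '#{params[:name]}'")
safe = db.execute("SELECT * FROM users WHERE name = ?", name)
"""

for lineno, line in scan(ruby_snippet):
    print(f"possible SQL injection at line {lineno}: {line}")
```

A check this shallow misses almost everything interesting, which is exactly why the absence of mature tooling for a language matters less than whether the developer knows what to look for.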
The upshot: I think the premise is a bit flawed. Maybe I'm overreacting to a relatively short, thought-provoking piece, but I thought I'd throw my 2 cents in there...

Tuesday, August 28, 2007

OWASP Day/Week - September 6th

Get in on the fun.....

OWASP Day : Day of Worldwide OWASP 1 day conferences on the topic "Privacy in the 21st Century" : Thursday 6th Sep 2007


I'll be at the San Jose meeting, it should be interesting.


Friday, August 03, 2007

What is Safe Enough?

I wrote a piece a little bit ago comparing software security and liability to liability in the pharmaceutical industry.

Wired had a great article today about drug safety titled "FDA Drug Standards: What's Safe Enough?" I think a few of their points are pretty relevant to the discussion:

Does the FDA advisory panel's decision mean Avandia is safe?

It's safe enough, according to 22 of the 23 scientists on the FDA panel. That means the drug's benefits -- decreasing blood-sugar levels -- are more important than the potential risks cited in the Journal study. Plus, it's not even clear that the harm indicated in the study was caused by the drug.

And, more on how we measure safe...

I'm not convinced. Why is the FDA approving drugs that may not be safe?

Before a drug is released, clinical trials study thousands of patients. But deadly complications to new drugs are often extremely rare and don't emerge until millions of people have taken the drug.


The FDA must weigh many factors when it comes to deciding whether to keep a drug on the market. Do the benefits outweigh the risks? Do other drugs on the market treat the disease with fewer side effects? As reporter Trevor Butterworth said recently on The Huffington Post: "What if we save 20 out of 100 people from going blind, but increase the risk of heart attack for four out of 100? Is this acceptable? No one really has a good answer."

I think this answer is a really good one to think about when you're developing software. Defining what safe enough is varies a lot by product, market, customer, type of data you're processing, etc.

The takeaway, I suppose, is that even where it truly is life-and-death there aren't easy answers to these types of questions. It makes me feel a little better I guess...

Thursday, July 19, 2007

Security Reporting Policies That Encourage Responsible Disclosure?

I was reading Jeremiah's piece recently about the CSI working group he was on dealing with liability for security researchers, especially those working in the web application space. It got me thinking about creating disclosure policies that serve several purposes:
  • Encourage Responsible Disclosure (subject to interpretation)
  • Provide clear expectations and ground rules
  • Protect researchers who disclose responsibly - i.e. waive liability for researchers who follow the predefined rules
I'm working to contact a few of the people involved in the CSI report to find examples of disclosure policies that achieve the above goals. In my mind I'd want the policy to have roughly these items:

  1. Tell the company first about vulnerabilities
  2. Don't sell the vulnerability or otherwise distribute it until hearing back from the company
  3. Don't exploit the vulnerability beyond what is necessary to demonstrate the weakness.
    1. Example: If there is an authorization issue, use two of your own accounts; don't break into someone else's.
  4. Do these things, and we guarantee we won't go after you for doing vulnerability research on our site.
  5. If you're helpful, we'll try to run a thank-you page listing you. We don't however pay for vulnerabilities.
If you have pointers to good disclosure/reporting policies, I'd love to see them. I looked at a number of the major providers and I didn't see any policies that really hit this one on the head.

  • Overall, good page
  • Doesn't include waiver for the researcher
  • Doesn't mention responsible disclosure
  • Doesn't include waiver for researcher
  • I couldn't really find their security reporting page/info.
  • http://www.myspace.com/security points to a really odd place
  • Not much in the way of reporting a security vulnerability
  • No waiver of liability

Wednesday, July 18, 2007

Pharmaceutical Liability vs. Software Liability

I've written in the past about software security liability and how difficult it is to create high quality software that is free from defects.

One of the problems, as was pointed out before, is that software and computers don't have a fixed use that can be anticipated during the development cycle, and consequently saying that software isn't "fit for purpose" is a really tough judgment call.

I started thinking of other products where bad outcomes happen even during correct use, where the flaws aren't necessarily the fault of the manufacturer. Pharmaceuticals come to mind as products that have:
  • Large safety concerns
  • Potentially large benefits (antibiotics sure are nice, aren't they?)
  • Per-individual side effects that are tricky to predict
Pharmaceutical companies develop drugs using an extensive process to try to ensure safety. The list of things they do is long:
  • Pre-Approval
    • Computer testing of toxicity
    • Animal testing of toxicity
    • Stage-1 trials in humans (small group) to test toxicity and effects
    • Stage-2 trials (larger number of people) to determine drug efficacy
    • Stage-3 clinical trials (hundreds to thousands of people over 1-3 years) to determine efficacy, adverse effects, etc.
    • Drug interaction trials and labeling
    • Extensive documentation trail
    • Get FDA Approval
  • Post Approval
    • Adverse event reporting capability
    • Updates to labeling
    • Constant quality checks
Despite all of these steps, sometimes someone suffers an adverse event from taking a medication. When they do, they - rightly in some cases, wrongly in others - blame the pharmaceutical company for a defective product. Sometimes the causes of the problem are:
  • Individual "allergic" reaction
  • Complicated or unforeseen drug interaction
  • Unsafe Product
    • Long-term safety issues that didn't surface during clinical trials.
Depending on the specific cause, it's hard to always blame the problem on the pharmaceutical manufacturer.

A pharmaceutical company can no more anticipate individual allergic reactions than a software vendor can guess at how someone is going to use their software. What matters most in determining liability is the level of due diligence and proper process that went into the product development, not the outcome itself.

All of this costs money. Current estimates are that developing a drug and bringing it to market costs approximately $800 million. Individual manufacturing costs are generally low, such that the first pill that comes off the production line costs $800 million and each additional pill costs 5 cents.

There is a big discussion going on right now about flaws in the process on the legal, FDA, and pharmaceutical sides. Right now it's tricky to bring smaller, targeted drugs to market because the costs of developing and gaining approval for a new medication are prohibitive. The Economist had a few recent pieces on how drug companies are trying to develop targeted medications and how FDA regulation may be doing more harm than good in some cases.

If you've read this far you may be asking yourself what this has to do with software liability and software security. The points are:
  • Other products are subject to heavy regulation but still manage to turn a profit
  • The quality of the process doesn't always guarantee a quality outcome - especially in the face of uncertain product use
  • If we impose too much liability on software manufacturers we could drastically raise prices and/or reduce the amount of software available
  • Sometimes regulation does more harm than good
Just some points to think about next time it appears that questions of software liability are simple.

Friday, June 29, 2007

Data Breaches and Privacy Violations Aren't Just About Identity Theft

Chris Walsh over at the EmergentChaos blog had a piece the other day about some of the research they are doing on breach disclosures and what we can learn from them. I made some comments about data breaches as they relate to identity theft, and at the time was pretty well convinced that what matters about data breaches is just identity theft.

After reading a follow-up comment from Chris and Dissent, and then the next piece by Adam - It's not all about "identity theft" - I think I need to regroup.

Adam made the point that:

Data breaches are not meaningful because of identity theft.

They are about honesty about a commitment that an organization has made while collecting data, and a failure to meet that commitment. They're about people's privacy, as the Astroglide and Victoria's Secret cases make clear.

This is a very good point and one I'd lost sight of in my previous comments. Protecting privacy is about the implied or explicit agreement between the data provider and the data repository/protector. A breach of this agreement constitutes a privacy violation, regardless of whether the law requires disclosure.

One of the problems with current disclosure laws is that their focus is entirely on identity theft. SB-1386 (and most if not all of the other disclosure laws) only kicks in if your personally identifiable information and a private identifier (bank account number, SSN, CC#, etc.) are released together. The end effect of this sort of disclosure regulatory regime is a focus not on privacy but on identity theft. As Adam rightly points out, a lot of damage can be done through privacy violations without any possibility of identity theft.

Adam listed two obvious examples of data disclosures that had nothing to do with identity theft but that nevertheless were violations of privacy agreements made between the data owner and the data custodian. Another would be AOL's release of search data.

I'm toying with a few thoughts on how you can modify the existing US regulatory regime without undesired effects. The US is distinctive from the EU, for example, in its relatively lax privacy regulation, but there are at least a few consumer-friendly results that go along with it, such as cheaper financing. Trade-offs abound, and we don't make informed decisions about almost any of them. But more on that later when I have a little more time to think.

Wednesday, June 27, 2007

Banning Programming Languages to Fix Problems?

Michael Howard had an interesting piece the other day on the SDL blog and also gave an interview about some similar topics. The subject I'd like to address is the banning of certain things during the development process, and the theoretical and practical aspects of doing so. I'd like to show that banning function calls and enforcing annotations as coding practices takes you reasonably far down the path to saying that C/C++ isn't such a good programming language from a security perspective, and that to achieve higher levels of assurance we need to go further down that road.

I'd like to touch on a few points in this piece:
  • The SDL vs. Legacy Code
  • Banning Programming Constructs, Functions, and/or Whole Languages
  • A Sliding Scale of Assurance
The SDL vs. Legacy Code

One of the things I'm struck by in Microsoft's SDL book is how pragmatic it is with respect to making software more secure. They point out repeatedly that products have been delayed and features scrapped because they either weren't secure or couldn't be made secure. While I can't argue with them about whether they've done this, there are at least a few places where we can point to a less than stellar track record of reducing feature sets to improve security. Web browsers and how Windows does file associations come to mind as big security-vs-feature wars that the feature folks seem to have won.

Michael and the other folks at Microsoft have done a ton of good work in creating, refining, and implementing their secure software development methodology, the SDL. The SDL, however, reflects the realities of the Microsoft situation - millions of lines of legacy code. No one with that much legacy code can afford to start over from scratch. Anyone who suggested writing the next version of Word entirely in C# would probably get a swift kick. So the Microsoft security folks are left with taking the practical approach: given that there is a lot of legacy code, and given that they aren't going to make a wholesale switch to another programming language, what is the best they can do in C++?

I think this is a fair approach, but I don't believe for a minute that if you asked one of the security guys there what programming language they want new things written in, they'd pick C++ over C# for anything other than things like a kernel.

Banning Programming Constructs, Functions, and/or Whole Languages

Michael said in his recent piece:

In all of our SDL education we stress the point that .NET code is not a security cure-all, and we make sure that developers understand that if you choose to ignore the golden rule about never trusting input no language or tool will save you. Using safe string functions can help mitigate risks of buffer overruns in C/C++, but won’t help with integer arithmetic, XSS, SQL injection, canonicalization or crypto issues.

The key point is that languages are just tools; anyone using a tool needs to understand the strengths and limitations of any given tool in order to make informed use of the tool.

One final thought; in my opinion, well-educated software developers using C/C++ who follow secure programming practices and use appropriate tools will deliver more secure software than developers using Java or C# who do not follow sound security discipline.
I can't argue with Michael's point. I also don't know a lot of people in the security world who believe eliminating C/C++ automatically makes you secure. I do know quite a number of people, though, who make the good case that programming in C# or Java significantly reduces the exposure to certain classes of vulnerabilities and overall results in more secure software when using a developer trained in security.

WhiteHat Security's numbers on the general prevalence of web application security vulnerabilities per language seem to bear this out. In general, applications written in ASP.NET and Java have fewer security vulnerabilities due to language constructs and secure coding frameworks.

I think you only have to look at what Microsoft has done with banning certain functions and requiring annotations to see that they have already taken some steps in the - C++ is bad - direction.

Why do they ban certain function calls such as strcpy? Is it because the function inherently cannot be used safely? No. The reason is that it's tremendously hard to use safely, and by removing it from the programmer's options and replacing it with something that is easier to use safely, they improve the security of their software.

Why do they do annotations? They do them so that their static analyzers have an easier time in ferreting out certain classes of security defects.

If we could train developers to use all of the features of the programming language correctly, we wouldn't have to worry about either of these things. We'd simply do training and get on our way. The reality is that we cannot rely on developers to use certain portions of the language properly. We've shown repeatedly that certain function calls are the root of the vast majority of the security vulnerabilities out there. Most of the function calls that Microsoft has banned are ones that have been found to result in buffer overflows.
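The banned-function policy described above is enforceable mechanically. Here's a hedged sketch in Python, purely illustrative, of scanning C source for calls on a banned list; Microsoft's actual enforcement works differently (compile-time deprecation via a header), and the C snippet below is invented for the example.

```python
import re

# A sketch in the spirit of Microsoft's banned-function list (strcpy, strcat,
# sprintf, gets, ...). This grep over C source only illustrates the policy;
# it is not how the real enforcement is implemented.
BANNED = {"strcpy", "strcat", "sprintf", "gets"}
CALL = re.compile(r"\b(" + "|".join(sorted(BANNED)) + r")\s*\(")

def check_banned(c_source):
    """Return (line_number, function) pairs for calls to banned functions."""
    return [(n, m.group(1))
            for n, line in enumerate(c_source.splitlines(), start=1)
            for m in [CALL.search(line)] if m]

c_code = """
char buf[16];
strcpy(buf, user_input);                /* banned: no bounds check */
strcpy_s(buf, sizeof buf, user_input);  /* replacement, not flagged */
"""

print(check_banned(c_code))
```

The point of a check like this is that it removes a decision from the developer entirely: the unsafe call simply isn't available, regardless of training.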

If we take as our starting position that developers can and will make mistakes, and we're willing to enforce certain rules on them to constrain how they can use a programming language, why not take the next step and ask what else we can do to improve the security of the code we deliver?

There are several other areas of C++ that are problematic from a security perspective. Memory allocation comes to mind. The number of flaws we see from misuse of memory allocation is huge. Why not switch to a language that makes it fundamentally harder or impossible to have these kinds of issues?

Take the annotations Microsoft is using to hint their static analyzers. What these amount to are lightweight programming-by-contract constructs used for after-the-fact analysis rather than enforced through the programming language with violations treated as errors. Why not switch to a programming language that enforces annotations rather than treating them as an after-the-fact construct?
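As a rough analogy for the difference, here's a hypothetical Python sketch of a contract checked at every call by the runtime itself, instead of by a separate analyzer after the fact. The decorator, function names, and checks are all invented for illustration.

```python
from functools import wraps

# Buffer-size annotations are normally checked after the fact by a static
# analyzer. A language-enforced equivalent is a contract checked at each
# call boundary; a violation becomes a loud error, not silent corruption.
def requires(predicate, message):
    def decorate(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if not predicate(*args, **kwargs):
                raise ValueError(f"contract violated in {fn.__name__}: {message}")
            return fn(*args, **kwargs)
        return wrapper
    return decorate

@requires(lambda dest, src, n: n <= len(dest), "n must fit within dest")
def bounded_copy(dest, src, n):
    """Copy n items of src into dest; the contract rules out an 'overflow'."""
    dest[:n] = src[:n]
    return dest

print(bounded_copy([0] * 8, list(range(16)), 8))  # fine: n fits in dest
# bounded_copy([0] * 8, list(range(16)), 16)      # raises ValueError instead
                                                  # of corrupting memory
```

The design point is where the check lives: an analyzer hint can be skipped or misconfigured, while a contract enforced by the language fails every time it is violated.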

A Sliding Scale of Assurance

In the end what we have is a sliding scale of assurance and functionality. Processes such as the SDL that mandate process but don't require us to switch tools only get us so far down the assurance path. That is further than many/most companies are willing to go, and credit to Microsoft for making such an effort. At the same time, there are things the SDL doesn't necessarily do:
  • Enforce the use of tools and programming languages that are more likely to reduce security vulnerabilities
  • Eliminate software features that are truly problematic from a security standpoint and yet users have come to expect
If we want the SDL to be about delivering fundamentally more secure software at higher assurance levels, then it must mandate certain programming methodologies that result in higher quality code. Without these sorts of mandates we're still just putting band-aids on our existing software development processes.

Assurance comes at a cost: development costs, testing costs, etc. When we switch to tools that eliminate certain classes of vulnerabilities, we both remove the effort developers would spend securing the code against those flaws and reduce the time testers spend looking for those classes of defects. In the end this is what Microsoft has done by removing certain programming constructs; they just haven't made the slightly bigger jump to another programming language, for the reasons I've already explained above.


What I think this comes down to is: use C++ if you must to muck with legacy code, but please don't do new development there. From a security perspective it's going to be a lot more expensive if you do.

Monday, June 25, 2007

More on Software Liability

About five weeks ago Symantec messed up their AV signatures and accidentally classified some Windows system files as viruses. The files were only part of the Simplified Chinese version of the OS, so presumably this configuration didn't get as much testing as a regular one.

Yesterday it was announced that they would compensate the folks who got hammered.

I'm going to be very interested to see how this plays out, whether the lawsuits move forward, etc. This is a pretty clear example of harm done by Symantec, and certainly not intended behavior. It's not clear whether this falls into the "didn't test enough" category of mistakes or something else. Perhaps they bypassed their internal processes to release these signatures? Either way, I bet they are hoping they have a good internal audit trail to show exactly how and why this happened.

I'll be watching this one to see whether any of these folks persist in their lawsuits and whether this ends up making any case law about software liability.

Wednesday, June 20, 2007

On Bad Post Titles

Sometimes you start writing an entry intending to cover a subject a certain way, but by the time you're done you've sort of switched gears; since you already wrote the title, you forget to go back and fix it.

Rothman pointed today to my post from the other day, "Building Effective Metrics." He rightly pointed out that the piece isn't really about metrics. I think he's slightly off the mark, though, in his statement that I was writing mostly about risk management.

I think the point I was making was about culture change and secondarily about risk management. The old story/analogy about a frog in boiling water is at least slightly appropriate. Though when I went to look up the story I found out more than I wanted to on the Wikipedia article for "boiling frog."

If you want to achieve success in implementing new parts of a security program, you've got to start with sustainable processes and make them routine before you can get an organization to actually make progress on reducing risk. That's a really short synopsis...

The piece could probably have been better titled, so I guess I'll just try to do better next time.

Sunday, June 17, 2007

Building Effective Metrics

The topic of metrics for Information Security comes up quite often. I've been in quite a number of situations where a relatively sparse infosec program exists and no metrics exist. The question often comes up of what types of metrics to gather first to measure program status, effectiveness, etc., and, when rolling out a new element of an infosec program, what metrics to focus on first.

I've come to the conclusion that process maturity based metrics are the best thing to worry about when you're building an infosec program or a new feature of an existing program.

Let's take several areas of Infosec and examine my premise.
  • Vulnerability Management (discovery and remediation)
  • Anti-virus software
  • Software Security
Vulnerability Management

When you're first starting to build a vulnerability management program you're worried about a few things:
  1. Measuring existing vulnerabilities
  2. Remediating vulnerabilities
  3. Eventually, reducing the number of vulnerabilities that get "deployed"
Most people try to tackle these items in numerical order. They buy a vulnerability scanner, start scanning their network, come up with a giant list of security vulnerabilities, and then try to tackle #2, remediation. They generally set the bar pretty high in terms of what they expect the organization to fix - for example, all level 5, 4, and 3 vulnerabilities in a Qualys scan. They push the vulnerability report to the systems administration staff, tell them to go fix the vulnerabilities, and wait an eternity to hear back about what has been done. Usually they get upset that things aren't being fixed fast enough and that new vulnerabilities surface faster than they can close the old ones, and they either give up and start ignoring their vulnerability scans, or they get extremely frustrated with the admins and a constant battle ensues.

Instead, I like to tackle these items in reverse order of the above list:
  1. Reduce the number of new vulnerabilities that get deployed
  2. Implement a remediation process
  3. Search for vulnerabilities and feed them into #2.
In my experience most people want to do a good job at what they do. They don't want to release systems with holes in them, have their systems get hacked, etc. Unfortunately they aren't security experts and don't know what to focus on. They need assistance and prescriptive guidance on exactly what to do and when to do it.

Step 1: Reduce the Number of New Vulnerabilities

Start with something like a system hardening guide and an approved software list. Pick something like the CIS hardening standards and ensure that all new systems go out the door with your hardening applied. In this way you cut down on the number of new vulnerabilities you're introducing into your environment.
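A baseline check of this kind is just a comparison of each new system against the approved configuration. Here's a minimal sketch; the setting names and values below are invented for illustration and are not actual CIS benchmark items.

```python
# Hypothetical hardening baseline, in the spirit of a CIS-style standard.
# All keys and values here are made up for the example.
BASELINE = {
    "telnet_enabled": False,
    "ssh_protocol_version": 2,
    "password_min_length": 12,
}

def audit(system_config):
    """Return {setting: (actual, expected)} for every deviation from baseline."""
    return {key: (system_config.get(key), expected)
            for key, expected in BASELINE.items()
            if system_config.get(key) != expected}

new_host = {"telnet_enabled": True, "ssh_protocol_version": 2,
            "password_min_length": 8}
print(audit(new_host))  # deviations for telnet_enabled and password_min_length
```

The useful property is that the audit produces a short, actionable deviation list per system rather than a scanner's undifferentiated pile of findings.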

Step 2: Implement a Remediation Process with Metrics

Work on a remediation process. Focus on elements such as:
  • Who is responsible for requesting something be remediated
  • Who is responsible for determining the scope of the request and its priority/severity
  • What testing has to be done, and who must approve it in order to push something to the environment
  • How do you track status through each of these items including time taken, roadblocks, etc.
  • How much did it cost to fix each vulnerability
Building your remediation process before you start up the firehose gives you several advantages:
  1. You can start slow at feeding vulnerabilities into the remediation process and get useful metrics about the costs of remediation.
  2. You don't cause undue friction with the operations staff by asking them to take on too much too soon.
  3. You have a well-established process for fixing any/all vulnerabilities you discover.
Once you've got this process created you can measure how effectively you're remediating any given vulnerability you discover. You have process metrics for your remediation process, rather than an ad-hoc best effort situation.

Step 3: Search for Vulnerabilities and Feed Them to Your Remediation Process

Once you have a repeatable remediation process, you're ready to start feeding it new vulnerabilities. In an organization that isn't used to routine patching, turning off services, and remediating vulnerabilities, you can't open a firehose of vulnerabilities on unprepared staff. The best approach is to use the metrics you created in step 2 and be selective about which vulnerabilities you ask to be fixed. Once you have the process in place you can choose to start with a subset of your vulnerabilities - for example, your Qualys level-5 vulnerabilities. Ramp up slowly so that you can adequately measure the impact of your changes, the value they provide, and the costs of remediating.

Get people used to being accountable for fixing vulnerabilities, for testing the fixes, and for measuring the results. Once you have that in place you're free to ramp up the security level you want to achieve in a measured fashion.

Eventually, once you finally have a handle on these three steps you can move on to more advanced metrics such as:
  • Average time to remediation
  • Overall vulnerability score
Until you have the first three pieces in place, though, focusing on your overall risk/vulnerability score isn't that interesting. Even if you don't like the score, you're never going to get it lower without a repeatable process in place to remediate.
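Once the remediation process is producing tracking data, the advanced metrics fall out of simple arithmetic. A hypothetical sketch, with invented field names, severity weights, and dates:

```python
from datetime import date

# Hypothetical remediation log fed by the step-2 tracking process.
# All fields and values are made up for illustration.
tickets = [
    {"severity": 5, "opened": date(2007, 6, 1), "closed": date(2007, 6, 4)},
    {"severity": 3, "opened": date(2007, 6, 1), "closed": date(2007, 6, 15)},
    {"severity": 4, "opened": date(2007, 6, 10), "closed": None},  # still open
]

# Average time to remediation, computed over closed tickets only.
closed = [t for t in tickets if t["closed"]]
avg_days = sum((t["closed"] - t["opened"]).days for t in closed) / len(closed)

# A crude "overall vulnerability score": sum of severities still open.
open_score = sum(t["severity"] for t in tickets if not t["closed"])

print(f"average time to remediation: {avg_days:.1f} days")
print(f"open vulnerability score: {open_score}")
```

Neither number means much on its own; the value comes from trending them over time once the underlying process is routine.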

More on process related metrics for Anti-virus and Software Security in a later post.

Microsoft's "The Security Development Lifecycle" - Chapter Five

Chapter 5: "Stage 0: Education and Awareness"
Chapter 4 here

The authors credit roughly speaking, two things with the success of the SDL at Microsoft:
  1. Executive support
  2. Education and Awareness
Arguably they couldn't have achieved #2 without #1, but it's interesting that they rank education and awareness as highly as they do in the SDL, given how much we've read lately about how successful most companies are (or aren't) in their general security awareness campaigns.

One interesting point in the introduction to the chapter is the reminder that secure software isn't the same as security software. Secure software is software that is resistant to attack; security software is software specifically intended to address a security concern.

The chapter has a history of security training at Microsoft. Based on the descriptions, even before the formal SDL Microsoft was spending considerable money on training and development of its engineers. My guess is that if you've already got a corporate culture of training and education, implementing the specific training required for the SDL is going to be a lot easier than it would be at a place that doesn't take training that seriously.

The chapter also has an overview of the current training courses for secure development available at Microsoft. I'm hoping that their future plans include making these public even on a for-fee basis so that the rest of the world can benefit from some of the work they have done.

One of the sections in the chapter is on the value of exercises/labs as part of the training. They added a lab section to their threat modeling class and feedback scores went up, and presumably the students' understanding of the material did as well.

Having attended and given several security training sessions, I can definitely recommend this approach. I've had software security training from both Aspect and the old @stake folks, and both classes had an interactive lab component. I took away a lot more from those courses than I have from most of the other security classes I've done.

One other interesting section in this chapter is Measuring Knowledge. At the time of the book's writing, Microsoft didn't have a testing or certification program in place for its developers. I haven't had a chance to catch up with the SDL guys to see what their take is on the new SANS Software Security Initiative. I'll be interested to see how the SSI stuff shakes out and what Microsoft's involvement will be.

Overall it's interesting to see how much attention and dedication Microsoft has devoted to the SDL from a training perspective. The cost of the training alone in an organization the size of Microsoft is going to be enormous.

If you don't already have a robust internal training program in your organization, this chapter does give a few hints on how to build one on the cheap. At the same time, the chapter is more about the structure of the Microsoft training program than about exactly how to go about building one. By the end of the chapter you're fairly convinced that you need a robust training program, but if you don't already have one you're going to be searching for a lot of external help to build it.

How I Got Started in Security and the Value of a Mentor

So, a few people out there have been blogging about how they got their start in security. I figured I'm exactly the sort of exhibitionist who would post that sort of thing, so here goes.

Warning: This entry is long and probably more than a little boring and self-indulgent. You've been warned.

I've been doing paid IT work for roughly 14 years. I got my start as a student at the University of Chicago in the main student computing lab, doing basic PC, Mac, and later Unix administration. I had a pretty strong Unix background from a few years I spent as a student at RPI, where student computing was an exclusively Unix affair.

After working at the University computing lab as a regular worker, I was put in charge, along with a colleague, of running a new cluster of SGI Indy machines. Our job was pure Unix system administration of 9 SGI machines. We were responsible for all aspects of system administration, and I learned very early on that doing system administration at a university is rather different from doing it in most other environments...
  • Permissive culture and lack of definitive policies
  • Security not a priority except insofar as it affected machine availability
  • Insider attacks at least as prevalent as outsider attacks
So, I cut my security teeth in that environment. Even though I was officially a Unix admin, I spent 50%+ of my time on security concerns. I even brought up one of the first semi-official Kerberos realms at the UofC. Only about 5 of us used it at the time, but it sure did teach me a lot about distributed authentication.

One person I'd like to single out for how much he helped me in learning about security is Bob Bartlett. When I first started doing sysadmin work at the UofC, Bob was relatively new to the main computing group. Part of the University ethos and culture is a respect for educating, training, and mentoring. When I was just a student I used to go and hang out in Bob's cube area when I had some free time, to see what sorts of things I could pick up. Bob was the most amazingly patient guy I ever met. No matter how many stupid questions I asked or crazy schemes I came up with, he weathered the storm and never told me to stop coming around. I learned a lot about Unix security, the value of layered defenses, how to do forensics on a compromised machine, etc.

It's amazing how much value you can get out of a good mentor: how they can show you ways of thinking, ways of working, how to interact with other people, and so on. I can't say I learned all of those lessons, and I'm certainly not a Bob clone, but of all people he's probably most to blame for me being in security today.

I spent two more years at the UofC maintaining the main interactive Unix machines for the campus. I talked a bit in an earlier post about how I don't think we've come that far in the last 15 years, but maybe I'm just jaded.

I then spent 4 years at Abbott Laboratories doing Unix administration in the pharmaceutical research division. I wasn't officially in charge of security, but since I was roughly the only person in the whole group who knew much about the subject, I became the firewall administrator and ACE server administrator, was put in charge of network security monitoring and forensics, etc. I brought up the first network IDS there, using first Shadow and then NFR.

The area I worked in was highly regulated, so I got my full dose of filling out logbooks, worrying about audits, etc. It helped our paranoia that Abbott had quite a number of adverse regulatory issues during those years, which made us that much more serious about security. That said, the regulations that apply to the pharmaceutical business aren't that different from other regulations such as PCI. They are supposed to guarantee a certain level of security, but half the time they just result in a lot more paperwork.

After 4 years at Abbott I left to go work for a software company in downtown Chicago as the sole security person. I was responsible for all aspects of security except physical. I spent 5 years at CCC working on pretty much everything security-related: policies, procedures, SOX, firewalls, IDS, application security (coding standards, threat modeling, application pen testing), vendor relationships and contracts, etc.

The scope of the job was great, but unfortunately the industry they were in didn't need the kind of serious security work I was really looking to do. So I started looking and eventually moved to the Bay Area to take a job with a large financial services firm. I don't like to talk about who it is, but if you use Google and LinkedIn it can't be that hard to figure out.

I think one of the main skills I bring to the table is my background doing a lot of different IT work in a lot of different types of environments. I've worked for a university, a heavily regulated pharmaceutical company, a software company, and a financial services firm. I've done everything from desktop support to large-system Unix administration to software security work. I think it's both breadth and depth that are to be valued in information security. Hopefully I've got some of both, but I guess you can be the judge.

Saturday, June 09, 2007

Microsoft's "The Security Development Lifecycle" - Chapter Four

Chapter 4: SDL for Management
Chapter 3 here

Chapter 4 is about ensuring the success of the SDL through management support. As such it's really the first chapter that I think starts to address some of the core issues of the SDL. This chapter makes a lot of sense in relation to Dave Ladd's post from the other day about culture change.

There are a few key takeaways from this chapter:
  • Management support is critical for implementing a software security effort
  • Continuous messaging, reinforcement, recognition for a job well done, and training are all a part of the management support and culture change
  • The SDL can add 15-20% overhead when it is first implemented but considerably less thereafter.
Chapter four again includes some background on what drove Microsoft to implement the SDL. Without throwing cold water on the four main reasons outlined:
  • Customer complaints
  • Actual attacks
  • Press coverage - bad PR
  • Frequent patching required diverting developers from new products to maintenance
I think another concern is liability. Microsoft's position as a monopoly in the desktop space exposes it to much more liability than other software companies might face. Regardless of the specifics of the list, all of these point to a purely economic analysis of why Microsoft chose to go down the SDL route. The SDL isn't in place merely because a bunch of smart developers thought Microsoft should develop more secure software. The SDL is in place because Microsoft management decided that the benefits of the SDL outweighed the costs of its implementation. This is a key point in gaining and sustaining management support.

Now for a review of a few specifics of the chapter where I think they got it right and/or where I have a few small comments.

On page 44 they discuss the vulnerability rate in Microsoft products post-SDL and the improvements seen. As in earlier chapters, they claim that the discovered vulnerability rate has gone down and try to correlate that with the number of vulnerabilities remaining in the codebase. I'm not sure I agree with this assessment methodology. The jury is still out on 0day attacks, on people not releasing vulnerabilities in certain key pieces of software, etc. Forensics data on attack vectors isn't widespread either. Most companies aren't very public about exactly how they were attacked, what vector was used, etc. So we're a little blind on exactly how many attacks there have been against SQL Server 2000 SP3. Perhaps Microsoft has better forensics data than other folks because people actually report hacks to them?

Another point made is that as Microsoft's software has gotten harder to attack, the attackers have focused their energy elsewhere. It's like the old joke...

Two guys are in the woods and they come across an angry grizzly bear. The bear starts to chase after them. One guy says "I'm sure glad I wore my running shoes today". The other guy says "It doesn't matter what shoes you are wearing, you can't outrun the bear." The first guy responds "I don't have to outrun the bear - I just have to outrun you."

On page 48, in the section "Factors That Affect the Cost of SDL," they discuss the costs for existing vs. new development. One assumption that seems to be baked into their analysis is that implementing the SDL is an all-or-nothing endeavor that, roughly speaking, happens all at once. While that doesn't change the costs of implementing over the long term or the short term, much of the language of the book speaks as if the SDL springs into being all at once.

On page 50, in the section "Rules of Thumb," they estimate that the SDL can cost 15-20% in initial implementation costs, but less thereafter as a product matures. I'd be interested to know how many positive side effects they see from SDL implementation that aren't directly security related. Many elements of the SDL enforce a certain rigor on the development process that might not exist otherwise. You can't do proper threat analysis without good architecture documentation, and the architecture documentation has to be accurate or the value of the threat analysis is going to be lower. In organizations that don't have a robust software development process to begin with, I'm guessing that the artifacts of the SDL provide considerable value outside of security.

My only complaint so far about chapter four is that they don't spend more time discussing metrics. Key to gaining and retaining management support is demonstrating the effectiveness of the program. Chapter four has a limited section on SDL metrics, and I can see that some chapters have explicit sections on metrics while others don't. The metrics provided in chapter four aren't very extensive, and they are sort of meta-metrics about the SDL itself - so we'll see whether this is covered in more detail later.

Monday, June 04, 2007

Questions About Software Testing and Assurance

Measuring assurance in security is difficult. On the one hand, we'd like to be able to objectively measure the work-factor an attacker must expend in order to successfully attack something. Unfortunately it is the rare case where we have this level of certainty about the work-factor. Only in situations where the work reduces to breaking a cryptosystem can we have any true estimate of the work required to break our software, and this of course relies on proper cryptosystem operation from both the code and operational (key management, etc.) perspectives.
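To illustrate what a real work-factor estimate looks like in the cryptographic case, here's a small sketch of the expected cost of exhaustive key search (the guess rate is an arbitrary assumption for illustration):

```python
# Expected work to brute-force a k-bit key: about 2**(k-1) trials,
# since on average an attacker searches half the keyspace.
def expected_brute_force_years(key_bits, guesses_per_second):
    trials = 2 ** (key_bits - 1)
    seconds = trials / guesses_per_second
    return seconds / (365.25 * 24 * 3600)  # convert seconds to years

# Even at an (extremely generous) trillion guesses per second,
# a 128-bit key is far out of reach.
print(f"{expected_brute_force_years(128, 1e12):.2e} years")
```

This is the kind of quantitative bound we simply don't have for, say, the risk that an input parser contains an exploitable bug.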

James Whittaker's recent piece on the Microsoft SDL blog about "Testing in the SDL" is quite interesting when looked at through the lens of assurance, which is what he says he is describing:

Security testing has been – and will always be – about assurance, that is, assuring that the product as built and shipped has been thoroughly tested for potential vulnerabilities. Bug detection and removal will continue to be important until a time in the future comes when we can deploy provably secure systems. However, we’re likely never to get to such a future without learning the lessons that testing is teaching right now. Anyone can write a system and call it secure – it’s only through watching real systems fail in real ways that we learn to get better. Moral of the story – testing is by far the best way to show us what we’re doing wrong in software development.
What is so unfortunate about this statement is that James is correct. In the vast majority of cases we simply can't reduce any element of the software to a purely objective measure of attacker work-factor. We're generally still so bad at writing software that we end up designing security in but, in the end, relying on testing to give us any certainty about the quality of the software we've created.

We lack the ability to analytically evaluate software from a quality and/or security perspective, and we're left with the crude approximation that is testing. It's sort of like needing to do an integral and discovering that the function has to be evaluated numerically, leaving you with an approximation rather than a closed-form solution. You know that no matter how you try you're only getting an approximation of the answer, and that is the best you can do. The real world is like that sometimes, but oh to have the certainty that a simple equation would give you.

I think the one saving grace here is that we know that reusable components are a good approach to solving this problem. Robert Auger wrote a nice piece on this titled "The Business Case for Security Frameworks." In it he argues that reusable security frameworks are a great initial defense against insecure coding practices. I'll agree wholeheartedly and on top of that throw in that once you start using frameworks like this, you hopefully reduce your testing burden to new code and new attack vectors rather than going over the same ground again and again. James I'm sure knows this all too well.

There has been quite a bit of discussion about exactly this sort of analysis at Microsoft with respect to Vista, and how its newly created and properly engineered code will fare over time against attackers. Despite recent bogus claims to the contrary, Vista seems to be holding up pretty well.

What I'd still love to have, though, is an analytical way of measuring code quality and/or attack work-factor, to have a "real" measure of how resistant to attack a piece of software is. Schneier has been writing forever about an Underwriters Laboratories for security. I wish I could feel more confident that we are getting closer to having an answer to this type of question.

Blogger Stupidity - Only Not the Kind You Think

I posted last night about a problem with my feed and questioned whether the problem was with Feedburner or Blogger. Turns out the problem was with a blogger - me.

In Blogger, an item is created and identified based on the original title you give it. If you subsequently edit the title, you end up creating a brand-new entry rather than overwriting the original item with a new title.

So, I ended up with two items last night that were identical except for the title. It's kind of counter-intuitive that Blogger does this. In most tools, the instance of something you're editing doesn't change just because you change the title. Turns out Blogger has turned that on its head.

Not sure why it needs to work that way, but it does - and now you've been warned.

Sunday, June 03, 2007

Feedburner or Blogger Stupidity?

So, I wrote a blog entry earlier tonight and spelled the title wrong. Looks like Feedburner picked it up twice. I really only wrote one entry about SDL Chapter Three... I didn't really write another entry about Chapter Thre.

I haven't yet determined who to blame for picking this up wrong. My guess is that Feedburner is keying on the title rather than the URL of the blog entry, something perhaps they could see their way clear to fixing.
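If my guess is right, the fix amounts to deduplicating on a stable identifier instead of the title. A minimal sketch of the idea (the entry dictionaries here are hypothetical, not Feedburner's actual data model):

```python
# Deduplicate feed entries by GUID (falling back to URL) so that a
# retitled post is recognized as the same item, not a new one.
def dedupe_entries(entries):
    seen, unique = set(), []
    for entry in entries:
        key = entry.get("guid") or entry["url"]
        if key not in seen:
            seen.add(key)
            unique.append(entry)
    return unique

posts = [
    {"guid": "post-123", "url": "/2007/06/sdl-three", "title": "SDL Chapter Thre"},
    {"guid": "post-123", "url": "/2007/06/sdl-three", "title": "SDL Chapter Three"},
]
print(len(dedupe_entries(posts)))  # 1 - same GUID despite different titles
```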

Microsoft's "The Security Development Lifecycle" - Chapter Three

Chapter 3: A Short History of the SDL at Microsoft
Chapter 2 Here

Chapter 3 is mainly a history of the SDL at Microsoft. While interesting for a historical perspective, the main bits from a security knowledge perspective are:

  • Developing the SDL was a long process and meant a lot of culture change at Microsoft
  • Developing the SDL took an iterative approach and evolved over time
What I found most interesting about the chapter was the bit of sleight of hand regarding Microsoft's lack of security in Windows 95 and other releases from around that time. The authors say:

Windows 95 was designed to connect to corporate networks that provided shared file and print infrastructures and to connect to the Internet as a client system, but the primary focus of security efforts was the browser - and even there, the understanding of security needs was much different from what it is today.
I can interpret this statement two ways:
  • Charitable: We know more now than we did then, or than we reasonably could or should have known then, and so we didn't incorporate a lot of security features and process into Windows 95.
  • Uncharitable: We didn't pay a lot of attention to security back then. We could have and should have, but we didn't. As such, our understanding was less than it really should have been and we did a crummy job with Windows 95.
I'm inclined to believe the truth is somewhere between these two points. I don't want to point fingers at anyone in particular, but the idea that an operating system ought to have some security built into it, and that the Internet and networks in general could be scary places, wasn't exactly unknown in the mid-1990s. It isn't as if computer security was invented in 1995, after all. At the same time, other single-user systems weren't necessarily worse than Windows 95 either; they just didn't have the presence that it did.

The only other complaint I have about this chapter concerns the vulnerability measurement metric they use to gauge software security in a few of the examples. A difference is shown between SQL Server 2000 and SQL Server 2000 SP3: while SQL Server 2000 had 6 vulnerabilities reported and handled over its lifecycle up to SP3, SQL Server 2000 SP3 had only 3 vulnerabilities reported in the next 3 years. Unfortunately, at this point in the book we haven't yet covered software security metrics, so until I get there I can't make a strong methodology complaint. Using this sort of statistic seems a bit misleading to me, however. Sure, vulnerability reduction is a key metric, but the count of reported vulnerabilities isn't necessarily the key metric to focus on.
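One way to see why raw counts can mislead is to normalize them by exposure time. A quick back-of-the-envelope sketch (the ~3-year pre-SP3 lifetime is my assumption for illustration, not a figure from the book):

```python
# Reported vulnerabilities per year of exposure, using the book's
# counts (6 pre-SP3, 3 after) and an assumed ~3 years per period.
def vulns_per_year(count, years):
    return count / years

print(vulns_per_year(6, 3))  # 2.0 - SQL Server 2000 up to SP3
print(vulns_per_year(3, 3))  # 1.0 - SP3 over the following 3 years
```

Even normalized, though, this measures what researchers happened to report, not what actually remains in the code - which is exactly the methodology complaint.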

Saturday, June 02, 2007

More thoughts on training

My wife pointed out a very interesting article to me yesterday that she'd come across while reading Richard Dawkins's site. The article was about a lack of training/education in medical schools and how this lack of a basic understanding of evolution is at least partially to blame for drug-resistant bacteria.

I also read Dave Ladd's excellent piece, "Oil Change or Culture Change". Dave says:

Furthermore, many of the processes used by SDL (and other methodologies) are generally acknowledged as effective in identifying and mitigating security threats. Pondering this notion is what lead me to my realization about culture change, and prompted a question: If this stuff has been around for awhile in some shape or form, why aren’t more people doing it?

Dave's point is that we're not dealing with new knowledge here, for the most part. We're dealing with a failure of education. In my previous post about security training and what training is important, I mentioned I'd had a conversation with Gary McGraw. Reading the pieces on Dawkins's site and Dave's piece made me remember something else Gary said that I think is very appropriate. I'm paraphrasing here, so I hope I'll be forgiven, but when I asked about the source of the problem, Gary responded with an answer about your first engineering class vs. your first CS class.

Engineering Class: Professor opens class showing video of horrible engineering accident. Maybe something like the Tacoma Narrows bridge or the Challenger accident. In ominous voiceover - "If you don't study hard and do a good job, you could build something like this. People could die!!!!! Don't mess up, this is serious stuff."

Computer Science Class: Hello, look at this cool stuff you can do. Let's write a program that prints "Hello, World".

I think Gary is right. The culture we're trying to change is corporate culture, but it is equally computer science and programming culture. In a sense, we have a chicken-and-egg problem. Until more companies demand that software development be treated as an engineering discipline, our universities won't be motivated to turn out students who treat it as such. And until schools start turning out software engineers rather than software developers, we aren't going to have the talent necessary to drive corporate culture change.

I don't know that we're at some sort of crisis point for software development education; I'd hate to be that melodramatic. At the same time, I think what we're seeing is a disconnect between the realities of what it means for software engineering to exist as a true discipline and our capability of achieving it. If we started with the mindset that what we're doing is engineering rather than "development," we might stand a chance at making some progress.

I'll be interested to learn the differences that exist among university programs in CS and/or CSE, to see whether I'm wrong. Maybe there is broad support for a CS curriculum grounded in engineering rather than development.

Time for a bit of research on different CS programs and their focus.

Thursday, May 31, 2007

Analyzing Software Failures

A few months ago I wrote a small piece called "Most Web Security is Like Finding New Ways to Spend the Loot."

I was thinking again about this topic and how best to contribute to the security world, and how much good finding new attacks does versus finding new ways of defending against things.

I'm reminded of accident investigations in the real world. Airplanes rarely crash. When they do, it's something noteworthy. We have an organization, the NTSB, set up to investigate serious transportation accidents, and it investigates every plane crash. Why investigate all airplane crashes and not all car crashes? Because airplane crashes are pretty rare; we believe we probably have an anomaly and something interesting to learn by investigating an airplane crash. The same cannot be said of car crashes. They are simply too frequent and too "routine" to bother investigating.

When we think about civil engineering and failure analysis we don't generally spend a lot of time on every roof that caves in at a cheap poorly constructed strip mall. We spend a lot of time investigating why bridges fail, why skywalks fail, etc. These are things that were presumably highly engineered to tight tolerances, where a lot of effort was spent or should have been spent to ensure safety and where nevertheless something went wrong. We start with the premise that by examining this anomaly we can learn something that will teach us how to build or design better the next time around.

In software security, it's pretty amazing how much time we spend doing the opposite: analyzing applications that were poorly designed and never designed to resist attacks, prevent disasters, etc. We seem never to tire of finding a new security flaw in MySpace, Yahoo, Google Mail, etc.

What do we learn from finding these vulnerabilities? That we don't, in general, design very good software? That we have fundamental flaws in our design, architecture, and tools that cause these failures? We've known that for years. No new analysis is going to tell me it's hard to design a high-assurance application in PHP. We already know that.

The types of vulnerability and attack research that interest me are those that show a brand-new type of attack against a piece of software that was well written, had good threat analysis done, and where the developers, architects, and designers really were focused on security and somehow still left themselves vulnerable to attack.

It isn't necessarily that these are rare situations. Microsoft Vista has had a number of security vulnerabilities since its launch despite Microsoft's best efforts to eradicate them. Analysis like that provided by Michael Howard about the ANI bug in Vista is what we need more of. We need more companies with mature secure software development practices to tell us why bugs occur despite their best efforts. We need more people to be as open as Microsoft is being about how even the best-designed processes fail, and we need to learn from that to get it right the next time around.

In the transportation world this is the role that the NTSB plays. The NTSB shows us how despite our best efforts we still have accidents, and they do it without partisanship or prejudice. The focus of the investigation is root cause analysis and how we can get it right the next time around.

I read this same idea on someone's blog recently, as applied to security breaches. They proposed an NTSB-like arrangement for investigating security breaches so that we get the same sort of learning out of those situations. If anyone can point me to the piece, I'll properly attribute it here.

Along these same lines, we ought to have an NTSB equivalent for software security so that we can learn from the mistakes of the past. Bryan Cantrill at Sun and I shared a small exchange on his blog about this topic related to pathological systems, and I referred to Henry Petroski and his work on failure analysis in civil engineering. Peter Neumann's Risks Digest is the closest we come to a general forum for this sort of thing in the software world, and I'm always amazed (and depressed) by how few software engineers have ever heard of Risks, much less read it.

Why is it that so few companies are willing to be as public as Microsoft has been about their SDL, and how do we encourage more companies to participate in failure analysis so that we can learn collectively to develop better software?