Last Thursday I wrote a piece about the case of Sierra v. Ritz (and Faulk). I put the word "armchair" in the title because I'm not a lawyer, so my analysis was both simplistic and rather brief.
Today Mark Rasch released a much longer article on the same subject, "Mother, May I." As usual, Mark gives an excellent explanation of the underlying legal topics: how physical-world common law notions and rules concerning trespass relate to computer access. I highly recommend reading Mark's article if you're interested in the intersection of computer security and the law.
Mark also points to an excellent paper by Orin Kerr, "Cybercrime's Scope: Interpreting 'Access' and 'Authorization' in Computer Misuse Statutes." I read this paper some time ago and had been searching for it ever since to no avail; if you're not a lawyer you usually don't have access to the right search engines/tools to find these sorts of things. Kerr's article is also an excellent read if you're not happy with the analysis Mark gives of the current law, or if you understand the analysis and don't like that words like "access" and "authorization" aren't well defined in the statutes.
Wednesday, January 23, 2008
Thursday, January 17, 2008
Armchair Legal Analysis of Sierra v. Ritz
You may have heard about the case of Sierra Corporate Design, Inc. v. David Ritz.
There has been lots of griping and complaining about the fact that doing zone transfers might be illegal. I thought I'd try to give the quick analysis of the case. I'm sure I'm missing a few things here and I'm not a lawyer, but I am a little tired of "hackers" complaining about their rights to do whatever they want being trampled... You can read the judgment here.
In this case David Ritz is being punished for performing unauthorized DNS zone transfers of Sierra Corporate Design's network.
The problem at the federal level is the CFAA (Computer Fraud and Abuse Act). North Dakota's statute appears to have roughly the same language.
The CFAA has relatively consistently been interpreted so that "accessing a computer without authorization" hinges on whether the owner of the computer wanted you to perform your action or not; the presence or absence of controls to prevent access is generally irrelevant. Courts have relied on the traditional definition of trespass and attempted to apply it to the electronic world.
In the physical world trespass is relatively easy to understand and police. There are obviously corner cases where you can trespass onto unmarked land without realizing you're trespassing, and there is a lot of case law for these. At the same time, though, if you see a house, know it isn't your house, and walk into it, you're trespassing whether or not the owners locked the door. It is quite clear that you weren't invited, and not locking the door doesn't remove the homeowner's right to prevent trespass.
In the electronic world it gets a lot murkier. If I mistype a URL into a tool and attempt to access someone's machine, it's pretty clear from both intent and network traffic what was going on. At the same time, though, let's say I send a ton of traffic at you, or I start fingerprinting your system. Intent is really the key question here.
Did I knowingly attempt to access your computer without authorization? What was my intent? It is generally the answers to these questions that would be at play in court.
In this specific case, a DNS zone transfer isn't the sort of thing you do by mistake. It isn't the type of data that people generally try to get from other sites as part of browsing the net. In general, and in this case it's pretty apparent, you're trying to get data that you wouldn't ordinarily expect people to let out. Whether the DNS server was configured to prevent zone transfers isn't really the issue here.
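For the curious, the zone transfer at issue is just an ordinary DNS query carrying a special record type (QTYPE 252, AXFR), which is why "I asked and the server answered" is the crux of the authorization question. A minimal sketch of the wire-format request a tool like dig sends, per the DNS message format; the zone name and query id here are illustrative:

```python
import struct

def build_axfr_query(zone, query_id=0x1234):
    """Build the wire-format DNS question used for a zone transfer:
    a normal DNS query whose QTYPE is 252 (AXFR). Nothing about the
    request itself is exotic; whether the server answers is purely
    a server-side policy decision."""
    # Header: id, flags=0, QDCOUNT=1, AN/NS/AR counts all zero
    header = struct.pack(">HHHHHH", query_id, 0x0000, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in zone.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 252, 1)  # QTYPE=AXFR, QCLASS=IN
    return header + question
```

Sending these bytes to port 53 over TCP is essentially what `dig axfr` does; the server either streams back the entire zone or refuses.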
Obviously where this gets tricky is determining whether this is like trespassing onto unmarked land, or walking into someone else's house when they had the door unlocked.
This isn't to say I necessarily agree with the decision, but there is a lot more nuance to this issue than I've seen posted.
Wednesday, January 09, 2008
Another Strategy for Getting Started with Application Security
Gary McGraw posted a new article about strategies for getting started with application security and secure coding.
In it he lists 4 approaches for getting started with application security:
- Top-down framework
- Portfolio Risk
- Training First
- Lead with a tool
I had success with #4, but not using the tools we usually think of for bootstrapping a program, namely static analysis or testing tools.
When I took the position, they had already settled on using Netegrity's Siteminder product for a common authentication and authorization scheme across all of the applications. I managed to get them to settle on doing a quasi-RBAC with Siteminder, using it almost as an identity service as well.
Settling on one common high-quality authentication and authorization tool/framework had three effects:
- It removed these services from the realm of development. Teams just had to integrate with it, and didn't have to figure out all of the corner cases (password changes, etc.) that so often crop up and that people get wrong in homegrown approaches.
- It convinced developers to build clean interfaces in their code for things like authorization, either calling out externally or having the data provided to them in a standard fashion. Settling on RBAC also helped a lot with the role and permission modeling that did need to happen in the app.
- In a shop that usually wanted to do everything itself, it broke that cycle and people got used to not having to write everything from scratch.
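A hypothetical sketch of the kind of role-based check this enabled; the role names and permission strings below are illustrative, not from the actual deployment:

```python
# Hypothetical role-to-permission model of the sort the external
# authorization framework enforced; names are invented for illustration.
ROLE_PERMISSIONS = {
    "teller":  {"account:view"},
    "manager": {"account:view", "account:close", "report:run"},
}

def is_authorized(user_roles, permission):
    """The application asks one clean question; mapping roles to
    permissions stays in the shared framework, not in each app."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)
```

The clean interface is the point: application code never touches the role model directly, so the corner cases live in one well-tested place.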
This is just one data point on leading with a tool that focused more on architecture and design than on finding defects.
In the end, in order to fully implement the program, we had to do developer training, build our own frameworks, perform risk assessments against applications, and fully incorporate testing.
The key to getting it started, though, was adopting a common approach to one area of security via a well-designed tool.
Tuesday, November 27, 2007
Some Comments on PayPal's Security Vulnerability Disclosure Policy
Thanks to the several places that have written about this policy in the last few days.
I was personally involved in crafting the policy and while I can't make commitments or speak officially for PayPal I thought I'd take a few minutes to explain our thinking on a few of the items in the policy.
First, a few points. PayPal didn't have a great system for reporting security issues until this new policy came out. Our goals in creating and publishing the policy were several:
- Improve the security of our site by getting security issues disclosed to us responsibly.
- Create an easy mechanism for people to report a security vulnerability to us. We chose email since we figured security researchers would like it better than a form.
- Create incentives for disclosure and remove disincentives (such as the threat of legal liability).
- Make clear our expectations in these areas, since this is a new and evolving area of security vulnerability disclosure with more than a little legal uncertainty.
- Through our policy, set a standard we hope others can follow.
A few specific notes are in order:
- We will revise the policy over time based on user feedback.
- We are serious in our commitment to rapidly address any discovered security issues with the site. Our language around a reasonable timeframe is slightly vague because we don't want to over-promise on how quickly we can resolve an issue.
- We do expect to get back to researchers quickly with confirmation of a reported issue and tracking data on how we're doing resolving it.
Chris Shiflett said:
Since data can be anything, how do we know if we view data without authorization? Don't most people assume they're authorized to view something if they're allowed to view it? Does intent matter?

While we don't want users to test the security of the PayPal site, should they do so they should be careful to minimize the disruption caused by their testing. If you start tinkering with URLs to see whether you can view certain data, do it between two accounts you control; don't try to view other people's data. There is a fine line between testing responsibly and irresponsibly, and we're encouraging people to stay on the more responsible side of the line.
From Don's post:
I got a creepy feeling about actually trusting the statement. I will probably never attempt to test the security of PayPal's site, but for those who do I would hate for the disclosure statement to change suddenly.

As I said earlier, we do believe the policy is a work in progress. We will modify it from time to time to allay concerns, improve its effectiveness, etc. Our goal however is to encourage responsible disclosure. I hope the intent behind the policy is enough to allay people's potential fears.
One final note on the statement - "Allow us reasonable time to respond to the issue before disclosing it publicly." We struggled over the wording of this more than any other element of the policy. It is a tricky business to strike the right balance between early disclosure, our commitment to protect our customers and their data, and people's desire to know about the security of a given website or service. That said, we're committed to working with researchers when an issue is reported to us, and we'll decide what counts as "reasonable" on a case-by-case basis.
We're hoping that this policy strikes a good balance between our desire for responsible disclosure, and not discouraging researchers from coming forward.
Again, I'm not a spokesperson for PayPal, so this post naturally represents my personal beliefs about this policy, not a firm, binding statement of company policy. That said, I welcome your comments.
Poll: How Important is a POC When Getting Someone to Fix a Security Issue?
Working on security inside a company that takes security seriously sometimes blinds me to how other people work and the challenges they face with getting security issues taken seriously.
I've noticed that lots of people that work as consultants and/or inside companies have to jump through lots of hoops to get a security vulnerability taken seriously.
In many cases I see people spending hours and hours crafting a working proof-of-concept exploit for a vulnerability and needing to actually demonstrate that exploit to get the issue taken seriously.
To understand this better, I set up a small poll to get some data about why people are needing to craft a working POC when demonstrating a vulnerability exists.
I've only ever had to do this once, and yet it seems that every time I read about a penetration test, people are spending lots of time crafting sample exploits rather than finding more vulnerabilities, or identifying similar classes of vulnerabilities and offering solutions to those.
In my experience the only time a POC has been really useful is when I need to make sure that the person fixing the issue has the necessary information/tests to make sure they've closed the issue.
For those who do penetration tests (network or application) - how often do you feel that you need to create working POCs for exploits in order for the company's management to take it seriously?
Monday, November 26, 2007
New CAPTCHA Systems [HUMOR]
New grammar CAPTCHA system :) Quite funny if you ask me. Mostly a joke I suppose, but it's Monday, so what the heck.
Tuesday, November 20, 2007
Data Leakage/Linkage Mystery
I have a mystery that came up tonight that I'm hoping someone can help me figure out.
I have a Yahoo! account that I hardly ever use anymore. I check it once every six months or so for email, but it remains unused otherwise. I do have my IM client Adium set to log into the account, but I don't ever use it for chatting. I also don't have the account associated with any of my other accounts, and it doesn't even have my real name on it.
Tonight I logged into Yahoo! Mail and checked the mailbox for said account. Delightfully, I found several emails from Jayde.com to my unused Yahoo! mailbox, but with information about this blog.
Somehow I received mail to my unused yahoo account mentioning this blog.
I've never linked the two email addresses, I don't ever log into the yahoo email address, and haven't sent/received mail from it in forever.
The messages were dated back in March...
So, now I'm wondering how these two data items got linked. A few possibilities:
- Advertising site that is buying data and/or access logs and linking disparate things together?
- Malware?
- Weird CSRF or some-such?
Friday, November 09, 2007
Limiting Process Privileges Should Be Easier
I was reading DJB's retrospective on 10 years of qmail security and while I'll comment on a few of his thoughts in a separate post, one thing that struck me was his discussion of how to create a relatively effective process sandbox for a process:
- Prohibit new files, new sockets, etc., by setting the current and maximum RLIMIT_NOFILE limits to 0.
- Prohibit filesystem access: chdir and chroot to an empty directory.
- Choose a uid dedicated to this process ID. This can be as simple as adding the process ID to a base uid, as long as other system-administration tools stay away from the same uid range.
- Ensure that nothing is running under the uid: fork a child to run setuid(targetuid), kill(-1,SIGKILL), and _exit(0), and then check that the child exited normally.
- Prohibit kill(), ptrace(), etc., by setting gid and uid to the target uid.
- Prohibit fork(), by setting the current and maximum RLIMIT_NPROC limits to 0.
- Set the desired limits on memory allocation and other resource allocation.
- Run the rest of the program.
What strikes me about the above example is that it ought to be a lot easier for a developer/administrator to define the policy for a given process and its run environment, without having to know this much arcana about exactly how to do it.
If doing all of the above steps seems like a bit much, then perhaps what you're sensing is that the architectural model that makes it hard for a process to drop privileges, restrict what it can do, etc. is simply wrong in most operating systems.
Luckily, there are a few OS-supplied solutions to the problem that, while not perfect and still tricky to implement, are at least a step in the right direction.
Solaris
- Sun has a couple of nice blueprints on how to limit the privileges for a process/service. I think it still isn't quite to the "default deny, allow only what you want" stage, but it's interesting nonetheless.
Windows
- Microsoft has introduced service hardening and reduced privileges in Server 2008.
- Based on what I can tell, their new wizard and the SCM in general are structured more around least privilege than some of the other operating systems, at least from an ease-of-use standpoint.
Linux
- On Linux we have several options:
- SELinux
- AppArmor
- I haven't looked extensively at either of them yet, but I'll try to look into whether their policy model is better/worse than the options above.
Mac OS X
- Leopard introduces a new process sandboxing mechanism. Unfortunately the details are a bit sketchy. The Matasano guys have a writeup of it, but I haven't seen any details on the exact mechanisms and/or configuration.
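To see just how much arcana DJB's recipe involves, here is a hedged sketch of the core steps in Python. The /var/empty path and base-uid scheme are assumptions, and the dry_run flag exists so the privileged calls (which require root and would cripple the calling process) aren't actually made:

```python
import os
import resource

def sandbox(base_uid, empty_dir="/var/empty", dry_run=True):
    """Approximate DJB's sandboxing recipe. base_uid and empty_dir are
    assumed site-specific choices; with dry_run=True we only return the
    planned steps instead of executing the privileged calls."""
    target_uid = base_uid + os.getpid()  # a uid dedicated to this process id
    steps = [
        ("setrlimit", "RLIMIT_NOFILE", (0, 0)),  # no new files or sockets
        ("chdir+chroot", empty_dir),             # no filesystem access
        ("setgid+setuid", target_uid),           # also blocks kill()/ptrace()
        ("setrlimit", "RLIMIT_NPROC", (0, 0)),   # no fork()
    ]
    if dry_run:
        return steps
    resource.setrlimit(resource.RLIMIT_NOFILE, (0, 0))
    os.chdir(empty_dir)
    os.chroot(empty_dir)
    os.setgid(target_uid)
    os.setuid(target_uid)
    resource.setrlimit(resource.RLIMIT_NPROC, (0, 0))
    return steps
```

Even this sketch omits the fork/kill(-1) dance to clear the target uid, which is exactly the point: the policy is four lines of intent buried in a page of mechanism.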
Wednesday, November 07, 2007
The Point of Breach Notification Laws
Back in August I wrote a small piece - "Data Breaches and Privacy Violations Aren't Just About Identity Theft". Ben Wright left a comment there that I never responded to. Here goes...
He said:
Peter Huber argues in Forbes that there is no "privacy" in our social security numbers or credit card numbers. The "secrecy" of those things does not really authenticate us. So this business of giving people lots of notices about compromise of their numbers seems pointless.

I hate to rehash all that has been written about breach notification laws, but I don't see a lot written on the public policy reasons for breach disclosure/notification laws. Well... I don't hate rehashing too much; here goes.
There are arguably several justifications for breach notification laws:
- Accountability of the data custodian
- Alerting the data owner of the breach
- Collecting public policy data on frequency and manner of breaches so that we can prevent them in the future
The main public policy value of breach notification laws as written today is probably #3. Interesting in and of itself, but given the nature of the breaches it isn't clear that the benefits of breach notification are worth the costs of disclosure. Or, more specifically, it isn't clear that public notice with specifics per company is serving us perfectly. An anonymous repository of details and types of incidents would accomplish roughly the same public policy goal without all of the associated costs.
I'm not arguing that companies shouldn't disclose, but I have yet to see an analysis of the costs on both sides of the issue. I'm hoping someone can point me to one.
Part of the argument of course hinges on the responsibility of companies to not disclose data entrusted to them and the rights that the data owner has. There are costs of our current regime however, and based on public reaction to data breaches (continuing to do business with said firms as if no incident had occurred) perhaps people aren't as interested in breach notification as we thought.
Safety feedback loops and new car safety features
Wired has an article today titled - "Is Car Safety Technology Replacing Common Sense?" The author of the article is concerned that all of the safety features in cars will in the end make them less safe as drivers become less and less accustomed to needing to pay attention while driving.
This argument reminds me a little bit of the snarky Apple ad about Windows UAC. It is a fine line between creating computer systems that try to prevent users from making mistakes, and ones that allow the end user the flexibility to actually use the computer they purchased. Witness of course Leopard's new feature that asks you to confirm you want to run something you just downloaded from the Net, and how it fails to run certain programs whose digital signature doesn't match anymore - which is leading to no end of annoyances for Skype and WoW users.
I was struck by one line in the article:
I always thought that as the driver, watching the road ahead for slow-moving vehicles and cars that dart into my lane — not to mention checking left or right to make sure its clear before changing lanes — was my job.

It is humorous to me to hear this same line repeated again and again as new safety features and technologies come out in products.
- It used to be my job to pump the brakes to stop on a slippery surface. Now ABS helps me do it better in almost all cases.
- It used to be my job to harden my operating system like a madman. Now most operating systems are shipping with slightly more reasonable defaults for things. Not perfect (witness Leopard's firewall) but getting better.
- It used to be my job to determine whether a website and an email are real or spoofed. Now I have browser toolbars, email spoofing filters, etc. to help me out so I don't have to do each of them manually.
I don't know that it's anything but an empirical question whether a safety or security technology actually makes things better.
Wednesday, October 31, 2007
We need InfoSec incident data like NASA got from pilots
You may or may not have seen the coverage lately about a survey NASA did of airline pilots on the frequency of close calls in airline safety. There has been a bit of a scuffle about whether to release the data publicly because of fears it might erode consumer confidence in airline safety.
Today news reports are out that NASA will be publicly releasing the data. I don't have details on the study yet. It will be interesting to compare the data from this survey, which hopefully had a scientific basis, to InfoSec surveys such as the CSI/FBI survey, which we've mostly all come to hate because of its poor methodology.
Jeremiah posted the results of his latest web application security survey and the results aren't great... well, the state of security isn't great anyway. It might be nice to put together a broader survey to see how many incidents we're really having out there.
Tuesday, October 23, 2007
Software Security Metrics and Commentary - Part 2
Part 1 here
In Part-1 of this entry I talked about the first five metrics from the paper "A Metrics Framework to Drive Application Security Improvement".
In Part-2 I'll try to cover the remaining five metrics, as well as discuss a few thoughts on translating survivability/Quality-of-Protection into upstream SDL metrics.
Steve makes what I believe are two major points in this paper.
The Quality of Protection workshop at the CCS conference is probably the best place to look for research in this area. Previous papers from the workshop can be found here. This year's conference and workshop starts next week; if you're in the DC area and interested in software security metrics it looks like it's going to be a good event. The accepted papers list contains a number of papers that I think might shed some light on my speculation above.
I plan to put together a few more thoughts on brittle failure modes of software in a followup to this; I haven't had time to pull all of my thoughts together yet.
First, onto the other five metrics from the paper:
- Injection Flaws
- Again, I think the metric posited in the paper is too tilted towards incident discovery rather than prevention. Just like the OutputValidation metric I added for XSS, output handling is really the key to prevention here. Most static analysis tools can detect tainted input and have a set of untrusted input functions (things that read from sockets, stdin, etc.). It should be relatively straightforward to model our own application-specific output functions to detect where we're handing unchecked/unfiltered input to an output routine, potentially across a trust boundary. If we can model these, we can at least make sure we have good sanitization coverage for each output type. We'll want this type of output filtering anyway, so we might as well combine metrics with our XSS example.
- Improper Error Handling
- I think the metric posited in the paper - counting unchecked returns - is a pretty good idea. It isn't going to catch web-server-layer errors unfortunately, and won't necessarily detect errors in things like app servers, db layers, etc. We can test for these, but the best metrics might be those related to following secure configuration guidance such as the CIS guides for individual web servers and/or app servers. The CIS benchmark, for example, requires a compliant configuration to handle standard web errors (4xx and 5xx) through rewrites and/or custom handlers. There are cases (SOAP comes to mind) where we need to throw a 5xx error back to a client, but this is the exception rather than the norm. Configuring application and web servers to minimize this sort of data disclosure is certainly a good thing, and in this sense we can check compliance at this layer as almost a binary config - you pass the CIS guidance or you don't.
- Insecure Storage
- I don't think percent-of-hard-drives-encrypted is really a meaningful metric in this context. If we look at typical web attacks that fall into this category, we'd be looking at exploits that leak things like passwords, CC data, etc. that are stored in an improper manner on the webserver. Some of this is going to be related to the implementation in the code, so our best bet is probably a detailed audit of each piece of information that falls into this criticality range to confirm that it is being handled in an appropriate manner. I struggle, however, to find a concrete metric that helps measure this. PercentCriticalDataCovered for proper encryption/hashing technique? Still not a very convincing metric, unfortunately.
- Application Denial of Service
- Two metrics spring to mind here:
- Memory/Resource Allocations Before Authentication
- Memory Leaks
- Both of these are a lot more likely to lead to application denial of service than any other errors I can think of, and both should be minimized. Tracking them and keeping their counts at an absolute minimum is probably a good bet. That doesn't mean we won't have a DoS issue, but these are at least two places to look.
- Insecure Configuration Management
- This item probably goes back to the same metrics I posited for Improper Error Handling. Things like the CIS benchmarks for OS, webserver, and appserver are our first pass candidates for measuring this.
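The sanitization-coverage idea from the Injection Flaws item above can be sketched in a few lines. This is a minimal, hypothetical example - the sink names are illustrative and not from any real codebase - but it shows the shape of the metric: inventory your output sinks, mark which ones always pass through a sanitizer, and report coverage.

```python
# Hypothetical inventory of application output sinks, and whether every
# code path reaching each sink passes through a sanitization routine.
# (Sink names are illustrative only.)
output_sinks = {
    "render_html": True,   # all paths sanitized
    "build_sql": True,
    "write_log": False,    # raw user input can reach this sink
    "shell_exec": False,
}

def sanitization_coverage(sinks):
    """Percent of output sinks whose inputs are always sanitized."""
    if not sinks:
        return 100.0
    covered = sum(1 for sanitized in sinks.values() if sanitized)
    return 100.0 * covered / len(sinks)

print(sanitization_coverage(output_sinks))  # 50.0
```

In practice the inventory would come from a static analysis tool's sink/taint model rather than a hand-maintained dictionary, but the resulting number works the same way as the InputValidation coverage metric.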
Steve makes what I believe are two major points in this paper:
- Software is brittle, it fails catastrophically
- Unlike other engineering disciplines, we don't know how to get to certainty about the strength of a piece of software.
The Quality of Protection workshop at the CCS conference is probably the best place to look for research in this area. Previous papers from the workshop can be found here. This year's conference and workshop starts next week; if you're in the DC area and interested in software security metrics, it looks like it's going to be a good event. The accepted papers list contains a number of papers that I think might shed some light on my speculation above.
I plan to put together a few more thoughts on brittle failure modes of software in a follow-up to this; I haven't had time to pull all of my thoughts together yet.
Tuesday, October 09, 2007
SQL Injection Humor?
If you're an application security geek at all, then you must read today's xkcd.
I've always said there aren't enough SQL Injection jokes...
Monday, October 08, 2007
Apologies and Data Breaches
I just listened to an NPR piece - "Practice of Hospital Apologies Is Gaining Ground."
There has been quite a bit of research in the last few years showing that the differentiating factor between a doctor who gets sued for malpractice and one who does not is how much time they spend with their patients, and how humble they are.
The NPR piece details how at least one hospital now has a practice of apologizing to patients who have adverse outcomes, or where there was a missed diagnosis. It turns out that many patients sue not because of the mistake, but because of how they are treated. Being upfront and honest with the patient about the mistake, and apologizing, seems to have a positive impact.
Makes me wonder if there is a lesson in here for companies that have data breaches. Maybe getting out in front of the issue like TD Ameritrade (not really out in front given how long it was going on, but out in front of the major press) will help them in the end with respect to how successful the class action suits are, etc.
I guess we'll just have to see.
Monday, September 17, 2007
Software Security Metrics and Commentary on "Metrics Framework" Paper
I was reading the paper "A Metrics Framework to Drive Application Security Improvement" recently and some thoughts started to gel about what types of web application security metrics are meaningful.
This is going to be part-1 of 2 about the paper and software security metrics. In this first installment I comment on the metrics from the paper and provide what I believe are reasonable replacement metrics for 5 of the 10 in the paper. In Part-2 I'll take on the next 5 as well as discuss some other thoughts on what metrics matter for measuring web application security.
The paper is actually a good introduction on how to think about measuring software security, but I think a few of the metrics miss the mark slightly.
In the paper they analyze software metrics in three phases of an application's lifecycle:
- Design
- Deployment
- Runtime
The goal of metrics should be, where possible, to create objective measures of something. Whereas some of the metrics described in the paper are quite objective, others are more than a little fuzzy and I don't think represent reasonable ways to measure security.
First, the Top-10 and associated metrics from the paper (and you'll have to bear with me as I try to create tables in blogger):
OWASP Item | Metric | App Phase | Method |
---|---|---|---|
UnvalidatedInput | PercentValidatedInput | Design | Manual review |
Broken Access Control | AnomalousSessionCount | Runtime? | Audit Trail review? |
Broken Authentication / Session Management | BrokenAccountCount | Runtime | Account Review |
Cross-Site-Scripting | XsiteVulnCount | Deployment? | Pen Test Tool |
Buffer Overflow | OverflowVulnCount | Deployment | Vuln Testing Tools? |
Injection Flaws | InjectionFlawCount | Runtime | Pen Testing |
Improper Error Handling | NoErrorCheckCount (?) | Design | Static Analysis |
Insecure Storage | PercentServersNoDiskEncryption (?) | Runtime | Manual review |
Application Denial of Service | ?? | Runtime | Pen Testing? |
Insecure Configuration Management | Service Accounts with Weak Passwords | Runtime | Manual review |
Unfortunately, I think this set of metrics misses the mark a little bit. I question whether pen testing for buffer overflows or XSS is really the right way to develop a sustainable metric. It's a necessary assurance component, to be sure, but not necessarily the first metric I'd focus on if I'm asking the question "How secure is my app?" I'm loath to rely on testing for the bulk of my metrics.
A few of the metrics above are unmeasurable or inappropriate, I think. It's hard for me to imagine how we'd measure AnomalousSessionCount appropriately. It seems that if we had proper instrumentation for detecting these as described in the paper, we probably wouldn't have any in the first place. I'm not so sure about BrokenAccountCount being representative of issues in authentication and session management either.
As I'm working on building my web application security metrics I'm trying to focus on things in the design phase. For the majority of flaws I'd like to develop a design-phase metric that captures how I'm doing against the vulnerability. This gives me the best chance to influence development rather than filing bugs after the fact. It is possible that some of these metrics simply don't exist in a meaningful way. You can't measure configuration management in your design phase for example.
Rather than just being destructive, here is my modified group of metrics.
- Unvalidated Input
- I actually like the metric from the paper. Measuring input validation schemes against the percent of input they cover is a pretty good metric for this. Don't forget that web applications can have inputs other than HTML forms, etc. Make sure that any and all user input (cookies, HTTP headers, etc.) is covered.
- Broken Access Control
- Unfortunately this one is a tricky metric to get our hands around. Ideally we'd like to be able to say that our data model has proper object ownership and we could simply validate that we call our model appropriately for each access attempt. This is unlikely to be the case in most web applications.
- I'd really break this metric down into Application-Feature and Data access control. For Application-Feature access control I'd make sure that I have a well-defined authorization framework that maps users and their permissions or roles to application features, and then measure coverage the same way I would for input filtering.
- For Data access control, I unfortunately don't have a good model right now to create a design-time metric, or any metric for that matter.
- Broken Authentication and Session Management
- For a general application I again come back to use of frameworks to handle these common chores. I'd want to make sure that I have a proper authentication and session management scheme/framework that is resistant to all of the threats I think are important. The important metric is coverage of all application entry points against this framework. When implemented at the infrastructure level using a package such as Siteminder or Sun Access Manager, auditing configuration files for protected URLs ought to get me good coverage.
- From a testing perspective I can also spider the application and/or review my webserver logs, compare accessed URLs against the authentication definition, and make sure everything is covered appropriately.
- Cross-Site-Scripting
- From a design perspective there are two things that matter for XSS vulnerability.
- Input Filtering
- Output Filtering
- The best metric, therefore, for measuring XSS vulnerability is a combination of the InputValidation metric and an equivalent OutputValidation metric.
- Buffer Overflow
- In general buffer overflows are the result of improperly handled user input. Within a web application we ought to be able to handle most of these issues with our InputValidation metrics, but there are going to be cases where we handle the data downstream in an unsafe way. Unfortunately our best techniques for detecting and eradicating them are going to be either dynamic languages, where we don't get buffer overflows, or lots of static analysis and strict code reviews of all places where we handle static-sized buffers. One partial solution is to simply use an environment that isn't itself susceptible to buffer overflows. This makes analyzing the web application for buffer overflows pretty easy.
- For those who insist on writing web applications in languages such as C/C++, our best defense is to avoid the use of static buffers and strictly code-review those places where we do use static buffers to verify proper bounds checking of inputs. One useful measure would be PercentBoundsCheckedInput, which we can theoretically catch with a static analyzer. Current analyzers are pretty decent at finding these.
- One problem with the metric from the paper was its focus not on the web application itself but on its platform. I'm not sure that we're working at the right level when we start considering OS vulnerabilities when reviewing web applications. They are, however, certainly part of the picture and a meaningful class of vulnerability.
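The log-versus-policy check from the authentication item above can be sketched simply. This assumes hypothetical protected-URL patterns (the kind you might export from a Siteminder or Sun Access Manager policy) and a list of URLs observed by spidering or from webserver logs; the names and patterns are illustrative only.

```python
from fnmatch import fnmatch

# Hypothetical protected-URL patterns, e.g. exported from an access
# management policy (Siteminder, Sun Access Manager, etc.).
protected_patterns = ["/account/*", "/admin/*", "/api/private/*"]

# URLs observed by spidering the app or grepping webserver logs.
observed_urls = ["/index.html", "/account/profile", "/admin/users", "/help"]

def uncovered(urls, patterns):
    """Return observed URLs that match no protected pattern.

    These are the candidates for review: each is either legitimately
    public or an entry point missing authentication coverage."""
    return [u for u in urls if not any(fnmatch(u, p) for p in patterns)]

print(uncovered(observed_urls, protected_patterns))  # ['/index.html', '/help']
```

The ratio of covered URLs to total observed URLs then becomes the coverage metric, with the uncovered list feeding a manual review queue.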
Sunday, September 16, 2007
Why Don't Financial Institutions Have Vulnerability Reporting Policies Online?
You may remember I did a bit on vulnerability reporting policies a little while ago. I was interested in crafting a vulnerability disclosure policy that was fair both to the company posting it and to security researchers, and that also took into account the liability issues surrounding security researchers testing web applications.
In my previous piece I pulled together a quick summary of the public-facing security reporting policies (or lack thereof) for a number of big sites on the web. Recently I started doing the same for financial institutions. I tried finding disclosure policies online for major financial institutions such as Citibank, Wells Fargo, Washington Mutual, Chase, Fidelity, etc. I was unable to find an externally accessible security reporting/disclosure policy for any of the major financial institutions I looked at.
Why is that?
- Fear that a disclosure policy makes it look like they could have a security issue?
- Worried about too many people contacting them about bogus issues?
- They don't want to be the first to publish one?
Buffer Overflows are like Hospital-Acquired Infections?
I was listening to NPR a few weeks ago and heard an interesting piece about new policies being implemented related to "Avoidable Errors."
The idea is that certain medical outcomes are always the results of medical negligence rather than inherent issues in medicine such as patient differences, etc. A few things that fall into the avoidable category are:
- Common hospital-acquired infections
- Urinary tract infections for example are extremely rare when proper protocols are followed.
- Blatant surgical errors
- Tools left in patient for example. There are easy ways to make 100% sure this doesn't happen.
For historical context: we didn't always understand that we needed to sterilize needles and/or use them only once. Needles used to be expensive, so we reused them, but we discovered infection rates were unacceptably high. We created low-cost disposable needles, and we use those now instead because they are safer.
Similarly, we continue to program in languages that make avoiding things like buffer overflows tricky. Not impossible, but tricky. Given the attention paid to buffer overflows, and the fact that we have tools to completely eliminate them from regular code, I'd say they fall into the same category as surgical tools left inside the patient - negligence.
A key quote from Lucien Leape of the Harvard School of Public Health:
Today, he says, dozens of safe practices have been developed to prevent such errors. But he says there hasn't been enough of a push for hospitals to put them into use.
"I think it's fair to say that progress in patient safety up to now has relied on altruism. It's been largely accomplished by good people trying to do the right thing," Leape says. "And what we're saying is that hasn't gotten us far enough, and now we'll go to what really moves things in our society, which is money."
Maybe I should start putting money-back guarantees in my contracts with software vendors so they owe me a partial refund for every buffer overflow that gets exploited/announced in their code?
Tuesday, September 11, 2007
Thoughts on OWASP Day San Jose/San Francisco
Last Thursday 9/6/2007 we had a combination San Jose/San Francisco OWASP day at the eBay campus. Details on the program are at: https://www.owasp.org/index.php/San_Jose
The turnout was great, somewhere between 40 and 50 people, I didn't get an exact count. There were two sessions for the evening:
- A talk by Tom Stracener of Cenzic on XSS
- A panel discussion on Privacy with a pretty broad group of security folks and some people in adjacent areas such as Law and Privacy proper.
- What is Privacy?
- What are a company's obligations to protect Privacy? Legal, ethical, moral, good business sense, etc.
- How do companies, especially large ones that operate in multiple states or are multinationals, deal with all of the different privacy regulations?
- How do we integrate Privacy concerns into security operations, secure development, etc.?
The best discussion of the night in my mind came on point #3: how do large companies manage the diverse privacy regulations and policies across jurisdictions...
All of the panelists in this area made two points:
- Set a baseline policy that encompasses the vast majority of your requirements and implement it across the board. This way you don't have to continuously manage to specific privacy regulations as you've embodied them in your general policy.
- Setting the privacy policies and controls around it is an exercise in risk management. People don't often look at writing policies as managing risk, but that is exactly what policies do.
If nothing else was achieved last Thursday, we had great turnout for the local OWASP event, better than I've seen so far. We also got to try out part of the space that will be used for the fall conference. I think it went well, but I guess we'll have to get the other folks present to weigh in with their thoughts, since I'm obviously a little biased.
Friday, August 31, 2007
FUD About Ruby on Rails?
James McGovern has a piece "The Insecurity of Ruby on Rails" that Alex picked up on and I think the whole idea is a little overblown....
The points raised by James were:
- Java has a security manager, Ruby does not.
- None of the common static analysis tools cover Ruby
- I have yet to come across a single Java application that actually uses Java's security manager to specify security controls, access rights, etc. While there are certainly hooks to do so, and some tools like Netegrity, Sun Access Mgr, etc. will allow you to override Java's native security manager with their own implementation, this is by far the exception rather than the norm for server-side code.
- Note: We're not talking about client sandboxing here, where Java's security manager policy does come into play by default.
- None of the common static analysis tools cover Ruby. True, but irrelevant. It is perfectly possible to write secure code without the assistance of a static analysis tool; it's just a lot easier to do so with one. The fact is, there isn't good static analysis capability for many languages, including Ruby, Python, Perl, and so on.
Tuesday, August 28, 2007
OWASP Day/Week - September 6th
Get in on the fun.....
OWASP Day : Day of Worldwide OWASP 1 day conferences on the topic "Privacy in the 21st Century" : Thursday 6th Sep 2007
https://www.owasp.org/index.php/OWASP_Day
I'll be at the San Jose meeting, it should be interesting.
https://www.owasp.org/index.php/San_Jose