Tuesday, November 27, 2007

Some Comments on PayPal's Security Vulnerability Disclosure Policy

Thanks to the several places that have written about this policy in the last few days.

I was personally involved in crafting the policy, and while I can't make commitments or speak officially for PayPal, I thought I'd take a few minutes to explain our thinking on a few of its items.

First, a bit of background. PayPal didn't have a great system for reporting security issues until this new policy came out. We had several goals in creating and publishing it:
  • Improve the security of our site by getting security issues disclosed to us responsibly.
  • Create an easy mechanism for people to report a security vulnerability to us. We chose email since we figured security researchers would like it better than a form.
  • Create incentives for disclosure and remove disincentives (the threat of legal liability).
  • Make clear our expectations in these areas, since this is a new and evolving area of security vulnerability disclosure with more than a little legal uncertainty.
  • Set a standard, through our policy, that we hope others can follow.
We carefully constructed the language in the policy with our privacy lawyers to ensure that we were not over-promising with respect to legal liability. We looked at other disclosure policies, and we settled on the policy you can find here.

A few specific notes are in order:

  • We will revise the policy over time based on user feedback.
  • We are serious in our commitment to rapidly address any discovered security issues with the site. Our language around "reasonable timeframe" is left slightly vague because we don't want to over-promise on how quickly we can resolve an issue.
  • We do expect to get back to researchers quickly with confirmation of a reported issue and tracking data on how we're doing resolving it.
Let me now address a few concerns/comments people have specifically raised.

Chris Shiflett said:
Since data can be anything, how do we know if we view data without authorization? Don't most people assume they're authorized to view something if they're allowed to view it? Does intent matter?
While we don't want users to test the security of the PayPal site, should they do so, they should be careful to minimize the disruption their testing causes. If you start tinkering with URLs to see whether you can view certain data, do it between two accounts you control; don't try to view other people's data. There is a fine line between testing responsibly and irresponsibly, and we're encouraging people to stay on the more responsible side of that line.

From Don's post:
I got a creepy feeling about actually trusting the statement. I will probably never attempt to test the security of PayPal’s site, but for those who do I would hate for the disclosure statement to change suddenly.
As I said earlier, we do believe the policy is a work in progress. We will modify it from time to time to address concerns, improve its effectiveness, and so on. Our goal, however, is to encourage responsible disclosure, and I hope the intent behind the policy is enough to allay people's fears.

One final note on the statement "Allow us reasonable time to respond to the issue before disclosing it publicly." We struggled over the wording on this more than any other element of the policy. It is a tricky business to strike the right balance between early disclosure, our commitment to protect our customers and their data, and people's desire to know about the security of a given website or service. That said, we're committed to working with researchers when an issue is reported to us, and we'll decide what counts as reasonable on a case-by-case basis.

We're hoping that this policy strikes a good balance between our desire for responsible disclosure and our desire not to discourage researchers from coming forward.

Again, I'm not a spokesperson for PayPal, so this post naturally represents my personal beliefs about the policy, not a firm, binding statement of company policy. That said, I welcome your comments.

Poll: How Important is a POC When Getting Someone to Fix a Security Issue

Working on security inside a company that takes it seriously sometimes blinds me to how other people work and to the challenges they face in getting security issues taken seriously.

I've noticed that lots of people who work as consultants and/or inside companies have to jump through hoops to get a security vulnerability taken seriously.

In many cases I see people spending hours and hours crafting a working proof-of-concept exploit for a vulnerability and needing to actually demonstrate that exploit to get the issue taken seriously.

To understand this better, I set up a small poll to gather some data on why people need to craft a working POC to demonstrate that a vulnerability exists.

I've only ever had to do this once, and yet it seems that every time I read about a penetration test, people are spending lots of time crafting sample exploits rather than finding more vulnerabilities, or identifying similar classes of vulnerabilities and offering solutions to those.

In my experience, the only time a POC has been really useful is when I need to make sure that the person fixing the issue has the information and tests necessary to verify they've closed it.

For those who do penetration tests (network or application): how often do you feel that you need to create working POCs for exploits in order for the company's management to take them seriously?

Monday, November 26, 2007

New CAPTCHA Systems [HUMOR]

A new grammar CAPTCHA system :) Quite funny if you ask me. Mostly a joke I suppose, but it's Monday, so what the heck.

Tuesday, November 20, 2007

Data Leakage/Linkage Mystery

I have a mystery that came up tonight that I'm hoping someone can help me figure out.

I have a Yahoo! account that I hardly ever use anymore. I check it for email once every 6 months or so, but it remains unused otherwise. I do have my IM client, Adium, set to log into the account, but I don't ever use it for chatting. I also don't generally have the account associated with any of my other accounts, and it doesn't even have my real name on it.

Tonight I logged into Yahoo! Mail and checked the mailbox for said account. Delightfully, I found several emails from Jayde.com in my unused Yahoo! mailbox, but with information about this blog.

Somehow I received mail to my unused yahoo account mentioning this blog.

I've never linked the two email addresses, I don't ever log into the Yahoo! address, and I haven't sent or received mail from it in forever.

The messages were dated back in March...

So, now I'm wondering how these two data items got linked.

  • Advertising site that is buying data and/or access logs and linking disparate things together?
  • Malware?
  • Weird CSRF or some-such?
Any ideas? I'm not sweating it too badly I suppose, but it is slightly disconcerting.

Friday, November 09, 2007

Limiting Process Privileges Should Be Easier

I was reading DJB's retrospective on 10 years of qmail security, and while I'll comment on a few of his thoughts in a separate post, one thing that struck me was his discussion of how to create a relatively effective sandbox for a process:

  • Prohibit new files, new sockets, etc., by setting the current and maximum RLIMIT_NOFILE limits to 0.
  • Prohibit filesystem access: chdir and chroot to an empty directory.
  • Choose a uid dedicated to this process ID. This can be as simple as adding the process ID to a base uid, as long as other system-administration tools stay away from the same uid range.
  • Ensure that nothing is running under the uid: fork a child to run setuid(targetuid), kill(-1,SIGKILL), and _exit(0), and then check that the child exited normally.
  • Prohibit kill(), ptrace(), etc., by setting gid and uid to the target uid.
  • Prohibit fork(), by setting the current and maximum RLIMIT_NPROC limits to 0.
  • Set the desired limits on memory allocation and other resource allocation.
  • Run the rest of the program.
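For concreteness, here is a rough sketch of what that recipe looks like in C on a typical Unix system. To be clear, this is my own illustrative translation of the steps, not code from qmail: it assumes the process starts as root, that base_uid plus the process ID falls in a uid range no other tool hands out, and it collapses most error handling into a bare exit. The base uid (61000) and the empty directory (/var/empty) are just example values.

/*
 * Illustrative sketch of the recipe above -- not qmail's actual code.
 * Must be started as root; the arguments below are example values.
 */
#include <signal.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static void die(void) { _exit(111); }

static void sandbox_self(uid_t base_uid, const char *empty_dir)
{
    struct rlimit zero = { 0, 0 };
    uid_t target = base_uid + getpid();   /* a uid dedicated to this process */
    pid_t child;
    int status;

    /* No new descriptors: opening files or sockets fails from here on. */
    if (setrlimit(RLIMIT_NOFILE, &zero) == -1) die();

    /* No filesystem access: lock ourselves inside an empty directory. */
    if (chdir(empty_dir) == -1 || chroot(empty_dir) == -1) die();

    /* Make sure nothing is already running under the target uid: a
       throwaway child becomes that uid and kills whatever it can reach. */
    child = fork();
    if (child == -1) die();
    if (child == 0) {
        if (setuid(target) == -1) _exit(1);
        kill(-1, SIGKILL);               /* only reaches processes with this uid */
        _exit(0);
    }
    if (waitpid(child, &status, 0) != child ||
        !WIFEXITED(status) || WEXITSTATUS(status) != 0) die();

    /* Drop root; this also blocks kill()/ptrace() against other uids. */
    if (setgid(target) == -1 || setuid(target) == -1) die();

    /* No fork(): the process can't spawn helpers or fork-bomb. */
    if (setrlimit(RLIMIT_NPROC, &zero) == -1) die();

    /* Cap memory at 8 MB (purely illustrative); add other limits to taste. */
    struct rlimit mem = { 8 << 20, 8 << 20 };
    if (setrlimit(RLIMIT_AS, &mem) == -1) die();
}

int main(void)
{
    sandbox_self(61000, "/var/empty");   /* both arguments are just examples */
    /* ...run the rest of the program here: no new files, no filesystem,
       no children, bounded memory. */
    return 0;
}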

If doing all of the above seems like a bit much, then perhaps what you're sensing is that the architectural model in most operating systems, which makes it hard for a process to drop privileges, restrict what it can do, and so on, is simply wrong.

What strikes me about the above example is that it ought to be a lot easier for a developer/administrator to define the policy for a given process and its run environment, without having to know this much arcana about exactly how to do it.

Luckily, there are a few OS-supplied solutions to the problem that, while not perfect and still tricky to implement, are at least a step in the right direction.

Solaris
  • Solaris 10 ships with fine-grained process privileges (see privileges(5) and ppriv), which let a process drop everything except the specific privileges it needs.
Windows Server 2008
  • Microsoft has introduced service hardening and reduced privileges in Server 2008.
  • From what I can tell, their new wizard and the SCM in general are structured more around least privilege than some of the other operating systems, at least from an ease-of-use standpoint.
Linux
  • On Linux we have several options.
    • SELinux
    • AppArmor
  • I haven't looked extensively at either of them yet, but I'll try to look into whether their policy models are better or worse than the options above.
MacOS
  • Leopard introduces a new process-sandboxing mechanism. Unfortunately, the details are a bit sketchy. The Matasano guys have a writeup of it, but I haven't seen any details on the exact mechanisms and/or configuration.

Wednesday, November 07, 2007

The Point of Breach Notification Laws

Back in August I wrote a small piece - "Data Breaches and Privacy Violations Aren't Just About Identity Theft". Ben Wright left a comment there that I never responded to. Here goes...

He said:
Peter Huber argues in Forbes that there is no "privacy" in our social security numbers or credit card numbers. The "secrecy" of those things does not really authenticate us. So this business of giving people lots of notices about compromise of their numbers seems pointless.
I hate to rehash all that has been written about breach notification laws, but I don't see a lot written on the public policy reasons for them. Well... I don't hate rehashing too much; here goes.

There are several reasonable justifications for breach notification laws:

  1. Accountability of the data custodian
  2. Alerting the data owner of the breach
  3. Collecting public policy data on frequency and manner of breaches so that we can prevent them in the future
Whether or not the data in question has value, the disclosing party certainly didn't uphold their end of the bargain. What we're seeing lately, though, is that there is no shame in having had a data breach, so #1 isn't proving all that useful from a public policy perspective. If breaches don't result in a significant financial loss, companies won't care much about protecting the data in their custody.

The main public policy value of breach notification laws as written today is probably #3. That is interesting in and of itself, but because of the nature of the breaches it isn't clear that the benefits of notification are worth the costs of disclosure. Or, more specifically, it isn't clear that public notice with specifics per company is serving us perfectly. An anonymous repository of details and types of incidents would accomplish roughly the same public policy goal without all of the associated costs.

I'm not arguing that companies shouldn't disclose, but I have yet to see an analysis of the costs on both sides of the issue. I'm hoping someone can point me to one.

Part of the argument, of course, hinges on companies' responsibility to protect the data entrusted to them and on the rights of the data owner. Our current regime has costs, however, and based on the public's reaction to data breaches (continuing to do business with the affected firms as if no incident had occurred), perhaps people aren't as interested in breach notification as we thought.

Safety Feedback Loops and New Car Safety Features

Wired has an article today titled - "Is Car Safety Technology Replacing Common Sense?" The author of the article is concerned that all of the safety features in cars will in the end make them less safe as drivers become less and less accustomed to needing to pay attention while driving.

This argument reminds me a little bit of the snarky Apple ad about Windows UAC. There is a fine line between creating computer systems that try to prevent users from making mistakes and ones that allow the end user the flexibility to actually use the computer they purchased. Witness, of course, Leopard's new feature that asks you to confirm you want to run something you just downloaded from the Net, and how it refuses to run certain programs whose digital signatures no longer match, which is leading to no end of annoyances for Skype and WoW users.

I was struck by one line in the article:

I always thought that as the driver, watching the road ahead for slow-moving vehicles and cars that dart into my lane — not to mention checking left or right to make sure its clear before changing lanes — was my job.
It is humorous to me to hear this same line repeated again and again as new safety features and technologies come out in products.

  • It used to be my job to pump the brakes to stop on a slippery surface. Now ABS helps me do it better in almost all cases.
  • It used to be my job to harden my operating system like a madman. Now most operating systems are shipping with slightly more reasonable defaults for things. Not perfect (witness Leopard's firewall) but getting better.
  • It used to be my job to determine whether a website and an email are real or spoofed. Now I have browser toolbars, email spoofing filters, etc. to help me out so I don't have to do each of them manually.
Sure, there are cases where relying on technology can fail in a brittle fashion and have disastrous consequences.

I don't know that it's anything but an empirical question whether a safety or security technology actually makes things better.