Wednesday, January 23, 2008

Mark Rasch Puts Me To Shame

Last Thursday I wrote a piece about the case of Sierra v. Ritz (and Faulk). I put the word armchair in the title because I'm not a lawyer, so my analysis was both simplistic and rather brief.

Today Mark Rasch released a much longer article on this same subject, "Mother, May I." As usual, Mark gives an excellent explanation of the underlying legal topics - how physical-world common law notions and rules concerning trespass map onto computers. I highly recommend you read Mark's article if you're interested in the intersection of computer security and the law.

Mark also points to an excellent paper by Orin Kerr, "Cybercrime's Scope: Interpreting 'Access' and 'Authorization' in Computer Misuse Statutes." I read this paper some time ago and had been searching for it ever since to no avail; if you're not a lawyer you usually don't have access to the right search engines/tools to find these sorts of things. Kerr's article is also an excellent read if you're not happy with the analysis Mark gives of the current law, or if you understand the analysis and don't like that words like "access" and "authorization" aren't well defined in the statutes.

Thursday, January 17, 2008

Armchair Legal Analysis of Sierra v. Ritz

You may have heard about the case of Sierra Corporate Design, Inc. v. David Ritz.

There has been a lot of griping and complaining about the fact that doing zone transfers might be illegal, so I thought I'd give a quick analysis of the case. I'm sure I'm missing a few things and I'm not a lawyer, but I am a little tired of "hackers" complaining that their rights to do whatever they want are being trampled... You can read the judgment here.

In this case David Ritz is being punished for performing unauthorized DNS zone transfers of Sierra Corporate Design's network.

At the federal level, the relevant statute is the CFAA (Computer Fraud and Abuse Act). North Dakota's statute appears to have roughly the same language.

The CFAA has relatively consistently been interpreted so that "accessing a computer without authorization" hinges on whether the owner of the computer wanted you to perform your action, with the presence or absence of controls to prevent access being generally irrelevant. Courts have relied on the traditional definition of trespass and attempted to apply it to the electronic world.

In the physical world trespass is relatively easy to understand and to police. There are obviously corner cases where you can wander onto unmarked land and not realize you're trespassing, and there is a lot of case law for these. At the same time, if you see a house, you know it isn't your house, and you walk into it, you're trespassing whether or not they locked the door. It's quite clear you weren't invited, and not locking the door doesn't remove the homeowner's right to prevent trespass.

In the electronic world it gets a lot murkier. If I mistype a URL into a tool and attempt to access someone's machine, it's pretty clear from both intent and network traffic what was going on. But suppose instead I send a ton of traffic at you, or I start fingerprinting your system. Intent is really the key question here.

Did I knowingly attempt to access your computer without authorization? What was my intent? It is generally the answers to these questions that would be at play in court.

In this specific case, a DNS zone transfer isn't the sort of thing you do by mistake. It isn't the type of data people generally try to get from other sites as part of browsing the net. In general, and it's pretty apparent in this case, you're trying to get data you wouldn't ordinarily expect people to let out. Whether the DNS server was configured to prevent zone transfers isn't really the issue here.
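To make the "not by mistake" point concrete: on the wire, a zone transfer is an explicit request for record type 252 (AXFR), which no browser or ordinary lookup ever sends. Here's a minimal, stdlib-only sketch of a DNS question in wire format (the domain name and query ID are made up for illustration):

```python
import struct

def build_dns_question(qname: str, qtype: int) -> bytes:
    """Build a minimal DNS message: 12-byte header plus one question.

    qtype 1 = A record (an ordinary lookup); qtype 252 = AXFR (a full
    zone transfer). The client must ask for 252 deliberately.
    """
    # Header: ID, flags (RD bit set), 1 question, 0 answer/authority/additional.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # Name is encoded as length-prefixed labels, terminated by a zero byte.
    qname_wire = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in qname.rstrip(".").split(".")
    ) + b"\x00"
    question = qname_wire + struct.pack(">HH", qtype, 1)  # qclass 1 = IN
    return header + question

# What a browsing-style lookup asks for:
lookup = build_dns_question("example.com", 1)
# What a zone transfer asks for - an explicitly different query type:
axfr = build_dns_question("example.com", 252)
```

The only difference between the two messages is the query-type field, which is exactly why intent is so easy to read off the traffic: nothing in normal use produces a 252.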

Obviously where this gets tricky is determining whether this is like trespassing onto unmarked land, or walking into someone else's house when they had the door unlocked.

This isn't to say I necessarily agree with the decision, but there is a lot more nuance to this issue than I've seen posted.

Wednesday, January 09, 2008

Another Strategy for Getting Started with Application Security

Gary McGraw posted a new article about strategies for getting started with application security and secure coding.

In it he lists 4 approaches for getting started with application security:
  1. Top-down framework
  2. Portfolio Risk
  3. Training First
  4. Lead with a tool
I thought I'd share one more possibility that is a slight tweak on option #4 above.

I had success with #4, but not using the tools we usually think of for bootstrapping a program, namely static analysis or testing tools.

When I took the position, the organization had already settled on using Netegrity's Siteminder product for a common authentication and authorization scheme across all of its applications. I managed to get them to settle on doing quasi-RBAC with Siteminder, using it almost as an identity service as well.

Settling on one common high-quality authentication and authorization tool/framework had three effects:
  1. It removed these services from the realm of development. Developers just had to integrate with it, and didn't have to figure out all of the corner cases of password changes and the like that so often crop up, and that people get wrong in homegrown approaches.
  2. It convinced developers to build clean interfaces in their code for things like authorization to call out externally and/or have the data provided to them in a standard fashion. By settling on RBAC it also helped a lot with role and permission modeling that did need to happen in the app.
  3. In a shop that usually wanted to do everything itself, it broke that cycle and people got used to not having to write everything from scratch.
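The second effect above - clean interfaces that call out for authorization rather than embedding it - can be sketched roughly like this. The names (`Authorizer`, `CentralAuthorizer`, `cancel_order`, the `order_admin` role) are hypothetical illustrations of the pattern, not Siteminder's actual API:

```python
from abc import ABC, abstractmethod

class Authorizer(ABC):
    """Boundary interface: application code asks "may this user do X?"
    and never implements the policy, password, or session logic itself."""

    @abstractmethod
    def has_role(self, user_id: str, role: str) -> bool: ...

class CentralAuthorizer(Authorizer):
    """Adapter over a central RBAC service (Siteminder in the post);
    the in-memory role store here is a stand-in for that service."""

    def __init__(self, role_store: dict[str, set[str]]):
        self._roles = role_store

    def has_role(self, user_id: str, role: str) -> bool:
        return role in self._roles.get(user_id, set())

def cancel_order(authz: Authorizer, user_id: str, order_id: str) -> str:
    # Application code stays a one-line role check at the boundary.
    if not authz.has_role(user_id, "order_admin"):
        raise PermissionError(f"{user_id} may not cancel orders")
    return f"order {order_id} cancelled"

authz = CentralAuthorizer({"alice": {"order_admin"}})
print(cancel_order(authz, "alice", "42"))
```

The payoff is the one described above: role and permission modeling still happens in the app, but the mechanics of authentication live behind one well-tested interface instead of being reinvented per application.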
It was a bit of a non-standard way to use a tool to bootstrap a security program. They were originally sold Netegrity for the wrong reasons, but they picked it, and in implementing it correctly they did themselves a huge service.

Just one data point on leading with a tool that focused more on architecture and design than on finding defects.

In the end in order to fully implement the program we had to do developer training, build our own frameworks, perform risk assessments against applications, and fully incorporate testing.

The key to getting it started though was adopting a common approach to one area of security via a well-designed tool.