Tuesday, March 27, 2007
I read Jim Allchin's piece the other day on Security Features vs. Convenience.
Warning: I'm going to play the analogy game again (apologies to Mike Howard). Well, maybe it's just the comparison game. It will be interesting to see how companies with large installed bases react to new threats and/or regulatory models for safety and/or security.
Though they aren't identical, cars and computers both live in a complicated ecosystem of other machines, users, etc. They also have safety/security features that vary greatly among different product offerings, and these offerings change over time as a result of new engineering, safety studies, consumer perception, and regulation.
Certain car makers, like Volvo and Mercedes, have traditionally focused more on safety, in some cases without direct, explicit demand for these features from car buyers. They do a lot of research on new safety features and incorporate them into their products with the implicit consent of the car buyer. Well, explicit in that people keep buying the cars, and they probably do surveys of what people want. But people weren't directly saying they wanted airbags, ABS, traction control, adaptive cruise control, etc. Mercedes assumed that its customers would pay a premium for these features. Part of their brand image is safety, and they can add a safety feature of almost any price to a car knowing full well that their luxury audience will pay the extra cost to have it.
Often, after Mercedes, Volvo, etc. have produced working safety technology, costs begin to fall and the feature moves down-market to lower-end cars (lower-end from a branding and pricing perspective).
In many cases, governments eventually step in, decide that a given safety feature has proven itself to reduce accidents and increase survivability, and start mandating it in all cars. Things like airbags, etc.
What is interesting about Allchin's article is how different computers and the computing economy currently are. Companies do invest in safety/security features, but because these features are so much more a part of the user experience, it isn't simply a question of whether users are willing to pay for the feature; it's a question of what their interaction with the feature will be. Seatbelts excluded, most auto safety features don't require much user interaction to be useful. They are passive with respect to user participation.
Disregarding features that directly impact backwards compatibility, it's interesting to study users' reactions to features such as UAC that do improve security but can be configured in such a way that they impact user productivity and/or perception.
How many people didn't wear seatbelts in early cars? It's a similar sort of thing.
Equally interesting is that, at least so far, there isn't a lot of regulation around end-user computer systems and their safety/security posture or profile. Whereas governments regulate lots of devices to try to specify minimum safety requirements, we don't do that with computers. Thus there isn't the same sort of feedback loop of people getting used to a security feature, it becoming mandatory, all vendors including it, and things proceeding in a somewhat safer fashion.
I had a good discussion with a friend last night about how you'd go about crafting basic software liability regulations, and I'm sure there are some decent proposals out there. It's a pretty tough nut to crack, though. How do you specify minimum standards for the functionality of a truly multipurpose machine? Fitness for what purpose?
Perhaps more on that later after I do a little more research.
Sunday, March 25, 2007
Bad Analogies?
Ok, so I guess now I'm disagreeing with people far out of my league, but here goes anyway.....
I recently came across a piece that Michael Howard wrote. Perhaps it is my philosophy background that taught me that analogies are actually a really good way of comparing things, making a point, etc., so I'll throw out the question of whether analogies are useful in general and whether comparing computer software to other things is actually a useful endeavor. Howard writes:
"I have long believed that if someone makes an argument and uses an analogy, then the argument is often weak. But that's just me! This is why I usually roll my eyes when I hear statements like, 'If [bridges|cars|airplanes] were built like software then…' because comparing physical items and software is just wrong. They are not the same thing, you cannot compare them."
I totally agree that software and items in the physical world are different. But the rules are also different.
In the physical engineering world, we expect engineers to follow formal "threat modeling" for their products. If they don't build the bridge strong enough not to collapse under normal use, they can even be held personally liable, as can their firm, the construction firm, the inspectors, etc.
In the software world we're not actually responsible for anything we produce. We write EULAs that specifically exclude us from liability.
I don't know about you, but I'm not sure I want to drive my car across a bridge where I first had to sign a EULA that limited my rights to sue if anything went wrong, disclaimed any liability, and specifically claimed the bridge wasn't necessarily fit for its purpose. I'd probably find another way across the river.
I hate to call Michael disingenuous, but I feel that his counter-analogy is just flat-out wrong. Exclude for a moment, if you will, all of the deliberate attacks against computers. Take a look at computing's track record in just normal reliability under regular operating conditions, and I think you'll find that it isn't so hot.
Sure, there are different levels of engineering used to build different cars, but at least in the US they all must meet a certain set of basic safety standards before people are allowed to buy them. The same goes for drugs, food, etc. All of these things impact people's safety, and in many cases so do computers. Why do we treat them differently?
If software engineers want to continue to have credibility in the general debate, then they have to start talking about safety, reliability, and integrity in the same way that other engineers do.
When the guy building the railroad tracks is told to speed up the project, throw out the requirements, and just lay down the tracks ("we'll fix it in tracks-2.0"), he doesn't just shrug his shoulders and do it. Sure, it's a regulatory problem, a legal problem, etc. But just like doctors and lawyers, professional engineers have a code of conduct, ethics, and morals that they must abide by. Are there people who skirt the rules? Sure. I don't think that diminishes the profession or the code as a whole, though.
If I'm an engineer designing a bridge, car, etc., and I know that my tools are faulty (C, C++, etc.), I'm negligent if I go ahead and use them anyway, knowing it's going to be extremely difficult to prove my results when I'm finished.
Yet in software development we excuse this sort of thing all the time. We use flawed tools, we have flawed infrastructure, and we have protocols we know can't withstand the kind of abuse they are up against.
We do have environments where there are constant threats and lives are at stake. In the military, when you have a faulty system like this, people die. Then we iterate and produce version 2.0. Hopefully fewer people die. It does make me start to understand how we get mired in red tape in these sorts of situations, but I do long to finally get a piece of software that doesn't have to disclaim all liability for failures to perform its basic functions.
Thursday, March 22, 2007
Preventing HTTP response splitting with request/response identifiers?
HTTP Response Splitting is a vulnerability in web applications and, in my opinion, also in web browsers or the HTTP protocol itself.
HTTP Response Splitting relies on a web browser interpreting an unrequested response from a webserver as if it were requested. Part of the vulnerability stems from the fact that the HTTP protocol does not include request identifiers, so a client cannot match up a response to a request it made. Partly this is because the HTTP protocol is assumed to be ordered from both the client's and the server's perspective: the protocol is purely synchronous, requests and responses happen in order, and consequently there is no need for sequencing.
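For concreteness, here's a sketch of what the classic attack looks like on the wire. The vulnerable redirect endpoint and parameter are hypothetical, not any particular server:

```python
# Toy illustration of response splitting. An attacker-supplied value
# containing CRLFs lands unescaped in a response header, terminates the
# real response, and smuggles in a second, forged one.
injected = (
    "en\r\n"
    "Content-Length: 0\r\n"
    "\r\n"
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "Content-Length: 24\r\n"
    "\r\n"
    "<html>forged page</html>"
)

# A naive server interpolating the value into a redirect header emits:
wire_bytes = (
    "HTTP/1.1 302 Found\r\n"
    "Location: http://example.com/index?lang=" + injected + "\r\n"
    "\r\n"
)
# A browser or proxy that pairs responses to requests purely by arrival
# order now sees two complete responses and attributes the forged one to
# whatever request it sends next on the connection.
```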
Certain communication protocols include session identifiers and/or request identifiers so that an endpoint application/server can tell which responses belong to which requests and/or sessions:
- TCP has sequence numbers
- DNS has a request identifier
There have been proposals to make HTTP asynchronous. The only one I was able to find without a lot of digging actually relied on lower-level sequencing of packets/events, and it looks like it dates back many years.
A potential solution for async HTTP would be for a browser to include extra HTTP headers indicating both that it supports async HTTP and a request-id. A webserver would then be able to reply asynchronously to a client over a single TCP connection for multiple requests. Depending on configuration and/or standards, a client could issue up to a maximum number of simultaneous requests over the same TCP connection. The webserver could respond asynchronously as well, putting the same request-id into the HTTP response headers so that the browser can match responses to requests.
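A minimal sketch of the client-side matching, under the assumption of hypothetical header names (X-Async-Supported and X-Request-Id are my inventions, nothing standardized):

```python
# Sketch of the proposed request/response matching. Header names are
# hypothetical placeholders for whatever a standard would define.
import secrets

def build_request_headers(pending: set) -> dict:
    request_id = secrets.token_hex(8)  # unpredictable, per the caveat below
    pending.add(request_id)
    return {"X-Async-Supported": "1", "X-Request-Id": request_id}

def accept_response(pending: set, response_headers: dict) -> bool:
    # Honor a response only if it echoes an id we actually issued; an
    # injected, unrequested response would fail this check and be dropped.
    rid = response_headers.get("X-Request-Id")
    if rid in pending:
        pending.discard(rid)
        return True
    return False
```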
I'm guessing that this isn't ideal performance-wise for a lot of apps, but there are others where it would make a lot of sense. With respect to timeouts for each HTTP request, the browser doesn't need to change its policy: it can wait the same amount of time, while the server gets to process things in a more async fashion than it does currently.
Additionally, if we match response ids to request ids, then we prevent HTTP response splitting except in cases where an attacker can do header injection and predict the request-id the browser will generate.
I agree that this is a lot of work to prevent response splitting attacks. I haven't tried to model the performance impact server- or client-side. I'm guessing in many cases it would be a wash, and in others it would yield pretty significant performance improvements, with fewer network sockets involved and perhaps smarter scheduling algorithms on the server side.
Comments appreciated.
Wednesday, March 21, 2007
Exceeding Authority
I've been thinking a bunch lately about designing systems to prevent misuse, ensure appropriate use, etc.
We talk all the time about dual-control systems and separation of duties, and about when they are strictly necessary to ensure security or some desired system property. It reminds me of a scene from Dr. Strangelove:
General "Buck" Turgidson: Mr. President, about, uh, 35 minutes ago, General Jack Ripper, the commanding general of, uh, Burpelson Air Force Base, issued an order to the 34 B-52's of his Wing, which were airborne at the time as part of a special exercise we were holding called Operation Drop-Kick. Now, it appears that the order called for the planes to, uh, attack their targets inside Russia. The, uh, planes are fully armed with nuclear weapons with an average load of, um, 40 megatons each. Now, the central display of Russia will indicate the position of the planes. The triangles are their primary targets; the squares are their secondary targets. The aircraft will begin penetrating Russian radar cover within, uh, 25 minutes.
President Merkin Muffley: General Turgidson, I find this very difficult to understand. I was under the impression that I was the only one in authority to order the use of nuclear weapons.
General "Buck" Turgidson: That's right, sir, you are the only person authorized to do so. And although I, uh, hate to judge before all the facts are in, it's beginning to look like, uh, General Ripper exceeded his authority.
Makes you think hard about designing systems to prevent what could happen vs. what you expect to happen, as was nicely pointed out again by Rob Newby.
Sometimes we do go overboard with dual-control and such; sometimes, though, I'm pretty happy that we design systems with it built in.
Now if we just didn't set the Permissive Action Link codes to all zeros we'd be fine.
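To make the dual-control point concrete, here's a toy sketch; the principals and the action are purely illustrative:

```python
# Toy dual-control check: a privileged action proceeds only with approvals
# from at least two distinct, separately authorized principals.
AUTHORIZED_APPROVERS = {"officer_a", "officer_b", "officer_c"}

def execute_privileged_action(action: str, approvers: set) -> str:
    valid = approvers & AUTHORIZED_APPROVERS
    if len(valid) < 2:
        raise PermissionError(
            f"{action}: dual control requires two independent approvals"
        )
    return f"{action} executed with approvals from {sorted(valid)}"

# A single general exceeding his authority is rejected:
# execute_privileged_action("arm", {"officer_a"}) raises PermissionError
print(execute_privileged_action("arm", {"officer_a", "officer_b"}))
```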
Monday, March 19, 2007
Web Security Regression Testing
How are folks approaching regression testing for web app security bugs, especially in cases where you may have remediated a small problem via mod_security or mod_rewrite?
In many cases where you have a code-related issue, it is relatively straightforward to write new test cases in your software testing frameworks to check for recurrence and/or correct behavior.
In deployed web applications, though, you might choose to fix a simple hole via a webserver hack, config change, etc.
Most of the scanners out there could be trained to look for the hole in question and detect whether it recurs. Or I could use something like perl-mechanize to write up some test cases against the potentially vulnerable app.
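As one concrete shape this could take, here's a sketch in Python rather than perl-mechanize; the target URL, parameter name, and marker payload are placeholders for whatever hole you actually patched:

```python
# Regression check that a previously reported reflected-XSS hole, patched
# at the webserver layer (mod_security rule, rewrite, etc.), hasn't
# quietly reopened. URL, parameter, and payload are hypothetical.
import requests

TARGET = "https://app.example.com/search"
MARKER = "<script>alert('regression-check')</script>"

def test_reflected_xss_still_fixed():
    resp = requests.get(TARGET, params={"q": MARKER}, timeout=10)
    # If the fix is holding, the raw marker should never come back
    # unencoded in the response body.
    assert MARKER not in resp.text, "payload reflected unencoded; fix regressed"
```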
Anyone have any recommendations for doing this?
I'm open to product ideas and/or toolkits. Ideally all fixes would be done to the originally vulnerable code-base, but in cases where that isn't the right approach, or isn't the initial approach, you still want continuous monitoring for issues.
Sunday, March 18, 2007
Best webappsec proxy tool for use on a Mac?
What are people's preferences for web security inspection proxy servers on the Mac?
So far I've used:
- WebScarab
- Paros
WebScarab's user interface is a little funky: in the multi-panel view, clicking certain arrows and such is very confusing.
I'll play with them a little more before I make up my mind on which one I'm going to stick with, but if people have preferences I'd love to hear them. Focus on general usability, and then special features like fuzzing support, session-id analysis, etc.
Saturday, March 17, 2007
Which number castle is it, or how deep is the swamp?
Responding to a recent Gene Spafford blog entry I remembered the scene in Monty Python's Holy Grail and had to quote it:
King of Swamp Castle: When I first came here, this was all swamp. Everyone said I was daft to build a castle on a swamp, but I built it all the same, just to show them. It sank into the swamp. So I built a second one. That sank into the swamp. So I built a third. That burned down, fell over, then sank into the swamp. But the fourth one stayed up. And that’s what you’re going to get, Lad, the strongest castle in all of England.
It got me thinking: for many of the IT products and/or websites we use, what number castle are we on, and how deep is the swamp we're building on? How many iterations until it stops sinking into the muck?
Microsoft - 8 castles and only some signs we're not still going to sink into the swamp
1. DOS
2. Windows-3.x
3. Windows-95 (and 98, same architecture really)
4. Windows-NT
5. Windows-2000
6. Windows-XP
7. Windows-2003
8. Windows-Vista
Someone else care to count castles and/or speculate on swamp depth for another vendor and/or website?
Camino strangeness?
I switch between browsers relatively frequently. I use Opera, Firefox-2, IE7, Safari, and Camino with some regularity. Why not... maybe some day I'll like one of them enough to ditch my usual habit of defaulting to Firefox.
Had an interesting experience just now interacting with Google and Blogger.
Ordinarily in Firefox, when I'm logged into a Google site and I hit my Blogger site, I get auto-logged-in when I click the "sign-in" link in the upper right corner. I end up going through the Google token-generator site but get single-signed-on to Blogger.
Not so with Camino. In fact, even after logging in to edit a blog entry, my blog site itself doesn't change the title bar to indicate I am logged in. Clicking the "sign in" link takes me to the Blogger dashboard, but I don't get the ordinary stuff in the top nav with Camino that I'd get in Firefox.
- Firefox: works like you'd expect.
- Safari: works like you'd expect.
- Opera-9.02: open Gmail, works fine; go to my blog, Opera crashes.
- Opera-9.10: works like you'd expect. No more crashing when opening my blog, and it redirects through the token generator to get SSO to Blogger.
Time to pull out WebScarab and see what's going on.
More banking analysis
Part 2 of 2. Part 1 Here.
Tried to use a few more of my online accounts today.
TRowePrice
No extra authentication required. I tried two different web browsers, with no prompting for extra authentication. I haven't read the FFIEC guidance closely enough to know whether brokers are required to obey it, or only, strictly speaking, banks.
Schwab
No enhanced authentication here either.
Capital One
No enhanced authentication required here either.
More as I think to log into any more financial sites I happen to have. I think I need to try out my ETrade account shortly.
Friday, March 16, 2007
Comparing enhanced authentication options for multiple online banking sites
Part 1 of 2. Part 2 Here
The end of 2006 brought us the deadline for banking sites to implement enhanced authentication per FFIEC guidance. Recently I've started noticing obvious implementations on a number of different financial websites I use. I don't use enough online banks to compare them all, so maybe this will start a discussion of the quality of several implementations.
Chase
I logged into my Chase account the other day from a computer I don't ordinarily use. After putting in my proper username and password, I was presented with a new page that said I was using a computer I hadn't used before, and that I'd need to confirm my identity using an out-of-band means. I was presented with the options of:
- Having a security code sent to my mobile phone via SMS
- Having an automated call placed to my home or mobile number (both already registered on their site) where I would hear a recording of a security code
- Having an authentication code emailed to me
I chose to have Chase SMS me the authentication code. Their website helpfully told me that I could expect to receive the code within 2 minutes. I presume the dial-out system can develop a backlog, so they want to give you some idea of an SLA for receiving the code before you try again.
About 20 seconds went by and I got an SMS to my mobile phone. The message contained an 8-digit security code, and after entering it into the simple web-form on Chase's website I was able to access my account. Subsequent access from the same machine has been trouble free.
Citibank
Starting about a month or so ago, I began receiving prompts when logging into my Citibank account saying that I would need to set up some special security questions, etc. in order to have continued access.
After ignoring the prompts for a few logins, I was finally forced to choose a number of secret questions to answer.
So far I've added a bank account for automated payments, and I've scheduled a payment. In no cases have I been prompted for any extra authentication. The FFIEC guidance allows banks to determine what transactions are "high risk" and consequently to only employ enhanced authentication in those cases. Perhaps I haven't triggered any of the high security items yet on the Citibank site.
Wells Fargo
I have a regular bank account with Wells Fargo. So far I don't think I've had to set up any extra secret questions. I also haven't been challenged for any special authentication when logging in from different machines. I haven't tried to pay any bills yet though, and certainly nothing with a large dollar value. Perhaps I just haven't tripped their triggers yet.
WAMU
Pretty much the same thing as for Wells Fargo.
If you're using an online bank and have had experience with any of the new enhanced authentication schemes, please let me know. It would be good to catalog what different folks are doing.
And, if I get a chance I'll investigate how Chase is doing their machine-id system to see how robust it is.
Can't help myself
I can't help but comment on a blog post I saw by Rob Newby the other day. It struck a chord about how paranoid the regular security person needs to be.
His original post is here: http://robnewby.blogspot.com/2007/03/one-of-us-has-to.html
If you're worried about collusion and superadmins with access to everything, you're a bank, a defense department, or paranoid.
Not that a single "superadmin" should be able to do anything by themselves, but I'll skip that point for now.
If what you're looking for is protection against all of the scenarios you've outlined then you're not looking for commodity hardware, operating systems, physical security, etc.
Maybe you ought to consider not hooking the system up to anything. And bag checks at the door. And periodic interrogations, surveillance of the admins, credit checks, black-bag jobs to break into their houses, etc.
If you're really worried about multiple people colluding and walking off with the data, then you're going to need more than logging and hope to achieve it.
Tuesday, March 13, 2007
New radio station idea
Based on the recent "vulnerability" (or, more properly, attack) against Vista's voice recognition, I'm planning on creating a new radio station, or perhaps a new song that I can get put up on Yahoo and/or MySpace. It's called "delete c:\windows\*.*", and that's spoken out as "delete c colon backslash windows backslash star dot star".
Maybe I just need to get someone with a large radio following to say this phrase. Maybe get it on a nice little piece on NPR or something.... How many people you figure have their computers open while they are listening to the morning radio? :)
PCI-DSS Clarity or Lack Thereof
In Jeremiah's recent post, "Big trouble if PCI-DSS requires CSRF," I think he both over- and understates the problems with PCI-DSS and the PABP (Payment Application Best Practices) from Visa/PCI.
On the one hand, if we are required to prevent CSRF attacks, we're going to have a lot more vulnerable sites.
On the other hand, neither the PCI-1.1 standard nor the PABP specifies the criteria for judging an application or system compliant. Most if not all of the standards have an all-or-nothing flavor to them. Unfortunately, as we know, it's rarely that simple.
The PABP focuses on two main areas that are application specific:
- Development practices and the SDLC
- Actual countermeasures and vulnerabilities in the code
The second is much harder to pass because the standard itself doesn't say that you need to have frameworks to prevent attacks; it says you must be preventing them. This means that pretty much every deployed application isn't really compliant, since we know that all applications of a decent size are almost certain to have some sort of application security vulnerability.
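For reference, "preventing CSRF" in practice usually means something like a per-session secret that must accompany every state-changing request. A minimal sketch; the function and key names are mine, not from the PCI/PABP text:

```python
# Minimal per-session anti-CSRF token scheme of the sort an assessor
# might look for. Names are illustrative.
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    # Embed this token in every form the application renders.
    session["csrf_token"] = secrets.token_urlsafe(32)
    return session["csrf_token"]

def verify_csrf_token(session: dict, submitted: str) -> None:
    expected = session.get("csrf_token", "")
    # Constant-time comparison; reject any state-changing request whose
    # token is missing or wrong.
    if not submitted or not hmac.compare_digest(expected.encode(), submitted.encode()):
        raise PermissionError("possible CSRF: bad or missing token")
```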
What the standard needs is a slightly more prescriptive requirement around the SDLC, a threshold of vulnerabilities that you must reasonably try to prevent, and solid remediation plans should a vulnerability be discovered, along with audit trails to detect a breach should one occur.
WhiteHat's service actually comes in handy here in that, with continuous monitoring of applications, you (theoretically) shrink your vulnerability window. What WhiteHat isn't specifically monitoring for are cases where there may be something like a stored XSS sitting on your site right now. Unfortunately, discovering these programmatically is quite difficult, though I'm thinking catching this sort of defacement quickly could be pretty useful. You can always wait for your app/site to show up on http://sla.ckers.org/, but that probably isn't the most efficient way to discover you have an XSS vulnerability.
Where does that leave us from a PCI perspective? Unfortunately, we're discovering that as decent and as nicely prescriptive a standard as PCI is, it still has gaps.
One solution is to do what the government does:
- Congress passes a law
- A federal agency draws up regulations that implement the law
- The agency draws up interpretations, implementation guidelines, etc.
- Lots of lawsuits happen, case law is set, and now we have definitive rules
- Life goes on, with a lot of $$ spent on compliance
Saturday, March 10, 2007
The patch didn't take
There is a nasty cold going around the Silicon Valley area right now. I spent 4 days in bed last week because of this darned thing, with a fever of 101, sometimes 103.5.
I'd been great all week until tonight. Just like recurring TCP issues, or the problems with Sun's Java DST patches, my biological patches have failed me.
I guess body-release-34.x didn't have a full regression test of security issues performed against it, because here I am with a stupid fever again.
Stupid immune system - you needed a better testing regime to make sure the new work you were doing didn't overwrite your previous good work.
Oh well, maybe I can figure out a way to turn this extra heat into electricity and sell it back onto the grid.
iWeb and .mac blog annoyances
So, I was trying to help my wife tonight with our family blog. We'd recently wanted to start collecting some basic traffic stats from her blog so we set up Google Analytics on it.
Turns out that .mac iWeb blogs are pretty stupid.
The iWeb software desktop app stores all of your data in a crazy index.xml.gz file.
Every time you publish a new blog entry or other page on your site, it statically generates all of the content, updates the files in question, and "publishes" them via their iDisk file sharing.
Don't even get me started on how slow and stupid iDisk is in implementation, speed, etc.
Accessing it, and using iWeb on a brand new MacBook, results in the machine completely going to lunch. Pretty odd that parsing a little XML and sending it over the network should lock the whole box up, but whatever.
Apple's feedback system for blog comments is insanely complicated and broken.
Because the website is completely static, they have to quasi-hack comments.
Comments are apparently stored in extra HTML or XML files inside your tree and are dynamically created if/when your page has some magic JavaScript mumbo-jumbo in it.
Just having the JavaScript isn't all, though. If you want to, for example, use a handy-dandy shell script to append some JavaScript to the end of the HTML blog entries for tracking purposes, Apple's website might start forgetting you want comments enabled.
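The sort of bulk edit I mean is trivial; here's a sketch, with the directory layout and tracking snippet as hypothetical placeholders:

```python
# Append a tracking snippet to each published blog-entry HTML file,
# inserting it just before the closing body tag.
import pathlib

SNIPPET = '<script src="http://www.google-analytics.com/urchin.js" type="text/javascript"></script>'

for page in pathlib.Path("Blog").glob("*.html"):
    html = page.read_text(encoding="utf-8")
    if SNIPPET not in html:
        page.write_text(html.replace("</body>", SNIPPET + "\n</body>"),
                        encoding="utf-8")
```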
How does a simple edit like that break comments? Well, their commenting mechanism is some sort of webserver filter: you can access the HTML blog file with a parameter on the end, and Apple's .mac webserver interprets it as a programmatic request rather than a request to serve the HTML.
Example - Let's say you have the page: http://web.mac.com/USERNAME/Blog/4E3ABD27-321ABC.html
When you have comments enabled, the JavaScript on the page turns around and makes a request for:
http://web.mac.com/USERNAME/Blog/4E3ABD27-321ABC.html?wsc=entry.js&ts=12345
Somehow, when you touch a page yourself, Apple's webserver starts 404'ing on that second request. So, comments don't work.
The only way to fix it appears to be to republish your whole site, which for me took roughly an hour to post a few megs of data. Let's just say I'm a little sad. A few closing observations on fully static publishing:
- Pre-processing all of the content for the pages is pretty simple.
- Most people stopped doing it years ago because it's insanely impractical for any decent number of pages, updating all of the links, etc.
- Sure, it saves Apple a decent amount of CPU on their website.
- It really sucks in terms of extensibility.
Sunday, March 04, 2007
Assurance and Auditing
When is a monoculture good? When it's a web application security framework you use in all of your apps, it has been well audited, and you're reasonably sure it's working.
Why is this a good thing? Because it's easy to verify, to audit, etc. It's also relatively easy to fix if something goes wrong.
I had an interesting discussion with another web security person the other day. He said that he isn't an expert developer, but he knows how to audit web apps, how to incorporate security into the SDLC, and how to monitor for compliance / proper-usage.
It was like I was talking to myself in a mirror.
I know that Jeremiah Grossman wrote about the severe lack of web security auditors out there, and I certainly agree. At the same time, I think we could make an amazing amount of progress if we stopped letting everyone shave with a straight razor and instead trained and/or forced them to use the safety razor.
Me - I learned how to shave using a standard 2-blade Gillette. I didn't learn with a straight razor, and neither should most programmers. Learn to program in Python, Ruby, hell, Java if you must. Or learn assembly first just for the kicks. But let's stop kidding ourselves that everyone needs to know how to use a straight razor (read: C and C++).
It'd be like requiring all carpenters to learn how to use an adze. Maybe sometime when they have some spare time they can go back and have at it, but perhaps we should stick to the safe and practical. Oh, I know what you're saying... I don't need that kickback guard on my chainsaw or the blade guard on my table saw, and maybe you don't. But the statistics tell me I'd rather most people had them, if I'm in charge of the medical bills :)
Oh, and on the secure-by-default topic, check these guys out:
http://www.sawstop.com/
Pretty slick advancement if you hadn't seen it before. Reminds me again and again of how we ought to design tools (programming languages, frameworks, etc) in the computer world for safety of use rather than just ease of use.
According to some articles I've seen, they haven't had a lot of adoption in the consumer market because of cost, but cabinet makers, furniture makers, etc. have been pretty enthusiastic. They are the ones with the insurance bills, workers' comp bills, etc. At $1000 extra per saw, and a single finger loss running 10-20 times that in workers' comp, it's a pretty easy decision to make. Now, if we didn't have insurance, workers' comp, and so on, then the finger loss (or hacked system, in our case) wouldn't be the cabinet maker's concern and the money wouldn't get spent.
Just something to mull on.