Tuesday, February 27, 2007

Preventing Information Leakage

Information Leakage is a fact of life in most web applications. How much time and effort should we spend minimizing it?

I've gone through I don't know how many versions of server/OS/middleware hardening guides, which all seem to have the standard sections on changing banners, changing default ports, etc. I did a quick search and wasn't able to find any honeynet/honeypot data on how successful these techniques actually are in preventing security incidents.

How much do attackers care what your server says it is? If it's running SSH or telnet, are they going to just try to exploit it anyway?

Which brings me to web applications.... We spend a lot of time designing web applications so that they don't leak information about whether you failed to log in because of a bad account name or a bad password. We spend a lot of time designing password reset functions so that they don't tell the user whether an account name was valid. We send email reminders to users that don't include their account name so that it isn't easily stolen.
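
To make that concrete, here's a minimal sketch of the pattern in Python (the user store, password hashing, and mail hook are all made-up stand-ins, not any particular framework's API): return the same generic message no matter which check failed, and do roughly the same work on both paths so timing doesn't give the answer away either.

    import hashlib
    import hmac

    GENERIC_LOGIN_ERROR = "Invalid username or password."
    GENERIC_RESET_MESSAGE = "If that account exists, a reset link has been sent."

    def hash_password(password):
        # Stand-in only: a real system should use a salted, slow KDF (bcrypt etc.).
        return hashlib.sha256(password.encode()).hexdigest()

    DUMMY_HASH = hash_password("!no-such-user!")

    def login(username, password, user_store):
        user = user_store.get(username)
        # Hash and compare even when the account doesn't exist, so the timing of
        # a failed login doesn't reveal whether the username was valid.
        stored = user["password_hash"] if user else DUMMY_HASH
        ok = hmac.compare_digest(stored, hash_password(password))
        if user is None or not ok:
            return (False, GENERIC_LOGIN_ERROR)  # same message either way
        return (True, None)

    def request_password_reset(username, user_store, send_reset_email):
        # Identical response whether or not the account exists; the real
        # notification happens out of band, via email.
        user = user_store.get(username)
        if user is not None:
            send_reset_email(user["email"])
        return GENERIC_RESET_MESSAGE

    if __name__ == "__main__":
        store = {"alice": {"password_hash": hash_password("s3cret"),
                           "email": "alice@example.com"}}
        print(login("alice", "wrong", store))    # bad password
        print(login("mallory", "wrong", store))  # bad username: same error
        print(request_password_reset("mallory", store, lambda addr: None))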

How effective are these measures in preventing exploits? I'm going to guess not very, though I wish I had some hard data on the subject.

If you're worried about account names leaking, then you'd better not let anyone new sign up for an account, or you'd better make 100% sure you use a good CAPTCHA to prevent leakage during signup.

Or, you'd better hope people don't have to link to each other using their account names, or ever make them public in other postings, etc.

If your web application gets hacked more often because people know valid login names, I'm going to guess you didn't do a very good job of securing the application.

That said, when do you want to take some measures to prevent this type of information leakage? When the attacks aren't against your site but against your users. When someone wants to perform spearphishing against your users, leaked account names and account info make their job a lot easier. They'll probably manage to grab the data anyway if your site is used by anyone at all, but you might want to make their job a little harder.

But remember, it's a tradeoff. If you don't leak some information, your users are going to get confused, call you, waste a service rep's time, and cost you a lot more money than some of the breaches ever would have.

As always, it's a tricky balancing act.

Sunday, February 25, 2007

More Engineering

I hate to keep doing this, responding solely to other people's posts.... but here goes again.

Sylvan wrote about other classes of software: Web Application Security Compared to Other Software.

I think that while the point is taken that web apps are thrown together without a lot of thought, most other software is too. Take a look at the number of network vulnerabilities out there, or the bugs in something as critical as Cisco's IOS, and you'll come to believe there aren't a lot of examples of software being done "right," at least by engineering standards.

There are a few cases where we don't hear of lots of issues - places where either regulation requires a certain standard of due care (financial systems) or where safety is involved (flight control, traction control, x-ray machines, etc.).

It's only in those cases, where there are strong liability concerns, that we actually have what could even be called "engineered software." Everything else is, in my estimation (granted, I'm not omniscient), pretty crap by comparison. If nothing else, it's because the people who wrote it may take pride in it, but they aren't exactly staking their lives and/or jail time on it.

That said, mistakes do still happen. Read Risks Digest for a few months and you'll start hearing of lots of things you couldn't have imagined were possible.

Still, every time I read an article in Risks about root cause analysis on a flight control system or train signal system, I'm reminded how far we are from being able to do that sort of analysis and lessons-learned on regular software.

Oh, and on a final note: does anyone have any estimates on what it would take to fund a real class-action suit against a mainstream software maker for negligence and/or non-fitness for purpose? I'm wondering what sort of fund we'd need to put together to get a lawyer to take the case, and how much we figure it would actually take to litigate and get some case law out there. A dangerous idea no doubt, but worth thinking about.

Thursday, February 22, 2007

Engineering

Read a piece tonight by Sylvan von Stuppe titled: Too Much the Perfectionist.

It got me thinking about engineering again and wondering whether maybe I should have become an engineer.... I'm about to violate one of the items in the engineer's code of ethics (#2, not sure I'm competent), which you can find here: Engineer's Code of Ethics.

Engineers, in the fulfillment of their professional duties, shall:

  1. Hold paramount the safety, health, and welfare of the public.

  2. Perform services only in areas of their competence.

  3. Issue public statements only in an objective and truthful manner.

  4. Act for each employer or client as faithful agents or trustees.

  5. Avoid deceptive acts.

  6. Conduct themselves honorably, responsibly, ethically, and lawfully so as to enhance the honor, reputation, and usefulness of the profession.

Pretty interesting, huh? It isn't often that you come across these sorts of statements in the software engineering world, because it isn't often that the stakes are life and death.

I am reminded of these sorts of points every time I hear about software shipping with bugs, the onus being put on the consumer/customer to deal with the issues, security breaches, etc.

It also makes me think more and more about pushing software liability and what it would mean practically. As previously mentioned here and elsewhere, until we started holding companies responsible for the products they produced and the safety thereof, they didn't start designing for safety.

It does make me wonder, though, how much of a chicken-and-egg problem it is and where to start. What constitutes due diligence in software engineering?

What constitutes:
  • Due care
  • Adequate safety
  • Reliability
  • Failure rates

It's hard to say what would constitute suitability for purpose. At the same time, people tend to sue the car company when their car has "sudden unintended acceleration," even though the NHTSA has pretty consistently ruled that those cases are due to driver error. So, as with all things, your mileage may vary.

The point still stands that we don't yet have any definitions of what constitutes appropriate software engineering, standards of due care, etc. I hate to say it but I'm actually looking forward to the first major lawsuit against a software vendor for a failure in basic suitability to task so that we'll have something to hang our hats on.

Wednesday, February 21, 2007

Security Assumptions

When doing security analysis and architecture, it's often good to start with first principles. Sometimes they are design principles, sometimes they are process/procedure principles.

Sometimes it's the unspoken assumptions we rely upon that get us into a lot of trouble. We start to take old attacks for granted, assuming they have been removed, mitigated, etc.

For example, Sun recently had a vulnerability in its telnet implementation that we haven't seen the likes of since the mid-1990s: the in.telnetd hole, where a user-supplied "-froot" argument was passed straight through to login. It was a relatively simple vulnerability that we all assumed would never surface again, and yet it did. I haven't heard of any disastrous effects, but it sure was scary knowing that sort of vulnerability could creep back in 10 years after it was originally discovered, widely known, and fixed.

Step back for a second, though, and it's clear that the old saying that those who don't remember history are doomed to repeat it does tend to ring true. Just look at the number of simple exploits that get repeated again and again in every new application, operating system, networking device, etc.

So when I sat down the other day to start working on a network architecture, I asked:

  • What threats am I really trying to protect against?
  • What threats am I willing to skip?
  • What assumptions am I making about the inherent security of my building blocks for the network, etc.?

Do I have to remember every single network-based attack and make sure I've specifically designed against it: things like SYN floods and weird packets such as smurf and ping-of-death? Do I have to worry about spoofing on my network?

How do I condense these types of concerns into concrete first principles and designs that eliminate these vulnerabilities from the network or the applications themselves? We have all of those tools available to us, and we have all of those design principles; we just forget to use them 99% of the time when we do architecture and design work.

  • Least Privilege
    • Only let systems connect that need to
  • Simplicity
    • Make networks, firewalls, etc. easy to manage.
  • Automation
    • Get humans out of the equation

It's funny how many of the common vulnerabilities we come across again and again can be mitigated by these relatively simple concepts, properly applied. Things like turning off unnecessary services.....
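
As a tiny illustration of the least-privilege idea, here's a sketch in Python (the allowlist is a made-up example of one host's policy) that scans for listening TCP ports and flags anything you didn't explicitly intend to expose:

    import socket

    # Hypothetical policy: the only ports this host is supposed to expose.
    ALLOWED_PORTS = {22, 443}

    def open_ports(host="127.0.0.1", ports=range(1, 1025), timeout=0.2):
        """Return the subset of `ports` accepting TCP connections on `host`."""
        found = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                    found.append(port)
        return found

    if __name__ == "__main__":
        for port in open_ports():
            if port not in ALLOWED_PORTS:
                print("unexpected listener on port %d - necessary?" % port)

Run regularly, even something this crude turns "turn off unnecessary services" from a slogan into an automated check, which is the automation principle above at work.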

I'm not sure what the point of this piece is, except that sometimes it's good to surface your assumptions about the world, the security context in which you live, etc. Things like:
  • When I call the police, they will generally respond to an incident and not demand a bribe to help me (in most of the US, anyway).
  • Most people can't be corrupted for trivial amounts of money so we can rely on dual-control systems for all but the highest value assets.
  • Most people are honest/good. They have a basic respect for the rule of law, ethics, etc.

It's these assumptions, though, that underlie most of our psychology about security: our feelings of safety, etc.

I had to do a threat analysis for a third-world country, and it was an interesting exercise in questioning which security assumptions I was making at any given time and whether they were valid. You can look at certain parts of the world and throw out the last group of assumptions I just listed. What is striking to Western sensibilities is that these are places where, if nothing else, the Western system of common law isn't quite so entrenched. Property rights are a bit more "fluid." Intellectual property isn't respected. Bribery and corruption are commonplace. Protecting yourself in these sorts of environments is quite a challenge, especially if you actually do subscribe to the assumptions above.....

I'm not sure what the point of this is, except to say that sometimes it's good to write down the underlying assumptions you use to make decisions and evaluate the world, and to try to evaluate how true they really are.

Wednesday, February 14, 2007

User Education, Computer Safety, and Auto Safety

So, some recent discussions about user education and computer safety had me looking for analogies again.

In a response to Jeremiah, I said I thought we ought to compare user education for computer security to user education for other things, like driver education. My contention is that we ought to expect some minimal level of proficiency to operate a computer, just like we do for a car.

The story of automobile safety is more complicated than this, though. Cars exist in a complicated ecosystem, much like computers on the internet. We have car safety systems and the overall driving environment, which includes other drivers, traffic signals, etc.

Car safety systems aren't single-purpose either. Some car safety systems are directed towards the safety of the occupants only: seatbelts, airbags, crumple-zones, safety glass. Other safety systems are designed to protect both the passengers and other drivers by making accidents less likely: ABS, traction-control, AWD.

In general we also configure these safety features to default-on configurations, with the notable exception of seatbelts. We tried the mandatory seatbelts-on feature in the mid-1990s, but it was generally rejected and then abandoned when airbags came out. So we went instead to laws requiring seatbelt usage, and some states/localities even perform random seatbelt checks. The interesting point about these laws is that seatbelts don't actually reduce the dangers to others - they are a safety device that protects only the passengers.

Back to my point...

In cars we have multiple safety systems. We keep improving the safety systems, and we try to configure them in a safe-by-default mode. With certain safety features that aren't on by default (seatbelts), we've passed laws to make their use mandatory. On top of that, we have mandatory testing for all drivers, and we impose different driving tests and rules on regular users and "power users," i.e., those who drive more complicated or dangerous vehicles such as large trucks.

Cars are a relatively new technology. They haven't been around for much more than 100 years (give or take a little), and the landscape is continually evolving. The US government (and other governments) realizes that car safety is multi-faceted, and it regulates not just to drive desired outcomes, but to specify certain mandatory safety features for cars and a mandatory testing regime to assure suitability to purpose. It still took us until about 1950 to get seatbelts in cars, though.

Why is it that we're not willing to do the same for computers?

In the computer world we don't mandate safety features. We don't have any testing regime to ensure that safety standards are being met. We don't have mandatory user education or testing.

But computers can pose a risk to both the user and others without "proper" use. It's not often that a car sitting in your driveway can cause an accident on the other side of the world. Not so for a computer.

When cars first came out we didn't have seatbelts, traffic lights, etc. Things like ABS and traction control were far off in the distance. What drove their adoption was liability, government regulation, general common sense, and an avoidance of the tragedy of the commons.

Perhaps someday we'll be that lucky in computing.

Of course, I'd be remiss if I didn't point out that despite all of the mandatory seatbelt laws, we aren't driving down accident and fatality rates that much. Drivers who are tested with and without a seatbelt behave in a more dangerous fashion when they are buckled up, so seatbelt wearing tends to protect the driver, but maybe not others, as much as we'd hoped.

I came across a study (that I wish I could find a reference for) which said that users in a corporate environment with AV software and other prevention mechanisms actually behave in a much riskier fashion than they would with their home computer. Because the business is responsible for ensuring the security of the system, and it's business rather than personal data that's at risk, users are less risk-averse using their work computers. This doesn't mean the exploit rates are lower, simply that users aren't as careful when they think they have certain protection mechanisms in place.

Alas...

Tuesday, February 13, 2007

Do we need Web Application Firewalls?

In his recent post: Jeremiah Grossman: We need Web Application Firewalls to work

Jeremiah argues two points in favor of WAFs:

  1. The reality is software has bugs and hence will have vulnerabilities.
  2. Modern development frameworks like ASP.NET, J2EE and others have demonstrated big gains in software quality, but what about the vast majority of the world’s 100+ million websites already riddled with vulnerabilities? Is anyone actually claiming we should go back and fix all that code?

I could be reasonably convinced by the first point, but I'm not 100% there, and I definitely don't buy #2.

The catch with a WAF is that someone has to deploy one in front of their application, configure it appropriately, etc. The majority of WAFs aren't deployed at the service providers that host end-user websites written in PHP. Even inexpensive solutions like mod_security require lots of installation and configuration. So using WAFs to fix the vast majority of websites isn't going to help us much.
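
To make that concrete, here's a toy sketch of what a WAF boils down to, written as Python WSGI middleware (the signatures are made-up examples, nothing like a production ruleset): a blocklist that somebody still has to write, tune, and maintain for each application sitting behind it.

    import re
    from urllib.parse import unquote
    from wsgiref.simple_server import make_server

    # Made-up example signatures; real rulesets are far larger and need
    # per-application tuning, which is exactly the deployment cost at issue.
    SUSPICIOUS = [
        re.compile(r"<script", re.IGNORECASE),        # naive XSS probe
        re.compile(r"union\s+select", re.IGNORECASE), # naive SQL injection probe
        re.compile(r"\.\./"),                         # naive path traversal probe
    ]

    class ToyWAF:
        """WSGI middleware that rejects requests matching any signature."""

        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            candidate = unquote(environ.get("QUERY_STRING", ""))
            if any(p.search(candidate) for p in SUSPICIOUS):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Request blocked.\n"]
            return self.app(environ, start_response)

    def hello_app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"Hello.\n"]

    if __name__ == "__main__":
        make_server("127.0.0.1", 8000, ToyWAF(hello_app)).serve_forever()

Note that this only inspects the query string, the patterns are trivially bypassed, and anything stricter starts breaking legitimate traffic. That tuning burden is the point: a box like this doesn't remove the vulnerability, it just argues with the attacker about encodings.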

On point #1, I hear this argued a lot. I guess I have yet to be convinced. We know a lot more about writing safer software now than we used to. We have much better frameworks than we used to. We know ways to write relatively secure software; we just choose not to.

We didn't always know how to build reasonably safe cars. We didn't always know how to build safe bridges, steam engines, etc. We do now, for some definition of safety.

What we're lacking is the motivation either from a liability or responsibility perspective. Until we have those structures in place and people forcing us to improve our software quality and security, we won't.

It's just that simple. Putting more semi-effective crutches in place isn't actually going to make us safer.

Monday, February 12, 2007

Most web security is like finding new ways to spend the loot

Being that my degree is in philosophy, I like to think that what makes good commentary and analysis is a good analogy. So here goes.

A web application is like a safe. Some have core vulnerabilities. Once you've cracked the safe, all of the novel attacks against the browser are just more ways of spending the loot.

There is a lot of research going on in the web application security world. Much of it is interesting, but it often focuses on what sorts of attacks can be done against an app that has inadequate access controls, input/output filtering, etc. Even the smallest crack can be exploited to cause all sorts of disarray, problems, etc.

Problem is, it all starts with vulnerabilities that are all the same:

- Inappropriate filtering of input
- Inappropriate filtering of output
- Bad session handling, generation, etc.

Once you find one of those vulnerabilities - whether it's XSS, overwriting the user's whole JavaScript call tree, etc. - it's all the same...
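
And the fixes for those root causes are correspondingly uniform. Here's a minimal sketch in Python of what closing them looks like (standard library only; the table and values are made up):

    import html
    import secrets
    import sqlite3

    def render_comment(comment):
        # Output filtering: escape on output so user data can't become markup (XSS).
        return "<p>" + html.escape(comment) + "</p>"

    def find_user(conn, name):
        # Input handling: a parameterized query instead of string concatenation,
        # so the name can never be interpreted as SQL.
        return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchone()

    def new_session_token():
        # Session generation: unpredictable tokens from a CSPRNG, not counters
        # or timestamps an attacker could guess.
        return secrets.token_urlsafe(32)

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        conn.execute("INSERT INTO users (name) VALUES ('alice')")
        print(render_comment("<script>alert(1)</script>"))  # rendered inert
        print(find_user(conn, "alice' OR '1'='1"))          # None, not a table dump
        print(new_session_token())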

Is finding new ways of spending stolen money all that interesting? For me, the answer is no. I'm a lot more interested in new types of vulnerabilities, new protection mechanisms, etc., not the billion or so ways I can screw up a site once it's subject to XSS attacks.

So, when papers about something like CSRF or same-site bypass of JavaScript rules come along, I'm interested. Clever attacks that presuppose a systematic weakness in a site just aren't that interesting anymore.