Friday, June 29, 2007

Data Breaches and Privacy Violations Aren't Just About Identity Theft

Chris Walsh over at the EmergentChaos blog had a piece the other day about some of the research they are doing on breach disclosures and what we can learn from them. I made some comments about data breaches as they relate to identity theft and at the time was pretty well convinced that what matters about data breaches is just identity theft.

After reading a follow-up comment from Chris and Dissent, and then Adam's next piece - It's not all about "identity theft" - I think I need to regroup.

Adam made the point that:

Data breaches are not meaningful because of identity theft.

They are about honesty about a commitment that an organization has made while collecting data, and a failure to meet that commitment. They're about people's privacy, as the Astroglide and Victoria's Secret cases make clear.

This is a very good point and one I'd lost sight of in my previous comments. Protecting privacy is about the implied or explicit agreement between the data provider and the data repository/protector. A breach of this agreement constitutes a privacy violation, regardless of whether the law requires disclosure.

One of the problems with current disclosure laws is that their focus is entirely on identity theft. SB-1386 (and most if not all of the other disclosure laws) only kicks in if your personally identifiable information and a private identifier (bank account number, SSN, CC#, etc.) are released together. The end effect of this sort of disclosure regulatory regime is a focus not on privacy but on identity theft. As Adam rightly points out, a lot of damage can be done through privacy violations without the possibility of identity theft ever entering the picture.

Adam listed two obvious examples of data disclosures that had nothing to do with identity theft but that nevertheless were violations of privacy agreements made between the data owner and the data custodian. Another would be AOL's release of search data.

I'm toying with a few thoughts on how you can modify the existing US regulatory regime without undesired effects. The US is distinct from the EU in its relatively lax privacy regulation, but there are at least a few consumer-friendly results that go along with that, such as cheaper financing. Trade-offs abound, and we don't make decisions about almost any of them in an informed fashion. But more on that later when I have a little more time to think.

Wednesday, June 27, 2007

Banning Programming Languages to Fix Problems?

Michael Howard had an interesting piece the other day on the SDL blog and also gave an interview on some similar topics. The subject I'd like to address is the banning of certain things during the development process, and the theoretical and practical aspects of doing so. I'd like to show that banning function calls and enforcing annotations as a coding practice already takes you reasonably far down the path of saying that C/C++ isn't such a good programming language from a security perspective, and that to achieve higher levels of assurance we need to go further down that road.

I'd like to touch on a few points in this piece:
  • The SDL vs. Legacy Code
  • Banning Programming Constructs, Functions, and/or Whole Languages
  • A Sliding Scale of Assurance
The SDL vs. Legacy Code

One of the things I'm struck by in Microsoft's SDL book is how pragmatic it is with respect to making software more secure. They point out repeatedly that products have been delayed and features scrapped because they either weren't secure or couldn't be made secure. While I can't argue with them about whether they've done this, there are at least a few places where we can point to a less than stellar track record of reducing feature sets to improve security. Web browsers and how Windows does file associations come to mind as big security vs. feature wars that the feature folks seem to have won.

Michael and the other folks at Microsoft have done a ton of good work in creating, refining, and implementing their secure software development methodology, the SDL. The SDL, however, reflects the realities of the Microsoft situation - millions of lines of legacy code. No one with that much legacy code can afford to start over from scratch. Anyone who suggested writing the next version of Word entirely in C# would probably get a swift kick. So, the Microsoft security folks are left with taking the practical approach: given that there is a lot of legacy code, and given that they aren't going to make a wholesale switch to another programming language, what is the best they can do in C/C++?

I think this is a fair approach, but I don't believe for a minute that if you asked one of the security guys there what programming language they'd want new things written in, they'd pick C++ over C# for anything other than things like a kernel.

Banning Programming Constructs, Functions, and/or Whole Languages

Michael said in his recent piece:

In all of our SDL education we stress the point that .NET code is not a security cure-all, and we make sure that developers understand that if you choose to ignore the golden rule about never trusting input no language or tool will save you. Using safe string functions can help mitigate risks of buffer overruns in C/C++, but won’t help with integer arithmetic, XSS, SQL injection, canonicalization or crypto issues.

The key point is that languages are just tools; anyone using a tool needs to understand the strengths and limitations of any given tool in order to make informed use of the tool.

One final thought; in my opinion, well-educated software developers using C/C++ who follow secure programming practices and use appropriate tools will deliver more secure software than developers using Java or C# who do not follow sound security discipline.
I can't argue with Michael's point. I also don't know a lot of people in the security world who believe eliminating C/C++ automatically makes you secure. I do know quite a number of people, though, who make the good case that programming in C# or Java significantly reduces the exposure to certain classes of vulnerabilities and overall results in more secure software when used by a developer trained in security.

WhiteHat Security's numbers on the general prevalence of web application security vulnerabilities per language seem to bear this out. In general, applications written in ASP.NET and Java have fewer security vulnerabilities due to language constructs and secure coding frameworks.

I think you only have to look at what Microsoft has done with banning certain functions and requiring annotations to see that they have already taken some steps in the "C++ is bad" direction.

Why do they ban certain function calls such as strcpy? Is it because the function inherently cannot be used safely? No. The reason is that it's tremendously hard to use safely, and by removing it from the programmer's options and replacing it with something that is easier to use correctly, they improve the security of their software.
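
To make this concrete, here's a minimal sketch - my own illustration, not code from the SDL book, and using generic standard-library replacements rather than whatever specific functions Microsoft mandates - of the pattern that gets strcpy banned and what an easier-to-use-safely alternative looks like:

    #include <cstdio>
    #include <string>

    // The pattern that got strcpy banned: the destination size never appears
    // in the call, so any input longer than 63 bytes overflows buf.
    //
    //     char buf[64];
    //     strcpy(buf, input);   // stack smash waiting to happen
    //
    // A replacement where the destination size is part of the call and long
    // input is truncated instead of overflowing:
    void copy_bounded(const char* input) {
        char buf[64];
        std::snprintf(buf, sizeof(buf), "%s", input);  // always NUL-terminated
        std::printf("copied: %s\n", buf);
    }

    // Or sidestep fixed-size buffers entirely:
    void copy_string(const char* input) {
        std::string buf(input);                        // manages its own memory
        std::printf("copied: %s\n", buf.c_str());
    }

    int main() {
        copy_bounded("this input can be arbitrarily long without corrupting the stack");
        copy_string("so can this one");
        return 0;
    }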

Why do they do annotations? They do them so that their static analyzers have an easier time ferreting out certain classes of security defects.
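
For anyone who hasn't seen them, the annotations look roughly like this. This is a hedged sketch using SAL-style macros from MSVC's sal.h as I understand them; the exact spellings have varied across SAL versions, and the snippet only compiles with Microsoft's headers:

    #include <sal.h>       // Microsoft's source annotation language (MSVC-only header)
    #include <cstddef>

    // The annotations don't change the generated code; they write the contract
    // down where a static analyzer (and a human reader) can check callers
    // against it: dst must point to at least dstLen writable wide characters
    // and comes back NUL-terminated, src must be a valid NUL-terminated string.
    void CopyName(_Out_writes_z_(dstLen) wchar_t* dst,
                  std::size_t dstLen,
                  _In_z_ const wchar_t* src)
    {
        std::size_t i = 0;
        for (; i + 1 < dstLen && src[i] != L'\0'; ++i) {
            dst[i] = src[i];
        }
        if (dstLen > 0) {
            dst[i] = L'\0';    // honor the "_z_" (NUL-terminated) part of the contract
        }
    }

    // With the contract recorded, an analysis pass can flag a caller that
    // passes a buffer smaller than dstLen, or a dst that might be null,
    // without ever running the code.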

If we could train developers to use all of the features of the programming language correctly, we wouldn't have to worry about either of these things. We'd simply do training and get on our way. The reality is that we cannot rely on developers to use certain portions of the language properly. We've shown repeatedly that certain function calls are the root of the vast majority of the security vulnerabilities out there. Most of the function calls that Microsoft has banned are ones that have been found to result in buffer overflows.

If we take as our starting position that developers can and will make mistakes, and we're willing to enforce certain rules on them to constrain how they can use a programming language, why not take the next step and ask what else we can do to improve the security of the code we deliver?

There are several other areas of C++ that are problematic from a security perspective. Memory allocation comes to mind; the number of flaws we see from misuse of memory allocation is huge. Why not switch to a language that makes it fundamentally harder, or impossible, to have these kinds of issues?
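
As a small illustration (again mine, and simplified), compare the manual allocation pattern that produces so many leak, double-free, and use-after-free bugs with constructs that make the library or language own the memory for you:

    #include <cstdio>
    #include <memory>
    #include <vector>

    struct Record { int id; };

    // The manual pattern behind a lot of memory bugs: every exit path has to
    // free exactly once. Miss a path and you leak; free twice or use after
    // free and you've handed an attacker raw material.
    //
    //     Record* r = new Record;
    //     ...
    //     delete r;   // easy to forget, easy to do twice
    //
    // Ownership handled by the library/type system instead of by discipline:
    void process() {
        auto r = std::make_unique<Record>();   // freed automatically on every path
        r->id = 42;

        std::vector<int> buf(64);              // knows its own size
        buf.at(10) = r->id;                    // throws instead of corrupting memory
        std::printf("stored %d\n", buf.at(10));
    }                                          // r and buf released here, exactly once

    int main() {
        process();
        return 0;
    }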

Take the annotations Microsoft is using to hint their static analyzers. What these amount to are lightweight programming-by-contract constructs used for after-the-fact analysis, rather than being enforced by the programming language with violations treated as errors. Why not switch to a programming language that requires annotations rather than treating them as an after-the-fact construct?

A Sliding Scale of Assurance

In the end what we have is a sliding scale of assurance and functionality. Processes such as the SDL that mandate practices but don't require us to switch tools only get us so far down the assurance path. It is further than many, if not most, companies are willing to go, and Microsoft deserves credit for making such an effort. At the same time there are things the SDL doesn't necessarily do:
  • Enforce the use of tools and programming languages that are more likely to reduce security vulnerabilities
  • Eliminate software features that are truly problematic from a security standpoint and yet users have come to expect
If we want the SDL to be about delivering fundamentally more secure software at higher assurance levels, then it must mandate certain programming methodologies that result in higher quality code. Without these sorts of mandates we're still just putting band-aids on our existing software development processes.

Assurance comes at a cost: development costs, testing costs, etc. When we switch to tools that eliminate certain classes of vulnerabilities, we remove both the effort developers would spend on securing that code and the time testers spend looking for those classes of defects. In the end this is what Microsoft has done by removing certain programming constructs; they just haven't made the slightly bigger jump to another programming language, for the reasons I've already explained above.

Upshot

What I think this comes down to is: use C++ if you must to muck with legacy code, but please don't do new development there. From a security perspective it's going to be a lot more expensive if you do.

Monday, June 25, 2007

More on Software Liability

About five weeks ago Symantec messed up their AV signatures and accidentally classified some Windows system files as viruses. The files were only part of the Simplified Chinese version of the OS, so presumably this configuration didn't get as much testing as a regular one.

Yesterday it was announced that they would compensate those folks who got hammered.

I'm going to be very interested to see how this plays out, whether the lawsuits move forward, etc. This is a pretty clear example of harm done by Symantec, and certainly not intended behavior. It's not clear whether this falls into the "didn't test enough" category of mistakes or something else. Perhaps they bypassed their internal processes to release these signatures? Either way I bet they are hoping they have a good internal audit trail to show exactly how and why this happened.

I'll be watching this one to see whether any of these folks persist in their lawsuits and whether this ends up making any case law about software liability.

Wednesday, June 20, 2007

On Bad Post Titles

Sometimes you start writing an entry intending to cover a subject a certain way but by the time you're done you've sort of switched gears but you already wrote the title and you forget to go back and fix it.

Rothman pointed today to my post from the other day "Building Effective Metrics." He rightly pointed out that the piece isn't really about metrics. I think he's slightly off the mark though on his statement that I was writing mostly about risk management.

I think the point I was making was about culture change and secondarily about risk management. The old story/analogy about a frog in boiling water is at least slightly appropriate. Though when I went to look up the story I found out more than I wanted to on the Wikipedia article for "boiling frog."

If you want to achieve success in implementing new parts of a security program, you've got to start with sustainable processes and make them routine before you can get an organization to actually make progress on reducing risk. That's a really short synopsis....

The piece could probably have been better titled, so I guess I'll just try to do better next time.

Sunday, June 17, 2007

Building Effective Metrics

The topic of metrics for Information Security comes up quite often. I've been in quite a number of situations where a relatively sparse infosec program exists and no metrics exist. The question often comes up of what types of metrics to gather first to measure program status, effectiveness, etc. And, when rolling out a new element of an infosec program what metrics to focus on first.

I've come to the conclusion that process maturity based metrics are the best thing to worry about when you're building an infosec program or a new feature of an existing program.

Let's take several areas of Infosec and examine my premise.
  • Vulnerability Management (discovery and remediation)
  • Anti-virus software
  • Software Security
Vulnerability Management

When you're first starting to build a vulnerability management program you're worried about a few things:
  1. Measuring existing vulnerabilities
  2. Remediating vulnerabilities
  3. Eventually, reducing the number of vulnerabilities that get "deployed"
Most people try to tackle these items in numerical order. They buy a vulnerability scanner, they start scanning their network, they come up with a giant list of security vulnerabilities, and then they try to tackle #2, remediation. They generally set the bar pretty high in terms of what they expect the organization to fix - for example, all level 5, 4, and 3 vulnerabilities in a Qualys scan. They push the vulnerability report to the systems administration staff, tell them to go fix the vulnerabilities, and wait an eternity to hear back about what has been done. Usually they get upset that things aren't being fixed fast enough, that new vulnerabilities surface faster than the old ones can be closed, and they either give up and start ignoring their vulnerability scans, or they get extremely frustrated with the admins and a constant battle ensues.

Instead, I like to tackle these items in reverse order of the above list:
  1. Reduce the number of new vulnerabilities that get deployed
  2. Implement a remediation process
  3. Search for vulnerabilities and feed them into #2.
In my experience most people want to do a good job at what they do. They don't want to release systems with holes in them, have their systems get hacked, etc. Unfortunately they aren't security experts and don't know what to focus on. They need assistance and prescriptive guidance on exactly what to do and when to do it.

Step 1: Reduce the Number of New Vulnerabilities

Start with something like a system hardening guide and approved software list. You pick things like the CIS hardening standards and ensure that all new systems getting built go out the door with your hardening applied. In this way you cut down on the number of new vulnerabilities you're introducing into your environment.

Step 2: Implement a Remediation Process with Metrics

Work on a remediation process. Focus on elements such as:
  • Who is responsible for requesting something be remediated
  • Who is responsible for determining the scope of the request and its priority/severity
  • What testing has to be done, and who must approve it in order to push something to the environment
  • How do you track status through each of these items including time taken, roadblocks, etc.
  • How much did it cost to fix each vulnerability
Building your remediation process before you start up the firehose gives you several advantages:
  1. You can start slow at feeding vulnerabilities into the remediation process and get useful metrics about the costs of remediation.
  2. You don't cause undue friction with the operations staff by asking them to take on too much too soon.
  3. You have a well-established process for fixing any/all vulnerabilities you discover.
Once you've got this process created you can measure how effectively you're remediating any given vulnerability you discover. You have process metrics for your remediation process, rather than an ad-hoc best effort situation.

Step 3: Search for Vulnerabilities and Feed Them to Your Remediation Process

Once you have a repeatable remediation process, you're ready to start feeding it new vulnerabilities. In an organization that isn't used to routine patching, turning off services, and remediating vulnerabilities, you can't open up a firehose of vulnerabilities on unprepared staff. The best approach is to use the metrics you've created in step 2 and be selective about which vulnerabilities you ask to be fixed. Once you have the process in place you can choose to start with a subset of your vulnerabilities - for example, your Qualys level-5 vulnerabilities. Ramp up slowly so that you can adequately measure the impact of your changes, the value they are providing, and the costs of remediating.

Get people used to being accountable for fixing vulnerabilities, for testing the fixes, and for measuring the results. Once you have that in place you're free to ramp up the security level you want to achieve in a measured fashion.

Eventually, once you finally have a handle on these three steps you can move on to more advanced metrics such as:
  • Average time to remediation
  • Overall vulnerability score
Until you have the first three pieces in place though focusing on your overall risk/vulnerability isn't that interesting. Even if you don't like the score, you're never going to get it lower without a repeatable process in place to remediate.
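
As a rough sketch of what I mean by "average time to remediation" - the record fields here are made up for illustration, not a real schema - the metric itself is nothing more exotic than this:

    #include <chrono>
    #include <cstdio>
    #include <vector>

    // One row per remediated finding, pulled from whatever tracking system the
    // remediation process feeds. The fields are illustrative only.
    struct Remediation {
        std::chrono::hours reported_to_fixed;   // time from report to verified fix
        int severity;                           // e.g. Qualys level 1-5
    };

    // Average time to remediation for a given severity level.
    double avg_hours_to_fix(const std::vector<Remediation>& rows, int severity) {
        long long total = 0;
        int count = 0;
        for (const auto& r : rows) {
            if (r.severity == severity) {
                total += r.reported_to_fixed.count();
                ++count;
            }
        }
        return count ? static_cast<double>(total) / count : 0.0;
    }

    int main() {
        std::vector<Remediation> rows = {
            {std::chrono::hours(72), 5},
            {std::chrono::hours(240), 5},
            {std::chrono::hours(24), 3},
        };
        std::printf("average hours to fix level-5 findings: %.1f\n",
                    avg_hours_to_fix(rows, 5));
        return 0;
    }

The calculation is trivial once the remediation process is actually producing these records; the hard part is the process discipline that generates the data in the first place.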

More on process related metrics for Anti-virus and Software Security in a later post.

Microsoft's "The Security Development Lifecycle" - Chapter Five

Chapter 5: "Stage 0: Education and Awareness"
Chapter 4 here


The authors credit, roughly speaking, two things with the success of the SDL at Microsoft:
  1. Executive support
  2. Education and Awareness
Arguably they couldn't have achieved #2 without #1, but it's interesting that they rank education and awareness as highly as they do in the SDL, given how much we've read lately about how successful (or not) most companies are with their general security awareness campaigns.

One interesting point in the introduction to the chapter is the reminder that secure software isn't the same as security software. Secure software is software that is resistant to attack; security software is software that is specifically intended to address a security concern.

The chapter has a history of security training at Microsoft. Based on the descriptions, even before the formal SDL Microsoft was spending considerable money on training and development for its engineers. My guess is that if you've already got a corporate culture of training and education, implementing the specific training required for the SDL is going to be a lot easier than it would be at a place that doesn't already take training that seriously.

The chapter also has an overview of the current training courses for secure development available at Microsoft. I'm hoping that their future plans include making these public even on a for-fee basis so that the rest of the world can benefit from some of the work they have done.

One of the sections in the chapter is on the value of exercises/labs as part of the training. They added a lab section to their threat modeling class and feedback scores went up - and presumably the students' understanding of the material as well.

Having attended and given several security training sessions I can definitely recommend this approach. I've had software security training from both Aspect and the old @stake folks and both classes had an interactive lab component. I took away a lot more from the courses than I have from most of the other security classes I've ever done.

One other interesting section in this chapter is Measuring Knowledge. At the time of the book's writing Microsoft didn't have a testing or certification program in place for their developers. I haven't had a chance to catch up with the SDL guys to see what their take is on the new SANS Software Security Initiative. I'll be interested to see how the SSI stuff shakes out and Microsoft's involvement.

Overall it's interesting to see how much attention and dedication Microsoft has devoted to the SDL from a training perspective. The cost of the training alone, in an organization the size of Microsoft, is going to be enormous.

If you don't already have a robust internal training program in place in your organization, this chapter does give a few hints on how to build one on the cheap. At the same time the chapter is more about the structure of the Microsoft training program than exactly how to go about building one. At the end of the chapter you're fairly convinced that you need a robust training program, but if you don't already have one you're going to be searching for a lot of external help to build it.

How I Got Started in Security and the Value of a Mentor

So, a few people out there have been blogging about how they got their start in security. I figured I'm exactly the sort of exhibitionist who would post that sort of thing, so here goes.

Warning: This entry is long and probably more than a little boring and self-indulgent. You've been warned..

I've been doing paid IT work for roughly 14 years. I got my start doing it for pay as a student at the University of Chicago in the main student computing lab. I was doing basic PC, Mac, and later Unix administration. I had a pretty strong Unix background from a few years I spent as a student at RPI where student computing was an exclusively Unix affair.

After working at the University computing lab as a regular worker, I was put in charge, along with a colleague, of running a new cluster of SGI Indy machines. Our job was pure Unix system administration of 9 SGI machines. We were responsible for all aspects of system administration, and I learned very early on that doing system administration at a university is rather different from doing it in most other environments...
  • Permissive culture and lack of definitive policies
  • Security not a priority except insofar as it caused the machines to be unavailable.
  • Insider attacks are at least as prevalent as attacks from outsiders
So, I cut my security teeth in that environment. Even though I was officially a Unix admin, I spent 50%+ of my time on security concerns. I even brought up one of the first semi-official Kerberos realms at the UofC. Only about 5 of us used it at the time, but it sure did teach me a lot about distributed authentication.

One person I'd like to single out for how much he helped me in learning about security is Bob Bartlett. When I first started doing sysadmin work at the UofC, Bob was relatively new to the main computing group. Part of the University ethos and culture is a respect for educating, training, and mentoring. When I was just a student I used to go hang out in Bob's cube area when I had some free time to see what sorts of things I could pick up. Bob was the most amazingly patient guy I ever met. No matter how many stupid questions I asked or crazy schemes I came up with, he weathered the storm and never told me to stop coming around. I learned a lot about Unix security, the value of layered defenses, how to do forensics on a compromised machine, etc.

It's amazing how much value you can get out of a good mentor: how they can show you ways of thinking, ways of working, how to interact with other people, etc. I can't say I learned all of those lessons and I'm certainly not a Bob clone, but of all people he's probably most to blame for me being in security today.

I spent two more years working at the UofC maintaining the main interactive Unix machines for the campus. I talked a bit in an earlier post about how I don't think we've come that far in the last 15 years, but maybe I'm just jaded.

I then spent 4 years at Abbott Laboratories working in the pharmaceutical research division doing Unix admin. I wasn't officially in charge of security but since I was roughly the only person in the whole group that knew a lot about the subject I became the firewall administrator, ACE server administrator, in charge of network security monitoring and forensics, etc. I brought up the first network IDS there using first Shadow and then NFR.

The area I worked in was highly regulated so I got my full dose of filling out logbooks, worrying about audits, etc. It helped our paranoia that Abbott had quite a number of adverse regulatory issues during those times, which made us that much more serious about security. That said, the regulations that apply to the pharmaceutical business aren't that different from other regulations such as PCI. They are supposed to guarantee a certain level of security, but half the time they just result in a lot more paperwork.

After 4 years at Abbott I left to go work for a software company in downtown Chicago to be the sole security person. I was responsible for all aspects of security except physical. I spent 5 years at CCC working on pretty much everything security - Policies, Procedures, SOX, Firewalls, IDS, Application Security (coding standards, threat modeling, application pen testing), vendor relationships and contracts, etc.

The scope of the job was great but unfortunately the industry they were in wasn't in need of the kind of serious security I was really looking to do. So, I started looking and eventually moved to the Bay Area to take a job with a large financial services firm. I don't like to talk about who it is, but if you use google and linkedin it can't be that hard to figure it out.

I think one of the main skills I bring to the table is my background doing a lot of different IT work in a lot of different types of environments. I worked for a university, a heavily regulated pharmaceutical company, a software company, and a financial services firm. I've done everything from desktop support to large-scale Unix administration to software security work. I think it's both breadth and depth that are to be valued in Information Security. Hopefully I've got some of both, but I guess you can be the judge.

Saturday, June 09, 2007

Microsoft's "The Security Development Lifecycle" - Chapter Four

Chapter 4: SDL for Management
Chapter 3 here

Chapter 4 is about ensuring the success of the SDL through management support. As such it's really the first chapter that I think starts to address some of the core issues of the SDL. This chapter makes a lot of sense in relation to Dave Ladd's post from the other day about culture change.

There are a few key takeaways for this Chapter:
  • Management support is critical for implementing a software security effort
  • Continuous messaging, reinforcement, recognition for a job well done, and training are all a part of the management support and culture change
  • The SDL can add 15-20% overhead when it is first implemented but considerably less thereafter.
Chapter four again includes some background on what drove Microsoft to implement the SDL. Without throwing cold water on the four main reasons outlined:
  • Customer complaints
  • Actual attacks
  • Press coverage - bad PR
  • Frequent patching required diverting developers from new products to maintenance
I think another concern is liability. Microsoft's position as a monopoly in the desktop space makes them much more liable than other software companies might be. Regardless of the specifics of the list, all of these point to a purely economic analysis for why Microsoft chose to go down the SDL route. The SDL isn't in place merely because a bunch of smart developers thought Microsoft should develop more secure software. The SDL is in place because Microsoft management made the decision that the benefits of the SDL outweighed the costs of its implementation. This is a key point in gaining and sustaining management support.

Now a review of a few specifics of the chapter where I think they got it right and/or I have a few small comments.

On page-44 they discuss the vulnerability rate in Microsoft products post-SDL and the improvements seen. As in earlier chapters, they claim that the discovered vulnerability rate has gone down and try to correlate that with the remaining vulnerabilities in the codebase. I'm not sure I agree with this assessment methodology. The jury is still out on 0day attacks, on people not releasing vulnerabilities in certain key pieces of software, etc. Forensics data isn't widespread either as it relates to attack vectors. Most companies aren't very public about exactly how they were attacked, what vector was used, etc. So, we're a little blind on exactly how many attacks there have been against SQL Server 2000 SP3. Perhaps Microsoft has better forensics data than other folks because people actually report hacks to them?

Another point made is that as Microsoft's software has gotten harder to attack, the attackers have focused their energy elsewhere. It's like the old joke....

Two guys are in the woods and they come across an angry grizzly bear. The bear starts to chase after them. One guy says "I'm sure glad I wore my running shoes today". The other guy says "It doesn't matter what shoes you are wearing, you can't outrun the bear." The first guy responds "I don't have to outrun the bear - I just have to outrun you."

On page-48, in the section "Factors That Affect the Cost of SDL," they discuss the costs for existing vs. new development. One assumption that seems to be baked into their analysis is that implementing the SDL is an all-or-nothing endeavor and that, roughly speaking, it happens all at once. While that doesn't change the cost of implementing over the long term or the short term, much of the language of the book speaks as if the SDL springs into being all at once.

On page-50, in the section "Rules of Thumb," they give the estimate that the SDL can add 15-20% in initial implementation costs, but less thereafter as a product matures. I'd be interested to know how many positive side effects they see from SDL implementation that aren't directly security related. Many elements of the SDL enforce a certain rigor on the development process that might not exist otherwise. You can't do proper threat analysis without good architecture documentation, and the architecture documentation has to be accurate or the value of the threat analysis is going to be lower. In organizations that don't have a robust software development process to begin with, I'm guessing that the artifacts of the SDL provide considerable value outside of security.

My only complaint so far about chapter four is that they don't spend more time discussing metrics. Key to gaining and retaining management support is demonstrating the effectiveness of the program. Chapter four has a limited section on SDL metrics, and I can see that some chapters have explicit sections on metrics while others don't. The metrics provided in chapter four aren't very extensive and are sort of meta-metrics about the SDL itself - so we'll see whether this is covered in more detail later.

Monday, June 04, 2007

Questions About Software Testing and Assurance

Measuring assurance in security is difficult. On the one hand we'd like to be able to objectively measure the work-factor an attacker must expend in order to successfully attack something. Unfortunately it is the rare case where we have this level of certainty for the work-factor. Only in situations where the work reduces to breaking a cryptosystem can we have any true estimation of the work required to break our software, and this of course relies on proper cryptosystem operations from the code and operational (key management, etc) perspectives.
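
To make "work-factor" concrete in the one case where we can actually compute it (this is just the standard brute-force arithmetic, and it assumes key search is the only viable attack):

    brute-forcing an n-bit key takes about 2^(n-1) trials on average
    for n = 128, that's 2^127, or roughly 1.7 x 10^38 trials

There is no equivalent number we can attach to a parser written in C or to a web application's session handling, which is exactly the gap I'm describing.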

James Whittaker's recent piece on the Microsoft SDL blog about "Testing in the SDL" is quite interesting when looked at through the lens of assurance, which is what he says he is describing:

Security testing has been – and will always be – about assurance, that is, assuring that the product as built and shipped has been thoroughly tested for potential vulnerabilities. Bug detection and removal will continue to be important until a time in the future comes when we can deploy provably secure systems. However, we’re likely never to get to such a future without learning the lessons that testing is teaching right now. Anyone can write a system and call it secure – it’s only through watching real systems fail in real ways that we learn to get better. Moral of the story – testing is by far the best way to show us what we’re doing wrong in software development.
What is so unfortunate about this statement is that James is correct. In the vast majority of cases we simply can't reduce any element of the software to a purely objective measure of work-factor for an attacker. We're generally still so bad at writing software that we end up designing security in - but in the end - rely on testing to give us any certainty about the quality of the software we've created.

We lack the ability to analytically evaluate software from a quality and/or security perspective, and we're left with the crude approximation that is testing. It's sort of like needing to do an integral of something and discovering that the function has to be evaluated numerically, so you're left with an approximation rather than a closed-form solution. You know that no matter how you try you're just getting an approximation of the answer and that is the best you can do. The real world is like that sometimes, but oh to have the certainty that a simple equation would give you.

I think the one saving grace here is that we know that reusable components are a good approach to solving this problem. Robert Auger wrote a nice piece on this titled "The Business Case for Security Frameworks." In it he argues that reusable security frameworks are a great initial defense against insecure coding practices. I'll agree wholeheartedly and on top of that throw in that once you start using frameworks like this, you hopefully reduce your testing burden to new code and new attack vectors rather than going over the same ground again and again. James I'm sure knows this all too well.

There has been quite a bit of discussion about exactly this sort of analysis at Microsoft with respect to Vista and how its newly created and properly engineered code will fare over time against attackers. Despite recent bogus claims to the contrary - Vista seems to be holding up pretty well.

What I'd still love to have though is an analytical way of measuring code quality and/or work-factor for attack to have a "real" measure of how resistant to attack a piece of software is. Schneier has been writing forever about an Underwriter's Laboratory for security. I wish I could feel more confident that we are getting closer to having an answer to this type of question.

Blogger Stupidity - Only Not the Kind You Think

I posted last night about a problem with my feed and questioned whether the problem was with Feedburner or Blogger. Turns out the problem was with a blogger - me.

In Blogger an item is created and titled based on the original title you give it. If you subsequently edit the title you end up creating a brand-new entry rather than overwriting the original item with a new title.

So, I ended up with two identical items last night except for the title. It's kind of counter-intuitive that Blogger does this. In most tools the instance of something you're editing doesn't change just because you change the title. Turns out that in Blogger they've turned that on its head.

Not sure why it needs to work that way, but it does - and now you've been warned.

Sunday, June 03, 2007

Feedburner or Blogger Stupidity?

So, I wrote a blog entry earlier tonight and spelled the title wrong. Looks like feedburner picked it up twice. I really only wrote one entry about the SDL Chapter Three.... I didn't really write another entry about Chapter Thre

I haven't yet determined who to blame for picking this up wrong. My guess is that feedburner is keying on title rather than URL for the blog entry, something perhaps they could see their way clear to fixing.

Microsoft's "The Security Development Lifecycle" - Chapter Three

Chapter 3: A Short History of the SDL at Microsoft
Chapter 2 Here

Chapter 3 is mainly a history of the SDL at Microsoft. While interesting for a historical perspective, the main bits from a security knowledge perspective are:

  • Developing the SDL was a long process and meant a lot of culture change at Microsoft
  • Developing the SDL took an iterative approach and evolved over time
What I found most interesting about the chapter was the bit of sleight of hand regarding Microsoft's lack of security in Windows 95 and other releases from around that time. The authors say:

Windows 95 was designed to connect to corporate networks that provided shared file and print infrastructures and to connect to the Internet as a client system, but the primary focus of security efforts was the browser - and even there, the understanding of security needs was much different from what it is today.
I can interpret this statement two ways:
  • Charitable: We know more now than we did then, or than we reasonably could or should have known then, and so we didn't incorporate a lot of security features and process into Windows 95.
  • Uncharitable: We didn't pay a lot of attention back then to security. We could have and should have but we didn't. As such our understanding was less than it really should have been and we did a crummy job with Windows 95.
I'm inclined to believe the truth is somewhere between these two points. I don't want to point fingers at anyone in particular about it but the idea that an operating system ought to have some security built into it, and that the Internet or networks in general could be scary places wasn't exactly unknown in the mid-1990's. It isn't as if computer security was invented in 1995 or something after all. At the same time other single-user systems weren't necessarily worse than Windows 95 either, they just didn't have the presence that it did.

The only other complaint I have about this chapter surrounds the vulnerability measurement metric they use to measure software security in a few of the examples. A comparison is made between SQL Server 2000 and SQL Server 2000 SP3: while SQL Server 2000 had 6 vulnerabilities reported and handled over its lifecycle up to SP3, SQL Server 2000 SP3 had only 3 vulnerabilities reported in the next 3 years. Unfortunately at this point in the book we haven't yet covered software security metrics, so until I get there I can't make a strong methodology complaint. Using this sort of statistic seems a bit misleading to me, however. Sure, vulnerability reduction is a key metric, but the count of reported vulnerabilities isn't necessarily the key metric to focus on.

Saturday, June 02, 2007

More thoughts on training

My wife pointed out a very interesting article to me yesterday that she'd come across while reading Richard Dawkins's site. The article was about a lack of training/education in medical schools and how this lack of a basic understanding of evolution is at least partially to blame for drug-resistant bacteria.

I also read Dave Ladd's excellent piece, "Oil Change or Culture Change". Dave says:

Furthermore, many of the processes used by SDL (and other methodologies) are generally acknowledged as effective in identifying and mitigating security threats. Pondering this notion is what lead me to my realization about culture change, and prompted a question: If this stuff has been around for awhile in some shape or form, why aren’t more people doing it?


Dave's point is that we're not dealing with new knowledge here, for the most part. We're dealing with a failure of education. In my previous post about security training and what training is important, I mentioned I'd had a conversation with Gary McGraw. Reading the pieces on Dawkins's site and Dave's piece made me remember something else Gary said that I think is very appropriate. I'm paraphrasing here, so I hope I'll be forgiven, but when I asked about the source of the problem Gary responded with an answer about your first engineering class vs. your first CS class.

Engineering Class: Professor opens class showing video of horrible engineering accident. Maybe something like the Tacoma Narrows bridge or the Challenger accident. In ominous voiceover - "If you don't study hard and do a good job, you could build something like this. People could die!!!!! Don't mess up, this is serious stuff."

Computer Science Class: Hello, look at this cool stuff you can do. Let's write a program that prints "Hello, World".

I think Gary is right. The culture we're trying to change is corporate culture, but it is equally computer science and programming culture. In a sense we have a chicken and egg problem. Until we have more companies demanding to treat software development as an engineering discipline, our universities won't be motivated to turn out students that treat it as such. And until schools start turning out software engineers rather than software developers we aren't going to have the talent necessary to get corporate culture change.

I don't know that we're at some sort of crisis point for software development education, I'd hate to be that melodramatic. At the same time I think what we're seeing is a disconnect between the realities of what it means for software engineering to exist as a true discipline, and our capability of achieving it. If we started with the mindset that it is engineering we're doing rather than "development" then we might stand a chance at making some progress.

I'll be interested in knowing the differences that exist across different university programs in CS and/or CSE to see whether I'm wrong. Maybe there is broad support for a CS curriculum based in engineering rather than development.

Time for a bit of research on different CS programs and their focus.