Rudolph's point is that security is reactive: we develop security practices and defensive techniques only in response to attacks and new threats. I'd argue that warfare isn't all that different. In fact, I can't think of many domains with attackers and defenders where this isn't the case. At the same time, I do take issue with a few of his pronouncements that we're always reacting.
I started doing network and system security in 1994 or so. I was administering a network of SGI Indy systems on a college network and had to secure them against both outsiders and insiders. Some of the techniques we used to secure the machines and verify they were in a known-good state:
- Automated configuration management (homegrown tools)
- File integrity checking (tripwire)
- Restricted Shells
- Known-good builds (done by hand)
- Hardened defaults (services off, extensive logging enabled)
- Wietse Venema was a god, is probably still a god. Logdaemon and tcpwrappers were some of the best tools ever for securing a network.
- Network forensics via TAMU netlogger
- Automated log monitoring tools (home grown)
- Keeping up with patches
- Remote port checking (strobe)
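The file integrity checking in that list is worth spelling out, because the core idea hasn't changed since tripwire: hash everything once to build a baseline, re-hash later, and report anything added, removed, or modified. A minimal sketch of that idea in Python (the paths and baseline handling here are illustrative, not how tripwire itself stores its database):

```python
# Tripwire-style file integrity checking: build a baseline of content
# hashes, then re-scan and diff against it to find drift.
import hashlib
import os

def hash_file(path):
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(root):
    """Map every file under root to its content hash (the baseline)."""
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            digests[path] = hash_file(path)
    return digests

def compare(baseline, current):
    """Report files added, removed, or modified since the baseline."""
    added = set(current) - set(baseline)
    removed = set(baseline) - set(current)
    changed = {p for p in set(baseline) & set(current)
               if baseline[p] != current[p]}
    return added, removed, changed
```

A real integrity monitor also protects the baseline itself (offline media, signing) so an attacker can't simply re-baseline after modifying a binary.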
- How different are today's file integrity monitors from tripwire in 1994?
- How much better is system logging than it was then?
- How much better are network forensics than they were then?
- Has anything gotten any better?
I think the problem actually being highlighted is that in computing, as in car design and fashion, what was old is new again, over and over.
Most of the components above get bundled together and called NAC or "autonomous systems."
We take network scanning and the like, add in a bit of vulnerability data, and we get vulnerability scanners. Sort of useful, but maybe if I just turned off the ports I'm not using and didn't have such a complicated setup, the problem would fix itself.
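The scanning side of that is simple enough to sketch. In the spirit of strobe, a port check is just a TCP connect attempt against each port; the host and port range below are placeholders, and of course you only point this at machines you're authorized to test:

```python
# Bare-bones remote port checking: attempt a TCP connect to each port
# and report which ones accept. This is the core of strobe-style scanners.
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of ports on host that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found
```

Running `open_ports("127.0.0.1", range(1, 1024))` against your own box is still a decent sanity check: anything listed that you can't explain is exactly the "turn off ports I'm not using" problem.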
For the most part we know what we need to do, we know how the attackers are going to attack, we've just spread ourselves so thin that we can't actually defend against them anymore.
We've seen the enemy, and it is us. The fundamental thing that needs to evolve isn't the technology, it's our use of it. As we grow our use of technology, pushing it further and further, we're coming to understand the human limits of running it.
Now that I'm helping manage security for a much larger organization 13+ years later, I'm not worried (all that much) about new and novel attacks. I'm worried about tracking and managing the assets I have and how they are configured, managed, and monitored. I'm worried about who is using what data when, and whether they are copying it, sending it, or releasing it.
Where we need the evolution is in systems that work better from the beginning. We need to make sure that the same old security problems don't keep coming up again and again and again.
- I need operating systems that ship without all services turned on.
- I need operating systems that let me easily set a security policy, and alert me to deviations.
- I need operating systems that monitor their integrity, tell me about deviations, and let me automate alerts.
- I need common logging across all of my devices so that I don't need a super complicated SEM to manage and interpret everything.
- I need a flying car
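Flying car aside, most of that wish list boils down to one operation: declare a policy, then detect drift from it. A toy sketch of that, assuming a policy is nothing more than a set of services allowed to run (the service names and the shape of the observed input are hypothetical):

```python
# A toy security policy with deviation alerts: declare what is allowed,
# compare against what is actually observed, and flag the difference.
# The policy contents here are illustrative, not a recommendation.
ALLOWED_SERVICES = {"sshd", "syslogd", "ntpd"}  # hypothetical policy

def policy_deviations(running, allowed=ALLOWED_SERVICES):
    """Return (unexpected, missing): services running but not allowed,
    and allowed services that are not running."""
    running = set(running)
    unexpected = running - allowed   # should trigger an alert
    missing = allowed - running      # e.g. logging daemon died
    return unexpected, missing
```

The hard part was never this set difference; it's getting a trustworthy inventory of what's actually running across thousands of machines, which is exactly the asset-tracking worry above.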
So in the end I don't necessarily disagree with Rudolph on the reactivity point; if anything, things are even worse than he thinks. We've spent the last 15 years making pretty much zero progress on anything.