Tuesday, August 07, 2012

Whose credentials are they? Mine, or yours?

I've been spending a bunch of time lately thinking about usernames, passwords, and other types of credentials, and the concept of "ownership".

When you get a credit card, on the back it typically says something like - "Your card is issued and serviced by XYZ Bank pursuant to a license from Visa USA.  Its use is subject to the terms of your Cardmember agreement".

The credit card isn't really your property; it is the property of the bank, and you are just being allowed to use it for payments.

When you sign up for an account online and create a username and password, that website has a decision to make:

  1. Those credentials belong to the website.  They aren't the user's property; they are the property of the website, and their use is subject entirely to the terms of service of that website.
  2. Those credentials belong to the user.  Their use, when the user should use them, where else the user uses them, and so on are entirely under the user's control.
Since users often (always?) reuse credentials across websites, how any individual website treats user credentials is shaped heavily by which of these two views it takes.

A website that would like to pretend that credential reuse doesn't occur, or isn't its concern, might not protect credentials the way a website does that believes users retain a sort of property interest in them: that users might use them at other sites, and that only the users themselves can decide exactly how important those credentials are.
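To make the contrast concrete: a site that treats credentials as the user's property, likely reused elsewhere, will typically store only a salted, deliberately slow hash, so that a breach of its own database doesn't hand attackers a password that also opens the user's accounts at other sites. A minimal sketch in Python using the standard library's scrypt (the function names and parameters here are illustrative, not a prescription):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a slow, salted hash; store (salt, digest), never the password."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

A site taking the other view, where the credential is just its own access token, has weaker incentives to pay the computational cost of slow hashing, since a fast hash "protects" its own accounts just as well.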

I'm not suggesting that one view is right or wrong, but I do think this attitude toward credentials and who owns them can play a major role in how websites view their rights and obligations toward their users.

Thursday, January 05, 2012

Why do people expect so much more from mobile platforms?

Reading Veracode's recent post: Mobile Security – Android vs. iOS, which is an infographic comparing Android and iOS security, I'm left with a few questions, some of which I posted as a comment on their site.

While the graphic does a good job of summarizing the notable differences between these two mobile platforms, I think it approaches the problem with a set of underlying assumptions:

  1. They assume that mobile platforms are fundamentally different from desktop platforms in terms of what services, facilities, etc. they should provide.
  2. They assume a different/new/enhanced level of responsibility by the mobile platform vendor for security and privacy than we've typically expected from platform providers.
For example, in the section on basic security capabilities they say - "Security and privacy aren't thoroughly tested and unauthorized access to sensitive data has already occurred in both the App store and Android Marketplace."

While this is undoubtedly true, the same can be said about the PC, the Mac, Linux, and any other software/OS platform that is "open" and doesn't try to control and lock down all third-party software distribution.   

Perhaps the underlying argument is that new platforms should come with more security controls, and that the ecosystem should be more secure and guaranteed to be so by the platform provider.  I haven't seen those promises made explicitly by mobile platform vendors, though they often make them implicitly.

Mostly what I see are people expecting much more from their mobile phone platform than they do from their desktop/laptop platform, and I'm not entirely sure why.  Are there a few new threats?  Sure: location privacy, and the ability to perform actions that cost money.  The latter isn't really new, though; malware that used people's modems to call premium phone numbers is a pretty old-school attack.

I'm all for platforms themselves becoming more secure over time.  Most/all of the mobile platforms have made huge strides in this area over legacy desktop platforms.  

What I don't quite understand is why folks are trying to hold mobile platforms to a higher standard for third-party software, which it isn't clear they should be in the business of policing in the first place.

Wednesday, October 05, 2011

Malware prevalence != Infection rates

There have been a number of presentations of late that have tried to document how end-users get infected with malware.

Both Google's malware report and a recent report from CSIS purport to tell us how people get malware, based on what malware they detect most frequently online and what exploits it uses to get onto a client machine.

Google goes so far as to say:
Social engineering has increased in frequency significantly and is still rising. However, it’s important to keep this growth in perspective — sites that rely on social engineering comprise only 2% of all sites that distribute malware.

Google may well be right in the numbers they are reporting (I don't doubt their analysis), but this number tells us nothing about the frequency with which users encounter those malicious sites that employ social engineering to infect users.

The percentage of sites on the internet is not directly correlated with a site's popularity. As a quick thought experiment: what if facebook.com or twitter.com or even google.com were distributing social-engineering malware? They would represent a very small percentage of total websites, and yet reach a tremendously large number of users.
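The arithmetic behind that thought experiment is worth spelling out: weighting each malicious site by its traffic, rather than just counting sites, can completely flip the conclusion. A toy illustration (every number here is invented for the sake of the example):

```python
# Hypothetical population of 100 malware-distributing sites, where 2%
# use social engineering -- but those 2% happen to be far more popular.
sites = [
    # (count of sites, daily visits per site, uses social engineering?)
    (98, 100, False),    # exploit-based sites: many, but low traffic
    (2, 50_000, True),   # social-engineering sites: few, but high traffic
]

total_visits = sum(count * visits for count, visits, _ in sites)
se_visits = sum(count * visits for count, visits, se in sites if se)

print(f"Share of sites using social engineering: {2 / 100:.0%}")
print(f"Share of user exposure: {se_visits / total_visits:.0%}")
```

With these made-up numbers, social engineering is 2% of sites but roughly 91% of user exposure, which is exactly why site counts alone can't tell us infection rates.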

My hope is that companies such as FireEye can provide the world some details on exactly what exploits they are seeing, and at what frequency (have they already done that?). But even there, the numbers in a corporate environment may not align well with what a home user sees, as many companies that deploy FireEye also do web filtering that prevents users from ever visiting certain types of sites.

The bottom line is that right now we can approximate what causes infections by looking at what the attackers are doing, but we don't truly know which of those attacks are having success and at what frequency.

If someone has more data to provide on that, I'm all ears...

Thursday, May 05, 2011

Combating Cybercrime

Cross-posting this to my personal blog, as I'm sure some folks who see this don't see the other blog: http://www.thesecuritypractice.com/

We've just published a whitepaper titled "Combating Cybercrime: Principles, Policies, and Programs".

You can read a quick summary at this blog post, or download and read the paper itself. While we don't believe we have all of the answers to combating crime online, we do believe we've presented a set of principles as well as several workable policy and technology options that will help make progress against this problem.

Please do let us know your thoughts.

Thank you

Wednesday, March 30, 2011

[Non-Security]Please Help Fight Leukemia


I don't often use my blog to talk about non-security topics, but today I'm making an exception. Last April, leukemia became a very personal topic for me and my family. If you'd like to learn more, please check out: http://svmb.heros.llsevent.org/Elise

Thursday, February 03, 2011

No Browser is an Island

Jeremiah wrote today about web browsers and opt-in security. I think he gets it mostly right (and hey, he pointed at a paper I co-authored, so I'm biased), but I think it also misses the mark a little.

Once upon a time there were only two major web browsers, and their user bases were large enough, and users switched rarely enough, that they had outsized influence on exactly how the web worked. Users had very little choice.

The situation we find ourselves in today is quite different. Users have multiple choices of web browser, especially at home, and are willing to switch to get what they want, or believe they want.

The problem of improving the security of the web, and the security of web browsers, is one of user adoption. For certain classes of security bugs (preventing buffer overflows, etc.) the fix is mostly transparent to the user: it doesn't change their browsing experience at all.

Unfortunately, many of the changes proposed by the web security community (myself included) have the potential to break large numbers of sites if deployed indiscriminately.

Unless all browsers make the same changes at the same time and make them mandatory (a mutual suicide pact), it can't and won't happen, because users will choose the tool that lets them view more websites, not the one that keeps them safer, at least in the short term. Some users will install a tool like NoScript to keep themselves safer, but not all will.

The upshot is that we aren't going to get universal default security improvements overnight. They are going to continue to be opt-in for the near future, because as Dan Kaminsky is quite fond of saying - "you can't break the web".

This isn't just a technical problem; it is also an economics problem. Without incentives for websites and users to opt in to newer, safer web browsers, we are never going to solve this problem universally. Me? I'll be happy if we can at least develop some of the tools to keep us safer, and then let those who want to deploy them do so. That action will come from both security-conscious sites and users.
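Opt-in, site-by-site deployment is in fact how this class of browser security feature works in practice: a site declares stricter behavior in its response headers, browsers that understand them comply, and older browsers simply ignore them, so nothing breaks. A minimal sketch (the specific header values are illustrative examples, not a recommendation):

```python
# Opt-in security headers: browsers that don't recognize a header
# ignore it, so a security-conscious site can deploy these without
# breaking older clients.
OPT_IN_SECURITY_HEADERS = {
    # Only talk to this site over HTTPS for the next year.
    "Strict-Transport-Security": "max-age=31536000",
    # Restrict where scripts and other resources may be loaded from.
    "Content-Security-Policy": "default-src 'self'",
    # Don't MIME-sniff responses into executable content types.
    "X-Content-Type-Options": "nosniff",
}

def add_security_headers(response_headers: dict) -> dict:
    """Merge the opt-in security headers into an outgoing response."""
    return {**response_headers, **OPT_IN_SECURITY_HEADERS}

response = add_security_headers({"Content-Type": "text/html"})
assert response["X-Content-Type-Options"] == "nosniff"
```

This is the economics point in miniature: each site bears its own deployment cost (auditing that a restrictive policy doesn't break its pages) and reaps its own benefit, which is why adoption happens site by site rather than universally.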

Wednesday, December 29, 2010

Poll Time - What One Problem in Web Security Do You Want to Fix?

It is poll time. I'm doing a little planning and trying to figure out what people view as the biggest architectural weaknesses in web security. I'm mainly focused on things within HTTP and HTML/JS/CSS themselves, not things at the TLS layer.

There is a small poll on the right-hand side of the blog. If you have other ideas, please stick them in the comments.

A few things I didn't include as I wasn't sure what to do with them:
  • Fixing XSS. Change core web protocols/technologies to provide a much cleaner code/data separation. Maybe CSP does this well enough?
  • Fixing CA's and how they work. I consider this a related but separate problem.
  • Fixing CSRF. It could make the list, and there are several architectural options, such as scoped cookies and/or the Origin header.
[UPDATE-1] - I'm interested in fixes to web servers, browsers, core protocols, etc., not what individuals writing web apps should do to make their own apps more secure. So, for example, fixing Struts/Spring/etc. would be out of scope for this survey.

[UPDATE-2] - The item in the poll for improving authentication is partially about the HTTP protocol, but also about web browser UI, how auth data gets handled in the browser chrome, etc.