Thursday, March 22, 2007

Preventing HTTP response splitting with request/response identifiers?

HTTP Response Splitting is a vulnerability in web applications and, in my opinion, also in web browsers and the HTTP protocol itself.

HTTP Response Splitting relies on a web browser interpreting an unrequested response from a web server as if it had been requested. Part of the vulnerability stems from the fact that the HTTP protocol does not include request identifiers, so a client cannot match a response to the request it made. Partly this is because HTTP is assumed to be ordered from both the client's and the server's perspective: the protocol is purely synchronous, requests and responses happen in order, and consequently there is no need for sequencing; everything is assumed to be in order.
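To make the attack concrete, here's a minimal sketch (the endpoint and header values are hypothetical) of how a CRLF-injection payload turns one server response into two. The vulnerable server echoes an unvalidated parameter into a Location header:

```python
# Hypothetical sketch of a CRLF injection splitting one HTTP response
# into two. The vulnerable server echoes an attacker-controlled value
# into a Location header without filtering CR/LF characters.

def build_redirect(location: str) -> bytes:
    # Vulnerable: no CR/LF filtering on the attacker-controlled value.
    return (
        "HTTP/1.1 302 Found\r\n"
        f"Location: {location}\r\n"
        "Content-Length: 0\r\n"
        "\r\n"
    ).encode()

# Attacker-supplied value terminates the first response and starts a
# second, fully attacker-controlled one.
payload = (
    "http://example.com/\r\n"
    "Content-Length: 0\r\n"
    "\r\n"
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html>attacker</html>"
)

raw = build_redirect(payload)
# The browser (or a caching proxy) now sees two complete responses on
# the wire, and may pair the second with the *next* request it sends.
print(raw.count(b"HTTP/1.1"))  # → 2
```

Because HTTP has no request identifiers, nothing tells the browser that the second "response" was never solicited.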

Certain communication protocols include session identifiers and/or request identifiers so that an endpoint application or server can tell which responses belong to which requests and/or sessions.

  • TCP has sequence IDs
  • DNS has a request identifier
Both of these IDs help prevent spoofing (assuming properly random ID generation). HTTP relies on TCP sequencing to provide its ordering: the protocol assumes a 1:1 relationship between requests and responses, and assumes they will happen in lockstep sequence.
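The DNS case illustrates the general pattern: a random identifier sent with the query must be echoed in the reply, or the reply is dropped. A minimal sketch of that check:

```python
import secrets

# Sketch of the DNS-style pattern: a random 16-bit transaction ID is
# sent with the query, and any reply that doesn't echo it is treated
# as likely spoofed and discarded.

txid = secrets.randbelow(1 << 16)  # random per-query identifier

def reply_is_valid(reply_txid: int) -> bool:
    # Mismatched ID means the reply can't be paired with our query.
    return reply_txid == txid

print(reply_is_valid(txid))                     # → True
print(reply_is_valid((txid + 1) % (1 << 16)))   # → False
```

HTTP has no equivalent field, which is exactly the gap this post is poking at.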

There have been proposals to make HTTP asynchronous. The only one I was able to find without a lot of digging actually relied on lower-level sequencing of packets/events, and it appears to date back many years.

A potential solution for async HTTP would be for a browser to include extra HTTP headers indicating both that it supports async HTTP and a request-id. A web server could then reply asynchronously to multiple requests from a client over a single TCP connection. Depending on configuration and/or standards, a client could issue up to some maximum number of simultaneous requests over the same TCP connection. The web server would respond asynchronously as well, echoing the same request-id in the HTTP response headers so that the browser can match each response to its request.
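A rough sketch of what that exchange could look like; the header names (X-Async-HTTP, X-Request-ID) are assumptions for illustration, not any standardized extension:

```python
import secrets

# Sketch of the proposal: each request over a shared TCP connection
# carries an unpredictable ID that the server echoes in its response.
# Header names here are hypothetical, not a real standard.

def make_request(path: str):
    req_id = secrets.token_hex(8)  # unpredictable per-request ID
    raw = (
        f"GET {path} HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "X-Async-HTTP: 1\r\n"          # advertise async support
        f"X-Request-ID: {req_id}\r\n"  # server must echo this back
        "\r\n"
    )
    return req_id, raw

# The client keeps a table of in-flight requests on one connection.
pending = {}
for path in ("/a", "/b", "/c"):
    req_id, raw = make_request(path)
    pending[req_id] = path

# The server may answer out of order; each response echoes its ID,
# so the client dispatches by ID rather than by arrival order.
ids = list(pending)
for req_id in (ids[2], ids[0], ids[1]):
    path = pending.pop(req_id)  # match response back to its request
    # ...deliver the body to whoever requested `path`...

print(len(pending))  # → 0
```

The point of the table is that arrival order stops mattering: correctness comes from the ID match, not from TCP sequencing.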

I'm guessing this isn't ideal performance-wise for a lot of apps, but there are others where it would make a lot of sense. With respect to timeouts, the browser doesn't need to change its per-request policy: it can wait the same amount of time, while allowing the server to process requests more asynchronously than it does today.

Additionally, if we match response IDs to request IDs, we prevent HTTP response splitting except in cases where an attacker can both inject headers and predict the request-id the browser will generate.
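The defensive side of that matching might look like the sketch below (again with a hypothetical X-Request-ID header): an injected "extra" response carries no valid ID, so the browser discards it.

```python
import secrets

# Sketch: the browser accepts only responses whose echoed ID matches
# an outstanding request. A response injected via splitting cannot
# predict the random 64-bit ID, so it fails the check.

pending = {secrets.token_hex(8)}  # IDs of in-flight requests

def accept(response_headers: dict) -> bool:
    rid = response_headers.get("X-Request-ID")  # hypothetical header
    if rid in pending:
        pending.discard(rid)  # each ID is single-use
        return True
    return False              # unmatched → treat as spoofed, drop it

# An attacker's split-injected second response has no valid ID:
print(accept({"Content-Type": "text/html"}))  # → False
```

Making each ID single-use also stops a replayed response from being accepted twice on the same connection.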

I agree that this is a lot of work just to prevent response splitting attacks. I haven't tried to model the performance impact on either the server or the client side. I'm guessing in many cases it would be a wash, and in others it would yield pretty significant performance improvements, with fewer network sockets involved and perhaps smarter scheduling algorithms on the server side.

Comments appreciated.


Erwan said...


These are interesting thoughts. Still, I'd like to raise two points.

HTTP response splitting can be used to achieve cache poisoning. An attacker could use it to make a caching front-end or proxy return whatever she wants to every client. This is easily solved, though: your extra header fields would have to be hop-by-hop. A proxy would generate its own request ID when forwarding to the server, and use the request ID provided by the client when returning the response.
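A small sketch of the hop-by-hop translation described above (all names are illustrative): the proxy remembers which client ID each of its own upstream IDs stands in for, and swaps them back on the return path.

```python
import secrets

# Sketch of hop-by-hop request IDs: the proxy generates its own ID
# toward the origin server and restores the client's original ID when
# relaying the response back. Function and header names are made up.

def proxy_forward(client_id: str, table: dict) -> str:
    upstream_id = secrets.token_hex(8)  # proxy's own per-hop ID
    table[upstream_id] = client_id      # remember which client ID it serves
    return upstream_id

def proxy_return(upstream_id: str, table: dict) -> str:
    return table.pop(upstream_id)       # restore the client's ID

table = {}
up = proxy_forward("client-abc", table)
print(proxy_return(up, table))  # → client-abc
```

This keeps the unpredictability property on each hop independently, so a split response can't be cached against the wrong client's ID.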

Another point is compatibility with standard HTTP 1.1. In order to avoid HTTP response splitting using your scheme, the client would have to ignore HTTP 1.1 responses. This is a big concern indeed.

Security Retentive said...

Proxy servers do pose a problem. Because regular old HTTP (not HTTPS) suffers from lots of other potential problems and attacks, I'm most interested, from the security angle, in preventing response splitting over HTTPS. You're right that proxy servers weaken the security model if they are caching.

As for breaking HTTP/1.1, this wouldn't actually break the protocol at all. What is missing is a way for the client and server to reliably negotiate the protocol: if a client requests HTTP/1.1-async and the server responds with plain HTTP/1.1, is that because the server doesn't support async, or because the response itself is the result of HTTP response splitting?

Yet another argument against the HTTP protocol: architecturally it is deeply flawed. Unfortunately, that ship has sailed.