2007-10-07

Integrity without confidentiality

People often focus on confidentiality as being the main goal of security on the Web; SSL is portrayed as something that ensures that when we send a credit card number over the web, it will be kept confidential between us and the company we're sending it to.

I would argue that integrity is at least as important, if not more so.  I'm thinking of integrity in a broad sense, as covering both ensuring that the recipient receives the sender's bits without modification and that the sender is who the recipient thinks it is. I would also include non-repudiation: the sender shouldn't be able to deny that they sent the bits.

Consider books in the physical world.  There are multiple mechanisms that allow us to trust in the integrity of the book:

  • it is expensive and time-consuming to produce something that looks and feels like a real, bound book
  • we obtain books from bookshops or libraries, and we trust them to give us the real thing
  • page numbers make it hard to remove pages undetected
  • the ISBN allows us to check that something has really been published
  • the legal requirement that publishers deposit a copy of every book published with one or more national libraries (e.g. Library of Congress in the US or the British Library in the UK) ensures that in the unlikely event that the integrity of a book comes into question, there's always a way to determine whether it is authentic

Compare this to the situation in the digital world.  If we want to rely on something published on a web site, it's hard to know what to do.  We can hope the web site believes in the philosophy that Cool URIs don't change; unfortunately such web sites are a minority.  We can download a local copy, but that doesn't prove that the web site was the source of what we downloaded. What's needed is the ability to download and store something locally that proves that a particular entity was a valid representation of a particular resource at a particular time.

SSL is fundamentally not the right kind of protocol for this sort of thing.  It's based on using a handshake to create a secure channel between two endpoints.  In order to provide the necessary proof, you would have to store all the data exchanged during the session. It would work much better to have something message-based, which would allow each request and response to be separately secured.

Another crucial consideration is caching. Caching is what makes the web perform.  SSL is the mainstay of security on the Web.  Unfortunately there's the little problem that if you use SSL, then you lose the ability to cache. You want performance? Yes, Sir, we have that; it's called caching.  You want security? Yes, Sir, we have that too; it's called SSL. Oh, you want performance and security? Err, sorry, we can't do that.

A key step to making caching useable with security is to decouple integrity from confidentiality.  A shared cache isn't going to be very useful if each response is specific to a particular recipient. On the other hand there's no reason why you can't usefully cache responses that have been signed to guarantee their integrity.

I think this is one area where HTTP can learn from WS-Security, which has message-based security and cleanly separates signing (which provides integrity) from encryption (which provides confidentiality).  But of course WS-* doesn't have the caching capability that HTTP provides (and I think it would be pretty difficult to fix WS-* to do caching as well as HTTP does).

My conclusion is that there's a real need for a cache-friendly way to sign HTTP responses. (Being able to sign HTTP requests would also be useful, but that solves a different problem.)
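To make the idea concrete, here's a minimal sketch of what a cache-friendly signed response might look like. The Content-Signature header name is invented, and HMAC with a shared key stands in for a real public-key signature (which an actual design would need, so that any recipient can verify); the point is only that the signature covers the body plus end-to-end headers, so a shared cache can store and serve the response unchanged.

```python
import hashlib
import hmac

# Hypothetical scheme: the origin signs the body plus a fixed set of
# end-to-end headers and carries the result in a made-up
# "Content-Signature" header.  HMAC is a stand-in for a real signature
# to keep the sketch dependency-free.
SECRET = b"origin-server-key"
SIGNED_HEADERS = ["Content-Type", "Date"]  # hop-by-hop headers excluded

def sign_response(headers: dict, body: bytes) -> str:
    """Return a signature covering the body and selected headers."""
    mac = hmac.new(SECRET, digestmod=hashlib.sha256)
    for name in SIGNED_HEADERS:
        mac.update(f"{name}: {headers[name]}\n".encode())
    mac.update(body)
    return mac.hexdigest()

def verify_response(headers: dict, body: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign_response(headers, body), signature)

headers = {"Content-Type": "text/html",
           "Date": "Sun, 07 Oct 2007 00:00:00 GMT"}
body = b"<html>...</html>"
sig = sign_response(headers, body)

# A shared cache can store (headers, body, sig), and any holder of the
# key can re-verify the stored copy without a session to the origin.
assert verify_response(headers, body, sig)
assert not verify_response(headers, b"<html>tampered</html>", sig)
```

Because the signature is per-message rather than per-channel, the proof can be persisted along with the response, which is exactly what the channel-based SSL model can't give you.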

4 comments:

Anonymous said...

Technically, TLS (SSL) does support null encryption, which is a signing-only mode. See the TLS_RSA_WITH_NULL_MD5 and TLS_RSA_WITH_NULL_SHA cipher suites in RFC 2246.

However it's still not practical, because almost no web browsers or servers support the null encryption modes (at least in their default configurations), and it doesn't help the HTTP caching techniques used today, because even in signing-only mode TLS still employs a handshake.

Unknown said...

A similar requirement for "message-based" integrity was discovered when the Atom Syntax RFC was being drafted. Syndication feeds are frequently composed of entries that are extracted from a variety of other feeds and aggregated together. Of course, one is then left wondering if an aggregated entry is, in fact, a faithful copy of the original. Thus, Atom provides for signing not only an entire Atom feed but also signing of the individual entries within the feed. Thus, when a signed entry is aggregated into another feed, it maintains the signature it was given in its "source" feed.
This ability to aggregate signed entries will become very important when syndication feeds are used to aggregate event information, offers-to-buy, offers-to-sell, etc. whose integrity must be established before they can be relied upon. Supporting such aggregation without loss of integrity is only one of many ways that Atom supports application models that are far more useful than what RSS can support...

bob wyman

Anonymous said...

Return multipart/signed content where one part is the response (e.g. text/css) and a second part is an application/pkcs7-signature digital signature.

RFC 1847 Secure Multiparts for MIME
RFC 3851 S/MIME v3.1 message specification
RFC 4134 Examples of S/MIME Messages
(section 4.8)

Now... do you know how to get browsers to support this?
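For what it's worth, Python's standard email library can at least construct the multipart/signed envelope that RFC 1847 describes; the signature bytes below are a placeholder, since producing a real application/pkcs7-signature would require an S/MIME implementation.

```python
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# RFC 1847 layout: first part is the protected content (text/css, as in
# the comment above), second part is a detached signature.
msg = MIMEMultipart(
    "signed",
    protocol="application/pkcs7-signature",
    micalg="sha-256",
)
msg.attach(MIMEText("body { color: black }", "css"))

# Placeholder bytes -- a real signer would put a DER-encoded PKCS#7
# SignedData structure here.
sig = MIMEApplication(b"<placeholder signature blob>", "pkcs7-signature")
sig.add_header("Content-Disposition", "attachment", filename="smime.p7s")
msg.attach(sig)

print(msg.get_content_type())  # multipart/signed
```

Serving that as an HTTP response body would be cache-friendly in principle; the open question is, as the commenter says, browser support.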

Anonymous said...

Digest authentication has an optional integrity check. However, you still need to provide a password, so it's integrity without confidentiality, but not without authentication. Also, caching is still problematic (while you could associate Cache-Control: public with it, checking integrity from a cached copy wouldn't be possible), and persisting it would still be a problem.

How about a new header, something like

Content-HMAC: hash="..."; key="http://example.org/key"

(where people who really care will protect the key with https)

The issue with this approach is that you have to buffer the whole response before signing it. Two approaches to that are using a HTTP trailer (easy to persist, but not well-supported), and using a chunk-ext (which *may* be easier to support, but harder to persist, and probably even more streaming-friendly).
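The commenter's proposed Content-HMAC header (which is hypothetical, not a real HTTP header) is easy to sketch; here the key that would be fetched from the URL named in the header is stubbed as a local constant.

```python
import hashlib
import hmac

# Hypothetical Content-HMAC header from the comment above.  In the
# proposal the key is retrieved out of band from the URL in the header;
# here it is just a constant.
KEY_URL = "http://example.org/key"
key = b"shared-secret-fetched-from-key-url"

body = b"h1 { font-weight: bold }\n"
digest = hmac.new(key, body, hashlib.sha256).hexdigest()
header = f'Content-HMAC: hash="{digest}"; key="{KEY_URL}"'

# The hash covers the whole body, so the sender must either buffer the
# response before emitting the header, or hash incrementally and send
# the header as a trailer after the body.
print(header)
```

This also illustrates the buffering problem the commenter raises: the digest isn't known until the last byte of the body has been hashed, which is why a trailer (or chunk-ext) is the natural home for it.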