2007-10-10

Why not S/MIME?

A couple of people have suggested S/MIME as a possible starting point for a solution to signing HTTP documents. This seems like a suboptimal approach to me; I'm not saying it can't be made to work, but there are a number of things that make me feel it is possible to do significantly better.

  1. MIME multipart does not seem to me to be the HTTP way.  The HTTP way is to put a single entity in a resource, and then have URIs pointing to other parts. The HTTP way to determine where content ends is to count bytes (using Content-Length, or the chunked transfer-coding), not by having a boundary string that you have to search for.  MIME multipart feels like an impostor from the email world.
  2. Conceptually we don't want to change the entity that the client receives, we just want the client to be able to check the integrity of the response.  Providing integrity by completely changing the content that the client receives doesn't seem a good match for the basic task that we are trying to accomplish.
  3. Using multipart/signed will break clients that don't support it. If the signature was in some sort of header, then browsers that didn't understand the header would automatically ignore it; you could then send signed responses without having to worry about whether the clients supported them or not.  With multipart/signed you would have to use content negotiation to avoid breaking clients. Overloading content negotiation to also handle negotiation of integrity checking will interfere with using content negotiation for negotiating content types.
  4. It's useful to be able to negotiate several aspects of digital signatures.  What kind of security token is going to be used?  Although X.509 certificates have to be supported, I think WS-Security does the right thing in not restricting itself to these. It might be useful to have straightforward public keys, without any of the X.509 OSI junk, or to use a symmetric, shared secret key.  It's also useful to be able to negotiate which algorithms are used (e.g. SHA-1 vs SHA-256).  Trying to do this well with content negotiation would be tough.  Better to introduce some sort of Accept-Signature header and do it properly.
  5. Careful thought is needed about what exactly needs to be signed. Obviously signing just the content is not enough.  We need to sign at least some of the entity headers as well.  With multipart, we can handle this by putting those headers in the part that is signed.  But this will lead to duplicating headers because some of those headers (like Date) will also need to be included in the HTTP headers: not fatal, but kind of ugly.
  6. A more subtle problem is that it's not really enough to sign the response entity in isolation.  The signature needs to link the response to the request. In the case of a GET, the signature needs to say that the entity is a response to a GET on a particular request URI.  Neither the Location nor the Content-Location headers have quite the same semantics as the request URI.  Also the response may vary depending on other headers (e.g. Accept), as listed in the Vary header in the response.  The signature therefore ought to be able to cover those of the request headers that affect which entity is returned. Also it would be desirable to be able to sign responses to methods other than GET. The signature should probably also cover the status code. I don't see a natural way to fit this into the S/MIME approach.
  7. One of the main points of doing HTTP signing rather than SSL is cache-friendliness. Consider the process of validating a cache entry that has become stale. This works by doing a conditional GET: if the entity body hasn't changed, then the conditional GET will return a 304 along with some new headers, typically including a new Date header. Since the signature typically needs to cover the date, this isn't going to work with multipart/signed: the entire entity body would need to be resent so that the Date contained in the relevant MIME part can be updated.  On the other hand if the signature is in the header, then the conditional GET can still return a 304 and include an updated signature header that covers the new date.
  8. Finally, S/MIME has been around for a long time, but it doesn't seem to have got any traction in the HTTP world.
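To make points 5 through 7 concrete, here is a minimal sketch of what a header-based alternative might look like. Everything here is hypothetical: the Signature header name, its format, and the canonical string are invented for illustration, and an HMAC with a shared secret stands in for whatever token type negotiation would actually select. The point is that the signature binds together the request line, the status code, and the covered headers, so a 304 can carry a fresh signature over a fresh Date without resending the body.

```python
import hmac, hashlib, base64

def sign_response(secret, method, request_uri, status, headers, covered):
    """Build a hypothetical Signature header value covering the request
    line, the status code, and selected response headers."""
    # The canonical string starts with the method and request URI, then
    # the status code, binding the signature to this particular exchange.
    lines = ["%s %s" % (method, request_uri), str(status)]
    for name in covered:
        lines.append("%s: %s" % (name.lower(), headers[name].strip()))
    string_to_sign = "\n".join(lines).encode("utf-8")
    mac = hmac.new(secret, string_to_sign, hashlib.sha256).digest()
    # Record which headers were covered, so the client can rebuild
    # the same canonical string when verifying.
    return 'headers="%s", hmac-sha256="%s"' % (
        " ".join(h.lower() for h in covered),
        base64.b64encode(mac).decode("ascii"))

sig = sign_response(b"shared-secret", "GET", "/index.html", 200,
                    {"Date": "Wed, 10 Oct 2007 00:00:00 GMT",
                     "Content-Type": "text/html"},
                    ["Date", "Content-Type"])
```

A real design would also need to cover the request headers listed in Vary, and to negotiate the token type and algorithm via something like Accept-Signature; the sketch only shows the binding idea.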

A recent development related to signing in the email world is DomainKeys Identified Mail (DKIM), recently standardized as RFC 4871. This does not build on S/MIME at all.  The signature goes in a header field.  It also doesn't use X.509 certificates and the associated PKI; rather it uses public keys distributed using DNS. It looks like a good piece of work to me.
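One detail of DKIM worth noting is that it has to canonicalize headers before signing them, since intermediaries may refold or re-space header fields in transit. A simplified sketch of the "relaxed" header canonicalization from RFC 4871 (section 3.4.2), using only the Python standard library; the real spec then signs the canonicalized headers with RSA-SHA256 and carries the result in a DKIM-Signature header, which this sketch omits:

```python
import re

def relaxed_canonicalize(name, value):
    """Roughly RFC 4871 "relaxed" header canonicalization: lowercase the
    field name, unfold continuation lines, collapse whitespace runs."""
    value = re.sub(r"\r\n[ \t]+", " ", value)  # unfold folded lines
    value = re.sub(r"[ \t]+", " ", value)      # collapse WSP to one SP
    return "%s:%s" % (name.lower(), value.strip())

relaxed_canonicalize("Subject", "Hello,\r\n\t  world")
# -> "subject:Hello, world"
```

HTTP would face the same issue: any signature over headers needs an agreed canonical form, or proxies that rewrite whitespace will break verification.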

Another interesting development is Amazon's REST authentication scheme. This works by signing headers, although it does so in the context of authentication of the client to the server.  It also uses a shared secret and an HMAC rather than public key cryptography.
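The core of Amazon's scheme is building a canonical string from the request and signing it with HMAC-SHA1 under the shared secret, with the result carried in the Authorization header. A minimal sketch of that pattern, omitting the canonicalized x-amz-* headers component of the real scheme for brevity:

```python
import hmac, hashlib, base64

def s3_authorization(access_key, secret_key, verb, content_md5,
                     content_type, date, resource):
    """Sketch of Amazon's S3 REST signing: HMAC-SHA1 over a canonical
    string of request fields, base64-encoded into Authorization."""
    # Canonical string: verb, Content-MD5, Content-Type, Date, resource,
    # joined by newlines (empty fields stay as empty lines).
    string_to_sign = "\n".join(
        [verb, content_md5, content_type, date, resource])
    mac = hmac.new(secret_key.encode("utf-8"),
                   string_to_sign.encode("utf-8"), hashlib.sha1)
    signature = base64.b64encode(mac.digest()).decode("ascii")
    return "AWS %s:%s" % (access_key, signature)

auth = s3_authorization("AKIAEXAMPLE", "secret", "GET", "", "",
                        "Wed, 10 Oct 2007 00:00:00 GMT", "/bucket/object")
```

Including the Date in the canonical string limits replay; note though that this authenticates the request, whereas the problem here is signing the response.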

Overall I think we can do much better than S/MIME by designing something specifically for HTTP.

2 comments:

Anonymous said...

Just as a counter example I've actually used S/MIME over HTTP in real-life production systems with success. Actually it was a multipart/encrypted which contained a message/http payload. Granted, we wrote the software which lives on both sides of the network (web server module, and browser plugin). Both the HTTP request and HTTP response were wrapped this way, so that messages in both directions were protected and could be authenticated.

By using the message/http payload we solved the problem of how to sign headers. In fact the outermost headers were very minimal; just enough to get the inner body routed to the decryption routines and control any proxies, caching, etc. The method, URL, Etag, etc. that were actually used came from inside the message/http.

You don't have to use PKCS methods either. We took advantage of already having a way to distribute secure secret keys so that we could use much lighter-weight algorithms like AES and HMAC/SHA without having to mess with the X.509 ugliness.

The main disadvantage we found was that some vendors of proxies or load balancing equipment don't correctly follow the HTTP specs and can stumble on multipart/encrypted payloads. Usually the bigger-named and more expensive "enterprise-level" devices were the worst offenders at misinterpreting the specs.

And of course since nobody supports it now, you have to have your own software on each side. So I'm not sure S/MIME is the solution either, but it is certainly possible.

Anonymous said...

Excellent points. Regarding the Accept-Signature idea, by way of example, take a look at http://svn.apache.org/repos/asf/incubator/abdera/java/trunk/security/src/main/java/org/apache/abdera/security/util/servlet/, specifically the DHEncryptedResponseFilter.java class where we use an Accept-Encryption header to negotiate whether or not the response should be encrypted. A similar model for signatures would work well.