If we revise the abstract model for generating a Signature header along the lines suggested in my previous post, we get this:
1. Choose which key (security token) to use and create one or more identifiers for it. One possible kind of key would be an X.509 certificate.
2. Choose which response headers to sign. This would include at least Content-Type and probably Date and Expires. It would not include hop-by-hop headers.
3. Compute the digest (cryptographic hash) of the full entity body of the requested URI. Base64-encode the digest.
4. Create a Signature header template; this differs from the final Signature header only in that it has an empty string at the point where the final Signature header will have the base64-encoded signature value. It can specify the following information:
   - the type of key;
   - one or more identifiers for the key;
   - an identifier for the suite of cryptographic algorithms to be used;
   - an identifier for the header canonicalization algorithm to be used;
   - a list of the names of the response headers to be signed;
   - the request URI;
   - the base64-encoded digest (from step 3).
5. Combine the response headers that are to be signed with the Signature header template.
6. Canonicalize the headers from the previous step. This ensures that the canonicalization of the headers as seen by the origin server is the same as the canonicalization of the headers as seen by the client, even if there are one or more HTTP/1.1-conforming proxies between the client and the origin server.
7. Compute the cryptographic hash of the canonicalized headers.
8. Sign the cryptographic hash created in the previous step. Base64-encode this to create the signature value.
9. Create the final Signature header by inserting the base64-encoded signature value from the previous step into the Signature header template from step 4.
Note that when verifying the signature, as well as checking the signature value, you have to compute the digest of the entity body and check that it matches the digest specified in the Signature header.
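To make the steps concrete, here is a sketch in Python. This is not a definitive implementation: it assumes SHA-1 throughout, invents a "basic" canonicalization (lowercase field names, trimmed values, sorted, CRLF-joined), and takes the actual RSA signing operation as an opaque sign() callback; all function names are illustrative.

```python
import base64
import hashlib

def canonicalize(headers):
    # Hypothetical "basic" canonicalization: lowercase the field names,
    # strip surrounding whitespace from values, sort, and join with CRLF.
    lines = sorted(f"{name.lower()}:{value.strip()}"
                   for name, value in headers.items())
    return "\r\n".join(lines).encode()

def make_signature_header(body, headers, request_uri, sign):
    # Step 3: base64-encoded digest of the full entity body.
    digest = base64.b64encode(hashlib.sha1(body).digest()).decode()
    # Step 4: the template, with an empty string where the value will go.
    names = ",".join(headers)
    template = ('x509;value="";crypt=rsa-sha1;canon=basic;'
                f'headers="{names}";request-uri="{request_uri}";'
                f'digest="{digest}"')
    # Steps 5-7: combine the signed headers with the template,
    # canonicalize, and hash.
    hashed = hashlib.sha1(canonicalize({**headers, "Signature": template}))
    # Steps 8-9: sign the hash, then splice the base64-encoded
    # signature value into the template.
    value = base64.b64encode(sign(hashed.digest())).decode()
    return template.replace('value=""', f'value="{value}"', 1)
```

In practice sign would wrap an RSA private-key operation; for the purposes of the sketch, any bytes-to-bytes callable will do.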
The syntax could be something like this:
Signature = "Signature" ":" #signature-spec
signature-spec = key-type 1*( ";" signature-param )
key-type = "x509" | key-type-extension
signature-param =
      "value" "=" <"> <Base64 encoded signature> <">
    | "canon" "=" ( "basic" | canon-extension )
    | "headers" "=" <"> 1#field-name <">
    | "request-uri" "=" quoted-string
    | "digest" "=" <"> <Base64 encoded digest> <">
    | "crypt" "=" ( "rsa-sha1" | crypt-extension )
    | "key-uri" "=" quoted-string
    | "key-uid" "=" ( sha1-fingerprint | uid-extension )
    | extension-param
sha1-fingerprint = <"> "sha1" 20(":" 2UHEX) <">
UHEX = DIGIT | "A" | "B" | "C" | "D" | "E" | "F"
uid-extension = <"> uid-type ":" 1*uid-char <">
uid-type = token
uid-char = <any CHAR except CTLs, <\> and <">>
key-type-extension = token
canon-extension = token
crypt-extension = token
hash-func-extension = token
extension-param = token "=" (token | quoted-string)
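To make the syntax concrete, here is a hypothetical parser sketch, together with the digest check that verification requires. It deliberately simplifies: it handles a single signature-spec only (no comma-separated list), and ignores quoted-string escaping and any ";" inside quoted values.

```python
import base64
import hashlib

def parse_signature_spec(spec):
    # Split one signature-spec into its key-type and parameters.
    # Simplifying assumptions: a single spec, no escaped characters
    # inside quoted-strings, no ";" inside quoted values.
    key_type, *params = spec.split(";")
    parsed = {"key-type": key_type.strip()}
    for param in params:
        name, _, value = param.partition("=")
        parsed[name.strip()] = value.strip().strip('"')
    return parsed

def digest_matches(spec, body):
    # As noted earlier, verification must also recompute the digest of
    # the entity body and compare it with the digest parameter.
    expected = parse_signature_spec(spec).get("digest")
    actual = base64.b64encode(hashlib.sha1(body).digest()).decode()
    return expected == actual
```

A real parser would follow the grammar strictly (quoted-string escapes, multiple comma-separated specs, unknown extension parameters); this sketch only shows the shape of the data.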
There are several issues I'm not sure about.
- Should this be generalized to support signing of (some kinds of) HTTP request?
- What is the right way to canonicalize HTTP headers?
- Rather than having a digest parameter, would it be better to use the Digest header from RFC 3230 and then include that in the list of headers to be signed?
- Should the time period during which the signature is valid be specified explicitly by parameters in the Signature header rather than being inferred from other headers, such as Date and Expires (which would of course need to be included in the list of headers to sign)?
- Should support for security tokens other than X.509 certificates be specified?
typo.."the base64 encoded digest (from step 4)." should be "the base64 encoded digest (from step 3)."
Looks good! +1 to do something similar for signing requests. One thing that I also try to do is think about the reverse: start from what's on the wire and figure out what steps you would need on, say, the client's end to check the signed response.
Please talk to Ruchith when you get a chance about how we added support for lots of users (without adding certs from each of them into the server's keystore). Here's a write-up he did based on original notes from Werner: http://wso2.org/library/255.
That's a lot of data you're packing into one header. Given that everything except the actual signature value is metadata about the signature, would it be better to have a bunch of Signature-* headers, and declare that all headers in Signature-Signed-Headers plus the Signature-* headers are signed?
I admit there's a lot of information. However, in terms of numbers of parameters there are precedents. Think of the Authorization header when using Digest authentication (section 3.2.2 of RFC 2617). Or think of the Cache-Control header.
One of the things that makes me reluctant to split it up is the fact that there can be multiple signatures (in separate Signature headers and/or in a single header separated by commas). If you put things in separate headers, then you are going to have to end up pairing the n-th entries in each of your Signature-* headers. This seems like a significant complexity hit.
The thing that most concerns me here is whether we are going to hit limits on header lengths in common implementations (especially limits on lengths after headers with the same name are merged), since several of our parameters can be long. Does anybody know what these limits are?
Really seriously, OAuth ;)
the current signing algorithm that we've defined covers most of the cases. Signing headers, etc, could be defined as either a second version or as an extension.
Currently negotiation of secrets is done with the express intent of identifying users, but alternative methods could be established that allow for key / secret exchange without user intervention.
I've read the OAuth spec. It seems to me it's solving a completely different problem related to controlling access to protected resources. The problem I'm trying to solve is to provide a cache-friendly way to verify the integrity of resources transferred from servers to clients. Can you give some more detail on how you think OAuth could be used to solve this problem?
OAuth specifies essentially two things: a very simple method for exchanging keys and secrets, and a cache-friendly (we hope!) method for verifying the integrity of requests.
One of the primary goals of OAuth was to allow for authenticated requests to be made securely (i.e., without revealing the secrets, and without allowing man in the middle attacks) without requiring SSL.
That it's for authenticated requests to protected resources is only one interpretation of the token / secret pairs. There's work underway to extend the spec to include "discovery", or automatic negotiation of the consumer key, and a similar approach could be used for the token.
OAuth doesn't specify signing the response body, but because it allows for both symmetric and asymmetric keys, adding such an extension would be "trivial". If there were a way to easily negotiate keys and secrets for a more generic HTTP request / response signing approach, then OAuth could adopt greatly simplified signing algorithms. My guess is that negotiating those things will look a lot like OAuth's existing process.
I'm curious if you've made any further progress on this, or seen any implementation.
In the long run, I think this is an important problem to solve; otherwise, approaches like SPDY that are (at least currently) cache-unfriendly will have an advantage.
I have lots of feedback about the details, but AFAICT the big issue facing proposals like this is stripping the signature; i.e., presumably, you're wanting to give UAs a way to detect that a response has been modified by an intermediary, but the only thing that has to be done to circumvent the signature is to strip the header.
This could be worked around by defining some sort of site policy about whether signatures are checked. Unfortunately, AFAICT that would have to be hosted on SSL/TLS to prevent *it* from being compromised -- but that's still better than making the whole site opaque to caches.
One other thing -- this sort of approach requires "dynamic" content to be buffered before sending. That's not always a dealbreaker, but sometimes it may be undesirable (e.g., with very large or very performance-sensitive responses).
I'm struggling to find a way around that; HTTP trailers are required to be optional to understanding/processing the message, so that's probably not going to work here. The only thing that comes to mind is indicating somehow that the signature will come at a particular place near the end of the content (in it), but that seems pretty messy.
I don't know of any further progress, although I'm still convinced it's an important problem.
I agree that the signature stripping issue is an important one, but I think it's separable and depends on what you are using the signatures for. I had a couple of scenarios in mind:
a) The signature is a value-add rather than being essential. For example, a newspaper that provided signed versions of its pages (which might be the same as the print-friendly version) would be providing extra value to readers. The pages could be downloaded and saved complete with the signatures; in the future, even if the page goes away, you still have evidence to prove that a particular entity was at a particular URL on a particular date. That's useful for scholars, journalists, lawyers. If the signature gets stripped, then you don't have that value-add, but there's no security breach.
b) You are doing something like talking to your bank. Often you have an HTML page containing the dynamic content which is (or can be made) relatively small, and that page references a lot of other resources (images, stylesheets) that are relatively large and static and are also shared between multiple users. I envisaged that the HTML page would be served using https, whereas the referenced resources would be served using http with response signing. The browser would be responsible for requiring resources referenced from an https page to be secured either with https or with http response signing.
How important is caching for dynamic content?
The most urgent reason for doing this IMO is that people -- especially at Google -- are starting to talk about running everything over SSL, so as to avoid transparent proxies, transforming/transcoding proxies, and so forth.
While I'm sympathetic to their goals -- pretty much no one wants their network operator mucking about in their Web browsing experience, much less automated agents' traffic -- making the entire protocol stream opaque is hitting it with far too big a hammer, and makes scaling the Web -- especially for smaller sites, and especially for more remote users -- a big problem.
A reliable way to sign HTTP responses would enable people (both publishers and consumers) to have confidence that the content hasn't been mucked about with, and would avoid the need to use SSL in these cases (of course it's still necessary when there are privacy concerns).
Caching can be important to dynamic content, but of course if you're relying on caching, buffering isn't necessarily such a big deal.
Carrying the 'main' page via SSL and using signed referents is an interesting approach; especially if that initial response that carries the policy for remaining ones can be long-lived in cache...
@mnot: "the big issue facing proposals like this is stripping the signature; i.e., presumably, you're wanting to give UAs a way to detect that a response has been modified by an intermediary, but the only thing that has to be done to circumvent the signature is to strip the header."
Why not take the same route as https: a scheme, e.g. httpv (HTTP Verified), that tells the browser only to render the content if the signature header(s) are present and check out, and to put up dire warnings (or not load the asset, if it's a child request) if not.