2007-10-31

E4X not in ES4

I was surprised to find that ES4 does not fold in E4X (although it reserves the syntax).  I had always viewed E4X as being one of the smoothest integrations of XML into a scripting language.  However, it seems that once you dig a bit deeper, it has some problems.

Optional typing in ES4

ES4 takes a very interesting approach to typing.  They've added static typing but made it completely optional.  Variable declarations can optionally be annotated with a type.  However, the annotations don't change the run-time semantics of the language.  Their only effect is that if you run the program in strict mode, then the program will be verified before execution and rejected if type errors are found.  Implementations don't have to support strict mode.  You can still have simple, small-footprint implementations that do all checks dynamically.  Users who don't want to be bothered with types can write programs without having to learn anything about the type system.

There's a good paper by Gilad Bracha on Pluggable Type Systems that explains why type systems should be optional, not mandatory.   I think he's right. The dichotomy between statically and dynamically typed languages is false: an optional type system allows you to have the benefits of both. The paper goes further and argues that type systems should be not merely optional but pluggable.  I'm not convinced by this.  Pluggable type systems are a great idea if you are a language designer who wants to experiment with type systems; but for a production language, I think it's a fundamental responsibility of the language designer to choose a single type system.

Anyway, it's great to see optional typing being adopted by a mainstream language.

ECMAScript Edition 4

The group working on the next version of ECMAScript (ES4) have released a language overview.  There's a lively discussion on the mailing list about some of the politics behind the evolution of ES4. (The situation appears to be that Microsoft doesn't want major new features in ECMAScript, whereas Mozilla and Adobe want to evolve it rather dramatically.)

2007-10-29

Signing HTTP requests

When I first started thinking about signing HTTP responses, I assumed that signing HTTP requests was a fairly similar problem and that a single solution could deal with signing requests as well as responses.  But after thinking about it some more, I'm not so sure.

The first thing to bear in mind is that signing an HTTP request or response is not an end in itself, but merely a mechanism to achieve a particular goal.  The purpose of the proposal that I've been developing in this series of posts is to allow somebody that receives a representation of a resource to verify the integrity and origin of that representation; the mechanism for achieving this is signing HTTP responses.

The second thing to bear in mind is what advantages this proposal has over https.  Realistically, there's not much point to a proposal in this space unless it has compelling advantages over https. There are two advantages that I find compelling:

  • better performance: clients can verify the integrity of responses without negatively impacting HTTP caching, whereas requests and responses that go over https cannot be cached by proxies;
  • persistent non-repudiation: by this I mean a client that verifies the integrity and origin of a resource can easily persist metadata that makes it possible subsequently to prove to a third party what was verified.

One key factor that allows these advantages is that the proposal does not provide confidentiality.

As compared to other approaches to signing messages (such as S/MIME), the key advantage is that the signature will be automatically ignored by clients that don't understand it, just by virtue of normal HTTP extensibility rules.

If we turn to signing HTTP requests, or more specifically HTTP GET requests, none of the above considerations apply.

  • The goal of signing an HTTP GET request is typically to allow the server to restrict access to resources.
  • If you're really serious about restricting access to resources, and you want to protect against malicious proxies, then you will want to protect the confidentiality of the response; if the request includes a signature that says x authorizes y to access resource r at time t, then the representation of r in the response ought to be encrypted using y's public key.
  • Furthermore, if a server is restricting access to resources, then the signature on the request can't be optional, so the advantage over other message signing approaches such as S/MIME disappears.
  • Adding signatures to HTTP GET requests is inherently going to inhibit caching. A cached response to a request signed by x for resource r cannot in general be used to respond to a request signed by y for resource r.
  • Neither of the compelling advantages (better performance, and persistent non-repudiation) which I mentioned above applies any longer.

On the other hand, if we consider signing HTTP PUT (and possibly POST) requests, then there seems to be more commonality.  Signing an HTTP PUT request serves the goal of allowing the server to verify the integrity and origin of the representation of a resource transferred from the client.  Although I don't think there will be a significant performance advantage over https, persistent non-repudiation could be useful.

I think my conclusion is that it's better to think of the proposal not as a proposal for signing HTTP responses, but as a proposal for allowing verification of the origin and integrity of transfers of representations of resources. When considered in this light, signing of HTTP GET requests doesn't really fit in.

By the way, I'm not saying HTTP request signing isn't a useful technique. For example, OAuth is using it to solve an important problem: allowing users to grant applications limited access to private resources. But I think that's a very different problem from the problem that I'm trying to solve.

2007-10-16

HTTP response signing strawman

If we revise the abstract model for generating a Signature header along the lines suggested in my previous post, we get this:

  1. Choose which key (security token) to use and create one or more identifiers for it.  One possible kind of key would be an X.509 certificate.
  2. Choose which response headers to sign.  This would include at least Content-Type and probably Date and Expires.  It would not include hop-by-hop headers.
  3. Compute the digest (cryptographic hash) of the full entity body of the requested URI. Base64-encode the digest.
  4. Create a Signature header template; this differs from the final Signature header only in that it has a blank string at the point where the final Signature header will have the base64-encoded signature value. It can specify the following information:
    • the type of key;
    • one or more identifiers for the key;
    • an identifier for the suite of cryptographic algorithms to be used;
    • an identifier for the header canonicalization algorithm to be used;
    • a list of the names of the response headers to be signed;
    • the request URI;
    • the base64 encoded digest (from step 3).
  5. Combine the response headers that are to be signed with the Signature header template.
  6. Canonicalize the headers from the previous step.  This ensures that the canonicalization of the headers as sent by the origin server is the same as the canonicalization of the headers as seen by the client, even if there are one or more HTTP/1.1 conforming proxies between the client and the origin server.
  7. Compute the cryptographic hash of the canonicalized headers.
  8. Sign the cryptographic hash created in the previous step.  Base64-encode this to create the signature value.
  9. Create the final Signature header by inserting the base64-encoded signature value from the previous step into the Signature header template from step 4.

Note that when verifying the signature, as well as checking the signature value, you have to compute the digest of the entity body and check that it matches the digest specified in the Signature header.
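
To make the steps above concrete, here is a minimal Python sketch of the signing side, using the parameter names from the syntax sketched below.  It is not part of the proposal: the function and parameter names are mine, the canonicalization is deliberately naive, and an HMAC over a shared secret stands in for the RSA-SHA1 signing step purely so that the sketch is self-contained.

import base64, hashlib, hmac

def canonicalize(headers):
    # Deliberately naive canonicalization: lower-case the field names,
    # collapse linear white space in the values, and sort the resulting
    # lines.  The right canonicalization algorithm is one of the open
    # issues listed at the end of this post.
    lines = ["%s: %s" % (name.lower(), " ".join(value.split()))
             for name, value in headers]
    return "\n".join(sorted(lines)) + "\n"

def make_signature_header(entity_body, signed_headers, request_uri, secret):
    # Step 3: base64-encoded digest of the full entity body.
    digest = base64.b64encode(hashlib.sha1(entity_body).digest()).decode("ascii")
    # Step 4: a Signature header template whose value parameter is blank.
    names = ", ".join(name for name, _ in signed_headers)
    def template(value):
        return ('x509;value="' + value + '";canon=basic;headers="' + names +
                '";request-uri="' + request_uri + '";digest="' + digest +
                '";crypt=rsa-sha1')
    # Steps 5 and 6: combine the signed response headers with the template
    # and canonicalize the result.
    canonical = canonicalize(signed_headers + [("Signature", template(""))])
    # Step 7: hash the canonicalized headers.
    hashed = hashlib.sha1(canonical.encode("utf-8")).digest()
    # Step 8: sign the hash.  An HMAC over a shared secret is used here
    # instead of an RSA-SHA1 signature, purely to avoid dependencies.
    value = base64.b64encode(
        hmac.new(secret, hashed, hashlib.sha1).digest()).decode("ascii")
    # Step 9: insert the signature value into the template.
    return template(value)

# Hypothetical usage:
# make_signature_header(b"10.00",
#                       [("Content-Type", "text/plain"),
#                        ("Date", "Tue, 16 Oct 2007 09:00:00 GMT")],
#                       "http://www.example.com/products/x/price",
#                       b"shared-secret")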

The syntax could be something like this:

Signature = "Signature" ":" #signature-spec
signature-spec = key-type 1*( ";" signature-param )
key-type = "x509" | key-type-extension
signature-param =
   "value" "=" <"> <Base64 encoded signature> <">
   | "canon" "=" ( "basic" | canon-extension )
   | "headers" "=" <"> 1#field-name <">
   | "request-uri" "=" quoted-string
   | "digest" "=" <"> <Base64 encoded digest> <">
   | "crypt" "=" ( "rsa-sha1" | crypt-extension )
   | "key-uri" "=" quoted-string
   | "key-uid" "=" ( sha1-fingerprint | uid-extension )
   | signature-param-extension
sha1-fingerprint = <"> "sha1" 20(":" 2UHEX) <">
UHEX = DIGIT | "A" | "B" | "C" | "D" | "E" | "F"
uid-extension = <"> uid-type ":" 1*uid-char <">
uid-type = token
uid-char = <any CHAR except CTLs, <\> and <">>
key-type-extension = token
canon-extension = token
crypt-extension = token
hash-func-extension = token
signature-param-extension =
   token "=" (token | quoted-string)

There are several issues I'm not sure about.

  • Should this be generalized to support signing of (some kinds of) HTTP request?
  • What is the right way to canonicalize HTTP headers?
  • Rather than having a digest parameter, would it be better to use the Digest header from RFC 3230 and then include that in the list of headers to be signed?
  • Should the time period during which the signature is valid be specified explicitly by parameters in the Signature header rather than being inferred from other headers, such as Date and Expires (which would of course need to be included in the list of headers to sign)?
  • Should support for security tokens other than X.509 certificates be specified?

2007-10-15

HTTP: what to sign?

There have been quite a number of useful comments on my previous post, and even an implementation.  The main area of disagreement seems to be what exactly to sign.

It seems to me that you can look at an HTTP interaction at two different levels:

  • at a low level, it consists of request and response messages;
  • at a slightly higher level, it consists of the transfer of the representations of resources.

With a simple GET, there's a one-to-one correspondence between a response message and a representation transfer.  But with fancier HTTP features, like HEAD or conditional GET or ranges or the proposed PATCH method, these two levels start to diverge: the messages aren't independent entities in themselves; they are artifacts of the client attempting to efficiently synchronize the representation of the resource that it has with the current representation defined by the origin server.

The question then arises of whether, at an abstract level, the right thing to sign is messages or resource representations.  I think the right answer is resource representations: those are the things whose integrity is important to applications.  For example, the signature in the response to a HEAD request wouldn't simply sign that response; rather it would cover the entity that would have been returned by a GET. The Signature header would thus be allowed in similar situations to the ETag header and would correspond to the same thing that a strong entity tag corresponds to.

It's important to remember that the representation of the resource doesn't consist of just the data in the entity body.  It also includes the metadata in the entity headers.  At the very least, I think you would want to sign the Content-Type header. Note that there are some headers that you definitely wouldn't want to sign, in particular hop-by-hop headers.  I don't think there's a single right answer as to which headers to sign, which means that the Signature header will need to explicitly identify which headers it is signing.

With this approach the signature doesn't need to cover the request.  However, it does need to relate the representation to a particular resource. Otherwise there's a nasty attack possible: the bad guy can replace the response to a request for one resource with the response to a request for another resource. (Suppose http://www.example.com/products/x/price returns the price of product x; an attacker could completely switch around the price list.)  I think the simplest way to solve this is for the Signature header in the response to include a uri="request_uri" parameter, where request_uri is the URI of the resource whose representation is being signed. This allows the signature verification process to work with just the response headers and body as input, which should simplify plugging this feature into implementations.
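
A minimal sketch of that check on the client side (the parsing of the Signature parameters into a dictionary is elided, and the names are mine):

def uri_matches(signature_params, dereferenced_uri):
    # Reject the response unless the request-uri parameter signed by the
    # origin server is the URI this client actually dereferenced; otherwise
    # a validly signed response for one resource (say, the price of product
    # x) could be passed off as the response for another.
    return signature_params.get("request-uri") == dereferenced_uri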

Although not including the request headers in the signature simplifies things, it must be recognized that it does lose some functionality. When there are multiple variants, the signature can't prove that you've got the right variant. However, I think that's a reasonable tradeoff.  Even if the request headers were signed, sometimes the response depends on things that aren't in the request, like the client's IP address (as indicated by Vary: *). The response can at least indicate that the response is one of several possible variants, by including Content-Location, Content-Language and/or Vary headers amongst the signed response headers.

The signature will also need to include information about the time during which the relationship between the representation and the resource applies.  I haven't figured out exactly how this should work.  It might be a matter of signing some combination of Date, Last-Modified, Expires and Cache-Control (specifically the s-maxage and maxage directives) headers, or it might involve adding timestamp parameters to the Signature header.

To summarize, the signature in the response should assert that a particular entity is a representation of a particular resource at a particular time.

2007-10-12

HTTP response signing abstract model

I've argued that there's a need for an HTTP-specific mechanism for signing HTTP responses. So let's try and design one.  (Usually at this point, I would start coding, but with security-related stuff, I think it's better to have more discussion up front.)

Before drilling down into syntax, I would like to work out the design at a more abstract level.  Let's suppose that the mechanism will take the form of a new Signature header.  Here is my current thinking as to the steps involved in constructing a Signature header:

  1. Choose which security token to use and create one or more identifiers for it.
  2. Choose the suite of cryptographic algorithms to use.
  3. Choose which request and response headers to sign.
  4. Compute the cryptographic hashes of the request and response entity bodies if they are present.  Base64-encode those hashes.
  5. Create a Signature header template containing the information from steps 1 to 4; this differs from the final Signature header only in that it has a blank string at the point where the final Signature header will have the base64-encoded signature value.
  6. Canonicalize the request headers.
  7. Combine the response headers with the Signature header template and canonicalize them.
  8. Create a string that encodes the HTTP method, the request URI, the canonicalized request headers from step 6, the response status, and the canonicalized response headers from step 7.
  9. Compute the cryptographic hash of the string created in the previous step.
  10. Sign the cryptographic hash created in the previous step.  Base64-encode this to create the signature value.
  11. Create the final Signature header by inserting the base64-encoded signature value from the previous step into the Signature header template from step 5.

Let's flesh this out with a bit of q&a.

  • What kinds of security token can be used?
    At least X.509 certificates should be supported.  But there should be the potential to support other kinds of token.
  • How are security tokens identified?
    It depends on the type.  For X.509, it would make sense to have a URI that allowed the client to fetch the certificate.  It would also be desirable to have an identifier that uniquely identifies the certificate, so that the client can tell whether it already has the certificate without having to go fetch it.  As far as I can tell, in the X.509 case, people mostly use the SHA-1 hash of the DER encoding of the certificate for this.
  • How does the server know what kind of signature (if any) the client wants?
    The client can provide a Want-Signature header in the request.  (It makes more sense to me to call this Want-Signature than Accept-Signature, just as RFC 3230 uses Want-Digest, since any client should be able to accept any signature.)
  • What if the response uses a transfer encoding?
    It's the entity body that's hashed in step 4. Thus, the sender computes the hash of the entity body before applying the transfer encoding.
  • What if the response uses a content encoding?
    It's the entity body that's hashed. Thus, the sender computes the hash of the entity body after applying the content encoding. From 7.2 of RFC 2616:
      entity-body := Content-Encoding( Content-Type( data ) )
  • What if the response contains a partial entity body (and a Content-Range header)?
    The hash covers the full entity-body (that is, the hash is computed by the recipient after all the partial entity bodies have been combined into the full entity-body).  In the terminology of RFC 3230, it covers the instance (where there is a relevant instance), rather than the entity. Thus the hash identifies the same thing as a strong entity tag (which in my view makes the terminology of RFC 3230 rather unfortunate).
  • Why and how are headers canonicalized?
    As I understand the HTTP spec, proxies are allowed to change various semantically insignificant syntactic details of the headers before they pass them on.  For example, section 2.2 of RFC 2616 says any linear white space may be replaced with a single SP. Section 4.2 explicitly says that proxies must not change the relative order of field values with the same header field name, but it seems to imply that proxies are allowed to change syntactic details of the header fields that are not semantically significant (such as the relative order of headers with different field names).  It is not clear to me to what extent section 13.5.2 overrides this. Perhaps some HTTP wizard can help me out here. In any case, there needs to be just enough canonicalization to ensure that the canonicalization of the headers sent by the origin server will be the same as the canonicalization of the headers as seen by the client, no matter what HTTP/1.1 conforming proxies (or commonly-deployed non-conforming proxies) might be on the path between the origin server and the client. I would note that the Amazon REST Authentication scheme does quite extensive canonicalization.  (A sketch of one possible canonicalization follows this list.)
  • Can there be multiple signatures?
    Yes. In the normal HTTP style, the Signature header should support a comma-separated list of signatures. The order of this list would be significant.  There should be a way for each signature in the list to specify which of the previous signatures in the list are included in what it signs.  There's a semantic difference between two independent signatures, and a later signature endorsing an earlier signature.
  • How about streaming?
    Tricky.  The fundamental problem is that HTTP 1.1 isn't very good at enabling the interleaved delivery of data and metadata.  This is one of the things that Roy Fielding's Waka is supposed to fix.  I don't think it's a good idea to put a lot of complexity into the design of a specific header to fix a generic HTTP problem.  The fact that the hash of the entity body is computed in a separate, independent preliminary step makes it a bit easier for the server to precompute the hash when the content is static. Also the signature header could be put in a trailer field when using a chunked transfer-coding.  However, although this helps the server, it screws the client's ability to stream, because the client needs to know the hash algorithm up front.  And, of course, it also requires all the proxies between the client and the server to support trailer fields properly.
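
Here is the sketch of one possible canonicalization promised above.  It is only a starting point for discussion, not a vetted algorithm; it assumes the headers arrive as a list of (name, value) pairs in message order, with any folded multi-line values already unfolded.

import re

def canonicalize_headers(raw_headers):
    # One possible canonicalization, following the reading of RFC 2616 above:
    #  - field names are case-insensitive, so lower-case them;
    #  - proxies may combine repeated fields into a single comma-separated
    #    field, so do that combining here (section 4.2 preserves the relative
    #    order of values with the same field name, so list order is kept);
    #  - any linear white space may be replaced by a single SP (section 2.2),
    #    so collapse runs of white space and trim the ends;
    #  - the relative order of *different* field names is not guaranteed,
    #    so sort the output by field name.
    combined = {}
    for name, value in raw_headers:
        key = name.lower()
        combined.setdefault(key, []).append(re.sub(r"[ \t]+", " ", value).strip())
    return "".join("%s: %s\n" % (key, ", ".join(values))
                   for key, values in sorted(combined.items()))

Whether this survives commonly-deployed proxies is exactly the question raised above.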

2007-10-10

Why not S/MIME?

A couple of people have suggested S/MIME as a possible starting point for a solution to signing HTTP documents. This seems like a suboptimal approach to me; I'm not saying it can't be made to work, but there are a number of things that make me feel it is possible to do significantly better.

  1. MIME multipart does not seem to me to be the HTTP way.  The HTTP way is to put a single entity in a resource, and then have URIs pointing to other parts. The HTTP way to determine where content ends is to count bytes (using Content-Length, or the chunked transfer-coding), not by having a boundary string that you have to search for.  MIME multipart feels like an impostor from the email world.
  2. Conceptually we don't want to change the entity that the client receives, we just want the client to be able to check the integrity of the response.  Providing integrity by completely changing the content that the client receives doesn't seem a good match for the basic task that we are trying to accomplish.
  3. Using multipart/signed will break clients that don't support it. If the signature was in some sort of header, then browsers that didn't understand the header would automatically ignore it; you could then send signed responses without having to worry about whether the clients supported them or not.  With multipart/signed you would have to use content negotiation to avoid breaking clients. Overloading content negotiation to also handle negotiation of integrity checking will interfere with using content negotiation for negotiating content types.
  4. It's useful to be able to negotiate several aspects of digital signatures.  What kind of security token is going to be used?  Although X.509 certificates have to be supported, I think WS-Security does the right thing in not restricting itself to these. It might be useful to have straightforward public keys, without any of the X.509 OSI junk, or to use a symmetric, shared secret key.  It's also useful to be able to negotiate which algorithms are used (e.g. SHA-1 vs SHA-256).  Trying to do this well with content-negotiation would be tough.  Better to introduce some sort of Accept-Signature header and do it properly.
  5. Careful thought is needed about what exactly needs to be signed. Obviously signing just the content is not enough.  We need to sign at least some of the entity headers as well.  With multipart, we can handle this by putting those headers in the part that is signed.  But this will lead to duplicating headers because some of those headers (like Date) will also need to be included in the HTTP headers: not fatal, but kind of ugly.
  6. A more subtle problem is that it's not really enough to sign the response entity in isolation.  The signature needs to link the response to the request. In the case of a GET, the signature needs to say that the entity is a response to a GET on a particular request URI.  Neither the Location nor the Content-Location headers have quite the same semantics as the request URI.  Also the response may vary depending on other headers (e.g. Accept), as listed in the Vary header in the response.  The signature therefore ought to be able to cover those of the request headers that affect which entity is returned. Also it would be desirable to be able to sign responses to methods other than GET. The signature should probably also cover the status code. I don't see a natural way to fit this into the S/MIME approach.
  7. One of the main points of doing HTTP signing rather than SSL is cache-friendliness. Consider the process of validating a cache entry that has become stale. This works by doing a conditional GET: if the entity body hasn't changed, then the conditional GET will return a 304 along with some new headers, typically including a new Date header. Since the signature typically needs to cover the date, this isn't going to work with multipart/signed: the entire entity body would need to be resent so that the Date contained in the relevant MIME part can be updated.  On the other hand if the signature is in the header, then the conditional GET can still return a 304 and include an updated signature header that covers the new date.  (An example exchange follows this list.)
  8. Finally, S/MIME has been around for a long time, but it doesn't seem to have got any traction in the HTTP world.
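
To make point 7 concrete, the kind of exchange I have in mind looks something like this (all values made up, and the Signature header shown only schematically):

GET /products/x/price HTTP/1.1
Host: www.example.com
If-Modified-Since: Tue, 09 Oct 2007 14:30:00 GMT

HTTP/1.1 304 Not Modified
Date: Wed, 10 Oct 2007 09:15:00 GMT
Expires: Wed, 10 Oct 2007 10:15:00 GMT
Signature: x509;value="...";headers="Content-Type, Date, Expires";request-uri="http://www.example.com/products/x/price";digest="..."

The entity body is not resent; only the signature, which covers the new Date and Expires headers, needs to be recomputed.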

A recent development related to signing in the email world is DomainKeys Identified Mail (DKIM), recently standardized as RFC 4871. This does not build on S/MIME at all.  The signature goes in a header field.  It also doesn't use X.509 certificates and their associated PKI; rather it uses public keys distributed using DNS. It looks like a good piece of work to me.

Another interesting development is Amazon's REST authentication scheme. This works by signing headers, although it does so in the context of authentication of the client to the server.  It also uses a shared secret and an HMAC rather than public key cryptography.

Overall I think we can do much better than S/MIME by designing something specifically for HTTP.

2007-10-07

Integrity without confidentiality

People often focus on confidentiality as being the main goal of security on the Web; SSL is portrayed as something that ensures that when we send a credit card number over the web, it will be kept confidential between us and the company we're sending it to.

I would argue that integrity is at least as important, if not more so.  I'm thinking of integrity in a broad sense, as covering both ensuring that the recipient receives the sender's bits without modification and that the sender is who the recipient thinks it is. I would also include non-repudiation: the sender shouldn't be able to deny that they sent the bits.

Consider books in the physical world.  There are multiple mechanisms that allow us to trust in the integrity of the book:

  • it is expensive and time-consuming to produce something that looks and feels like a real, bound book
  • we obtain books from bookshops or libraries, and we trust them to give us the real thing
  • page numbers make it hard to remove pages
  • the ISBN allows us to check that something has really been published
  • the legal requirement that publishers deposit a copy of every book published with one or more national libraries (e.g. Library of Congress in the US or the British Library in the UK) ensures that in the unlikely event that the integrity of a book comes into question, there's always a way to determine whether it is authentic

Compare this to the situation in the digital world.  If we want to rely on something published on a web site, it's hard to know what to do.  We can hope the web site believes in the philosophy that Cool URIs don't change; unfortunately such web sites are a minority.  We can download a local copy, but that doesn't prove that the web site was the source of what we downloaded. What's needed is the ability to download and store something locally that proves that a particular entity was a valid representation of a particular resource at a particular time.

SSL is fundamentally not the right kind of protocol for this sort of thing.  It's based on using a handshake to create a secure channel between two endpoints.  In order to provide the necessary proof, you would have to store all the data exchanged during the session. It would work much better to have something message-based, which would allow each request and response to be separately secured.

Another crucial consideration is caching. Caching is what makes the web perform.  SSL is the mainstay of security on the Web.  Unfortunately there's the little problem that if you use SSL, then you lose the ability to cache. You want performance? Yes, Sir, we have that; it's called caching.  You want security? Yes, Sir, we have that too; it's called SSL. Oh, you want performance and security? Err, sorry, we can't do that.

A key step to making caching useable with security is to decouple integrity from confidentiality.  A shared cache isn't going to be very useful if each response is specific to a particular recipient. On the other hand there's no reason why you can't usefully cache responses that have been signed to guarantee their integrity.

I think this is one area where HTTP can learn from WS-Security, which has message-based security and cleanly separates signing (which provides integrity) from encryption (which provides confidentiality).  But of course WS-* doesn't have the caching capability that HTTP provides (and I think it would be pretty difficult to fix WS-* to do caching as well as HTTP does).

My conclusion is that there's a real need for a cache-friendly way to sign HTTP responses. (Being able to sign HTTP requests would also be useful, but that solves a different problem.)

2007-10-04

Bytes not infosets

Security is the one area where the WS-* world has developed a set of standards that provide significantly more functionality than has so far been standardized in the REST world. I don't believe that this is an inherent limitation of REST; I'm convinced there's an opportunity to standardize better security for the REST world. So I've been giving quite a lot of thought to the issue of what the REST world can learn from WS-Security (and its numerous related standards).

Peter Gutmann has a thought-provoking piece on his web site in which he argues that XML security (i.e. XML-DSig and XML Encryption) is fundamentally broken. He argues that the fundamental causes of this brokenness are as follows:

1. XML is an inherently unstable and therefore unsignable data format. XML-DSig attempts to fix this via canonicalization rules, but they don't really work.

2. The use of an "If it isn't XML, it's crap" design approach that led to the rejection of conventional, proven designs in an attempt to prove that XML was more flexible than existing stuff.

He also complains of the difficulty of supporting XML in a general-purpose security toolkit:

It's impossible to create something that's simply a security component that you can plug in wherever you need it, because XML security is inseparable from the underlying XML processing system.

I would suggest that there are two different ways to view XML:

  1. the concrete view: in this view, interchanging XML is all about interchanging sequences of bytes in the concrete syntax defined by XML 1.0
  2. the infoset view: in this view, interchanging XML is all about interchanging abstract structures representing XML infosets; the syntax used to represent the infoset is just a detail to be specified by a binding (the infoset view tends to lead to bindings up the wazoo)

I think each of these views has its place.  The infoset is an invaluable conceptual tool for thinking about XML processing. However, I think there's been an unfortunate tendency in the XML world (and the WS-* world) to overemphasize the infoset view at the expense of the concrete view.  I believe this tendency underlies a lot of the problems that Gutmann complains of.

  • There's nothing unstable or unsignable about an XML document under the concrete view.  It's just a blob of bytes that you can hash and sign as easily as anything else (putting external entities on one side for the moment); the short example after this list illustrates this.
  • The infoset view makes it hard to accommodate non-XML formats as first-class citizens.  If your central data model is the XML infoset, then everything that isn't XML has to get mapped into XML in order to be accommodated. For example, the WS-* world has MTOM. This tends to lead to reinventing XML versions of things just so they can be first-class citizens in an infoset-oriented world.
  • If you look at everything as an infoset, then it starts to look natural to use XML for things that XML isn't all that good at. For example, if your message body is an XML infoset and your message headers are infosets, then it looks like a reasonable choice to use XML as your envelope format to combine your body with your headers. But using XML as a container format like this leads you into all the complexity and inefficiency of XML Security, since you need to be able to sign the things that you put in containers.  It's much simpler to use a container format that works on bytes, like zip or MIME multipart.
  • The infoset view leads to an emphasis on APIs that work with XML using abstractions at the infoset level, such as trees or events, rather than APIs that work with XML at a more concrete level using byte streams or character streams. Although infoset-level APIs are needed for processing XML, when you use infoset-level APIs for interchanging XML between separate components, I believe you pay a significant price in terms of flexibility and generality. In particular, using infoset-level APIs at trust boundaries seems like a bad idea.
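
As a trivial illustration of the concrete view (the file name is made up): the document is hashed exactly as the bytes it was interchanged as, with no canonicalization step in sight.

import base64, hashlib

with open("invoice.xml", "rb") as f:   # the concrete view: just bytes
    xml_bytes = f.read()
digest = base64.b64encode(hashlib.sha1(xml_bytes).digest()).decode("ascii")
# Any change to any byte -- whitespace, attribute order, character
# references -- changes the digest, and that is the point: sign the bytes
# you interchange, not an abstraction of them.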

My conclusion is this: one aspect of the WS-* approach that should not be carried over to the REST world is the emphasis on XML infosets.

2007-10-01

Practical Principles for Computer Security

Well, I really am a lousy blogger. I am in awe of all those who manage to put out interesting posts, day after day, month after month.  I guess it must get easier with practice. Anyway, I think I'm going to try out a "little and often" approach to posting.

I have found myself getting more and more interested in security over the last couple of years.  I think the single most inspiring paper I have read on security is Practical Principles for Computer Security by Butler Lampson (who has had a hand in inventing an incredibly impressive range of technologies including personal computing, ethernet, laser printers, two-phase commit and WYSIWYG editors).  I won't try and summarize it. If you are at all interested in security and you haven't read it, you should do so.