2010-02-12

A tour of the open standards used by Google Buzz

The thing I find most attractive about Google Buzz is its stated commitment to open standards:

We believe that the social web works best when it works like the rest of the web — many sites linked together by simple open standards.

So I took a bit of time to look over the standards involved. I’ll focus here on the standards that are new to me.

One key design decision in Google Buzz is that individuals in the social web should be identifiable by email addresses (or at least strings that look like email addresses).  On balance I agree with this decision: although it is perhaps better from a purist Web architecture perspective to use URIs for this, I think email addresses work much better from a UI perspective.

Google Buzz therefore has some standards to address the resulting discovery problem: how to associate metadata with something that looks like an email address. There are two key standards here:

  • XRD. This is a simple XML format developed by the OASIS XRI TC for representing metadata about a resource in a generic way. This looks very reasonable and I am happy to see that it is free of any XRI cruft. It seems quite similar to RDDL.
  • WebFinger. This provides a mechanism for getting from an email address to an XRD file. It’s a two-step process based on HTTP. First you do an HTTP GET on a well-known URI constructed from the domain part of the email address (the well-known URI follows the Defining Well-Known URIs and host-meta Internet Drafts); this gives you a per-domain XRD file. That per-domain XRD file provides (amongst other things) a URI template that tells you how to construct a URI for an email address in that domain; dereferencing this URI gives you an XRD representation of metadata related to that email address (a minimal sketch of the two steps follows this list). There seem to be some noises about a JSON serialization, which makes sense: JSON seems like a good fit for this problem.
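
As a concrete illustration of the two steps, here is a minimal Python sketch. It is only a sketch under assumptions: the XRD namespace URI, the well-known host-meta path, and the rel="lrdd" template convention are as I read the current drafts, and real deployments may differ.

    # Hedged sketch of the two-step WebFinger lookup described above.
    # Namespace, well-known path and "lrdd" rel value follow the
    # host-meta/XRD drafts as I read them; treat this as illustrative only.
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    XRD_NS = "{http://docs.oasis-open.org/ns/xri/xrd-1.0}"

    def webfinger(email):
        domain = email.split("@", 1)[1]
        # Step 1: fetch the per-domain XRD from the well-known URI.
        host_meta_url = f"https://{domain}/.well-known/host-meta"
        with urllib.request.urlopen(host_meta_url) as resp:
            host_xrd = ET.parse(resp).getroot()
        # Step 2: find the LRDD link template, fill in the acct: URI for
        # the email address, and fetch the per-user XRD it points to.
        for link in host_xrd.findall(f"{XRD_NS}Link"):
            if link.get("rel") == "lrdd" and link.get("template"):
                user_url = link.get("template").replace(
                    "{uri}", urllib.parse.quote(f"acct:{email}", safe=""))
                with urllib.request.urlopen(user_url) as resp:
                    return ET.parse(resp).getroot()
        raise LookupError("no lrdd template found for " + domain)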

One of the many interesting things you can do with such a discovery mechanism is to associate a public key with an individual. There’s a spec called Magic Signatures that defines this. Magic Signatures correctly eschews all the usual X.509 cruft, which is completely unnecessary here; all you need is a simple RSA public key. My one quibble would be that it invents its own format for public keys, when there is already a perfectly good standard format for this: the DER encoding of the RSAPublicKey ASN.1 structure (defined by RFC 3447/PKCS#1), as used by e.g. OpenSSL.
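
To show what that existing format amounts to, here is a rough sketch (mine, not part of any of these specs): the PKCS#1 RSAPublicKey structure is simply a DER SEQUENCE of two INTEGERs, the modulus and the public exponent, so producing it takes only a few lines.

    # Minimal DER encoding of the PKCS#1 RSAPublicKey structure:
    #   RSAPublicKey ::= SEQUENCE { modulus INTEGER, publicExponent INTEGER }
    # Illustrative only; a real implementation would use an existing library.
    def der_length(n: int) -> bytes:
        # Short form for lengths < 128, long form otherwise.
        if n < 0x80:
            return bytes([n])
        body = n.to_bytes((n.bit_length() + 7) // 8, "big")
        return bytes([0x80 | len(body)]) + body

    def der_integer(value: int) -> bytes:
        body = value.to_bytes((value.bit_length() + 7) // 8 or 1, "big")
        if body[0] & 0x80:          # leading zero keeps the integer positive
            body = b"\x00" + body
        return b"\x02" + der_length(len(body)) + body

    def rsa_public_key_der(modulus: int, exponent: int) -> bytes:
        body = der_integer(modulus) + der_integer(exponent)
        return b"\x30" + der_length(len(body)) + body   # SEQUENCE tag

The resulting bytes are what OpenSSL and friends already produce, and they could simply be base64url-encoded wherever Magic Signatures wants to carry a key.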

Note that for this to be secure, WebFinger needs to fetch the XRD files in a secure way, which means either using SSL or signing the XRD file using XML-DSig; in both cases it is leveraging the existing X.509 infrastructure. The key architectural decision here is to use the X.509 infrastructure to establish trust at the domain level, and then to use Web technologies to extend that chain of trust from the domain to the individual. From a deployment perspective, I think this will work well for things like Gmail and Facebook, where you have many users per domain. The challenge will be to make it work well for things like Google Apps for your Domain, where the number of users per domain may be few. At the moment, Google Apps requires the domain administrator only to set up some DNS records. The problem is that DNS isn’t secure (at least until DNSSEC is widely deployed).

Here’s one possible solution: the user’s domain (e.g. jclark.com) would have an SRV record pointing to a host in the provider’s domain (e.g. foo.google.com); the XRD is fetched using HTTP, but is signed using XML-DSig and an X.509 certificate for the user’s domain. The WebFinger service provider (e.g. Google) would take care of issuing these certificates, perhaps with flags to limit their usage to WebFinger (Google already verifies domain control as part of the Google Apps setup process). The trusted roots here might be different from the normal HTTPS roots determined by browser vendors.
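
To make the delegation idea concrete, here is a purely illustrative sketch of the SRV lookup step, using the third-party dnspython library. The "_webfinger._tcp" service label is my own invention for the example, not anything specified anywhere; the XRD fetched from the returned host would then be checked against an XML-DSig signature chaining to a certificate for the user’s domain.

    # Illustrative only: resolve a (hypothetical) WebFinger SRV record that
    # delegates from the user's domain to a host in the provider's domain.
    import dns.resolver  # third-party dnspython package

    def find_webfinger_host(user_domain: str) -> tuple[str, int]:
        answers = dns.resolver.resolve(f"_webfinger._tcp.{user_domain}", "SRV")
        # Take the highest-priority record; a real client would also honour
        # weights and fall back across records.
        best = min(answers, key=lambda r: r.priority)
        return str(best.target).rstrip("."), best.port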

The other part of Magic Signatures is billed as a simpler alternative to XML-DSig which also works for JSON. The key idea here is to avoid the whole concept of signing an XML information item and thus avoid the need for canonicalization. Instead you sign a byte sequence, which is encoded in base64 as the content of an XML element (or as a JSON string). I don’t agree with the idea of always requiring base64 encoding of the content to be signed: that seems to throw away many of the benefits of a textual format unnecessarily. Instead, when the byte sequence you are signing represents a Unicode string, you should be able to represent the Unicode string directly as the content of an XML element or as a JSON string, using the built-in quoting mechanisms of XML (character references/entities and CDATA sections) or JSON. The Unicode string that results from XML or JSON parsing would be UTF-8 encoded before the standard signature algorithm is applied.

A more fundamental problem with Magic Signatures is that it loses the key feature of XML-DSig (particularly with enveloped signatures) that applications that don’t know or care about signing can still understand the signed data, simply by ignoring the signature. I completely sympathize with the desire to avoid the complexity of XML-DSig, but I’m unconvinced that Magic Signatures is the right way to do so. Note that XRD has a dependency on XML-DSig, but it specifies a very limited profile of XML-DSig, which radically reduces the complexity of XML-DSig processing. For JSON, I think it would be worth exploring whether a similarly limited approach could work.
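
To make the contrast concrete, here is a small sketch (the function names are mine) of the two ways of preparing the bytes to be signed; the signature primitive applied afterwards, e.g. RSA with SHA-256, is the same in both cases.

    import base64

    def armored_payload(data: bytes) -> bytes:
        # Magic Signatures style, as described above: the raw bytes are
        # base64url-encoded, and that encoded form is what is carried in the
        # XML or JSON document and what gets signed.
        return base64.urlsafe_b64encode(data)

    def textual_payload(parsed_text: str) -> bytes:
        # The alternative suggested above: carry the Unicode string using the
        # format's own quoting mechanisms, then UTF-8 encode the string that
        # comes out of the XML/JSON parser before signing it.
        return parsed_text.encode("utf-8")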

There are also standards that extend Atom. The simplest are just content extensions:

  • Atom Activity Extensions provides semantic markup for social networking activities (such as "liking" something or posting something). This makes good sense to me.
  • Media RSS Module provides extensions for dealing with multimedia content. These were originally designed by Yahoo for RSS. I don't yet understand how these interact with existing Atom/AtomPub mechanisms for multimedia (content/@src, link).

There are also protocol extensions:

  • PubSubHubbub provides a scalable way of getting near-realtime updates from an Atom feed. The Atom feed includes a link to a “hub”. An aggregator can then register with the hub to be notified when a feed is updated. When a publisher updates a feed, it pings the hub and the hub then updates all the aggregators that have registered with it. This is intended for server-based aggregators, since the hub uses HTTP POST to notify aggregators (a sketch of the subscription request follows this list).
  • Salmon makes feed aggregation two-way. Suppose user A uses only social networking site X and user B uses only social networking site Y. If user A wants to network with B, then typically either A has to join Y or B has to join X. This pushes the world in the direction of having one dominant social network (i.e. Facebook). In the long term I don’t think this is a good thing. The above extensions solve part of the problem: X can expose a profile for A that links to an Atom feed, and Y can use this to provide B with information about A. But a problem remains. Suppose B wants to comment on one of A’s entries. How can Y ensure that B’s comment flows back to X, where A can see it? Note that there may be another user C on another social networking site Z who may want to see B’s comment on A’s entry. The basic idea of Salmon is simple: the Atom feed for A exposed by X links to a URI to which comments can be posted. The heavy lifting of Salmon is done by Magic Signatures. Signing the Atom entries is the key to allowing sites to determine whether to accept comments.
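
To make the PubSubHubbub subscription step concrete, here is a hedged sketch of an aggregator registering with a hub. The hub.* parameter names follow the PubSubHubbub draft as I understand it; the URLs would of course be whatever the feed and aggregator actually use.

    # Illustrative PubSubHubbub subscription request (parameter names per the
    # draft spec as I read it; endpoints are placeholders).
    import urllib.parse
    import urllib.request

    def subscribe(hub_url: str, topic_url: str, callback_url: str) -> None:
        params = urllib.parse.urlencode({
            "hub.mode": "subscribe",
            "hub.topic": topic_url,        # the Atom feed we want pushed to us
            "hub.callback": callback_url,  # where the hub will POST updates
            "hub.verify": "sync",
        }).encode("ascii")
        # The hub acknowledges the subscription and will later POST new or
        # updated entries to the callback when the publisher pings it.
        urllib.request.urlopen(hub_url, data=params)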

Google seems to be planning to use the Open Web Foundation (OWF) for some of these standards. Although the OWF’s list of members includes many names that I recognize and respect, I don’t really understand why we need the OWF. It seems very similar to the IETF in its emphasis on individual participation. What was the perceived deficiency in the IETF that motivated the formation of the OWF?

13 comments:

Queen Anne said...

Thanks for writing this up, James!

We're working on the APIs out in the open over at http://groups.google.com/group/google-buzz-api if you or your readers want to join us there.

Regarding the OWF: the OWF isn't a standards body, of course. But the OWF offers the OWFa, a permissive license that can be used for open specifications. Many of the specs we're using are licensed under the OWFa. This can happen even before a spec goes on to be standardized in a formal setting.

Orthogonally, many of these protocols are standards by way of the IETF and OASIS, some with their own permissive licenses.

wizkid said...
This comment has been removed by the author.
Unknown said...

I'm confused as to how a bunch of these things are being called "standards", especially by someone like James who's been around the block in this space.

Most of these so-called standards are specs written entirely by Google employees and primarily implemented by Google products (e.g. WebFinger, Salmon, PubSubHubbub). In that case, why don't we call Twitter's API or the Facebook platform standards? After all, their specs are online and there are a lot more apps that implement them than any of the above.

I find it extremely troublesome that one company has started a trend of mislabelling proprietary technologies as standards and has gotten members of the press and tech elite repeating their spin.

Queen Anne said...

Dare -- actually, I'm very, very careful not to ever use the word "standard" to refer to anything that hasn't gone through the IETF, OASIS, W3C, ISO, etc. And I ask the people I work with to do the same.

You'll notice I make that very point in my comment above.

You've made it crystal clear that you don't like that Google is pushing for these specs. But what's odd is that Google is building on the very same specs that Microsoft authored or is advocating for -- Atom, AtomPub, Activity Streams, OAuth, etc. Moreover, the lead counsel writing the OWF license agreement is a Microsoft attorney (and a very good one at that), who is doing it in part so that Microsoft has a publicly-vetted open licensing model to use.

Try not to throw out the good parts, the parts that your own colleagues are writing, just because Google is also championing them. To be sure, we're not turning them away just because Microsoft had a hand in creating them.

Steve Ivy said...

Dare,

PubSubHubbub was indeed developed by Google employees, but as a side project, and the spec has been released for anyone to implement. No licensing issues.

WebFinger is an effort by a number of folks, most influentially Eran Hammer-Lahav (currently of Yahoo). Yes, Googlers have participated in the definition, as have folks from Yahoo, Six Apart, and any number of indie voices.

Unknown said...

Dewitt,
Standards mean a very specific thing in our industry, whether they are de facto or de jure standards. Misusing the term is bad for the industry because it allows companies to sling FUD while using what are effectively proprietary technologies in their products, and it misleads customers.

This is orthogonal to the merits of the underlying technologies or whether Microsoft employees are involved in OWF or Atom-related spec work.

Queen Anne said...

Dare -- right. That's why I immediately left a comment clarifying the difference between standards and specifications the moment I saw James' post. My comment is right above yours.

And to be clear, nearly all of these specs _are_ standards. The ones that aren't, like Activity Streams and Salmon, are still being worked on in the open and no standards body would accept them in the current form anyway. What should I call them instead?

If there's something you'd like to add about what Google or I should be doing differently in building Buzz, I'm all ears. In fact, we'd love it if you'd contribute.

And of course, James knows better than anyone the difference between standards and non-standards. He's written more of them than the rest of us combined. : )

James Clark said...

Dare,

I take your point about using the term "standards". I guess I should have said something like "draft specifications that could plausibly evolve into open standards".

mnot said...

AFAICT the list of names on the OWF site shouldn't be read as an endorsement, but rather as an effort by folks to give guidance on what looked like an interesting endeavour.

It's been around for a while, and it's still hard to even say what the OWF is, much less what it does. Beyond the license, of course.

marco said...

Is this all? Nothing more? :-))))
Look at our "tour" for yiid.com:

http://blog.yiid.org/2010/02/22/openyiid-ein-update/

John said...

James,

Thanks for the comments. If you're interested in filing some issues against the Salmon or Magic Signatures specs, I've opened up an issue tracker at http://code.google.com/p/salmon-protocol/issues/list and am tracking issues raised there.

I assume you mean RFC 3447 for RSAPublicKey -- it's possible that ASN.1 DER encoding w/base64url would work; it's just very hard to verify that by reading the ASN.1 spec itself. If there's a reasonable way to specify this as a profile of ASN.1 DER (just for the set of integer fields needed for keys) without pulling in all of ASN.1 as a dependency, it could be reasonable to use it.

Salmon has a specific problem in that it needs its signatures to survive not only transit but storage in hostile environments (e.g., SQL) prior to re-syndication. Thus the armoring provided by Magic Signatures.

Anonymous said...

Hey, James. How's that New Year's resolution holdin' up?

Shigeru said...

Thanks for the good summary!
I've tried making a translation into Japanese. [http://kshigeru.blogspot.com/2010/12/tour-of-open-standards-used-by-google.html]
Are there any license terms?