When friends and family ask about the Professional Developers Conference I attended this week, I tell them it’s kind of like Microsoft’s State of the Union address. I’ve been to a number of these over the years. This was my first as an employee, and Microsoft’s first as a company fully committed to what I believe are the right principles, patterns, and practices. That’s a big statement, and as always you should consider the source and take it for what it’s worth. But if you’ve followed my work over the years, you’ll spot many familiar themes in the following exegesis of the day two keynote by Don Box and Chris Anderson, and you’ll know why this PDC put a huge smile on my face.
In case you’re unfamiliar with the theatrical genre I call PDC performance art, I should briefly explain. Traditionally, at this show attended by thousands of software developers, a few of Microsoft’s technical leaders come to the stage, write small programs on the fly, and run them. These daring high-wire acts are humorous and entertaining, but also deeply informative. The live code exercises new platform technologies, and tells stories about why and how the audience might want to apply those technologies.
The story that Don and Chris told began with a simple web service, running on a demo machine, that printed out a list of processes — effectively, a Unix ps (process status) command. It was built using several key components and features of the .NET Framework: LINQ (Language Integrated Query) to query for and enumerate the list, WCF (Windows Communication Foundation) to package the query as an HTTP-accessible service, UriTemplate to control the namespace of that service, SyndicationFeed to format the response as an Atom feed, and ServiceHost to run the service on the local machine.
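Rendered in Python for compactness (the keynote code was C#, using SyndicationFeed and friends; the hard-coded process list and localhost URIs below are purely illustrative), the core of such a service is a feed generator over a set of URI-addressable resources:

```python
# Sketch: render a process list as an Atom feed, analogous to the
# keynote's SyndicationFeed output. The process list is hard-coded;
# the real demo enumerated live OS processes.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

def processes_to_atom(processes):
    """processes: list of (pid, name) tuples -> Atom feed XML string."""
    ET.register_namespace("", ATOM)
    feed = ET.Element(f"{{{ATOM}}}feed")
    ET.SubElement(feed, f"{{{ATOM}}}title").text = "Processes"
    for pid, name in processes:
        entry = ET.SubElement(feed, f"{{{ATOM}}}entry")
        ET.SubElement(entry, f"{{{ATOM}}}title").text = name
        # Each process is a URI-addressable resource.
        ET.SubElement(entry, f"{{{ATOM}}}id").text = f"http://localhost/processes/{pid}"
        # AtomPub "edit" link: the endpoint for update/delete requests.
        ET.SubElement(entry, f"{{{ATOM}}}link",
                      rel="edit", href=f"http://localhost/processes/{pid}")
    return ET.tostring(feed, encoding="unicode")

print(processes_to_atom([(1234, "notepad"), (5678, "calc")]))
```

The same enumerate-then-format shape holds whether the resources are processes, folders, or blobs, which is what makes the later acts of the demo such small deltas.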
When it ran, this program enabled a browser running on the local machine to surf to a service running on the local machine and view its process list as an Atom feed. This colocation of web client and web service on the local machine is a key pattern that I first explored a decade ago. Dave Winer named the pattern Fractional Horsepower HTTP Server and put it to excellent use in his pioneering blog tool Radio UserLand. The pattern embodies a key underlying principle: symmetry. We have long been conditioned to think of the Internet in terms of clients versus servers (and now services), but that’s an artificial distinction. In the terminology of TCP/IP networking, there are no servers and clients, there are only hosts — that is, peer nodes communicating directly with one another. Firewalls and NATs abolished that symmetry. The newly-announced Azure Services Platform is a technology that can help us restore it.
The next step was to extend the program, adding the ability to kill any of the running processes. The Atom feed was already modeling the process list as a set of URI-addressable resources. To implement the feature in a simple, standard, and discoverable way, it was only necessary to apply the HTTP DELETE verb to those resources. Internally, the program of course had to implement a DeleteProcess method. But that method name need not, and according to RESTful best practices should not, appear in the service’s API. And happily, the service did not — as do so many purportedly RESTful services — expose RPC-style URIs that embed the method name, along the lines of a /DeleteProcess endpoint taking the process ID as a query parameter.
Instead it exposed only resource-style URIs that name each process directly, with no verb in the path.
An HTTP GET method, invoked on such a URI, could return information about the process. An HTTP DELETE method invoked on the same URI accomplished the kill function, and did so without violating the RESTful principle of interface uniformity. Later on we’ll see a nice example of the benefits of that uniformity. But here, let’s notice another key principle at work. I’ve said that the kill operation was discoverable. That’s true thanks to the Atom Publishing Protocol. It defines a hyperlink within each entry that is the RESTful endpoint for update and delete requests targeted at that entry. So the program’s DeleteProcess method queried the Atom feed for those hyperlinks, and used their addresses to create the URI namespace that exposed process deletion to clients.
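In a Python sketch (the keynote code was C#; the routing pattern and the stand-in kill function here are illustrative), the uniform interface amounts to a dispatcher that maps both verbs onto the same resource URI:

```python
# Sketch: a uniform HTTP interface over process resources. The same
# URI answers GET (inspect) and DELETE (kill); no method name appears
# in the URI. kill() is a stand-in -- the demo's DeleteProcess method
# stayed internal to the service.
import re

def handle(method, path, kill=lambda pid: f"killed {pid}"):
    m = re.fullmatch(r"/processes/(\d+)", path)
    if not m:
        return (404, "not found")
    pid = int(m.group(1))
    if method == "GET":
        return (200, f"process {pid}")
    if method == "DELETE":
        return (200, kill(pid))
    return (405, "method not allowed")

print(handle("GET", "/processes/1234"))     # (200, 'process 1234')
print(handle("DELETE", "/processes/1234"))  # (200, 'killed 1234')
```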
The general principle at work here is linking. A core tenet of RESTful style is that link-rich hypermedia documents, useful to people because they make it possible to navigate and discover related things, are equally useful to programs for the same reason.
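Here is that linking discipline as a Python sketch, again standing in for the demo’s C# (the feed contents and URIs are made up): given entries that carry AtomPub rel="edit" links, a client discovers the deletion endpoints with no out-of-band knowledge:

```python
# Sketch: discover deletion endpoints by following links, per the Atom
# Publishing Protocol. Each entry's rel="edit" link is the address
# that accepts update and delete requests for that entry.
import xml.etree.ElementTree as ET

FEED = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>notepad</title>
    <link rel="edit" href="http://localhost/processes/1234"/></entry>
  <entry><title>calc</title>
    <link rel="edit" href="http://localhost/processes/5678"/></entry>
</feed>"""

def edit_links(feed_xml):
    ns = {"a": "http://www.w3.org/2005/Atom"}
    root = ET.fromstring(feed_xml)
    return [link.get("href")
            for link in root.findall("a:entry/a:link[@rel='edit']", ns)]

print(edit_links(FEED))
```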
These are, of course, best practices for an ecosystem sustained by web standards like URI, HTTP, and XML. But it was wonderful to see those best practices clearly demonstrated in a PDC keynote. It has not always been so. Trust me, I would have noticed.
On the next turn of the crank, the standalone process viewer and killer was network-enabled thanks to Azure technology that I first told you about a year ago, back when it was known as the Internet Service Bus. Using it, Don and Chris created this endpoint in the cloud:
You can go ahead and click that URL if you like, it’s still live. What you’ll fetch is an empty Atom feed. During the keynote, though, Don and Chris wired that endpoint to the program running on the demo machine onstage. This was accomplished in a purely declarative way, by adding a binding to the program’s configuration file that pointed to the chunk of web namespace whose root is servicebus.windows.net/services/DonAndChrisPDC.
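For the flavor of that declarative step, a config fragment along these lines is what “adding a binding” amounts to. Treat this as a rough illustration only: the service and contract names are invented, and the relay binding name is my recollection of the Service Bus relay SDK of that era, not taken from the demo.

```xml
<!-- Illustrative only: binds a local WCF service to a Service Bus
     relay address, so the cloud endpoint fronts the local service. -->
<system.serviceModel>
  <services>
    <service name="ProcessService">
      <endpoint address="https://servicebus.windows.net/services/DonAndChrisPDC/"
                binding="webHttpRelayBinding"
                contract="IProcessService" />
    </service>
  </services>
</system.serviceModel>
```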
This wasn’t yet a cloud-based service; that came later. At this stage it was still a local service that was advertised in the cloud and made available to the public Internet. To accomplish that, Azure has to enable clients out on the Net to traverse intervening firewalls and NATs and contact the local service. It does so in a way that illustrates another key principle: policy-driven intermediation.
The need for such intermediation was soon apparent when the local service was relaunched with its Azure binding. Now anyone in the world could visit the above URL in a browser, view processes, and even try to delete one. Within seconds, someone did try, and Don shouted: “Stop the service, Chris!” There was no real risk — the program was running in debug mode, with a breakpoint set on DeleteProcess — but it was a great theatrical moment.
Now in fact, the service was secure by default. In order to expose it to the Net in an unauthenticated way, there was a configuration setting that overrode the default security. After removing that, an interactive (i.e., browser-based) request produced a login page. Crucially, that login page did not come from the local service, but rather from Azure which was handling security, as well as connectivity, for the service. The policy in effect was username/password, so after typing in appropriate credentials, interactive access was restored, but now in a controlled way. A different policy — for example, one requiring X.509 certificates or SAML tokens — could be defined in, and enforced by, the Azure fabric.
Next, the local client program that had been accessing the service — first directly, then by way of the Azure cloud — was adapted for the same kind of secure access. To do that, it requested an authentication token from Azure’s access control system, and then inserted that token into the HTTP headers of subsequent requests to the service.
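As a hedged Python sketch (the actual ACS token format and header scheme are assumptions on my part, not details from the demo), attaching such a token is just a matter of setting a header on each outgoing request:

```python
# Sketch: attach an access-control token to outgoing requests, as the
# demo client did after fetching a token from Azure's access control
# service. The token value and the WRAP header shape are illustrative.
import urllib.request

def authorized_request(url, token, method="GET"):
    req = urllib.request.Request(url, method=method)
    # The demo inserted the token into the HTTP headers of each call.
    req.add_header("Authorization", f'WRAP access_token="{token}"')
    return req

req = authorized_request("http://servicebus.windows.net/services/DonAndChrisPDC",
                         "opaque-token-from-acs", method="DELETE")
print(req.get_header("Authorization"))
```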
So that was act one. Here was Don’s segue into act two: “Chris, are there other services in the world we might want to program in a similar fashion?”
Why yes, Chris said, and launched Live Desktop. There, courtesy of Live Mesh, were some folders that were synchronized cloud replicas of folders on the local demo machine. Since Live Mesh is also based on Atom feeds, it should be easy to convert a RESTful service that enumerates and deletes OS processes into a RESTful service that enumerates and deletes Live Mesh folders.
It was easy. In the client program, the base URI changed from servicebus.windows.net to user-ctp.windows.net/V0.1/Mesh/MeshObjects. And the authentication token had to change too because, well, to be honest, Azure’s subsystems aren’t yet seamlessly integrated. But that was it. The same LINQ query to find entries in a feed worked exactly as before. Only now it listed folders in the cloud rather than processes on the local machine. That’s the beauty of a uniform HTTP interface in the RESTful style.
Note that the Live Mesh API works symmetrically with respect to the cloud and the local client. The same program that lists folders in the cloud can list folders on your local machine. You just point the URIs at localhost, and use the Fractional Horsepower HTTP Server that’s part of the locally-installed Live Mesh software.
Note also that you don’t have to use any Microsoft technologies to work with these Azure services. The demo program used LINQ, WCF, and — for the Live Mesh stuff — a wrapper library that packages the API for use by .NET software. But any technology for shredding XML and communicating over HTTP will work just fine.
In act three, the focus shifted to Azure’s storage service. Using all the same patterns and principles, the program morphed into one that could upload DLL files into Azure’s blob store, use Azure tables to associate human-readable metadata with the DLLs, and issue a simple relational query against the set of uploaded files.
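The shape of that act, reduced to an in-memory Python sketch (a dict and a list standing in for Azure blob and table storage; all names and data invented):

```python
# Sketch of the act-three pattern: blobs hold opaque bytes, a table
# holds human-readable metadata keyed by blob name, and a simple
# query filters the metadata.
blobs = {}      # blob name -> bytes
table = []      # rows of metadata dicts

def upload_dll(name, data, author, description):
    blobs[name] = data
    table.append({"name": name, "author": author,
                  "description": description})

def query(author):
    # The demo issued a simple relational query against the metadata.
    return [row["name"] for row in table if row["author"] == author]

upload_dll("Widgets.dll", b"\x4d\x5a", "don", "widget helpers")
upload_dll("Gadgets.dll", b"\x4d\x5a", "chris", "gadget helpers")
print(query("don"))  # ['Widgets.dll']
```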
Finally, in act four, the service that had been running locally, on the demo machine, was adapted — with some minor changes — to work with the local development version of the Azure compute cloud, and then deployed to the staging and production areas of the real cloud.
To sum up, the emerging Microsoft platform not only spans a continuum of programmable devices and services, it also spans a continuum of access styles that are all based on core standards including URI, XML, and HTTP. I think this is a great story, and I’m exceedingly happy to finally be able to tell it.
19 thoughts on “URI, XML, HTTP, REST, and the Azure Services Platform”
It’s not a “great story”, it’s just another chapter in Microsoft’s long history of selling BS. None of this is new or interesting; it’s a rebranding exercise which solves nothing. None of these MS “solutions” ever solve a problem I have in the real world, in fact they create more problems.
In the real world the problems are LOCK IN, TECHNOLOGY CHURN and BAD DEVELOPERS. Lock in promotes hacking and forces rewrites. Churn means that everyone is always learning, all the time, which means the code is ALWAYS PROTOTYPE. Both of which mean that too many developers never learn to do anything properly and are stuck perpetually chasing their own ass.
But of course, that’s exactly what MS wants, because if people stopped chasing silver bullet hype and started to learn how to program properly instead, MS would lose its grip on this industry.
Well, trojan horses have had this feature to view & kill remote processes for many years. So what’s the big deal? There are botnets that do more complex things, really.
There was a point to this blog entry and, congratulations, you managed to miss it spectacularly.
the show was great. the REST, not so much. fixed representation formats at compile time; missing support for PUT, DELETE, HEAD in almost all services offerings including Silverlight; and custom authentication for each service – along with almost no support for BASIC and DIGEST auth. sure, it’s HTTP, but it barely qualifies as REST. i look forward to MSFT stepping up to the plate and doing the much-needed work to make building highly-scalable, widely-distributed apps using HTTP/REST the default way MSFT devs work.
Isn’t it wonderful that Microsoft has true competition for a change, namely Google which has been much more aggressive about cloud computing than Microsoft up to now?
We live in a great world where we have democratic choice, and not monopoly. Let the battle between Azure and Google Apps begin!
Jon, Please contact me, I’d love you to participate in a conference where you will be able to share your experiences using “commonly-available technologies in unexpected ways to tell stories that make connections, distill experience, and transmit knowledge”.
Tremendously helpful and informative.
Could you provide or direct me to a block diagram showing the relationship of WCF, LINQ, SyndicationFeed, UriTemplate and ServiceHost for the two demos you described?
The diagram for the demo in paragraphs 3 and 4 could start with LINQ querying the database, and end with the user getting answers via HTTP Atom. And the same thing, but using Azure.
Some of the vocabulary is new to me, so a “big picture” diagram would make everything clear.