Tag Archives: azure

How George Bailey can save Delicious

Every Christmas we watch It’s a Wonderful Life. This year I’ll be imagining Jimmy Stewart saying, to a panicked crowd of delicious.com users rushing for the exits, “Now, hold on, it’ll work out, we’ve just got to stick together.”

If you’ve never used the social bookmarking service that began life with the whimsical domain name del.icio.us, here’s the Wikipedia summary. The service began in 2003, and by 2004 had transformed my work practices more profoundly than almost anything else before or since. I’ve written scores of essays explaining how and why. Here are some of my favorites:

2004: Collaborative knowledge gardening

2005: Language evolution with del.icio.us (screencast)

2005: Collaborative filtering with del.icio.us

2006: Del.icio.us is a database

2007: Discovering and teaching principles of information management

2007: Social information management

2008: Twine, del.icio.us, and event-driven service integration

2008: Databasing trusted feeds with del.icio.us

2008: Why and how to blurb your social bookmarks

2009: Collaborative curation as a service

Since the now-infamous leak of an internal Yahoo! slide naming delicious as one of a set of doomed services, there’s been some great gallows humor. Ed Kohler:

The easiest way to shut down Wikileaks would be to have Yahoo! acquire it.

And Anil Dash:

It seems like @pinboardIN is the most successful product Yahoo!’s had a hand in launching in five years. Congrats, @baconmeteor.

Anil is referring to pinboard.in, one of several delicious-like services to which delicious users began fleeing. Pinboard is notable for a clever model in which the price of a lifetime subscription rises with the number of users. When I first checked yesterday morning, that price was $6.90. I signed up at $7.24. Neil Saunders started tracking it at #pinboardwatch; it got to $7.74 last night; it’s $8.17 now. Maybe I should’ve bought 100 accounts at $6.90!

But seriously, this is a moment to reflect on how we can preserve the value we collectively create online. As some of you know, I have made heavy use of delicious in my own service, elmcity. When the news broke, Jeremy Dunck asked: “Bad news for elmcity, huh?”

Actually that’s the least of my worries. The folks who curate elmcity calendar hubs use delicious to configure their hubs, and to list the feeds aggregated by their hubs. It’ll be a bit inconvenient to transition to another bookmarking service, but it’s no big deal. And of course all the existing data is cached in an Azure database; the elmcity service doesn’t depend on live access to delicious.

The real concern is far broader. Millions of us have used delicious to create named sets of online resources. We can recreate our individual collections in other services, but not our collaborative efforts. In Delicious’s Data Policy is Like Setting a Museum on Fire, Marshall Kirkpatrick writes:

One community of non-profit technologists has been bookmarking links with the tag “NPTech” for years – they have 24,028 links categorized as relevant for organizations seeking to change the world and peoples’ lives using technology. Wouldn’t it be good to have that body of data, metadata and curated resources available elsewhere once Delicious is gone?

The problem with “elsewhere,” of course, is that there’s no elsewhere immune to the same business challenges faced by Yahoo!. Maybe now is the time for a new model to emerge. Except it wouldn’t be new at all. The Building and Loan service that George Bailey ran in It’s a Wonderful Life wasn’t a bank, it was a co-op, and its customers were shareholders. Could delicious become the first user-owned Internet service? Could we users collectively make Yahoo! an offer, buy in as shareholders, and run the service ourselves?

It’s bound to happen sooner or later. My top Christmas wish: delicious goes first.

Automatic shifting and manual steering on the information superhighway

I’d like to thank the folks at the Berkman Center for listening to my talk yesterday, and for feedback that was skeptical about the very points I know that I need to sharpen. The talk is available here in multiple audio and video formats. The slides are separately available on SlideShare. There are many ways to use these materials. If I wanted to listen and watch, here are the methods I’d choose. For a tethered experience I’d download the original PowerPoint deck from SlideShare and watch it along with the MP3 audio. For an untethered experience I’d look at the slides first, and then dump the MP3 onto a portable player and head out for a run. Finally, if I lacked the time or inclination for either of those modes, but was still curious about the talk, I’d read Ethan Zuckerman’s excellent write-up.

After the talk we had a stimulating discussion that raised questions some of us have been kicking around forever in the blogosphere:

  1. Do “real people” — that is, people who do not self-identify as geeks — actually use feed syndication?

  2. If not directly and intentionally, do they use it indirectly and unconsciously by way of systems that syndicate feeds without drawing attention to the concept?

  3. Does the concept matter?

The third question is the big one for me. From the moment that the blogosphere booted up, I thought that pub/sub syndication — formerly a topic of interest only to engineers of networked information systems — was now becoming a tool that everyone would want to master in order to actively engage with networked information systems. Mastering the principles of pub/sub syndication wasn’t like mastering the principles of automotive technology in order to drive a car. It was, instead, like knowing how to steer the car — a form of knowledge that we don’t fully intuit. I have been driving for over 35 years. But there are things I never learned until we sent our kids to Skid School and participated in the training.

I’ll admit I have waffled on this. After convincing Gardner Campbell that we should expect people to know how to steer their cars on the information superhighway, I began to doubt that was possible. Maybe people don’t just need automatic transmission. Maybe they need automatic steering too. Maybe I was expecting too much.

But Gardner was unfazed by my doubt. He continued to believe that people need to learn how to steer, and he created a Skid School in order to teach them. It’s called the New Media Studies Faculty Seminar, it’s taking place at Baylor University where Gardner teaches, at partner schools, and from wherever else like minds are drawn by the tags that stitch together this distributed and syndicated conversation. Here’s Gardner reflecting on the experience:

Friday, I was scanning the blog feeds to read the HCC blogs about the discussion. Then I clicked over to some of the other sites’ blogs to see what was happening there. Oops! I was brought up short. I thought I’d clicked on a St. Lawrence University blog post. It sure looked like their site. But as I read the post, it was clear to me something had gone wrong. I was reading a description of the discussion at HCC, which had included very thoughtful inquiries into the relationship of information, knowledge, and wisdom. Then I realized that in fact I was reading a description of the St. Lawrence discussion — because that’s what they’d talked about at St. Lawrence University as well.

And now my links bear witness to that connection, tell my story of those connections, and enact them anew.

This property of the link — that it is both map and territory — is one I’ve blogged about before (a lucky blog for me, as it elicited three of my Favorite Comments Ever). But now I see something much larger coming into view. Each person enacts the network. At the same time, the network begins to represent and enact the infinities within the persons who make it up. The inside is bigger than the outside. Each part contains the whole, and also contributes to the whole.

The New Media Studies Faculty Seminar has given some educators a lesson in how to steer their own online destinies, and a Skid School course on which to practice their new skills. That pretty much sums up my ambition for the elmcity project too. Automatic transmissions are great. But we really do need to teach folks how to steer.

Jazz in Madison, Wisconsin: A case study in curation

The elmcity project’s newest hub is called Madison Jazz. The curator, Bob Kerwin, will be aggregating jazz-related events in Madison, Wisconsin. Bob thought about creating a Where hub, which merges events from Eventful, Upcoming, and Eventbrite with a curated list of iCalendar feeds. That model works well for hyperlocal websites looking to do general event coverage, like the Falls Church Times and Berkeleyside. But Bob didn’t want to cast that kind of wide net. He just wanted to enumerate jazz-related iCalendar feeds.

So he created a What hub — that is, a topical rather than a geographic hub. It has a geographic aspect, of course, because it serves the jazz scene in Madison. But in this case the topical aspect is dominant. So to create the hub, Bob spun up the delicious account MadisonJazz. And in its metadata bookmark he wrote what=JazzInMadisonWI instead of where=Madison,WI.

If you want to try something like this, for any kind of local or regional or global topic, the first thing you’ll probably want to do — as Bob did — is set up your own iCalendar feed where you record events not otherwise published in a machine-readable way. You can use Google Calendar, or Live Calendar, or Outlook, or Apple iCal, or any other application that publishes an iCalendar feed.

If you are very dedicated, you can enter individual future events on that calendar. But it’s hard, for me anyway, to do that kind of data entry for single events that will just scroll off the event horizon in a few weeks or months. So for my own hub I use this special kind of curatorial calendar mainly for recurring events. As I use it, the effort invested in data entry pays recurring dividends and builds critical mass for the calendar.
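
A feed like that is just a text file in the iCalendar format. Here’s a hypothetical example (calendar name, venue, and times are invented for illustration): a single weekly event expressed with an RRULE, which is what makes the recurring-event strategy pay off.

BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//MadisonJazz//Curated events//EN
BEGIN:VEVENT
UID:weekly-jam-session-001@example.org
SUMMARY:Weekly jazz jam session
LOCATION:Hypothetical Club, Madison WI
DTSTART;TZID=America/Chicago:20100105T200000
DTEND;TZID=America/Chicago:20100105T230000
RRULE:FREQ=WEEKLY;BYDAY=TU
END:VEVENT
END:VCALENDAR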

Next, you’ll want to look for existing iCalendar feeds to bookmark. Most often, these are served up by Google Calendar. Other sources include Drupal-based websites, and an assortment of other content management systems. Sadly there’s no easy way to search for these. You have to visit websites relevant to the domain you’re curating, look for the event sections on websites, and then look for iCalendar feeds as alternatives to the standard web views. These are few and far between. Teaching event sponsors how and why to produce such feeds is a central goal of the elmcity project.

When a site does offer a Google Calendar feed, it will often be presented as seen here on the Surrounded By Reality blog. The link to its calendar of events points to this Google Calendar. Its URL looks like this:

1. google.com/calendar/embed?src=surroundedbyreality@gmail.com

That’s not the address of the iCalendar feed, though. It is, instead, a variant that looks like this:

2. google.com/calendar/ical/surroundedbyreality@gmail.com/public/basic.ics

To turn URL #1 into URL #2, just transfer the email address into a URL of the form shown in #2. Alternatively, click the Google icon on the HTML version to add the calendar to the Google Calendar app, then open its settings, right-click the green ICAL button, and capture the URL of the iCalendar feed that way.
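
If you’re converting more than a feed or two, the transformation is easy to script. Here’s a minimal Python sketch; the parsing is deliberately naive, and assumes the embed URL carries the calendar’s address in its src parameter:

# Naive sketch: derive a Google Calendar iCalendar feed URL from an embed URL.
try:
    from urllib.parse import urlparse, parse_qs   # Python 3
except ImportError:
    from urlparse import urlparse, parse_qs       # Python 2

def ical_url_from_embed_url(embed_url):
    qs = parse_qs(urlparse(embed_url).query)
    cal_id = qs['src'][0]   # e.g. surroundedbyreality@gmail.com
    return 'http://www.google.com/calendar/ical/%s/public/basic.ics' % cal_id

print(ical_url_from_embed_url(
    'http://www.google.com/calendar/embed?src=surroundedbyreality@gmail.com'))
# -> http://www.google.com/calendar/ical/surroundedbyreality@gmail.com/public/basic.ics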

Note that even though a What hub will not automatically aggregate events from Eventful or Upcoming, these services can sometimes provide iCalendar feeds that you’ll want to include. For example, Upcoming lists the Cafe Montmartre as a wine bar and jazz cafe. If there were future events listed there, Bob could add the iCalendar feed for that venue to his list of MadisonJazz bookmarks.

Likewise for Eventful. One of the Google Calendars that Bob Kerwin has collected is for Restaurant Magnus. It is also an Eventful venue that provides an iCalendar feed for its upcoming schedule. If Restaurant Magnus weren’t already publishing its own feed, the Eventful feed would be an alternate source Bob could collect.

For curators of musical events, MySpace is another possible source of iCalendar feeds. For example, the band dot to dot management plays all around the midwest, but has a couple of upcoming shows in Madison. I haven’t been able to persuade anybody at MySpace to export iCalendar feeds for the zillions of musical calendars on its site. But although the elmcity service doesn’t want to be in the business of scraping web pages, it does make exceptions to that rule, and MySpace is one of them. So Bob could bookmark that band’s MySpace web page, filter the results to include only shows in Madison, and bookmark the resulting iCalendar feed.

This should all be much more obvious than it is. Anyone publishing event info online should expect that any publishing tool used for the purpose will export an iCalendar feed. Anyone looking for event info should expect to find it in an iCalendar feed. Anyone wishing to curate events should expect to find lots of feeds that can be combined in many ways for many purposes.
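
Part of why that expectation is reasonable: consuming such a feed takes only a few lines of code. Here’s a hedged sketch that fetches the Google Calendar feed shown above and lists its events, using the third-party requests and icalendar packages:

# Sketch: fetch an iCalendar feed and print its events.
# Requires the third-party 'requests' and 'icalendar' packages.
import requests
from icalendar import Calendar

url = ('http://www.google.com/calendar/ical/'
       'surroundedbyreality@gmail.com/public/basic.ics')

cal = Calendar.from_ical(requests.get(url).text)

for event in cal.walk('VEVENT'):
    start = event.decoded('DTSTART')          # datetime or date
    summary = event.get('SUMMARY')
    location = event.get('LOCATION', '')
    print('%s | %s | %s' % (start, summary, location))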

Maybe, as more apps and services support OData, and as more people become generally familiar with the idea of publishing, subscribing to, and mashing up feeds of data … maybe then the model I’m promoting here will resonate more widely. A syndicated network of calendar feeds is just a special case of something much more general: a syndicated network of data feeds. That’s a model lots of people need to know and apply.

Producing and consuming OData feeds: An end-to-end example

Having waxed theoretical about the Open Data Protocol (OData), it’s time to make things more concrete. I’ve been adding instrumentation to monitor the health and performance of my elmcity service. Now I’m using OData to feed the telemetry into Excel. It makes a nice end-to-end example, so let’s unpack it.

Data capture

The web and worker roles in my Azure service take periodic snapshots of a set of Windows performance counters, and store those to an Azure table. Although I could be using the recently-released Azure diagnostics API, I’d already come up with my own approach. I keep a list of the counters I want to measure in another Azure table, shown here in Cerebrata‘s viewer/editor:

When you query an Azure table like this one, the records come back packaged as content elements within Atom entries:

<entry m:etag="W/datetime'2010-02-09T00:00:53.7164253Z'">
<id>http://elmcity.table.core.windows.net/monitor(PartitionKey='ProcessMonitor',
  RowKey='634012704503641218')</id>
<content type="application/xml">
<m:properties>
<d:PartitionKey>ProcessMonitor</d:PartitionKey>
<d:RowKey>634012704503641218</d:RowKey>
<d:HostName>RD00155D317B3F</d:HostName>
<d:ProcName>WaWorkerHost</d:ProcName>
<d:mem_available_mbytes m:type="Edm.Double">1320</d:mem_available_mbytes>
...snip...
<d:tcp_connections_established m:type="Edm.Double">24</d:tcp_connections_established>
</m:properties>
</content>
</entry>

This isn’t immediately obvious if you use the storage client library that comes with the Azure SDK, which wraps an ADO.NET Data Services abstraction around the Azure table service. But if you peek under the covers using a tool like Eric Lawrence’s astonishingly capable Fiddler, you’ll see nothing but Atom entries. In order to get direct access to them, I don’t actually use the storage client library in the SDK, but instead use an alternate interface that exposes the underlying HTTP/REST machinery.
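
For the curious, that machinery is nothing exotic: a plain HTTP GET signed with the storage account’s key. Here’s a rough Python sketch of a Shared Key Lite request against the monitor table. The account key is a placeholder, and the signing recipe follows my reading of the storage service docs rather than the elmcity code itself:

# Rough sketch: query an Azure table over raw HTTP with Shared Key Lite auth.
import base64, hmac, hashlib, requests
from email.utils import formatdate

ACCOUNT = 'elmcity'                  # storage account name
KEY = '...base64 account key...'     # placeholder
TABLE = 'monitor'

date = formatdate(usegmt=True)       # RFC 1123 date for the x-ms-date header
string_to_sign = '%s\n/%s/%s' % (date, ACCOUNT, TABLE)
signature = base64.b64encode(
    hmac.new(base64.b64decode(KEY),
             string_to_sign.encode('utf-8'),
             hashlib.sha256).digest()).decode()

headers = {
    'x-ms-date': date,
    'x-ms-version': '2009-09-19',
    'DataServiceVersion': '1.0;NetFx',
    'Accept': 'application/atom+xml',
    'Authorization': 'SharedKeyLite %s:%s' % (ACCOUNT, signature),
}

url = ("http://%s.table.core.windows.net/%s"
       "?$filter=Timestamp%%20gt%%20datetime'2010-02-08'") % (ACCOUNT, TABLE)

print(requests.get(url, headers=headers).text[:500])   # raw Atom entries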

Exposing data services

If the Azure table service did not require special authentication, it would itself be an OData service that you could point any OData-aware client at. To fetch recent entries from my table of snapshots, for example, you could use this URL in any browser:

GET http://elmcity.table.core.windows.net/monitor?$filter=Timestamp+gt+datetime'2010-02-08'

(A table named ‘monitor’ is where the telemetry data are stored.)

The table service does require authentication, though, so in order to export data feeds I’m creating wrappers around selected queries. Until recently, I’ve always packaged the query response as a .NET List of Dictionaries. A record in an Azure table maps nicely to a Dictionary. Both are flexible bags of name/value pairs, and a Dictionary is easily consumed from both C# and IronPython.

To enable OData services I just added an alternate method that returns the raw response from an Azure table query. Then I extended the public namespace of my service, adding a /odata mapping that accepts URL parameters for the name of a table, and for the text of a query. I’m doing this in ASP.NET MVC, but there’s nothing special about the technique. If you were working in, say, Rails or Django, it would be just the same. You’d map out a piece of public namespace, and wire it to a parameterized service that returns Atom feeds.
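
Sketched in Python with Flask, purely to illustrate the shape of the thing (the real service does this in ASP.NET MVC), the mapping might look like this. The query_azure_table helper is a stand-in for the authenticated wrapper described above, which runs the table query and passes the Atom response through untouched:

# Illustrative sketch of the /odata mapping, using Flask rather than ASP.NET MVC.
from flask import Flask, Response, request

app = Flask(__name__)

@app.route('/odata')
def odata():
    table = request.args.get('table', 'monitor')
    query = request.args.get('query', '')             # e.g. "ProcName eq 'WaWebHost'"
    hours_ago = int(request.args.get('hours_ago', 24))
    atom = query_azure_table(table, query, hours_ago)  # hypothetical helper
    return Response(atom, mimetype='application/atom+xml')

def query_azure_table(table, query, hours_ago):
    # Stand-in: sign and issue the table query (see the sketch above)
    # and return the raw Atom feed unchanged.
    raise NotImplementedError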

Discovering data services

An OData-aware client can use an Atom service document to find out what feeds are available from a provider. The one I’m using looks kind of like this:

<?xml version='1.0' encoding='utf-8' standalone='yes'?>
<service xmlns:atom='http://www.w3.org/2005/Atom' 
  xmlns:app='http://www.w3.org/2007/app' xmlns='http://www.w3.org/2007/app'>
  <workspace>
    <atom:title>elmcity odata feeds</atom:title>
    <collection href='http://elmcity.cloudapp.net/odata?table=monitor&hours_ago=48'>
      <atom:title>recent monitor data (web and worker roles)</atom:title>
    </collection>
    <collection href="http://elmcity.cloudapp.net/odata?table=monitor&hours_ago=48&
      query=ProcName eq 'WaWebHost'">
      <atom:title>recent monitor data (web roles)</atom:title>
    </collection>
    <collection href="http://elmcity.cloudapp.net/odata?table=monitor&hours_ago=48&
         query=ProcName eq 'WaWorkerHost'">
      <atom:title>recent monitor data (worker roles)</atom:title>
    </collection>
    <collection href="http://elmcity.cloudapp.net/odata?table=counters">
      <atom:title>performance counters</atom:title>
    </collection>
  </workspace>
</service>

PowerPivot is an Excel add-in that knows about this stuff. Here’s a picture of PowerPivot discovering those feeds:

It’s straightforward for any application or service, written in any language, running in any environment, to enable this kind of discovery.
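
For example, a client that isn’t PowerPivot could discover the same feeds with a few lines of standard-library Python (the service document’s URL below is assumed for illustration):

# Sketch: enumerate the collections advertised by an Atom service document.
import requests
import xml.etree.ElementTree as ET

APP = '{http://www.w3.org/2007/app}'
ATOM = '{http://www.w3.org/2005/Atom}'

# URL assumed for illustration; point this at wherever the service document lives.
doc = requests.get('http://elmcity.cloudapp.net/odata').text
root = ET.fromstring(doc)

for coll in root.iter(APP + 'collection'):
    title = coll.find(ATOM + 'title').text
    print('%s -> %s' % (title, coll.get('href')))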

Using data services

In my case, PowerPivot — which is an add-in that brings some nice business intelligence capability to Excel — makes a good consumer of my data services. Here are some charts that slice my service’s request execution times in a couple of different ways:

Again, it’s straightforward for any application or service, written in any language, running in any environment, to do this kind of thing. It’s all just Atom feeds with data-describing payloads. There’s nothing special about it, which is the whole point. If things pan out as I hope, we’ll have a cornucopia of OData feeds — from our banks, from our Internet service providers, from our governments, and from every other source that currently publishes data on paper, or in less useful electronic formats like PDF and HTML. And we’ll have a variety of OData clients, on mobile devices and on our desktops and in the cloud, that enable us to work with those data feeds.

Where is the money going?

Over the weekend I was poking around in the recipient-reported data at recovery.gov. I filtered the New Hampshire spreadsheet down to items for my town, Keene, and was a bit surprised to find no descriptions in many cases. Here’s the breakdown:

# of awards:                       25
# of awards with descriptions:      5  (20%)
# of awards without descriptions:  20  (80%)
$ of awards:                       10,940,770
$ of awards with descriptions:      1,260,719  (12%)
$ of awards without descriptions:   9,680,053  (88%)

In this case, the half-dozen largest awards aren’t described:

award amount funding agency recipient description
EE00161 2,601,788 Sothwestern Community Services Inc
S394A090030 1,471,540 Keene School District
AIP #3-33-SBGP-06-2009 1,298,500 City of Keene
2W-33000209-0 1,129,608 City of Keene
2F-96102301-0 666,379 City of Keene
2F-96102301-0 655,395 City of Keene
0901NHCOS2 600,930 Sothwestern Community Services Inc
2009RKWX0608 459,850 Department of Justice KEENE, CITY OF The COPS Hiring Recovery Program (CHRP) provides funding directly to law enforcement agencies to hire and/or rehire career law enforcement officers in an effort to create and preserve jobs, and to increase their community policing capacity and crime prevention efforts.
NH36S01050109 413,394 Department of Housing and Urban Development KEENE HOUSING AUTHORITY ARRA Capital Fund Grant. Replacement of roofing, siding, and repair of exterior storage sheds on 29 public housing units at a family complex

That got me wondering: Where does the money go? So I built a little app that explores ARRA awards for any city or town: http://elmcity.cloudapp.net/arra. For most places, it seems, the ratio of awards with descriptions to awards without isn’t quite so bad. In the case of Philadelphia, for example, “only” 27% of the dollars awarded ($280 million!) are not described.

But even when the description field is filled in, how much does that tell us about what’s actually being done with the money? We can’t expect to find that information in a spreadsheet at recovery.gov. The knowledge is held collectively by the many people who are involved in the projects funded by these awards.

If we want to materialize a view of that collective knowledge, the ARRA data provides a useful starting point. Every award is identified by an award number. These are, effectively, webscale identifiers — that is, more-or-less unique tags we could use to collate newspaper articles, blog entries, tweets, or any other online chatter about awards.

To promote this idea, the app reports award numbers as search strings. In Keene, for example, the school district got an award for $1.47 million. The award number is S394A090030. If you search for that you’ll find nothing but a link back to a recovery.gov page entitled Where is the Money Going?

Recovery.gov can’t bootstrap itself out of this circular trap. But if we use the tags that it has helpfully provided, we might be able to find out a lot more about where the money is going.
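
Using those tags requires nothing more than URL construction. Here’s a small sketch that turns an award number into search URLs; the search endpoints are just illustrative examples:

# Sketch: treat an ARRA award number as a webscale identifier and build
# search URLs that collate mentions of it.
try:
    from urllib.parse import quote_plus   # Python 3
except ImportError:
    from urllib import quote_plus         # Python 2

award = 'S394A090030'   # the Keene School District award

searches = {
    'google':  'http://www.google.com/search?q=%s' % quote_plus('"%s"' % award),
    'bing':    'http://www.bing.com/search?q=%s' % quote_plus('"%s"' % award),
    'twitter': 'http://search.twitter.com/search?q=%s' % quote_plus(award),
}

for name, url in searches.items():
    print('%s: %s' % (name, url))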

A literary appreciation of the Olson/Zoneinfo/tz database

You will probably never need to know about the Olson database, also known as the Zoneinfo or tz database. And were it not for my elmcity project I never would have looked into it. I knew roughly that this bedrock database is a compendium of definitions of the world’s timezones, plus rules for daylight saving time (DST) transitions, used by many operating systems and programming languages.

I presumed that it was written Unix-style, in some kind of plain-text format, and that’s true. Here, for example, are top-level DST rules for the United States since 1918:

# Rule NAME FROM  TO    IN   ON         AT      SAVE    LETTER/S
Rule   US   1918  1919  Mar  lastSun    2:00    1:00    D
Rule   US   1918  1919  Oct  lastSun    2:00    0       S
Rule   US   1942  only  Feb  9          2:00    1:00    W # War
Rule   US   1945  only  Aug  14         23:00u  1:00    P # Peace
Rule   US   1945  only  Sep  30         2:00    0       S
Rule   US   1967  2006  Oct  lastSun    2:00    0       S
Rule   US   1967  1973  Apr  lastSun    2:00    1:00    D
Rule   US   1974  only  Jan  6          2:00    1:00    D
Rule   US   1975  only  Feb  23         2:00    1:00    D
Rule   US   1976  1986  Apr  lastSun    2:00    1:00    D
Rule   US   1987  2006  Apr  Sun>=1     2:00    1:00    D
Rule   US   2007  max   Mar  Sun>=8     2:00    1:00    D
Rule   US   2007  max   Nov  Sun>=1     2:00    0       S

What I didn’t appreciate, until I finally unzipped and untarred a copy of ftp://elsie.nci.nih.gov/pub/tzdata2009o.tar.gz, is the historical scholarship scribbled in the margins of this remarkable database, or document, or hybrid of the two.

You can see a glimpse of that scholarship in the above example. The most recent two rules define the latest (2007) change to US daylight savings. The spring forward rule says: “On the second Sunday in March, at 2AM, save one hour, and use D to change EST to EDT.” Likewise, on the fast-approaching first Sunday in November, spend one hour and go back to EST.
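
You can watch those two rules fire from Python, using pytz, the Python packaging of the Olson database discussed further below:

# Sketch: observe the post-2007 US DST rules via the Olson data embedded in pytz.
from datetime import datetime
import pytz

eastern = pytz.timezone('US/Eastern')

# Second Sunday in March 2009: spring forward at 2:00
print(eastern.localize(datetime(2009, 3, 8, 1, 59)).tzname())   # EST
print(eastern.localize(datetime(2009, 3, 8, 3, 0)).tzname())    # EDT

# First Sunday in November 2009: fall back at 2:00
print(eastern.localize(datetime(2009, 11, 1, 0, 59)).tzname())  # EDT
print(eastern.localize(datetime(2009, 11, 1, 2, 0)).tzname())   # EST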

But look at the rules for Feb 9 1942 and Aug 14 1945. The letters are W and P instead of D and S. And the comments tell us that during that period there were timezones like Eastern War Time (EWT) and Eastern Peace Time (EPT). Arthur David Olson elaborates:

From Arthur David Olson (2000-09-25):

Last night I heard part of a rebroadcast of a 1945 Arch Oboler radio drama. In the introduction, Oboler spoke of “Eastern Peace Time.” An AltaVista search turned up: "When the time is announced over the radio now, it is 'Eastern Peace Time' instead of the old familiar 'Eastern War Time.' Peace is wonderful."


Most of this Talmudic scholarship comes from founding contributor Arthur David Olson and editor Paul Eggert, both of whose Wikipedia pages, although referenced from the Zoneinfo page, strangely do not exist.

But the Olson/Eggert commentary is also interspersed with many contributions, like this one about the Mount Washington Observatory.

From Dave Cantor (2004-11-02)

Early this summer I had the occasion to visit the Mount Washington Observatory weather station atop (of course!) Mount Washington [, NH]…. One of the staff members said that the station was on Eastern Standard Time and didn’t change their clocks for Daylight Saving … so that their reports will always have times which are 5 hours behind UTC.


Since Mount Washington has a climate all its own, I guess it makes sense for it to have its own time as well.

Here’s a glimpse of Alaska’s timezone history:

From Paul Eggert (2001-05-30):

Howse writes that Alaska switched from the Julian to the Gregorian calendar, and from east-of-GMT to west-of-GMT days, when the US bought it from Russia. This was on 1867-10-18, a Friday; the previous day was 1867-10-06 Julian, also a Friday. Include only the time zone part of this transition, ignoring the switch from Julian to Gregorian, since we can’t represent the Julian calendar.

As far as we know, none of the exact locations mentioned below were permanently inhabited in 1867 by anyone using either calendar. (Yakutat was colonized by the Russians in 1799, but the settlement was destroyed in 1805 by a Yakutat-kon war party.) However, there were nearby inhabitants in some cases and for our purposes perhaps it’s best to simply use the official transition.


You have to have a sense of humor about this stuff, and Paul Eggert does:

From Paul Eggert (1999-03-31):

Shanks writes that Michigan started using standard time on 1885-09-18, but Howse writes (pp 124-125, referring to Popular Astronomy, 1901-01) that Detroit kept local time until 1900 when the City Council decreed that clocks should be put back twenty-eight minutes to Central Standard Time. Half the city obeyed, half refused. After considerable debate, the decision was rescinded and the city reverted to Sun time. A derisive offer to erect a sundial in front of the city hall was referred to the Committee on Sewers. Then, in 1905, Central time was adopted by city vote.


This story is too entertaining to be false, so go with Howse over Shanks.


The document is chock full of these sorts of you-can’t-make-this-stuff-up tales:

From Paul Eggert (2001-03-06), following a tip by Markus Kuhn:

Pam Belluck reported in the New York Times (2001-01-31) that the Indiana Legislature is considering a bill to adopt DST statewide. Her article mentioned Vevay, whose post office observes a different time zone from Danner’s Hardware across the street.


I love this one about the cranky Portuguese prime minister:

Martin Bruckmann (1996-02-29) reports via Peter Ilieve that Portugal is reverting to 0:00 by not moving its clocks this spring. The new Prime Minister was fed up with getting up in the dark in the winter.


Of course Gaza could hardly fail to exhibit weirdness:

From Ephraim Silverberg (1997-03-04, 1998-03-16, 1998-12-28, 2000-01-17 and 2000-07-25):

According to the Office of the Secretary General of the Ministry of Interior, there is NO set rule for Daylight-Savings/Standard time changes. One thing is entrenched in law, however: that there must be at least 150 days of daylight savings time annually.


The rule names for this zone are poignant too:

# Zone  NAME            GMTOFF  RULES   FORMAT  [UNTIL]
Zone    Asia/Gaza       2:17:52 -       LMT     1900 Oct
                        2:00    Zion    EET     1948 May 15
                        2:00 EgyptAsia  EE%sT   1967 Jun  5
                        2:00    Zion    I%sT    1996
                        2:00    Jordan  EE%sT   1999
                        2:00 Palestine  EE%sT

There’s also some wonderful commentary in the various software libraries that embody the Olson database. Here’s Stuart Bishop on why pytz, the Python implementation, supports almost all of the Olson timezones:

As Saudi Arabia gave up trying to cope with their timezone definition, I see no reason to complicate my code further to cope with them. (I understand the intention was to set sunset to 0:00 local time, the start of the Islamic day. In the best case caused the DST offset to change daily and worst case caused the DST offset to change each instant depending on how you interpreted the ruling.)


It’s all deliciously absurd. And according to Paul Eggert, Ben Franklin is having the last laugh:

From Paul Eggert (2001-03-06):

Daylight Saving Time was first suggested as a joke by Benjamin Franklin in his whimsical essay “An Economical Project for Diminishing the Cost of Light” published in the Journal de Paris (1784-04-26). Not everyone is happy with the results.


So is Olson/Zoneinfo/tz a database or a document? Clearly both. And its synthesis of the two modes is, I would argue, a nice example of literate programming.

Querying mobile data objects with LINQ

I’m using US census data to look up the estimated populations of the cities and towns running elmcity hubs. The dataset is just plain old CSV (comma-separated values), a format that’s more popular than ever thanks in part to a new wave of web-based data services like DabbleDB, ManyEyes, and others.

For my purposes, simple pattern matching was enough to look up the population of a city and state. But I’d been meaning to try out LINQtoCSV, the .NET equivalent of my old friend, Python’s csv module. As so often happens lately, I was struck by the convergence of the two languages. Here’s a comparison of Python and C# using their respective CSV modules to query for the population of Keene, NH (Python first, then C#):

Python:

i_name = 5
i_statename = 6
i_pop2008 = 17

handle = urllib.urlopen(url)

reader = csv.reader(
  handle, delimiter=',')

rows = itertools.ifilter(lambda x :
  x[i_name].startswith('Keene') and
  x[i_statename] == 'New Hampshire',
    reader)

found_rows = list(rows)

count = len(found_rows)

if ( count > 0 ):
  pop = int(found_rows[0][i_pop2008])

C#:

public class USCensusPopulationData
  {
  public string NAME;
  public string STATENAME;
  ... etc. ...
  public string POP_2008;
  }

var csv = new WebClient().
  DownloadString(url);

var stream = new MemoryStream(
  Encoding.UTF8.GetBytes(csv));
var sr = new StreamReader(stream);
var cc = new CsvContext();
var fd = new CsvFileDescription { };

var reader =
  cc.Read<USCensusPopulationData>(sr, fd);

var rows = reader.ToList();

var found_rows = rows.FindAll(row =>
  row.NAME.StartsWith("Keene") &&
  row.STATENAME == "New Hampshire");

var count = found_rows.Count;

if ( count > 0 )
  pop = Convert.ToInt32(
    found_rows[0].POP_2008);

Things don’t line up quite as neatly as in my earlier example, or as in the A/B comparison (from way back in 2005) between my first LINQ example and Sam Ruby’s Ruby equivalent. But the two examples share a common approach based on iterators and filters.

This idea of running queries over simple text files is something I first ran into long ago in the form of the ODBC Text driver, which provides SQL queries over comma-separated data. I’ve always loved this style of data access, and it remains incredibly handy. Yes, some data sets are huge. But the 80,000 rows of that census file add up to only 8MB. The file isn’t growing quickly, and it can tell a lot of stories. Here’s one:

2000 - 2008 population loss in NH

-8.09% Berlin city
-3.67% Coos County
-1.85% Portsmouth city
-1.85% Plaistow town
-1.78% Balance of Coos County
-1.43% Claremont city
-1.02% Lancaster town
-0.99% Rye town
-0.81% Keene city
-0.23% Nashua city

In both Python and C# you can work directly with the iterators returned by the CSV modules to accomplish this kind of query. Here’s a Python version:

import urllib, itertools, csv

i_name = 5
i_statename = 6
i_pop2000 = 9
i_pop2008 = 17

def make_reader():
  handle = open('pop.csv')
  return csv.reader(handle, delimiter=',')

def unique(rows):
  dict = {}
  for row in rows:
    key = "%s %s %s %s" % (row[i_name], row[i_statename],
      row[i_pop2000], row[i_pop2008])
    dict[key] = row
  list = []
  for key in dict:
    list.append( dict[key] )
  return list

def percent(row,a,b):
  pct = - (  float(row[a]) / float(row[b]) * 100 - 100 )
  return pct

def change(x,state,minpop=1):
  statename = x[i_statename]
  p2000 = int(x[i_pop2000])
  p2008 = int(x[i_pop2008])
  return (  statename==state and 
            p2008 > minpop   and 
            p2008 < p2000 )

state = 'New Hampshire'

reader = make_reader()
reader.next() # skip fieldnames

rows = itertools.ifilter(lambda x : 
  change(x,state,minpop=3000), reader)

l = list(rows)
l = unique(l)
l.sort(lambda x,y: cmp(percent(x,i_pop2000,i_pop2008),
  percent(y,i_pop2000,i_pop2008)))

for row in l:
  print "%2.2f%% %s" % ( 
       percent(row,i_pop2000,i_pop2008),
       row[i_name] )

A literal C# translation could do all the same things in the same ways: Convert the iterator into a list, use a dictionary to remove duplication, filter the list with a lambda function, sort the list with another lambda function.

As queries grow more complex, though, you tend to want a more declarative style. To do that in Python, you’d likely import the CSV file into a SQL database — perhaps SQLite in order to stay true to the lightweight nature of this example. Then you’d ship queries to the database in the form of SQL statements. But you’re crossing a chasm when you do that. The database’s type system isn’t the same as Python’s. And the database’s internal language for writing functions won’t be Python either. In the case of SQLite, there won’t even be an internal language.
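
To make the chasm concrete, here’s a minimal sketch of the SQLite route, assuming the same pop.csv file and the column names used by the USCensusPopulationData class below. Note that the percent calculation now has to be expressed in SQL:

# Sketch: load the census CSV into SQLite and run the same query in SQL.
import csv, sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('create table pop (NAME text, STATENAME text, '
             'POP_2000 integer, POP_2008 integer)')

with open('pop.csv') as f:
    rows = [(r['NAME'], r['STATENAME'], r['POP_2000'], r['POP_2008'])
            for r in csv.DictReader(f)]
conn.executemany('insert into pop values (?,?,?,?)', rows)

sql = """
  select distinct NAME,
         -(POP_2000 * 100.0 / POP_2008 - 100) as pct
  from pop
  where STATENAME = 'New Hampshire'
    and POP_2008 > 3000
    and POP_2008 < POP_2000
  order by pct
"""
for name, pct in conn.execute(sql):
    print('%2.2f%% %s' % (pct, name))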

With LINQ there’s no chasm to cross. Here’s the LINQ code that produces the same result:

var census_rows = make_reader();

var distinct_rows = census_rows.Distinct(new CensusRowComparer());

var threshold = 3000;

var rows = 
  from row in distinct_rows
  where row.STATENAME == statename
      && Convert.ToInt32(row.POP_2008) > threshold
      && Convert.ToInt32(row.POP_2008) < Convert.ToInt32(row.POP_2000) 
  orderby percent(row.POP_2000,row.POP_2008) 
  select new
    {
    name = row.NAME,
    pop2000 = row.POP_2000,
    pop2008 = row.POP_2008    
    };

 foreach (var row in rows)
   Console.WriteLine("{0:0.00}% {1}",
     percent(row.pop2000,row.pop2008), row.name );

You can see the supporting pieces below. There are a number of aspects to this approach that I’m enjoying. It’s useful, for example, that every row of data becomes an object whose properties are available to the editor and the debugger. But what really delights me is the way that the query context and the results context share the same environment, just as in the Python example above. In this (slightly contrived) example I’m using the percent function in both contexts.

With LINQ to CSV I’m now using four flavors of LINQ in my project. Two are built into the .NET Framework: LINQ to XML, and LINQ to native .NET objects. And two are extensions: LINQ to CSV, and LINQ to JSON. In all four cases, I’m querying some kind of mobile data object: an RSS feed, a binary .NET object retrieved from the Azure blob store, a JSON response, and now a CSV file.

Six years ago I was part of a delegation from InfoWorld that visited Microsoft for a preview of technologies in the pipeline. At a dinner I sat with Anders Hejlsberg and listened to him lay out his vision for what would become LINQ. There were two key goals. First, a single environment for query and results. Second, a common approach to many flavors of data.

I think he nailed both pretty well. And it’s timely because the cloud isn’t just an ecosystem of services, it’s also an ecosystem of mobile data objects that come in a variety of flavors.


private static float percent(string a, string b)
  {
  var y0 = float.Parse(a);
  var y1 = float.Parse(b);
  return - ( y0 / y1 * 100 - 100);
  }

private static IEnumerable<USCensusPopulationData> make_reader()
  {
  var h = new FileStream("pop.csv", FileMode.Open);
  var bytes = new byte[h.Length];
  h.Read(bytes, 0, (Int32)h.Length);
  bytes = Encoding.UTF8.GetBytes(Encoding.UTF8.GetString(bytes));
  var stream = new MemoryStream(bytes);
  var sr = new StreamReader(stream);
  var cc = new CsvContext();
  var fd = new CsvFileDescription { };

  var census_rows = cc.Read<USCensusPopulationData>(sr, fd);
  return census_rows;
  }

public class USCensusPopulationData
  {
  public string SUMLEV;
  public string state;
  public string county;
  public string PLACE;
  public string cousub;
  public string NAME;
  public string STATENAME;
  public string POPCENSUS_2000;
  public string POPBASE_2000;
  public string POP_2000;
  public string POP_2001;
  public string POP_2002;
  public string POP_2003;
  public string POP_2004;
  public string POP_2005;
  public string POP_2006;
  public string POP_2007;
  public string POP_2008;

  public override string ToString()
    {
    return
      NAME + ", " + STATENAME + " " + 
      "pop2000=" + POP_2000 + " | " +
      "pop2008=" + POP_2008;
    } 
  }

public class  CensusRowComparer : IEqualityComparer<USCensusPopulationData>
  {
  public bool Equals(USCensusPopulationData x, USCensusPopulationData y)
    {
    return x.NAME == y.NAME && x.STATENAME == y.STATENAME ;
    }

  public int GetHashCode(USCensusPopulationData obj)
    {
    var hash = obj.ToString();
    return hash.GetHashCode();
    }
  }