A conversation with the founders of Princeton’s Center for Information Technology Policy

As information technologies weave their way into every aspect of our personal, professional, and civic lives, there’s a growing need for informed public discussion of their public policy implications. Princeton’s Center for Information Technology Policy (CITP) is one emerging forum for that discussion. My guests on this week’s Innovators show are Ed Felten and David Robinson, who are respectively the director and the associate director of the Center. Ed holds a wonderfully mashed-up job title: He’s professor of computer science and public affairs at Princeton. The Center’s mission, Ed says, is to “do the intellectual import/export work” necessary to build bridges of understanding between information technologists and the rest of society. Widely known as a leading researcher in the field of computer security, he started thinking more broadly about a decade ago:

It was pretty clear that information technology was going to do more than change the way geeks like us do our jobs. It was going to be a big deal for the way society was organized, for the way markets work, for the way people relate to one another.

David adds:

When I was an undergrad at Princeton, one of my frustrations was that there wasn’t enough institutional support for studying issues that relate to digital technology but are not solely technical, and touch many other areas.

In this conversation we discuss the origin and goals of the Center, and then turn our attention to a recent CITP paper entitled Government Data and the Invisible Hand. This widely cited essay argues that governments should worry less about building full-service web portals and focus more on providing raw data in easily digestible formats that third parties can mash up as needed.

I agree in principle, but argue that governments will need to supply context for the raw data they provide. Consider XBRL (eXtensible Business Reporting Language), the standard that may become mandatory for companies filing with the Securities and Exchange Commission. An XBRL report isn’t just raw data; it’s data contextualized by a set of definitions that capture key principles and practices of accounting.

Nailing down those definitions is brutally hard work, which is why XBRL has been slow to develop. But it’s important and necessary work. The data produced internally by governments will need to be similarly contextualized if we’re going to realize the benefits of making it transparently available.
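To make the point concrete, here is a minimal sketch of what that contextualization looks like in practice. The instance document below is a simplified, hypothetical fragment in the XBRL style (the element name, namespace URI, and context identifier are illustrative, not drawn from an actual SEC filing): the tag name points into a shared taxonomy of accounting definitions, and the context and unit attributes say what period and currency the figure covers.

```python
# A simplified XBRL-style fact. The element name ties the raw number
# to a taxonomy concept; without that definition it is just "1000000".
import xml.etree.ElementTree as ET

# Hypothetical, trimmed-down instance document for illustration only.
instance = """<xbrl xmlns:gaap="http://example.org/taxonomy/gaap">
  <gaap:Revenues contextRef="FY2008" unitRef="USD" decimals="0">
    1000000
  </gaap:Revenues>
</xbrl>"""

root = ET.fromstring(instance)
ns = {"gaap": "http://example.org/taxonomy/gaap"}
fact = root.find("gaap:Revenues", ns)

# The concept name, reporting period, and unit travel with the value,
# so a third party can interpret the number without guessing.
print(fact.tag)                    # namespaced concept name
print(fact.attrib["contextRef"])   # reporting period
print(fact.text.strip())           # the raw figure itself
```

The raw figure alone is meaningless; the surrounding markup is what carries the accounting semantics, and nailing down those shared definitions is exactly the hard work the paragraph above describes.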

Of course if we were to wait for governments themselves to finalize definitions before releasing data, we’d wait forever. What’s more, the relevant principles and practices will be, in many cases, far less evident than in the realm of accounting. So I think we all agreed, in the end, that governments should release data early and often, and should emphasize reusable data products over fixed-function web portals. But at the same time, governments should engage in a public dialogue that iteratively refines the data products that are published, and the explanations of what they are intended to mean.


5 thoughts on “A conversation with the founders of Princeton’s Center for Information Technology Policy”

  1. What was the tool you talked about that allowed you to compare contributions to candidates with the votes they took on particular legislation?
