A chorus of IT recipes

My all-time favorite scene in The Matrix, if not in all of moviedom, is the one where Trinity needs to know how to fly a helicopter. “Tank, I need a pilot program for a B-212.” Her eyelids flutter while she downloads the skill.

I always used to think there was just one definitive flight instruction implant. But lately, thanks to Ward Cunningham and Mike Caulfield, I’ve started to imagine it a different way.

Here’s a thing that happened today. I needed to test a contribution from Ned Zimmerman that will improve the Hypothesis WordPress plugin. The WordPress setup I’d been using had rotted, it was time for a refresh, and the way you do that nowadays is with a tool called Docker. I’d used it for other things but not yet for WordPress. So of course I searched:

wordpress docker ubuntu

A chorus of recipes came back. I picked the first one and got stuck here with this sad report:

'module' object has no attribute 'connection'

Many have tried to solve this problem. Some have succeeded. But for my particular Linux setup it just wasn’t in the cards. Pretty quickly I pulled the plug on that approach, went back to the chorus, and tried another recipe, which worked like a charm.

The point is that there is no definitive recipe for the task. Circumstances differ. There’s a set of available recipes, some better than others for your particular situation. You want to be able to discover them, then rapidly evaluate them.

Learning by consulting a chorus is something programmers and sysadmins take for granted because a generation of open source practice has built a strong chorus. The band’s been together for a long time, and the community knows the tunes.

Can this approach help us master other disciplines? Yes, but only if the work of practitioners is widely available online for review and study. Where that requirement is met, choral explanations ought to be able to flourish.

Augmenting journalism

Silicon Valley’s enthusiasm for a universal basic income follows naturally from a techno-utopian ideology of abundance. As robots displace human workers, they’ll provide more and more of the goods and services that humans need, faster and cheaper and better than we could. We’ll just need to be paid to consume those goods and services.

This narrative reveals a profound failure of imagination. Our greatest tech visionary, Doug Engelbart, wanted to augment human workers, not obsolete them. If an automated economy can free people from drudgework and — more importantly — sustain them, I’m all for it. But I believe that many people want to contribute if they can. Some want to teach. Some want to care for the elderly. Some want to build affordable housing. Some want to explore a field of science. Some want to grow food. Some want to write news stories about local or global issues.

Before we pay people simply to consume, why wouldn’t we subsidize these jobs? People want to do them, but too few such jobs are available and they pay too poorly; expanding these workforces would benefit everyone.

The argument I’ll make here applies equally to many kinds of jobs, but I’ll focus on journalism because my friend Joshua Allen invited me to respond to a Facebook post in which he says, in part:

We thought we were creating Borges’ Library of Babel, but we were haplessly ushering in the surveillance state and burning down the journalistic defenses that might have protected us from ascendant Trump.

Joshua writes from the perspective of someone who, like me, celebrated an era of technological progress that hasn’t served society in the ways we imagined it would. But we can’t simply blame the web for the demise of journalism. We mourn the loss of an economic arrangement — news as a profit-making endeavor — that arguably never ought to have existed. At the dawn of the republic it did not.

This is a fundamental of democratic theory: that you have to have an informed citizenry if you’re going to have not even self-government, but any semblance of the rule of law and a constitutional republic, because people in power will almost always gravitate to doing things to benefit themselves that will be to the harm of the Republic, unless they’re held accountable, even if they’re democratically elected. That’s built into our constitutional system. And that’s why the framers of the Constitution were obsessed with a free press; they were obsessed with understanding if you don’t have a credible press system, the Constitution can’t work. And that’s why the Framers in the first several generations of the Republic, members of Congress and the President, put into place extraordinary press subsidies to create a press system that never would have existed had it been left to the market.

— Robert McChesney, in Why We Need to Subsidize Journalism. An Exclusive Interview with Robert W. McChesney and John Nichols

It’s true that a universal basic income would enable passionate journalists like Dave Askins and Mary Morgan to inform their communities in ways otherwise uneconomical. But we can do better than that. The best journalism won’t be produced by human reporters or robot reporters. It will be a collaboration among them.

The hottest topic in Silicon Valley, for good reason, is machine learning. Give the machines enough data, proponents say, and they’ll figure out how to outperform us on tasks that require intelligence — even, perhaps, emotional intelligence. It helps, of course, if the machines can study the people now doing those tasks. So we’ll mentor our displacers, show them the ropes, help them develop and tune their algorithms. The good news is that we will at least play a transitional role before we’re retired to enjoy our universal basic incomes. But what if we don’t want that outcome? And what if it isn’t the best outcome we could get?

Let’s change the narrative. The world needs more and better journalism. More people want to do that journalism than our current economy can sustain. The best journalism could come from people who are augmented by machine intelligence. Before we pay people to consume it, let’s pay some of them to partner with machines in order to produce quality journalism at scale.

I get to be a blogger

To orient myself to Santa Rosa when we arrived two years ago I attended a couple of city council meetings. At one of them I heard a man introduce himself in a way that got my attention. “I’m Matt Martin,” he said, “and I get to be the executive director of Social Advocates for Youth.” I interpreted that as: “It is my privilege to be the director of SAY.” Last week at a different local event I heard the same thing from another SAY employee. “I’m Ken Quinto and I get to be associate director of development for SAY.” I asked Ken if I was interpreting that figure of speech correctly and he said I was.

Well, I get to be director of partnership and integration for Hypothes.is and also a blogger. Former privileges include: evangelist for Microsoft, pioneering blogger for InfoWorld, freelance web developer and consultant, podcaster for ITConversations, columnist for various tech publications, writer and editor and web developer for BYTE. In all these roles I’ve gotten to explore technological landscapes, tackle interesting problems, connect with people who want to solve them, and write about what I learn.

Once, and for a long time, the writing was my primary work product. When blogging took off in the early 2000s I became fascinated with Dave Winer’s notion that narrating your work — a practice more recently called observable work and working out loud — made sense for everyone, not just writers who got paid to write. I advocated strongly for that practice. But my advice came from a place of privilege. Unlike most people, I was getting paid to write.

I still get to tackle interesting problems and connect with people who want to solve them. But times have changed. For me (and many others) that writing won’t bring the attention or the money that it once did. It’s been hard — really hard — to let go of that. But I’m still the writer I always was. And the practice of work narration that I once advocated from a position of privilege still matters now that I’ve lost that privilege.

The way forward, I think, is to practice what I long preached. I can narrate a piece of work, summarize what I’ve learned, and invite fellow travelers to validate or revise my conclusions. The topics will often be narrow and will appeal to small audiences. Writing about assistive technology, for example, won’t make pageview counters spin. But it doesn’t have to. It only needs to reach the people who care about the topic, connect me to them, and help us advance the work.

Doing that kind of writing isn’t my day job anymore, and maybe never will be again. But I get to do it if I want to. That is a privilege available to nearly everyone.

Towards accessible annotation: a prototype and some questions

The most basic operation in Hypothes.is — select text on a page, click the Annotate button — is not yet accessible to a visually-impaired person who is using a screenreader. I’ve done a bit of research and come up with an approach that looks like it could work, but also raises many questions. In the spirit of keystroke conservation I want to record here what I think I know, and try to find out what I don’t.

Here’s a screencast of an initial prototype that shows, with the NVDA screen reader active on my system, the following sequence of events:

  • Load the Gettysburg address.
  • Use a key to move a selection from paragraph to paragraph.
  • Hear the selected paragraph.
  • Tab to the Annotate button and hit Enter to annotate the selected paragraph.

It’s a start. Now for some questions:

1. Is this a proper use of the aria-live attribute?

The screen reader can do all sorts of fancy navigation, like skipping to the next word, sentence, or paragraph. But its notion of a selection exists within a copy of the document and (so far as I can tell) is not connected to the browser’s copy. So the prototype uses a mechanism called ARIA Live Regions.

When you use the hotkey to advance to a paragraph and select it, a JavaScript method sets the aria-live attribute on that paragraph. That alone isn’t enough to make the screen reader announce the paragraph; it just tells the screen reader to watch the element and read it aloud if it changes. To effect a change, the JS method prepends selected: to the paragraph. Then the screen reader speaks it.
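To make that concrete, here is a minimal sketch of the mechanism just described. It is not the prototype’s actual code, and the choice of the 'polite' setting is my assumption:

    // Announce a paragraph to the screen reader, then select it in the browser
    // so the Annotate button has something to act on.
    function announceAndSelect(p) {
      // Mark the paragraph as a live region; the screen reader now watches it.
      p.setAttribute('aria-live', 'polite');
      // Setting the attribute alone announces nothing. The text has to change,
      // so prepend "selected:" as described above; that triggers the announcement.
      p.textContent = 'selected: ' + p.textContent;
      // Now make the browser's selection match, for the Annotate button.
      const range = document.createRange();
      range.selectNodeContents(p);
      const sel = window.getSelection();
      sel.removeAllRanges();
      sel.addRange(range);
    }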

2. Can JavaScript in the browser relate the screenreader’s virtual buffer to the browser’s Document Object Model?

I suspect the answer is no, but I’d love to be proven wrong. If JS in the browser can know what the screenreader knows, the accessibility story would be much better.

3. Is this a proper use of role="link"?

The first iteration of this prototype used a document that mixed paragraphs and lists. Both were selected by the hotkey, but only the list items were read aloud by the screen reader. Then I realized that’s because list items are among the set of things — links, buttons, input boxes, checkboxes, menus — that are primary navigational elements from the screen reader’s perspective. So the version shown in the screencast adds role="link" to the visited-and-selected paragraph. That smells wrong, but what’s right?

4. Is there a polyfill for Selection.modify()?

Navigating by element — paragraph, list item, etc. — is a start. But you want to be able to select the next (or previous) word or sentence or paragraph or table cell. And you want to be able to extend a selection to include the next word or sentence or paragraph or table cell.

A non-standard technology, Selection.modify(), is headed in that direction, and works today in Firefox and Chrome. But it’s not on a standards track. So is there a library that provides that capability in a cross-browser fashion?
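For illustration, here is roughly what that non-standard API looks like in use; a minimal sketch, assuming a current Chrome or Firefox that implements it:

    const sel = window.getSelection();

    if (typeof sel.modify === 'function') {
      // Move the insertion point forward by one sentence...
      sel.modify('move', 'forward', 'sentence');
      // ...then extend the selection to take in the next word.
      sel.modify('extend', 'forward', 'word');
    } else {
      // Not implemented here: this is where a cross-browser library would step in.
      console.warn('Selection.modify() is unavailable in this browser');
    }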

It’s a hard problem. A selection within a paragraph that appears to grab a string of characters is, under the covers, quite likely to cross what are called node boundaries. Here, from an answer on StackOverflow, is a picture of what’s going on:

When a selection includes a superscript 3 as shown here, it’s obvious to you what the text of the selection should be: 123456790. But that sequence of characters isn’t readily available to a JavaScript program looking at the page. It has to traverse a sequence of nodes in the browser’s Document Object Model in order to extract a linear stream of text.
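Here is a minimal sketch of that traversal, my own illustration rather than anyone’s production code. (Range.toString() will return the same linear text in one call; the explicit walk just shows what is involved under the covers.)

    // Flatten the current selection into a single string, crossing node boundaries.
    function selectionText() {
      const sel = window.getSelection();
      if (sel.rangeCount === 0) return '';
      const range = sel.getRangeAt(0);

      // Walk the text nodes under the range's common ancestor.
      const root = range.commonAncestorContainer;
      const walker = document.createTreeWalker(
        root.nodeType === Node.TEXT_NODE ? root.parentNode : root,
        NodeFilter.SHOW_TEXT);

      let text = '';
      for (let node = walker.nextNode(); node; node = walker.nextNode()) {
        if (!range.intersectsNode(node)) continue;
        let chunk = node.textContent;
        // Trim the partial text nodes at either end of the range.
        if (node === range.endContainer) chunk = chunk.slice(0, range.endOffset);
        if (node === range.startContainer) chunk = chunk.slice(range.startOffset);
        text += chunk;
      }
      return text; // the linear stream of characters, markup boundaries flattened away
    }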

It’s doable, and in fact Hypothes.is does just that when you make a selection-based annotation. That gets harder, though, when you want to move or extend that selection by words and paragraphs. So is there a polyfill for Selection.modify()? The closest I’ve found is rangy; are there others?

5. What about key bindings?

The screen reader reserves lots of keystrokes for its own use. If it’s not going to be possible to access its internal representation of the document, how will there be enough keys left over for rich navigation and selection in the browser?

What I Learned While Building an App for the Canvas Learning Management System

Life takes strange turns. I’m connected to the ed-tech world by way of Gardner Campbell, Jim Groom, and Mike Caulfield. They are fierce critics of the academy’s embrace of the Learning Management System (LMS) and are among the leaders of an indie-web movement that arose in opposition to it. So it was odd to find myself working on an app that would enable my company’s product, the Hypothes.is web/PDF annotator, to plug into what’s become the leading LMS, Instructure’s Canvas.

I’m not an educator, and I haven’t been a student since long before the advent of the LMS, so my only knowledge of it was second-hand. Now I can report a first-hand experience, albeit that of a developer building an LMS app, not that of a student or a teacher.

What I learned surprised me in a couple of ways. I’ve found Canvas to be less draconian than I’d been led to expect. More broadly, the LMS ecosystem that’s emerged — based on a standard called Learning Tools Interoperability (LTI), now supported by all the major LMSs — led me to an insight about how the same approach could help unify the emerging ecosystem of annotation systems. Even more broadly, all this has prompted me to reflect on how the modern web platform is both more standardized and more balkanized than ever before.

But first things first. Our Canvas app began with this request from teachers: “How can we enable students to use Hypothes.is to annotate the PDF files we upload to our courses?” There wasn’t any obvious way to integrate our tool into the native Canvas PDF viewer. That left two options. We could perhaps create a plugin, internal to Canvas, based on Hypothes.is and the JavaScript component (Mozilla’s PDF.js) we and others use to convert PDF files into web pages. Or we could create an LTI app that delivers that combo as a service running — like all LTI apps — outside Canvas. We soon found that the first option doesn’t really exist. Canvas is an open source product, but the vast majority of schools use Instructure’s hosted service. Canvas has a plugin mechanism but there seems to be no practical way to use it. I don’t know about other LMSs (yet) but if you want to integrate with Canvas, you’re going to build an app that’s launched from Canvas, runs in a Canvas page, and communicates with Canvas using the standard LTI protocol and (optionally) the Canvas API.

Working out how to do that was a challenge. But with lots of help from ed-tech friends and associates as well as from Instructure, we came up with a nice solution. A teacher who wants to base an assignment on group annotation of a PDF file or a web page adds our LTI app to a course. The app displays a list of the PDFs in the Files area of the course. The teacher selects one of those, or provides the URL of a web page to annotate, then completes the assignment in the usual way by adding a description, setting a date, and defining the grading method if participation will be graded. When the student clicks the assignment link, the PDF or web page shows up in a Canvas page with the Hypothes.is annotator active. The student logs into Hypothes.is, switches to a Hypothes.is private group (if the teacher created one for the course), engages with the document and with other students in the annotation layer, and at some point submits the assignment. What the teacher sees then, in a Canvas tool called Speed Grader, on a per-student basis, is an export of document-linked conversation threads involving that student.

The documents that host those conversations can live anywhere on the web. And the conversations are wide open too. Does the teacher engage with students? Do students engage with one another? Does conversation address predefined questions or happen organically? Do tag conventions govern how annotations cluster within or across documents? Nothing in Hypothes.is dictates any such policies, and nothing in Canvas does either.

Maybe the LMS distorts or impedes learning, I don’t know, I’m not an educator. What I can say is that, from my perspective, Canvas just looks like a content management system that brings groups and documents together in a particular context called a course. That context can be enhanced by external tools, like ours, that enable interaction not only among those groups and documents but also globally. A course might formally enroll a small group of students, but as independent Hypothes.is users they can also interact DS106-style with Hypothes.is users and groups anywhere. The teacher can focus on conversations that involve enrolled students, or zoom out to consider a wider scope. To me, at least, this doesn’t feel like a walled garden. And I credit LTI for that.

The app I’ve written is a thin layer of glue between two components: Canvas and Hypothes.is. LTI defines how they interact, and I’d be lying if I said it was easy to figure out how to get our app to launch inside Canvas and respond back to it. But I didn’t need to be an HTTP, HTML, CSS, JavaScript, or Python wizard to get the job done. And that’s fortunate because I’m not one. I just know enough about these technologies to be able to build basic web apps, much like the ones I was able to build 20 years ago when the web first became a software platform. The magic for me was always about what simple web apps can do when connected to the networked flow of information among people and computers. My Canvas experience reminded me that we can still tap into that magic.

Why did I need to be reminded? Because while the web’s foundation is stronger than ever, the layers being built on it — so-called frameworks, with names like Angular and Ember (in the browser), Rails and Pyramid (on the server) — are the province of experts. These frameworks help with common tasks — identifying users, managing interaction with them, storing their data — so developers can focus on what’s special about their own apps. That’s a good and necessary thing when the software is complex, and when it’s written by people who build complex software for a living.

But lots of useful software isn’t that complex, and isn’t written by people who do that for a living. Before the web came along, plenty got built on Lotus 1-2-3, Excel, dBase, and FoxPro, much of it by information workers who weren’t primarily doing that for a living. The early web had that same feel but with an astonishing twist: global connectivity. With only modest programming skill I could, and did, build software that participated in a networked flow of information among people and computers. That was possible for two reasons. First, with HTML and JavaScript (no CSS yet) I could deliver a basic user interface to anyone, anywhere, on any kind of computer. Second, with HTTP I could connect that user interface to components and databases all around the web. Those components and databases were called web sites by the people who viewed them through the lens of the browser. But for me they were also software services. Through the lens of a network-savvy programming language (it was Perl, at the time) the web looked like a library of software modules, and URLs looked like the API (application programming interface) to that library.

If I had to write a Canvas plugin I’d have needed to learn a fair bit about its framework, called Rails, and about Ruby, the language in which that framework is written. And that hard-won knowledge would not have transferred to another LMS built on a different framework and written in a different language. Happily LTI spared me from that fate. I didn’t need to learn that stuff. When our app moves to another LMS it’ll need to know how to pull PDF files out of that other system. And that other system might not yet support all the LTI machinery required for two-way communication. But assuming it does, the app will do exactly what it does now — launch in response to an “API call” (aka URL), deliver a “component” (an annotation-enabled document) — in exactly the same way.

Importantly I wasn’t just spared a deep dive into Rails, the server framework that powers Canvas. I was also spared a deep dive into Angular, the JavaScript framework that powers the Hypothes.is client. That’s because our browser-based app can work as a pluggable component. It’s easy to embed Hypothesis in web pages and not much harder to do the same for PDFs displayed in the browser. All I had to do was the plumbing. I wish that had been easier than it was. But it was doable with modest and general skills. That makes the job accessible to people without elite and specific skills. How many more such people are there? Ten times as many? A hundred times? The force multiplier, whatever it may be, increases the likelihood that useful combinations of software components will find their way into learning environments.

All this brings me back to Hypothes.is, and to the annotation ecosystem that we envision, promote, and expect to participate in. The W3C Web Annotation Working Group is defining standard ways to represent and exchange annotations, so that different kinds of annotation clients and servers can work together as do different kinds of email clients and email servers, or browsers and web servers. Because Hypothes.is implements early variants of those soon-to-be-formalized annotation standards, I’ve been able to do lots of useful integration work. Much of it entails querying our service for annotation data and then filtering, transforming, or cross-linking it. That requires only basic web data wrangling. Some of the work entails injection of that data into web pages. That requires only basic web app development. But until recently I didn’t see a way to democratize the ability to extend the Hypothes.is client.
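As an illustration of that kind of data wrangling, here is a minimal sketch that queries the public Hypothes.is search API for the annotations on a document and groups them by tag. The /api/search endpoint and its uri and limit parameters are real; the grouping is just an example:

    // Fetch annotations on a document and index them by tag.
    async function annotationsByTag(docUrl) {
      const resp = await fetch(
        'https://hypothes.is/api/search?limit=200&uri=' + encodeURIComponent(docUrl));
      const data = await resp.json();

      // data.rows is the list of matching annotations; each may carry tags.
      const byTag = {};
      for (const row of data.rows) {
        for (const tag of row.tags || []) {
          (byTag[tag] = byTag[tag] || []).push(row);
        }
      }
      return byTag;
    }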

Here’s an example of the kind of thing I want to be able to do and, more importantly, that I want others to be able to do. Like other social systems we offer tags as a principal way to organize data sets. In Hypothes.is you can use tags to keep track of documents as well as annotations linked to those documents. The tags are freeform. We remember and prompt with the tags you’ve used recently, but there are no rules; you can make up whatever tags you want. That’s great for casual use. If you need a bit more rigor, it’s possible to agree with your collaborators on a restricted set of tags that define key facets of the data you jointly create. But pretty soon you find yourself wishing for more control. You want to define specific lists of terms available in specific contexts for specific purposes.

Hypothes.is uses the Angular framework, as I’ve said. It also relies on a set of components that work only in that framework. One of those, called ngTagsInput, is the tag editor used in Hypothes.is. The good news is that it handles basic tagging quite well, and our developers didn’t need to build that capability, they just plugged it in. The bad news is that in order to do any meaningful work with ngTagsInput, you’d need to learn a lot about it, about how it works within the Angular framework, and about Angular itself. That hard-won knowledge won’t transfer to another JavaScript framework, nor will what you build using that knowledge transfer to another web client built on another framework. A component built in Angular won’t work in Ember just as a component built for Windows won’t work on the Mac.

With any web-based technology there’s always a way to get your foot in the door. In this case, I found a way to hook into ngTagsInput at the point where it asks for a list of terms to fill its picklist. In the Hypothes.is client, that list is kept locally in your browser and contains the tags you’ve used recently. It only required minor surgery to redirect ngTagsInput to a web-based list. That delivered two benefits. The list was controlled, so there was no way to create an invalid tag. And it was shared, so you could synchronize a group on the same list of controlled tags.
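For the curious, here is a minimal sketch of that kind of redirection, assuming the standard ngTagsInput auto-complete hook (an auto-complete element nested inside tags-input, whose source attribute names a function returning a promise). The vocabulary URL is hypothetical, and the real prototype differed in its details:

    // Feed ngTagsInput's picklist from a shared, web-based vocabulary
    // instead of the local list of recently used tags.
    var app = angular.module('sketch', ['ngTagsInput']);

    app.controller('ControlledTagsCtrl', function ($scope, $http) {
      $scope.tags = [];
      // Wired to the markup as: <auto-complete source="loadTags($query)">
      $scope.loadTags = function (query) {
        return $http.get('https://example.org/controlled-tags.json')
          .then(function (res) {
            // Offer only the controlled terms that match what's been typed.
            return res.data.filter(function (tag) {
              return tag.text.toLowerCase().indexOf(query.toLowerCase()) === 0;
            });
          });
      };
    });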

A prototype based on that idea has helped some Hypothes.is users manage annotations with shared tag namespaces. But others require deeper customization. Scientific users, in particular, spend increasing time and effort annotating documents, extracting structured information from them, and classifying both the documents and the annotations. For one of them, it wasn’t enough to connect ngTagsInput to a web-based list of terms. People need to see context wrapped around those terms in order to know which ones to pick. That context was available on the server, but there was no way to present it in ngTagsInput. Cracking that component open and working out how to extend it to meet this requirement is a job for an expert. You’d need a different expert to do the same thing for ngTagsInput’s counterpart in a different JavaScript framework. That doesn’t bode well if you want to end up with an annotation ecosystem made of standard parts.

So, channeling Douglas Hofstadter, I wondered: “What’s the LTI of annotation?” The answer I came up with, in another prototype, was a way to embed a simple web application in the body of an annotation. Just as my LTI app is launched in the context of a Canvas course, with knowledge of the students and resources in that course as well as API access to both Canvas and to the global network of people and information, so with this little web app. It’s launched in the context of an annotation, with knowledge of the properties of that annotation (document URL, quote, comment, replies, tags) and with API access to both Hypothes.is and to the same global network of people and information. Just as my LTI app requires only basic web development knowledge and ability, so with this annotation app. You don’t need to be an expert to create something useful in this environment. And the thing you do could transfer to another standards-based annotation environment.
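As a concrete, hypothetical sketch of such an app: the launch convention (an annotation id passed in the query string) is my invention for illustration, while the /api/annotations/:id endpoint it calls is part of the public Hypothes.is API:

    // Hypothetical launch convention: the host embeds this app with
    // ?annotation_id=... in its URL, the way an LTI app is launched in context.
    var id = new URLSearchParams(window.location.search).get('annotation_id');

    fetch('https://hypothes.is/api/annotations/' + id)
      .then(function (resp) { return resp.json(); })
      .then(function (anno) {
        // The app now has the context described above: the annotated document's
        // URL, the annotation's text, and its tags, and can call out from there.
        document.body.textContent =
          anno.uri + ' is tagged: ' + (anno.tags || []).join(', ');
      });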

There’s nothing new here. We’ve had all these capabilities for 20 years. Trends in modern web software development pile on layers of abstraction, push us toward specialization, and make it harder to see the engine under the hood that runs everything. But if you lift the hood you’ll see that the engine is still there, humming along more smoothly than ever. One popular JavaScript framework, called jQuery, was once widely used mainly to paper over browsers’ incompatible implementations of HTML, JavaScript, CSS, and an underlying technology called the Document Object Model. jQuery is falling into disuse because modern browsers have converged remarkably well on those web standards. Will Angular and Ember and the rest likewise converge on a common system of components? A common framework, even? I hope so; opinions differ; if it does happen it won’t be soon.

Meanwhile, web client apps, in fierce competition with one another and with native mobile apps, will continue to require elite developers who commit to non-portable frameworks. Fair enough. But that doesn’t mean we have to lock out the much larger population of workaday developers who command basic web development skills and can use them to create useful software that works everywhere. We once called Perl the duct tape of the Internet. With a little knowledge of it, you could do a lot. It’s easy to regard that as an era of lost innocence. But a little knowledge of our current flavors of duct tape can still enable many of us to do a lot, if systems are built to allow and encourage that. The LTI ecosystem does. Will the annotation ecosystem follow suit?

Copyright can’t stop annotation of government documents

I’ll admit that the Medium Legal team’s post AB 2880 — Kill (this) Bill had me at hello:

Fellow Californians, please join us in opposing AB 2880, which would allow and encourage California to extend copyright protection to works made by the state government. We think it’s a bad idea that would wind up limiting Californians’ ability to post and read government information on platforms like Medium.

That sure does sound like a bad idea, and hey, I’m a Californian now too. But when I try to read the actual bill I find it hard to relate its text to Medium Legal’s interpretations, or to some others.

I doubt I’m alone in struggling to connect these interpretations to their evolving source text. Medium Legal says, for example:

AB 2880 requires the state’s Department of General Services to track the copyright status of works created by the state government’s 228,000 employees, and requires every state agency to include intellectual property clauses in every single one of their contracts unless they ask the Department in advance for permission not to do so.

What’s the basis for this interpretation? How do Medium Legal think the text of the bill itself supports it? I find four mentions of the Department of General Services in the bill: (1), (2), (3), (4). To which of these do Medium Legal refer? Do they also rely on the Assembly Third Reading? How? I wish Medium Legal had, while preparing their post, annotated those sources.

The Assembly Third Reading, meanwhile, concludes:

Summary of the bill: In summary, this bill does all of the following:

1) clarifies existing law that state agencies may own, license, and register intellectual property to the extent not inconsistent with the rights of the public to obtain, inspect, copy, publish and otherwise communicate under the California Public Records Act, the California Constitution as provided, and under the First Amendment to the United States Constitution;

2) …

7) …

Analysis Prepared by: Eric Dang / JUD. / (NNN) NNN-NNNN

The same questions apply. How does Eric Dang think the source text supports his interpretation? How do his seven points connect to the bill under analysis? Again, an annotation layer would help us anchor the analysis to its sources.

Medium Legal and Eric Dang used digital tools to make notes supporting their essays. Such notes are, by default, not hyperlinked to specific source passages and not available to us as interpretive lenses. Modern web annotation flips that default. Documents remain canonical; notes anchor precisely to words and sentences; the annotation layer is a shareable overlay. There’s no copying, so no basis for the chilling effect that critics of AB 2880 foresee. While the bill might limit Californians’ ability to post and read government information on platforms like Medium, it won’t matter one way or the other to Californians who do such things on platforms like Hypothesis.

Watching animals

On a visit to the Berlin Zoo last month, I watched primates interact with a cleverly-designed game called a poke box. It’s a plexiglas enclosure with shelves. Rows of holes in the enclosure give access to the shelves. Each shelf has a single hole, offset from the holes above and below. The machine drips pellets of food onto the top shelf. The primates, using sticks they’ve stripped and prepared, reach through the holes and tease the pellets through the holes in the shelves, performing one delicate maneuver after another, finally reaching through a slot in the bottom of the enclosure to claim the prize.

How would I perform in a game like that? My hunch: not much better or worse than a group of bonobos I spent a long time watching.

“This so-called behavioural enrichment,” the Zoo says, “is an important area in animal management.” The poke box is undoubtedly enriching the lives of those primates. But they are still prisoners.

My dad was a volunteer at the Philadelphia Zoo. I saw, growing up, how that zoo tried to create more naturalistic settings for its animals. The most successful I can recall was the hummingbird house. Its tiny inhabitants roamed freely in a large open space that looked and felt like a tropical rain forest.

I don’t know if those birds really lived something like a normal life, but it seems at least plausible. I can’t say the same for most of the animals I’ve seen in zoos. Conditions may have improved over the years, but I’m not sure I’ll ever again pay to watch the prisoners perform “enriched” behaviors in “naturalistic” settings.

If my children were still young, that’d be a hard call to make. A visit to the zoo is one of the great experiences of childhood. If there weren’t zoos, how could we replace that experience?

Maybe we don’t need to jail animals. Maybe we just need to improve our ability to observe them in situ. There’s still plenty of open space that is, or could be, conserved. And we’re getting really good at surveillance! Let’s put our new skill to a different use. Fly cameras over areas of wildlife parks inaccessible to tourist vehicles. Enable online visitors to adopt and follow individual animals and their groups. Make it possible for anyone to have the sort of experience Jane Goodall did. The drone cameras sound creepy, but unlike the vehicles that carry tourists through those parks, the drones will keep getting smaller and more unobtrusive.

Could it happen? I’m not holding my breath. So for now I’ll focus on the wildlife I can see right here in Sonoma County. This spring we discovered the Ninth Street Rookery, less than a mile from our house. Each of the two large eucalyptus trees there is home to about a hundred nests in which black-crowned night herons and various species of egret — great, cattle, snowy — raise a cacophony as they raise their young. We visited a couple of times; it’s wonderful. Although, come to think of it, if it were a channel on cams.allaboutbirds.org I could watch and learn a lot more about the lives of these animals. And so could you.