My recent post about redirecting a page of broken links weaves together two different ideas. First, that the titles of the articles on that page of broken links can be used as search terms in alternate links that lead people to those articles’ new locations. Second, that non-programmers can create macros to transform the original links into alternate search-driven links.
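To make that first idea concrete, here’s a minimal Python sketch of the transformation. It assumes links of the simple form <a href="…">Title</a> and uses Google as the target engine; both are illustrative assumptions, and the actual work described in the post was done by recording and replaying a text-editor macro.

```python
import html
import re
import urllib.parse

def to_search_link(anchor_html):
    """Rewrite <a href="dead-url">Title</a> as a link that searches for
    the title, so readers can find the article's new location.
    Note: this discards the original URL, which (as discussed below)
    turned out to be a bad idea."""
    m = re.match(r'<a href="([^"]+)">([^<]+)</a>', anchor_html)
    if not m:
        return anchor_html  # leave anything unrecognized alone
    url, title = m.groups()
    query = urllib.parse.quote_plus(html.unescape(title))
    return f'<a href="https://www.google.com/search?q={query}">{title}</a>'

# Hypothetical example, not one of the actual broken links.
print(to_search_link('<a href="http://example.com/gone">Tangled in the Threads</a>'))
```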
There was lots of useful feedback on the first idea. As Herbert Van de Sompel and Michael Nelson pointed out, it was a really bad idea to discard the original URLs, which retain value as lookup keys into one or more web archives. Alan Levine showed how to do that with the Wayback Machine. That method, however, leads the user to sets of snapshots that don’t consistently mirror the original article, because (I think) Wayback’s captures happened both before and after the breakage.
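For anyone who wants to try the archive-lookup approach programmatically, here’s a hedged sketch against the Internet Archive’s Wayback availability API. The endpoint is real; the example URL is hypothetical. Passing a timestamp biases the lookup toward captures made before the breakage, which may help with the inconsistency just described.

```python
import json
import urllib.parse
import urllib.request

def wayback_snapshot(url, timestamp=None):
    """Ask the Wayback Machine for the capture of `url` closest to
    `timestamp` (YYYYMMDD). Returns a snapshot URL, or None if the
    page was never archived."""
    params = {"url": url}
    if timestamp:
        # e.g. "20041231" to prefer captures made before the links broke
        params["timestamp"] = timestamp
    api = "https://archive.org/wayback/available?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(api) as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

# Hypothetical original URL, used only to illustrate the call.
print(wayback_snapshot("http://example.com/column/42.html", "20041231"))
```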
So for now I’ve restored the original page of broken links, alongside the new page of transformed links. I’m grateful for the ensuing discussion about ways to annotate those transformed links so they’re aware of the originals, and so they can tap into evolving services — like Memento — that will make good use of the originals.
The second idea, about tools and automation, drew interesting commentary as well. dyerjohn pointed to NimbleText, though we agreed it’s more suited to tabular than to textual data. Owen Stephens reminded me that the tool I first knew as Freebase Gridworks, then Google Refine, is going strong as OpenRefine. And while it too is more tabular than textual, “the data is line based,” he says, “and sort of tabular if you squint at it hard.” In Using OpenRefine to manipulate HTML he presents a fully-worked example of how to use OpenRefine to do the transformation I made by recording and playing back a macro in my text editor.
Meanwhile, on Twitter, Paul Walk and Les Carr and I were rehashing the old permathread about coding for non-coders.
The point about MS Word styles is spot on. That mechanism asks people to think abstractly, in terms of classes and instances. It’s never caught on. So it goes, Les suggests, with my text-transformation puzzle. Even with tools that enable non-coders to solve it, getting people across the cognitive threshold is Too Hard.
While mulling this over, I happened to watch Jeremy Howard’s TED talk on machine learning. He demonstrates a wonderful partnership between human and machine. The task is to categorize a large set of images. The computer suggests groupings, the human corrects and refines those groupings, the process iterates.
We’ve yet to inject that technology into our everyday productivity tools, but we will. And then, maybe, we will finally start to bridge the gap between coders and non-coders. The computer will watch the styles I create as I write, infer classes, offer to instantiate them for me, and we will iterate that process. Similarly, when I’m doing a repetitive transformation, it will notice what’s happening, infer the algorithm, and offer to implement it for me; we’ll run it experimentally on a sample, then iterate.
Maybe in the end what people will most need to learn is not how to design stylistic classes and instances, or how to write code that automates repetitive tasks, but rather how to partner effectively with machines that work with us to make those things happen. Things that are Too Hard for most living humans and all current machines to do on their own.
It’s that problem that keeps us needed, but my hand is raised to be made redundant. The discussion here was great. I played some with the Memento extension; it works once using it becomes habitual.
I wonder, too, when we point to IFTTT as something that helps people learn logic. It might, but I suspect a lot of people take it as a magic formula. The trick is whether they can take that experience/awareness and apply it elsewhere. I’m not sure how you’d figure that out.
Reblogged this on Carpet Bomberz Inc. and commented:
Agreed. I think insofar as a computer AI can watch and see what we’re doing, then step in and prompt us with some questions, THAT will be the killer app. It won’t be Clippy the assistant from MS Word, but a friendly prompt saying, “I just watched you do something 3 times in a row, would you like some help doing a bunch of them without having to go through the steps yourself?” Then you’ve got an offer of assistance that’s timely and non-threatening. You don’t have to turn on a macro recorder to capture the steps; it has already recognized you are doing a repetitive task it can automate. And as Jon points out, it’s just a matter of successive approximations until you get the slam-dunk series of steps that does the heavy lifting. Then the human can address the exceptions list: the 20-50 examples that didn’t work quite right, or that the AI felt diverged from the pattern. That exception list is what the human should really be working on, not the 1,000 self-similar items that can be handled with the assistance of an AI.
Brings to mind something Richard P. Gabriel said in Patterns of Software[1]:
“What are some of the things that contribute to uninhabitable programs? Overuse of abstraction and inappropriate compression come to mind.”
Classes, inheritance, and styles are all compression tools. They DRY up our artifacts, replacing repetition with reference. Gabriel, writing in JOOP over 20 years ago, was post-DRY: he correctly identified the tension between compression (which serves global comprehensibility) on the one hand and literality (which serves local comprehensibility) on the other.
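A toy illustration of that tension (mine, not Gabriel’s), in Python:

```python
# Literal version: each report is fully spelled out. Any one of them
# can be read in isolation (local comprehensibility), at the cost of
# repetition.
print("Sales report")
print("=" * 40)
print("Q1 total: 120")

print("Support report")
print("=" * 40)
print("Q1 total: 45")

# Compressed version: repetition is replaced with reference (DRY).
# The artifact is smaller and globally consistent, but understanding
# any single report now requires a jump to the definition.
def report(title, total):
    print(title)
    print("=" * 40)
    print(f"Q1 total: {total}")

report("Sales report", 120)
report("Support report", 45)
```

The compressed version wins globally and loses locally: to know what any one call prints, you must leave the call site, which is exactly the loss of locality Gabriel describes.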
[1] http://www.dreamsongs.com/Files/PatternsOfSoftware.pdf
Thanks Bill! I hadn’t seen (or didn’t remember) that use of the word compression. It is very apt. Elsewhere in that book:
“Locality is that characteristic of source code that enables a programmer to understand that source by looking at only a small portion of it. Compressed code doesn’t have this property, unless you are using a very fancy programming environment.”
Our ability to navigate compressed code bases is, in other words, made possible by a human/machine partnership. And our ability to create that compression is beginning to work the same way (e.g. refactoring IDEs).
I guess there are two paths forward, and hopefully they are not mutually exclusive. Along one path we rely on machines to evolve code that we will never be able to understand or inhabit. Along the other, we work shoulder-to-shoulder with them.