People who don’t listen to podcasts often ask people who do: “When do you find time to listen?” For me it’s always on long walks or hikes. (I do a lot of cycling too, and have thought about listening then, but wind makes that impractical and cars make it dangerous.) For many years my trusty podcast player was one or another version of the Creative Labs MuVo which, as the ad says, is “ideal for dynamic environments.”
At some point I opted for the convenience of just using my phone. Why carry an extra, single-purpose device when the multi-purpose phone can do everything? That was OK until my Quixotic attachment to Windows Phone became untenable. Not crazy about either of the alternatives, I flipped a coin and wound up with an iPhone. Which, of course, lacks a 3.5mm audio jack. So I got an adapter, but now the setup was hardly “ideal for dynamic environments.” My headset’s connection to the phone was unreliable, and I’d often have to stop walking, reseat it, and restart the podcast.
If you are gadget-minded you are now thinking: “Wireless earbuds!” But no thanks. The last thing I need in my life is more devices to keep track of, charge, and sync with other devices.
I was about to order a new MuVo, and I might still; it’s one of my favorite gadgets ever. But on a recent hike, in a remote area with nobody else around, I suddenly realized I didn’t need the headset at all. I yanked it out, stuck the phone in my pocket, and could hear perfectly well. Bonus: Nothing jammed into my ears.
It’s a bit weird when I do encounter other hikers. Should I pause the audio or not when we cross paths? So far I mostly do, but I don’t think it’s a big deal one way or another.
Adding more devices to solve a device problem amounts to doing the same thing and expecting a different result. I want to remain alert to the possibility that subtracting devices may be the right answer.
There’s a humorous coda to this story. It wasn’t just the headset that was failing to seat securely in the Lightning port. Charging cables were also becoming problematic. A friend suggested a low-tech solution: use a toothpick to pull lint out of the socket. It worked! I suppose I could now go back to using my wired headset on hikes. But I don’t think I will.
For a friend’s memorial I signed up to make a batch of images into a slideshow. All I wanted was the Simplest Possible Thing: a web page that would cycle through a batch of images. It’s been a while since I did something like this, so I looked around and didn’t find anything that seemed simple enough. The recipes I found felt like overkill. Here’s all I wanted to do:
1. Put the images we're gathering into a local folder
2. Run one command to build slideshow.html
3. Push the images plus slideshow.html to a web folder
Step 1 turned out to be harder than expected because a bunch of the images I got are in Apple’s HEIC format, so I had to find a tool that would convert those to JPG. Sigh.
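For the record, here's a minimal sketch of one way to script that conversion in Python, assuming the third-party pillow-heif and Pillow packages (pip install pillow pillow-heif); it isn't necessarily the tool I ended up using.

import os
from PIL import Image
from pillow_heif import register_heif_opener

# teach Pillow how to open .heic files
register_heif_opener()

for name in os.listdir():
    if name.lower().endswith('.heic'):
        jpg_name = os.path.splitext(name)[0] + '.jpg'
        Image.open(name).convert('RGB').save(jpg_name, 'JPEG')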
For step 2 I wrote the script below. A lot of similar recipes you'll find for this kind of thing create a trio of HTML, CSS, and JavaScript files. That feels like overkill for something this simple. I want as few moving parts as possible, so the Python script bundles everything into slideshow.html, which is the only thing that needs to be uploaded (along with the images).
Step 3 was simple: I uploaded the JPGs and slideshow.html to a web folder.
Except, whoa, not so fast there, old-timer! True, it’s easy for me, I’ve maintained a personal web server for decades and I don’t think twice about pushing files to it. Once upon a time, when you signed up with an ISP, that was a standard part of the deal: you’d get web hosting, and would use an FTP client — or some kind of ISP-provided web app — to move files to your server.
As I realized a few years ago, that’s now a rare experience. It seems that for most people, it’s far from obvious how to push a chunk of basic web stuff to a basic web server. People know how to upload stuff to Google Drive, or WordPress, but those are not vanilla web hosting environments.
It’s a weird situation. The basic web platform has never been more approachable. Browsers have converged nicely on the core standards. Lots of people could write a simple app like this one. Many more could at least use it. But I suspect it will be easier for many nowadays to install Python and run this script than to push its output to a web server.
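At least previewing the result locally is easy. In the folder containing slideshow.html and the images, Python's built-in web server is enough:

python -m http.server 8000

Then browse to http://localhost:8000/slideshow.html. It's the public hosting step that has become exotic.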
I hate to sound like a Grumpy Old Fart. Nobody likes that guy. I don’t want to be that guy. So I’ll just ask: What am I missing here? Are there reasons why it’s no longer important or useful for most people to be able to use the most basic kind of web hosting?
import os

# gather the JPG images in the current folder
l = [i for i in os.listdir() if i.endswith('.jpg')]

# build a div per image
divs = ''
for i in l:
    divs += f"""
<div class="slide">
<img src="{i}">
</div>
"""

# Note: In a Python f-string, CSS/JS squigglies ({}) need to be doubled
html = f"""
<html>
<head>
<title>My Title</title>
<style>
body {{ background-color: black }}
.slide {{ text-align: center; display: none; }}
img {{ height: 100% }}
</style>
</head>
<body>
<div id="slideshow">
<div role="list">
{divs}
</div>
</div>
<script>
const slides = document.querySelectorAll('.slide')
const time = 5000
slides[0].style.display = 'block'
let i = 0
setInterval( () => {{
  i++
  if (i === slides.length) {{
    i = 0
  }}
  // show only the current slide, hide all the others
  for (let j = 0; j < slides.length; j++ ) {{
    if ( j === i ) {{
      slides[j].style.display = 'block'
    }} else {{
      slides[j].style.display = 'none'
    }}
  }}
}}, time)
</script>
</body>
</html>
"""

# write everything to a single slideshow.html alongside the images
with open('slideshow.html', 'w') as f:
    f.write(html)
Just Have a Think, a YouTube channel created by Dave Borlace, is one of my best sources for news about, and analysis of, the world energy transition. Here are some hopeful developments I’ve enjoyed learning about.
All of Dave’s presentations are carefully researched and presented. A detail that has long fascinated me: how the show displays source material. Dave often cites IPCC reports and other sources that are, in raw form, PDF files. He spices up these citations with some impressive animated renderings. Here’s one from the most recent episode.
The progressive rendering of the chart in this example is an even fancier effect than I’d seen before, and it prompted me to track down the original source. In that clip Dave cites IRENA, the International Renewable Energy Agency, so I visited their site, looked for the cited report, and found it on page 8 of World Energy Transitions Outlook 2022. That link might or might not take you there directly; if not, scroll to page 8, where you’ll find the chart that’s animated in the video.
The graphical finesse of Just Have a Think is only icing on the cake. The show reports a constant stream of innovations that collectively give me hope we might accomplish the transition and avoid worst-case scenarios. But still, I wonder. That’s just a pie chart in a PDF file. How did it become the progressive rendering that appears in the video?
In any case, and much more importantly: Dave, thanks for the great work you’re doing!
It’s raining again today, and we’re grateful. This will help put a damper on what was shaping up to be a terrifying early start to fire season. But the tiny amounts won’t make a dent in the drought. The recent showers bring us to 24 inches of rain for the season, about 2/3 of normal. But 10 of those 24 inches came in one big burst on Oct 24.
Here are a bunch of those raindrops sailing down the Santa Rosa creek to the mouth of the Russian River at Jenner.
With Sam Learner’s amazing River Runner we can follow a drop that fell in the Mayacamas range as it makes its way to the ocean.
Until 2014 I’d only ever lived east of the Mississippi River, in Pennsylvania, Michigan, Maryland, Massachusetts, and New Hampshire. During those decades there may never have been a month with zero precipitation.
I still haven’t adjusted to a region where it can be dry for many months. In 2017, the year of the devastating Tubbs Fire, there was no rain from April through October.
California relies heavily on the dwindling Sierra snowpack for storage and timed release of water. Clearly we need a complementary method of storage and release, and this passage in Kim Stanley Robinson’s Ministry for the Future imagines it beautifully.
Typically the Sierra snowpack held about fifteen million acre-feet of water every spring, releasing it to reservoirs in a slow melt through the long dry summers. The dammed reservoirs in the foothills could hold about forty million acre-feet when full. Then the groundwater basin underneath the central valley could hold around a thousand million acre-feet; and that immense capacity might prove their salvation. In droughts they could pump up groundwater and put it to use; then during flood years they needed to replenish that underground reservoir, by capturing water on the land and not allow it all to spew out the Golden Gate.
…
Now the necessity to replumb the great valley for recharge had forced them to return a hefty percentage of the land to the kind of place it had been before Europeans arrived. The industrial agriculture of yesteryear had turned the valley into a giant factory floor, bereft of anything but products grown for sale; unsustainable ugly, devastated, inhuman, and this in a place that had been called the “Serengeti of North America,” alive with millions of animals, including megafauna like tule elk and grizzly bear and mountain lion and wolves. All those animals had been exterminated along with their habitat, in the first settlers’ frenzied quest to use the valley purely for food production, a kind of secondary gold rush. Now the necessity of dealing with droughts and floods meant that big areas of the valley were restored, and the animals brought back, in a system of wilderness parks or habitat corridors, all running up into the foothills that ringed the central valley on all sides.
The book, which Wikipedia charmingly classifies as cli-fi, grabbed me from page one and never let go. It’s an extraordinary blend of terror and hope. But this passage affected me in the most powerful way. As Marc Reisner’s Cadillac Desert explains, and as I’ve seen for myself, we’ve already engineered the hell out of California’s water systems, with less than stellar results.
Can we redo it and get it right this time? I don’t doubt our technical and industrial capacity. Let’s hope it doesn’t take an event like the one the book opens with — a heat wave in India that kills 20 million people in a week — to summon the will.
I’ve worked from home since 1998. All along I’ve hoped many more people would enjoy the privilege and share in the benefits. Now that it’s finally happening, and seems likely to continue in some form, let’s take a moment to reflect on an underappreciated benefit: neighborhood revitalization.
I was a child of the 1960s, and spent my grade school years in a newly-built suburb of Philadelphia. Commuter culture was well established by then, so the dads in the neighborhood were gone during the day. So were some of the moms, mine included, but many were at home and were able to keep an eye on us kids as we played in back yards after school. And our yards were special. A group of parents had decided not to fence them, thus creating what was effectively a private park. The games we played varied from season to season but always involved a group of kids roaming along that grassy stretch. Nobody was watching us most of the time. Since the kitchens all looked out on the back yards, though, there was benign surveillance. Somebody’s mom might be looking out at any given moment, and if things got out of hand, somebody’s mom would hear that.
For most kids, a generation later, that freedom was gone. Not for ours, though! They were in grade school when BYTE Magazine ended and I began my remote career. Our house became an after-school gathering place for our kids and their friends. With me in my front office, and Luann in her studio in the back, those kids enjoyed a rare combination of freedom and safety. We were mostly working, but at any given moment we could engage with them in ways that most parents never could.
I realized that commuter culture had, for several generations, sucked the daytime life out of neighborhoods. What we initially called telecommuting wasn’t just a way to save time, reduce stress, and burn less fossil fuel. It held the promise of restoring that daytime life.
All this came back to me powerfully at the height of the pandemic lockdown. Walking around the neighborhood on a weekday afternoon I’d see families hanging out, kids playing, parents working on landscaping projects and tinkering in garages, neighbors talking to one another. This was even better than my experience in the 2000s because more people shared it.
Let’s hold that thought. Even if many return to offices on some days of the week, I believe and hope that we’ve normalized working from home on other days. By inhabiting our neighborhoods more fully on weekdays, we can perhaps begin to repair a social fabric frayed by generations of commuter culture.
Meanwhile here is a question to ponder. Why do we say that we are working from and not working at home?
The other day Luann and I were thinking of a long-ago friend and realized we’d forgotten the name of that friend’s daughter. Decades ago she was a spunky blonde blue-eyed little girl; we could still see her in our minds’ eyes, but her name was gone.
“Don’t worry,” I said confidently, “it’ll come back to one of us.”
Sure enough, a few days later, on a bike ride, the name popped into my head. I’m sure you’ve had the same experience. This time around it prompted me to think about how that happens.
To me it feels like starting up a background search process that runs for however long it takes, then notifies me when the answer is ready. I know the brain isn’t a computer, and I know this kind of model is suspect, so I wonder what’s really going on.
– Why was I so sure the name would surface?
– Does a retrieval effort kick off neurochemical change that elaborates over time?
– Before computers, what model did people use to explain this phenomenon?
So far I’ve only got one answer. That spunky little girl was Diana.
A year after we moved to northern California I acquired a pair of shiny new titanium hip joints. There would be no more running for me. But I’m a lucky guy who gets to bike and hike more than ever amidst spectacular scenery that no one could fully explore in a lifetime.
Although the osteoarthritis was more advanced on the right side, we opted for bilateral replacement because the left side wasn’t far behind. Things hadn’t felt symmetrical in the years leading up to the surgery, and that didn’t change. There’s always a sense that something’s different about the right side.
We’re pretty sure it’s not the hardware. X-rays show that the implants remain firmly seated, and there’s no measurable asymmetry. Something about the software has changed, but there’s been no way to pin down what’s different about the muscles, tendons, and ligaments on that side, whether there’s a correction to be made, and if so, how.
Last month, poking around on my iPhone, I noticed that I’d never opened the Health app. That’s because I’ve always been ambivalent about the quantified self movement. In college, when I left competitive gymnastics and took up running, I avoided tracking time and distance. Even then, before the advent of fancy tech, I knew I was capable of obsessive data-gathering and analysis, and didn’t want to go there. It was enough to just run, enjoy the scenery, and feel the afterglow.
When I launched the Health app, I was surprised to see that it had been counting my steps since I became an iPhone user 18 months ago. Really? I don’t recall opting into that feature.
Still, it was (of course!) fascinating to see the data and trends. And one metric in particular grabbed my attention: Walking Asymmetry.
Walking asymmetry is the percent of time that your steps with one foot are faster or slower than the other foot.
…
An even or symmetrical walk is often an important physical therapy goal when recovering from injury.
Here’s my chart for the past year.
I first saw this in mid-December when the trend was at its peak. What caused it? Well, it’s been rainy here (thankfully!), so I’ve been riding less; maybe that was a factor?
Since then I haven’t biked more, though, and I’ve walked the usual mile or two most days, with longer hikes on weekends. Yet the data suggest that I’ve reversed the trend.
What’s going on here?
Maybe this form of biofeedback worked. Once aware of the asymmetry I subconsciously corrected it. But that doesn’t explain the November/December trend.
Maybe the metric is bogus. A phone in your pocket doesn’t seem like a great way to measure walking asymmetry. I’ve also noticed that my step count and distances vary, on days when I’m riding, in ways that are hard to explain.
I’d like to try some real gait analysis using wearable tech. I suspect that data recorded from a couple of bike rides, mountain hikes, and neighborhood walks could help me understand the forces at play, and that realtime feedback could help me balance those forces.
I wouldn’t want to wear it all the time, though. It’d be a diagnostic and therapeutic tool, not a lifestyle.
I’ve just rediscovered two digital assets that I’d forgotten about.
1. The Reddit username judell, which I created in 2005 and never used. When you visit the page it says “hmm… u/judell hasn’t posted anything” but also reports, in my Trophy Case, that I belong to the 15-year club.
2. The Amazon AWS S3 bucket named simply jon, which I created in 2006 for an InfoWorld blog post and companion column about the birth of Amazon Web Services. As Wikipedia’s timeline shows, AWS started in March of that year.
Care to guess the odds that I could still access both of these assets after leaving them in limbo for 15 years?
Spoiler alert: it was a coin flip.
I’ve had no luck with Reddit so far. The email account I signed up with no longer exists. The support folks kindly switched me to a current email but it’s somehow linked to Educational_Elk_7869 not to judell. I guess we may still get it sorted but the point is that I was not at all surprised by this loss of continuity. I’ve lost control of all kinds of digital assets over the years, including the above-cited InfoWorld article which only Wayback (thank you as always!) now remembers.
When I turned my attention to AWS S3 I was dreading a similar outcome. I’d gone to Microsoft not long after I made that AWS developer account; my early cloud adventures were all in Azure; could I still access those long-dormant AWS resources? Happily: yes.
Here’s the backstory from that 2006 blog post:
Naming
The name of the bucket is jon. The bucket namespace is global which means that as long as jon is owned by my S3 developer account, nobody else can use that name. Will this lead to a namespace land grab? We’ll see. Meanwhile, I’ve got mine, and although I may never again top Jon Stewart as Google’s #1 Jon, his people are going to have to talk to my people if they want my Amazon bucket.
I’m not holding my breath waiting for an offer. Bucket names never mattered in the way domain names do. Still, I would love to be pleasantly surprised!
My newfound interest in AWS is, of course, because Steampipe wraps SQL around a whole bunch of AWS APIs including the one for S3 buckets. So, for example, when exactly did I create that bucket? Of course I can log into the AWS console and click my way to the answer. But I’m all about SQL lately so instead I can do this.
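Something along these lines, for example. This is a sketch rather than the exact query I ran; it assumes the Steampipe AWS plugin is installed and that its aws_s3_bucket table exposes a creation_date column.

import subprocess

# ask Steampipe, via its CLI, when the bucket named 'jon' was created
sql = "select name, creation_date from aws_s3_bucket where name = 'jon'"
subprocess.run(['steampipe', 'query', sql], check=True)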
Oh, and there’s the other one I made for Luann the following year. These are pretty cool ARNs (Amazon Resource Names)! I should probably do something with them; the names you can get nowadays are more like Educational_Elk_7869.
Anyway I’m about to learn a great deal about the many AWS APIs that Steampipe can query, check for policy compliance, and join with the APIs of other services. Meanwhile it’s fun to recall that I wrote one of the first reviews of the inaugural AWS product and, in the process, laid claim to some very special S3 bucket names.
Monday will be my first day as community lead for Steampipe, a young open source project that normalizes APIs by way of Postgres foreign data wrappers. The project’s taglines are select * from cloud and query like it’s 1992; the steampipe.io home page nicely illustrates these ideas.
I’ve been thinking about API normalization for a long time. The original proposal for the World Wide Web says:
Databases
A generic tool could perhaps be made to allow any database which uses a commercial DBMS to be displayed as a hypertext view.
We ended up with standard ways for talking to databases — ODBC, JDBC — but not for expressing them on the web.
When I was at Microsoft I was bullish on OData, an outgrowth of Pablo Castro’s wonderful Project Astoria. Part of the promise was that every database-backed website could automatically offer basic API access that wouldn’t require API wrappers for everybody’s favorite programming language. The API was hypertext; a person could navigate it using links and search. Programs wrapped around that API could be useful, but meaningful interaction with data would be possible without them.
(For a great example of what that can feel like, jump into the middle of one of Simon Willison’s datasettes, for example san-francisco.datasettes.com, and start clicking around.)
Back then I wrote a couple of posts on this topic[1, 2]. Many years later OData still hasn’t taken the world by storm. I still think it’s a great idea and would love to see it, or something like it, catch on more broadly. Meanwhile Steampipe takes a different approach. Given a proliferation of APIs and programming aids for them, let’s help by providing a unifying abstraction: SQL.
I’ve done a deep dive into the SQL world over the past few years. The first post in a series I’ve been writing on my adventures with Postgres is what connected me to Steampipe and its sponsor (my new employer) Turbot. When you install Steampipe it brings Postgres along for the ride. Imagine what you could do with data flowing into Postgres from many different APIs and filling up tables you can view, query, join, and expose to tools and systems that talk to Postgres. Well, it’s going to be my job to help imagine, and explain, what’s possible in that scenario.
Meanwhile I need to give some thought to my Twitter tag line: patron saint of trailing edge technologies. It’s funny and it’s true. At BYTE I explored how software based on the Net News Transfer Protocol enabled my team to do things that we use Slack for today. At Microsoft I built a system for community-scale calendaring based on iCalendar. When I picked up NNTP and iCalendar they were already on the trailing edge. Yet they were, and especially in the case of iCalendar still are, capable of doing much more than is commonly understood.
Then of course came web annotation. Although Hypothesis recently shepherded it to W3C standardization it goes all the way back to the Mosaic browser and is exactly the kind of generative tech that fires my imagination. With Hypothesis now well established in education, I hope others will continue to explore the breadth of what’s possible when every document workflow that needs to can readily connect people, activities, and data to selections in documents. If that’s of interest, here are some signposts pointing to scenarios I’ve envisioned and prototyped.
And now it’s SQL. For a long time I set it aside in favor of object, XML, and NoSQL stores. Coming back to it, by way of Postgres, has shown me that:
– Modern SQL is more valuable as a programming language than is commonly understood
– So is Postgres as a programming environment
The tagline query like it’s 1992 seems very on-brand for me. But maybe I should let go of the trailing-edge moniker. Nostalgia isn’t the best way to motivate fresh energy. Maybe query like it’s 2022 sets a better tone? In any case I’m very much looking forward to this next phase.
R0ml Lefkowitz’s The Image of Postgres evokes the Smalltalk experience: reach deeply into a running system, make small changes, see immediate results. There isn’t yet a full-blown IDE for the style of Postgres-based development I describe in this series, though I can envision a VSCode extension that would provide one. But there is certainly a REPL (read-eval-print loop); it’s called psql, and it delivers the kind of immediacy that all REPLs do. In our case there’s also Metabase; it offers a complementary REPL that enhances its power as a lightweight app server.
The Clojure REPL gives the programmer an interactive development experience. When developing new functionality, it enables her to build programs first by performing small tasks manually, as if she were the computer, then gradually make them more and more automated, until the desired functionality is fully programmed. When debugging, the REPL makes the execution of her programs feel tangible: it enables the programmer to rapidly reproduce the problem, observe its symptoms closely, then improvise experiments to rapidly narrow down the cause of the bug and iterate towards a fix.
I feel the same way about the Python REPL, the browser’s REPL, the Metabase REPL, and now also the Postgres REPL. Every function and every materialized view in the analytics system begins as a snippet of code pasted into the psql console (or Metabase). Iteration yields successive results instantly, and those results reflect live data. In How is a Programmer Like a Pathologist Gilad Bracha wrote:
A live program is dynamic; it changes over time; it is animated. A program is alive when it’s running. When you work on a program in a text editor, it is dead.
Tudor Girba amplified the point in a tweet.
In a database-backed system there’s no more direct way to interact with live data than to do so in the database. The Postgres REPL is, of course, a very sharp tool. Here are some ways to handle it carefully.
Find the right balance for tracking incremental change
In Working in a hybrid Metabase / Postgres code base I described how version-controlled files — for Postgres functions and views, and for Metabase questions — repose in GitHub and drive a concordance of docs. I sometimes write code snippets directly in psql or Metabase, but mainly compose in a “repository” (telling word!) where those snippets are “dead” artifacts in a text editor. They come to life when pasted into psql.
A knock on Smalltalk was that it didn’t play nicely with version control. If you focus on the REPL aspect, you could say the same of Python or JavaScript. In any such case there’s a balance to be struck between iterating at the speed of thought and tracking incremental change. Working solo I’ve been inclined toward a fairly granular commit history. In a team context I’d want to leave a chunkier history but still record the ongoing narrative somewhere.
Make it easy to understand the scope and effects of changes
This tooling is still a work in progress, though. The concordance doesn’t yet include Postgres types, for example, nor the tables that are upstream from materialized views. My hypothetical VSCode extension would know about all the artifacts and react immediately when things change.
Make it easy to find and discard unwanted artifacts
Given a function or view named foo, I’ll often write and test a foo2 before transplanting changes back into foo. Because foo may often depend on bar and call baz I wind up also with bar2 and baz2. These artifacts hang around in Postgres until you delete them, which I try to do as I go along.
If foo2 is a memoized function (see this episode), it can be necessary to delete the set of views that it’s going to recreate. I find these with a query.
select
'drop materialized view ' || matviewname || ';' as drop_stmt
from pg_matviews
where matviewname ~* {{ pattern }}
That pattern might be question_and_answer_summary_for_group to find all views based on that function, or _6djxg2yk to find all views for a group, or even [^_]{8,8}$ to find all views made by memoized functions.
I haven’t yet automated the discovery or removal of stale artifacts and references to them. That’s another nice-to-have for the hypothetical IDE.
The Image of Postgres
I’ll give R0ml the last word on this topic.
This is the BYTE magazine cover from August of 1981. In the 70s and the 80s, programming languages had this sort of unique perspective that’s completely lost to history. The way it worked: a programming environment was a virtual machine image, it was a complete copy of your entire virtual machine memory and that was called the image. And then you loaded that up and it had all your functions and your data in it, and then you ran that for a while until you were sort of done and then you saved it out. And this wasn’t just Smalltalk, Lisp worked that way, APL worked that way, it was kind of like Docker only it wasn’t a separate thing because everything worked that way and so you didn’t worry very much about persistence because it was implied. If you had a programming environment it saved everything that you were doing in the programming environment, you didn’t have to separate that part out. A programming environment was a place where you kept all your data and business logic forever.
So then Postgres is kind of like Smalltalk only different.
What’s the difference? Well we took the UI out of Smalltalk and put it in the browser. The rest of it is the same, so really Postgres is an application delivery platform, just like we had back in the 80s.
It’s all nicely RESTful. Interactive elements that can parameterize queries, like search boxes and date pickers, map to URLs. Queries can emit URLs in order to compose themselves with other queries. I came to see this system as a kind of lightweight application server in which to incubate an analytics capability that could later be expressed more richly.
Let’s explore that idea in more detail. Consider this query that finds groups created in the last week.
with group_create_days as (
  select
    to_char(created, 'YYYY-MM-DD') as day
  from "group"
  where created > now() - interval '1 week'
)
select
  day,
  count(*)
from group_create_days
group by day
order by day desc
A Metabase user can edit the query and change the interval to, say, 1 month, but there’s a nicer way to enable that. Terms in double squigglies are Metabase variables. When you type {{interval}} in the query editor, the Variables pane appears.
Here I’m defining the variable’s type as text and providing the default value 1 week. The query sent to Postgres will be the same as above. Note that this won’t work if you omit ::interval. Postgres complains: “ERROR: operator does not exist: timestamp with time zone - character varying.” That’s because Metabase doesn’t support variables of type interval as required for date subtraction. But if you cast the variable to type interval (that is, write {{interval}}::interval in the query) it’ll work.
That’s an improvement. A user of this Metabase question can now type 2 months or 1 year to vary the interval. But while Postgres’ interval syntax is fairly intuitive, this approach still requires people to make an intuitive leap. So here’s a version that eliminates the guessing.
The variable type is now Field Filter; the filtered field is the created column of the group table; the widget type is Relative Date; the default is Last Month. Choosing other intervals is now a point-and-click operation. It’s less flexible — 3 weeks is no longer an option — but friendlier.
Metabase commendably provides URLs that capture these choices. The default in this case is METABASE_SERVER/question/1060?interval=lastmonth. For the Last Year option it becomes interval=lastyear.
Because all Metabase questions that use variables work this way, the notion of Metabase as rudimentary app server expands to sets of interlinked questions. In Working in a hybrid Metabase / Postgres code base I showed the following example.
A Metabase question, #600, runs a query that selects columns from the view top_20_annotated_domains_last_week. It interpolates one of those columns, domain, into an URL that invokes Metabase question #985 and passes the domain as a parameter to that question. In the results for question #600, each row contains a link to a question that reports details about groups that annotated pages at that row’s domain.
This is really powerful stuff. Even without all the advanced capabilities I’ve been discussing in this series — pl/python functions, materialized views — you can do a lot more with the Metabase / Postgres combo than you might think.
For example, here’s an interesting idiom I’ve discovered. It’s often useful to interpolate a Metabase variable into a WHERE clause.
select *
from dashboard_users
where email = {{ email }}
You can make that into a fuzzy search using the case-insensitive regex-match operator ~*.
select *
from dashboard_users
where email ~* {{ email }}
That’ll find a single address regardless of case; you can also find all records matching, say, ucsc.edu. But it requires the user to type some value into the input box. Ideally this query won’t require any input. If none is given, it lists all addresses in the table. If there is input it does a fuzzy match on that input. Here’s a recipe for doing that. Tell Metabase that {{ email }} is a required variable, and set its default to any. Then, in the query, do this:
select *
from dashboard_users
where email ~*
  case
    when {{ email }} = 'any' then ''
    else {{ email }}
  end
In the default case the matching operator binds to the empty string, so it matches everything and the query returns all rows. For any other input the operator binds to a value that drives a fuzzy search.
This is all very nice, you may think, but even the simplest app server can write to the database as well as read from it, and Metabase can’t. It’s ultimately just a tool that you point at a data warehouse to SELECT data for display in tables and charts. You can’t INSERT or UPDATE or ALTER or DELETE or CALL anything.
Well, it turns out that you can. Here’s a Metabase question that adds a user to the table.
select add_dashboard_user( {{email}} )
How can this possibly work? If add_dashboard_user were a Postgres procedure you could CALL it from psql, but in this context you can only SELECT.
We’ve seen the solution in Postgres set-returning functions that self-memoize as materialized views. A Postgres function written in pl/python can import and use a Python function from a plpython_helpers module. That helper function can invoke psql to CALL a procedure. So this is possible.
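To give a flavor of the mechanism, here’s a hedged sketch of what such a helper might look like. The plpython_helpers module is real in our system, but the function name, database name, and details below are hypothetical, not our actual code.

# plpython_helpers.py (sketch; names are placeholders)
import subprocess

def call_procedure(procname, *args):
    # A pl/python function can import this and use it to run a CALL statement
    # out-of-band, since the Metabase question itself can only SELECT.
    # (Real code would quote arguments more carefully.)
    quoted_args = ', '.join(f"'{a}'" for a in args)
    sql = f'call {procname}({quoted_args})'
    subprocess.run(['psql', '-d', 'analytics', '-c', sql], check=True)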
We’ve used Metabase for years. It provides a basic, general-purpose UX that’s deeply woven into the fabric of the company. Until recently we thought of it as a read-only system for analytics, so a lot of data management happens in spreadsheets that don’t connect to the data warehouse. It hadn’t occurred to me to leverage that same basic UX for data management too, and that’s going to be a game-changer. I always thought of Metabase as a lightweight app server. With some help from Postgres it turns out to be a more capable one than I thought.
While helping Hypothesis find its way to ed-tech it was my great privilege to explore ways of adapting annotation to other domains including bioscience, journalism, and scholarly publishing. Working across these domains showed me that annotation isn’t just an app you do or don’t adopt. It’s also a service you’d like to be available in every document workflow that connects people to selections in documents.
As I worked through these and other scenarios, I accreted a set of tools for enabling any annotation-aware interaction in any document-oriented workflow. I’ve wanted to package these as a coherent software development kit; that hasn’t happened yet, but here are some of the ingredients that belong in such an SDK.
Creating an annotation from a selection in a document
Two core operations lie at the heart of any annotation system: creating a note that will bind to a selection in a document, and binding (anchoring) that note to its place in the document. A tool that creates an annotation reacts to a selection in a document by forming one or more selectors that describe the selection.
The most important selector is TextQuoteSelector. If I visit http://www.example.com and select the phrase “illustrative examples” and then use Hypothesis to annotate that selection, the payload sent from the client to the server includes this construct.
{
  "type": "TextQuoteSelector",
  "exact": "illustrative examples",
  "prefix": "n\n This domain is for use in ",
  "suffix": " in documents. You may use this\n"
}
The Hypothesis client formerly used an NPM module, dom-anchor-text-quote, to derive that info from a selection. It no longer uses that module, and the equivalent code that it does use isn’t separately available. But annotations created using TextQuoteSelectors formed by dom-anchor-text-quote interoperate with those created using the Hypothesis client, and I don’t expect that will change since Hypothesis needs to remain backwards-compatible with itself.
You’ll find something like TextQuoteSelector in any annotation system. It’s formally defined by the W3C here. In the vast majority of cases this is all you need to describe the selection to which an annotation should anchor.
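The idea is simple enough to sketch in a few lines of Python. This is a conceptual illustration, not the dom-anchor-text-quote code: given a document’s visible text and the character offsets of a selection, capture the quote along with a bit of surrounding context.

def text_quote_selector(doc_text, start, end, context=32):
    # capture the selected text plus up to `context` characters on either side
    return {
        'type': 'TextQuoteSelector',
        'exact': doc_text[start:end],
        'prefix': doc_text[max(0, start - context):start],
        'suffix': doc_text[end:end + context],
    }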
There are, however, cases where TextQuoteSelector won’t suffice. Consider a document that repeats the same passage three times. Given a short selection in the first of those passages, how can a system know that an annotation should anchor to that one, and not the second or third? Another selector, TextPositionSelector (https://www.npmjs.com/package/dom-anchor-text-position), enables a system to know which passage contains the selection.
It records the start and end of the selection in the visible text of an HTML document. Here’s the HTML source of that web page.
<div>
<h1>Example Domain</h1>
<p>This domain is for use in illustrative examples in documents. You may use this
domain in literature without prior coordination or asking for permission.</p>
<p><a href="https://www.iana.org/domains/example">More information...</a></p>
</div>
Here is the visible text to which the TextQuoteSelector refers.
\n\n Example Domain\n This domain is for use in illustrative examples in documents. You may use this\n domain in literature without prior coordination or asking for permission.\n More information…\n\n\n\n
The positions recorded by a TextPositionSelector can change for a couple of reasons. If the document is altered, it’s obvious that an annotation’s start and stop numbers might change. Less obviously, that can happen even if the document’s text isn’t altered. A news website, for example, may inject different kinds of advertising-related text content from one page load to the next. In that case the positions for two consecutive Hypothesis annotations made on the same selection can differ. So while TextPositionSelector can resolve ambiguity, and provide hints to an annotation system about where to look for matches, the foundation is ultimately TextQuoteSelector.
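Here, again as a conceptual Python sketch rather than the Hypothesis anchoring code, is how a position selector can help disambiguate: find every occurrence of the quote, then prefer the one closest to the recorded start position.

def anchor_quote(doc_text, quote_selector, position_selector=None):
    exact = quote_selector['exact']
    # collect the start offsets of every occurrence of the quoted text
    matches = []
    i = doc_text.find(exact)
    while i != -1:
        matches.append(i)
        i = doc_text.find(exact, i + 1)
    if not matches:
        return None  # the quote no longer appears; the annotation is orphaned
    if position_selector is None or len(matches) == 1:
        return matches[0]
    # several candidates: pick the one nearest the recorded position
    target = position_selector['start']
    return min(matches, key=lambda m: abs(m - target))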
If you try the first example in the README at https://github.com/judell/TextQuoteAndPosition, you can form your own TextQuoteSelector and TextPositionSelector from a selection in a web page. That repo exists only as a wrapper around the set of modules — dom-anchor-text-quote, dom-anchor-text-position, and wrap-range-text — needed to create and anchor annotations.
Building on these ingredients, HelloWorldAnnotated illustrates a common pattern.
Given a selection in a page, form the selectors needed to post an annotation that targets the selection.
Lead a user through an interaction that influences the content of that annotation.
Post the annotation.
Here is an example of such an interaction. It’s a content-labeling scenario in which a user rates the emotional affect of a selection. This is the kind of thing that can be done with the stock Hypothesis client, but awkwardly because users must reliably add tags like WeakNegative or StrongPositive to represent their ratings. The app prompts for those tags to ensure consistent use of them.
Although the annotation is created by a standalone app, the Hypothesis client can anchor it, display it, and even edit it.
And the Hypothesis service can search for sets of annotations that match the tags WeakNegative or StrongPositive.
There’s powerful synergy at work here. If your annotation scenario requires controlled tags, or a prescribed workflow, you might want to adapt the Hypothesis client to do those things. But it can be easier to create a standalone app that does exactly what you need, while producing annotations that interoperate with the Hypothesis system.
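To make that concrete, here’s a hedged sketch in Python of the posting step, going straight at the Hypothesis API. HelloWorldAnnotated itself is JavaScript and uses hlib, so this isn’t its actual code; the token, URI, text, and selector values below are placeholders.

import requests

API_TOKEN = 'YOUR_HYPOTHESIS_API_TOKEN'  # placeholder

payload = {
    'uri': 'http://www.example.com/',
    'text': 'Rating for this passage',
    'tags': ['StrongPositive'],            # the controlled tag chosen in the app
    'target': [{
        'source': 'http://www.example.com/',
        'selector': [{
            'type': 'TextQuoteSelector',
            'exact': 'illustrative examples',
            'prefix': 'This domain is for use in ',
            'suffix': ' in documents.',
        }],
    }],
}

r = requests.post(
    'https://api.hypothes.is/api/annotations',
    headers={'Authorization': f'Bearer {API_TOKEN}'},
    json=payload,
)
print(r.status_code, r.json().get('id'))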
Anchoring an annotation to its place in a document
Using this same set of modules, a tool or system can retrieve an annotation from a web service and anchor it to a document in the place where it belongs. You can try the second example in the README at https://github.com/judell/TextQuoteAndPosition to see how this works.
For a real-world demonstration of this technique, see Science in the Classroom. It’s a project sponsored by The American Association for the Advancement of Science. Graduate students annotate research papers selected from the Science family of journals so that younger students can learn about the terminology, methods, and outcomes of scientific research.
Pre-Hypothesis, annotations on these papers were displayed using Learning Lens, a viewer that color-codes them by category.
Nothing about Learning Lens changed when Hypothesis came into the picture; Hypothesis just provided a better way to record the annotations. Originally that was done, as it often is in the absence of a formal way to describe annotation targets, by passing notes like: “highlight the word ‘proximodistal’ in the first paragraph of the abstract, and attach this note to it.” This kind of thing happens a lot, and wherever it does there’s an opportunity to adopt a more rigorous approach. Nowadays at Science in the Classroom the annotators use Hypothesis to describe where notes should anchor, as well as what they should say. When an annotated page loads it searches Hypothesis for annotations that target the page, and inserts them using the same format that’s always been used to drive the Learning Lens. Tags assigned by annotators align with Learning Lens categories. The search looks only for notes from designated annotators, so nothing unwanted will appear.
An annotation-powered survey
The Credibility Coalition is “a research community that fosters collaborative approaches to understanding the veracity, quality and credibility of online information.” We worked with them on a project to test a set of signals that bear on the credibility of news stories. Examples of such signals include:
Title Representativeness (Does the title of an article accurately reflect its content?)
Sources (Does the article cite sources?)
Acknowledgement of uncertainty (Does the author acknowledge uncertainty, or the possibility things might be otherwise?)
Volunteers were asked these questions for each of a set of news stories. Many of the questions were yes/no or multiple choice and could have been handled by any survey tool. But some were different. What does “acknowledgement of uncertainty” look like? You know it when you see it, and you can point to examples. But how can a survey tool solicit answers that refer to selections in documents, and record their locations and contexts?
The answer was to create a survey tool that enabled respondents to answer such questions by highlighting one or more selections. Like the HelloWorldAnnotated example above, this was a bespoke client that guided the user through a prescribed workflow. In this case, that workflow was more complex. And because it was defined in a declarative way, the same app can be used for any survey that requires people to provide answers that refer to selections in web documents.
A JavaScript wrapper for the Hypothesis API
The HelloWorldAnnotated example uses functions from a library, hlib, to post an annotation to the Hypothesis service. That library includes functions for searching and posting annotations using the Hypothesis API. It also includes support for interaction patterns common to annotation apps, most of which occur in facet, a standalone tool that searches, displays, and exports sets of annotations. Supported interactions include:
– Authenticating with an API token
– Creating a picklist of groups accessible to the authenticated user
If you’re working in Python, hypothesis-api is an alternative API wrapper that supports searching for, posting, and parsing annotations.
Notifications
If you’re a publisher who embeds Hypothesis on your site, you can use a wildcard search to find annotations. But it would be helpful to be notified when annotations are posted. h_notify is a tool that uses the Hypothesis API to watch for annotations on individual or wildcard URLs, or from particular users, or in a specified group, or with a specified tag.
When an h_notify-based watcher finds notes in any of these ways, it can send alerts to a Slack channel, or to an email address, or add items to an RSS feed.
At Hypothesis we mainly rely on the Slack option. In this example, user nirgendheim highlighted the word “interacting” in a page on the Hypothesis website.
The watcher sent this notice to our #website channel in Slack.
A member of the support team (Hypothesis handle mdiroberts) saw it there and responded to nirgendheim as shown above. How did nirgendheim know that mdiroberts had responded? The core Hypothesis system sends you an email when somebody replies to one of your notes. h_notify is for bulk monitoring and alerting.
A tiny Hypothesis server
People sometimes ask about connecting the Hypothesis client to an alternate server in order to retain complete control over their data. It’s doable, you can follow the instructions here to build and run your own server, and some people and organizations do that. Depending on need, though, that can entail more effort, and more moving parts, than may be warranted.
Suppose for example you’re part of a team of investigative journalists annotating web pages for a controversial story, or a team of analysts sharing confidential notes on web-based financial reports. The documents you’re annotating are public, but the notes you’re taking in a Hypothesis private group are so sensitive that you’d rather not keep them in the Hypothesis service. You’d ideally like to spin up a minimal server for that purpose: small, simple, and easy to manage within your own infrastructure.
Here’s a proof of concept. This tiny server clocks in at just 145 lines of Python with very few dependencies. It uses Python’s batteries-included SQLite module for annotation storage. The web framework is Pyramid, only because that’s what I’m familiar with; it could as easily be Flask, the ultra-light framework typically used for this sort of thing.
A tiny app wrapped around those ingredients is all you need to receive JSON payloads from a Hypothesis client, and return JSON payloads when the client searches for annotations to anchor to a page.
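To give a sense of the scale involved, here’s a hedged sketch of the idea in Flask rather than Pyramid. It is not the actual 145-line server, just enough to show the two endpoints a client needs: one to receive posted annotations, one to answer searches by URI.

import json, sqlite3
from flask import Flask, request, jsonify

app = Flask(__name__)
db = sqlite3.connect('annotations.db', check_same_thread=False)
db.execute('create table if not exists annos (id integer primary key, uri text, body text)')
db.commit()

@app.post('/api/annotations')
def create_annotation():
    # store whatever JSON payload the client sends, keyed by the annotated URI
    anno = request.get_json()
    cur = db.execute('insert into annos (uri, body) values (?, ?)',
                     (anno.get('uri'), json.dumps(anno)))
    db.commit()
    anno['id'] = str(cur.lastrowid)
    return jsonify(anno)

@app.get('/api/search')
def search_annotations():
    # return annotations whose URI matches the page the client is anchoring
    uri = request.args.get('uri')
    rows = db.execute('select body from annos where uri = ?', (uri,)).fetchall()
    return jsonify({'total': len(rows), 'rows': [json.loads(r[0]) for r in rows]})

if __name__ == '__main__':
    app.run(port=8080)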
The service is dockerized and easy to deploy. To test it I used the fly.io speedrun to create an instance at https://summer-feather-9970.fly.dev. Then I made the handful of small tweaks to the Hypothesis client shown in client-patches.txt. My method for doing that, typical for quick proofs of concept that vary the Hypothesis client in some small way, goes like this:
Clone the Hypothesis client.
Edit gulpfile.js to say const IS_PRODUCTION_BUILD = false. This turns off minification so it’s possible to read and debug the client code.
Follow the instructions to run the client from a browser extension. After establishing a link between the client repo and browser-extension repo, as per those instructions, use this build command — make build SETTINGS_FILE=settings/chrome-prod.json — to create a browser extension that authenticates to the Hypothesis production service.
In a Chromium browser (e.g. Chrome or Edge or Brave) use chrome://extensions, click Load unpacked, and point to the browser-extension/build directory where you built the extension.
This is the easiest way to create a Hypothesis client in which to try quick experiments. There are tons of source files in the repos, but just a handful of bundles and loose files in the built extension. You can run the extension, search and poke around in those bundles, set breakpoints, make changes, and see immediate results.
In this case I only made the changes shown in client-patches.txt:
In options/index.html I added an input box to name an alternate server.
In options/options.js I sync that value to the cloud and also to the browser’s localStorage.
In the extension bundle I check localStorage for an alternate server and, if present, modify the API request used by the extension to show the number of notes found for a page.
In the sidebar bundle I check localStorage for an alternate server and, if present, modify the API requests used to search for, create, update, and delete annotations.
I don’t recommend this cowboy approach for anything real. If I actually wanted to use this tweaked client I’d create branches of the client and the browser-extension, and transfer the changes into the source files where they belong. If I wanted to share it with a close-knit team I’d zip up the extension so colleagues could unzip and sideload it. If I wanted to share more broadly I could upload the extension to the Chrome web store. I’ve done all these things, and have found that it’s feasible — without forking Hypothesis — to maintain branches that carry small but strategic changes like this one. But when I’m aiming for a quick proof of concept, I’m happy to be a cowboy.
In any event, here’s the proof. With the tiny server deployed to summer-feather-9970.fly.dev, I poked that address into the tweaked client.
And sure enough, I could search for, create, reply to, update, and delete annotations using that 145-line SQLite-backed server.
The client still authenticates to Hypothesis in the usual way, and behaves normally unless you specify an alternate server. In that case, the server knows nothing about Hypothesis private groups. The client sees it as the Hypothesis public layer, but it’s really the moral equivalent of a private group. Others will see it only if they’re running the same tweaked extension and pointing to the same server. You could probably go quite far with SQLite but, of course, it’s easy to see how you’d swap it out for a more robust database like Postgres.
Signposts
I think of these examples as signposts pointing to a coherent SDK for weaving annotation into any document workflow. They show that it’s feasible to decouple and recombine the core operations: creating an annotation based on a selection in a document, and anchoring an annotation to its place in a document. Why decouple? Reasons are as diverse as the document workflows we engage in. The stock Hypothesis system beautifully supports a wide range of scenarios. Sometimes it’s helpful to replace or augment Hypothesis with a custom app that provides a guided experience for annotators and/or an alternative display for readers. The annotation SDK I envision will make it straightforward for developers to build solutions that leverage the full spectrum of possibility.
In Working with Postgres types I showed an example of a materialized view that depends on a typed set-returning function. Because Postgres knows about that dependency, it won’t allow DROP FUNCTION foo. Instead it requires DROP FUNCTION foo CASCADE.
A similar thing happens with materialized views that depend on tables or other materialized views. Let’s build a cascade of views and consider the implications.
create materialized view v1 as (
  select
    1 as number,
    'note_count' as label
);
SELECT 1
select * from v1;
number | label
-------+-------
1 | note_count
Actually, before continuing the cascade, let’s linger here for a moment. This is a table-like object created without using CREATE TABLE and without explicitly specifying types. But Postgres knows the types.
\d v1;
Materialized view "public.v1"
Column | Type
--------+-----
number | integer
label | text
The read-only view can become a read-write table like so.
create table t1 as (select * from v1);
SELECT 1
select * from t1;
number | label
-------+-------
1 | note_count
\d t1
Table "public.t1"
Column | Type
--------+-----
number | integer
label | text
This ability to derive a table from a materialized view will come in handy later. It’s also just interesting to see how the view’s implicit types become explicit in the table.
OK, let’s continue the cascade.
create materialized view v2 as (
  select
    number + 1 as number,
    label
  from v1
);
SELECT 1
select * from v2;
number | label
-------+-------
2 | note_count
create materialized view v3 as (
  select
    number + 1 as number,
    label
  from v2
);
SELECT 1
select * from v3;
number | label
-------+-------
3 | note_count
Why do this? Arguably you shouldn’t. Laurenz Albe makes that case in Tracking view dependencies in PostgreSQL. Recognizing that it’s sometimes useful, though, he goes on to provide code that can track recursive view dependencies.
I use cascading views advisedly to augment the use of CTEs and functions described in Postgres functional style. Views that refine views can provide a complementary form of the chunking that aids reasoning in an analytics system. But that’s a topic for another episode. In this episode I’ll describe a problem that arose in a case where there’s only a single level of dependency from a table to a set of dependent materialized views, and discuss my solution to that problem.
Here’s the setup. We have an annotation table that’s reloaded nightly. On an internal dashboard we have a chart based on the materialized view annos_at_month_ends_for_one_year which is derived from the annotation table and, as its name suggests, reports annotation counts on a monthly cycle. At the beginning of the nightly load, this happens: DROP TABLE annotation CASCADE. So the derived view gets dropped and needs to be recreated as part of the nightly process. But that’s a lot of unnecessary work for a chart that only changes monthly.
Here are two ways to protect a view from a cascading drop of the table it depends on. Both reside in a SQL script, monthly.sql, that only runs on the first of every month. First, annos_at_month_ends_for_one_year.
drop materialized view annos_at_month_ends_for_one_year;

create materialized view annos_at_month_ends_for_one_year as (
  with last_days as (
    select
      last_days_of_prior_months(
        date(last_month_date() - interval '6 year')
      ) as last_day
  ),
  monthly_counts as (
    select
      to_char(last_day, '_YYYY-MM') as end_of_month,
      anno_count_between(
        date(last_day - interval '1 month'), last_day
      ) as monthly_annos
    from last_days
  )
  select
    end_of_month,
    monthly_annos,
    sum(monthly_annos) over (
      order by end_of_month asc
      rows between unbounded preceding and current row
    ) as cumulative_annos
  from monthly_counts
) with data;
Because this view depends indirectly on the annotation table — by way of the function anno_count_between — Postgres doesn’t see the dependency. So the view isn’t affected by the cascading drop of the annotation table. It persists until, once a month, it gets dropped and recreated.
What if you want Postgres to know about such a dependency, so that the view will participate in a cascading drop? You can do this.
create materialized view annos_at_month_ends_for_one_year as (
  with depends as (
    select * from annotation limit 1
  ),
  last_days as (
    -- as above
  ),
  monthly_counts as (
    -- as above
  )
  select
    *
  from monthly_counts
);
The depends CTE doesn’t do anything relevant to the query; it just tells Postgres that this view depends on the annotation table.
Here’s another way to protect a view from a cascading drop. This expensive-to-build view depends directly on the annotation table but only needs to be updated monthly. So in this case, cumulative_annotations is a table derived from a temporary materialized view.
create materialized view _cumulative_annotations as (
  with data as (
    select
      to_char(created, 'YYYY-MM') as created,
      count(*) as count
    from annotation
    group by created
  )
  select
    data.created,
    sum(data.count) over (
      order by data.created asc
      rows between unbounded preceding and current row
    )
  from data
  order by data.created
);
drop table cumulative_annotations;
create table cumulative_annotations as (
select * from _cumulative_annotations
);
drop materialized view _cumulative_annotations;
The table cumulative_annotations is only rebuilt once a month. It depends indirectly on the annotation table but Postgres doesn’t see that, so doesn’t include it in the cascading drop.
Here’s the proof.
-- create a table
create table t1 (number int);
insert into t1 (number) values (1);
INSERT 0 1
select * from t1;
number
-------
1
-- derive a view from t1
create materialized view v1 as (select * from t1);
SELECT 1
select * from v1;
number
-------
1
-- try to drop t1
drop table t1;
ERROR: cannot drop table t1 because other objects depend on it
DETAIL: materialized view v1 depends on table t1
HINT: Use DROP ... CASCADE to drop the dependent objects too.
-- derive an independent table from t1 by way of a matview
drop materialized view v1;
create materialized view v1 as (select * from t1);
SELECT 1
create table t2 as (select * from v1);
SELECT 1
-- drop the matview
drop materialized view v1;
-- drop t1
drop table t1;
-- no complaint, and t2 still exists
select * from t2;
number
-------
1
These are two ways I’ve found to protect a long-lived result set from the cascading drop of a short-lived table on which it depends. You can hide the dependency behind a function, or you can derive an independent table by way of a transient materialized view. I use them interchangeably, and don’t have a strong preference one way or another. Both lighten the load on the analytics server. Materialized views (or tables) that only need to change weekly or monthly, but were being dropped nightly by cascade from core tables, are now recreated only on their appropriate weekly or monthly cycles.
In this series I’m exploring how to work in a code base that lives in two places: Metabase questions that encapsulate chunks of SQL, and Postgres functions/procedures/views that also encapsulate chunks of SQL. To be effective working with this hybrid collection of SQL chunks, I needed to reason about their interrelationships. One way to do that entailed the creation of a documentation generator that writes a concordance of callers and callees.
Here’s the entry for the function sparklines_for_guid(_guid).
The called by column says that this function is called from two different contexts. One is a Metabase question, All courses: Weekly activity. If you’re viewing that question in Metabase, you’ll find that its SQL text is simply this:
select * from sparklines_for_guid( {{guid}} )
The same function call appears in a procedure, cache warmer, that preemptively memoizes the function for a set of the most active schools in the system. In either case, you can look up the function in the concordance, view its definition, and review how it’s called.
In the definition of sparklines_for_guid, names of other functions (like guid_hashed_view_exists) appear and are linked to their definitions. Similarly, names of views appearing in SELECT or JOIN contexts are linked to their definitions.
Here’s the entry for the function guid_hashed_view_exists. It is called by sparklines_for_guid as well as by functions that drive panels on the school dashboard. It links to the functions it uses: hash_for_guid and exists_view.
Here’s the entry for the view lms_course_groups which appears as a JOIN target in sparklines_for_guid. This central view is invoked — in SELECT or JOIN context — from many functions, from dependent views, and from Metabase questions.
Metabase questions can also “call” other Metabase questions. In A virtuous cycle for analytics I noted: “Queries can emit URLs in order to compose themselves with other queries.” Here’s an example of that.
This Metabase question (985) calls various linked functions, and is called by two other Metabase questions. Here is one of those.
Because this Metabase question (600) emits a URL that refers to 985, it links to the definition of 985. It also links to the view, top_annotated_domains_last_week, from which it SELECTs.
It was straightforward to include Postgres functions, views, and procedures in the concordance since these live in files that reside in the filesystem under source control. Metabase questions, however, live in a database — in our case, a Postgres database that’s separate from our primary Postgres analytics db. In order to extract them into a file I use this SQL snippet.
select
r.id,
m.name as dbname,
r.name,
r.description,
r.dataset_query
from report_card r
join metabase_database m
on m.id = cast(r.dataset_query::jsonb->>'database' as int)
where not r.archived
order by r.id;
The doc generator downloads that Metabase data as a CSV file, queries.csv, and processes it along with the files that contain the definitions of functions, procedures, and views in the Postgres data warehouse. It also emits queries.txt which is a more convenient way to diff changes from one commit to the next.
This technique solved a couple of problems. First, when we were only using Metabase — unaugmented by anything in Postgres — it enabled us to put our Metabase SQL under source control and helped us visualize relationships among queries.
Later, as we augmented Metabase with Postgres functions, procedures, and views, it became even more powerful. Developing a new panel for a school or course dashboard means writing a new memoized function. That process begins as a Metabase question with SQL code that calls existing Postgres functions, and/or JOINs/SELECTs FROM existing Postgres views. Typically it then leads to the creation of new supporting Postgres functions and/or views. All this can be tested by internal users, or even invited external users, in Metabase, with the concordance available to help understand the relationships among the evolving set of functions and views.
When supporting functions and views are finalized, the SQL content of the Metabase question gets wrapped up in a memoized Postgres function that’s invoked from a panel of a dashboard app. At that point the concordance links the new wrapper function to the same set of supporting functions and views. I’ve found this to be an effective way to reason about a hybrid code base as features move from Metabase for prototyping to Postgres in production, while maintaining all the code under source control.
That foundation of source control is necessary, but maybe not sufficient, for a team to consider this whole approach viable. The use of two complementary languages for in-database programming will certainly raise eyebrows, and if it’s not your cup of tea I completely understand. If you do find it appealing, though, one thing you’ll wonder about next is tooling. I work in VSCode nowadays, and I’ve not yet found a useful extension for either pl/pgsql or pl/python. Metaprogramming makes life even harder for any aspiring pl/pgsql or pl/python extension. I can envision such extensions, but I’m not holding my breath awaiting them. Meanwhile, two factors enable VSCode to be helpful even without deep language-specific intelligence.
The first factor, and by far the dominant one, is outlining. In Products and capabilities I reflect on how I’ve never adopted an outlining product, but often rely on outlining capability infused into a product. In VSCode that’s “only” basic indentation-driven folding and unfolding. But I find it works remarkably well across SQL queries, views and functions that embed them, CTEs that comprise them, and pl/pgsql or pl/python functions called by them.
The second factor, nascent thus far, is GitHub Copilot. It’s a complementary kind of language intelligence that’s aware of, but not bounded by, what a file extension of .sql or .py implies. It can sometimes discern patterns that mix language syntaxes and offer helpful suggestions. That hasn’t happened often so far, but it’s striking when it does. I don’t yet know the extent to which it may be training me while I train it, or how those interactions might influence others. At this point I’m not a major Copilot booster, but I am very much an interested observer of and participant in the experiment.
All in all, I’m guardedly optimistic that existing or feasible tooling can enable individuals and teams to sanely navigate the hybrid corpus of source code discussed in this series. If you’re along for the ride, you’ll next wonder about debugging and monitoring a system built this way. That’s a topic for a future episode.
– Postgres is more valuable as a programming environment than you might think. (see R0ml Lefkowitz’s The Image of Postgres)
As the patron saint of trailing edge technology it is my duty to explore what’s possible at the intersection of these two premises. The topic for this episode is Postgres functional style. Clearly what I’ve been doing with the combo of pl/python and pl/pgsql is very far from pure functional programming. The self-memoization technique shown in episode 7 is all about mutating state (ed: this means writing stuff down somewhere). But it feels functional to me in the broader sense that I’m using functions to orchestrate behavior that’s expressed in terms of SQL queries.
To help explain what I mean, I’m going to unpack one of the Postgres functions in our library.
This is a function that accepts a school id (aka guid), a start date, and an end date. Its job is to:
– Find all the courses (groups) for that school (guid)
– Filter to those created between the start and end date
– Find all the users in the filtered set of courses
– Filter to just students (i.e. omit instructors)
– Remove duplicate students (i.e., who are in more than one course)
– Return the count of distinct students at the school who annotated in the date range
The production database doesn’t yet store things in ways friendly to this outcome, so doing all this requires some heavy lifting in the analytics data warehouse. Here’s the function that orchestrates that work.
create function count_of_distinct_lms_students_from_to(_guid text, _from date, _to date)
returns bigint as $$
declare count bigint;
begin
1 -- all groups active for the guid in the date range
2 with groups as (
3 select pubid from groups_for_guid(_guid)
4 where group_is_active_from_to(pubid, _from, _to)
5 ),
6 -- usernames in those groups
7 usernames_by_course as (
8 select
9 pubid,
10 (users_in_group(pubid)).username
11 from groups
12 ),
13 -- filtered to just students
14 students_by_course as (
15 select * from usernames_by_course
16 where not is_instructor(username, pubid)
17 )
18 select
19 count (distinct username)
20 from students_by_course
into count;
return count;
end;
$$ language plpgsql;
If you think pl/pgsql is old and clunky, then you are welcome to do this in pl/python instead. There’s negligible difference between how they’re written and how fast they run. It’s the same chunk of SQL either way, and it exemplifies the functional style I’m about to describe.
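For comparison, here’s a rough pl/python rendering of the same function. This is a sketch, not production code; the _py suffix just avoids a name clash, and the helper functions are the same ones listed below.
create function count_of_distinct_lms_students_from_to_py(_guid text, _from date, _to date)
returns bigint as $$
# compose the same SQL pipeline, then pull the single count out of the result
sql = f"""
    with groups as (
      select pubid from groups_for_guid('{_guid}')
      where group_is_active_from_to(pubid, '{_from}', '{_to}')
    ),
    usernames_by_course as (
      select
        pubid,
        (users_in_group(pubid)).username
      from groups
    ),
    students_by_course as (
      select * from usernames_by_course
      where not is_instructor(username, pubid)
    )
    select count(distinct username) as count
    from students_by_course
"""
return plpy.execute(sql)[0]['count']
$$ language plpython3u;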
Two kinds of chunking work together here: CTEs (aka common table expressions, aka WITH clauses) and functions. If you’ve not worked with SQL for a long time, as I hadn’t, then CTEs may be unfamiliar. I think of them as pipelines of table transformations in which each stage of the pipeline gets a name. In this example those names are groups (line 2), usernames_by_course (line 7), and students_by_course (line 14).
The pipeline phases aren’t functions that accept parameters, but I still think of them as being function-like in the sense that they encapsulate named chunks of behavior. The style I’ve settled into aims to make each phase of the pipeline responsible for a single idea (“groups active in the range”, “usernames in those groups”), and to express that idea in a short snippet of SQL.
As I’m developing one of these pipelines, I test each phase. To test the first phase, for example, I’d do this in psql or Metabase.
-- all groups active for the guid in the date range
with groups as (
select pubid from groups_for_guid('8anU0QwbgC2Cq:canvas-lms')
where group_is_active_from_to(pubid, '2021-01-01', '2021-05-01')
)
select * from groups;
And I’d spot-check to make sure the selected groups for that school really are in the date range. Then I’d check the next phase.
-- all groups active for the guid in the date range
with groups as (
),
-- usernames in those groups
usernames_by_course as (
select
pubid,
(users_in_group(pubid)).username
from groups
)
select * from usernames_by_course;
After another sanity check against these results, I’d continue to the next phase, and eventually arrive at the final result. It’s the same approach I take with regular expressions. I am unable to visualize everything that’s happening in a complex regex. But I can reason effectively about a pipeline of matches that occur in easier-to-understand named steps.
Ideally each phase in one of these pipelines requires just a handful of lines of code: few enough to fit within the 7 ± 2 limit of working memory. Postgres functions make that possible. Here are the functions used in this 20-line chunk of SQL.
– groups_for_guid(guid): Returns a table of course ids for a school.
– group_is_active_from_to(pubid, _from, _to): Returns true if the group was created in the range.
– users_in_group(pubid): Returns a table of user info for a course.
– is_instructor(username, pubid): Returns true if that user is an instructor.
Two of these, groups_for_guid and users_in_group, are set-returning functions. As noted in Working with Postgres types, they have the option of returning an explicit Postgres type defined elsewhere, or an implicit Postgres type defined inline. As it happens, both do the latter.
create or replace function groups_for_guid(_guid text)
returns table(
pubid text
) as $$
create or replace function users_in_group (_pubid text)
returns table (
groupid text,
username text,
display_name text
) as $$
The other two, group_is_active_from_to and is_instructor, return boolean values.
All this feels highly readable to me now, but the syntax of line 10 took quite a while to sink in. It helps me to look at what users_in_group(pubid) does in a SELECT context.
Here is an alternate way to write the usernames_by_course CTE at line 7.
-- usernames in those groups
usernames_by_course as (
select
g.pubid,
u.username
from groups g
join users_in_group(g.pubid) u on g.pubid = u.groupid
)
select * from usernames_by_course;
Both do exactly the same thing in very close to the same amount of time. Having mixed the two styles I’m leaning toward the first, but you could go either way or both. What matters more is the mental leverage you wield when writing CTEs and functions together to compose pipelines of transformations, and that others wield when reading and debugging.
I hope I’ve made the case for writing and reading. There’s a case to be made for debugging too, but that’s another episode.
In episode 2 I mentioned three aspects of pl/python that are reasons to use it instead of pl/pgsql: access to Python modules, metaprogramming, and introspection. In episode 5 I discussed metaprogramming, by which I mean using pl/python to compose and run SQL code. This episode features introspection, by which I mean taking advantage of Python’s inspect module to enable a pl/python function to discover its own name.
Why do that? In this context, so that the function can create a materialized view by joining its own name with the value of its first parameter. Here’s the example from episode 5.
create function questions_and_answers_for_group(_group_id text)
returns setof question_and_answer_for_group as $$
from plpython_helpers import (
    exists_group_view,
    get_caller_name,
    memoize_view_name
)
base_view_name = get_caller_name()
view_name = f'{base_view_name}_{_group_id}'
if exists_group_view(plpy, view_name):
    sql = f""" select * from {view_name} """
else:
    sql = f"""
    -- SQL THAT RETURNS A SETOF QUESTION_AND_ANSWER_FOR_GROUP
    """
    memoize_view_name(sql, view_name)
    sql = f""" select * from {view_name} """
return plpy.execute(sql)
$$ language plpython3u;
The function drives a panel on the course dashboard. An initial call to, say, questions_and_answers_for_group('P1mQaEEp'), creates the materialized view questions_and_answers_for_group_p1mqaeep and returns SELECT * from the view. Subsequent calls skip creating the view and just return SELECT * from it.
Note that even though the group name is mixed case, the view name created by Postgres is all lowercase. For example:
create materialized view test_AbC as (select 'ok') with data;
SELECT 1
\d test_AbC
Materialized view "public.test_abc"
I want to think of this as a form of capability injection, but it’s really more like a capability wrapper. The capability is memoization. A function endowed with it can run a SQL query and cache the resulting rows in a materialized view before returning them to a SQL SELECT context. The wrapper is boilerplate code that discovers the function’s name, checks for the existence of a corresponding view, and if it isn’t found, calls memoize_view_name(sql, view_name) to run an arbitrary chunk of SQL code whose result set matches the function’s type. So in short: this pattern wraps memoization around a set-returning pl/python function.
As noted in episode 5, although memoize_view_name is called from pl/python, it is not itself a pl/python function. It’s a normal Python function in a module that’s accessible to the instance of Python that the Postgres pl/python extension is using. In my case that module is just a few small functions in a file called plpython_helpers.py, installed (cough, copied) to user postgres’s ~/.local/lib/python3.8/site-packages/plpython_helpers.py.
So far, there are only two critical functions in that module: get_caller_name() and memoize_view_name.
Here is get_caller_name().
import inspect
import re

def get_caller_name():
    # the caller is the pl/python function one frame up the stack
    base_view_name = inspect.stack()[1][3].replace('__plpython_procedure_', '')
    # strip the trailing _<oid> suffix
    return re.sub(r'_\d+$', '', base_view_name)
The internal name for a pl/python function created by CREATE FUNCTION foo() looks like __plpython_procedure_foo_981048462. What get_caller_name() returns is just foo.
The other, memoize_view_name, takes a chunk of SQL and the name of a view. It converts newlines to spaces, base64-encodes the query text, and invokes psql to call a procedure, memoizer, that does the work of running the SQL query and creating the materialized view from those results. So for example the function that yields sparkline data for a school might look like sparkline_data_for_school('af513ee'), and produce the view sparkline_data_for_school_af513ee.
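Here’s roughly how memoize_view_name does that. Treat this as a sketch: the database name and the exact psql invocation are placeholders, not the production code.
import base64
import os

def memoize_view_name(sql, view_name):
    # newlines would complicate quoting on the psql command line
    sql = sql.replace('\n', ' ')
    # base64-encode the query text; the memoizer procedure decodes it
    encoded = base64.b64encode(sql.encode('utf-8')).decode('utf-8')
    # shell out to psql to CALL the memoizer procedure in its own transaction
    os.system(f"""psql -d analytics -c "call memoizer('{encoded}', '{view_name}')" """)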
Why shell out to psql here? It may not be necessary, there may be a way to manage the transaction directly within the function, but if so I haven’t found it. I’m very far from being an expert on transaction semantics and will appreciate guidance here if anyone cares to offer it. Meanwhile, this technique seems to work well. memoizer is a Postgres procedure, not a function. Although “stored procedures” is the term that I’ve always associated with in-database programming, I went pretty far down this path using only CREATE FUNCTION, never CREATE PROCEDURE. When I eventually went there I found the distinction between functions and procedures to be a bit slippery. This StackOverflow answer matches what I’ve observed.
PostgreSQL 11 added stored procedures as a new schema object. You can create a new procedure by using the CREATE PROCEDURE statement.
Stored procedures differ from functions in the following ways:
Stored procedures do not have to return anything, and only return a single row when using INOUT parameters.
You can commit and rollback transactions inside stored procedures, but not in functions.
You execute a stored procedure using the CALL statement rather than a SELECT statement.
Unlike functions, procedures cannot be nested in other DML commands (SELECT, INSERT, UPDATE, DELETE).
Here is the memoizer procedure. It happens to be written in pl/python but could as easily have been written in pl/pgsql using the built-in Postgres decode function. Procedures, like functions, can be written in either language (or others) and share the common Postgres type system.
create procedure memoizer(_sql text, _view_name text) as $$
import base64
decoded_bytes = base64.b64decode(_sql)
decoded_str = str(decoded_bytes, 'utf-8')
create = f"""
create materialized view if not exists {_view_name} as (
{decoded_str}
) with data;
"""
plpy.execute(create)
permit = f"""
grant select on {_view_name} to analytics;
"""
plpy.execute(permit)
$$ language plpython3u;
There’s no plpy.commit() here because psql takes care of that automatically. Eventually I wrote other procedures, some of which do their own committing, but that isn’t needed here.
Of course it’s only possible to shell out to psql from a function because pl/python is an “untrusted” language extension. Recall from episode 1:
The ability to wield any of Python’s built-in or loadable modules inside Postgres brings great power. That entails great responsibility, as the Python extension is “untrusted” (that’s the ‘u’ in ‘plpython3u’) and can do anything Python can do on the host system: read and write files, make network requests.
Using Python’s os.system() to invoke psql is another of those superpowers. It’s not something I do lightly, and if there’s a better/safer way I’m all ears.
Meanwhile, this approach is delivering much value. We have two main dashboards, each of which displays a dozen or so panels. The school dashboard reports on annotation activity across all courses at a school. The course dashboard reports on the documents, and selections within documents, that instructors and students are discussing in the course’s annotation layer. Each panel that appears on the school or course dashboard is the output of a memoized function that is parameterized by a school or course id.
The data warehouse runs on a 24-hour cycle. Within that cycle, the first call to a memoized function takes just as long as it takes to run the SQL wrapped by the function. The cached view only comes into play when the function is called again during the same cycle. That can happen in a few different ways.
– A user reloads a dashboard, or a second user loads it.
– A panel expands or refines the results of another panel. For example, questions_and_answers_for_group() provides a foundation for a family of related functions.
– A scheduled job invokes a function in order to cache its results before any user asks for them. For example, the time required to cache panels for school dashboards varies a lot. For schools with many active courses it can take minutes to run those queries, so preemptive memoization matters a lot. For schools with fewer active courses it’s OK to memoize on the fly. This method enables flexible cache policy. Across schools we can decide how many of the most-active ones to cache. Within a school, we can decide which courses to cache, e.g. most recent, or most active. The mechanism to display a dashboard panel is always the same function call. The caching done in support of that function is highly configurable.
Caches, of course, must be purged. Since these materialized views depend on core tables it was enough, early on, to do this for views depending on the annotation table.
drop table annotation cascade;
At a certain point, with a growing number of views built during each cycle, the cascade failed.
ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
That wasn’t the answer. Instead we switched to enumerating views and dropping them individually. Again that afforded great flexibility. We can scan the names in the pg_matviews system table and match all the memoized views, or just those for a subset of schools, or just particular panels on school or course dashboards. Policies that govern the purging of cached views can be as flexible as those that govern their creation.
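A minimal sketch of that enumeration, using a single name pattern (the real script parameterizes the pattern by school, course, or panel):
do $$
declare
  _name text;
begin
  for _name in
    select matviewname
    from pg_matviews
    where matviewname like 'questions_and_answers_for_group_%'
  loop
    execute format('drop materialized view if exists %I', _name);
  end loop;
end;
$$;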
One of the compelling aspects of modern SQL is the JSON support built into modern engines, including Postgres. The documentation is well done, but I need examples to motivate my understanding of where and how and why to use such a capability. The one I’ll use in this episode is something I call document hotspots.
Suppose a teacher has asked her students to annotate Arthur Miller’s The Crucible. How can she find the most heavily-annotated passages? They’re visible in the Hypothesis client, of course, but may be sparsely distributed. She can scroll through the 154-page PDF document to find the hotspots, but it will be helpful to see a report that brings them together. Let’s do that.
The Hypothesis system stores annotations using a blend of SQL and JSON datatypes. Consider this sample annotation:
When the Hypothesis client creates that annotation it sends a JSON payload to the server. Likewise, when the client subsequently requests the annotation in order to anchor it to the document, it receives a similar JSON payload.
{
"id": "VLUhcP1-EeuHn5MbnGgJ0w",
"created": "2021-08-15T04:07:39.343275+00:00",
"updated": "2021-08-15T04:07:39.343275+00:00",
"user": "acct:judell@hypothes.is",
"uri": "https://ia800209.us.archive.org/17/items/TheCrucibleFullText/The%20Crucible%20full%20text.pdf",
"text": "\"He is no one's favorite clergyman.\" :-)\n\nhttps://www.thoughtco.com/crucible-character-study-reverend-parris-2713521",
"tags": [],
"group": "__world__",
"permissions": {
"read": [
"group:__world__"
],
"admin": [
"acct:judell@hypothes.is"
],
"update": [
"acct:judell@hypothes.is"
],
"delete": [
"acct:judell@hypothes.is"
]
},
"target": [
{
"source": "https://ia800209.us.archive.org/17/items/TheCrucibleFullText/The%20Crucible%20full%20text.pdf",
"selector": [
{
"end": 44483,
"type": "TextPositionSelector",
"start": 44392
},
{
"type": "TextQuoteSelector",
"exact": " am not some preaching farmer with a book under my arm; I am a graduate of Harvard College.",
"prefix": " sixty-six pound, Mr. Proctor! I",
"suffix": " Giles: Aye, and well instructed"
}
]
}
],
"document": {
"title": [
"The%20Crucible%20full%20text.pdf"
]
},
"links": {
"html": "https://hypothes.is/a/VLUhcP1-EeuHn5MbnGgJ0w",
"incontext": "https://hyp.is/VLUhcP1-EeuHn5MbnGgJ0w/ia800209.us.archive.org/17/items/TheCrucibleFullText/The%20Crucible%20full%20text.pdf",
"json": "https://hypothes.is/api/annotations/VLUhcP1-EeuHn5MbnGgJ0w"
},
"user_info": {
"display_name": "Jon Udell"
},
"flagged": false,
"hidden": false
}
The server mostly shreds this JSON into conventional SQL types. The tags array, for example, is hoisted out of the JSON into a SQL array-of-text. The expression to find its length is a conventional Postgres idiom: array_length(tags,1). Note the second parameter; array_length(tags) is an error, because Postgres arrays can be multidimensional. In this case there’s only one dimension but it’s still necessary to specify that.
A target_selectors column, though, is retained as JSON. These selectors define how an annotation anchors to a target selection in a document. Because selectors are used only by the Hypothesis client, which creates and consumes them in JSON format, there’s no reason to shred them into separate columns. In normal operation, selectors don’t need to be related to core tables. They can live in the database as opaque blobs of JSON.
For some analytic queries, though, it is necessary to peer into those blobs and relate their contents to core tables. There’s a parallel set of functions for working with JSON. For example, the target_selectors column corresponds to the target[0]['selector'] array in the JSON representation. The expression to find the length of that array is jsonb_array_length(target_selectors).
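For example, a throwaway query that shows the two length idioms side by side:
select
  id,
  array_length(tags, 1) as tag_count,
  jsonb_array_length(target_selectors) as selector_count
from annotation
limit 3;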
Here’s a similar expression that won’t work: json_array_length(target_selectors). Postgres complains that the function doesn’t exist.
ERROR: function json_array_length(jsonb) does not exist
Hint: No function matches the given name and argument types.
In fact both functions, json_array_length and jsonb_array_length, exist. But Postgres knows the target_selectors column is of type jsonb, not json which is the correct type for the json_array_length function. What’s the difference between json and jsonb?
The json and jsonb data types accept almost identical sets of values as input. The major practical difference is one of efficiency. The json data type stores an exact copy of the input text, which processing functions must reparse on each execution; while jsonb data is stored in a decomposed binary format that makes it slightly slower to input due to added conversion overhead, but significantly faster to process, since no reparsing is needed. jsonb also supports indexing, which can be a significant advantage.
Although I tend to use JSON to refer to data in a variety of contexts, the flavor of JSON in the Postgres queries, views, and functions I’ll discuss will always be jsonb. The input conversion overhead isn’t a problem for analytics work that happens in a data warehouse, and the indexing support is a tremendous enabler.
To illustrate some of the operators common to json and jsonb, here is a query that captures the target_selectors column from the sample annotation.
with example as (
select
id,
target_selectors as selectors
from annotation
where id = '54b52170-fd7e-11eb-879f-931b9c6809d3'
)
select * from example;
The result is a human-readable representation, but the type of selectors is jsonb.
select pg_typeof(selectors) from example;
jsonb
The array-indexing operator, ->, can yield the zeroth element of the array.
select selectors->0 from example;
{"end": 44483, "type": "TextPositionSelector", "start": 44392}
The result is again a human-readable representation of a jsonb type.
select pg_typeof(selectors->0) from example;
jsonb
Another array-indexing operator, ->>, can also yield the zeroth element of the array, but now as type text.
select selectors->>0 from example;
{"end": 44483, "type": "TextPositionSelector", "start": 44392}
The result looks the same, but the type is different.
select pg_typeof(selectors->>0) from example;
text
The -> and ->> operators can also index objects by their keys. These examples work with the object that is the zeroth element of the array.
select selectors->0->'type' from example;
"TextPositionSelector"
select pg_typeof(selectors->0->'type') from example;
jsonb
select selectors->0->>'type' from example;
TextPositionSelector
select pg_typeof(selectors->0->>'type') from example;
text
The Hypothesis system stores the location of a target (i.e., the selection in a document to which an annotation refers) in the target_selectors column we’ve been exploring. It records selectors, two of which matter here. A TextQuoteSelector represents the selection as the exact highlighted text bracketed by snippets of context. A TextPositionSelector represents it as a pair of numbers that mark the beginning and end of the selection. When one range formed by that numeric pair is equal to another, it means two students have annotated the same selection. When a range contains another range, it means one student annotated the containing range, and another student made an overlapping annotation on the contained range. We can use these facts to surface hotspots where annotations overlap exactly or in nested fashion.
To start, let’s have a function to extract the start/end range from an annotation. In a conventional programming language you might iterate through the selectors in the target_selectors array looking for the one with the type TextPositionSelector. That’s possible in pl/pgsql and pl/python, but Postgres affords a more SQL-oriented approach. Given a JSON array, the function jsonb_array_elements returns a table-like object with rows corresponding to array elements.
select jsonb_array_elements(selectors) from example;
{"end": 44483, "type": "TextPositionSelector", "start": 44392}
{"type": "TextQuoteSelector", "exact": " am not some preaching farmer with a book under my arm; I am a graduate of Harvard College.", "prefix": " sixty-six pound, Mr. Proctor! I", "suffix": " Giles: Aye, and well instructed"}
A function can convert the array to rows, select the row of interest, select the start and end values from the row, package the pair of numbers as an array, and return the array.
create function position_from_anno(_id uuid) returns numeric[] as $$
declare range numeric[];
begin
with selectors as (
select jsonb_array_elements(target_selectors) as selector
from annotation
where id = _id
),
position as (
select
(selector->>'start')::numeric as startpos,
(selector->>'end')::numeric as endpos
from selectors
where selector->>'type' = 'TextPositionSelector'
)
select array[p.startpos, p.endpos]
from position p
into range;
return range;
end;
$$ language plpgsql;
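Applied to the sample annotation shown earlier, the call looks like this, and the start/end pair comes from its TextPositionSelector:
select position_from_anno('54b52170-fd7e-11eb-879f-931b9c6809d3');
-- {44392,44483}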
I’ll show how to use position_from_anno to find document hotspots in a later episode. The goal here is just to introduce an example, and to illustrate a few of the JSON functions and operators.
What’s most interesting, I think, is this part.
where selector->>'type' = 'TextPositionSelector'
Although the TextPositionSelector appears as the first element of the selectors array, that isn’t guaranteed. In a conventional language you’d have to walk through the array looking for it. SQL affords a declarative way to find an element in a JSON array.
In episode 2 I mentioned three aspects of pl/python that are reasons to use it instead of pl/pgsql: access to Python modules, metaprogramming, and introspection.
Although this episode focuses on metaprogramming — by which I mean using Python to dynamically compose and run SQL queries — my favorite example combines all three aspects.
The context for the example is an analytics dashboard with a dozen panels, each driven by a pl/python function that’s parameterized by the id of a school or a course. So, for example, the Questions and Answers panel on the course dashboard is driven by a function, questions_and_answers_for_group(group_id), which wraps a SQL query that:
– calls another pl/python function, questions_for_group(group_id), to find notes in the group that contain question marks
– finds the replies to those notes
– builds a table that summarizes the question/answer pairs
Here’s the SQL wrapped by the questions_and_answers_for_group(group_id) function.
sql = f"""
with questions as (
select *
from questions_for_group('{_group_id}')
),
ids_and_refs as (
select
id,
unnest ("references") as ref
from annotation
where groupid = '{_group_id}'
),
combined as (
select
q.*,
array_agg(ir.id) as reply_ids
from ids_and_refs ir
inner join questions q on q.id = ir.ref
group by q.id, q.url, q.title, q.questioner, q.question, q.quote
),
unnested as (
select
c.url,
c.title,
c.quote,
c.questioner,
c.question,
unnest(reply_ids) as reply_id
from combined c
)
select distinct
course_for_group('{_group_id}') as course,
teacher_for_group('{_group_id}') as teacher,
clean_url(u.url) as url,
u.title,
u.quote,
u.questioner,
(regexp_matches(u.question, '.+\?'))[1] as question,
display_name_from_anno(u.reply_id) as answerer,
text_from_anno(u.reply_id) as answer,
app_host() || '/course/render_questions_and_answers/{_group_id}' as viewer
from unnested u
order by course, teacher, url, title, questioner, question
"""
This isn’t yet what I mean by pl/python metaprogramming. You could as easily wrap this SQL code in a pl/pgsql function. More easily, in fact, because in pl/pgsql you could just write _group_id instead of '{_group_id}'.
To get where we’re going, let’s zoom out and look at the whole questions_and_answers_for_group(group_id) function.
create function questions_and_answers_for_group(_group_id text)
returns setof question_and_answer_for_group as $$
from plpython_helpers import (
    exists_group_view,
    get_caller_name,
    memoize_view_name
)
base_view_name = get_caller_name()
view_name = f'{base_view_name}_{_group_id}'
if exists_group_view(plpy, view_name):
    sql = f""" select * from {view_name} """
else:
    sql = f"""
    <SEE ABOVE>
    """
    memoize_view_name(sql, view_name)
    sql = f""" select * from {view_name} """
return plpy.execute(sql)
$$ language plpython3u;
This still isn’t what I mean by metaprogramming. It introduces introspection — this is a pl/python function that discovers its own name and works with an eponymous materialized view — but that’s for a later episode.
It also introduces the use of Python modules by pl/python functions. A key thing to note here is that this is an example of what I call a memoizing function. When called it looks for a materialized view that captures the results of the SQL query shown above. If yes, it only needs to use a simple SELECT to return the cached result. If no, it calls memoize_view_name to run the underlying query and cache it in a materialized view that the next call to questions_and_answers_for_group(group_id) will use in a simple SELECT. Note that memoize_view_name is a special function that isn’t defined in Postgres using CREATE FUNCTION foo() like a normal pl/python function. Instead it’s defined using def foo() in a Python module called plpython_helpers. The functions there can do things — like create materialized views — that pl/python functions can’t. More about that in another episode.
The focus in this episode is metaprogramming, which is used in this example to roll up the results of multiple calls to questions_and_answers_for_group(group_id). That happens when the group_id refers to a course that has sections. If you’re teaching the course and you’ve got students in a dozen sections, you don’t want to look at a dozen dashboards; you’d much rather see everything on the primary course dashboard.
Here’s the function that does that consolidation.
create function consolidated_questions_and_answers_for_group(_group_id text)
returns setof question_and_answer_for_group as $$
from plpython_helpers import (
get_caller_name,
sql_for_consolidated_and_memoized_function_for_group
)
base_view_name = get_caller_name()
sql = sql_for_consolidated_and_memoized_function_for_group(
plpy, base_view_name, 'questions_and_answers_for_group', _group_id)
sql += ' order by course, url, title, questioner, answerer'
return plpy.execute(sql)
$$ language plpython3u;
This pl/python function not only memoizes its results as above, it also consolidates results for all sections of a course. The memoization happens just as before, by way of get_caller_name and the memoized-view machinery.
The consolidation happens in a helper function, and that is finally what I think of as classical metaprogramming: using Python to compose SQL.
def consolidator_for_group_as_sql(plpy, _group_id, _function):
    sql = f"select type_for_group('{_group_id}') as type"
    type = row_zero_value_for_colname(plpy, sql, 'type')
    if type == 'section_group' or type == 'none':
        sql = f"select * from {_function}('{_group_id}')"
    if type == 'course_group' or type == 'course':
        sql = f"select has_sections('{_group_id}')"
        has_sections = row_zero_value_for_colname(plpy, sql, 'has_sections')
        if has_sections:
            sql = f"""
                select array_agg(group_id) as group_ids
                from sections_for_course('{_group_id}')
                """
            group_ids = row_zero_value_for_colname(plpy, sql, 'group_ids')
            selects = [f"select * from {_function}('{_group_id}') "]
            for group_id in group_ids:
                selects.append(f"select * from {_function}('{group_id}')")
            sql = ' union '.join(selects)
        else:
            sql = f"select * from {_function}('{_group_id}')"
    return sql
If the inbound _group_id is p1mqaeep, the inbound _function is questions_and_answers_for_group, and the group has no sections, the SQL will just be select * from questions_and_answers_for_group('p1mqaeep').
If the group does have sections, then the SQL will instead look like this:
select * from questions_and_answers_for_group('p1mqaeep')
union
select * from questions_and_answers_for_group('x7fe93ba')
union
select * from questions_and_answers_for_group('qz9a4b3d')
This is a very long-winded way of saying that pl/python is an effective way to compose and run arbitrarily complex SQL code. In theory you could do the same thing using pl/pgsql; in practice it would be insane to try. I’ve entangled the example with other aspects — modules, introspection — because that’s the real situation. pl/python’s maximal power emerges from the interplay of all three aspects. That said, it’s a fantastic way to extend Postgres with user-defined functions that compose and run SQL code.
I’ve long been enamored of the sparkline, a graphical device which its inventor Edward Tufte defines thusly:
A sparkline is a small intense, simple, word-sized graphic with typographic resolution. Sparklines mean that graphics are no longer cartoonish special occasions with captions and boxes, but rather sparkline graphics can be everywhere a word or number can be: embedded in a sentence, table, headline, map, spreadsheet, graphic.
Nowadays you can create sparklines in many tools including Excel and Google Sheets, both of which can use the technique to pack a summary of a series of numbers into a single cell. By stacking such cells vertically you can create views that compress vast amounts of information.
In A virtuous cycle for analytics I noted that we often use Metabase to display tables and charts based on extracts from our Postgres warehouse. I really wanted to use sparklines to summarize views of activity over time, but that isn’t yet an option in Metabase.
When Metabase is connected to Postgres, though, you can write Metabase questions that can not only call built-in Postgres functions but can also call user-defined functions. Can such a function accept an array of numbers and return a sparkline for display in the Metabase table viewer? Yes, if you use Unicode characters to represent the variable-height bars of a sparkline.
There’s a page at rosettacode.org devoted to Unicode sparklines based on this sequence of eight characters:
U+2581  ▁  LOWER ONE EIGHTH BLOCK
U+2582  ▂  LOWER ONE QUARTER BLOCK
U+2583  ▃  LOWER THREE EIGHTHS BLOCK
U+2584  ▄  LOWER HALF BLOCK
U+2585  ▅  LOWER FIVE EIGHTHS BLOCK
U+2586  ▆  LOWER THREE QUARTERS BLOCK
U+2587  ▇  LOWER SEVEN EIGHTHS BLOCK
U+2588  █  FULL BLOCK
Notice that 2581, 2582, and 2588 are narrower than the rest. I’ll come back to that at the end.
If you combine them into a string of eight characters you get this result:
▁▂▃▄▅▆▇█
Notice that the fourth and eighth characters in the sequence drop below the baseline. I’ll come back to that at the end too.
These characters can be used to define eight buckets into which numbers in a series can be quantized. Here are some examples from the rosettacode.org page:
To write a Postgres function that would do this, I started with the Python example from rosettacode.org:
bar = '▁▂▃▄▅▆▇█'
barcount = len(bar)

def sparkline(numbers):
    mn, mx = min(numbers), max(numbers)
    extent = mx - mn
    sparkline = ''.join(bar[min([barcount - 1,
                                 int((n - mn) / extent * barcount)])]
                        for n in numbers)
    return mn, mx, sparkline
While testing it I happened to try an unchanging sequence, [3, 3, 3, 3], which fails with a divide-by-zero error. In order to address that, and to unpack the algorithm a bit for readability, I arrived at this Postgres function:
create function sparkline(numbers bigint[]) returns text as $$
def bar_index(num, _min, barcount, extent):
    index = min([barcount - 1, int((num - _min) / extent * barcount)])
    return index

bars = '\u2581\u2582\u2583\u2584\u2585\u2586\u2587\u2588'
_min, _max = min(numbers), max(numbers)
extent = _max - _min
if extent == 0:  # avoid divide by zero if all numbers are equal
    extent = 1
bar_count = len(bars)
sparkline = ''
for num in numbers:
    index = bar_index(num, _min, bar_count, extent)
    sparkline = sparkline + bars[index]
return sparkline
$$ language plpython3u;
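A quick way to exercise the function from psql. The second call shows the effect of the divide-by-zero guard: a constant series renders as a row of minimum-height bars.
select sparkline(array[2, 4, 8, 3, 9, 1]::bigint[]);
-- ▂▄█▃█▁
select sparkline(array[3, 3, 3, 3]::bigint[]);
-- ▁▁▁▁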
In the dashboard views built this way, each row represents a university course in which students and teachers are annotating the course readings, and each bar represents a week’s worth of activity. The heights are not comparable from row to row; some courses do a lot of annotating and some not so much; each sparkline reports relative variation from week to week; the sum and weekly max columns report absolute numbers.
This visualization makes it easy to see that annotation was occasional in some courses and continuous in others. And when you scroll, the temporal axis comes alive; try scrolling this view to see what I mean.
We use the same mechanism at three different scales. One set of sparklines reports daily activity for students in courses; another rolls those up to weekly activity for courses at a school; still another rolls all those up to weekly activity for each school in the system.
At the level of individual courses, the per-student sparkline views can show patterns of interaction. In the left example here, vertical bands of activity reflect students annotating for particular assignments. In the right example there may be a trace of such temporal alignment but activity is less synchronized and more continuous.
When we’re working in Metabase we can use its handy mini bar charts to contextualize the row-wise sums.
The sparkline-like mini bar chart shows a row’s sum relative to the max for the column. Here we can see that a course with 3,758 notes has about 1/4 the number of notes as the most note-heavy course at the school.
Because these Unicode sparklines are just strings of text in columns of SQL or HTML tables, they can participate in sort operations. In our case we can sort on all columns including ones not shown here: instructor name, course start date, number of students. But the default is to sort by the sparkline column which, because it encodes time, orders courses by the start of annotation activity.
The visual effect is admittedly crude, but it’s a good way to show certain kinds of variation. And it’s nicely portable. A Unicode sparkline looks the same in a psql console, an HTML table, or a tweet. The function will work in any database that can run it, using Python or another of the languages demoed at rosettacode.org. For example, I revisited the Workbench workflow described in A beautiful power tool to scrape, clean, and combine data and added a tab for Lake levels.
When I did that, though, the effect was even cruder than what I’ve been seeing in my own work.
In our scenarios, with longer strings of characters, the differences average out and things align pretty well; the below-the-baseline effect has been annoying but not a deal breaker. But the width variation in this example does feel like a deal breaker.
What if we omit the problematic characters U+2581 (too narrow) and U+2584/U+2588 (below baseline and too narrow)?
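In the function above that’s a one-line change; only the bars string shrinks, and bar_count adapts automatically:
bars = '\u2582\u2583\u2585\u2586\u2587'  # omit U+2581, U+2584, U+2588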
There are only 5 buckets into which to quantize numbers, and their heights aren’t evenly distributed. But for the intended purpose — to show patterns of variation — I think it’s sufficient in this case. I tried swapping the 5-bucket method into the function that creates sparklines for our dashboards but I don’t think I’ll switch. The loss of vertical resolution makes our longer sparklines less useful, and the width variation is barely noticeable.
Unicode evolves, of course, so maybe there will someday be a sequence of characters that’s friendlier to sparklines. Maybe there already is? If so please let me know, I’d love to use it.
Labels like “data scientist” and “data journalist” connote an elite corps of professionals who can analyze data and use it to reason about the world. There are elite practitioners, of course, but since the advent of online data a quarter century ago I’ve hoped that every thinking citizen of the world (and of the web) could engage in similar analysis and reasoning.
That’s long been possible for those of us with the ability to wrangle APIs and transform data using SQL, Python, or another programming language. But even for us it hasn’t been easy. When I read news stories that relate to the generation of electric power in California, for example, questions occur to me that I know I could illuminate by finding, transforming, and charting sources of web data:
– How can I visualize the impact of shutting down California’s last nuclear plant?
– What’s the relationship between drought and hydro power?
All the ingredients are lying around in plain sight, but the effort required to combine them winds up being more trouble than it’s worth. And that’s for me, a skilled longtime scraper and transformer of web data. For you — even if you’re a scientist or a journalist! — that may not even be an option.
Enter Workbench, a web app with the tagline: “Scrape, clean, combine and analyze data without code.” I’ve worked with tools in the past that pointed the way toward that vision. DabbleDB in 2005 (now gone) and Freebase Gridworks in 2010 (still alive as Open Refine) were effective ways to cut through data friction. Workbench carries those ideas forward delightfully. It enables me to fly through the boring and difficult stuff — the scraping, cleaning, and combining — in order to focus on what matters: the analysis.
Here’s the report that I made to address the questions I posed above. It’s based on a workflow that you can visit and explore as I describe it here. (If you create your own account you can clone and modify.)
The workflow contains a set of tabs; each tab contains a sequence of steps; each step transforms a data set and displays output as a table or chart. When you load the page the first tab runs, and the result of its last step is displayed. In this case that’s the first chart shown in the report:
As in a Jupyter notebook you can run each step individually. Try clicking step 1. You’ll see a table of data from energy.ca.gov. Notice that step 1 is labeled Concatenate tabs. If you unfurl it you’ll see that it uses another tab, 2001-2020 scraped, which in turn concatenates two other tabs, 2001-2010 scraped and 2011-2020 scraped. Note that I’ve helpfully explained that in the optional comment field above step 1.
Each of the two source tabs scrapes a table from the page at energy.ca.gov. As I note in the report, it wasn’t necessary to scrape those tables since the data are available as an Excel file that can be downloaded, then uploaded to Workbench (as I’ve done in the tab named energy.ca.gov xslx). I scraped them anyway because that web page presents a common challenge: the data appear in two separate HTML tables. That’s helpful to the reader but frustrating to an analyst who wants to use the data. Rapid and fluid combination of scraped tables is grease for cutting through data friction; Workbench supplies that grease.
Now click step 2 in the first tab. It’s the last step, so you’re back to the opening display of the chart. Unfurl it and you’ll see the subset of columns included in the chart. I’ve removed some minor sources, like oil and waste heat, in order to focus on major ones. Several details are notable here. First: colors. The system provides a default palette but you can adjust it. Black wasn’t on the default palette but I chose that for coal.
Second, grand total. The data set doesn’t include that column, and it’s not something I needed here. But in some situations I’d want it, so the system offers it as a choice. That’s an example of the attention to detail that pervades every aspect of Workbench.
Third, Vega. See the triple-dot button above the legend in the chart? Click it, then select Open in Vega Editor, and when you get there, click Run. Today I learned that Vega is:
a declarative format for creating, saving, and sharing visualization designs. With Vega, visualizations are described in JSON, and generate interactive views using either HTML5 Canvas or SVG.
Sweet! I think I’ll use it in my own work to simplify what I’ve recently (and painfully) learned how to do with D3.js. It’s also a nice example of how Workbench prioritizes openness, reusability, and reproducibility in every imaginable way.
I use the chart as the intro to my report, which is made with an elegant block editor in which you can combine tables and charts from any of your tabs with snippets of text written in markdown. There I begin to ask myself questions, adding tabs to marshal supporting evidence and sourcing evidence from tabs into the report.
My first question is about the contribution that the Diablo Canyon nuclear plant has been making to the overall mix. In the 2020 percentages all major sources tab I start in step 1 by reusing the tab 2001-2020 scraped. Step 2 filters the columns to just the same set of major sources shown in the chart. I could instead apply that step in 2001-2020 scraped and avoid the need to select columns for the chart. Since I’m not sure how that decision might affect downstream analysis I keep all the columns. If I change my mind it’s easy to push the column selection upstream.
Workbench not only makes it possible to refactor a workflow, it practically begs you to do that. When things go awry, as they inevitably will, it’s no problem. You can undo and redo the steps in each tab! You won’t see that in the read-only view but if you create your own account, and duplicate my workflow in it, give it a try. With stepwise undo/redo, exploratory analysis becomes a safe and stress-free activity.
At step 2 of 2020 percentages all major sources we have rows for all the years. In thinking about Diablo Canyon’s contribution I want to focus on a single reference year so in step 3 I apply a filter that selects just the 2020 row. Here’s the UX for that.
In situations like this, where you need to select one or more items from a list, Workbench does all the right things to minimize tedium: search if needed, start from all or none depending on which will be easier, then keep or remove selected items, again depending on which will be easier.
In step 4 I include an alternate way to select just the 2020 row. It’s a Select SQL step that says select * from input where Year = '2020'. That doesn’t change anything here; I could omit either step 3 or step 4 without affecting the outcome; I include step 4 just to show that SQL is available at any point to transform the output of a prior step.
Which is fantastic, but wait, there’s more. In step 5 I use a Python step to do the same thing in terms of a pandas dataframe. Again this doesn’t affect the outcome, I’m just showing that Python is available at any point to transform the output of a prior step. Providing equivalent methods for novices and experts, in a common interface, is extraordinarily powerful.
I’m noticing now, by the way, that step 5 doesn’t work if you’re not logged in. So I’ll show it to you here:
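In plain pandas terms the step is just a row filter, something like the sketch below. (How a Workbench Python step receives and returns its table follows Workbench’s own convention, so take the function shape as illustrative.)
import pandas as pd

def keep_2020(table: pd.DataFrame) -> pd.DataFrame:
    # keep only the row whose Year column is 2020, as the SQL step does
    return table[table['Year'] == '2020']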
Step 6 transposes the table so we can reason about the fuel types. In steps 3-5 they’re columns, in step 6 they become rows. This is a commonly-needed maneuver. And while I might use the SQL in step 4 to do the row selection handled by the widget in step 3, I won’t easily accomplish the transposition that way. The Transpose step is one of the most powerful tools in the kit.
Notice at step 6 that the first column is named Year. That’s a common outcome of transposition and here necessitates step 7 in which I rename it to Fuel Type. There are two ways to do that. You can click the + Add Step button, choose the Rename columns option, drag the new step into position 7, open it, and do the renaming there.
But look:
You can edit anything in a displayed table. When I change Year to Fuel Type that way, the same step 7 that you can create manually appears automatically.
It’s absolutely brilliant.
In step 8 I use the Calculate step to add a new column showing each row’s percentage of the column sum. In SQL I’d have to think about that a bit. Here, as is true for so many routine operations like this, Workbench offers the solution directly:
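For comparison, the SQL I’d have had to think about is a window-function idiom; a sketch using this tab’s column names and the input alias from the Select SQL step:
select
  "Fuel Type",
  "2020",
  100.0 * "2020" / sum("2020") over () as percent
from input;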
Finally in step 9 I sort the table. The report includes it, and there I consider the question of Diablo Canyon’s contribution. According to my analysis nuclear power was 9% of the major sources I’ve selected, contributing 16,280 GWh in 2020. According to another energy.ca.gov page that I cite in the report, Diablo Canyon is the only remaining nuke plant in the state, producing “about 18,000 GWh.” That’s not an exact match but it’s close enough to give me confidence that reasoning about the nuclear row in the table applies to Diablo Canyon specifically.
Next I want to compare nuclear power to just the subset of sources that are renewable. That happens in the 2020 percentages renewable tab, the output of which is also included in the report. Step 1 begins with the output of 2020 percentages of all major sources. In step 2 I clarify that the 2020 column is really 2020 GWh. In step 3 I remove the percent column in order to recalculate it. In step 4 I remove rows in order to focus on just nuclear and renewables. In step 5 I recalculate the percentages. And in step 6 I make the chart that also flows through to the report.
Now, as I look at the chart, I notice that the line for large hydro is highly variable and appears to correlate with drought years. In order to explore that correlation I look for data on reservoir levels and arrive at https://cdec.water.ca.gov/. I’d love to find a table that aggregates levels for all reservoirs statewide since 2001, but that doesn’t seem to be on offer. So I decide to use Lake Mendocino as a proxy. In step 1 I scrape an HTML table with monthly levels for the lake since 2001. In step 2 I delete the first row which only has some months. In step 3 I rename the first column to Year in order to match what’s in the table I want to join with. In step 4 I convert the types of the month columns from text to numeric to enable calculation. In step 5 I calculate the average into a new column, Avg. In step 6 I select just Year and Avg.
When I first try the join in step 8 it fails for a common reason that Workbench helpfully explains:
In the other table Year looks like ‘2001’, but the text scraped from energy.ca.gov looks like ‘2,001’. That’s a common glitch that can bring an exercise like this to a screeching halt. There’s probably a Workbench way to do this, but in step 7 I use SQL to reformat the values in the Year column, removing the commas to enable the join (sketched below). While there I also rename the Avg column to Lake Mendocino Avg Level. Now in step 8 I can do the join.
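Here’s roughly what that step 7 SQL looks like (column names as they stand at that point in the tab):
select
  replace("Year", ',', '') as "Year",
  "Avg" as "Lake Mendocino Avg Level"
from input;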
In step 9 I scale the values for Large Hydro into a new column, Scaled Large Hydro. Why? The chart I want to see will compare power generation in GWh (gigawatt hours) and lake levels in AF (acre-feet). These aren’t remotely compatible but I don’t care; I’m just looking for comparable trends. Doubling the value for Large Hydro gets close enough for the comparison chart in step 10, which also flows through to the report.
All this adds up to an astonishingly broad, deep, and rich set of features. And I haven’t even talked about the Clean text step for tweaking whitespace, capitalization, and punctuation, or the Refine step for finding and merging clusters of similar values that refer to the same things. Workbench is also simply beautiful as you can see from the screen shots here, or by visiting my workflow. When I reviewed software products for BYTE and InfoWorld it was rare to encounter one that impressed me so thoroughly.
But wait, there’s more.
At the core of my workflow there’s a set of tabs; each is a sequence of steps; some of these produce tables and charts. Wrapped around the workflow there’s the report into which I cherrypick tables and charts for the story I’m telling there.
There’s also another kind of wrapper: a lesson wrapped around the workflow. I could write a lesson that guides you through the steps I’ve described and checks that each yields the expected result. See Intro to data journalism for an example you can try. Again, it’s brilliantly well done.
So Workbench succeeds in three major ways. If you’re not a data-oriented professional, but you’re a thinking person who wants to use available web data to reason about the world, Workbench will help you power through the grunt work of scraping, cleaning, and combining that data so you can focus on analysis. If you aspire to become such a professional, and you don’t have a clue about how to do that grunt work, it will help you learn the ropes. And if you are one of the pros you’ll still find it incredibly useful.
Kudos to the Workbench team, and especially to core developers Jonathan Stray, Pierre Conti, and Adam Hooper, for making this spectacularly great software tool.
In episode 2 of this series I noted that the languages in which I’m writing Postgres functions share a common type system. It took me a while to understand how types work in the context of Postgres functions that can return sets of records and can interact with tables and materialized views.
Here is a set-returning function.
create function notes_for_user_in_group(
_userid text,
_groupid text)
returns setof annotation as $$
begin
return query
select * from annotation
where userid = concat('acct:', _userid)
and groupid = _groupid;
end;
$$ language plpgsql;
In this case the type that governs the returned set has already been defined: it’s the schema for the annotation table.
Column                | Type
----------------------+-----------------------------
id                    | uuid
created               | timestamp without time zone
updated               | timestamp without time zone
userid                | text
groupid               | text
text                  | text
tags                  | text[]
shared                | boolean
target_uri            | text
target_uri_normalized | text
target_selectors      | jsonb
references            | uuid[]
extra                 | jsonb
text_rendered         | text
document_id           | integer
deleted               | boolean
The function returns records matching a userid and groupid. I can now find the URLs of documents most recently annotated by me.
select
target_uri
from notes_for_user_in_group('judell@hypothes.is', '__world__')
order by created desc
limit 3;
You might wonder why the function’s parameters are prefixed with underscores. That’s because variables used in functions can conflict with names of columns in tables. Since none of our column names begin with underscore, it’s a handy differentiator. Suppose the function’s signature were instead:
create function notes_for_user_in_group(
userid text,
groupid text)
Postgres would complain about a conflict:
ERROR: column reference "userid" is ambiguous
LINE 2: where userid = concat('acct:', userid)
^
DETAIL: It could refer to either a PL/pgSQL variable or a table column.
The table has userid and groupid columns that conflict with their eponymous variables. So for functions that combine variables and database values I prefix variable names with underscore.
Set-returning functions can be called in any SQL SELECT context. In the example above that context is psql, Postgres’ powerful and multi-talented REPL (read-eval-print loop). For an example of a different context, let’s cache the function’s result set in a materialized view.
create materialized view public_notes_for_judell as (
select
*
from notes_for_user_in_group('judell@hypothes.is', '__world__')
order by created desc
) with data;
Postgres reports success by showing the new view’s record count.
SELECT 3972
The view’s type is implicitly annotation; its schema matches the one shown above; selecting target_uri from the view is equivalent to selecting target_uri from the setof annotation returned from the function notes_for_user_in_group.
select
target_uri
from public_notes_for_judell
limit 3;
It shows up a lot faster though! Every time you select the function’s result set, the wrapped query has to run. For this particular example that can take a few seconds. It costs the same amount of time to create the view. But once that’s done you can select its contents in milliseconds.
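One wrinkle: a materialized view is a snapshot. As new annotations arrive the cached results go stale until the view is rebuilt, which is a one-liner.

refresh materialized view public_notes_for_judell;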
Now let’s define a function that refines notes_for_user_in_group by reporting the count of notes for each annotated document.
create function annotated_docs_for_user_in_group(
_userid text,
_groupid text)
returns table (
count bigint,
userid text,
groupid text,
url text
) as $$
begin
return query
select
count(n.*) as anno_count,
n.userid,
n.groupid,
n.target_uri
from notes_for_user_in_group(_userid, _groupid) n
group by n.userid, n.groupid, n.target_uri
order by anno_count desc;
end;
$$ language plpgsql;
Instead of returning a setof some named type, this function returns an anonymous table. I’ve aliased the set-returning function call notes_for_user_in_group as n and used the alias to qualify the names of selected columns. That avoids another naming conflict. If you write userid instead of n.userid in the body of the function and then call it, Postgres again complains about a conflict.
ERROR: column reference "userid" is ambiguous
LINE 3: userid,
^
DETAIL: It could refer to either a PL/pgSQL variable or a table column.
Here’s a sample call to our new function.
select
*
from annotated_docs_for_user_in_group(
'judell@hypothes.is',
'__world__'
);
Let’s again cache the result set in a materialized view.
create materialized view url_counts_for_public_notes_by_judell as (
select
*
from annotated_docs_for_user_in_group(
'judell@hypothes.is',
'__world__'
)
) with data;
Postgres says:
SELECT 1710
When you ask for the definition of that view using the \d command in psql:
\d url_counts_for_public_notes_by_judell
It responds with the same table definition used when creating the function.
Column | Type
---------+--------
count | bigint
userid | text
groupid | text
url | text
Behind the scenes Postgres has created this definition from the anonymous table returned by the function.
To revise the function so that it uses a named type, first create the type.
create type annotated_docs_for_user_in_group as (
count bigint,
userid text,
groupid text,
url text
);
Postgres reports success:
CREATE TYPE
Now we can use that named type in the function. Since we’re redefining the function, first drop it.
drop function annotated_docs_for_user_in_group;
Uh oh. Postgres is unhappy about that.
ERROR: cannot drop function annotated_docs_for_user_in_group(text,text) because other objects depend on it
DETAIL: materialized view url_counts_for_public_notes_by_judell depends on function annotated_docs_for_user_in_group(text,text)
HINT: Use DROP ... CASCADE to drop the dependent objects too.
A view that depends on a function must be recreated when the function’s signature changes. I’ll say more about this in a future episode on set-returning functions that dynamically cache their results in materialized views. For now, since the view we just created is a contrived throwaway, just drop it along with the function by using CASCADE as Postgres recommends.
drop function annotated_docs_for_user_in_group cascade;
Postgres says:
NOTICE: drop cascades to materialized view url_counts_for_public_notes_by_judell
DROP FUNCTION
Now we can recreate a version of the function that returns setof annotated_docs_for_user_in_group instead of an anonymous table(...).
create function annotated_docs_for_user_in_group(
_userid text,
_groupid text)
returns setof annotated_docs_for_user_in_group as $$
begin
return query
select
count(n.*) as anno_count,
n.userid,
n.groupid,
n.target_uri
from notes_for_user_in_group(_userid, _groupid) n
group by n.userid, n.groupid, n.target_uri
order by anno_count desc;
end;
$$ language plpgsql;
The results are the same as above. So why do it this way? In many cases I don’t. It’s extra overhead to declare a type. And just as a view can depend on a function, a function can depend on a type. To see why you might not want such dependencies, suppose we want to also track the most recent note for each URL.
create type annotated_docs_for_user_in_group as (
count bigint,
userid text,
groupid text,
url text,
most_recent_note timestamp
);
That won’t work.
ERROR: type "annotated_docs_for_user_in_group" already exists
Dropping the type won’t work either.
ERROR: cannot drop type annotated_docs_for_user_in_group because other objects depend on it
DETAIL: function annotated_docs_for_user_in_group(text,text) depends on type annotated_docs_for_user_in_group
HINT: Use DROP ... CASCADE to drop the dependent objects too.
To redefine the type you have to do a cascading drop and then recreate functions that depend on the type. If any views depend on the dropped functions, the drop cascades to them as well and they also must be recreated. That’s why I often write functions that return table(...) rather than setof TYPE. In dynamic languages it’s convenient to work with untyped bags of values; I find the same to be true when writing functions in Postgres.
Sometimes, though, it’s useful to declare and use types. In my experience so far it makes most sense to do that in Postgres when you find yourself writing the same returns table(...) statement in several related functions. Let’s say we want a function that combines the results of annotated_docs_for_user_in_group for some set of users.
create function annotated_docs_for_users_in_group(_userids text[], _groupid text)
returns setof annotated_docs_for_user_in_group as $$
begin
return query
with userids as (
select unnest(_userids) as userid
)
select
a.*
from userids u
join annotated_docs_for_user_in_group(u.userid, _groupid) a
on a.userid = concat('acct:', u.userid);
end;
$$ language plpgsql;
This new function uses the SQL WITH clause to create a common table expression (CTE) that converts an inbound array of userids into a transient table-like object, named userids, with one userid per row. The new function’s wrapped SQL then joins that CTE to the set returned from annotated_docs_for_user_in_group and returns the joined result.
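To see what the CTE yields on its own, you can run unnest directly; the second address here is invented for illustration.

-- one row per userid
select unnest(array['judell@hypothes.is', 'another.user@hypothes.is']) as userid;

Each of those rows then feeds a call to annotated_docs_for_user_in_group by way of the join.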
(You can alternatively do this in a more procedural way by creating a loop variable and marching through the array to accumulate results. Early on I used that approach but in the context of Postgres functions I’ve come to prefer the more purely SQL-like set-oriented style.)
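Here’s a sample call, reusing that invented second address:

select
  *
from annotated_docs_for_users_in_group(
  array['judell@hypothes.is', 'another.user@hypothes.is'],
  '__world__'
);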
Sharing a common type between the two functions makes them simpler to write and easier to read. More importantly it connects them to one another and to all views derived from them. If I do decide to add most_recent_note to the type, Postgres will require me to adjust all depending functions and views so things remain consistent. That can be a crucial guarantee, and as we’ll see in a future episode it’s a key enabler of an advanced caching mechanism.
In A virtuous cycle for analytics I noted that our library of Postgres functions is written in two languages: Postgres’ built-in pl/pgsql and the installable alternative pl/python. These share a common type system and can be used interchangeably.
Here’s a pl/pgsql classifier that tries to match the name of a course against a list of patterns that characterize the humanities.
create function humanities_classifier(course_name text) returns boolean as $$
begin
return
lower(course_name) ~ any(array[
'psych',
'religio',
'soci'
]);
end;
$$ language plpgsql;
# select humanities_classifier('Religious Studies 101') as match;
match
-----
t
# select humanities_classifier('Comparative Religions 200') as match;
match
-----
t
Here is that same classifier in Python.
create function humanities_classifier(course_name text) returns boolean as $$
sql = f"""
select lower('{course_name}') ~ any(array[
'psych',
'religio',
'soci'
]) as match"""
results = plpy.execute(sql)
return results[0]['match']
$$ language plpython3u;
# select humanities_classifier('Religious Studies 101') as match;
match
-----
t
# select humanities_classifier('Comparative Religions 200') as match;
match
-----
t
The results are exactly the same. In this case, Python is only wrapping the SQL used in the original function and interpolating course_name into it. So why use pl/python here? I wouldn’t. The pl/pgsql version is cleaner and simpler because the SQL body doesn’t need to be quoted and course_name doesn’t need to be interpolated into it.
Here’s a more Pythonic version of the classifier.
create function humanities_classifier(course_name text) returns boolean as $$
import re
regexes = [
'psych',
'religio',
'soci'
]
matches = [r for r in regexes if re.search(r, course_name, re.I)]
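# a nonzero count is treated as true, since PL/Python evaluates the boolean return value for truth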
return len(matches)
$$ language plpython3u;
There’s no SQL here, this is pure Python. Is there any benefit to doing things this way? In this case probably not. The native Postgres idiom for matching a string against a list of regular expressions is cleaner and simpler than the Python technique shown here. A Python programmer will be more familiar with list comprehensions than with the Postgres any and ~ operators but if you’re working in Postgres you’ll want to know about those, and use them not just in functions but in all SQL contexts.
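For instance, the same any/~ idiom works in an ad-hoc query against the lms_course_groups table that appears in the performance test below:

select name
from lms_course_groups
where lower(name) ~ any(array['psych', 'religio', 'soci']);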
What about performance? You might assume as I did that a pl/pgsql function is bound to be way faster than its pl/python equivalent. Let’s check that assumption. This SQL exercises both flavors of the function, which finds about 500 matches in a set of 30,000 names.
with matching_courses as (
select humanities_classifier(name) as match
from lms_course_groups
)
select count(*)
from matching_courses
where match;
Here are the results for three runs using each flavor of the function:
The Python flavor is slower but not order-of-magnitude slower; I’ve seen cases where a pl/python function outperforms its pl/pgsql counterpart.
So, what is special about Python functions inside Postgres? In my experience so far there are three big reasons to use pl/python.
Python modules
The ability to wield any of Python’s built-in or loadable modules inside Postgres brings great power. That entails great responsibility, as the Python extension is “untrusted” (that’s the ‘u’ in ‘plpython3u’) and can do anything Python can do on the host system: read and write files, make network requests.
Here’s one of my favorite examples so far. Given a set of rows that count daily or weekly annotations for users in a group — so for weekly accounting each row has 52 columns — the desired result for the whole group is the element-wise sum of the rows. That’s not an easy thing in SQL but it’s trivial using numpy, and in pl/python it happens at database speed because there’s no need to transfer SQL results to an external Python program.
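Here’s a minimal sketch of that idea; the weekly_counts table, its columns, and the function name are invented for illustration, not the production code.

create function weekly_totals_for_group(_groupid text)
returns int[] as $$
  # element-wise sum of per-user weekly count arrays, using numpy
  # (weekly_counts(userid text, groupid text, counts int[]) is a made-up table)
  import numpy as np
  rows = plpy.execute(
    f"select counts from weekly_counts where groupid = '{_groupid}'"
  )
  matrix = np.array([row['counts'] for row in rows])
  return np.sum(matrix, axis=0).tolist()
$$ language plpython3u;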
Metaprogramming
Functions can write and then run SQL queries. It’s overkill for simple variable interpolation; as shown above pl/pgsql does that handily without the cognitive overhead and visual clutter of poking values into a SQL string. For more advanced uses that compose queries from SQL fragments, though, pl/pgsql is hopeless. You can do that kind of thing far more easily, and more readably, in Python.
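Here’s a contrived sketch of what I mean by composing a query from fragments; the function and its optional filters are invented for illustration.

create function count_annotations(_userid text default null, _groupid text default null)
returns bigint as $$
  # build a WHERE clause from whichever filters were supplied, then run the query
  clauses = []
  if _userid is not None:
      clauses.append(f"userid = 'acct:{_userid}'")
  if _groupid is not None:
      clauses.append(f"groupid = '{_groupid}'")
  where = ' where ' + ' and '.join(clauses) if clauses else ''
  return plpy.execute(f"select count(*) as count from annotation{where}")[0]['count']
$$ language plpython3u;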
Introspection
A pl/python function can discover and use its own name. That’s the key enabler for a mechanism that memoizes the results of a function by creating a materialized view whose name combines the name of the function with the value of a parameter to the function. This technique has proven to be wildly effective.
I’ll show examples of these scenarios in later installments of this series. For now I just want to explain why I’ve found these two ways of writing Postgres functions to be usefully complementary. The key points are:
– They share a common type system.
– pl/pgsql, despite its crusty old syntax, suffices for many things.
– pl/python leverages Python’s strengths where they are most strategic.
When I began this journey it wasn’t clear when you’d prefer one over the other, or why it might make sense to use both in complementary ways. This installment is what I’d like to have known when I started.
Suppose you’re a member of a team that runs a public web service. You need to help both internal and external users make sense of all the data that’s recorded as it runs. That’s been my role for the past few years; now it’s time to summarize what I’ve learned.
The web service featured in this case study is the Hypothesis web annotation system. The primary database, Postgres, stores information about users, groups, documents, courses, and annotations. Questions that our team needs to answer include:
– How many students created annotations last semester?
– In how many courses at each school?
Questions from instructors using Hypothesis in their courses include:
– Which passages in course readings are attracting highlights and discussion?
– Who is asking questions about those passages, and who is responding?
Early on we adopted a tool called Metabase that continues to be a pillar of our analytics system. When Metabase was hooked up to our Postgres database the team could start asking questions without leaning on developers. Some folks used the interactive query builder, while others went straight to writing SQL that Metabase passes through to Postgres.
Before long we had a large catalog of Metabase questions that query Postgres and display results as tables or charts that can be usefully arranged on Metabase dashboards. It’s all nicely RESTful. Interactive elements that can parameterize queries, like search boxes and date pickers, map to URLs. Queries can emit URLs in order to compose themselves with other queries. I came to see this system as a kind of lightweight application server in which to incubate an analytics capability that could later be expressed more richly.
Over time, and with growing amounts of data, early success with this approach gave way to two kinds of frustration: queries began to choke, and the catalog of Metabase questions became unmanageable. And so, in the time-honored tradition, we set up a data warehouse for analytics. Ours is another instance of Postgres that syncs nightly with the primary database. There are lots of other ways to skin the cat but it made sense to leverage ops experience with Postgres and I had a hunch that it would do well in this role.
To unthrottle the choking queries I began building materialized views that cache the results of Postgres queries. Suppose a query makes use of available indexes but still takes a few minutes, or maybe even an hour, to run. It still takes that long to build the corresponding materialized view, but once built other queries can use its results immediately. Metabase questions that formerly included chunks of SQL began reducing to select * from {viewname}.
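For instance, pairing a cached view with a trivial Metabase query might look like this; the view name is invented, while humanities_classifier and lms_course_groups appear elsewhere in this series.

-- cache the slow part once...
create materialized view humanities_courses as (
  select
    name,
    humanities_classifier(name) as is_humanities
  from lms_course_groups
) with data;

-- ...so the Metabase question shrinks to a one-liner
select * from humanities_courses where is_humanities;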
This process continues to unfold in a highly productive way. Team members may or may not hit a performance wall as they try to use Metabase to answer their questions. When they do, we can convert the SQL text of a Metabase question to a Postgres materialized view that gets immediate results. Such views can join with others, and/or with underlying tables, in SQL SELECT contexts. The views become nouns in a language that expresses higher-order business concepts.
The verbs in this language turned out to be Postgres functions written in the native procedural language, pl/pgsql, and later also in its Python counterpart, pl/python. Either flavor can augment built-in Postgres library functions with user-defined functions that can return simple values, like numbers and strings, but can also return sets that behave in SQL SELECT contexts just like tables and views.
Functions were, at first, a way to reuse chunks of SQL that otherwise had to be duplicated across Metabase questions and Postgres CREATE MATERIALIZED VIEW statements. That made it possible to streamline and refactor both bodies of code and sanely maintain them.
To visualize what had now become a three-body system of sources in which Metabase questions, Postgres views, and Postgres functions can call (or link to) one another, I wrote a tool that builds a crosslinked concordance. That made it practical to reason effectively about the combined system.
Along the way I have learned how Postgres, and more broadly modern SQL, in conjunction with a tool like Metabase, can enable a team like ours to make sense of data. There’s plenty to say about the techniques I’ve evolved, and I aim to write them up over time. The details won’t interest most people, but here’s an outcome that might be noteworthy.
Team member: I had an idea that will help manage our communication with customers, and I’ve prototyped it in a Metabase question.
Toolsmith: Great! Here’s a Postgres function that encapsulates and refines your SQL. It’s fast enough for now, but if needed we can convert it into a materialized view. Now you can use that function in another Metabase question that projects your SQL across a set of customers that you can select.
That interaction forms the basis of a virtuous cycle: The team member formulates a question and does their best to answer it using Metabase; the toolsmith captures the intent and re-expresses it in a higher-level business language; the expanded language enables the team member to go farther in a next cycle.
We recognize this software pattern in the way application programmers who push a system to its limits induce systems programmers to respond with APIs that expand those limits. I suppose it’s harder to see when the application environment is Metabase and the systems environment is Postgres. But it’s the same pattern, and it is powerful.
In The Chess Master and the Computer, Garry Kasparov famously wrote:
The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.
The title of his subsequent TED talk sums it up nicely: Don’t fear intelligent machines. Work with them.
That advice resonates powerfully as I begin a second work week augmented by GitHub Copilot, a coding assistant based on OpenAI’s Generative Pre-trained Transformer (GPT-3). Here is Copilot’s tagline: “Your AI pair programmer: get suggestions for whole lines or entire functions right inside your editor.” If you’re not a programmer, a good analogy is Gmail’s offer of suggestions to finish sentences you’ve begun to type.
In mainstream news the dominant stories are about copyright (“Copilot doesn’t respect software licenses”), security (“Copilot leaks passwords”), and quality (“Copilot suggests wrong solutions”). Tech Twitter amplifies these and adds early hot takes about dramatic Copilot successes and flops. As I follow these stories, I’m thinking of another. GPT-3 is an intelligent machine. How can we apply Kasparov’s advice to work effectively with it?
Were I still a tech journalist I’d be among the first wave of hot takes. Now I spend most days in Visual Studio Code, the environment in which Copilot runs, working most recently on analytics software. I don’t need to produce hot takes, I can just leave Copilot running and reflect on notable outcomes.
Here was the first notable outcome. In the middle of writing some code I needed to call a library function that prints a date. In this case the language context was Python, but might as easily have been JavaScript or SQL or shell. Could I memorize the date-formatting functions for all these contexts? Actually, yes, I believe that’s doable and might even be beneficial. But that’s a topic for another day. Let’s stipulate that we can remember a lot more than we think we can. We’ll still need to look up many things, and doing a lookup is a context-switching operation that often disrupts flow.
In this example I would have needed a broad search to recall the name of the date-formatting function that’s available in Python: strftime. Then I’d have needed to search more narrowly to find the recipe for printing a date object in a format like Mon Jan 01. A good place for that search to land is https://strftime.org/, where Will McCutchen has helpfully summarized several dozen directives that govern the strftime function.
Here’s the statement I needed to write:
day = day.strftime('%a %d %b')
Here’s where the needed directives appear in the documentation:
To prime Copilot I began with a comment:
# format day as Mon Jun 15
Copilot suggested the exact strftime incantation I needed.
This is exactly the kind of example-driven assistance that I was hoping @githubcopilot would provide. Life's too short to remember, or even look up, strptime and strftime.
(It turns out that June 15 was a Tuesday, that doesn't matter, Mon Jun 15 was just an example.) pic.twitter.com/a1epnaZRF9
Now it’s not hard to find a page like Will’s, and once you get there it’s not hard to pick out the needed directives. But when you’re in the flow of writing a function, avoiding that context switch doesn’t only save time. There is an even more profound benefit: it conserves attention and preserves flow.
The screencast embedded in the above tweet gives you a feel for the dynamic interaction. When I get as far as # format day as M, Copilot suggests MMDDYYY even before I write Mon, then adjusts as I do that. This tight feedback loop helps me explore the kinds of natural examples I can use to prime the system for the lookup.
For this particular pattern I’m not yet getting the same magical result in JavaScript, SQL, or shell contexts, but I expect that’ll change as Copilot watches me and others try the same example and arrive at the analogous solutions in these other languages.
I’m reminded of Language evolution with del.icio.us, from 2005, in which I explored the dynamics of the web’s original social bookmarking system. To associate a bookmarked resource with a shared concept you’d assign it a tag broadly used for that concept. Of course the tags we use for a given concept often vary. Your choice of cinema or movie or film was a way to influence the set of resources associated with your tag, and thus encourage others to use the same tag in the same way.
That kind of linguistic evolution hasn’t yet happened at large scale. I hope Copilot will become an environment in which it can. Intentional use of examples is one way to follow Kasparov’s advice for working well with intelligent systems.
Here’s a contrived Copilot session that suggests what I mean. The result I am looking for is the list [1, 2, 3, 4, 5].
l1 = [1, 2, 3]
l2 = [3, 4, 5]
# merge the two lists
l3 = l1 + l2 # no: [1, 2, 3, 3, 4, 5]
# combine as [1, 2, 3, 4, 5]
l3 = l1 + l2 # no: [1, 2, 3, 3, 4, 5]
# deduplicate the two lists
l1 = list(set(l1)) # no: [1, 2, 3]
# uniquely combine the lists
l3 = list(set(l1) | set(l2)) # yes: [1, 2, 3, 4, 5]
# merge and deduplicate the lists
l3 = list(set(l1 + l2)) # yes: [1, 2, 3, 4, 5]
The last two Copilot suggestions are correct; the final (and simplest) one would be my choice. If I contribute that choice to a public GitHub repository am I voting to reinforce an outcome that’s already popular? If I instead use the second comment (combine as [1, 2, 3, 4, 5]) am I instead voting for a show-by-example approach (like Mon Jun 15) that isn’t yet as popular in this case but might become so? It’s hard for me — and likely even for Copilot itself — to know exactly how Copilot works. That’s going to be part of the challenge of working well with intelligent machines. Still, I hope for (and mostly expect) a fruitful partnership in which our descriptions of intent will influence the mechanical synthesis even as it influences our descriptions.
I used a Lisp variant in my first programming job, so I have some appreciation for the “code as data” power of that language. Nowadays I’m building an analytics system that combines Postgres and two of its procedural languages, pl/pgsql and pl/python, with a sense-making tool called Metabase that’s written in a Lisp variant called Clojure.
In Postgres I’m able to wrap SQL queries in functions; they compose with other functions that do things like cache results in materialized views and aggregate subgroups. It all feels very dynamic and functional, which are two of Clojure’s main calling cards, so this makes me wonder about Clojure as another Postgres procedural language.
For the pl/python functions in my system, “code as data” looks like building SQL statements that swap variables into SQL templates and combine them. This string-building approach is an anti-pattern for Clojure folk. They don’t want to work with fragments of SQL text, they want to work with Clojure maps. Here’s an example from Honey SQL, the library that Metabase uses to build SQL texts from structured data.
This seems like an important power to be able to wield in the hybrid programming environment that Postgres provides. I can imagine making very good use of it.
But I know very little yet about Clojure. Is this really an idea worth exploring? How else would it differ from what I’m now able to do in Python? To learn more about Clojure I’ve been watching talks by its creator, Rich Hickey. Most are tech-heavy but one of them isn’t like the others. In Hammock-driven development he lays out a process for creative problem solving that coordinates the waking mind (at the computer doing critical thinking and analysis) with the background mind (on the hammock doing abstraction, analogy, and synthesis). The coordination is explicit: You use the waking mind to feed work to the background mind which is “the solver of most non-trivial problems”; you weight your inputs to the background mind in order to influence their priority in its thinking.
I guess I’ve done that kind of thing implicitly from time to time, but never in such an intentional way. So, is Clojure-in-Postgres worth exploring? Perhaps by writing this I’ll prime my background mind and will receive more clarity in the morning.
In 2018 I built a tool to help researchers evaluate a proposed set of credibility signals intended to enable automated systems to rate the credibility of news stories.
Here are examples of such signals:
– Authors cite expert sources (positive)
– Title is clickbaity (negative)
And my favorite:
– Authors acknowledge uncertainty (positive)
Will the news ecosystem ever be able to label stories automatically based on automatic detection of such signals, and if so, should it? These are open questions. The best way to improve news literacy may be the SIFT method advocated by Mike Caulfield, which shifts attention away from intrinsic properties of individual news stories and advises readers to:
– Stop
– Investigate the source
– Find better coverage
– Trace claims, quotes, and media to original context
“The goal of SIFT,” writes Charlie Warzel in Don’t Go Down the Rabbit Hole, “isn’t to be the arbiter of truth but to instill a reflex that asks if something is worth one’s time and attention and to turn away if not.”
SIFT favors extrinsic signals over the intrinsic ones that were the focus of the W3C Credible Web Community Group. But intrinsic signals may yet play an important role, if not as part of a large-scale automated labeling effort then at least as another kind of news literacy reflex.
Consider this account of a recent focus group on vaccine hesitancy:
What made these Trump supporters shift their views on vaccines? Science — offered straight-up and with a dash of humility.
The unlikely change agent was Dr. Tom Frieden, who headed the Centers for Disease Control and Prevention during the Obama administration. Frieden appealed to facts, not his credentials. He noted that the theory behind the vaccine was backed by 20 years of research, that tens of thousands of people had participated in well-controlled clinical trials, and that the overwhelming share of doctors have opted for the shots.
He leavened those facts with an acknowledgment of uncertainty. He conceded that the vaccine’s potential long-term risks were unknown. He pointed out that the virus’s long-term effects were also uncertain.
“He’s just honest with us and telling us nothing is 100% here, people,” one participant noted.
Here’s evidence that acknowledgement of uncertainty really is a powerful signal of credibility. Maybe machines will be able to detect it and label it; maybe those labels will matter to people. Meanwhile, it’s something people can detect and do care about. Teaching students to value sources that acknowledge uncertainty, and discount ones that don’t, ought to be part of any strategy to improve news literacy.
Several years ago I bought two 5-packs of reading glasses. There was a 1.75-diopter set for books, magazines, newspapers, and my laptop (when it’s in my lap), plus a 1.25-diopter set for the screens I look at when working in my Captain Kirk chair. They were cheap, and the idea was that they’d be an abundant resource. I could leave spectacles lying around in various places, there would always be a pair handy, no worries about losing them.
So of course I did lose them like crazy. At one point I bought another 5-pack but still, somehow, I’m down to a single 1.75 and a single 1.25. And I just realized it’s been that way for quite a while. Now that the resource is scarce, I value it more highly and take care to preserve it.
I’m sorely tempted to restock. It’s so easy! A couple of clicks and two more 5-packs will be here tomorrow. And they’re cheap, so what’s not to like?
For now, I’m resisting the temptation because I don’t like the effect such radical abundance has had on me. It’s ridiculous to lose 13 pairs of glasses in a couple of years. I can’t imagine how I’d explain that to my pre-Amazon self.
For now, I’m going to try to assign greater value to the glasses I do have, and treat them accordingly. And when I finally do lose them, I hope I’ll resist the one-click solution. I thought it was brilliant at the time, and part of me still does. But it just doesn’t feel good.
Were it not for the Wayback Machine, a lot of my post-1995 writing would now be gone. Since the advent of online-only publications, getting published has been a lousy way to stay published. When pubs change hands, or die, the works of their writers tend to evaporate.
I’m not a great self-archivist, despite having better-than-average skills for the job. Many but not all of my professional archives are preserved — for now! — on my website. Occasionally, when I reach for a long-forgotten and newly-relevant item, only to find it 404, I’ll dig around and try to resurrect it. The forensic effort can be a big challenge; an even bigger one is avoiding self-blame.
The same thing happens with personal archives. When our family lived in New Delhi in the early 1960s, my dad captured thousands of images. Those color slides, curated in carousels and projected onto our living room wall in the years following, solidified the memories of what my five-year-old self had directly experienced. When we moved my parents to the facility where they spent their last years, one big box of those slides went missing. I try, not always successfully, to avoid blaming myself for that loss.
When our kids were little we didn’t own a videocassette recorder, which was how you captured home movies in that era. Instead we’d rent a VCR from Blockbuster every 6 months or so and spend the weekend filming. It turned out to be a great strategy. We’d set it on a table or on the floor, turn it on, and just let it run. The kids would forget it was there, and we recorded hours of precious daily life in episodic installments.
Five years ago our son-in-law volunteered the services of a friend of his to digitize those tapes, and brought us the MP4s on a thumb drive. I put copies in various “safe” places. Then we moved a couple of times, and when I reached for the digitized videos, they were gone. As were the original cassettes. This time around, there was no avoiding the self-blame. I beat myself up about it, and was so mortified that I hesitated to ask our daughter and son-in-law if they have safe copies. (Spoiler alert: they do.) Instead I’d periodically dig around in various hard drives, clouds, and boxes, looking for files or thumb drives that had to be there somewhere.
During this period of self-flagellation, I thought constantly about something I heard Roger Angell say about Carlton Fisk. Roger Angell was one of the greatest baseball writers, and Carlton Fisk one of the greatest players. One day I happened to walk into a bookstore in Harvard Square when Angell was giving a talk. In the Q and A, somebody asked: “What’s the most surprising thing you’ve ever heard a player say?”
The player was Carlton Fisk, and the surprise was his answer to the question: “How many times have you seen the video clip of your most famous moment?”
That moment is one of the most-watched sports clips ever: Fisk’s walk-off home run in game 6 of the 1975 World Series. He belts the ball deep to left field, it veers toward foul territory, he dances and waves it fair.
So, how often did Fisk watch that clip? Never.
Why not? He didn’t want to overwrite the original memory.
Of course we are always revising our memories. Photographic evidence arguably prevents us from doing so. Is that good or bad? I honestly don’t know. Maybe both.
For a while, when I thought those home videos were gone for good, I tried to convince myself that it was OK. The original memories live in my mind, I hold them in my heart, nothing can take them away, no recording can improve them.
Although that sort of worked, I was massively relieved when I finally fessed up to my negligence and learned that there are safe copies. For now, I haven’t requested them and don’t need to see them. It’s enough to know that they exist.
Reading a collection of John McPhee stories, I found several that were new to me. The Duty of Care, published in the New Yorker in 1993, is about tires, and how we do or don’t properly recycle them. One form of reuse we’ve mostly abandoned is retreads. McPhee writes:
A retread is in no way inferior to a new tire, but new tires are affordable, and the retreaded passenger tire has descended to the status of a clip-on tie.
My dad wore clip-on ties. He also used retreaded tires, and I can remember visiting a shop on several occasions to have the procedure done.
Recently I asked a friend: “Whatever happened to retreaded tires?” We weren’t sure, but figured they’d gone away for good reasons: safety, reliability. But maybe not. TireRecappers and TreadWright don’t buy those arguments. Maybe retreads were always a viable option for our passenger fleet, as they still are for our truck fleet. And maybe, with better tech, they’re better than they used to be.
In Duty of Care, McPhee tells the story of the Modesto pile. It was, at the time, the world’s largest pile of scrap tires containing, by his estimate, 34 million tires.
You don’t have to stare long at that pile before the thought occurs to you that those tires were once driven upon by the Friends of the Earth. They are Environmental Defense Fund tires, Rainforest Action Network tires, Wilderness Society tires. They are California Natural Resources Federation tires, Save San Francisco Bay Association tires, Citizens for a Better Environment tires. They are Greenpeace tires, Sierra Club tires, Earth Island Institute tires. They are Earth First! tires!
(I love a good John McPhee list.)
The world’s largest pile of tires left a surprisingly small online footprint, but you can find the LA Times’ Massive Pile of Tires Fuels Controversial Energy Plan, which describes the power plant — 41 million dollars, 14 megawatts, “the first of its kind in the United States and the largest in the world” — that McPhee visited when researching his story. I found it on Google Maps by following McPhee’s directions.
If you were to abandon your car three miles from the San Joaquin County line and make your way on foot southwest one mile…
You can see the power plant. There’s no evidence of tires, or trucks moving them, so maybe the plant, having consumed the pile, is retired. Fortunately it never caught fire; that would’ve made a hell of a mess.
According to Wikipedia, we’ve reduced our inventory of stockpiled tires by an order of magnitude from a peak of a billion around the time McPhee wrote that article. We burn most of them for energy, and turn some into ground rubber for such uses as paving and flooring. So that’s progress. But I can’t help but wonder about the tire equivalent of Amory Lovins’ negawatt: “A watt of energy that you have not used through energy conservation or the use of energy-efficient products.”
Could retreaded passenger tires be an important source of negawatts? Do we reject the idea just because they’re as unfashionable as clip-on ties? I’m no expert on the subject, obviously, but I suspect these things might be true.
For 20 bucks or less, nowadays, you can buy an extra-wide convex mirror that clips onto your car’s existing rear-view mirror. We just tried one for the first time, and I’m pretty sure it’s a keeper. These gadgets claim to eliminate blind spots, and this one absolutely does. Driving down 101, I counted three seconds as a car passed through my driver’s-side blind spot. That’s a long time when you’re going 70 miles per hour; during that whole time I could see that passing car in the extended mirror.
Precious few gadgets spark joy for me. This one had me at hello. Not having to turn your head, avoiding the risk of not turning your head — these are huge benefits, quite possibly life-savers. For 20 bucks!
It got even better. As darkness fell, we wondered how it would handle approaching headlights. It’s not adjustable like the stock mirror, but that turns out not to be a problem. The mirror dims those headlights so they’re easy to look at. The same lights in the side mirrors are blinding by comparison.
I’ve been driving more than 40 years. This expanded view could have been made available at any point along the way. There’s nothing electronic or digital. It’s just a better idea that combines existing ingredients in a new way. That pretty much sums up my own approach to product development.
Finally, there’s the metaphor. Seeing around corners is a superpower I’ve always wanted. I used to love taking photos with the fisheye lens on my dad’s 35mm Exacta, now I love making panoramic views with my phone. I hate being blindsided, on the road and in life, by things I can’t see coming. I hate narrow-mindedness, and always reach for a wider view.
I’ll never overcome all my blind spots but it’s nice to chip away at them. After today, there will be several fewer to contend with.