Talking with Martin Hepp about solving the paradox of choice

In his luminous essay Information obesity, Ned Gulley illustrates the paradox of choice:

I’m reading about the Mohawk Trail, where the Cold River crashes noisily down the granitic glacier-fractured hillside. Where whispering understory birches are sheltered by towering firs. Now my mouth is watering. I have to go. I am referred to ReserveAmerica, a well-built web site that manages thousands of parks nationwide, and — DAMN! Mohawk Trail State Forest is booked solid. I start researching other nearby campgrounds, and now I’m sucked into the game. Unfortunately, ReserveAmerica lets you pick your campsite from an interactive map, and my book tells you which sites are the very best at each campground. Just when you start to salivate about the perfect spot, your dream is dashed by some early bird camper who’s beaten you to the reservation. You can cycle through this process for hours.

I borrow the phrase paradox of choice from Barry Schwartz, who argues in a compelling TED talk that as we broaden our options in all areas, we ratchet up our expectations about how good those options will be. The result is disappointment.

Less is more — except when it isn’t. My counterexample is a recent quest of mine for a particular kind of double-stick tape I needed for an interior storm window project. Key criteria included width (roughly 5/8″) and type of adhesion (plastic to wood). Web search yielded a bewildering array of choices, from various sources, but no way to filter by my criteria. This isn’t some idle consumer whim. I’m trying to save energy in the most effective way I can. I want to see as many qualifying choices as possible. But I can’t.

In Restructuring expert attention to revive the lost art of personal customer service I described one great solution to this problem: Kevin, a resident expert with whom I discussed SCF-01 and DC-4420LB, and with whose help I eventually settled on 3M-4905.

When there’s a Kevin available, he’ll be my first choice. But there won’t always be a Kevin. The answer in that case is not to artificially constrain my choices. That already happens because web search doesn’t enable me to state my criteria. Instead I want to search more effectively. To do that — as noted by several comments on Barry Schwartz’s TED video — we need to overcome filter failure.

This week’s Innovators show, with Martin Hepp, explores how we can create better filters. It’s a follow-on to an earlier show with Kingsley Idehen on the topics of RDFa, the GoodRelations ontology, and the idea that we can become the masters of our own search indexes.

The conversation mainly revolves around how to express an offer for goods or services by means of RDFa snippets that use the GoodRelations e-commerce vocabulary, that are generated by a form-based tool, and that rely on the web’s venerable traditions of view source and copy/paste.
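To make that concrete, an offer marked up this way might look roughly like the following. This is a sketch, not output from the tool discussed on the show; the product name and price are hypothetical, and the markup assumes the GoodRelations terms gr:Offering, gr:name, gr:hasBusinessFunction, and gr:UnitPriceSpecification:

```html
<!-- A hypothetical seller's offer, expressed as RDFa using GoodRelations -->
<div xmlns:gr="http://purl.org/goodrelations/v1#"
     xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
     typeof="gr:Offering">
  <span property="gr:name">3M 4905 double-stick tape, 5/8 inch</span>
  <!-- This offering is for sale -->
  <div rel="gr:hasBusinessFunction"
       resource="http://purl.org/goodrelations/v1#Sell"></div>
  <!-- Price: a hypothetical figure, in US dollars -->
  <div rel="gr:hasPriceSpecification" typeof="gr:UnitPriceSpecification">
    <span property="gr:hasCurrency" content="USD"></span>
    <span property="gr:hasCurrencyValue" datatype="xsd:float">19.95</span>
  </div>
</div>
```

Because the snippet is just attributes layered onto ordinary HTML, it travels well with the view-source-and-paste tradition the show describes.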

But the same vocabulary used to describe offers can also express needs. And here Martin makes a really good observation about the current architecture of web search:

You can only search synchronously. You can’t ask a question and say, ‘Work on this for two weeks, improve your results in the background, and then come back with the best answer.’ But think about the potential if we can increase the amount of computational time for returning results. Currently there is only 400 milliseconds, because this is the average patience of web users. But if you can express what you’re looking for, and save it with a name, then the search engine will have two weeks to produce a good list of results.
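The symmetric case, a statement of need like my tape quest, might be sketched in the same vocabulary. Again this is only an illustration, assuming the GoodRelations gr:seeks property and the gr:Buy business function:

```html
<!-- A hypothetical buyer's statement of need, in GoodRelations RDFa -->
<div xmlns:gr="http://purl.org/goodrelations/v1#"
     typeof="gr:BusinessEntity">
  <div rel="gr:seeks" typeof="gr:Offering">
    <span property="gr:description">Double-stick tape, roughly 5/8 inch wide,
      suitable for bonding plastic film to wood</span>
    <!-- The business function is Buy rather than Sell -->
    <div rel="gr:hasBusinessFunction"
         resource="http://purl.org/goodrelations/v1#Buy"></div>
  </div>
</div>
```

Saved under a name, a description like this is exactly the kind of standing query a search engine could chew on for two weeks.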

I was also intrigued by Martin’s comments on intermediaries and affiliates. In his view, a commerce site like Amazon is not the only possible source of filter-enhancing metadata. Affiliates can play too. A travel service, for example, might supply search engines with enhanced views of Amazon relative to certain places and certain areas of expertise.

The paradox of choice is real, and in many cases we may indeed be happier with less. But when we really need or want more options, we shouldn’t have to prematurely foreclose them. Search could be far more effective, and an approach like the one Martin envisions is the way to make it so.

9 thoughts on “Talking with Martin Hepp about solving the paradox of choice”

  1. Downloading the show now. Wondering if VRM was a central part of the discussion.

    We often joke that it would be nice to have a human filter out our spam. I can see a service based on micropayments and Amazon’s Mechanical Turk model backing a VRM-focused search for purchase of many items, particularly large ticket items. Making virtual assistants affordable/usable by Joe Six-Pack. The key will be making the micropayment “commission” go toward the “best search” not necessarily just another paid form of spam funded by the sellers.

  2. I worked on a search team for one of the engines during the first dot com boom. Even then (late 90s), the concept of asynchronous search was discussed. I see value in the idea; however, the problem then (and probably now), from an engine’s standpoint, is: is it worth spending orders of magnitude more processing time to return something with an incremental improvement in relevance? From a searcher’s standpoint, will I pay for something that has near 100% relevance when I can get something with decent relevance for free?

    Also consider that once you’re not expecting a synchronous response, you open the possibility for human involvement. You might as well consider things like “Yahoo Answers,” “Amazon Mechanical Turk,” etc., as the output from an “asynchronous search” service.

  3. Wondering if VRM was a central part of the discussion.

    It wasn’t, the focus was mainly on crafting offers vs. crafting statements of need. However the latter is precisely what Doc et al. mean by VRM.

  4. is it worth spending orders of magnitude more processing time to return something with an incremental improvement in relevance?

    If the improvement is only incremental then no. But one of the things that stops us from issuing more specific queries today is that they return no results. In that case the difference between no results and some results is more than incremental!

  5. Another thought: Latency may be more of an issue than cycles. I think of the web of linked data as a highly decentralized and denormalized database. It takes time to traverse those links. Can that web of links be collapsed into an optimal core in the way that search indexes have been?

    I was skeptical that fulltext search could keep pace with the entirety of a fast-growing web, but it has. So maybe this latency problem will be conquered. But the first examples of linked-data traversal that I’m seeing — in Freebase and DBpedia — are pretty slow.

  6. I LOVE the Mohawk Trail. I’m proud to say I have some Mohawk ancestry from far back, as well as Huron and Mohican.
    Charlemont is a beautiful place in the Fall. I want to go there every Fall that I can.
