In 2018 I built a tool to help researchers evaluate a proposed set of credibility signals intended to enable automated systems to rate the credibility of news stories.
Here are examples of such signals:
– Authors cite expert sources (positive)
– Title is clickbaity (negative)
And my favorite:
– Authors acknowledge uncertainty (positive)
Will the news ecosystem ever be able to label stories automatically based on automatic detection of such signals, and if so, should it? These are open questions. The best way to improve news literacy may be the SIFT method advocated by Mike Caulfield, which shifts attention away from intrinsic properties of individual news stories and advises readers to:
– Stop
– Investigate the source
– Find better coverage
– Trace claims, quotes, and media to original context
“The goal of SIFT,” writes Charlie Warzel in Don’t Go Down the Rabbit Hole, “isn’t to be the arbiter of truth but to instill a reflex that asks if something is worth one’s time and attention and to turn away if not.”
SIFT favors extrinsic signals over the intrinsic ones that were the focus of the W3C Credible Web Community Group. But intrinsic signals may yet play an important role, if not as part of a large-scale automated labeling effort then at least as another kind of news literacy reflex.
This morning, in How public health officials can convince those reluctant to get the COVID-19 vaccine, I read the following:
What made these Trump supporters shift their views on vaccines? Science — offered straight-up and with a dash of humility.
The unlikely change agent was Dr. Tom Frieden, who headed the Centers for Disease Control and Prevention during the Obama administration. Frieden appealed to facts, not his credentials. He noted that the theory behind the vaccine was backed by 20 years of research, that tens of thousands of people had participated in well-controlled clinical trials, and that the overwhelming share of doctors have opted for the shots.
He leavened those facts with an acknowledgment of uncertainty. He conceded that the vaccine’s potential long-term risks were unknown. He pointed out that the virus’s long-term effects were also uncertain.
“He’s just honest with us and telling us nothing is 100% here, people,” one participant noted.
Here’s evidence that acknowledgement of uncertainty really is a powerful signal of credibility. Maybe machines will be able to detect it and label it; maybe those labels will matter to people. Meanwhile, it’s something people can detect and do care about. Teaching students to value sources that acknowledge uncertainty, and discount ones that don’t, ought to be part of any strategy to improve news literacy.
6 thoughts on “Acknowledgement of uncertainty”
If there are rules like that, the (AI) system will end up being gamed.
Yup. If the game is to acknowledge uncertainty, though, that might not be terrible!
Perhaps Bayesian reasoning might be a framework to adopt here.
Though it can be hard to estimate all the required probabilities/uncertainties.
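One way to make the Bayesian framing concrete is to treat "acknowledges uncertainty" as evidence and update a prior belief that a source is credible. This is only a sketch; all the probabilities below are illustrative guesses, not measured values.

```python
def posterior_credible(prior: float,
                       p_ack_given_credible: float,
                       p_ack_given_not: float) -> float:
    """Bayes' rule: P(credible | acknowledges uncertainty)."""
    evidence = (p_ack_given_credible * prior
                + p_ack_given_not * (1.0 - prior))
    return p_ack_given_credible * prior / evidence

# Illustrative numbers only: suppose credible sources acknowledge
# uncertainty 70% of the time, non-credible ones 20% of the time,
# and we start from a 50/50 prior.
print(posterior_credible(0.5, 0.7, 0.2))  # ≈ 0.78
```

The update direction is what matters, not the exact numbers: as long as credible sources acknowledge uncertainty more often than non-credible ones, observing the signal raises the posterior above the prior.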
I’d also say contextualization. Risk is very decontextualized in the way most people encounter it. One thing Frieden did was shift the frame from “elimination of risk” to “relative reduction of risk”. Not the uncertainty about the vaccine’s effects, for example, but the uncertainty of those effects when compared to the vastly more uncertain world of long-term COVID effects. Part of what has happened with a lot of public debates is that we’ve moved to demanding a level of absolute certainty rather than relative certainty. The same is true with climate change, for example. Tell even a liberal that we are more certain that manmade activities cause climate change than we are that cigarettes cause cancer and they will be stunned, because they are always hearing these 98% or 95% figures. So it’s not just uncertainty, I think, but the contextualization of uncertainty.
“Risk” is probably OT for this thread. But the way “risk” is used is so often unclear that it’s hard to respond to, or think about, statements like these.
There *is* a technical approach that is used in some places and has some merit. Risk is inherently some combination of likelihood and impact. If something is certain to happen but won’t hurt anything, there’s really no risk involved. If the probability is low but the impact is very large (say, death), then the risk can be high.
If we normalize both the probability and impact to 1.0, then the risk has to be 0 when either the probability or impact are zero. The simplest possible relation is
R = I * P
where R = risk, P = probability, and I = impact.
There’s some reason to think these curves aren’t really straight, so possibly this might be better:
R = I * P^k
where k could be something like sqrt(2), say.
At this point, the assessment of impact starts to seem somewhat subjective, so these relationships are probably more notional than exact. But they give a starting point for discussion.
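The relation above is easy to sketch in code. This follows the comment’s own formula R = I * P^k with impact and probability normalized to [0, 1] and k = sqrt(2) as suggested; the function name and the sample numbers are mine, for illustration only.

```python
from math import sqrt

def risk(impact: float, probability: float, k: float = sqrt(2)) -> float:
    """Notional risk score: R = I * P^k, with I and P in [0, 1].

    k = 1 gives the simplest linear relation R = I * P; k = sqrt(2)
    bends the curve so that low probabilities are discounted more steeply.
    """
    if not (0.0 <= impact <= 1.0 and 0.0 <= probability <= 1.0):
        raise ValueError("impact and probability must be normalized to [0, 1]")
    return impact * probability ** k

# Risk is zero whenever either factor is zero, as required...
assert risk(1.0, 0.0) == 0.0
assert risk(0.0, 1.0) == 0.0

# ...and a low-probability, high-impact event can still outscore
# a likely but nearly harmless one.
print(risk(1.0, 0.05))   # rare but severe
print(risk(0.01, 0.9))   # likely but trivial
```

With these numbers the rare-but-severe case scores higher than the likely-but-trivial one, which is the intuition the comment is after: neither probability nor impact alone determines the risk.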