Filtering the covid signals from the noise
What rules can you use to decide who to trust when there is so much conflicting information and opinion flying around?
Until the first few months of 2020 I was, like most people, blissfully ignorant about pandemics and epidemiology. Suddenly covid became a dominant feature in our lives and we were blasted with conflicting analysis and information. It became apparent fairly early in the crisis that “following the science” was a meaningless notion, given that scientists habitually disagreed about what the data meant and the best courses of action.
Even more confusingly, non-scientists like Rory Stewart and Jeremy Hunt clocked that the Government’s attachment to its pandemic plan was unjustified a week or so before their scientific advisers did. And former model and reality TV star Caprice turned out to be a better source of early advice on masks than Public Health England and the World Health Organisation. In both cases I’d lazily assumed the authorities must be right, and then felt foolish for doing so.
To make sense of the situation it was going to be necessary to decide carefully who to listen to, and to develop some rules for navigating through the noise to find the signals. Since this realisation I think I’ve done a reasonable job of filtering and passing on accurate information to Twitter followers, friends and family. So I thought I’d share the rules, as they can be broadly applied to figure out who to listen to on any complex, fast-moving story.
Rule 1: Qualifications tell you very little
My first step was to make a list of every infectious disease, epidemiology and public health expert I could find. This was helpful in that it allowed me to understand the differences of opinion amongst the expert community, but wasn’t much use in telling me who was more likely to be right. And whenever I ran into an appeal to authority argument it was usually a bad sign (“what exactly are your medical qualifications?” is a very unsatisfactory response to a challenge and suggests a lack of a good answer, as Caprice found).
A better test was whether the expert was providing added value. Were they pointing to relevant studies I hadn’t seen elsewhere? Were they doing their own modelling that helped explain trends better or offered a new way of viewing information? Or giving insights into the biology of the virus I hadn’t seen elsewhere?
Rule 2: Numerical competency is a valuable expertise on any issue involving data
As I whittled down my list of experts based on these criteria I realised that some of the best new information was coming from people who weren’t experts on pandemics at all. Or at least not in obviously relevant disciplines. People like Oliver Johnson (@BristOliver), a Maths Professor at Bristol University; John Burn-Murdoch (@jburnmurdoch), the Chief Data Reporter at the FT; and later James Ward (@JamesWard73), a Risk Director at an insurance company.
What they all had in common was an ability to analyse data in a way that aided understanding. (And the UK from relatively early on had a lot more data to analyse than many other countries thanks to high amounts of testing and genetic sequencing, as well as the regular ONS population samples, and the excellent daily dashboard).
I started to weight professional expertise in numerate disciplines like actuarial science or accountancy as highly as medical experience when making initial judgements as to whether to add someone to my list. Over the course of the pandemic I’ve come to rely on Oliver, John and James more than anyone else when trying to assess appropriate levels of risk and concern. Though, naturally this needed to be supplemented with medical expertise that could explain treatments and the underlying biological reasons why e.g. some people were affected more than others.
Rule 3: Look for scouts, not soldiers
In my previous post on “how to change your mind” I talked about Julia Galef’s book “The Scout Mindset”. A scout looks to create an accurate map of the world; a soldier looks to use their arguments to defeat others.
On covid plenty of experts quickly became soldiers. They took up a position such as “lockdowns are bad” (e.g. Carl Heneghan) or adopted slogans like Zero Covid (e.g. Deepti Gurdasani). Some joined groups like “Independent SAGE”, which seemed to be established as an almost political opposition to actual SAGE. I became more sceptical of anyone who did this because, consciously or not, the moment you’re fighting for a cause you’ll start cherry-picking. (To be clear, this has nothing to do with underlying intelligence – indeed cherry-picking convincingly is a skill that requires high intelligence).
Others, like the three data analysts mentioned above, and the UKHSA and Harvard epidemiologists Meaghan Kall (@kallmemeg) and Bill Hanage (@BillHanage), have been far more adaptive to the actual situation – sometimes urging concern and caution and at other times being more optimistic. They’ve never taken definitive positions for or against certain measures and, critically, have been reflective about their own biases and mistakes. All good signs of a scout mindset. Because of the way social media works absolutely no one has managed to avoid getting dragged into unproductive spats, or ever making a mistake, but scouts tend to try and extricate themselves and move on, as opposed to bearing long-term grudges. They are also much better at acknowledging trade-offs.
A scout mindset can operate at an institutional level too. The FT have been streets ahead of other papers during the crisis, not just because of John Burn-Murdoch but also their Science Commentator Anjana Ahuja (@anjahuja) and Health Reporter Oliver Barnes (@mroliverbarnes). At no point has the paper taken a strong political editorial line on covid, which has given their journalists the space to be scouts. (Honourable mentions to Jane Merrick at the i, Tom Whipple at the Times and Tom Chivers at Unherd for similar scout behaviours). The “Our World in Data” team based at Oxford University have also been exemplary scouts, focused on collecting and disseminating useful global data and pushing for accuracy in the way things are reported.
You can still get useful info from “soldiers”. For instance I’ve picked up quite a few interesting papers via Christina Pagel’s threads (@chrischirp). And it’s always useful to find the best people advocating for a particular cause for “steel-manning” purposes. But I’ve been wary of their overarching analysis because it is shaped by advocacy, not aimed at finding the most accurate map.
Rule 4: Follow networks
One of the best things about Twitter is that it allows you to see what sources of information other people value – either because they explicitly cite them or because they are regularly engaging with that person. This offers a great mechanism for building out a series of trusted sources from a single one. I found many of the people cited above via another one of them.
The danger of this approach is groupthink: you end up following a bunch of people who agree with each other. So I’d never rely on this alone as a method for deciding whether to trust a source, but it works well as a general rule. Scouts (on a given issue) tend to talk to other scouts because they’re engaged in the same mission, and are more likely to change their minds.
Since developing this approach for covid I’ve found myself applying it more in other areas like political analysis, which has led to adding more numerate analysts of e.g. polling data to my UK politics list and removing commentators who keep pushing the same lines, even if they’ve spent a lot of time working in politics. I’d be interested in whether anyone else has other approaches/rules for finding the best analysts on certain topics or thinks I’ve gone wrong somewhere in my own list.
Sometimes it is possible to follow the science when it is being done properly. I’m thinking mainly of the Recovery Trial (follow @martinlandray for updates), where pharmaceutical interventions have been rigorously tested, providing definitive answers. Also people who stick to their specialist area (e.g. @cathnoakes on ventilation science, Prof Iwasaki @VirusesImmunity) - I often find it’s people willing to comment outside their expertise who are unreliable.
I’ve also found some health professionals, such as @rupert_pearse, reliable at conveying the situation “on the ground” - I think your points about identifying reliable people in general can help find people like him.
Re the scout mindset, I would add @edyong209.
Also worth looking at the sociologist Zeynep Tufekci in the US, who was early and right in her critique of the CDC's failures. This Ben Smith profile attributes her many successes to 1) an international background, 2) an ability to work across disciplines and 3) a commitment to systems thinking: https://www.nytimes.com/2020/08/23/business/media/how-zeynep-tufekci-keeps-getting-the-big-things-right.html. In line with Tetlock's findings.