
Did shutting down the schools do any good at all?

I had imagined it would take perhaps twelve months before sound evidence concerning the containment of Covid-19 emerged, and good epidemiologists could begin to tell us which countries actually fared worst, and why. I was right: careful and informative studies are already starting to appear, offering exactly this kind of data-based evidence. One eye-catching assessment, published recently in _Nature_, examined the effect of “non-pharmaceutical interventions” (i.e., public health and social measures) on Covid-19 across Europe. It's a collaborative study from Imperial College (yes, the ubiquitous Prof. Ferguson again) and Oxford, examining the impact of some of the principal non-pharmaceutical interventions across 11 European countries from February 2020 until early May, when lockdowns began to be lifted.

Admittedly the results are based on mathematical modelling, concerning which all of us are a little more circumspect than we were in happier days, but a number of sophisticated approaches are adopted to minimise the risk of error in estimating the effect of individual measures. For example, the authors “worked backwards” from the death rate — one of the more definitive endpoints — to infer infection rates, thereby avoiding data based upon the vagaries of population testing. And the authors confidently assert that pooling information between eleven countries “helps to overcome idiosyncrasies in the data” (thus exploiting the well-known scientific rule that 11 wrongs can make a right). The results were evidently robust enough to convince _Nature_, which employs one of the most stringent refereeing systems in world science.
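The “working backwards” idea can be caricatured in a few lines of code. The sketch below is only a caricature: the _Nature_ study fits a full Bayesian model to real data, whereas this toy version simply shifts an invented daily death series back in time by an assumed infection-to-death delay and rescales it by an assumed infection fatality ratio. Every number in it (the deaths, the delay, the fatality ratio) is an illustrative assumption, not a figure from the paper.

```python
# Illustrative sketch only: a crude "work backwards from deaths" calculation.
# The study itself fits a far more sophisticated Bayesian model; the death
# counts, infection fatality ratio and delay distribution below are invented.
import numpy as np

# Hypothetical daily reported deaths for one country.
daily_deaths = np.array([2, 5, 9, 20, 35, 60, 90, 120, 140, 150, 145, 130], dtype=float)

# Assumed infection fatality ratio: fraction of infections that end in death.
ifr = 0.01

# Assumed delay from infection to death, as a discrete distribution over days
# (a rough bell shape centred on about three weeks).
delay_days = np.arange(10, 31)
delay_pmf = np.exp(-0.5 * ((delay_days - 21) / 4.0) ** 2)
delay_pmf /= delay_pmf.sum()

# Deaths on day t reflect infections roughly one delay earlier, scaled by the
# IFR. Inverting that properly is a deconvolution; the crude shortcut here
# just shifts the death series back by the mean delay and divides by the IFR.
mean_delay = int(round(float((delay_days * delay_pmf).sum())))
implied_infections = daily_deaths / ifr
infection_days = np.arange(len(daily_deaths)) - mean_delay

for day, n in zip(infection_days, implied_infections):
    print(f"day {day:+3d}: roughly {n:,.0f} new infections implied")
```

A single-country back-calculation of this kind is obviously noisy, which is exactly why the authors lean on pooling across eleven countries to “overcome idiosyncrasies in the data”.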
From their modelling, the authors infer infection rates across Europe not dissimilar to those emerging from other sources and data — highest in Belgium (around 8 per cent of the population infected), a little lower in Spain, Italy and the UK (4-6 per cent), and lowest in Germany (0.85 per cent).

More novel and interesting are the data on the impact of individual non-pharmaceutical interventions. The analyses imply that lockdown restrictions had a major impact across Europe in reducing infection rates — so that, “across 11 countries [approximately] 3.1 million deaths have been averted”. Three million lives saved is quite something — hardly the disastrous management most media would have you believe. The authors humbly emphasise potential sources of error in their estimates — “we rely on death data that are incomplete, show systematic biases in reporting and are subject to future consolidation”. But they do fundamentally reassert the plausibility of their results.
No less fascinating are the conclusions concerning the other interventions. While lockdown appears effective, the four other measures — social distancing, banning public events, closing schools and self-isolation — each appear to have had near-negligible effects. These results for schools and public events certainly tally with other independent data on infectivity in childhood, and with the lack of infection spikes following sundry public gatherings ranging from raves to political demonstrations. The authors conclude that their evidence for a major beneficial impact of lockdown, allowing us significant control over the spread of the virus, represents a striking cause for long-term optimism. What they do not discuss is whether the absence of an identifiable impact from other measures — perhaps especially school closures and banning public events — implies that these non-pharmaceutical interventions should be abandoned.