A Hard Lesson About Blind Trust in Scientists

In early March, British leaders planned to take a laissez-faire approach to the spread of the coronavirus. Officials would pursue "herd immunity," allowing as many people as possible in non-vulnerable categories to catch the virus in the hope that it would eventually stop spreading. But on March 16, a report from the Imperial College Covid-19 Response Team, led by noted epidemiologist Neil Ferguson, shocked the Cabinet of the United Kingdom into a complete reversal of its plans. Report 9, titled "Impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand," used computational models to predict that, absent social distancing and other mitigation measures, Britain would suffer 500,000 deaths from the coronavirus. Even with mitigation measures in place, the report said, the epidemic "would still likely result in hundreds of thousands of deaths and health systems (most notably intensive care units) being overwhelmed many times over." The conclusions so alarmed Prime Minister Boris Johnson that he imposed a national quarantine.

Subsequent publication of the details of the computer model that the Imperial College team used to reach its conclusions raised eyebrows among epidemiologists and specialists in computational biology and presented some uncomfortable questions about model-driven decision-making. The Imperial College model itself appeared solid. As a spatial model, it divides the area of the U.K. into small cells, then simulates various processes of transmission, incubation, and recovery over each cell. It factors in a good deal of randomness. The model is typically run tens of thousands of times, and results are averaged, a technique commonly referred to as an ensemble model.
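For readers unfamiliar with this kind of modeling, the sketch below illustrates the general idea in a deliberately simplified form: a stochastic compartment model run over a grid of spatial cells, repeated many times and averaged. Every parameter, the grid size, and the simple susceptible-infected-recovered structure here are invented for illustration; this is not the Imperial College code, which is far more detailed.

```python
import numpy as np

# Illustrative sketch only: a toy stochastic SIR model on a grid of cells,
# run many times and averaged (an "ensemble"), in the spirit of the spatial
# approach described above. All parameters are assumptions, not the
# Imperial College model's values.

rng = np.random.default_rng(0)

GRID = 20            # hypothetical 20x20 grid of spatial cells
POP_PER_CELL = 1000  # hypothetical population in each cell
BETA = 0.3           # assumed transmission rate
GAMMA = 0.1          # assumed recovery rate
DAYS = 120
RUNS = 200           # the real model is run far more times

def run_once():
    """One stochastic simulation; returns cumulative removed count per day."""
    S = np.full((GRID, GRID), POP_PER_CELL, dtype=float)
    I = np.zeros((GRID, GRID)); I[GRID // 2, GRID // 2] = 10  # seed one cell
    S -= I
    R = np.zeros((GRID, GRID))
    trajectory = []
    for _ in range(DAYS):
        # Infection pressure in each cell, with some mixing from neighbors.
        neighbors = (np.roll(I, 1, 0) + np.roll(I, -1, 0) +
                     np.roll(I, 1, 1) + np.roll(I, -1, 1)) / 4
        pressure = BETA * (0.8 * I + 0.2 * neighbors) / POP_PER_CELL
        # Random draws: each susceptible is infected with prob 1 - exp(-pressure).
        new_inf = rng.binomial(S.astype(int), 1 - np.exp(-pressure))
        new_rec = rng.binomial(I.astype(int), 1 - np.exp(-GAMMA))
        S -= new_inf
        I += new_inf - new_rec
        R += new_rec
        trajectory.append(R.sum())
    return np.array(trajectory)

# Ensemble: repeat the stochastic simulation and average the trajectories.
ensemble = np.mean([run_once() for _ in range(RUNS)], axis=0)
print(f"Mean cumulative recovered/removed after {DAYS} days: {ensemble[-1]:.0f}")
```

Averaging over many randomized runs smooths out the noise in any single trajectory, which is why results from such models are quoted as ensemble averages rather than as the output of one simulation.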

