The Foundations of Faulty Reason
Right to Information request reveals the basis of NB Department of Health's flawed justification for removing all protective pandemic measures
On April 1, 2022, members of PoP NB submitted a Right to Information request to the New Brunswick Department of Health asking for records referencing the removal of the mandatory order, the criteria for its removal, and the justification for removing it. In addition to a pile of redacted emails from journalists, we received one single, damning document.
In numerous interviews, press conferences and even a formal response to the Child and Youth Advocate, Chief Medical Officer of Health Jennifer Russell assured all interested that the decision to recommend an end to all mandated public health precautions and pursue a vaccines-only response was founded in science and supported by research.
This assurance was met with justifiable scepticism, considering that the removal of all pandemic protections flouted the current recommendations of the Public Health Agency of Canada, which clearly states:
Stay at home when you're sick,
Wear a well-fitting respirator or mask,
Improve indoor ventilation,
Practise respiratory etiquette and hand hygiene, and
Clean and disinfect surfaces and objects
In Child and Youth Advocate Kelly Lamrock’s damning condemnation of Public Health’s decision to remove masking in schools, he asked Jennifer Russell to explain:
What specific information, data, advice, or studies from your office, to your knowledge, has led to the most recent decision by DEECD to reduce or remove Covid restrictions such as masking?
Russell’s response was disjointed, qualitative, anecdotal, and characterized by leaps of logic one would attribute to pandemic deniers and conspiracy theorists. In it, Russell offered her oft-used non-quantitative appeal to lemming-like national solidarity, confirming that by March 2022, “most jurisdictions across Canada … made the decision to discontinue masking in schools,” along with a citation to an external document written by psychologist Manon Porelle, which she first claimed concluded that masking harmed children and subsequently confirmed did not actually exist.
In the utter absence of any evidence which could rationalize the abandonment of public health protections, PoPNB submitted a Right to Information request to the New Brunswick Department of Health requesting copies of all records detailing any discussions pertaining to the March 14, 2022 lifting of the mandatory order.
In response to our request, the Department of Health supplied copies of similar questions from media and a January 2022 pre-print revision of a non-peer-reviewed working paper titled A Literature Review and Meta-Analysis of the Effects of Lockdowns on COVID-19 Mortality, published by the Johns Hopkins Institute for Applied Economics and authored by three economists tied to the US oil-industry groups behind the Great Barrington Declaration and climate-change denial campaigns: Jonas Herby (linked to the American Institute for Economic Research, the libertarian organization funded by Big Oil players such as Exxon and the Koch brothers that set the stage for the Great Barrington Declaration), Lars Jonung, and Steve Hanke (of the Cato Institute, the American libertarian and climate-denialist think tank founded by Charles Koch).
In the absence of any other information, we are left to conclude that this non-peer-reviewed study forms the sole basis of New Brunswick Public Health’s "evidence-based" decision making.
One Single Damning Document
The study purports to show that lockdowns (defined as the imposition of at least one compulsory, non-pharmaceutical intervention) had little to no effect on Covid-19 mortality while imposing “enormous economic and social costs” where they were adopted. It argues lockdown policies are “ill-founded” and should be rejected wholesale as a pandemic response.
A thick document from an institute with a credible-sounding name was, apparently, sufficient to justify policy moves that flew in the face of established epidemiological science. It appears the New Brunswick Department of Health, and subsequently the Government of New Brunswick, based the entirety of their decision-making process for removing all pandemic protections on this single report.
Unfortunately, there are myriad problems with this study.
Almost immediately following the release of the working paper, experts began commenting on its faulty methodology. Most obviously: a conclusion that the majority of worldwide public health responses were ineffective is “a strong finding that should be backed up by equally strong evidence.”
Prof Neil Ferguson, Director of the MRC Centre for Global Infectious Disease Analysis, Jameel Institute, Imperial College London, stated:
“This report on the effect of “lockdowns” does not significantly advance our understanding of the relative effectiveness of the plethora of public health measures adopted by different countries to limit COVID-19 transmission. First, the policies which comprised “lockdown” varied dramatically between countries, meaning defining the term is problematic. In their new report, Herby et al appear to define lockdown as imposition of one or more mandatory non-pharmaceutical interventions (NPIs); by that definition, the UK has been in permanent lockdown since 16th March 2020, and remains in lockdown – given it remains compulsory for people with diagnosed COVID-19 to self-isolate for at least 5 days.
“A second and more important issue is that the statistical methods used to estimate the impact of NPIs using observational data need to be appropriate. Such interventions are intended to reduce contact rates between individuals in a population, so their primary impact, if effective, is on transmission rates. Impacts on hospitalisation and mortality are delayed, in some cases by several weeks. In addition, such measures were generally introduced (or intensified) during periods where governments saw rapidly growing hospitalisations and deaths. Hence mortality immediately following the introduction of lockdowns is generally substantially higher than before. Neither is lockdown a single event as some of the studies feeding into this meta-analysis assume; the duration of the intervention needs to be accounted for when assessing its impact.
“A consequence of NPIs affecting transmission (rather than total deaths directly), is that interventions cannot be assumed to have fixed additive effects on outcome measures such as deaths over a certain time window – interventions affect transmission rates, and therefore the appropriate outcome measures to consider are growth rates (of cases or deaths) over time, with appropriate time lags – not total cases or deaths. Many studies of the effects of NPIs fail to recognise this important issue, but notable and methodologically rigorous exceptions have been published by both economists (e.g. Chernozhukov et al – https://www.sciencedirect.com/science/article/pii/S0304407620303468, which studied NPIs in the US) and public health researchers (e.g. Brauner et al – https://www.science.org/doi/10.1126/science.abd9338, an analysis of NPIs in 41 countries). Interestingly, both of the latter papers (only one of which is included in the Herby et al meta-analysis) reach similar qualitative conclusions, despite their results not being directly comparable. Namely, that the effectiveness of “lockdowns” came from the combined impact of the multiple individual interventions which made up that policy in different countries and states: limiting gathering size, business closure, mask wearing, school closure and stay at home orders. While removing any one of the measures making up “lockdown” is predicted by most studies to have a relatively limited effect on the effectiveness of the overall policy, that does not mean that the combined set of measures in place during times countries were in “lockdown” were not highly effective at driving down both COVID-19 transmission and daily deaths.”
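Ferguson’s point about lags can be made concrete with a toy simulation. Every parameter below is invented purely for illustration; this is a sketch of the reasoning, not a model of any real outbreak. An intervention that sharply cuts transmission on a given day produces almost no change in deaths for roughly three weeks, so a naive before-and-after comparison of mortality would wrongly conclude it did nothing:

```python
# Toy SIR-style simulation illustrating why comparing deaths immediately
# before and after an intervention is misleading. All parameters are
# invented for illustration only.
def simulate(days=120, intervention_day=40, beta_before=0.30, beta_after=0.08,
             gamma=0.10, ifr=0.01, death_lag=21, population=10_000_000):
    s, i = population - 100.0, 100.0
    daily_infections = []
    for day in range(days):
        beta = beta_before if day < intervention_day else beta_after
        new_inf = beta * s * i / population      # new infections today
        s -= new_inf
        i += new_inf - gamma * i                 # recoveries leave the infected pool
        daily_infections.append(new_inf)
    # Deaths trail infections by ~3 weeks.
    return [0.0] * death_lag + [ifr * x for x in daily_infections[: days - death_lag]]

deaths = simulate()

# Naive before/after comparison: deaths in the 14 days AFTER the intervention
# are still far HIGHER than in the 14 days before, because they reflect
# infections acquired during the pre-intervention growth phase.
assert sum(deaths[40:54]) > sum(deaths[26:40])

# The intervention's effect only shows up in mortality after the lag:
# daily deaths peak around day 40 + 21 and decline thereafter.
peak_day = deaths.index(max(deaths))
assert 55 <= peak_day <= 70
```

In this sketch, a mortality-window comparison of the kind Ferguson criticizes would score the intervention as a failure even though it is, by construction, working exactly as intended.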
Dr Seth Flaxman, Associate Professor in the Department of Computer Science, University of Oxford, stated:
“Smoking causes cancer, the earth is round, and ordering people to stay at home (the correct definition of lockdown) decreases disease transmission. None of this is controversial among scientists. A study purporting to prove the opposite is almost certain to be fundamentally flawed.
“In this case, a trio of economists have undertaken a meta-analysis of many previous studies. So far so good. But they systematically excluded from consideration any study based on the science of disease transmission, meaning that the only studies looked at in the analysis are studies using the methods of economics. These do not include key facts about disease transmission such as: later lockdowns are less effective than earlier lockdowns, because many people are already infected; lockdowns do not immediately save lives, because there’s a lag from infection to death, so to see the effect of lockdowns on Covid deaths we need to wait about two or three weeks. (This was all known in March 2020 – we discussed it in a paper released that month, and later published in Nature. Our paper is excluded from consideration in this meta-analysis.)
“It’s as if we wanted to know whether smoking causes cancer and so we asked a bunch of new smokers: did you have cancer the day before you started smoking? And what about the day after? If we did this, obviously we’d incorrectly conclude smoking is unrelated to cancer, but we’d be ignoring basic science. The science of diseases and their causes is complex, and it has a lot of surprises for us, but there are appropriate methods to study it, and inappropriate methods. This study intentionally excludes all studies rooted in epidemiology–the science of disease.”
Prof Samir Bhatt, Professor of Statistics and Public Health, Imperial College London, stated:
“I find this paper has flaws and needs to be interpreted very carefully. Two years in, it seems still to focus on the first wave of SARS-COV2 and in a very limited number of countries. The most inconsistent aspect is the reinterpreting of what a lockdown is. The authors define lockdown as “the imposition of at least one compulsory, non-pharmaceutical intervention”. This would make a mask wearing policy a lockdown. For a meta-analysis, using a definition that is at odds with the dictionary definition (a state of isolation or restricted access instituted as a security measure) is strange.
“The authors then further confuse matters when in Table 7 they revert to the more common definition of lockdown. Many scientists, including myself, quickly moved on from the word “lockdown” as this isn’t really a policy (Brauner et al 2020, and my work in Sharma et al 2021). It’s an umbrella word for a set of strict policies designed to reduce the reproduction number below one and halt the exponential growth of infections. Lockdown in Denmark and Lockdown in the UK are made up of very different individual policies.
“Aside from issues of definitions there are other issues such as (a) It’s not easy to compare Low and High income countries in terms of the enforcement and adherence of policies, (b) Many countries locked down before seeing exponential growth and therefore saw no reduction in deaths, (c) There are lags – interventions operate on transmission but mortality is indirect and lagged – comparing mortality a month before and after lockdown is likely to have no effect (e.g. Bjørnskov 2021a), (d) As I have mentioned it looks at a tiny slice of the pandemic, there have been many lockdowns since globally with far better data, (e) There are many prominent studies that cover the period in question looking at infections, including Brauner et al 2020, Alfano et al 2020, Dye et al 2020, Lai et al 2020, Hsiang et al 2020, Salje et al 2020 etc. The list of such studies is very long and suggests a highly incomplete meta-analysis.”
Nicolas Banholzer (ETH Zürich - Department of Management, Technology, and Economics), Adrian Lison (ETH Zürich - Department of Biosystems Science and Engineering), and Werner Vach (University of Basel), published a response to the working paper which identified multiple issues including:
Lockdown is an unspecific, ill-defined term and thus an inappropriate starting point for meta-analyses,
If anything, the lockdown effect is not the effect of single NPIs but the combined effect of multiple NPIs,
Mortality is not the only relevant and not a conclusive measure of NPI effectiveness, and
Highly restrictive eligibility criteria are no replacement for rigorous quality assessment.
Banholzer et al. begin their response by questioning the attempt to define “lockdown” as any single non-pharmaceutical intervention (NPI).
Lockdown is a commonly used term to refer to a broad set of NPIs that governments implemented to control transmission, e. g. school closures, business closures, gathering bans, or shelter-in-place orders (SIPOs). Governments often differed in the specific kinds of NPIs that they implemented as part of their lockdown. As such, people around the world associate lockdowns with different kinds of NPIs. Commonly though, the lockdown is associated with a combination of multiple specific NPIs, most often culminating in the strict order to stay at home for all but essential purposes.
Herby et al. vaguely define the lockdown as “any policy consisting of at least one NPI” (p. 5). This implies that the policies and corresponding NPIs can vary from country to country. Since there was substantial variation in the policies and NPIs between countries, it is unclear how to conduct a meta-analysis when the intervention is not the same across populations and can thus not be compared. If the lockdown can be anything from a single NPI to a combination of multiple NPIs, from wearing face masks to SIPOs, then the effects are hardly comparable across studies investigating different populations and interventions.
The response clearly lays out why it would be more reasonable to conduct meta-analyses of specific NPIs, as is commonly done in this type of study. Specific NPIs can be defined clearly and are therefore more comparable across populations than “the unspecific, ill-defined ‘lockdown’.”
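The comparability objection can be illustrated with a toy meta-analysis. The numbers below are entirely invented: three hypothetical studies, each measuring a different intervention that has been labelled a “lockdown,” are pooled with standard inverse-variance weighting, and the I² heterogeneity statistic immediately flags that nearly all the disagreement between studies is real variation rather than sampling noise, meaning the pooled figure describes no actual intervention:

```python
# Invented effect estimates (% change in mortality) and standard errors for
# three hypothetical studies, each measuring a DIFFERENT "lockdown".
studies = {
    "mask mandate":       (-2.0, 1.0),
    "gathering ban":      (-8.0, 2.0),
    "stay-at-home order": (-25.0, 3.0),
}

# Fixed-effect inverse-variance pooling, as in a standard meta-analysis.
w = {name: 1 / se ** 2 for name, (_, se) in studies.items()}
pooled = sum(w[n] * eff for n, (eff, _) in studies.items()) / sum(w.values())

# Cochran's Q and Higgins' I^2: how much the studies disagree beyond what
# sampling error alone would produce.
q = sum(w[n] * (eff - pooled) ** 2 for n, (eff, _) in studies.items())
i2 = max(0.0, (q - (len(studies) - 1)) / q) * 100

print(f"pooled effect: {pooled:.1f}%, I^2 = {i2:.0f}%")
```

With these invented inputs the pooled estimate comes out around −5%, but I² is roughly 96%, i.e. the single “lockdown effect” is an average over interventions that plainly do not measure the same thing.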
Herby et al. proceed from this erroneous definition of “lockdown” to measuring an associated “lockdown effect.” Banholzer et al. rightly call out this error, noting the problem of attributing the “lockdown effect” to specific NPIs rather than to the combined effect of all NPIs in force at a given time.
A Sole Focus on Mortality
Banholzer et al. make their strongest point about the working paper’s misguided focus solely on mortality, often with no established causal link to the NPIs it is attempting to measure. Indeed, the paper strips away most other context and simply looks for changes in mortality in close proximity to the ill-defined “lockdown.” The response states:
While mortality is an important outcome, it is clearly not the only relevant outcome when evaluating the benefits of NPIs during a pandemic. Interventions that reduce the number of new infections can have downstream effects on various outcomes, including disease-related deaths, cases of severe illness and hospitalizations, cases with long-term health effects after infection, the efficiency of testing and contact tracing, the overall burden on the healthcare system and on health workers, the burden on other public services due to quarantine or isolation of individuals, the probability of emergence of new genetic variants in infected individuals, and potentially many more.
Thus, from a public health and infectious disease control perspective, evidence from mortality data cannot be regarded as conclusive with respect to the overall benefits of interventions. This is also because the majority of interventions implemented by governments aimed at either reducing contacts (e. g. through social distancing) or decreasing the probability of transmission upon contact (e. g. through mandatory wearing of face masks). Evaluating the effectiveness of interventions only in terms of mortality is hardly conclusive in this sense because deaths are only distantly related to transmission reduction. In contrast, if for example a negative result regarding the effect of NPIs on transmission – the main causal mechanism by which these interventions are intended to work – was obtained, there would have been more reason to question their overall effectiveness. Unfortunately, studies assessing the effects of interventions on transmission were not included in the meta-analysis by Herby et al., even if they used mortality data to inform their estimates. Finally, the effect of transmission-reducing interventions on the number of avoided deaths directly depends on the state and trend of an epidemic, which varies both over time and between populations. Therefore, synthesizing effect estimates by the number or percentage of avoided deaths has limited meaning, and it would be preferable to measure NPI effects by changes in the growth rate of deaths instead.
Cherry Picked Data
Finally, Banholzer et al. explain the poor criteria by which Herby et al. excluded all but roughly 0.1% of available studies from their meta-analysis.
In the subsequent meta-analysis, four criteria were used to assess the quality of the included studies, namely 1) whether the study was peer-reviewed, 2) whether the study used a long enough study period, 3) whether the study did not find an effect in the first 14 days after NPI implementation, and 4) whether the corresponding author is associated with an institute from the social sciences. None of these criteria assess the concrete methods and models used in the study. Overall, it appears that the authors assume to have ensured the quality of studies already in the selection stage and felt that a subsequent rigorous quality assessment would be superfluous.
It appears that Herby et al. limited the inclusion of studies to those which specifically drew a causal line between NPIs and mortality, a methodology which has already been shown to be flawed.
The eligibility criteria used in the present review restrict studies to one specific type, namely those with a “counterfactual difference-in-difference approach” that measure NPI effects in terms of avoided deaths. We see no convincing evidence that this is the only appropriate setup for an analysis of NPI effects, and not even a particularly elaborate or unbiased one. We are thus concerned that the authors have excluded well-recognized, high-quality studies.
Our Pandora’s Box
Being in possession of a document that could act as justification for abdicating all responsibility to New Brunswick’s population, the Government and Public Health did just that. The complete lack of any further consultation or research strongly reinforces the idea that the current government was never looking for a conclusion supported by the available evidence; rather, it was searching for evidence to support a conclusion it had already reached.
PoPNB has submitted further Right to Information requests related to this specific subject and will make replies available as they are received.
If you would like to receive PoPNB posts in your email, please subscribe below. If you found this post helpful, please share it with your community.