PHE released the latest epidemiological summary of the B.1.617.2 VOC (aka “the variant that was first identified in India”) a few days ago. Evidence is emerging rapidly, and the datasets are far from conclusive. But it now seems clear that, compared with other variants, B.1.617.2 is more transmissible, causes no more hospitalisation or mortality, and is associated with slightly reduced vaccine effectiveness.
As you may have heard, there’s a new SARS-CoV-2 variant of concern (VOC) on the block. So what do we know about this new variant? And how much of a threat does it pose to the pre-COVID freedom that we can see on the near horizon?
I gave a talk at the Sussex Infection Prevention Development Week yesterday on ventilation and preventing the spread of SARS-CoV-2. I learnt a lot in putting together the talk, so thought I’d share my slides (here) and some of the key points. Ventilation is a crucial way to prevent the spread of SARS-CoV-2 (and other respiratory viruses), and I hope that improved ventilation in health and social care settings will be one of the good things to come out of this pandemic.
PHE have published a rapid epidemiological comparison of the SARS-CoV-2 variant (VOC 202012/01, aka B.1.1.7) with ‘wild-type’ SARS-CoV-2 in this country. Most of the characteristics don’t look to be different – the variant is not associated with more hospitalisations or an increase in 28-day mortality. However, there does seem to be an increase in the secondary attack rate of the variant compared with wild-type SARS-CoV-2.
Unless you have been living under a rock, you’ll have seen that there’s a new COVID-19 variant on the scene. This blog summarises the key information that has emerged so far about this new variant. It seems to be more transmissible, no more virulent, and there’s no evidence that the vaccines that are approved or nearly approved will be less effective against it.
If you’ve had to self-isolate for 14 days following a possible exposure to somebody with COVID-19, you’ll relate to just how long it feels. Towards the start of the pandemic, the Otter family entered a 14-day household self-isolation due to COVID-like symptoms in the pups. At that time, mass testing was not available, so to this day we’re left hanging as to whether it was or wasn’t COVID-19. But where does the 14 days come from? And how does the probability of developing COVID-19 following exposure change over time? I was asked this yesterday, and came across a very handy review and meta-analysis of studies on the SARS-CoV-2 incubation period.
The review, published in BMJ Open, includes nine studies in the meta-analysis. Overall, the median incubation period was 5.1 days, and the 95th percentile was 11.7 days (see the Figure below). The team recognise that things will change as new studies come along, so have helpfully published an R Shiny app that will be updated as new data are published. Quite a clever trick, although the Shiny app isn’t the most intuitive.
In answer to your specific question about the difference in risk on day 10 vs. day 14 following exposure: this is tricky and will depend on a number of factors. However, the risk of developing COVID-19 from the point of exposure changes over the 14 days, peaking around day 4/5. I’ve attached a systematic review and meta-analysis of the COVID-19 incubation period. Figure 5 is probably most helpful: it shows, from the meta-analysis of 8 studies, that approx. 90% of individuals who would eventually test positive had done so by day 10, whereas >95% had done so by day 14.
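As a rough illustration of where those day 10 and day 14 figures come from, here’s a back-of-envelope sketch. It assumes the incubation period follows a lognormal distribution (a common modelling choice, not something taken from the paper itself) calibrated to the meta-analysis summary statistics of a 5.1-day median and an 11.7-day 95th percentile:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Summary statistics from the meta-analysis
median_days = 5.1   # median incubation period
p95_days = 11.7     # 95th percentile
z95 = 1.645         # standard normal 95th percentile

# Back out the lognormal parameters implied by those two statistics
mu = math.log(median_days)
sigma = (math.log(p95_days) - mu) / z95

def proportion_symptomatic_by(day):
    """Proportion of eventual cases whose incubation period has elapsed by a given day."""
    return normal_cdf((math.log(day) - mu) / sigma)

print(f"By day 10: {proportion_symptomatic_by(10):.0%}")  # roughly 90%
print(f"By day 14: {proportion_symptomatic_by(14):.0%}")  # well over 95%
```

This toy calculation lands close to the ~90% by day 10 and >95% by day 14 reported in Figure 5, which is reassuring, though the published figures come from pooled study data rather than a fitted lognormal.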
International guidelines recommend an isolation period of 14 days following patient or staff exposure to COVID-19 (see PHE and CDC). So why 14 days? And not 13 or 16? As you can see, the odd person developed COVID-19 outside of the 14-day window after exposure, but this is uncommon. And I think there’s something pragmatic about 14 days being 2 weeks!
The issue of preventing healthcare-associated COVID-19 is very topical right now, to say the least (see this JAMA commentary), so now is a really good time to review what happened in our hospitals during the ‘first wave’ to help us prevent hospital transmission during the second.
The study was performed during the first wave of COVID-19 in London, between March and mid-April. The focus of the study was on ‘hospital-onset definite healthcare-associated’ (HODHA) COVID-19 infections (with a sample date >14 days from the day of admission). Overall, 58 (7.1%) of 775 symptomatic COVID-19 infections in hospitalised patients were HODHA. Key findings included:
- Compared with community-associated COVID-19, patients with HODHA were more likely to be older, of Black, Asian or Minority Ethnic (BAME) background, to have underlying clinical conditions (e.g. malignancy), and to have an increased length of stay after COVID-19 diagnosis. Surprisingly, there was no increased risk of mortality (whether 7, 14, or 30-day) or ICU admission.
- There was an interesting analysis of the impact of a delayed positive test (where there was no positive test within 48 hours of symptom development). This occurred in about a third of HODHA cases, and was associated with an increased risk of 30-day mortality.
- A potential source patient (a positive case on the same ward within 14 days of the positive test) was identified for 44/58 HODHA cases.
- There was a correlation between weekly self-reported sickness absence incidence and weekly HODHA incidence.
This is a similar piece of work to our analysis of healthcare-associated COVID-19. The period of time covered was almost identical (from March to mid-April) and the number of HODHAs was very similar (62 in our study compared with 58 in this study). This seems to illustrate how indiscriminate this outbreak has been regionally – a wave of healthcare-associated COVID-19 swept through our hospitals in March/April – and our job now is to reduce the size of this wave over the winter!
Colleagues from the University of Edinburgh did a really nice job exploring the impact of individual public health interventions on the SARS-CoV-2 reproduction number (R) across 131 countries. Their work fuelled the discussion on whether schools should be closed to control transmission. Rightfully so? Read Patricia Bruijning-Verhagen’s take on this study.
For their analyses they used the real-life interventions as they were implemented at the start of the pandemic and subsequently lifted over the summer, inevitably with differences in timing and sequence between countries. Yet this variation allowed them to explore how each intervention influenced the effective R value (Reff) over time in each country. A few reflections on the study:
First, we need to understand how comparisons were made: for each country, they cut the observation period into time fragments based on the non-pharmaceutical public health interventions (NPIs) in use. A change in NPI – implemented or lifted – starts a new fragment, which can last from days to months. For each day in a fragment, they took the Reff from the available country data, and compared the Reff on the last day of a fragment to the Reff on the first day of the new fragment, and subsequently to the Reff values of all subsequent days in that fragment. The result is a daily ratio of old versus new Reff values following a change in NPI.
Next, all Reff ratios were entered into a multivariate model to determine associations between Reff ratios and the implementation or lifting of individual NPIs. The results can be interpreted as: what is the relative effect of implementing intervention A on Reff, while keeping measures B, C, D, etc. constant? Importantly, effects are quantified in terms of the RELATIVE reduction/increase in Reff. The ABSOLUTE effect of an NPI will depend on the Reff at the start of the intervention. For example: the Reff ratio for a ban on public gatherings is 0.76 (minus 24%) when we compare the Reff at day 28 after implementation to a situation without bans. Then, if Reff was 3 before implementation, the ban on public gatherings will reduce the Reff to 0.76 × 3 = 2.28 at day 28, yielding an absolute reduction in Reff of 0.72. Yet, if Reff was 1.2 at the start, then the absolute reduction will be only 0.29 (0.76 × 1.2 = 0.91).
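The relative-versus-absolute distinction above can be made concrete with a few lines of arithmetic. The Reff ratio of 0.76 is the paper’s day-28 figure for a ban on public gatherings; the two starting Reff values (3.0 and 1.2) are just worked-example inputs:

```python
# Relative vs absolute effect of an NPI on the effective reproduction number.
def absolute_reduction(reff_start, reff_ratio):
    """Absolute drop in Reff implied by a relative Reff ratio."""
    reff_after = reff_start * reff_ratio
    return reff_start - reff_after, reff_after

ratio = 0.76  # Reff ratio for a ban on public gatherings at day 28 (from the paper)

drop_high, after_high = absolute_reduction(3.0, ratio)  # high starting Reff
drop_low, after_low = absolute_reduction(1.2, ratio)    # Reff barely above 1

print(f"From Reff 3.0: falls to {after_high:.2f} (absolute reduction {drop_high:.2f})")
print(f"From Reff 1.2: falls to {after_low:.2f} (absolute reduction {drop_low:.2f})")
```

The same 24% relative cut buys an absolute reduction of 0.72 in one setting and only 0.29 in the other – which is why a single Reff ratio cannot be translated into a fixed effect.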
The results of the multivariate model highlight another effect that needs to be considered: with multiple NPIs implemented/lifted at the same time, their joint effect is smaller than the sum of their individual effects. This is estimated via interaction parameters Z1 and Z2. For instance, closing schools has an Reff ratio of 0.86 on day 14 following closure, and the Reff ratio for banning public gatherings is 0.83. The Reff ratio for the interaction on day 14 is approximately 1.17, as you can see in the figure below.
So, the interaction effectively eliminates the effect of one of the two interventions. The same happens when lifting two interventions at the same time: the joint increase in Reff is less than would be expected from the Reff ratios of each NPI separately. The effect of an NPI may thus differ depending on the context (i.e. the other NPIs in place). An alternative explanation is that the model overestimates the single-intervention Reff ratios because of collinearity in the data. Ideally, one would estimate interaction effects separately for each possible combination of two NPIs, but this requires many more parameters in the multivariate model than the data could support. This interaction effect also becomes apparent when we look at the four scenarios of composite NPIs: moving from scenario 3 to scenario 4, the Reff ratio for day 28 changes by only 0.10, although two more interventions were added (school closure and stay-at-home requirements).
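To see why the interaction “eliminates” one intervention, multiply the terms together – in a log-linear model of this kind, effects combine multiplicatively. This is a sketch using the day-14 figures quoted above (the 1.17 interaction term is an approximate read-off from the figure):

```python
# Joint effect of two NPIs implemented together, under a multiplicative model.
school_closure = 0.86        # Reff ratio, day 14 after school closure
public_gathering_ban = 0.83  # Reff ratio, day 14 after banning public gatherings
interaction = 1.17           # approximate interaction term read from the figure

joint = school_closure * public_gathering_ban * interaction
print(f"Joint Reff ratio at day 14: {joint:.2f}")
```

The joint ratio comes out at roughly 0.84 – about the same as either intervention alone – so the estimated marginal benefit of adding the second NPI is close to zero.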
An important limitation of the data is that many interventions were implemented or released shortly after one another, seriously limiting the number of informative datapoints and precluding quantification of individual effects of interventions. This is reflected by the wide confidence intervals for many estimates. For instance, schools were already closed at the start of the observation period in 64 of 131 countries and only 25 countries lifted school closure at some point. Moreover, school closure was followed by other interventions within a week in 75% of countries, leaving only 16 countries with more than 7 days to quantify effects of school closure as separate intervention. Furthermore, differences across countries add to heterogeneity in the data and, thus, to imprecision in estimates.
To conclude, this study provides some insight into the effectiveness of some NPIs, but the precise effects of individual interventions remain uncertain and will depend heavily on the prevailing Reff at the time of implementation/lifting, and on the other interventions implemented, lifted or maintained. The authors acknowledge some of these limitations and caution that ‘the impact on R by future reintroduction and re-relaxation of interventions might be substantially different’. Obviously, many readers who claimed major effects of NPIs, in particular of school closure, didn’t make it to this stage of the manuscript.
Patricia Bruijning-Verhagen, MD, PhD, is a pediatrician and epidemiologist at the Julius Center for Health Sciences and Primary Care, UMC Utrecht.
The next instalment of the HIS audience-led webinar series is on the role of contaminated surfaces in COVID-19 transmission. I was delighted to be part of the panel for this one:
- Dr Lena Ciric – Associate Professor in Environmental Engineering, University College London
- Dr Stephanie Dancer – Consultant Microbiologist, NHS Lanarkshire and Professor of Microbiology, Edinburgh Napier University, Scotland
- Dr Manjula Meda – Consultant Clinical Microbiologist and Infection Control Doctor, Frimley Park Hospital
- Dr Jon Otter – Infection prevention and control Epidemiologist, Imperial College London
- Chair: Dr Surabhi Taori, Consultant microbiologist and infection control doctor, Kings College Hospital NHS Foundation Trust
Here’s the recording: