SARS-CoV-2 variant: an update

PHE have published a rapid epidemiological comparison of the SARS-CoV-2 variant (VOC 202012/01, aka B.1.1.7) with ‘wild-type’ SARS-CoV-2 in this country. Most characteristics don’t seem to differ: the variant is not associated with more hospitalisations or an increase in 28-day mortality. However, there does seem to be an increase in the secondary attack rate of the variant compared with wild-type SARS-CoV-2.

The new COVID-19 variant: a primer

Unless you have been living under a rock, you’ll have seen that there’s a new COVID-19 variant on the scene. This post summarises the key information that has emerged so far about the new variant. It seems to be more transmissible, no more virulent, and there’s no evidence that the vaccines that are approved or nearly approved will be less effective against it.

Secondary attack rate of COVID-19 in different settings: review and meta-analysis

A rather beautiful review and meta-analysis by colleagues at Imperial College London examines the evidence on the secondary attack rate (SAR) for SARS-CoV-2 in various settings, highlighting prolonged contact in households as the highest-risk setting for transmission.

The impact of a ward-based ‘PPE Helper’ programme on staff perceptions about COVID-19 PPE

During the first wave of COVID-19, we developed a ‘PPE Helper’ programme. This ward-based programme put PPE experts on the front line to spend time with staff to improve PPE knowledge, promote safe and effective use, and address staff anxiety. The programme was evaluated through a survey of staff views about PPE at the conclusion of the programme. This found that staff who had had contact with a PPE helper responded more positively to questions about PPE and reported less PPE-related anxiety too.

Antimicrobial copper surfaces and linen and healthcare-associated infection: a review and meta-analysis

A helpful new review and meta-analysis asks whether antimicrobial copper treatment of hard surfaces and/or linen reduces healthcare-associated infection (HCAI). The review identified only a small number of studies that had both a copper-related intervention for surfaces and/or linen and an HCAI-related outcome. However, the meta-analysis of the seven eligible studies found that, overall, the risk of HCAI was reduced by 27% (risk ratio 0.73, 95% confidence interval 0.57–0.94).
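As an aside on the mechanics: a pooled estimate like RR 0.73 (0.57–0.94) is typically produced by inverse-variance weighting of the per-study log risk ratios. Here’s a minimal sketch of the fixed-effect version in Python; the study counts below are hypothetical placeholders, not the data from the seven included studies.

```python
# Minimal sketch of fixed-effect, inverse-variance pooling of risk ratios.
# NOTE: the study counts below are hypothetical placeholders, NOT the data
# from the seven studies in this review.
import math

# (events_copper, n_copper, events_control, n_control) per study
studies = [
    (12, 500, 18, 500),
    (8, 300, 11, 300),
    (20, 1000, 26, 1000),
]

log_rrs, weights = [], []
for a, n1, c, n2 in studies:
    log_rr = math.log((a / n1) / (c / n2))
    var = 1 / a - 1 / n1 + 1 / c - 1 / n2   # variance of the log risk ratio
    log_rrs.append(log_rr)
    weights.append(1 / var)

pooled = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
se = math.sqrt(1 / sum(weights))
lo, hi = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)
print(f"Pooled RR {math.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```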

SARS-CoV-2 incubation period: where does the 14 days come from?

If you’ve had to self-isolate for 14 days following a possible exposure to somebody with COVID-19, you’ll relate to just how long it feels. Towards the start of the pandemic, the Otter family entered a 14-day household self-isolation due to COVID-like symptoms in the pups. At that time, mass testing was not available, so to this day we’re left hanging as to whether or not it was COVID-19. But where does the 14 days come from? And how does the probability of developing COVID-19 following exposure change over time? I was asked this yesterday, and came across a very handy review and meta-analysis of studies of the SARS-CoV-2 incubation period.

The review, published in BMJ Open, includes nine studies in the meta-analysis. Overall, the median incubation period was 5.1 days, and the 95th percentile was 11.7 days (see the Figure below). The team recognise that the estimates will change as new studies come along, so have helpfully published an R Shiny app that will be updated as new data are published. Quite a clever trick, although the Shiny app isn’t the most intuitive.

Figure: How the probability of developing COVID-19 changes over time following exposure.

In answer to the specific question I was asked about the difference in risk on day 10 vs. day 14 following exposure: this is tricky and will depend on a number of factors. However, the risk of developing COVID-19 from the point of exposure changes over the 14 days, peaking around day 4-5. Figure 5 of the review is probably most helpful here, showing from the meta-analysis of eight studies that approximately 90% of individuals who would eventually test positive had done so by day 10, whereas >95% had tested positive by day 14.
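To make those numbers concrete, here’s a back-of-envelope sketch in Python. It assumes the incubation period follows a lognormal distribution (a common choice in these meta-analyses, though the assumption is mine here), fitted to the review’s summary estimates of a 5.1-day median and an 11.7-day 95th percentile:

```python
# Back-of-envelope check on the day-10 vs day-14 question, assuming a
# lognormal incubation period (my assumption) fitted to the review's
# summary estimates: median 5.1 days, 95th percentile 11.7 days.
import math

median, p95 = 5.1, 11.7
mu = math.log(median)                  # log-scale mean (log of the median)
sigma = (math.log(p95) - mu) / 1.645   # 1.645 = 95th percentile of N(0,1)

def cdf(day):
    """P(incubation period <= day) under the fitted lognormal."""
    z = (math.log(day) - mu) / sigma
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

for day in (10, 14):
    print(f"By day {day}: {cdf(day):.1%} of eventual cases have emerged")
# Prints roughly 91% by day 10 and 98% by day 14, consistent with the
# ~90% / >95% figures quoted from the review's Figure 5.
```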

International guidelines recommend an isolation period of 14 days following patient or staff exposure to COVID-19 (see PHE and CDC). So why 14 days? And not 13 or 16? As you can see, the odd person developed COVID-19 outside of the 14-day window after exposure, but this is uncommon. And I think there’s something pragmatic about 14 days being 2 weeks!

Preventing healthcare-associated COVID-19

The issue of preventing healthcare-associated COVID-19 is very topical right now, to say the least (see this JAMA commentary), so now is a really good time to review what happened in our hospitals during the ‘first wave’ to help us prevent hospital transmission during the second.

The study was performed during the first wave of COVID-19 in London, between March and mid-April. The focus of the study was ‘hospital-onset definite healthcare-associated’ (HODHA) COVID-19 infection, defined by a sample date more than 14 days after the day of admission (a minimal classification sketch follows the list below). Overall, 58 (7.1%) of 775 symptomatic COVID-19 infections in hospitalised patients were HODHA. Key findings included:

  • Compared with community-associated COVID-19, patients with HODHA were more likely to be older, to be of Black, Asian and Minority Ethnic (BAME) background, to have underlying clinical conditions (e.g. malignancy), and to have an increased length of stay after COVID-19 diagnosis. Surprisingly, there was no increased risk of mortality (whether 7-, 14-, or 30-day) or ICU admission.
  • There was an interesting analysis of the impact of a delayed positive test (where there was no positive test within 48 hours of symptom development). This occurred in about a third of HODHA cases, and was associated with an increased risk of 30-day mortality.
  • A potential source patient (a positive case on the same ward within 14 days of the positive test) was identified for 44/58 HODHA cases.
  • There was a correlation between weekly self-reported sickness absence incidence and weekly HODHA incidence.
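For what it’s worth, this kind of case classification reduces to a simple rule on the days from admission to sample date. The sketch below is illustrative only: the >14-day ‘definite’ cut-off comes from the study, while the intermediate category boundaries follow commonly used UK surveillance definitions and should be treated as my assumption rather than this study’s exact rules.

```python
# Illustrative classification of COVID-19 cases by days from admission to
# sample date. Only the >14-day 'definite' cut-off is stated in the study;
# the other boundaries are an assumption based on common UK definitions.
from datetime import date

def classify_covid_case(admission: date, sample: date) -> str:
    days = (sample - admission).days
    if days <= 2:
        return "community-onset"
    if days <= 7:
        return "hospital-onset, indeterminate healthcare-associated"
    if days <= 14:
        return "hospital-onset, probable healthcare-associated"
    return "hospital-onset, definite healthcare-associated (HODHA)"

print(classify_covid_case(date(2020, 3, 1), date(2020, 3, 20)))
# -> hospital-onset, definite healthcare-associated (HODHA)
```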

This is a similar piece of work to our analysis of healthcare-associated COVID-19. The period covered was almost identical (March to mid-April) and the number of HODHAs was very similar (62 in our study compared with 58 in this one). This seems to illustrate how indiscriminate this outbreak has been regionally: a wave of healthcare-associated COVID-19 swept through our hospitals in March/April, and our job now is to reduce the size of this wave over the winter!

Can we halve GNBSI? The crowd say no…!

I participated in another pro-con debate recently, up against fellow Reflections blogger Martin Kiernan, during a Webber Teleclass. The question for the debate was “Can we halve Gram-negative BSI (GNBSI)?” (I was arguing that we can). We ran a live Twitter poll, and the outcome: 59% of the 22 respondents voted that no, we can’t halve GNBSI.

The slides from my talk are here.

My argument had two main themes: that there is a sizeable preventable portion of GNBSI, so we have a lot to go for; and that we need a new approach to preventing GNBSI, which will require new models of collaborative working across acute and non-acute health and social care.

The image below maps out the drivers of GNBSI. Some of these are modifiable (e.g. hydration and UTI, devices, antimicrobial stewardship), and some are not (e.g. deprivation [ok technically modifiable but beyond the scope of most IPC teams!], seasonal variation). The aim here is to identify those drivers of GNBSI that are modifiable and come up with practical interventions that could make a big difference.

Figure: Drivers of Gram-negative BSI.

Hydration is a good example. The most common source of E. coli BSI (which accounts for most GNBSI) is UTI. We know that poor hydration is an important risk factor for UTI. So if we can improve hydration – in hospitals and outside – then there’s a good chance we’ll reduce UTI, and in doing so reduce E. coli BSI.

Antimicrobial stewardship is another. If we can improve the management of Gram-negative infections in the community through appropriate therapy outside of hospital admissions, then we reduce the chance that they’ll progress to a GNBSI.

I can’t tell you for sure that we can halve GNBSI. But we must try to prevent the preventable GNBSIs!

School’s out forever?

Colleagues from the University of Edinburgh did a really nice job of exploring the impact of individual public health interventions on the SARS-CoV-2 reproduction number (R) across 131 countries. Their work fuelled the discussion on whether schools should be closed to control transmission. Rightly so? Read Patricia Bruijning-Verhagen’s take on this study.

For their analyses, they used the real-life interventions as they were implemented when the pandemic started and subsequently lifted over the summer, inevitably with differences in timing and sequence between countries. Yet this variation is exactly what allowed them to explore how each intervention influenced the effective R value (Reff) over time in each country. A few reflections on the study:

First, we need to understand how the comparisons were made. For each country, they cut the observation period into time fragments based on the non-pharmaceutical public health interventions (NPIs) in place. A change in NPI – implemented or lifted – starts a new fragment, which can last from days to months. For each day in a fragment, they took the Reff from the available country data, and compared the Reff on the last day of a fragment with the Reff on the first day of the new fragment, and subsequently with the Reff values of all subsequent days in that fragment. The result is a daily ratio of old versus new Reff values following a change in NPI.
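To illustrate the construction, here’s a small sketch in Python (using pandas) of the fragment/ratio step as I read it; the Reff series, the NPI change date, and all variable names are mine, not the paper’s.

```python
# Toy illustration of the fragment/ratio construction described above.
# The Reff series and NPI change date are invented for illustration.
import pandas as pd

reff = pd.Series(
    [3.0, 2.9, 2.8, 2.6, 2.4, 2.2, 2.1],
    index=pd.date_range("2020-03-10", periods=7),
)
npi_changes = [pd.Timestamp("2020-03-13")]  # e.g. the day schools closed

for change in npi_changes:
    baseline = reff[reff.index < change].iloc[-1]  # last day of old fragment
    new_fragment = reff[reff.index >= change]      # days in the new fragment
    ratios = new_fragment / baseline               # daily new-vs-old Reff ratio
    print(ratios.round(2))
```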

Next, all Reff ratios were entered into a multivariate model to determine associations between Reff ratios and the implementation or lifting of individual NPIs. Results can be interpreted as: what is the relative effect of implementing intervention A on Reff, while keeping measures B, C, D, etc. constant? Importantly, effects are quantified in terms of the RELATIVE reduction/increase in Reff. The ABSOLUTE effect of an NPI will depend on the Reff at the start of the intervention. For example, the Reff ratio for a ban on public gatherings is 0.76 (minus 24%) when we compare the Reff at day 28 after implementation to a situation without a ban. If Reff was 3 before implementation, the ban on public gatherings will reduce the Reff to 0.76 × 3 = 2.28 at day 28, yielding an absolute reduction in Reff of 0.72. Yet if Reff was 1.2 at the start, the absolute reduction will be only 0.29 (0.76 × 1.2 = 0.91).
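In code, the relative-versus-absolute point is a one-liner: the same ratio gives a very different absolute effect depending on the prevailing Reff.

```python
# The worked example above in code: the same relative Reff ratio yields
# very different absolute reductions depending on the starting Reff.
def absolute_reduction(reff_before: float, reff_ratio: float) -> float:
    """Absolute fall in Reff when a relative Reff ratio is applied."""
    return reff_before * (1 - reff_ratio)

ban_ratio = 0.76  # day-28 Reff ratio for a ban on public gatherings
for reff in (3.0, 1.2):
    print(f"Reff {reff} -> {reff * ban_ratio:.2f} "
          f"(absolute reduction {absolute_reduction(reff, ban_ratio):.2f})")
# Reff 3.0 -> 2.28 (absolute reduction 0.72)
# Reff 1.2 -> 0.91 (absolute reduction 0.29)
```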

The results of the multivariate model highlight another effect that needs to be considered: when multiple NPIs are implemented or lifted at the same time, their joint effect is smaller than the sum of their individual effects. This is estimated via the interaction parameters Z1 and Z2. For instance, closing schools has an Reff ratio of 0.86 on day 14 following closure, and the Reff ratio for banning public gatherings is 0.83. The Reff ratio for the interaction on day 14 is approximately 1.17, as you can see in the figure below.
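Putting those numbers together (assuming, as my reading of a log-linear model like this one, that effects combine multiplicatively on the Reff-ratio scale):

```python
# Combining the day-14 estimates multiplicatively (my reading of how a
# log-linear model combines them): two NPIs plus their interaction term.
school_closure = 0.86  # Reff ratio, schools closed, day 14
gathering_ban = 0.83   # Reff ratio, public gatherings banned, day 14
interaction = 1.17     # approximate interaction term, day 14

joint = school_closure * gathering_ban * interaction
print(f"Joint Reff ratio: {joint:.2f}")  # about 0.84
# i.e. barely more effect than the gathering ban alone (0.83)
```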

So, the interaction effectively cancels out the effect of one of the two interventions. The same happens when lifting two interventions at the same time: the joint increase in Reff is less than would be expected based on the Reff ratios of each NPI separately. The effect of an NPI may thus differ depending on the context (i.e. the other NPIs in place). An alternative explanation is that the model overestimates the single-intervention Reff ratios because of collinearity in the data. Ideally, one would estimate interaction effects separately for each possible combination of two NPIs, but this requires many more parameters in the multivariate model, for which the data were not available. This interaction effect also becomes apparent when we look at the four scenarios of composite NPIs: moving from scenario 3 to scenario 4, the Reff ratio for day 28 changes by only 0.10, although two more interventions were added (school closure and stay-at-home requirements).

An important limitation of the data is that many interventions were implemented or lifted shortly after one another, seriously limiting the number of informative data points and precluding quantification of the individual effects of interventions. This is reflected in the wide confidence intervals for many estimates. For instance, schools were already closed at the start of the observation period in 64 of 131 countries, and only 25 countries lifted school closure at some point. Moreover, school closure was followed by other interventions within a week in 75% of countries, leaving only 16 countries with more than 7 days in which to quantify the effect of school closure as a separate intervention. Furthermore, differences across countries add to heterogeneity in the data and, thus, to imprecision in the estimates.

To conclude, this study provides some insight into the effectiveness of some NPIs, but the precise effects of individual interventions remain uncertain and will depend strongly on the prevailing Reff at the time of implementation/lifting and on the other interventions implemented, lifted or maintained. The authors acknowledge some of these limitations and caution that ‘the impact on R by future reintroduction and re-relaxation of interventions might be substantially different’. Obviously, many readers who claimed major effects of NPIs, in particular of school closure, didn’t make it to this stage of the manuscript.

Patricia Bruijning-Verhagen, MD, PhD, is a pediatrician and epidemiologist at the Julius Center for Health Sciences and Primary Care, UMC Utrecht.

Should we routinely audit hand hygiene in hospitals? The crowd say no…!

I had the privilege of participating in the IPS Autumn Webinar series yesterday, in a debate with Dr Evonne Curran on whether we should routinely audit hand hygiene in hospitals. It was good fun – and highlighted some important points about the strengths and limitations of hand hygiene audits – and audits generally for that matter!

Here’s my case for routine hand hygiene auditing in hospitals (you can register (free!) and view the webinars here):

My key arguments were that:

  • Hand hygiene is really important, and one of a range of interventions that we should be routinely auditing to launch focussed improvement work.
  • There are key sources of bias in hand hygiene auditing (see below). However, these can be reduced with optimised methodology.
    • Observation bias (aka Hawthorne effect) – where behaviour is modified by awareness of being observed. For example, if I stand over you with a clipboard and a pen, you’re more likely to do hand hygiene.
    • Observer bias – the difference between the true value and the observed value related to observer variation. For example, poorly trained auditors will result in variation in reported practice due to observer bias.
    • Selection bias – when the selected group / data does not represent the population. For example, only doing hand hygiene audits during day shifts won’t tell you the whole picture.
  • Hand hygiene audits are a legal and regulatory requirement (in England at least).
  • My own experience is that optimised hand hygiene auditing methodology can deliver a performance indicator that can identify areas of poor performance and drive focussed improvement initiatives.

At the end of the debate, two thirds of the live audience voted against doing routine hand hygiene audits in hospitals. Put another way – I lost! I take the view that the audience voted against the concept of inaccurate auditing returning unrealistically high levels of compliance, rather than against properly monitored and measured auditing, which can help to fuel improvement.

If nothing else, I hope the debate made the point that poorly planned and executed hand hygiene auditing is doing nobody any good – and may be doing harm. If we are going to do hand hygiene auditing, it should be using optimised methodology to deliver actionable information that is put to work to improve hand hygiene practice.