The issue of preventing healthcare-associated COVID-19 is very topical right now, to say the least (see this JAMA commentary), so now is a really good time to review what happened in our hospitals during the ‘first wave’ to help us prevent hospital transmission during the second.
The study was performed during the first wave of COVID-19 in London, between March and mid-April. The focus of the study was on ‘hospital-onset definite healthcare-associated’ (HODHA) COVID-19 infections (with a sample date >14 days from the day of admission). Overall, 58 (7.1%) of 775 symptomatic COVID-19 infections in hospitalised patients were HODHA. Key findings included:
Compared with community-associated COVID-19, patients with HODHA were more likely to be older, of Black, Asian and Minority Ethnic (BAME) background, to have a number of underlying clinical conditions (e.g. malignancy), and to have an increased length of stay after COVID-19 diagnosis. Surprisingly, there was no increased risk of mortality (whether 7, 14, or 30-day) or of ICU admission.
There was an interesting analysis of the impact of a delayed positive test (defined as no positive test within 48 hours of symptom onset). This occurred in about a third of HODHA cases and was associated with an increased risk of 30-day mortality.
A potential source patient (a positive case on the same ward within 14 days of the positive test) was identified for 44/58 HODHA cases (this rule, and the HODHA definition itself, are sketched in code below).
There was a correlation between weekly self-reported staff sickness absence incidence and weekly HODHA incidence.
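As a minimal sketch of these case definitions as described above (the function and variable names are mine, and the study's intermediate onset categories aren't shown):

```python
from datetime import date

def is_hodha(admitted: date, first_positive_sample: date) -> bool:
    """'Hospital-onset definite healthcare-associated' (HODHA):
    first positive sample taken >14 days after admission."""
    return (first_positive_sample - admitted).days > 14

def is_potential_source(case_ward: str, case_test_date: date,
                        other_ward: str, other_test_date: date) -> bool:
    """A potential source patient for a HODHA case: a positive case
    on the same ward within 14 days of the case's positive test."""
    return (case_ward == other_ward
            and abs((case_test_date - other_test_date).days) <= 14)

# Example: positive sample on day 20 of the admission -> HODHA
print(is_hodha(date(2020, 3, 1), date(2020, 3, 21)))  # True
```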
This is a similar piece of work to our analysis of healthcare-associated COVID-19. The period of time covered was almost identical (from March to mid-April) and the number of HODHAs was very similar (62 in our study compared with 58 in this study). This seems to illustrate how indiscriminate this outbreak has been regionally – a wave of healthcare-associated COVID-19 swept through our hospitals in March/April – and our job now is to reduce the size of this wave over the winter!
I participated in another pro-con debate recently up against fellow Reflections blogger Martin Kiernan during a Webber Teleclass. The question for the debate was “Can we halve Gram-negative BSI?” (I was arguing that we can). We ran a live Twitter poll and the outcome: 59% of the 22 respondents voted that no, we can’t halve GNBSI.
My argument had two main themes: that there is a sizeable preventable portion of GNBSI, so we have a lot to go for, and that we need a new approach to preventing GNBSI that will require new models of collaborative working across acute and non-acute health and social care.
The image below maps out the drivers of GNBSI. Some of these are modifiable (e.g. hydration and UTI, devices, antimicrobial stewardship), and some are not (e.g. deprivation [ok technically modifiable but beyond the scope of most IPC teams!], seasonal variation). The aim here is to identify those drivers of GNBSI that are modifiable and come up with practical interventions that could make a big difference.
Hydration is a good example. The most common source of E. coli BSI (which accounts for most GNBSI) is UTIs. We know that poor hydration is an important risk factor for UTI. So if we can improve hydration – in hospitals and outside – then there’s a good chance we’ll reduce UTI and in doing so reduce E. coli BSI.
Antimicrobial stewardship is another. If we can improve the management of Gram-negative infections in the community through appropriate therapy outside of hospital admissions, then we reduce the chance that they'll progress to a GNBSI.
I can’t tell you for sure that we can halve GNBSI. But we must try to prevent the preventable GNBSIs!
Colleagues from the University of Edinburgh did a really nice job exploring the impact of individual public health interventions on the SARS-CoV-2 reproduction number (R) across 131 countries. Their work fuelled the discussion on whether schools should be closed to control transmission. Rightly so? Read Patricia Bruijning-Verhagen's take on this study.
For their analyses, they used the real-life interventions as they were implemented at the start of the pandemic and subsequently lifted over the summer, inevitably with differences in timing and sequence between countries. Yet this variation is exactly what allowed them to explore how each intervention influenced the effective R-value (Reff) over time in each country. A few reflections on the study:
First, we need to understand how the comparisons were made. For each country, they cut the observation period into time fragments based on the non-pharmaceutical public health interventions (NPIs) in use. A change in NPI – implemented or lifted – starts a new fragment, which can last from days to months. For each day in a fragment, they took the Reff from the available country data, and compared the Reff on the last day of a fragment with the Reff on the first day of the new fragment, and subsequently with the Reff values on all subsequent days of that fragment. The result is a daily ratio of old versus new Reff values following a change in NPI.
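As I read it, that construction amounts to something like the following sketch (a minimal pandas version under my own assumptions about the data layout – not the authors' code):

```python
import pandas as pd

def daily_reff_ratios(df: pd.DataFrame) -> pd.DataFrame:
    """Daily Reff ratios following each NPI change for one country.

    Expects one row per day with columns (names are my own):
      'date'    - calendar date
      'reff'    - estimated effective reproduction number that day
      'npi_set' - NPIs in force that day, encoded so equality works
                  (e.g. a frozenset of intervention labels)
    """
    df = df.sort_values("date").reset_index(drop=True)
    # A new fragment starts whenever the set of NPIs in force changes
    df["fragment"] = (df["npi_set"] != df["npi_set"].shift()).cumsum()
    # Baseline: Reff on the last day of the previous fragment
    last_reff = df.groupby("fragment")["reff"].last().shift()
    df["baseline_reff"] = df["fragment"].map(last_reff)
    # Daily ratio of new vs old Reff after the change (NaN in fragment 1)
    df["reff_ratio"] = df["reff"] / df["baseline_reff"]
    return df
```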
Next, all Reff ratios were entered into a multivariate model to determine associations between Reff ratios and the implementation or lifting of individual NPIs. Results can be interpreted as: what is the relative effect of implementing intervention A on Reff, while keeping measures B, C, D, etc. constant? Importantly, effects are quantified in terms of the RELATIVE reduction/increase in Reff. ABSOLUTE effects of an NPI will depend on the Reff at the start of the intervention. For example: the Reff ratio for a ban on public gatherings is 0.76 (minus 24%) when we compare the Reff at day 28 after implementation to a situation without a ban. So, if Reff was 3 before implementation, the ban on public gatherings will reduce the Reff to 0.76 × 3 = 2.28 at day 28, yielding an absolute reduction in Reff of 0.72. Yet if Reff was 1.2 at the start, the absolute reduction will be only 0.29 (0.76 × 1.2 = 0.91).
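That conversion from relative to absolute effects is simple enough to spell out (a worked version of the example above; the function name is mine):

```python
def absolute_reff_reduction(reff_before: float, reff_ratio: float) -> float:
    """Absolute fall in Reff implied by a relative Reff ratio."""
    return reff_before * (1 - reff_ratio)

# Ban on public gatherings, Reff ratio 0.76 at day 28:
print(absolute_reff_reduction(3.0, 0.76))  # ~0.72 (Reff: 3.0 -> 2.28)
print(absolute_reff_reduction(1.2, 0.76))  # ~0.29 (Reff: 1.2 -> 0.91)
```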
The results of the multivariate model highlight another effect that needs to be considered: with multiple NPIs implemented/lifted at the same time, their joint effect is smaller than the sum of their individual effects. This is estimated via the interaction parameters Z1 and Z2. For instance, closing schools has an Reff ratio of 0.86 on day 14 following closure, and the Reff ratio for banning public gatherings is 0.83. The Reff ratio for the interaction on day 14 is approximately 1.17, as you can see in the figure below.
So, the interaction effectively eliminates the effect of one of the two interventions. The same happens when lifting two interventions at the same time: the joint increase in Reff is less than would be expected from the Reff ratios of each NPI separately. The effect of an NPI may thus differ depending on the context (i.e. the other NPIs in place). An alternative explanation is that the model overestimates the single-intervention Reff ratios because of collinearity in the data. Ideally, one would estimate interaction effects separately for each possible combination of two NPIs, but this requires many more parameters in the multivariate model than the data can support. This interaction effect also becomes apparent when we look at the four scenarios of composite NPIs: moving from scenario 3 to scenario 4, the Reff ratio for day 28 changes by only 0.10, although two more interventions were added (school closure and stay-at-home requirements).
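To make the interaction concrete, here is a small sketch assuming the ratios and the interaction term combine multiplicatively (my reading of the model; the numbers are the day-14 figures above):

```python
def joint_reff_ratio(ratio_a: float, ratio_b: float, z: float) -> float:
    """Joint Reff ratio for two NPIs, assuming the individual ratios
    and the interaction term combine multiplicatively."""
    return ratio_a * ratio_b * z

school_closure = 0.86        # Reff ratio, day 14
public_gathering_ban = 0.83  # Reff ratio, day 14
interaction = 1.17           # interaction multiplier, day 14

joint = joint_reff_ratio(school_closure, public_gathering_ban, interaction)
print(round(joint, 2))  # ~0.84 - about the same as either NPI alone,
# so the interaction term effectively cancels one intervention's effect
```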
An important limitation of the data is that many interventions were implemented or released shortly after one another, seriously limiting the number of informative datapoints and precluding quantification of individual effects of interventions. This is reflected by the wide confidence intervals for many estimates. For instance, schools were already closed at the start of the observation period in 64 of 131 countries and only 25 countries lifted school closure at some point. Moreover, school closure was followed by other interventions within a week in 75% of countries, leaving only 16 countries with more than 7 days to quantify effects of school closure as separate intervention. Furthermore, differences across countries add to heterogeneity in the data and, thus, to imprecision in estimates.
To conclude, this study provides some insight into the effectiveness of some NPIs, but the precise effects of individual interventions remain uncertain and will depend strongly on the prevailing Reff at the time of implementation/lifting and on the other interventions implemented, lifted or maintained. The authors acknowledge some of these limitations and caution that 'the impact on R by future reintroduction and re-relaxation of interventions might be substantially different'. Obviously, many readers who claimed major effects of NPIs, in particular of school closure, didn't make it to this stage of the manuscript.
Patricia Bruijning-Verhagen, MD, PhD, is a pediatrician and epidemiologist at the Julius Center for Health Sciences and Primary Care, UMC Utrecht.
I had the privilege of participating in the IPS Autumn Webinar series yesterday, in a debate with Dr Evonne Curran on whether we should routinely audit hand hygiene in hospitals. It was good fun – and highlighted some important points about the strengths and limitations of hand hygiene audits – and audits generally for that matter!
Here’s my case for routine hand hygiene auditing in hospitals (you can register (free!) and view the webinars here):
My key arguments were that:
Hand hygiene is really important, and one of a range of interventions that we should be routinely auditing to launch focussed improvement work.
There are key sources of bias in hand hygiene auditing (see below). However, these can be reduced with optimised methodology.
Observation bias (aka Hawthorne effect) – where behaviour is modified by awareness of being observed. For example, if I stand over you with a clipboard and a pen, you’re more likely to do hand hygiene.
Observer bias – a difference between the true value and the observed value related to observer variation. For example, poorly trained auditors will introduce variation in reported practice due to observer bias.
Selection bias – when the selected group or data do not represent the population. For example, only doing hand hygiene audits during day shifts won't give you the whole picture (one simple way to adjust for this is sketched after this list).
Hand hygiene audits are a legal and regulatory requirement (in England at least).
My own experience is that optimised hand hygiene auditing methodology can deliver a performance indicator that can identify areas of poor performance and drive focussed improvement initiatives.
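To give a flavour of what 'optimised methodology' can mean in practice, here is a minimal sketch of one bias-reducing step – re-weighting audit observations by shift so that over-sampled day shifts don't dominate the indicator. Everything here (names, numbers) is illustrative, not a description of any particular audit tool:

```python
from collections import defaultdict

def weighted_compliance(observations, opportunity_share):
    """Hand hygiene compliance re-weighted by each stratum's true share
    of hygiene opportunities (e.g. day vs night shift), one simple way
    to reduce selection bias from a skewed audit sample.

    observations: iterable of (stratum, performed: bool) audit records
    opportunity_share: dict mapping stratum -> share of all opportunities
    """
    performed, total = defaultdict(int), defaultdict(int)
    for stratum, ok in observations:
        total[stratum] += 1
        performed[stratum] += ok
    return sum(share * performed[s] / total[s]
               for s, share in opportunity_share.items())

# An audit skewed towards day shifts, where observed compliance is higher:
obs = ([("day", True)] * 90 + [("day", False)] * 10
       + [("night", True)] * 6 + [("night", False)] * 4)
print(weighted_compliance(obs, {"day": 0.6, "night": 0.4}))
# ~0.78, versus the naive pooled figure of 96/110 ~ 0.87
```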
At the end of the debate, two thirds of the live audience voted against doing routine hand hygiene audits in hospitals. Put another way – I lost! I am taking the view that the audience voted against the concept of inaccurate auditing returning unrealistically high levels of compliance, rather than against properly monitored and measured auditing, which can help to fuel improvement.
If nothing else, I hope the debate made the point that poorly planned and executed hand hygiene auditing is doing nobody any good – and may be doing harm. If we are going to do hand hygiene auditing, it should be using optimised methodology to deliver actionable information that is put to work to improve hand hygiene practice.
You'll all have seen a wide variety of masks and face coverings worn in an equally wide (and often alarming!) variety of ways. Leaving aside the (in)correct wearing of masks, it's useful to see some comparative data on the relative respiratory protection offered by different mask materials. This study, published years ago (pre-COVID!), does just that.
There are rumblings that glove wearing (aka "hand coverings") is being considered as a widespread recommendation to prevent the spread of SARS-CoV-2 in public places (e.g. shops) in the UK. The message of this post is simple – please, no gloves. Convincing clinical staff of the unintended consequences of glove overuse is tricky enough. But widespread use of gloves in public places like shops may just bring me to tears. (Unless anybody can point me in the direction of solid evidence that this is likely to have a net benefit in reducing transmission…!)
We recently published a study in the Journal of Antimicrobial Chemotherapy reporting the impact of introducing an enhanced testing* programme for CPE in London. (And yes, this is the first post for a while that isn't on COVID-19!) Following an outbreak of NDM-producing Klebsiella pneumoniae affecting 40 patients in 2015 (published elsewhere, here and here), we ramped up our CPE testing programme. The number of patients identified as carrying CPE increased substantially, from around 10 per month in June 2015 to around 50 per month in March 2018. However, the proportion of tests positive for CPE remained constant at around 0.4%, suggesting more effective carrier identification rather than a swelling pool of carriers per se; seek and ye shall find! Curiously, the majority of CPE identified were not linked in time and space with other CPE, suggesting they represented a ground-swell of CPE coming into the hospital rather than frequent in-hospital transmission. Also, the number of patients with CPE infections during the study period did not increase, which was reassuring.
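A quick back-of-envelope check on the 'seek and ye shall find' interpretation (numbers from above, and assuming the ~0.4% positivity applies across all tests):

```python
positivity = 0.004        # proportion of tests positive for CPE (~0.4%)
detections_jun_2015 = 10  # CPE patients identified per month
detections_mar_2018 = 50

# Implied monthly testing volume if positivity stayed flat:
print(detections_jun_2015 / positivity)  # ~2,500 tests/month
print(detections_mar_2018 / positivity)  # ~12,500 tests/month
```

In other words, the five-fold rise in detections tracks a roughly five-fold rise in implied testing volume, consistent with better case-finding rather than a growing pool of carriers.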
We have just had a study published in Clinical Infectious Diseases exploring the extent and magnitude of hospital surface and air contamination with SARS-CoV-2 during the (first!) peak of COVID-19 in London. The bottom line is that we identified pretty extensive surface and air contamination with SARS-CoV-2 RNA but did not culture viable virus. We concluded that this highlights the potential role of contaminated surfaces and air in the spread of SARS-CoV-2.