CRE and friends: Q&A

I gave the first in a three-part webinar series for 3M last night, and you can download the slides here. You can also access the recording here (although you will need to register to do so).

The webinar was attended by >200 participants from across the US. I tried to outline the three-pronged threat of multidrug-resistant Gram-negative rods (especially CRE) in terms of high levels of antibiotic resistance, stark mortality (for invasive disease) and the potential for rapid spread (including the prospect of establishing a community reservoir). Then, I gave an overview of the US and European picture in terms of CRE prevalence. Finally, I discussed the diagnostic challenges and options.

The most interesting part for me was the response to the questions that I threw out to the audience (see Figure below).

Figure: Responses to the questions from the 200 or so participants.

[Charts: audience responses to questions 1, 2 and 3]

I was somewhat saddened, but not especially surprised, that the difference between CRE and CPE was not clear in the minds of most participants. This may be partly because ‘CPE’ seems to be used more commonly in Europe than in the US. But this is an international problem, so we need to get our terminology straight in our globalised world.

It was encouraging to hear that most US hospitals have had no CRE, or only one or two cases. However, 11% of the participants see CRE regularly, with cases unconnected to outbreaks, which is a concern and suggests that CRE has become established in these areas. Indeed, a recent study from 25 Southeastern US community hospitals reports a 5-fold increase in the prevalence of CRE since 2008.

Most participants didn’t know which method was used by their clinical laboratory to detect CRE. I’m not sure whether or not this is a problem. You’d hope that laboratorians know what they’re doing!

Q&A

The webinar included time for a Q&A from the audience, which covered the following:

  • “How long do resistant Gram-negatives survive on surfaces?” This depends on which Gram-negative you’re talking about. Non-fermenters, especially Acinetobacter baumannii, have remarkable survival properties measured in months and years. Enterobacteriaceae have a somewhat lower capacity to survive on dry surfaces, but it can still be measured in weeks and months, rather than hours and days.
  • “How important is the environment in the transmission of resistant Gram-negatives?” Again, this depends on which Gram-negative you’re talking about. For A. baumannii the answer is probably “very important”, whereas for the Enterobacteriaceae the answer is more like “quite important”.
  • “What would you recommend for terminal disinfection following a case of CRE?” I would recommend the hospital’s usual “deep clean” using either a bleach or hydrogen peroxide disinfectant, and consideration of an automated room disinfection system. I would not be happy with a detergent or QAC clean; we can’t afford to leave an environmental reservoir that could put the next patient at risk.
  • “Are antibiotic-resistant Gram-negative bacteria also likely to be resistant to disinfectants?” There’s been a lot of discussion on this issue, but the short answer is no. I’d expect an antibiotic-resistant Enterobacteriaceae isolate to be as susceptible to disinfectants as a corresponding antibiotic-susceptible isolate.
  • “Should patients with CRE be left to the end of surgical lists, and is special instrument reprocessing required?” There is no need to implement special instrument reprocessing – follow your usual procedures here. Should CRE patients be left to the end of surgical lists? It would be prudent if possible, but don’t lose sleep over it.
  • “Are any special decontamination measures necessary for endoscopes?” A number of outbreaks of CRE have been reported associated with endoscopy. However, usual endoscope reprocessing methods should be sufficient to deal with CRE, provided they are done correctly!
  • “How do you lessen your chances of acquiring CRE?” Healthy individuals lack the risk factors for CRE infection (although CRE can occasionally cause infections in the community). Thus, the personal protective equipment (PPE) specified for contact precautions (gloves and gowns), combined with rigorous hand hygiene, is sufficient to protect healthcare workers.
  • “Are toilet seats in India safe?” What a question! I guess we’re talking about an organism with gastrointestinal carriage, so it’s probable that contamination of the toilet seat will occur. It may be prudent to clean or disinfect toilet seats in India before using them. Either that, or squat!
  • “Can you expand on isolation protocols?” Firstly, ensure that patients infected or colonized with CRE are assigned a single room (not so relevant in the US, but important in healthcare elsewhere). Then, make sure you have an appropriate policy and supply of PPE (principally gloves and gowns). Consider implementing ‘enhanced precautions’, including a restriction of mobile devices. Finally, consider cohorting patients and staff to the extent possible. You can read more about NIH’s approach to isolation here.
  • “Can patients who are colonized with CRE be deisolated?” This is a tricky one, which is basically an evidence-free zone and hence an area of controversy. Longitudinal studies show that carriage of CRE can persist for months or even years, so it makes sense to continue isolation for the duration of a hospitalization and not bother with repeated swabbing. At the time of readmission, it makes sense to take a swab to see whether colonization continues. If not, then it may be rational to deisolate – perhaps after a confirmatory swab. I wish I could be more decisive here, but the evidence is scant.
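To make that decision flow concrete, here is a minimal sketch of the logic just described; the two-negative-swab threshold and the function itself are my own illustrative assumptions, not a validated protocol.

```python
# Illustrative sketch only: the two-negative-swab rule is an assumption
# for illustration, not an evidence-based protocol.

def can_deisolate(index_hospitalization: bool, negative_readmission_swabs: int) -> bool:
    """Decide whether a known CRE carrier can come off contact precautions."""
    if index_hospitalization:
        # Carriage can persist for months or years: isolate for the whole
        # of the index hospitalization, and skip repeated swabbing.
        return False
    # At readmission: an initial negative screen plus a confirmatory
    # negative swab may justify deisolation.
    return negative_readmission_swabs >= 2

print(can_deisolate(index_hospitalization=False, negative_readmission_swabs=1))  # False
print(can_deisolate(index_hospitalization=False, negative_readmission_swabs=2))  # True
```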

Do please let me know if you have anything to add to this Q&A!  

Preventing HCAI: go long or go wide?

There seems to be a general movement away from targeted, pathogen-based precautions (principally screening and isolation) in the USA. This changing professional opinion was clear from the recent SHEA conference, where several leading experts gave what amounted to a collective justification for abandoning contact precautions for MRSA.

The update of the SHEA Compendium of Strategies to Prevent Healthcare-Associated Infections in Acute Care Hospitals is accompanied by a commentary from a group of leading US figures titled ‘Approaches for preventing HCAI: Go long or go wide’. In the commentary, the authors weigh the evidence and opinion for so-called ‘vertical’ (aka targeted) vs. ‘horizontal’ (aka universal) interventions (Table).

Table: Go long or go wide? Examples of targeted and universal interventions (adapted from Wenzel & Edmond, via Septimus et al.).

The commentary outlines the potential drawbacks of targeted approaches (such as fewer visits from healthcare workers and feelings of isolation), but doesn’t spend a lot of time discussing the potential drawbacks of universal approaches. For example, there is “isolation fatigue”, where a procedure loses its impact if it has to be applied to every patient. And then there’s the possibility of resistance emerging during universal decolonization, which is particularly worrisome when using antibiotics, but could also be a problem when using biocides such as chlorhexidine.

I’m not ready to abandon pathogen-based targeted interventions just yet. Conceptually, it just does not make sense. If you have a patient with MRSA and a respiratory virus, chances are they will become a ‘super-spreader’. Those who favour universal approaches do make some provision for exceptional cases that really should be identified and isolated via a ‘syndromic’ approach to isolation: crudely, only isolate patients when they’re oozing. However, this syndromic approach would likely miss our ‘super-spreading’ patient, which may well result in an MRSA outbreak that we could all do without.

Furthermore, if you have a patient who is colonized with CRE, are you brave enough to take no special precautions, as would be the case for a ‘universal only’ approach? The success of this strategy would depend on a high level of compliance with standard precautions such as hand hygiene and environmental cleaning and disinfection. Whilst sound in theory, this just doesn’t happen in the trenches; your facility is above average if your hand hygiene compliance rate is the right side of 40%. Whilst still not 100%, hand hygiene compliance is higher when patients are placed in isolation, most likely because there’s a stronger psychological trigger to comply with hand hygiene.

It’s important to note that targeted and universal approaches are by no means mutually exclusive. For example, on our ICU in London, we have been using universal chlorhexidine decolonization for a decade combined with targeted screening and isolation, and have seen a dramatic reduction in the spread of MRSA.

So, should we go long or go wide in the prevention of HCAI? The answer is both. We should optimize care for all patients, which means careful standard precautions with liberal application of chlorhexidine and tight restriction of antibiotics. But we should also identify those with communicable pathogens and segregate them from others. In this regard, we have the weight of history on our side.

Image: Jeff Weese.

ICHE special edition on CRE and MDROs


Infection Control and Hospital Epidemiology have once again excelled themselves in putting together a fine special edition on CRE and MDROs. Around this time last year I posted an article on the ICHE special edition on the role of the environment, and this special edition is equally important. I strongly recommend that you read the special edition from cover to cover, but I’ve picked out a few of my personal highlights below:

  • A thoughtful editorial by Drs Lautenbach and Perencevich sets the scene. They reflect on our ‘woeful unpreparedness’ to address both current and future MDROs.
  • A number of articles provide updates on surveillance and prevalence. Brennan et al. report findings from a 6-month CRE point-prevalence survey based on voluntary reporting in the state of Michigan, finding a crude rate of 1.07 cases per 10,000 patient days. Interestingly, this rate was almost 3 cases per 10,000 patient days in long-term acute care facilities. Isolates were not collected and analyzed, so carbapenemase genes were not confirmed; the fact that close to 10% of isolates were susceptible to meropenem suggests that a good number of the CRE were not carbapenemase producers. Indeed, another state-level point-prevalence survey (Pfeiffer et al., from Oregon) found that only 3 of the 60 CRE isolates reported were carbapenemase producers. Another state-level survey of CRE (Johnson et al., from Michigan) identified regional clustering of CRE colonization of mechanically ventilated patients in the central region of the state.
  • Analysis through the SHEA Research Network found that contact isolation policies for multidrug-resistant Gram-negative rods (MDR-GNR) are surprisingly variable. Worryingly, almost 20% of facilities surveyed did not isolate patients infected or colonized with MDR Pseudomonas or Acinetobacter, and 6% did not isolate patients with CRE. Policies for de-escalation of contact precautions were equally variable. Contact isolation policies seem to be even more lax in long-term care facilities based on data from Pfeiffer et al., reporting that only half of patients colonized with MDROs are placed on contact precautions.
  • A number of studies evaluated risk factors for CRE. For example, Bhargava et al. identified high acute morbidity score, immunosuppression, presence of indwelling medical devices and prior antimicrobial exposures to be consistent risk factors for CRE in the various patient populations they evaluated.
  • A survey of the kitchen in a Swiss hospital identified ESBL-producing Enterobacteriaceae in 92% of raw chicken and 6% of rectal samples from food handlers.
  • The efficacy of chlorhexidine bathing for MDR-GNR has been questioned, so data from Lin et al. on this issue are particularly welcome. In a study of 62 patients in a long-term acute care facility, daily chlorhexidine gluconate (CHG) bathing halved the chances of culturing CRE from the body sites analyzed. However, it’s worth noting that the measured CHG skin concentration (15-312 mg/L before the daily bath and 78-1250 mg/L after the daily bath) was much lower than the applied CHG concentration (10,000 mg/L). This potentially brings the subtly reduced susceptibility to CHG reported in MRSA into play.
  • Several studies evaluated the potential for environmental contamination with MDR-GNR. Rosa et al. found that exposure to surfaces contaminated with MDR A. baumannii increased the risk of acquisition by almost 3-fold. Although the design of the study was fundamentally different, it is interesting to note that the increased risk from admission to a room previously occupied by a patient with MDR A. baumannii was also around 3-fold in a previous study. Havill et al. reported that the survival time for CRE (including K. pneumoniae) on dry surfaces is measured in weeks not days. Rock et al. carefully observed 220 unique interactions between healthcare workers (HCW) and patients with KPC or non-KPC producing K. pneumoniae, finding that HCW gloves or gowns became contaminated during 14% of the 220 interactions, and 26% of 43 environmental samples were positive. There was no significant difference between HCW or environmental contamination rates for KPC vs. non-KPC producing K. pneumoniae.
  • There was not much on therapy for CRE – perhaps because there is little to say for pan-drug resistant CRE! An article by Drekonja et al., based on a survey of the CDC-funded Emerging Infections Network, discussed the challenges of managing CRE infections and highlighted the common problems of toxicity associated with the “last-line” antimicrobials colistin and tigecycline.

It seems that the prevalence of CRE is patchy in the USA at present, and that long-term care and long-term acute care facilities are an integral part of the story. Given the limited evidence base, interventions need to cover all bases: active surveillance, rapid and accurate diagnostics, environmental (and perhaps food) hygiene, contact isolation and perhaps antiseptic decolonization, all combined with facility-wide education and communication initiatives. The most effective (and cost-effective) interventions to prevent and control the spread of CRE and other MDR-GNR are controversial, so I am looking forward to the SHEA ‘From MRSA to CRE: Controversies in MDROs’ and joint HIS / IPS ‘What’s that coming over the hill: rising to the challenge of resistant Gram-negative rods’ Spring meetings next month!

Photo credit: Enterobacter cloacae NDM-1 growing on Oxoid Brilliance CRE Agar by Nathan Reading.

Could universal glove use provide a false sense of security?

Hand Hygiene and Self-Protection

Guest blogger Carolyn Dawson (bio below) writes: The BUGG study provides support for the concept of self-protection in hand hygiene through its findings that healthcare professionals were more likely to perform hand hygiene after leaving a patient room than upon entry (mean compliance at room exit vs. entry in the intervention (universal glove and gown) group: 78.3% vs. 56.1%, respectively; in the control group: 50.2% vs. 62.9%, respectively). This may suggest a stronger awareness of contamination occurring on the hands during patient interaction than of contamination having occurred prior to patient contact. It may also indicate a higher prioritisation of the implications of contamination acquired during, rather than prior to, patient contact.

The discussion here is how such self-protection themes may affect the concept of universal glove use providing a benefit to patient safety. The “urgh” factor provides a simple phrase to represent instinctive hand hygiene drivers, both at times when hands become physically soiled and when they are in contact with things which have an “emotionally dirty” association (e.g. armpits, clean bedpans) (based on Whitby et al., 2006). The “urgh” factor has been shown to increase likelihood of hand hygiene occurring in clinical practice (my research).

The “urgh” factor can be useful for driving hand hygiene: despite other pressing variables, such as time and workload, this instinctive self-protective driver increases the likelihood that hand hygiene will still occur on some occasions, providing the related patient and healthcare professional safety benefits. But it also means that there is less of a psychological driver for hand hygiene following contact with things that are perceived as “clean” but may be as contaminated as perceived “dirty” items.

Glove use reduces the “urgh” factor

The use of gloves (including inappropriate use/over-use) has been shown to be driven by themes including disgust and fear (e.g. Wilson et al., 2013), suggesting their use leads to a feeling of security, reducing this “urgh” factor. Therefore, one could expect that activities previously resulting in high levels of hand hygiene would be affected by the adoption of universal glove use, as the “urgh” factor influence is reduced. In other words, if you are wearing gloves, you are less likely to feel repulsed by touching something you previously would have been, and thus, in turn, are less likely to perform hand hygiene. Glove use is no substitute for effective hand hygiene, which should be performed both before and after gloves are used, and at specific points during patient care (RCN 2012).

For example: imagine moving from changing a catheter bag to cleaning a wound. Both hand hygiene and a change of gloves must be performed. With respect to the “urgh” factor, one could expect that instinctive drivers would motivate hand hygiene in this example, as self-protective drivers lean towards decontamination after handling the catheter bag. However, when gloves are used these desires may be muted, leaving a stronger demand on the knowledge and skills of the healthcare professional to perform the necessary hand hygiene and glove use protocol.

‘Correct’ and ‘Incorrect’ glove use

It is worth noting that the definition of ‘appropriate’ use of gloves is subjective, with different settings likely to adhere to different standards and guidelines. Thus, caution is required when discussing ‘correct’ and ‘incorrect’ use of gloves. There are, however, some less debatable examples where gloves are not recommended due to low risk of contamination (RCN 2012, Appendix 1), yet gloves are often used, e.g. when collecting equipment or writing notes (Flores and Pevalin, 2006).

The use of gloves for these activities, combined with uninterrupted use of gloves (from one activity/area to another without removal – Girou et al., 2004), likely results in microbial cross-contamination via the surface of the gloves. Such activities provide no “urgh” factor safety net; the need to change gloves and perform required hand hygiene therefore demands conscious decisions from the healthcare professional, requiring cognitive input. Commenting on the misuse of gloves, Fuller et al. (2011) wrote: “the reality is that healthcare workers do not always clean their hands before donning gloves, that their hands pick up further organisms during high-risk contacts, and that hands are not always cleaned when the gloves are removed.” It seems likely that a move towards universal gloving would result in more inappropriate ‘continued use’ activities occurring.

Correct, not universal glove use

Such knowledge suggests that rather than looking towards universal gloving as a preventative strategy, continued focus should be turned towards ensuring current glove use is appropriate, seeking to harness the “urgh” factor safety net to drive hand hygiene compliance.

Carolyn Dawson Bio

I am about to submit a PhD dissertation on healthcare hand hygiene which explores the challenges faced in monitoring, measuring and providing feedback on compliance data: the audit process. My research questions the potential of hand hygiene technologies (electronic surveillance) as an aid for this process, insisting that their ‘Fitness-For-Purpose’ must first be evaluated using recognised standards. The application of behavioural theory to understand how different activities may influence whether hand hygiene is executed is explored through pilot work on ‘Inherent’ and ‘Elective’ hand hygiene. This case study research has been carried out within an NHS acute setting; however, application of the WHO “My 5 Moments for Hand Hygiene” as a core element allows the potential for future work to build upon this foundation outside the current setting. Prior to beginning my PhD I graduated with a BSc in Psychology and an MA from Warwick Business School, and then spent 6 years working for a global laser company as a Project Analyst.

Photo credit: CDC / Amanda Mills.

How much Clostridium difficile infection is hospital-acquired?


This is a very impressive New England Journal of Medicine study from an Oxford University-based group, using whole genome sequencing to dissect the relatedness of C. difficile isolates over a 5-year period. The study evaluates how many cases of C. difficile infection (CDI) were caused by isolates that were genetically related to previous symptomatic cases. This is not quite the same thing as evaluating how much CDI is hospital-acquired, mainly because the test used to detect CDI in the study has been phased out due to poor sensitivity, patients and staff were not screened for asymptomatic C. difficile carriage, and the environment was not sampled, so there was a large, unrecognized, hospital-based C. difficile reservoir from which horizontal transmission almost certainly occurred.

A major problem was the use of an Enzyme Immuno Assay (EIA) test kit to detect CDI. Whilst these tests were used pretty much universally in the UK at the time of the study, they have since been shown to be very unsatisfactory. The sensitivity of EIA for the detection of CDI has been as low as 50% in some studies. Put another way, for every case of CDI that is detected, one goes undetected. This is crucially important in the context of this study, where the undetected CDI cases would contribute to the burden of asymptomatic carriers, which together would contribute to transmission. It’s also worth noting that C. difficile could not be cultured from 25% of stool samples that were EIA-positive, suggesting that the test may have had poor specificity too. The authors did try to ‘control’ for this problem, by effectively assuming that all stool specimens tested for CDI were positive in a sensitivity analysis, but this did not really help in explaining genetically related cases with no discernible epidemiological links.
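As a back-of-the-envelope illustration of what 50% sensitivity implies (the case numbers here are hypothetical):

```python
# Hypothetical worked example of the EIA sensitivity problem.
sensitivity = 0.5          # as low as 50% in some studies
detected_cases = 100       # hypothetical number of EIA-positive CDI cases

true_cases = detected_cases / sensitivity    # 200 true cases
missed_cases = true_cases - detected_cases   # 100 cases go undetected

# For every case detected, one goes undetected; the missed cases join the
# pool of unrecognized cases and carriers that can drive transmission
# without leaving an epidemiological trail.
print(f"True cases: {true_cases:.0f}, missed: {missed_cases:.0f}")
```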

There is also a technical point about the definition of ‘genetically distinct’ in terms of whole genome sequencing. If two isolates differ by 11 base pairs across the whole genome, do they originate from the same strain? It’s difficult to tell. In this study, they used a fairly conservative measure of relatedness: >10 single nucleotide variants (SNVs) was considered ‘genetically distinct’, and ≤2 SNVs was considered ‘genetically related’. This may have over-estimated apparent genetic heterogeneity. To be fair, the authors did perform a careful ‘validation’ study to determine the clock speed of mutation in their isolates by sequencing the first and last isolates obtained from a sample of patients, finding that 0-2 SNVs were expected for isolates <124 days apart. Even using these conservative measures of relatedness, the majority (55%) of isolates were related (‘not genetically distinct’ to be precise) to others in the collection (≤10 SNVs) and around a third of isolates were ‘genetically related’ to others in the collection (≤2 SNVs).
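The study’s thresholds amount to a simple three-way classification; a minimal sketch using the cut-offs reported in the paper (the function name and the ‘intermediate’ label are mine):

```python
def classify_pair(snv_distance: int) -> str:
    """Classify a pair of C. difficile isolates by whole-genome SNV distance,
    using the thresholds reported in the study."""
    if snv_distance <= 2:
        return "genetically related"    # consistent with recent transmission
    if snv_distance <= 10:
        return "intermediate"           # 'not genetically distinct'
    return "genetically distinct"       # >10 SNVs

# An 11-SNV pair falls just over the 'distinct' threshold, which is why
# the cut-off choice matters for estimates of genetic heterogeneity.
print(classify_pair(11))   # genetically distinct
```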

The authors performed detailed work to explore epidemiological associations between genetically related isolates (Figure). No acute- or community-based epidemiological links could be identified for 36% of the 333 genetically related cases, which perhaps supports the presence of unrecognized symptomatic cases or asymptomatic carriers.

Figure: Epidemiological relationships between the 333 genetically related cases. ‘Ward contact’ = shared time on the same ward; ‘Hospital contact’ = shared time in the same hospital, without direct ward contact; ‘Ward contamination’ = admitted to the same ward within 28 days of the discharge of a symptomatic patient; ‘Same GP’ = no hospital contact, but shared the same GP; ‘Same postcode’ = no hospital contact, but shared the same postal code.

The overall rate of CDI was low, at <1 per 1000 patient days and it is noteworthy that the prevalence of genetically related and genetically distinct cases declined during the study period. I suspect if the same study had been performed for the period of 2000-2005, when more hospital transmission was almost certainly occurring, then a far higher proportion of isolates would have been genetically related.

I fear that this study will be used by some to ‘prove’ that horizontal transmission of C. difficile in healthcare settings is now uncommon, and most hospital-onset cases can be explained away by “CA-CDI”. Due to the poor sensitivity of the diagnostic kit combined with the likelihood of asymptomatic human carriage and environmental contamination, this study does not answer the question of how much CDI is hospital-acquired. It does, however, suggest that horizontal transmission from known symptomatic cases may be less common than we thought.

Article citation: Eyre DW, Cule ML, Wilson DJ et al. Diverse sources of C. difficile infection identified on whole-genome sequencing. N Engl J Med 2013; 369: 1195-1205.

Photo credit: Annie Cavanagh. Wellcome Images.

This study has been BUGGing me for a while


A fabulous study recently published in JAMA evaluates the ‘Benefits of Universal Glove and Gown’ (BUGG) in US ICUs. This is a model study design: one of the first cluster randomized controlled trials of a non-therapeutic infection control intervention. Twenty ICUs were paired and randomized to either universal glove and gowning, or to continue the current practice of placing patients known to be infected or colonized with MRSA and VRE on contact precautions. The hypothesis is that undetected colonization with MRSA and VRE is common, and the only real way to address this is to assume everybody is colonized!

Summary of findings:

  • Universal glove and gowning was not associated with a reduction in a composite measure of MRSA / VRE acquisition (the primary outcome).
  • VRE acquisition was not reduced by universal glove and gown use, whereas MRSA was.
  • CLABSI, CAUTI and VAP; ICU mortality; and adverse events did not differ significantly between the two groups.
  • Hand hygiene compliance on room entry was not significantly different between the two arms, whereas hand hygiene compliance on room exit was significantly higher in the intervention arm.
  • Healthcare workers visited patients 20% less frequently in the intervention arm (4.2 vs. 5.2 visits per hour).

Figure: The change in acquisition rate, comparing the baseline period with the study period for the intervention and control units.

Here’s what’s BUGGing me about this study:

  • The acquisition rate reduced in both the intervention and control arms (Figure). The reduction in the control arm may be due to improved compliance with admission screening, resulting in more accurate ascertainment of who required contact precautions.
  • The significant reduction was achieved for MRSA but not for VRE. The authors suggest that VRE colonization may have been suppressed on admission and not detected, then flourished during antimicrobial therapy, giving the impression of acquisition. I wonder whether differences in the routes of transmission may also have contributed; for example, VRE seems to be substantially “more environmental” than MRSA. Another potential confounder is that, by chance, the prevalence of MRSA or VRE on admission to the intervention ICUs was more than double that in the control ICUs (22% vs. 9%). In actual fact, the raw rate of MRSA acquisition in the intervention ICUs was marginally higher than in the control ICUs during the intervention period (6.00 vs. 5.94 per 1000 patient days), even though the change in rate was significantly greater in the intervention ICUs (see the sketch after this list). Although adjustment was made for this difference in the analysis, it may have skewed the findings somewhat.
  • The authors achieved remarkably high compliance with admission screening (around 95%), discharge screening (around 85%) and glove and gowning (around 85%). Each site had the luxury of a study coordinator and a physician champion to lead implementation, plus weekly feedback on screening compliance and visits from study investigators. Most ICUs would not be afforded these luxuries so I suspect that real-world compliance outside of the somewhat artificial study environment would be considerably lower. Indeed, an ID Week poster suggests that compliance with gowning in one US ICU was a ‘dismal’ 20%!
  • Adverse events were not significantly higher in the universal glove and gowning arm, which may seem surprising prima facie. However, the reason why adverse events are more common for patients on contact precautions is that they are marginalized by being on contact precautions. If all patients are effectively on contact precautions, the time of healthcare workers would be spread evenly.
  • Universal gloving is likely to result in universally bad hand hygiene compliance within the room during patient care; when healthcare workers feel protected, they are less likely to comply with hand hygiene and gloves are a good way to make healthcare workers feel protected. The increase in hand hygiene compliance on room exit is probably also a symptom of inherent human factors, since healthcare workers feel more ‘dirty’ when exiting the room of a patient with a higher perceived risk of MDRO ‘contamination’ (the so-called “urgh” factor).
  • Healthcare workers had less time for patient care in the intervention arm because they were busy donning and doffing gloves and gowns. Interestingly, the authors suggest that fewer visits may be a good thing for patients, and may have contributed to their reduced chances of acquiring MRSA. This seems unlikely though, given the fact that VRE acquisition was not reduced. On balance, less contact with healthcare workers is likely to be bad for patients.
  • The increased cost of universal glove and gowning was not evaluated and, whilst incrementally small, would add up to a substantial sum.
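To unpack the raw-rate vs. change-in-rate point from the second bullet above, here is a hedged worked example; the intervention-period rates are those reported in the paper, but the baseline rates are placeholders I have invented purely to illustrate the arithmetic.

```python
# Illustrative arithmetic only. The intervention-period rates are from the
# paper; the baseline rates are invented placeholders to show how a
# change-from-baseline comparison can favour the arm with the higher raw rate.
intervention_rate = 6.00      # MRSA acquisitions per 1000 patient days (reported)
control_rate = 5.94           # per 1000 patient days (reported)

baseline_intervention = 9.0   # placeholder
baseline_control = 7.0        # placeholder

change_intervention = intervention_rate - baseline_intervention  # -3.00
change_control = control_rate - baseline_control                 # -1.06

# The intervention arm's *raw* rate is marginally higher (6.00 > 5.94), yet
# its *change* from baseline is larger, which is what the trial's primary
# analysis compares.
print(f"{change_intervention:.2f} vs {change_control:.2f}")   # -3.00 vs -1.06
```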

In summary, this study sets the standard in terms of rigorous assessment of an infection prevention and control intervention. Universal application of gloves and gowns is unlikely to do as much harm as universal administration of mupirocin, but it will not make a profound reduction in the transmission of MDROs. Therefore, I shouldn’t think many ICUs will be rushing to implement universal gloves and gowns on the strength of these findings.

Article citation: Harris AD, Pineles L, Belton B et al. Universal glove and gown use and acquisition of antibiotic-resistant bacteria in the ICU: a randomized trial. JAMA 2013;310:1571-1580.

Is “community-acquired” CDI real?

A recent high-profile US study delved into apparent community-associated CDI cases to evaluate healthcare exposures. The study was large, evaluating almost 1000 cases of community-associated CDI from 8 US states. Only 177 (18%) of the 984 cases had no recent healthcare exposure (Figure 1). Furthermore, healthcare exposure was only evaluated for the 12 weeks prior to the positive specimen, so I would wager that a portion of this 18% acquired their infecting C. difficile in a healthcare facility.

Figure 1. Data demonstrating that most MRSA and CDI presenting on admission to hospital are likely to have been acquired in a healthcare facility.

So, it seems that the majority of these cases are more likely to be community-onset, healthcare-acquired CDI, rather than community-acquired CDI. I feel like we’ve been here before. In the 1990s before the emergence of distinct strains of CA-MRSA, MRSA presenting at hospital admission was commonly termed ‘community-associated’ or, worse, ‘community-acquired’ when really it was MRSA that had been acquired in hospital during a previous stay (Figure 1). The situation has now changed since distinct MRSA clones have emerged that have the capacity to cause infection outside the healthcare environment.

Turning our attention to the UK, the mandatory reporting scheme classifies cases of CDI as ‘Trust-apportioned’ if the specimen is collected from a patient who has been in hospital for four or more days (Figure 2). It is tempting to speculate that the cases of CDI that are not Trust-apportioned are CA-CDI. However, the definition of ‘Trust-apportioned’ does not account for previous healthcare contact (see the sketch below), and the rate of Trust-apportioned and non-Trust-apportioned cases tracks so closely that, once again, these are likely to be healthcare-acquired CDI presenting on admission.
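The apportionment rule is mechanical, which is exactly the problem; here is a minimal sketch of it (the field name is mine):

```python
# The English mandatory-reporting rule, sketched: a case is 'Trust-apportioned'
# purely on days since the current admission. Previous healthcare contact
# plays no part in the rule, so healthcare-acquired CDI presenting on
# (re)admission is counted as non-Trust-apportioned.
def is_trust_apportioned(days_since_admission: int) -> bool:
    return days_since_admission >= 4

# False, even for a patient discharged from hospital last week:
print(is_trust_apportioned(2))
```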

Figure 2. Number of cases of CDI in England through the mandatory reporting scheme, 2004-2013.

The epidemiology of C. difficile is fundamentally different to that of MRSA, in that healthy neonates typically have a high rate of C. difficile colonization. Thus, there is a ready reservoir for a low rate of genuinely community-acquired CDI. However, it seems to me that most “CA-CDI” reported thus far are likely to have been acquired in a healthcare facility, and I have not seen any data to convince me that genuinely community-acquired CDI is increasing.

Article citation: Chitnis et al. Epidemiology of Community-Associated Clostridium difficile Infection, 2009 Through 2011. JAMA Intern Med 2013;173:1359-67.

Is there a causal relationship between contamination burden and transmission risk?

There’s an age-old problem in science: how do you prove a causal relationship between variables that correlate? Proving that the variables are correlated is the easy part; it’s more difficult to disentangle cause from effect. This can be seen in several studies that identify a correlation between environmental burden and the number of patients that are infected or colonized with pathogens.

Figure 1. Correlation between the number of patients infected with Acinetobacter spp. and the number of positive Acinetobacter spp. environmental cultures per calendar month during an outbreak on a neurosurgical ICU.1

Figure 2. Correlation between microbial burden and the number of patients who acquired an HAI in ICUs.2

Figure 3. Correlation between the number of hygiene failures and the number of patients who acquired an infection on a surgical intensive care unit each week.3

So can we conclude that the higher burden of contamination resulted in an increased risk of acquisition? Or is it that more patients were infected or colonized with pathogens, which resulted in more environmental shedding? From these studies, you can’t be sure.
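One way to see why such correlations are ambiguous is a toy simulation in which causation runs only one way: colonized patients shed into the environment, and the environment never infects anyone. The correlation still appears. All parameters below are invented for illustration.

```python
import random

# Toy simulation: colonized patients shed into the environment, but the
# environment never infects anyone. Contamination and colonization still
# correlate, so correlation alone cannot identify the causal direction.
random.seed(1)

months = 24
colonized = [random.randint(0, 10) for _ in range(months)]  # driven externally
# Each colonized patient contaminates ~2 sites on average, plus noise:
contamination = [2 * c + random.randint(0, 3) for c in colonized]

def pearson(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(f"r = {pearson(colonized, contamination):.2f}")  # strongly positive
```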

If you were seeking to prove the role of a gene in a process, you’d knock out the gene and demonstrate that the process stopped or changed. So, the only way to disentangle cause and effect in contamination and transmission is to perform an intervention to reduce environmental contamination and show that this correlates with reduced transmission. While the Salgado study evaluated an intervention, the data correlating contamination burden with HAIs was not stratified by the intervention, which would have been one way to assess likely causation.2

There are some further experimental and epidemiological data supporting the idea that the degree of transmission may be proportional to the environmental burden. A mouse model established a ‘dose-response’ relationship between the degree of contamination with C. difficile spores and the development of CDI.4 Furthermore, this model showed that disinfectants that achieved a greater log reduction of C. difficile spores were more able to interrupt transmission.

Also, one of the studies demonstrating that admission to a room previously occupied by a patient with VRE increases the chances of VRE acquisition identified something amounting to a ‘dose response’.5 The greatest increased risk was for patients admitted to a room with an environmental culture positive for VRE, and being admitted to a room where the immediate prior room occupant was colonized with VRE carried a greater increased risk than being admitted to a room where any patient in the 2 weeks prior to admission was VRE colonized (Figure 4).

Figure 4. How the increased risk of acquiring VRE from the prior room occupant changes due to patient and environmental factors.5

Is there a causal relationship between contamination burden and transmission risk? On balance, the answer seems to be yes, though it would be useful to have a solid intervention study to prove that an increasing environmental burden causes an incremental increase in transmission risk.

Article citations:

  1. Denton M, Wilcox MH, Parnell P et al. Role of environmental cleaning in controlling an outbreak of Acinetobacter baumannii on a neurosurgical intensive care unit. J Hosp Infect 2004; 56: 106-110.
  2. Salgado CD, Sepkowitz KA, John JF et al. Copper surfaces reduce the rate of healthcare-acquired infections in the intensive care unit. Infect Control Hosp Epidemiol 2013; 34: 479-486.
  3. White LF, Dancer SJ, Robertson C, McDonald J. Are hygiene standards useful in assessing infection risk? Am J Infect Control 2008; 36: 381-384.
  4. Lawley TD, Clare S, Deakin LJ et al. Use of purified Clostridium difficile spores to facilitate evaluation of health care disinfection regimens. Appl Environ Microbiol 2010; 76: 6895-6900.
  5. Drees M, Snydman D, Schmid C et al. Prior environmental contamination increases the risk of acquisition of vancomycin-resistant enterococci. Clin Infect Dis 2008; 46: 678-685.

Do you know your CRO from your CPO from your CRE from your CPE?

Carbapenems are a class of beta-lactam antibiotic with a broad spectrum of activity against Gram-positive and Gram-negative bacteria. Whilst carbapenems are used for the treatment of Gram-positive infections, the emergence of Gram-negative bacteria with resistance to the carbapenems is a public health issue that has prompted unusually dramatic warnings from the US CDC, Public Health England (PHE) and the European CDC (ECDC). However, the various acronyms employed to describe the subtleties of the problem are a minefield for the uninitiated:

Carbapenem-resistant organism (CRO) – Gram-negative bacteria* including the Enterobacteriaceae (such as Klebsiella pneumoniae and Escherichia coli) and non-fermenters (such as Acinetobacter baumannii, Pseudomonas aeruginosa and Stenotrophomonas maltophilia) that are resistant to carbapenems by any mechanism. The non-fermenters can be inherently resistant to carbapenems, or they can acquire carbapenemases (typically KPC, VIM, NDM and OXA-48 types). Enterobacteriaceae do not have inherent resistance but may be resistant to carbapenems through the production of an acquired carbapenemase or the production of an ESBL or AmpC combined with porin loss.

Carbapenemase-producing organism (CPO) – Enterobacteriaceae and non-fermenters that are resistant to carbapenems by means of an acquired carbapenemase.

Carbapenem-resistant Enterobacteriaceae (CRE) – Enterobacteriaceae that are resistant to carbapenems by any mechanism, including the production of an acquired carbapenemase or the production of an ESBL or AmpC combined with porin loss.

Carbapenemase-producing Enterobacteriaceae (CPE) – Enterobacteriaceae that are resistant to carbapenems by means of an acquired carbapenemase.
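The four definitions reduce to two independent axes: organism group and resistance mechanism. Here is a minimal sketch of the taxonomy (the function and labels are my own shorthand):

```python
# The four acronyms as a function of two axes: organism group and whether
# resistance is due to an acquired carbapenemase. Assumes the isolate is
# already carbapenem-resistant (by any mechanism); Gram-positive organisms
# are excluded by convention (see footnote below).
def classify(is_enterobacteriaceae: bool, carbapenemase_producer: bool) -> list[str]:
    labels = ["CRO"]                      # resistant by any mechanism
    if carbapenemase_producer:
        labels.append("CPO")
    if is_enterobacteriaceae:
        labels.append("CRE")
        if carbapenemase_producer:
            labels.append("CPE")
    return labels

# A KPC-producing K. pneumoniae is simultaneously CRO, CPO, CRE and CPE:
print(classify(is_enterobacteriaceae=True, carbapenemase_producer=True))
# A carbapenem-resistant A. baumannii without a carbapenemase is only a CRO:
print(classify(is_enterobacteriaceae=False, carbapenemase_producer=False))
```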

The image below tries to graphically represent the relative size of these groups (not to scale!), and the table provides a summary of their distinguishing features:

[Image: the relative sizes of the CRO, CPO, CRE and CPE groups]

[Table: distinguishing features of CRO, CPO, CRE and CPE]

“O” or “E”

The US CDC and European CDC seem to favour ‘CRE’ as a generic term for this problem, whereas the PHE in the UK seems to favour ‘CRO’. Clearly, each term has a defined meaning and context will determine which is technically correct. But which is most useful as a generic term?

The epidemiology of the non-fermenters and the Enterobacteriaceae is different, with the non-fermenters tending only to cause problems in very sick patients, usually in critical care settings. Meanwhile, the Enterobacteriaceae are more able to cause infections in a wider range of patients both inside and outside the hospital. Hence, the emergence of carbapenem resistance is more concerning in the Enterobacteriaceae, as demonstrated by rapid national and international spread of KPC-producing K. pneumoniae.

“R” or “P”

Amongst the Enterobacteriaceae, whilst carbapenem resistance due to the production of an ESBL or AmpC combined with porin loss may lead to treatment failure, it is often unstable and may impose a fitness cost, meaning that these strains rarely spread. Hence, carbapenem resistance conferred by an acquired carbapenemase is the key problem.

So, for me, CPE would be the most suitable generic term for this emerging problem. However, since the US CDC and the ECDC seem to have gone with CRE, and the vast majority of CRE will be CPE, let’s go with CRE shall we?

 

[* Whilst Gram-positive bacteria that are resistant to carbapenems (such as MRSA) could be described as ‘CROs’, these terms are reserved to describe Gram-negative bacteria.]

The terms 'horizontal' and 'vertical' intervention leave me feeling upside down, confused


I am no expert in HIV, but I know that ‘vertical transmission’ means something very specific:

Vertical transmission: the transmission of a disease from mother to child either during pregnancy, childbirth, or by breastfeeding.

Similarly, the definition of ‘horizontal transmission’ is well defined:

Horizontal transmission: the transfer of an infection from person to person.

So, when I read about ‘vertical’ and ‘horizontal’ interventions in a recent New England Journal of Medicine Editorial and the Controversies blog, I began to get a little confused. I have a PhD in epidemiology so don’t consider myself easy to confuse (in this particular domain), but I would have thought that a ‘horizontal intervention’ would be directed towards preventing horizontal spread of an infectious agent and a ‘vertical intervention’ would be directed towards preventing the vertical transmission of an infectious agent. But this is not how these terms are being applied. Instead, a ‘horizontal intervention’ is being used to describe an intervention applied to every patient (such as chlorhexidine bathing or hospital-wide hand hygiene interventions) whereas a ‘vertical intervention’ is being used to describe an intervention designed to reduce colonization or infection due to a specific pathogen (such as active screening and isolation to prevent the spread of MRSA). The use of the term ‘vertical intervention’ seems especially confusing, since it’s a ‘vertical intervention’ to prevent the horizontal transmission of a specific pathogen!

I fail to see how the terms ‘vertical’ or ‘horizontal’ intervention are useful when there are such well-established definitions for horizontal and vertical transmission. I think that ‘universal intervention’ (such as universal screening or decolonization) and ‘targeted intervention’ (such as active screening and isolation to prevent the spread of MRSA) make a lot more sense. These terms are already in common circulation, so I would urge those who favour the use of ‘vertical’ or ‘horizontal’ intervention to reconsider their terminology.