CRE outbreak control: a view from the trenches

Drs Tara Palmore and David Henderson have written an engaging ‘view from the trenches’ in CID reflecting on their efforts to control an ongoing outbreak of CRE at the NIH Clinical Center that began in 2011.

The review outlines their interventions, including:

  • aggressive active surveillance (including regular house-wide surveys);
  • rapid identification and characterization of resistant organisms and resistance mechanisms (a mixture of conventional culture-based microbiology and mass spectrometry);
  • whole-genome sequencing of outbreak isolates (which allowed the identification of counterintuitive transmission patterns);
  • enhanced contact precautions for all infected or colonized patients (patients only to leave their room for medical reasons, visitors to wear gloves and gowns, staff not to touch personal electronic devices, preferential use of single-use equipment, and enhanced terminal disinfection);
  • geographic and personnel cohorting;
  • daily chlorhexidine gluconate baths;
  • dedicating equipment for cohorted patients and aggressive decontamination of equipment that had to be reused on uncohorted patients;
  • monitoring adherence to infection control precautions, including unwavering attention to appropriate hand hygiene procedures (including the use of observing ‘enforcers’ to make sure staff complied with the basics);
  • enhanced environmental decontamination (including double bleach wipe daily disinfection, hydrogen peroxide vapor for terminal disinfection and careful management of drains);
  • engagement of all stakeholders involved in care of at-risk patients;
  • and detailed, frequent communication with hospital staff about issues relating to the outbreak.

The authors discuss the problem of determining which of these interventions worked, since they were implemented more or less simultaneously: the so-called “kitchen sink” approach (Figure). A recent systematic review performed by ECDC identified this problem in virtually all studies evaluating control interventions for CRE.

Figure. Perceived relative importance of outbreak control interventions at NIH.

There’s an interesting section on the ‘unintended consequences’ of publishing the report, including the inevitable scaremongering in some parts of the lay press. It wasn’t all bad though; the Washingtonian published an unusually detailed article based on the original NIH outbreak report.

Some reflections from me:

  • This all started with the transfer of a colonized patient from New York. Recognizing and containing colonized patients transferred from other hospitals is going to be one of the most important fronts in the battle against CRE. Worth noting that ECDC is recommending a rectal screen for all cross-border transfers of hospital patients in Europe.
  • Mortality was especially high in the NIH outbreak (albeit in patients with serious underlying illness), illustrating the clinical ‘teeth’ that this issue bares.
  • The outbreak reignited from an unidentified reservoir after apparently being brought under control; we have a limited understanding of the challenging epidemiology of these organisms.
  • It’s sad, though not surprising, that the high hand hygiene compliance achieved during the outbreak could not be sustained following the outbreak.
  • As you would expect, relying on clinical cultures alone means looking at just the tip of the iceberg. Active surveillance is a must.
  • One unique aspect of their enhanced contact precautions was an instruction for staff to avoid touching personal electronic items. This makes a lot of sense, and should be considered for inclusion in regular contact precautions.
  • There are some telling insights on the practical challenges of cohorting staff, not least the fact that there were not enough physicians to feasibly cohort!
  • The initial isolation measures failed, and NIH (commendably) went to extraordinary lengths to bring the outbreak under control. ‘Aggressive’ is used to describe several aspects of their strategy, which seems apt. Israel is another success story of extraordinary CRE control measures; Greece and Italy are examples of where extraordinary measures have not been undertaken and CRE have quickly become endemic.

Article citation: Palmore TN, Henderson DK. Managing Transmission of Carbapenem-Resistant Enterobacteriaceae in Healthcare Settings: A View From the Trenches. Clin Infect Dis 2013, in press.

Micro Blog Autumn 2013 Update


The weather is turning colder and the leaves are beginning to turn, so that must mean it’s time for the Autumn Micro Blog Update! The following articles have been published on the blog since the Summer Update (listed in rough order of popularity based on number of visits):

As you can see, there have been plenty of questions and comments, which is exactly what we want to see! If you have any questions or comments, please feel free to add them to the blog.

We are planning reports from the upcoming Infection Prevention Society (IPS) conference in London, ID Week in San Francisco and the International Federation of Infection Control (IFIC) conference in Buenos Aires over the next month or so. Also, look out for some guest bloggers before the end of the year.

And finally…how does good hand hygiene smell? Quite good, according to this study showing that hand hygiene compliance was higher when a fresh scent was in the air!

Is “community-acquired” CDI real?

A recent high-profile US study delved into apparently community-associated CDI cases to evaluate healthcare exposures. The study was large, evaluating almost 1000 cases of community-associated CDI from 8 US states. Only 177 (18%) of the 984 cases had no recent healthcare exposure (Figure 1). Furthermore, healthcare exposure was only evaluated for the 12 weeks prior to the positive specimen, so I would wager that a portion of this 18% acquired their infecting C. difficile in a healthcare facility.

Figure 1. Data demonstrating that most MRSA and CDI presenting on admission to hospital are likely to have been acquired in a healthcare facility.

So, it seems that the majority of these cases are more likely to be community-onset, healthcare-acquired CDI, rather than community-acquired CDI. I feel like we’ve been here before. In the 1990s before the emergence of distinct strains of CA-MRSA, MRSA presenting at hospital admission was commonly termed ‘community-associated’ or, worse, ‘community-acquired’ when really it was MRSA that had been acquired in hospital during a previous stay (Figure 1). The situation has now changed since distinct MRSA clones have emerged that have the capacity to cause infection outside the healthcare environment.

Turning our attention to the UK, the mandatory reporting scheme classifies cases of CDI as ‘Trust-apportioned’ if the specimen is collected from a patient who has been in hospital for four or more days (Figure 2). It is tempting to speculate that the cases of CDI that are non-Trust-apportioned are CA-CDI. However, the definition of ‘Trust-apportioned’ does not account for previous healthcare contact, and the rates of Trust-apportioned and non-Trust-apportioned cases track so closely that, once again, these are likely to be healthcare-acquired CDI presenting on admission.

Figure 2. Number of cases of CDI in England reported through the mandatory reporting scheme, 2004-2013.
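Since the apportionment rule is purely date-based, it can be stated in a couple of lines. Here is a minimal Python sketch of the rule as described above (the official definition includes further details, such as patient age, that I have ignored):

```python
def is_trust_apportioned(days_in_hospital_at_specimen: int) -> bool:
    """Sketch of the apportionment rule described above: a CDI case is
    'Trust-apportioned' if the specimen is collected from a patient who has
    been in hospital for four or more days. Previous healthcare contact plays
    no part, which is the blind spot discussed above."""
    return days_in_hospital_at_specimen >= 4

# A patient readmitted with CDI on day 1, having acquired C. difficile during
# a previous stay, counts as non-Trust-apportioned despite being healthcare-acquired.
assert not is_trust_apportioned(days_in_hospital_at_specimen=1)
```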

The epidemiology of C. difficile is fundamentally different to that of MRSA, in that healthy neonates typically have a high rate of C. difficile colonization. Thus, there is a ready reservoir for a low rate of genuinely community-acquired CDI. However, it seems to me that most “CA-CDI” reported thus far is likely to have been acquired in a healthcare facility, and I have not seen any data to convince me that community-acquired CDI is increasing.

Article citation: Chitnis et al. Epidemiology of Community-Associated Clostridium difficile Infection, 2009 Through 2011. JAMA Intern Med 2013;173:1359-67.

Is there a causal relationship between contamination burden and transmission risk?

There’s an age-old problem in science: how do you prove a causal relationship between variables that correlate? Proving that the variables are correlated is the easy part; it’s more difficult to disentangle cause from effect. This can be seen in several studies that identify a correlation between environmental burden and the number of patients that are infected or colonized with pathogens.

Figure 1. Correlation between the number of patients infected with Acinetobacter spp. and the number of positive Acinetobacter spp. environmental cultures per calendar month during an outbreak on a neurosurgical ICU.1

Figure 2. Correlation between microbial burden and the number of patients who acquired an HAI in ICUs.2

Figure 3. Correlation between the number of hygiene failures and the number of patients who acquired an infection on a surgical intensive care unit each week.3

So can we conclude that the higher burden of contamination resulted in an increased risk of acquisition? Or is it that more patients were infected or colonized with pathogens, which resulted in more environmental shedding? From these studies, you can’t be sure.

If you were seeking to prove the role of a gene in a process, you’d knock out the gene and demonstrate that the process stopped or changed. So, the only way to disentangle cause and effect in contamination and transmission is to perform an intervention to reduce environmental contamination and show that this correlates with reduced transmission. While the Salgado study evaluated an intervention, the data correlating contamination burden with HAIs was not stratified by the intervention, which would have been one way to assess likely causation.2
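To make the stratification point concrete, here is a minimal sketch with invented monthly counts (both the numbers and the variable names are hypothetical, purely for illustration):

```python
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation. Note that it is symmetric in xs and ys,
    so it says nothing about which variable drives the other."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical monthly counts: positive environmental cultures and acquisitions,
# before and after a cleaning/disinfection intervention.
pre_burden, pre_acquisitions = [12, 15, 9, 14, 11, 13], [5, 7, 3, 6, 4, 6]
post_burden, post_acquisitions = [4, 3, 5, 2, 4, 3], [2, 1, 2, 1, 2, 1]

# If reducing contamination is causal, acquisitions should fall in the
# post-intervention stratum, and the burden-acquisition relationship should be
# assessed within each stratum rather than by pooling the two periods.
print(pearson(pre_burden, pre_acquisitions))
print(pearson(post_burden, post_acquisitions))
```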

There is some further experimental and epidemiological evidence supporting the idea that the degree of transmission may be proportional to the environmental burden. A mouse model established a ‘dose-response’ relationship between the degree of contamination with C. difficile spores and the development of CDI.4 Furthermore, this model showed that disinfectants that achieved a greater log reduction of C. difficile spores were better able to interrupt transmission.

Also, one of the studies demonstrating that admission to a room previously occupied by a patient with VRE increases the chances of VRE acquisition identified something amounting to a ‘dose response’.5 The greatest increased risk was for patients admitted to a room with an environmental culture positive for VRE, and being admitted to a room where the immediate prior room occupant was colonized with VRE carried a greater increased risk than being admitted to a room where any patient in the 2 weeks prior to admission was VRE colonized (Figure 4).

DreesFigure 4. How the increased risk of acquiring VRE from the prior room occupant changes due to patient and environmental factors.5

Is there a causal relationship between contamination burden and transmission risk? On balance, the answer seems to be yes, though it would be useful to have a solid intervention study to prove that an increasing environmental burden causes an incremental increase in transmission risk.

Article citations:

  1. Denton M, Wilcox MH, Parnell P et al. Role of environmental cleaning in controlling an outbreak of Acinetobacter baumannii on a neurosurgical intensive care unit. J Hosp Infect 2004; 56: 106-110.
  2. Salgado CD, Sepkowitz KA, John JF et al. Copper surfaces reduce the rate of healthcare-acquired infections in the intensive care unit. Infect Control Hosp Epidemiol 2013; 34: 479-486.
  3. White LF, Dancer SJ, Robertson C, McDonald J. Are hygiene standards useful in assessing infection risk? Am J Infect Control 2008; 36: 381-384.
  4. Lawley TD, Clare S, Deakin LJ et al. Use of purified Clostridium difficile spores to facilitate evaluation of health care disinfection regimens. Appl Environ Microbiol 2010; 76: 6895-6900.
  5. Drees M, Snydman D, Schmid C et al. Prior environmental contamination increases the risk of acquisition of vancomycin-resistant enterococci. Clin Infect Dis 2008; 46: 678-685.

The pitfalls of PCR for detecting pathogens on surfaces

PCR has proven an invaluable tool for the rapid diagnosis of a range of pathogens, including MRSA and C. difficile. Several studies have evaluated the potential use of PCR for the detection of pathogens on surfaces and have identified some issues that, frankly, seem pretty terminal for this application using currently available commercial PCR kits.

A study from Cleveland evaluated the use of a commercial real-time PCR (RT-PCR) assay for detecting C. difficile on hospital surfaces. Three composite sites were sampled in 22 patient rooms; 41% of the rooms housed a patient with CDI, and the remaining 59% were sampled after terminal cleaning and disinfection. Two swabs and a gauze were collected from each site: one swab was cultured directly onto selective agar, the other was tested using PCR, and the gauze was cultured using broth enrichment. Isolates that grew on the selective agar were tested for toxin production, and only toxigenic C. difficile were included.

Overall, 23 (35%) of the 66 sites grew toxigenic C. difficile, and only 4 of these were detected using the standard RT-PCR assay (sensitivity 17%, specificity 100%). The sensitivity of RT-PCR in rooms that had been cleaned and disinfected was even worse (10%). Increasing the cycle threshold (Ct) cutoff of the assay (making it less stringent) improved the overall sensitivity to 52% and did not affect the specificity.
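To make the 2×2 arithmetic explicit, here is a quick check in Python using culture as the gold standard and the counts reported above:

```python
def sensitivity_specificity(tp, fn, fp, tn):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# 66 sites: 23 culture-positive for toxigenic C. difficile, of which only 4
# were PCR-positive; a specificity of 100% implies no PCR positives among the
# 43 culture-negative sites.
sens, spec = sensitivity_specificity(tp=4, fn=23 - 4, fp=0, tn=66 - 23)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")  # sensitivity 17%, specificity 100%
```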

The study has several important limitations. The RT-PCR assay detected only the toxin B gene, whereas the toxigenic culture methodology would detect both toxin A and toxin B producers. More importantly, there was a crucial difference in sampling methodology: the gauzes used for broth enrichment culture had a 50% higher positivity rate than the swabs (in line with other findings), but only the swabs were tested by both PCR and culture. Thus, if the gauzes are a more effective sampling device, this would make the RT-PCR methodology seem worse than it is. I would have liked to have seen the sensitivity of the RT-PCR assay for detecting C. difficile cultured from the swabs only, but I could not derive this from the data in the paper.

An older study from New Haven, Connecticut, provides a contrasting view of the use of PCR to detect pathogens on surfaces. Here, 10 standardized sites were sampled in the rooms of 10 patients infected or colonized with MRSA, and in the rooms of 5 patients not known to be infected or colonized with MRSA. Swabs were plated directly onto selective agar for MRSA, and DNA was then extracted from the same swabs before a broth enrichment procedure. In this study, 40 (27%) of the 150 surfaces were positive by culture, but 90 (60%) were positive by PCR (sensitivity 93%, specificity 51%).


Figure 1. Contrasting sensitivity and specificity when using PCR to detect C. difficile and MRSA on hospital surfaces.

It seems, then, that the sensitivity of PCR is too low for the environmental detection of C. difficile, but the specificity is too low for MRSA (Figure 1). How could this be? Assuming that this is not due to experimental differences between the studies, it could be that the standard extraction procedure used for the C. difficile assay was not robust enough to liberate DNA from the mature environmental spores, resulting in low sensitivity. Conversely, the MRSA assay was probably detecting DNA from dead MRSA on surfaces, resulting in low specificity.

So, in summary, the MRSA assay was too sensitive and the C. difficile assay was not sensitive enough! While the use of these “off the shelf” commercial assays doesn’t seem to be useful for detecting pathogens on surfaces, there may be hope for a PCR assay tailored specifically for an environmental application.

Article citations:

Deshpande A, Kundrapu S, Sunkesula VC, Cadnum JL, Fertelli D, Donskey CJ. Evaluation of a commercial real-time polymerase chain reaction assay for detection of environmental contamination with Clostridium difficile. J Hosp Infect 2013;85:76-78.

Otter JA, Havill NL, Boyce JM. Evaluation of real-time polymerase chain reaction for the detection of methicillin-resistant Staphylococcus aureus on environmental surfaces. Infect Control Hosp Epidemiol 2007;28:1003-1005.

Is it time to turn to ‘no-touch’ automated room disinfection?

I gave a webinar for 3M yesterday entitled ‘Is it time to turn to ‘no-touch’ automated room disinfection (NTD)?’ It was based broadly on a recent Journal of Hospital Infection review article, and you can access the slides here.

The webinar covered:

  • The key data supporting the need for improved hospital disinfection, particularly ‘terminal disinfection’ when patients are discharged.
  • The strengths and limitations of conventional disinfection methods, particularly in terms of reliance on the operator to ensure adequate formulation, distribution and contact time of the active agent.
  • The potential benefits of introducing automation into the room disinfection process.
  • The advantages and disadvantages of the various “no-touch” automated room disinfection systems currently available.
  • Scenarios in which NTD systems may be warranted.

To summarize the rationale for using an NTD system: enhanced conventional methods are able to eliminate pathogens from surfaces, but the inherent reliance on a human operator to ensure adequate formulation, distribution and contact time of the active agent introduces variability into the process. NTD systems remove or reduce reliance on the operator for delivering hospital disinfection. However, they do not obviate the need for cleaning, so they are designed to augment rather than replace conventional methods.

So when should you consider an NTD system? The flow chart below (Figure 1) shows a decision tree for which cleaning and disinfection approach to take. Given their practical limitations, NTD systems are best suited to the disinfection of a room after a patient colonized or infected with a pathogen has been discharged, to protect the incoming patient from acquiring the pathogen left behind by the prior room occupant. A recent study of a hydrogen peroxide vapor (HPV) NTD system showed that patients admitted to rooms disinfected using HPV were 64% less likely to acquire any multidrug-resistant organism (MDRO) than patients admitted to rooms disinfected using standard methods when the prior room occupant had an MDRO.

Figure 1. A disinfection decision diagram for when to consider an NTD system. a) Key pathogens associated with contamination of the environment include C. difficile, VRE, MRSA, A. baumannii, P. aeruginosa and norovirus. b) All NTD systems are applied after a cleaning step to ensure that surfaces are free from visible contamination, which is unacceptable to subsequent patients and will reduce the efficacy of the NTD disinfection. c) There is limited, equivocal evidence that enhanced cleaning / disinfection in a low-risk general ward setting can reduce the spread of pathogens.

Ok, so you’ve decided that you want to use an NTD system. Which one to choose? Every conference I go to seems to have more and more NTD systems on show, all with bold and often conflicting claims. There are essentially four classes of NTD system that are commonly used in hospitals:

  • Hydrogen peroxide vapor (HPV)
  • Aerosolized hydrogen peroxide (aHP)
  • Ultraviolet C (UVC)
  • Pulsed-xenon UV (PX-UV)

I asked the audience which, if any, NTD system had been used in their hospital (Figure 2). 90% of the predominantly US-based audience had not used an NTD system at all, which was a surprise. In the hospitals that had used an NTD system, there was a fairly even split between HPV and the UV systems.

Figure 2. Has your hospital used an NTD system and, if so, which one?

Each of these systems has advantages and disadvantages, which I have tried to summarize in the following table by ranking the systems in the key categories. The hydrogen peroxide systems tend to have higher efficacy and better distribution than the UV systems, but the UV systems are faster and easier to use. Thus, there is a trade-off between efficacy / distribution and cycle time / ease of use when deciding which NTD system would be more appropriate.

Table. Comparing the key features of the four commonly used NTD systems.

In order to illustrate the challenges in choosing a) whether to use an NTD system and b) which one to use, I presented the audience with three scenarios. In scenario 1, below, I was expecting most people to select ‘conventional methods’ or one of the UV systems, which have both been shown to reduce the burden of contamination without reliably eliminating pathogens. The sheer number of patients with MRSA colonization transferred or discharged from general medical wards means that the additional time for HPV may not be warranted.

Scenario 1. What do you do when a patient who was colonized with MRSA has been discharged from a room on a general medical ward?

Scenario 2 is an occasion where you want to be sure that residual contamination has been dealt with, so that the incoming susceptible ICU patient will not acquire the virtually untreatable carbapenem-resistant A. baumannii. Therefore HPV, which is associated with the elimination of pathogens from surfaces, is a rational choice.

Scenario 2. What do you do when a patient who had an infection with carbapenem-resistant A. baumannii has been discharged from an ICU room?

Scenario 3 is trickier. While the likelihood of C. difficile spore contamination argues for the higher efficacy of the hydrogen peroxide systems, the number of transfers or discharges of patients with C. difficile on a surgical unit may be high, which argues for the lesser efficacy but faster cycles of the UV systems. The majority of the audience selected HPV in this scenario, considering that the combined risk of the pathogen and specialty required the elimination of C. difficile spores from the room prior to the admission of the next patient.

Scenario 3. What would you do when a patient who had C. difficile infection has been discharged from a room on a surgical unit?
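To caricature the decision logic running through these scenarios, here is a toy sketch in Python (the risk categories and thresholds are my own simplification for illustration, not the published flow chart):

```python
def choose_terminal_disinfection(env_pathogen: bool, high_risk_setting: bool,
                                 high_patient_turnover: bool) -> str:
    """Toy sketch of the scenario logic above; not a validated decision algorithm.

    env_pathogen: the prior occupant had a pathogen associated with environmental
    survival and transmission (e.g. C. difficile, VRE, MRSA, A. baumannii).
    """
    if not env_pathogen:
        return "conventional terminal cleaning and disinfection"
    if high_risk_setting:
        # Scenario 2: carbapenem-resistant A. baumannii on the ICU, where
        # eliminating residual contamination justifies the longer HPV cycle.
        return "hydrogen peroxide vapor (HPV)"
    if high_patient_turnover:
        # Scenario 1: MRSA discharge on a busy general medical ward, where a
        # faster, lower-efficacy UV cycle (or enhanced conventional methods)
        # may be the pragmatic choice.
        return "UVC / PX-UV, or enhanced conventional methods"
    # Scenario 3 sits in between; the webinar audience favoured HPV for CDI
    # on a surgical unit.
    return "HPV if cycle time permits, otherwise a UV system"
```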

To summarize, the use of an NTD system to augment terminal disinfection is warranted in some circumstances. The choice of NTD system will depend on a number of factors, including efficacy, distribution, ease of use, cycle time and cost. The features of the various NTD systems make them best suited to different applications, dictated by the clinical setting and the environmental characteristics of the target pathogen. So, is it time to turn to NTD systems? 52% of the audience voted ‘yes’ at the start of the webinar; 74% voted ‘yes’ at the end!

Figure 3. Is it time to turn to ‘no-touch’ automated room disinfection? The audience were asked this question at the start and the end of the webinar, indicating a swing towards the affirmative!

Article citation: Otter JA, Yezli S, Perl TM, Barbut F, French GL. Is there a role for “no-touch” automated room disinfection systems in infection prevention and control? J Hosp Infect 2013;83:1-13.

The effect of closing and cleaning wards on infection rates

Not so long ago, the UK Government ordered a national ‘deep clean’. This prompted a fair amount of debate among experts and the public. If the NHS needed a spring clean, does that mean that it was dirty in the first place? Perhaps. There does not seem to have been a formal evaluation of its impact, but there is some rationale for closing and cleaning wards. For example, this paper from the early 1970s evaluated the impact of closing and cleaning five wards in London.

The five wards (four surgical and one medical) had an outbreak of MRSA (termed ‘cloxacillin-resistant S. aureus’). Rates of infection (termed ‘sepsis’) were monitored on the study wards before and after closing and cleaning. Wards were closed to admissions and emptied of patients. All fabrics were sent for laundering and all leftover supplies were discarded. Cleaning comprised washing floors, walls and all other surfaces with hot water containing detergent; bed frames and furniture were also washed. The length of time that all this cleaning took is not specified, but I suspect it took place over several days. Crucially, staff and patients were screened for carriage of epidemic strains of S. aureus; where possible, colonized patients were not re-admitted after ward cleaning.

The charts below show the impact on all infections (Figure 1), all S. aureus infection (Figure 2) and MRSA infection (Figure 3). Infection rates were compared 3 months before vs. 3 months after cleaning on Wards 1-3 and 6 months before vs. 6 months after on Wards 4 and 5. As you can see, the impact was pretty dramatic.


Figure 1. Total infection rate (proportion of admissions infected) on the five wards before vs. after ward closing and cleaning.


Figure 2. S. aureus infection rate (proportion of admissions infected) on the five wards before vs. after ward closing and cleaning.


Figure 3. MRSA infection rate (proportion of admissions infected) on the five wards before vs. after ward closing and cleaning.

The poor reduction in total infection rate on Ward 1, a gynecological ward (Figure 1), is largely due to high Gram-negative infection rates before and after cleaning, most likely explained by endogenous urinary tract infections. Reductions in total infection rate and S. aureus infection rate appeared to be smaller on Wards 4 and 5, which could be influenced by the fact that rates were compared for 6 months pre and post ward closing and cleaning, rather than the 3 months used for Wards 1-3; the impact of a one-off environmental intervention is likely to diminish over time. It’s also interesting to note that the MRSA infections identified on Ward 5, a general surgical ward, after cleaning were due to a different strain of MRSA (determined by phage typing and antibiogram) than before cleaning. This new strain matched the outbreak strain from Ward 2. Two of the patients on Ward 5 who became infected with this strain were operated on in the same theatre as the infected patients from Ward 2 within two weeks of one another. Four other patients (on different wards) also appeared to acquire the strain in the same operating theatre.

The study has several important limitations. It is not possible to be certain whether active screening and isolation or ward closing and cleaning were responsible for the reduction in infection rates; it was probably a combined impact. The study design lacked the rigor of more modern investigations: infection rates were not expressed in terms of patient-days, and rates were compared for different time periods, making direct comparison of the impact across the five wards difficult. Also, no environmental sampling was conducted to demonstrate the efficacy of the cleaning procedure (both initially and in terms of recontamination).

Notwithstanding these limitations, the study provides evidence that ward closing and cleaning, combined with active screening and selective readmission, resulted in a dramatic reduction in the rate of nosocomial infection on the five study wards. The impact appeared to be most pronounced in the first three months, which is consistent with a reduction in environmental contamination. Outbreaks of MRSA were eradicated by closing and cleaning on all five study wards. However, there was evidence of new nosocomial transmission following the re-admission of infected patients. Finally, there was some interesting circumstantial evidence of transmission within operating theatres.

Article citation: Noone P, Griffiths RJ. The effect on sepsis rates of closing and cleaning hospital wards. J Clin Pathol 1971;24:721-725.

Prevalence survey illustrates the difference between CRE and CPE

I recently posted an article on the difference between CPE and CRE, which is neatly illustrated by a prevalence survey from Alder Hey Children’s Hospital in Liverpool. In case you didn’t read my CPE/CRE blog (shame on you), here’s a reminder of the difference between the two:

Carbapenem-resistant Enterobacteriaceae (CRE) – Enterobacteriaceae that are resistant to carbapenems by any mechanism, including the production of an acquired carbapenemase or the production of an ESBL or AmpC combined with porin loss.

Carbapenemase-producing Enterobacteriaceae (CPE) – Enterobacteriaceae that are resistant to carbapenems by means of an acquired carbapenemase.

At Alder Hey, the prevalence survey was performed between September 2011 and August 2012. All clinical and screening specimens were included; rectal screens were collected on admission and weekly from all patients in the ICU and HDU. Twenty-four patients with CRE were identified, five (21%) from clinical specimens and 19 (79%) from rectal screens. The prevalence of CRE in rectal screens was 4.5% (19/421). Four of the 19 patients identified only by screening went on to develop an infection, so nine (38%) of the 24 patients ended up with CRE cultured from a clinical specimen.
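The headline figures reduce to simple arithmetic; here is a quick check in Python using only numbers reported in the paper:

```python
screened, screen_positive = 421, 19
clinical_first, later_infection, total_cre = 5, 4, 24

print(f"CRE prevalence in rectal screens: {screen_positive / screened:.1%}")  # 4.5%
print(f"patients with a clinical CRE isolate: "
      f"{(clinical_first + later_infection) / total_cre:.0%}")  # 38% (9/24)
```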

The majority (71%) of the 24 isolates were resistant to carbapenems by AmpC or ESBL production combined with impermeability; seven (29%) were CPE, four with NDM and three with KPC (Figure). Typing indicated that three of the four NDM-producing Klebsiella pneumoniae isolates were clonal, and these were clustered in space and time, which may indicate a small outbreak.

Figure. Composition of CRE at Alder Hey Children’s Hospital.

Carbapenem resistance due to the production of an ESBL or AmpC combined with porin loss may lead to treatment failure, but it is often unstable and may impose a fitness cost, meaning that these strains rarely spread. Hence, carbapenem resistance conferred by an acquired carbapenemase is the key problem. This study helps to define the prevalence of CRE (and, more importantly, CPE) in the population. We are not given a denominator for the clinical specimens, so the prevalence of CRE amongst clinical specimens cannot be calculated. However, the fact that around 5% of patients admitted to the ICU / HDU were carrying CRE is a concern, although the prevalence of CPE in rectal screens was lower, at 1.7% (7/421).

Currently, the prevalence and epidemiology of CRE and CPE are poorly defined in the UK, so this useful prevalence survey from Alder Hey is welcome. However, we urgently need more research from other hospitals to gauge the scale of the CRE problem.

Article citation: Drew et al. Emergence of carbapenem-resistant Enterobacteriaceae in a UK paediatric hospital. J Hosp Infect 2013;84:300-304.

Look out: resistance outbreaks about!

We are all familiar with the idea of outbreaks. A noteworthy pathogen rears its ugly head, leaving a trail of destruction in its wake (as in ‘Contagion’, or before that, ‘Outbreak’). (I credit ‘Outbreak’ with getting me into microbiology and epidemiology as an impressionable 15-year-old, by the way.) Or, more commonly in hospitals, a ward experiences an increased incidence of a particularly resistant or virulent clone. But a recent study from some colleagues at the Centre for Clinical Infection and Diagnostics Research at St. Thomas’ Hospital in London turns the idea of an ‘outbreak’ on its head by identifying surprisingly common outbreaks of resistance to a particular antibiotic across different species.

The horizontal transfer of resistance genes between species is generally considered to be a rare event relative to the clonal transmission of an outbreak pathogen (see Figure below). But the findings of this study suggest more promiscuous spread of resistance genes than you might expect.


Figure: Horizontal transfer of resistance genes is generally considered to be the least common cause of ICU resistance.  

The team used outbreak-scanning software to interrogate laboratory reports from two ICUs between 2002 and 2009. Analysis of the large dataset, comprising almost 90,000 patient-days, found that outbreaks occurred for two thirds of the 26 ‘species-groups’ studied; only three of these were recognized at the time. Thirty-nine outbreaks of resistance were detected, the majority of which (87%) did not coincide with an increase in a particular ‘species-group’, supporting the idea that these were due to horizontal gene transfer between species.
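The published study used dedicated outbreak-scanning software with formal statistics; purely to illustrate the signature being looked for, here is a toy sketch of my own (all names, data and thresholds are invented):

```python
from collections import defaultdict
from datetime import date, timedelta

def flag_resistance_clusters(isolates, window=timedelta(days=14), min_cases=3):
    """Toy scan over resistant isolates, given as (date, species_group, antibiotic).

    A burst of resistance to one antibiotic involving more than one species-group
    within the window is the signature suggestive of horizontal gene transfer,
    as opposed to a clonal outbreak within a single species-group."""
    by_antibiotic = defaultdict(list)
    for sampled, species_group, antibiotic in isolates:
        by_antibiotic[antibiotic].append((sampled, species_group))
    alerts = []
    for antibiotic, events in by_antibiotic.items():
        events.sort()
        for i, (start, _) in enumerate(events):
            groups = [g for sampled, g in events[i:] if sampled - start <= window]
            if len(groups) >= min_cases and len(set(groups)) > 1:
                alerts.append((antibiotic, start, sorted(set(groups))))
                break  # one alert per antibiotic is enough for this sketch
    return alerts

example = [(date(2009, 3, 1), "E. coli", "meropenem"),
           (date(2009, 3, 5), "K. pneumoniae", "meropenem"),
           (date(2009, 3, 9), "Enterobacter spp.", "meropenem")]
print(flag_resistance_clusters(example))
```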

The clustering of individual species into ‘species-groups’ is somewhat problematic, and may serve to over-emphasize the number of outbreaks that occurred. Quite a number of the outbreaks of the same ‘species-group’ and of resistance were very small, with two cases over a day or two. Also, clustering of the same species does not necessarily mean clonal transmission has occurred; you’d need to do molecular typing to prove that. Similarly, clustering of resistance to the same antibiotic across species does not necessarily mean horizontal gene transfer has occurred; multiple mechanisms could be involved. Notwithstanding these limitations, this is an important study and has changed the way that I think about hospital outbreaks.

Infection control interventions implemented to control recognized outbreaks on the ICU appeared to reduce the overall number of outbreaks of the same ‘species-group’, but did not affect the number of resistance outbreaks. So, it seems that different measures are necessary to control outbreaks of resistance. Perhaps the best weapon we have to combat outbreaks of resistance is to restrict our use of antibiotics: if we can reduce the selective pressure driving resistance, we should see fewer clonal outbreaks of resistant bacteria and fewer resistance outbreaks across species.

Article citation: Vlek AL, Cooper BS, Kypraios T, Cox A, Edgeworth JD, Auguet OT. Clustering of antimicrobial resistance outbreaks across bacterial species in the intensive care unit. Clin Infect Dis 2013; 57: 65-76.

Do you know your CRO from your CPO from your CRE from your CPE?

Carbapenems are a class of beta-lactam antibiotic with a broad spectrum of activity against Gram-positive and Gram-negative bacteria. Whilst carbapenems are used for the treatment of Gram-positive infections, it is the emergence of Gram-negative bacteria with resistance to the carbapenems that has prompted unusually dramatic health warnings from the US CDC, Public Health England (PHE) and the European CDC (ECDC). However, the various acronyms employed to describe the subtleties of the problem are a minefield for the uninitiated:

Carbapenem-resistant organism (CRO) – Gram-negative bacteria* including the Enterobacteriaceae (such as Klebsiella pneumoniae and Escherichia coli) and non-fermenters (such as Acinetobacter baumannii, Pseudomonas aeruginosa and Stenotrophomonas maltophilia) that are resistant to carbapenems by any mechanism. The non-fermenters can be inherently resistant to carbapenems, or they can acquire carbapenemases (typically KPC, VIM, NDM and OXA-48 types). Enterobacteriaceae do not have inherent resistance, but may be resistant to carbapenems through the production of an acquired carbapenemase or the production of an ESBL or AmpC combined with porin loss.

Carbapenemase-producing organism (CPO) – Enterobacteriaceae and non-fermenters that are resistant to carbapenems by means of an acquired carbapenemase.

Carbapenem-resistant Enterobacteriaceae (CRE) – Enterobacteriaceae that are resistant to carbapenems by any mechanism, including the production of an acquired carbapenemase or the production of an ESBL or AmpC combined with porin loss.

Carbapenemase-producing Enterobacteriaceae (CPE) – Enterobacteriaceae that are resistant to carbapenems by means of an acquired carbapenemase.

The image below tries to graphically represent the relative size of these groups (not to scale!), and the table provides a summary of their distinguishing features:

Figure. The relative sizes of the CRO, CPO, CRE and CPE groups (not to scale).

Table. Summary of the distinguishing features:

Term   Organism group                          Mechanism of carbapenem resistance
CRO    Enterobacteriaceae and non-fermenters   Any mechanism
CPO    Enterobacteriaceae and non-fermenters   Acquired carbapenemase
CRE    Enterobacteriaceae only                 Any mechanism
CPE    Enterobacteriaceae only                 Acquired carbapenemase

“O” or “E”

The US CDC and European CDC seem to favour ‘CRE’ as a generic term for this problem, whereas PHE in the UK seems to favour ‘CRO’. Clearly, each term has a defined meaning, and context will determine which is technically correct. But which is most useful as a generic term?

The epidemiology of the non-fermenters and the Enterobacteriaceae is different, with the non-fermenters tending only to cause problems in very sick patients, usually in critical care settings. Meanwhile, the Enterobacteriaceae are more able to cause infections in a wider range of patients both inside and outside the hospital. Hence, the emergence of carbapenem resistance is more concerning in the Enterobacteriaceae, as demonstrated by rapid national and international spread of KPC-producing K. pneumoniae.

“R” or “P”

Amongst the Enterobacteriaceae, whilst carbapenem resistance due to the production of an ESBL or AmpC combined with porin loss may lead to treatment failure, it is often unstable and may impose a fitness cost, meaning that these strains rarely spread. Hence, carbapenem resistance conferred by an acquired carbapenemase is the key problem.

So, for me, CPE would be the most suitable generic term for this emerging problem. However, since the US CDC and the ECDC seem to have gone with CRE, and the vast majority of CRE will be CPE, let’s go with CRE shall we?


[* Whilst Gram-positive bacteria that are resistant to carbapenems (such as MRSA) could be described as ‘CROs’, these terms are reserved to describe Gram-negative bacteria.]