Showing posts with label public reporting of HAIs. Show all posts

Wednesday, August 10, 2011

Central line + positive blood culture = CLABSI (not!)

There's a thoughtful commentary in a recent issue of Clinical Infectious Diseases by Tom Fraser and Steve Gordon at Cleveland Clinic on problems related to CDC's central line associated bloodstream infection (CLABSI) case definition. We've blogged about this before. The definition is old and was designed to maximize sensitivity long before anyone thought about public reporting. But the poor specificity of this definition is haunting many of us, particularly those who work at hospitals with cancer centers.

Unfortunately, neutropenic cancer patients not uncommonly have translocation of enteric flora across their intestinal mucosa, and when the resulting bloodstream infection occurs in the presence of a central line, we are forced to label it a CLABSI, even though the infection is not at all related to the line. Ten years ago no one cared about this surveillance technicality. Now, in the era of public reporting, it's a big problem. In fact, nearly every "CLABSI" in the medical ICU of my hospital falls into this category.

Fraser and Gordon show us how this is handled at their hospital with a modification to the CDC definition that is used for internal purposes. Hopefully relief is on the way: CDC is very interested in this issue and has assembled a committee that is actively evaluating it.

Sunday, July 10, 2011

SHEA's Consumer Retort

Kudos to SHEA for publishing a statement on Consumer Reports' article on teaching hospitals and hospital quality ratings, which include infection prevention metrics. Here's the money quote:
Buried in the methodology, the publisher of Consumer Reports agrees that comparisons must be done carefully, but the article does not reflect this caution. Instead, the article draws broad conclusions about the quality and safety of care throughout entire health systems based on one measurement gathered from a single unit in each hospital.
While Consumer Reports may know how to evaluate refrigerators, they have a long way to go in order to produce a high quality assessment of health care.

Tuesday, June 14, 2011

Surveillance bias and public reporting

This week’s JAMA has a commentary from Haut and Pronovost that’s worth a read, on the topic of surveillance bias. This is Epidemiology 101 for those of us who live and breathe surveillance and prevention, but unfortunately it is not well understood by patients or public policy makers. So when data like this garbage dump from Consumer Reports are released and generate media attention, more harm than good results. Good hospitals are unfairly maligned, and undoubtedly some hospitals that game their rates or simply perform poor surveillance are rewarded. Call me a stupid consumer, but if I require ICU care and have to choose between Vanderbilt, Virginia Commonwealth, University of Maryland and one of these hospitals, I’m choosing one of the first three. Now I'm sure this list includes excellent hospitals, but I have no (zero) confidence in the data presented (i.e., I'm 'getting to zero' confidence).

We’ve blogged plenty about the challenges of public reporting and establishing a level playing field, so I’ll refer readers to these prior posts. As Haut and Pronovost point out, to ignore standardized, accurate and fair measurement (to include external validation) is both “reckless and unjust”. It is also true that “to be done appropriately, quality measurement is expensive”. Cheap shortcuts, like using ICD-9 coding data, simply prove the maxim that ‘you get what you pay for’.

Saturday, February 26, 2011

Maryland report on hospital associated complications: Don't waste your time reading it

This week, NPR and the Washington Post ran stories on a new report on healthcare associated complications in Maryland. The report can be viewed here. Based on its results, nine hospitals are required to pay penalties for higher-than-average rates of complications. Eleven of the report's 49 indicators are infectious complications, such as infection related to central venous catheters.

However, what is most important to know is that the source of the data is administrative claims (ICD-9 codes, which were developed for billing purposes). We've blogged before about how notoriously inaccurate these codes are for determining whether patients experienced healthcare associated infections. Case ascertainment is performed by abstractors with little medical training, using case definitions that were not designed for surveillance purposes. Last year, Pennsylvania, the state with the most comprehensive mandatory reporting program for healthcare associated infections in the country, abandoned the use of administrative claims data and required that all hospitals use CDC surveillance methodology. Particularly when hospitals are going to be punished with fines and bad publicity, valid methods must be used.

I noted in the report that there were 431 cases of "moderate infectious," at a cost of over $6 million. What in the world is "moderate infectious"? I don't think you could find an infectious disease doctor anywhere who could tell you, because there is no such thing.

Those of us who work in hospital epidemiology understand the need for public reporting because our society values transparency and accountability. We get it. But public reporting is a two-way street that requires a commitment on the part of public agencies to ensure that the data are obtained via state-of-the-art methods and risk adjusted in order to produce the most valid reports for the public. In other words, it's about playing fairly.

Friday, January 21, 2011

Learning to count...

There's an old joke about asking an accountant "what's 2+2?" and the accountant responds "what do you want it to be?"

Unfortunately, CDC has some creative math rules of its own. So for hospital epidemiologists, 1 + x = 1 when it comes to counting central line days. That is, for patients who have more than one central line, only one line can be counted per day for the denominator in calculation of central line associated bloodstream infection (CLABSI) rates. It's as if only one of the three central lines in the acutely ill ICU patient poses a risk to the patient. Magically, the other two are immune.
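The arithmetic here is worth making concrete. The sketch below uses hypothetical numbers (not from any study) to show how counting at most one line per patient-day shrinks the denominator and inflates the reported rate:

```python
# Illustrative sketch with hypothetical numbers: how the one-line-per-day
# denominator rule changes a CLABSI rate.
# CLABSI rate = infections / central-line-days * 1000

infections = 6

# Suppose an ICU tallies, per day, both the number of patients with any
# central line (CDC rule) and the total number of lines actually in place
# (some patients have 2-3 lines at once).
patient_line_days = 1500   # CDC rule: at most one line counted per patient-day
actual_line_days = 2100    # counting every line in place

rate_cdc = infections / patient_line_days * 1000
rate_all_lines = infections / actual_line_days * 1000

print(f"CDC one-line-per-day rule: {rate_cdc:.1f} per 1,000 line-days")
print(f"Counting every line:       {rate_all_lines:.1f} per 1,000 line-days")
# The smaller CDC denominator yields the higher reported rate.
```

With these made-up figures, the CDC counting rule pushes the rate from about 2.9 to 4.0 per 1,000 line-days, the same direction of distortion the Hopkins study quantified.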

In the February issue of Infection Control and Hospital Epidemiology, there is a study by the Hopkins group that examined the effect of the one-catheter-per-day rule and found that counting only one catheter overestimated their CLABSI rate by 36% in 3 surgical ICUs.

Two years ago, our group presented a very similar study at SHEA, done in our medical and surgical trauma ICUs. We found that the CDC rule overestimated our CLABSI rate by 20%.

In the era of mandatory public reporting of HAIs, it's imperative that everything be done to produce the most valid data for consumers. I'm baffled that CDC has been so slow to respond to these issues. The focus seems to be on validating surveillance using the methodology as is, rather than modifying the methodology to make it more valid.

Friday, December 31, 2010

California's Healthcare Associated Infections Report: For what it's worth...

California has just released its first statewide report on healthcare associated infections (you can view it here). The metrics reported are healthcare associated VRE bloodstream infections per 1,000 inpatient days, healthcare associated MRSA bloodstream infections per 1,000 inpatient days, and CLABSI in ICUs per 1,000 central line days. The report has major problems as evidenced by the disclaimer on every table of rates that says that the data should not be compared between hospitals, which is generally the whole purpose of public reporting. However, since the reporting period for this report ended, the state mandated that all hospitals join NHSN, which they anticipate will improve the quality of the data reported.

Sunday, December 12, 2010

Mandatory reporting of HAIs: A work in progress

The December issue of American Journal of Infection Control has a paper looking at validation of central line associated bloodstream infection (CLABSI) reporting in Connecticut, where all 30 acute care hospitals are required to report CLABSIs from one ICU via the National Healthcare Safety Network (NHSN). All positive blood cultures for the fourth quarter of 2008 for these ICUs were reviewed by a trained nurse microbiologist (the gold standard for the study) and compared to locally collected data. A total of 770 blood cultures were reviewed, and the validator detected 48 CLABSI cases.

The validity parameters were as follows:

  • Sensitivity  48%
  • Specificity  99%
  • Positive predictive value 85%
  • Negative predictive value  94%
So, as you might suspect even before seeing these data, the major problem with mandatory reporting programs is a failure to adequately detect cases. Keep that in mind when you read through Consumer Reports' honor roll of US hospitals that reported no CLABSIs! While I continue to believe that mandatory reporting is the right thing to do, much work is still needed to produce valid data.
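For readers who want to see where these four numbers come from, here is a minimal sketch computing them from a 2x2 table of hospital-reported vs. validator-confirmed cases. The cell counts are my own assumption, chosen only to roughly reproduce the study's sensitivity and PPV; they are not published data from the Connecticut validation.

```python
# Sketch: the four validity metrics from a 2x2 confusion matrix.
# Cell counts below are hypothetical, NOT the Connecticut study's actual cells.

def validity(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV, NPV for reported vs. gold-standard cases."""
    return {
        "sensitivity": tp / (tp + fn),   # true cases the hospital reported
        "specificity": tn / (tn + fp),   # true non-cases the hospital did not report
        "ppv": tp / (tp + fp),           # reported cases that were true cases
        "npv": tn / (tn + fn),           # non-reports that were true non-cases
    }

# Example: of 48 validator-confirmed CLABSIs, hospitals catch roughly half.
metrics = validity(tp=23, fp=4, fn=25, tn=718)
for name, value in metrics.items():
    print(f"{name}: {value:.0%}")
```

The key point the study makes is visible in the structure: low sensitivity with high specificity means the errors are almost entirely missed cases, not false alarms.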

Wednesday, November 10, 2010

Public Reporting of CLABSI: Is it a valid measure for hospital comparison?

Overview of the computerized CLABSI algorithm
In today's JAMA, Michael Lin and coauthors from four CDC Epicenter academic hospitals (2 in Chicago, 1 in Columbus, OH, and 1 in St. Louis) compared annual IP-determined CLABSI rates in 20 ICUs during 2004-2007 with a computer-generated reference standard. The median CLABSI rate was 3.3/1,000 central line days. Overall correlation with the computer algorithm was poor at 0.34, ranging from 0.83 at one center down to 0.10 at another. Interestingly, the center with the lowest IP-reported CLABSI rate had the highest computer-generated rate (2.4 vs. 12.6 per 1,000 CL-days).

I have posted the schematic of the computer algorithm and also the link to the code (below). My only methods question (at this moment) is why they limited the analysis to yearly comparisons and not quarterly or monthly comparisons. I would have liked to see that level of data analyzed, even though it would be noisier. It was interesting how the IP-reported rates were narrowly clustered around each other while the computer-generated rates were widely distributed. The findings should make us pause when we consider public reporting of these rates. If so much emphasis is being placed on CLABSI rates at the state and national level for comparison and reimbursement, there should be funded validation of the reported rates, and also consideration of other measures (outcome or process) that might be more reliable.

Lin et al JAMA November 2010
Link to computer algorithm code (looks like you might need to apply for a password)

Wednesday, September 29, 2010

Creativity: Not always a good thing

There's a paper in this month's American Journal of Infection Control which looks at surveillance for CLABSI in pediatric ICUs. Surveys of personnel at 16 PICUs, mostly in academic medical centers, revealed that surveillance practices varied widely. Other practices which could affect the CLABSI rate also had great variability (e.g., blood culture practices--when they are drawn, how they are drawn, and how much blood volume is obtained). Interestingly, 100% of IPs surveyed reported that they applied the CDC CLABSI definition; however, when they were tested with clinical vignettes, none of the IPs applied the definition as written.

There really are no surprises here. This study confirms what many of us already knew--surveillance for HAIs is currently a mess, and little has been done to improve validity.

This week, through an informal email discussion with several hospital epidemiologists, I learned that the process of HAI case detection varies widely, with some hospitals involving front line providers in the decision as to whether an HAI exists. As the stakes associated with infections become greater, there is obviously a natural inclination to look hard at every potential case. But here's the real problem: whether the patient truly has an HAI or whether the patient meets the CDC definition of HAI are two different questions. At some hospitals, a strict black and white reading of the definition is applied. At others, clinical judgment is also considered, and in some cases, allowed to trump the definition.

Given the increasing practice of public reporting of HAI rates, improving the validity of data must become a priority. As a first step in this process, better definitions, with more specificity, would be of great help.

Friday, September 24, 2010

No HAC-king next week....

CMS announced today that its plan to publicly release hospital-specific data on hospital acquired conditions (HACs) next week has been put on hold. Apparently the data were flawed. We recently reviewed our data report that was to be released, and compared the hospital acquired CLABSI and catheter associated UTI data to that collected via concurrent surveillance by our IPs. To describe the CMS data as wildly inaccurate would be an understatement.

Tuesday, August 17, 2010

The Show Me State?


Here is a new twist on public reporting: how long should a state be required to keep hospital-specific data easily accessible to the public? Missouri’s Department of Public Health is taking heat for removing older data from their website (essentially, they just “write over” the old data with new data, so the older data gets purged).

The data still exists at the department...as the state’s data manager says, “it just isn’t handy”. To get it you have to formally request it, a programmer must be available, and you have to pay the cost of retrieval.

An interesting side note—the 2004 Missouri statute mandating public reporting came with no appropriation, even though the state’s health department spends over $240K each year to implement it. It’s true that you get what you pay for—if public reporting is important, it needs to be appropriately funded. And doing it correctly (including independent validation) is expensive!
Addendum: Never mind, the older data will be restored. By the time Iowa starts public reporting of HAI data, we'll have all these lessons to draw on!

Thursday, July 1, 2010

Another state HAI report

For those of you interested in public reporting of healthcare associated infections, Oregon has just published its first report on HAIs. You can view it here.

Friday, June 25, 2010

More than you'll ever want to know.....

For those of you interested in public reporting of healthcare associated infections, Pennsylvania has just released its 2009 report on HAIs. You can view it here. Since every HAI in every hospital is reported by mandate, this report represents the closest thing to a registry of HAIs that has ever been produced, and it covers the first full year of reporting. There are 128 pages of slicing and dicing the data.

In the 250 hospitals there were 26,000 HAIs across 11 million patient-days, for an overall crude rate of 2.4 HAIs per 1,000 patient-days. Surgical site infections accounted for 24% of the HAIs, UTIs for 23%, BSIs for 13%, and pneumonia for 11%. Note that PA requires non-device-related infections to be reported as well. Rates of infection were highest in academic medical centers and long-term acute care hospitals, as would be expected given the patient populations served. Of note, MRSA accounted for 8% of all HAIs. Of the CLABSIs reported, 16% were due to Staph. aureus, 16% to coag-negative staph, and 14% to enterococci.

One caveat: there has been no true validation of the surveillance at the hospitals, though on-site audits are reported to begin this summer.
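The crude rate quoted above is easy to verify from the report's headline figures:

```python
# Back-of-the-envelope check of the Pennsylvania report's crude HAI rate,
# using the totals quoted in the post.
hais = 26_000
patient_days = 11_000_000

crude_rate = hais / patient_days * 1000  # per 1,000 patient-days
print(f"{crude_rate:.1f} HAIs per 1,000 patient-days")  # ~2.4, matching the report
```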

Thursday, June 3, 2010

Extreme public reporting

The UK has taken public reporting of HAIs to a new (and absurd) level. The NHS website now publishes weekly counts of hospital-acquired MRSA bloodstream infection and C. difficile cases for every hospital in the country. You can see it here in an Excel spreadsheet. Now I happen to think that transparency and accountability are vital concerns, but I also think that publicly reported data should have utility. As a hospital epidemiologist with two decades of experience, I don't know what to make of these data, so how could the average healthcare consumer? Because of the stochastic nature of HAIs, the frequency counts for any given week are useless, not to mention that no risk adjustment is provided. Yet there seems to be an implicit association of these data with quality of care, and this is another example of perception trumping reality in infection control. What's next--Twitter alerts for every new C. diff case in the country?

Thursday, May 27, 2010

17 states report CLABSI rates. Why only 17?

The CDC just released the First State-Specific Healthcare-Associated Infections Summary Data Report, which focuses on CLABSI. HHS press release is available here. Needless to say, we are all disappointed with the rates here in Maryland given how hard the State and hospitals have worked at preventing these infections. No one seems more disappointed than Peter Pronovost. No excuses.

Monday, May 17, 2010

Did someone get fired for using NHSN or NNIS definitions to count CLABSIs?

Wow. I just read the second article that Mike posted Saturday from the Chicago Tribune, and I kind of wish I hadn't. I'll paste the section that scared me a bit below, but it appears that someone might have been fired for "over-reporting HAIs," which is a bit scary to me. There is tremendous pressure not to call a BSI a CLABSI, which I think is now part of the 'getting to zero' culture. I wonder what percent of the way toward zero will be paved with these sorts of statements:

"Thorek's infection rate was the highest of all medical centers in Illinois. Frank Solare, Thorek's president and chief executive officer, said hospital officials have collected medical charts for the 22 infected patients and have "started an independent review … to try and understand this."

Asked why the Lakeview hospital didn't take action last year, Thorek's compliance officer Morgan Murphy said a former employee didn't alert senior management to the problem. "It wasn't making its way up the chain, unfortunately," he said.

Senior management also suspects that the employee may have counted central line infections incorrectly, inflating the hospital's numbers. "There may have been over-reporting," Murphy said."

Saturday, May 15, 2010

Public reporting of HAIs in Illinois

This morning's Chicago Tribune has an article on public reporting of healthcare associated infections. There's really nothing new that comes to light in the article. A second piece in the paper focuses on hospitals with high infection rates and hospitals elaborate on specific reasons for their performance, including poor data collection and surveillance methods. This newspaper has had a long-term interest in HAIs and was one of the first media outlets to investigate the issue, publishing an expose in 2002. Here's the link to the Illinois hospital report card that shows central line associated bloodstream infection data by hospital.

Thursday, April 29, 2010

Rhinoceroses and Total Hip Arthroplasty

Distinctions are very important. I was just visiting Ohio last week and had the chance to visit the Columbus Zoo. It's a pretty cool place if you like zoos. I enjoyed reading about various animals and learned that the Black Rhino is endangered, while the White Rhino is not. Thus, it would make sense to spend your conservation money, if you have some, on Black Rhinos first, since time is running out. In infection control, we ought to do the same thing, but in reverse: spend our limited resources on preventing the more common infections first. Also, with the rise of public reporting and other methods of interhospital comparison, efforts must be made to place hospitals on a level playing field. There is a nice study that highlights these two issues in the May ICHE by Surbhi Leekha and colleagues at the Mayo Clinic in Rochester, MN.

They examined a 5-year cohort (2002-2006) of all total hip arthroplasties (primary and revision) and looked to see who developed SSI, using CDC definitions. After controlling for age, gender, and NNIS index, patients who had a revision total hip arthroplasty had twice the odds of SSI compared to primary surgery (OR 2.2, 95% CI 1.3-3.7). The difference was even more stark when outcomes were restricted to deep or organ-space SSI, with revision surgery associated with four times the odds (OR 3.9, 95% CI 2.0-7.9). One note: they didn't appear to control for duration of surgery as a confounder, even though it was associated with both revisions and SSI. I think this is correct, since they were not conducting a risk-factor study but were interested in outcomes.
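For readers less familiar with odds ratios, here is a minimal sketch of how an OR and its 95% confidence interval (Woolf's log method) are computed from a 2x2 table. The cell counts are hypothetical, purely for illustration; they are not from the Leekha et al. study.

```python
import math

# Sketch: odds ratio with a Woolf (log-scale) 95% CI from a 2x2 table.
# The counts below are hypothetical, NOT data from the study discussed above.
def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """a/b: SSI yes/no after revision; c/d: SSI yes/no after primary surgery."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    low = math.exp(math.log(or_) - z * se_log_or)
    high = math.exp(math.log(or_) + z * se_log_or)
    return or_, low, high

or_, low, high = odds_ratio_ci(a=20, b=480, c=30, d=1470)
print(f"OR = {or_:.2f} (95% CI {low:.2f}-{high:.2f})")
```

A CI whose lower bound stays above 1.0, as in the study's reported intervals, is what lets the authors call the association statistically significant.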

The usual caveats apply to these types of studies including a single center study and a relatively unique single center at that. However, this is an important study and if these findings hold up at other institutions, which they most certainly will, this suggests that the case-mix of revision and primary hip arthroplasty must be taken into account when SSI rates are reported and hospitals compared. Perhaps an easier solution, as the authors suggest, is to treat them as two different animals, if you will, and report them separately. Also, if one wanted to target specific infections or high-risk procedures, these results suggest targeting revision surgeries over primary ones.

Note: Surbhi is joining the group at my old Maryland stomping grounds and I know everyone is excited for her to arrive.