
Saturday, April 9, 2011

Healthcare quality: Running from science

We've blogged before about our hang-ups with QI. So I was pleased to see a thoughtful perspective in this month's Health Affairs by Peter Pronovost and Richard Lilford entitled "A Road Map for Improving the Performance of Performance Measures." As you might guess, this piece focuses on the validity of quality metrics, and Pronovost deserves credit for trying to push the hospital quality community to clean up its act.

The conclusion of the essay is also the money quote: "For the past decade, health care quality has largely sought quick fixes and run from science; the results are evident. Let us hope that efforts in the next decade embrace science instead."

Thursday, December 2, 2010

A surgical site infection enhancement bundle?

A new paper in the Archives of Surgery evaluated the implementation of a surgical site infection (SSI) prevention bundle for colon surgery (see free text of the paper here). The bundle contained the following components:
  • Omission of mechanical bowel preparation
  • Preoperative and intraoperative patient warming
  • Increased concentration of inspired oxygen during and immediately after the surgical procedure
  • Limiting intraoperative intravenous fluid volumes
  • Use of wound barriers to protect the surgical wound from contamination during the procedure 
Each component of the bundle was supported by one or more randomized trials demonstrating a reduction in SSIs. About 200 patients were randomized to receive either the bundle or standard care. SSIs were determined by infection preventionists (IPs) using CDC case definitions. The study was terminated after a planned interim analysis revealed a <1% chance of showing a positive effect of the bundle were the study to continue to its accrual goal.

The overall rate of infection was 35%. In the control group, 24% of patients developed an SSI vs. 45% in the intervention group (p = .003). In multivariate analysis, the bundle was an independent predictor of SSI (RR 2.49, 95% CI 1.36-4.56).
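For readers who want to see where numbers like these come from, here is a minimal sketch of an unadjusted relative-risk calculation with a log-normal confidence interval. The per-arm sample sizes are my assumption (roughly 100 per arm, since about 200 patients were randomized); the paper's RR of 2.49 is the adjusted estimate from the multivariate analysis, so the crude figure below will differ.

```python
from math import exp, log, sqrt

def relative_risk(events_a, n_a, events_b, n_b, z=1.96):
    """Unadjusted relative risk of group A vs. group B with a 95% CI
    (log-normal approximation)."""
    p_a, p_b = events_a / n_a, events_b / n_b
    rr = p_a / p_b
    se = sqrt((1 - p_a) / events_a + (1 - p_b) / events_b)
    lo, hi = exp(log(rr) - z * se), exp(log(rr) + z * se)
    return rr, lo, hi

# Illustrative counts only: 45% SSI in the bundle arm vs. 24% in the
# control arm, assuming 100 patients per arm.
rr, lo, hi = relative_risk(45, 100, 24, 100)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The crude RR here comes out lower than the reported adjusted RR, which is exactly why the paper's multivariate analysis matters.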

I was struck by the very high rate of infection in this study, so I looked at CDC's most recent surveillance report, which shows that the mean pooled infection rate for colon surgery ranges from 3.99% to 9.47%, depending on risk category. The authors postulate that their case ascertainment may have been more complete because the study was performed at a VA hospital, where outpatient follow-up of patients is much easier to track. Even in light of that, the rate still seems quite high. On the other hand, unless there is something unique about these patients or the care provided at this hospital, you might expect an effective prevention bundle to have even more impact in a setting with exceedingly high infection rates. This raises further concern that the bundle was not just ineffective but actually increased the risk of post-operative infection.

This paper is another example of how immature implementation science remains. I think the authors are correct to conclude that bundles of evidence-based interventions need to be formally tested before widespread implementation.

Sunday, November 28, 2010

Don't pull that trigger!

The headline on the front page of the New York Times this week read "Study Finds No Progress in Safety at Hospitals." This article (graphic shown) reported on a paper in this week's New England Journal of Medicine (free text here). In this study, 240 charts from each of 10 hospitals in North Carolina were reviewed using the Institute for Healthcare Improvement's (IHI) Global Trigger Tool. The admissions reviewed spanned the years 2002 to 2007.

Now I didn't know much about the Trigger Tool, and the methods section of the paper doesn't give much description, so I looked up the guide, which you can review here. The triggers are 53 different indicators that, when observed in the medical record, should prompt further review to assess for an adverse event. For example, administration of Benadryl is a trigger to look for a drug allergy, which according to IHI is an adverse event. Adverse events are further classified by severity and by whether they were preventable. Per the IHI guide, no more than 20 minutes can be spent on the review of any chart (that rule was also observed in the published study).
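To make the mechanics concrete, here is a hypothetical sketch of a trigger-based chart review. The trigger names and the one-minute-per-event timing are mine, purely illustrative; the point is that the time budget means a long admission is never exhaustively examined.

```python
# Hypothetical triggers mapping a chart finding to what the reviewer
# should look for (illustrative; not the actual IHI trigger list).
TRIGGERS = {
    "diphenhydramine given": "possible drug allergy / adverse drug event",
    "positive blood culture": "possible healthcare-associated infection",
    "naloxone given": "possible opioid over-sedation",
}

def flag_chart(chart_events, time_budget_min=20):
    """Return triggers found in a chart. Review stops when the time
    budget is exhausted, so later events go unreviewed."""
    flags = []
    minutes_spent = 0
    for event in chart_events:
        minutes_spent += 1  # stand-in for real review time per event
        if minutes_spent > time_budget_min:
            break  # the 20-minute rule: the rest of the record is skipped
        if event in TRIGGERS:
            flags.append((event, TRIGGERS[event]))
    return flags

chart = ["admission note", "positive blood culture", "naloxone given"]
print(flag_chart(chart))
```

Run this against a chart with 30 triggering events and only the first 20 are ever seen, which is the crux of the surveillance problem discussed below.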

Per the IHI methodology, healthcare-associated infections are both a trigger and an adverse event. Here is what the guide states (p.17):
Any infection occurring after admission to the hospital is likely an adverse event, especially those related to procedures or devices. Infections that cause admission to the hospital should be reviewed to determine whether they are related to medical care (e.g., prior procedure, urinary catheter at home or in long-term care) versus naturally occurring disease (e.g., community-acquired pneumonia).
Note that HAIs are never defined. Unlike the CDC's National Healthcare Safety Network (NHSN), which defines infections using multiple data points, IHI methodology doesn't guide the reviewer as to case ascertainment. I did a PubMed search this morning and found no studies where the Trigger Tool was compared to NHSN methodology to assess its validity.

So here are some concerns I have about this paper and the Trigger Tool:

  • By design the Trigger Tool is not true surveillance: there is no attempt to detect all instances of harm. Imagine reviewing the medical record of a patient who stayed in the hospital for 8 months with a 20-minute time limit. While I can understand how the Trigger Tool might uncover problems in any given hospital using a case-based approach to quality improvement, looking for trends over time with these data makes no sense, since there is no attempt to capture all the cases of harm. Of what value is trending incomplete data? I think this harkens back to the philosophical differences between quality improvement and healthcare epidemiology that I've talked about before.
  • I have serious concerns regarding the validity of this approach for HAIs. We know how problematic surveillance can be even when using well-delineated case definitions and how poorly administrative claims data perform for HAIs. The IHI approach seems much more analogous to the administrative data approach.
  • In the New England Journal paper the secular trends were shown only for all harms and preventable harms, but not for any of the component harms, such as HAIs. It would be interesting to see the trended data for HAIs. Recall that AHRQ, using administrative claims data, recently published a paper claiming that HAIs are increasing in the US, while CDC, using much more rigorous surveillance methodology, published the opposite conclusion.
  • Generalizability seems to be problematic. In this paper 2,400 hospital records were reviewed from 10 hospitals in a single state. Over the same time period, there were approximately 220 million hospital admissions in the US, which means roughly 1 in every 92,000 admissions was reviewed (and only partially, given the 20-minute rule). While the published paper never attempts to generalize the study findings to the universe of US hospitals, the media certainly did, and the lead author of the study states in the New York Times, “It is unlikely that other regions of the country have fared better.”
  • Some of the instances of "harm" are not preventable and I'm not sure how they are related to quality of care. For example, consider the case of a patient with no known drug allergy who is treated with an antibiotic and develops a rash. This would be classified as a harm, and it is indeed a harm to the patient, but it's not predictable and not preventable. How does it help us to trend such data? And how would we attempt to reduce this harm? It is preventable harm that needs our attention. 
  • With regard to HAIs, even if these data were valid, I don't believe they reflect the current state of affairs in US hospitals, given how much infection rates have improved since 2007.
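The generalizability arithmetic in the bullets above is easy to check:

```python
# Back-of-the-envelope sampling fraction for the NEJM study.
charts_reviewed = 240 * 10   # 240 charts at each of 10 hospitals
us_admissions = 220_000_000  # approximate US admissions over the study period
fraction = charts_reviewed / us_admissions
print(f"1 chart reviewed per {1 / fraction:,.0f} admissions")
```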
So here we have another paper that beats us up some more. If truth be told, I bet that the quality of care in US hospitals is significantly better today than it was in 2002. It sure would be nice to see that in print, but it probably wouldn't hit the front page. 

P.S. It's amazing that IHI claimed to have saved 123,000 lives through its safety program for US hospitals, but now claims that during the same time frame there was little evidence of improvement in patient safety. Something doesn't compute.

Sunday, September 26, 2010

Why are quality and infection prevention programs like oil & water?

This week, I analyzed our most recent performance in the HOP project. HOP is shorthand for the Hospital Outpatient Quality Data Reporting Program (HOP QDRP), a CMS program whose results are publicly reported at Hospital Compare. HOP is the outpatient analogue of SCIP (the Surgical Care Improvement Project), though HOP has only two metrics: pre-procedure antibiotic selection and appropriate timing of the antibiotic dose. We slice the data by procedure, service, and surgeon to look for areas where performance is suboptimal.

I received an email from one of our senior surgeons who complained that he had several cases deemed noncompliant because the pre-procedure antibiotic was not given within the 60-minute window prior to incision. He pointed out that the reason for his "noncompliance" was that these were dialysis patients who had received a dose of vancomycin at dialysis the day before. He was practicing good medicine: the patients would still have a therapeutic level of vancomycin at the time of their procedures (all vascular access procedures).

So we posted a query asking why this situation would be considered noncompliant (i.e., could the rules be changed to allow for it?). We received prompt responses from the physician in charge of the national project, but he avoided answering the question. After multiple emails back and forth, he finally stated that the surgeon did the right thing, but the cases would still be deemed noncompliant. He went on to say that hospitals should not use the data this way (i.e., drill down to the provider level) and that the project leadership could not possibly think of all the exceptions for when an antibiotic should not be given within 60 minutes of incision.
Now I have some problems with his thinking--if you are going to publicly report our performance, then you need to be flexible enough to allow for exceptions that actually reflect good practice, and in an environment of 24/7 communication it shouldn't be hard to have a panel of experts rule on requests for exceptions. With SCIP we've actually been dinged when a pre-op antibiotic was not given before incision for a patient who entered the OR in cardiac arrest! I've blogged before about how these types of problems turn physicians off, not just to these specific projects but to quality improvement projects in general.
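The vancomycin story above boils down to a rule that sees only timestamps. Here is an illustrative sketch (my own simplification, not the actual CMS abstraction algorithm) of why a rigid window check cannot accommodate the dialysis exception:

```python
from datetime import datetime, timedelta

def hop_timing_compliant(antibiotic_time, incision_time):
    """Rigid HOP-style check (illustrative only): the dose must fall
    within the 60 minutes before incision."""
    return timedelta(0) <= incision_time - antibiotic_time <= timedelta(minutes=60)

incision = datetime(2010, 9, 20, 9, 0)

# Vancomycin given at dialysis the day before: still therapeutic at
# incision, but the rule sees only the timestamp gap (~19 hours).
vanc_at_dialysis = datetime(2010, 9, 19, 14, 0)
print(hop_timing_compliant(vanc_at_dialysis, incision))  # False
```

Good medicine, noncompliant paperwork: the timestamp check has no input for "therapeutic drug level already present."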


All of this made me think some more about the differences in quality improvement and healthcare epidemiology. The table below is modified from a plenary talk I gave at SHEA a few years ago. These differences really become sources of friction when QI and hospital epi folks are pulled into common projects like SCIP and HOP.

| Characteristic | Healthcare Epidemiology | Quality Improvement |
| --- | --- | --- |
| Philosophic orientation | Modern | Post-modern |
| Primary influences | Science & medicine | Business |
| Analytic orientation | Population based | Often case based |
| Focus | Exploration & analysis | Modification |
| Primary audience | Internal stakeholders | External stakeholders |
| Primary task | Define problems, elucidate risk factors | Design & implement interventions |
| Content expertise | Almost always | Usually not |
| Strength | Rigorous methodology & validity | Process design |
| Approach | Structured, relatively uniform | Innovative |
| Delivery style | Instructive | Collaborative |
| Solutions | Targeted | Empiric |
| Tactics | Data oriented, relatively dull | Flashy campaigns, catchy slogans |
| Perspective | Long term | Short term, evolving |
| Tempo | Relatively slow | Relatively fast |


I don't have any solutions for how to make the two groups work together more effectively. But perhaps recognizing that our approaches to problems differ is a start.

Friday, April 16, 2010

This healthcare quality report is excellent....for us to poop on!

So SHEA, APIC and IDSA have released a joint statement that echoes (and expands upon) Mike's concerns about the recent, highly publicized AHRQ report. The statement and accompanying set of talking points speak for themselves. I will quote one small section, and let you read the rest.
"We are concerned that any report coming from a government agency based solely on the use of administrative data, commonly referred to as billing/coding data, paints an inaccurate picture of healthcare-associated infections for the public. In contrast, another Department of Health and Human Services agency -- the Centers for Disease Control and Prevention (CDC) – is preparing to release epidemiologically sound, surveillance data based on the National Healthcare Safety Network (NHSN). Multiple studies have concluded that administrative coding data appears to be a poor tool for accurately identifying infections. This may create greater confusion among consumers."

Saturday, April 3, 2010

Quality improvement: 2 steps forward, 1 step back

I don't know how many hospital infection prevention programs are tasked with tracking the performance metrics for the Surgical Care Improvement Project (SCIP), a quality improvement program designed to reduce post-operative complications. At many hospitals it belongs to the QI folks, but since it started with the tracking of infection prevention metrics (perioperative antimicrobial prophylaxis), at my hospital Infection Control houses the program. The byzantine rules for data abstraction are mind-numbing, and I'm really thankful to have a wonderful nurse abstractor. But at times these projects lose the forest for the trees.

Here's a case I dealt with last week. A patient was admitted with a lawn mower blade injury resulting in a dirty, open wound that required vascular reconstruction. Because the surgeon continued the patient's antibiotic beyond 24 hours post-op, he was judged noncompliant: the rule says antibiotics have to stop within 24 hours unless an infection is present pre-operatively. As an infectious diseases specialist, I agree with the surgeon. If I am run over by a lawn mower, give me the antibiotics!

As Peter Pronovost notes in his book, measuring quality is often an endeavor of dubious quality. In the big picture (at the hospital level), it probably doesn't matter that we have some misclassified cases like this one. However, quality improvement shoots itself in the foot with these seemingly stupid issues that turn physicians off. And this is exactly why it's difficult to get buy-in from doctors on QI projects.
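The lawn mower case fails for the same structural reason as the vancomycin case: the rule has no input for clinical judgment. A sketch of the discontinuation rule as I understand it (illustrative, not the actual SCIP abstraction logic):

```python
def scip_duration_compliant(hours_post_op, infection_present_preop):
    """Illustrative SCIP-style rule: prophylactic antibiotics must stop
    within 24 hours after surgery unless an infection was present
    pre-operatively."""
    return infection_present_preop or hours_post_op <= 24

# Dirty, open lawn mower wound: gross contamination is not "infection,"
# so continuing antibiotics past 24 hours is flagged as noncompliant
# even though it is good medicine.
print(scip_duration_compliant(hours_post_op=48, infection_present_preop=False))  # False
```

Note that "contamination" has no slot in the rule at all; only a documented pre-operative infection turns off the 24-hour clock.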