Showing posts with label administrative claims data.
Saturday, February 26, 2011
Maryland report on hospital associated complications: Don't waste your time reading it
This week, NPR and the Washington Post ran stories on a new report on healthcare-associated complications in Maryland. The report can be viewed here. Based on the results of the report, nine hospitals are required to pay penalties due to higher-than-average rates of complications. Eleven of the report's 49 indicators are infectious complications, such as infection related to central venous catheters.

However, what is most important to know is that the source of the data for the report is administrative claims (ICD-9 codes, which were developed for billing purposes). We've blogged before about how notoriously inaccurate these codes are for determining whether patients experienced healthcare-associated infections. This is because case ascertainment is performed by abstractors with little medical training, using case definitions that were not designed for surveillance purposes. Last year, Pennsylvania, the state with the most comprehensive mandatory reporting program for healthcare-associated infections in the country, abandoned the use of administrative claims data and required that all hospitals use CDC surveillance methodology. Particularly when hospitals are going to be punished with fines and bad publicity, valid methods must be used.

I noted in the report that there were 431 cases of "moderate infectious," at a cost of over $6 million. What in the world is "moderate infectious"? I don't think you could find an infectious disease doctor anywhere who could tell you what that is, because there is no such thing.

Those of us who work in hospital epidemiology understand the need for public reporting because our society values transparency and accountability. We get it. But public reporting is a two-way street that requires a commitment on the part of public agencies to ensure that the data generated are obtained via state-of-the-art methods and risk adjusted in order to produce the most valid reports for the public. In other words, it's about playing fairly.
Sunday, November 28, 2010
Don't pull that trigger!
The headline on the front page of the New York Times this week read "Study Finds No Progress in Safety at Hospitals." The article reported on a paper in this week's New England Journal of Medicine (free text here). In this study, 240 charts from each of 10 hospitals in North Carolina were reviewed using the Institute for Healthcare Improvement's (IHI) Global Trigger Tool. The admissions reviewed spanned the years 2002 to 2007.
Now I didn't know much about the Trigger Tool, and the methods section of the paper doesn't give much description, so I looked up the guide, which you can review here. The triggers are 53 different indicators that, when observed in the medical record, should prompt further review to assess for an adverse event. For example, administration of Benadryl is a trigger to look for a drug allergy, which according to IHI is an adverse event. Adverse events are further classified by severity and by whether they were preventable. Per the IHI guide, no more than 20 minutes can be spent on the review of any chart (that rule was also observed for the published study).
Per the IHI methodology, healthcare-associated infections are both a trigger and an adverse event. Here is what the guide states (p.17):
Any infection occurring after admission to the hospital is likely an adverse event, especially those related to procedures or devices. Infections that cause admission to the hospital should be reviewed to determine whether they are related to medical care (e.g., prior procedure, urinary catheter at home or in long-term care) versus naturally occurring disease (e.g., community-acquired pneumonia).

Note that HAIs are never defined. Unlike the CDC's National Healthcare Safety Network (NHSN), which defines infections using multiple data points, IHI methodology doesn't guide the reviewer as to case ascertainment. I did a PubMed search this morning and found no studies where the Trigger Tool was compared to NHSN methodology to assess its validity.
So here are some concerns I have about this paper and the Trigger Tool:
- By design the Trigger Tool is not true surveillance. There is no attempt to detect all instances of harm. Imagine looking at the medical record of a patient who stayed in the hospital for 8 months with a 20-minute time limit. While I can understand how the Trigger Tool might uncover problems in any given hospital using a case-based approach for quality improvement, to look for trends over time using these data doesn't make any sense since there is no attempt to capture all the cases of harm. Of what value is trending incomplete data? I think this harkens back to the philosophical differences between quality improvement and healthcare epidemiology that I've talked about before.
- I have serious concerns regarding the validity of this approach for HAIs. We know how problematic surveillance can be even when using well-delineated case definitions and how poorly administrative claims data perform for HAIs. The IHI approach seems much more analogous to the administrative data approach.
- In the New England Journal paper the secular trends were shown only for all harms and preventable harms, but not for any of the component harms, such as HAIs. It would be interesting to see the trended data for HAIs. Recall that AHRQ, using administrative claims data, recently published a paper claiming that HAIs are increasing in the US, while CDC, using much more rigorous surveillance methodology, published the opposite conclusion.
- Generalizability seems to be problematic. In this paper 2,400 hospital records were reviewed from 10 hospitals in a single state. Over the same time period, there were approximately 220 million hospital admissions in the US. That means that about 1 in 100,000 hospital admissions were reviewed (and only partially given the 20-minute rule). While the published paper never attempts to generalize the study findings to the universe of US hospitals, the media certainly did, and the lead author of the study states in the New York Times, “It is unlikely that other regions of the country have fared better.”
- Some of the instances of "harm" are not preventable and I'm not sure how they are related to quality of care. For example, consider the case of a patient with no known drug allergy who is treated with an antibiotic and develops a rash. This would be classified as a harm, and it is indeed a harm to the patient, but it's not predictable and not preventable. How does it help us to trend such data? And how would we attempt to reduce this harm? It is preventable harm that needs our attention.
- With regard to HAIs, even if these data were valid, I don't believe they reflect the current state of affairs in US hospitals, given that much improvement in infection rates has occurred since 2007.
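To make the generalizability concern concrete, here is a quick back-of-envelope check. This is just a sketch: the 2,400-record and 220 million-admission figures come from the bullet above, and the variable names are mine.

```python
# Rough sampling-fraction check for the Trigger Tool study.
# Figures are the ones quoted above; names are illustrative.
records_reviewed = 2_400        # charts reviewed across 10 NC hospitals
us_admissions = 220_000_000     # approximate US admissions over the same period

one_in_n = us_admissions / records_reviewed
print(f"About 1 in {round(one_in_n):,} US admissions was reviewed")
# → About 1 in 91,667 US admissions was reviewed
```

Which squares with the "about 1 in 100,000" figure above, and that's before accounting for the 20-minute cap on each review.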
P.S. It's amazing that IHI claimed to have saved 123,000 lives in the US due to its safety program for US hospitals, but now claims that during the same time frame there was little evidence of improvement in patient safety. Something doesn't compute......
Friday, September 24, 2010
No HAC-king next week....
CMS announced today that its plan to publicly release hospital-specific data on hospital-acquired conditions (HACs) next week has been put on hold. Apparently the data were flawed. We recently reviewed the data report that was to be released and compared the hospital-acquired CLABSI and catheter-associated UTI data to those collected via concurrent surveillance by our infection preventionists. To describe the CMS data as wildly inaccurate would be an understatement.
Saturday, August 21, 2010
The bean counters are missing some beans
There is a recently released report from the Society of Actuaries (SOA) entitled The Economic Measurement of Medical Errors. It's nearly 300 pages long and includes data on catheter-associated UTI, central line-associated bloodstream infection, and surgical site infections. It is based entirely on administrative claims data, which are notoriously inaccurate for healthcare-associated infections. See Kurt Stevenson's paper on this topic here.
As I looked through the report, I noted that the numbers looked quite odd based on my familiarity with the literature. So I compared the SOA report data to CDC's estimates on the burden of HAIs in the US (Klevens et al) and Eli's review of the literature on attributable cost in the table below.
|        | Estimated annual cases (SOA report) | Estimated annual cases (CDC) | Attributable cost/case (SOA report) | Attributable cost/case (Perencevich) |
|--------|------------------------------------:|-----------------------------:|------------------------------------:|-------------------------------------:|
| CA-UTI | 9,080                               | 561,667                      | $32,820                              | $1,257                                |
| CLABSI | 3,679                               | 248,678                      | $110,462                             | $18,462                               |
Now there are a number of caveats to point out:
- The SOA report gives the estimated number of cases due to error; to convert from the total number of cases to cases due to error, they multiplied the number of cases by 0.95. Therefore, I divided the "error" cases by 0.95 to convert back to the total number of cases. (Does anyone believe that 95% of CA-UTI cases are due to error, that is, preventable?)
- Eli used 2005 dollars for cost data, and SOA used 2008
- SOA used 2008 claims data, and CDC (Klevens et al) used 1990-2002 NNIS data and National Hospital Discharge Survey 2002 data
- CDC data appear to include non-device-associated infections, though we know that the vast majority of UTIs and BSIs are device-related
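The back-conversion in the first caveat is simple arithmetic; here is a minimal sketch. The 0.95 multiplier is the SOA report's own assumption, and the function names are mine.

```python
ERROR_FRACTION = 0.95  # SOA's assumed share of cases "due to error"

def total_to_error(total_cases: float) -> float:
    """The SOA adjustment: error cases = total cases * 0.95."""
    return total_cases * ERROR_FRACTION

def error_to_total(error_cases: float) -> float:
    """Undo the SOA adjustment: total cases = error cases / 0.95."""
    return error_cases / ERROR_FRACTION

# Round-trip check with the table's CA-UTI figure (9,080 total cases)
print(round(error_to_total(total_to_error(9_080))))  # 9080
```

Note that the conversion only rescales the counts by about 5%; it does nothing to close the roughly 60-fold gap between the SOA and CDC case estimates in the table above.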
However, despite the differences I note, the SOA data seem hugely flawed. They appear to vastly underestimate the frequency of HAIs, while substantially overestimating the attributable costs. I suspect the SOA report will be widely quoted, so take a look at it and be prepared!
Tuesday, April 13, 2010
2009 AHRQ National Healthcare Quality Report: Getting Worse(r)
AHRQ just published the "2009 National Healthcare Quality Report and National Healthcare Disparities Report" and things don't look too good. Full reports available here. Post-op sepsis or BSI: increased 8%. Post-op catheter-associated UTI: increased 3.6%. And for CLABSI (drum roll): no change. Ouch! Post-op pneumonia: down 12%. The New York Times seemed upbeat, calling the problem of these infections "largely solvable."
I'm not sure where these data leave us, and perhaps Mike, Dan and Connie will have some comments on this report too. However, I think what these data are telling us is that when you don't adequately fund proper studies (and multiple studies) of methods to prevent HAIs, the 6th leading cause of death, and when you are mostly left to resort to "absence of evidence-based medicine" tricks like kitchen-sink bundles... well, this is where you end up.
To suggest, as Kathleen Sebelius did, that the new health care law would “help turn these numbers around” because hospitals with high rates of infections will be penalized starting in 2015, is a bit hopeful. I think we haven't invested in HAI prevention studies the way we have in cardiology and other areas, and I don't think we're going to get these rates down with sticks alone. You can't see in the dark if you don't know how to make a flashlight or even light a fire.