There's a paper in this month's American Journal of Infection Control that looks at surveillance for CLABSI in pediatric ICUs. Surveys of personnel at 16 PICUs, mostly in academic medical centers, revealed that surveillance practices varied widely. Other practices that could affect the CLABSI rate also varied greatly (e.g., blood culture practices--when cultures are drawn, how they are drawn, and how much blood volume is obtained). Interestingly, 100% of the infection preventionists (IPs) surveyed reported that they applied the CDC CLABSI definition; however, when they were tested with clinical vignettes, none of them applied the definition as written.
There really are no surprises here. This study confirms what many of us already knew--surveillance for healthcare-associated infections (HAIs) is currently a mess, and little has been done to improve its validity.
This week, through an informal email discussion with several hospital epidemiologists, I learned that the process of HAI case detection varies widely, with some hospitals involving front-line providers in the decision about whether an HAI exists. As the stakes associated with infections rise, there is a natural inclination to look hard at every potential case. But here's the real problem: whether a patient truly has an HAI and whether that patient meets the CDC definition of an HAI are two different questions. At some hospitals, a strict black-and-white reading of the definition is applied. At others, clinical judgment is also considered and, in some cases, allowed to trump the definition.
Given the increasing practice of publicly reporting HAI rates, improving the validity of these data must become a priority. As a first step, better definitions with greater specificity would be of great help.