Wednesday, April 21, 2010

First, do no harm. Then, measure to be sure.

The President of the Medical Staff, Dr. Keith White, and I have decided to share a conversation we had about measuring harm and our use of the IHI Global Trigger Tool (GTT). We've been testing it at CCRMC for almost two years.

The context: I had just left a Medical Staff Leadership meeting where we had a spirited discussion about measuring harm using the GTT. Following the meeting I sent out some information on the GTT. Below is the exchange of email and ideas between Dr. White and me, further exploring the issue. We're sharing it here because we assume we aren't the only Hospital CEO and Medical Staff President having this sort of conversation.
If nothing else, you may find it amusing.
Dr. White:
Thanks for the articles, Anna. I read them all. Good discussion with the Medical Staff yesterday. I'm intrigued by the Trigger Tool and I'd like to use it, but I'm not certain that it directs us to the truth. Doctors have gone down all sorts of blind alleys in the past, and we're skeptical (that can be a good thing).

• Is there evidence that the Trigger Tools are reproducible from month to month at a particular hospital? One would assume that problems would continue to follow the same trends. (Dr. Bliss brought that up.)
• Is there any published evidence that using the Tools has helped bring about improvement? Can you demonstrate that they are practically useful?
• What grade of evidence would you give the Tool if you used Oxford Evidence-Based Medicine or USPSTF standards (http://www.dbpeds.org/articles/detail.cfm?TextID=672)? These are sacred to Doctors.
Thanks for the info. I'll see you at the Academy tomorrow.

Anna:
Hi Keith,
Great questions! I don't claim to be a safety expert like many of my esteemed colleagues, but I'll give this a go and answer generally, to the best of my ability.

Question 1. Is there evidence that the Trigger Tools are reproducible from month to month at a particular hospital? One would assume that problems would continue to follow the same trends.

Reproducibility speaks to inter-rater reliability, or repeatability. As you know, several factors can affect repeatability, including statistical power, effect size, and sample size. Our own data suggest stability (as evidenced by the graphic I provided yesterday). That display shows random variation: no significant trend toward a higher incidence of harm, but no improvement either. Additionally, some problems may not be detectable without trending their occurrence over time when volume is very low, as with perinatal events. That makes detection tools with a higher degree of sensitivity even more important for identifying areas in need of improvement. This is a form of "active surveillance," not to be confused with process measurement, which monitors and drives interventions aimed at improvement.
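To make the rate arithmetic concrete, here is a minimal sketch (in Python) of how one review period's chart-review tallies roll up into the three summary measures the IHI white paper describes: adverse events per 1,000 patient days, adverse events per 100 admissions, and percent of admissions with an adverse event. The function name and all numbers below are illustrative, not CCRMC data.

```python
def gtt_rates(adverse_events, patient_days, admissions, admissions_with_ae):
    """Roll one review period's tallies into the three GTT summary measures."""
    return {
        "AEs per 1,000 patient days": 1000 * adverse_events / patient_days,
        "AEs per 100 admissions": 100 * adverse_events / admissions,
        "% admissions with an AE": 100 * admissions_with_ae / admissions,
    }

# Hypothetical month: 20 sampled records totaling 96 patient days
print(gtt_rates(adverse_events=7, patient_days=96, admissions=20,
                admissions_with_ae=5))
```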

How do harm and mortality come into play at the oversight level for your department? What tools are used now and when are trends analyzed?

Question 2. Is there any published evidence that using the Tools has helped bring about improvement? Can you demonstrate that they are practically useful?

Surveillance in and of itself does not drive improvement; people do. People need to measure in order to understand whether an intervention is leading to an improvement. I have a copy of The Improvement Guide in my office and am happy to share it with you. It's pretty much Deming packaged up for healthcare, and a very easy read. It has lots of stories!


Question 3. What grade of evidence would you give the Tool if you used Oxford Evidence-Based Medicine or USPSTF standards (http://www.dbpeds.org/articles/detail.cfm?TextID=672)? These are sacred to Doctors.

Levels-of-evidence tables are applied in the broadest sense to research, and unfortunately improvement work has too often been left out of that loop. The SQUIRE Guidelines, which are becoming more widely accepted for publication, may help those engaged in improvement better ready their studies. I think research and academia will get there in a decade or two and recognize quality improvement as legitimate science! (Of course I'm kidding, but you get my point.)

As I said yesterday, there is a dearth of evidence on the use of Trigger Tools beyond those that are process specific. For example, there are several systematic reviews with homogeneity (meta-analyses) of randomized controlled trials (RCTs) that study Trigger Tool methodology for medication adverse event surveillance. That work may have been driven by the increased use of computerized physician order entry (CPOE) and the cry for decision support, and it may only become more compelling as we are all held to meaningful use requirements.

The use of Trigger Tools for adverse drug event (ADE) surveillance is standard practice in pharmacies nationwide, including ours; the sketch below shows the basic idea. There is also strong evidence behind individual triggers, so I would suggest a literature search per trigger could best answer your specific questions, since they apply to a wide swath of the literature.
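As a rough illustration of trigger-based screening, here is a minimal Python sketch: scan a record's medication administrations for "trigger" drugs whose use often signals a preceding adverse event. The trigger list and record below are illustrative only; a real implementation would use the pharmacy's validated trigger set, and every hit prompts a manual chart review rather than an automatic harm determination.

```python
# Illustrative trigger list; a real one comes from a validated
# medication-module trigger set.
ADE_TRIGGERS = {
    "naloxone": "possible opioid over-sedation",
    "flumazenil": "possible benzodiazepine over-sedation",
    "vitamin K": "possible over-anticoagulation",
}

def screen_record(medications_given):
    """Return the triggers fired by one record's medication list."""
    return [(drug, reason) for drug, reason in ADE_TRIGGERS.items()
            if drug in medications_given]

# Hypothetical record: the naloxone trigger fires, flagging the chart
# for manual review.
print(screen_record(["morphine", "naloxone", "ondansetron"]))
```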

Thanks for your interest.
Hope this helps…
Anna


Dr. White:
Hi Anna,
Thanks for the thoughtful reply. I understand the issues and want to read more. I'm keen to use the Global Trigger Tool (GTT) and am trying to anticipate medical staff objections.

• "Harm" is a loaded word. We've taken an oath not to cause any, and some of the kinds described aren't reducible. This is a whole different uncomfortable kind of review
• I understand that, because of sample sizes, we might not get coherent results. Wandering causes of harm might be very frustrating.
• Some will argue that requiring capital "E" Evidence for medical research but small "e" evidence for improvement is playing with the rules (did IHI really prevent 5 million cases of harm?).
As you point out, we don't really have broad oversight of harm and death. Let's give it a try. Seismology is a lovely science even though it doesn't prevent earthquakes!
See you tomorrow.
Keith

Anna:
Hi Keith,

I'm not suggesting we accept small "e" evidence. I am suggesting there is a dearth of evidence as yet. I believe more will come, and adoption of electronic medical records will certainly accelerate such monitoring and reporting.

The GTT is a measurement instrument with an accompanying sampling strategy, not an intervention. Measurement does not "prevent"; it attempts to quantify or describe, as does story.
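For reference, the sampling side is deliberately lightweight. Here is a minimal sketch, assuming the IHI-recommended cadence of ten randomly selected closed records per two-week review period; the record IDs below are hypothetical.

```python
import random

def sample_records(closed_record_ids, n=10, seed=None):
    """Draw the random sample of closed records the GTT reviews each period."""
    rng = random.Random(seed)
    return rng.sample(closed_record_ids, n)

# Hypothetical pool of closed-record IDs for one two-week period
print(sample_records(list(range(1000, 1200)), n=10, seed=42))
```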

Wandering causes should not be responded to unless they declare themselves as more than random variation, or as a one-off special cause (a single data point exceeding three standard deviations from the mean); see the sketch below. I agree that a lack of consistent sampling and trending has led to many change campaigns (tampering) that were likely unwarranted.
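Here is a minimal Python sketch of that three-standard-deviation rule: flag a new rate as a possible special cause only when it falls outside the baseline mean plus or minus three sigma. (A production control chart would use limits matched to the data type, such as a u-chart for rates; the baseline numbers below are illustrative.)

```python
from statistics import mean, stdev

def special_cause_points(baseline, new_points):
    """Return new points falling outside mean +/- 3 standard deviations."""
    m, s = mean(baseline), stdev(baseline)
    lower, upper = m - 3 * s, m + 3 * s
    return [x for x in new_points if x < lower or x > upper]

# Illustrative monthly harm rates (e.g., AEs per 1,000 patient days)
baseline = [72, 68, 75, 70, 74, 69, 71, 73, 70, 72]
print(special_cause_points(baseline, [71, 74, 95]))  # -> [95]
```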

I'm not certain of the 5 Million Lives Campaign count. I do know we lack national standards for measuring harm on any consistent basis, and the complexity of that sort of count exceeds my abilities. However, I tend to think more than 5 million incidents of harm were prevented, though my sense doesn't count as "E" evidence, does it? We do have reporting requirements and tools we consistently use to rate the severity of an event, such as the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) Index. For harm, things get hazier. It was much easier to look at mortality in the 100,000 Lives Campaign because everyone counts that in some manner. Harm is far more elusive; only the highest performers take on such a task.

I'm with you Keith. Let's proceed carefully and with a critical eye.
See you tomorrow. You can let me know how my presentation went.

Anna

1 comment:

  1. First of all, thank you and Keith for deciding to share this exchange. It is a beautiful example of professional interaction with an open and receptive mind (setting aside the strong negative feelings and fears the term "Harm Measurement" might bring). Reading it, I could imagine the many providers who wouldn't be able to set those feelings aside to implement a process that can bring to light "results that we don't want to know about." It is something like a patient refusing to go for a diagnostic exam, fearing that it could come back positive for cancer! We might choose to avoid investigating the depths of negative or adverse effects in our services for fear of not liking the findings, or just because there is not enough evidence-based data. It is true that "change doesn't come out of learning any results" but out of the will to improve despite the difficulty and pain associated. This is a hard decision to make. It might be a very unpopular process to implement but, if its implementation can help us provide better services and become better professionals...
