I don’t believe in meta-analysis

April 23, 2011

Skimming a recent paper in Clinical Chemistry (subscription required), I came across a quality improvement paper that uses a process I had never heard of: the A6 cycle (Ask, Acquire, Appraise, Apply, and Assess). This is the reason for the picture above*.

But the paper also features meta-analysis as a way to summarize an overall effect. The problem is that many studies have flaws, including bias; see the paper by John P. A. Ioannidis with the catchy title "Why Most Published Research Findings Are False." One needs to find studies without bias, which are rare, rather than summarize all comers.

*It was pointed out to me that I have only 5 A’s. Analysis is the 6th.


Adverse event causes and errors

April 21, 2011

In one of the blogs I scan, I came across an entry that merits comment. Dr. Wachter refers to a recent article, which has been in the news, suggesting that medical errors are perhaps 10 times more common than suspected.

In the entry, Dr. Wachter goes on to give a hypothetical example about a patient taking Coumadin who has a GI bleed while her INR is in the therapeutic range. The example is a little fuzzy, so I'll change it to say that her INR was actually 3.5 (aiming for 2-3) and that the high INR was discovered after the bleed event.

I have trouble with Dr. Wachter’s sentence – “If her INR had been above the therapeutic range (say, 3.5, when we’re aiming for 2-3) but there was no overt error identified (she was on a reasonable dose, being monitored at the correct intervals), we’d call that preventable harm (you wouldn’t have to think too hard to envision a system that might have caught and fixed this problem before the bleed), but not an error.”

This says that if one does not identify an error, there was no error!

All adverse events have causes: the something that produced the adverse result. An error is a mistake: performing a procedure incorrectly. Errors can be far removed from the adverse event, can exist without being detected, and in some cases the cause of the adverse event involves no error at all.

Here are two of many possibilities, using the clinical laboratory since an INR was involved.

Case 1 – Patient sample mix-up, identified and reported by the laboratory, after the adverse event.
Case 2 – False negative QC event identified and reported by the laboratory, after the adverse event.

In case 1, the laboratory made an error and caused the adverse event.

In case 2, a bad reagent caused the adverse event, but the laboratory did not make any errors; it was correctly following procedure. That is, a QC result has imprecision, and the QC process in place (e.g., QC rules) has certain (low) probabilities of false positives and false negatives. Thus, QC could indicate that results are acceptable when they are not.
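
To make the false negative possibility concrete, here is a minimal sketch (my own illustration, not from the entry) using a simple 1-2s QC rule, in which a run is flagged if a single QC result falls more than 2 SD from target; the 2 SD reagent shift is an assumed example.

```python
# Sketch: false positive/negative probabilities for a simple 1-2s QC rule,
# assuming normally distributed QC results. The rule and shift size are
# illustrative assumptions, not from the blog entry.
from scipy.stats import norm

def pass_prob_1_2s(shift_sd: float) -> float:
    """Probability a single QC result lands within +/-2 SD of target
    when the assay has shifted by shift_sd standard deviations."""
    return norm.cdf(2 - shift_sd) - norm.cdf(-2 - shift_sd)

# In control (no shift): the rule still flags ~4.6% of runs (false positives).
print(f"false positive rate: {1 - pass_prob_1_2s(0.0):.3f}")  # ~0.046

# Bad reagent causing a 2 SD shift: QC passes about half the time (false
# negatives), so an erroneous INR can be reported with no error committed.
print(f"false negative rate: {pass_prob_1_2s(2.0):.3f}")      # ~0.500
```

Even with everyone following procedure, some runs with a real problem will pass QC; that is the tradeoff built into any QC rule.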

Another viewpoint of case 2 is that the adverse event was caused by the socioeconomic practice of medicine. The laboratory followed regulatory-accepted ways of controlling quality, and these practices are a tradeoff between effectiveness and cost.

The problem with Dr. Wachter's suggestion is that it provides an incentive not to look very hard for errors. Thus, if in case 1 no one had found out about the patient sample mix-up, then according to Dr. Wachter, there would have been no error.


Maybe it’s finally time to talk about outliers

April 9, 2011

I’ve argued for some time that to properly evaluate an assay, one has to account for all of the results. The conventional wisdom is to specify limits for 95% of the results, with the implication that if the results are within limits, the assay is acceptable. Implied in, or explicitly part of, the 95% requirement is the assumption that the data are normally distributed, which makes large errors extremely unlikely.

A recent paper to be published in May in Clinical Chemistry (subscription required), and available now, is about troponin outliers. The authors found that for 4 methods, the outlier rate ranged from 0.06% to 0.44%. Thus, all of these assays would fly under the radar if they merely met requirements for 95% of results. To put these rates in perspective, for 1,000,000 results a rate of 0.06% is 600 outliers and a rate of 0.44% is 4,400 outliers. These outliers are of one type: irreproducible outliers in duplicate samples. Reproducible outliers due to an interfering substance are another type, so the rate of outliers due to all causes would be larger, although given the way outliers were calculated in this study, assays with very good precision were at a disadvantage.
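
To see how far these rates are from what a normal model would allow, here is a quick sketch (my own numbers, not from the paper) of two-sided normal tail probabilities:

```python
# Sketch: two-sided tail probabilities of a normal distribution, to contrast
# with the observed troponin outlier rates (600-4,400 per million results).
from scipy.stats import norm

for k in (2, 4, 6):
    p = 2 * norm.sf(k)  # probability of a result beyond k SD, either side
    print(f"beyond {k} SD: {p:.2e} (~{p * 1_000_000:,.3f} per million)")
# Beyond 6 SD, normality predicts ~0.002 outliers per million results,
# orders of magnitude below the 600-4,400 per million observed.
```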

The study included 2,391 samples, not a sample size that laboratories would want to run to qualify an assay. This is probably one reason that people don't talk about outliers: evaluating them by running samples requires too many samples. The most efficient way to evaluate the potential for outliers is to perform risk analysis.
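
A back-of-the-envelope way to see the sample size problem is the "rule of three": if zero outliers are observed in n samples, the 95% upper confidence bound on the outlier rate is roughly 3/n. A short sketch (the target rates are assumed for illustration):

```python
# Sketch: the "rule of three" - with zero outliers seen in n samples, the
# 95% upper confidence bound on the outlier rate is about 3/n, so bounding
# the rate below a target requires roughly 3/target outlier-free samples.
for target in (0.0044, 0.0006, 0.0001):  # 0.44%, 0.06%, 0.01% (assumed)
    print(f"rate < {target:.2%} needs ~{3 / target:,.0f} outlier-free samples")
# Even this study's 2,391 samples bound the rate only below ~0.13% (3/2391),
# which is why risk analysis beats brute-force testing here.
```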


Conflict of Interest

April 1, 2011

I had previously complained about a conflict of interest at CLSI. Bringing this up to CLSI didn't help. I became aware from another blog that others have also observed this, as reported here.