The meaning of error grid limits

April 27, 2012

I gave a presentation about error grids at the Quality in the Spotlight conference in Antwerp, Belgium. Someone asked a question about the meaning of error limits, which I tried to answer; since then I have thought of a better answer, which follows.

Assume there is a line of cars waiting in a tunnel and ahead of the cars there is a car spewing out carbon monoxide. A CO sensor goes off in the tunnel and emergency workers arrive. All engines are shut off and car occupants are examined.

Car 1 – the occupant has a high CO reading and is unresponsive. He is taken to a hospital and after a long treatment recovers.

Car 2 – the occupant has much less CO than car 1, but it is still high. He has slight CO poisoning symptoms and is treated.

Car 3 – the occupant has a CO level 5% lower than car 2's, and no symptoms.

Car 4 – the occupant has detectable CO, but much less than car 3, and no symptoms.

Car 5 – the occupant has zero CO and no symptoms.

Now suppose another CO meter – a faulty one – had been reading zero CO for cars 1-5. With these cases one could postulate some error grid limits.

The error for the bad meter for car 1 would be at the “C” zone limit – meaning that this amount of error is life-threatening.

By looking at the symptoms and CO levels for cars 2 and 3, one could set the “A” zone limit between their two error values. That is, this error limit marks the point where harm starts to be observed.

However, it is important to consider what is happening in cars 3 and 4. In both cases there is harm but no clinical symptoms – the harm is occurring at a subclinical level, but it is still harm! So setting the limit at the point where harm becomes clinically detectable is somewhat arbitrary. The only case with no harm is the one with no error – car 5. Thus, it always makes sense to have as little error as possible, and hence state-of-the-art limits are recommended. These arguments translate to other diseases and tests.

NOTE: Okay, some COHb is normal in blood, but that’s my example.
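
To make the zone logic concrete, here is a minimal sketch that classifies a meter's error against “A” and “C” style limits. The zone limits and the cars' COHb values are hypothetical illustrations, not from any published error grid (and real grids typically depend on both the true and measured values, not just the size of the error):

```python
# Minimal sketch of error grid zone classification based on the size
# of the measurement error. The limits and COHb values are hypothetical.
A_LIMIT = 5.0   # % COHb error below which no clinical harm is observed
C_LIMIT = 25.0  # % COHb error at which the error becomes life-threatening

def classify_error(true_value: float, measured_value: float) -> str:
    """Assign a zone based on the absolute measurement error."""
    error = abs(measured_value - true_value)
    if error < A_LIMIT:
        return "A"  # no observable harm (subclinical harm may still occur)
    if error < C_LIMIT:
        return "B"  # harm observed, but not life-threatening
    return "C"      # life-threatening error

# The faulty meter read zero for every car, so its error equals each
# occupant's true COHb level.
for car, true_cohb in [(1, 30.0), (2, 12.0), (3, 11.4), (4, 3.0), (5, 0.0)]:
    print(f"Car {car}: zone {classify_error(true_cohb, 0.0)}")
```

Note that car 5 lands in zone “A” only because its error is zero; as argued above, any nonzero error inside zone “A” can still carry subclinical harm.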

New Markers – too good to be true?

April 13, 2012

One thing that in-vitro diagnostics people know is that new markers never seem to turn out as good as the initial publications suggest. This has now been studied, and the analysis is available here. Included in the analysis are CRP, prostate markers, and others. Ioannidis and Panagiotou compared initial publication results with subsequent meta-analyses and larger studies and confirmed that the effects claimed in the initial publications were often not as large in the larger study or meta-analysis.

If the initial studies were unbiased, one would expect the larger studies to show greater effects half of the time and smaller effects the other half. That the effects shrink systematically instead suggests the initial publications are biased toward large effects.
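
A quick simulation makes the symmetry argument concrete. This is a minimal sketch, assuming unbiased, normally distributed effect estimates; the true effect, standard errors, and number of study pairs are all hypothetical:

```python
import random

# Minimal sketch: if both the initial study and the larger follow-up
# estimate the same true effect without bias, the follow-up exceeds
# the initial estimate about half the time. All numbers are hypothetical.
random.seed(1)
TRUE_EFFECT = 0.5
N_PAIRS = 100_000

follow_up_larger = 0
for _ in range(N_PAIRS):
    initial = random.gauss(TRUE_EFFECT, 0.20)    # small, noisy initial study
    follow_up = random.gauss(TRUE_EFFECT, 0.05)  # larger, more precise study
    if follow_up > initial:
        follow_up_larger += 1

print(f"Follow-up exceeded initial effect in "
      f"{100 * follow_up_larger / N_PAIRS:.1f}% of pairs")  # ~50%
```

If instead only the most impressive initial estimates get published, the follow-up comes out smaller far more than half the time – which is the pattern the meta-analyses found.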


I’m not an expert in risk management

April 4, 2012

I was at the Quality in the Spotlight conference in Antwerp, Belgium, which as always was enjoyable. There were several talks about risk management, and the CLSI guideline EP23. (My main talk was about error grids). I found it strange that several people referred to me as the expert in risk management. Now I have studied risk management techniques such as FMEA, fault trees, and FRACAS, attended conferences such as RAMS (Reliability and Maintainability Symposium), practiced all of these techniques for years, and consider myself competent, but not an expert.

I think the problem is that many people in clinical chemistry have little knowledge or experience with formal risk management techniques so relatively speaking I appear as an expert to them.

This reminds me of an EP23 phone conference meeting several years ago, where one of the subcommittee members said, “now let me get this straight, when you’re performing risk management, you’re ….” and then tried to walk through the steps of a FMEA much like a person trying to understand football: so if you make 10 yards, then you keep the ball, right? Of course, the problem was that this person was a member of the committee, and most of the other members were at a similar knowledge level – yet committees are generically called committees of experts.

If there is a subcommittee on a statistical topic, it is understandable that not all committee members are competent in the statistics at hand, but risk management is different. There is nothing complicated about risk management – anyone can learn it.
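
As a flavor of how learnable the core mechanics are, here is a minimal FMEA-style sketch. The failure modes and 1-10 ratings are hypothetical illustrations, not taken from EP18 or any real device analysis:

```python
# Minimal FMEA-style sketch: rate each failure mode for severity,
# occurrence, and detection (10 = hardest to detect), then rank by the
# risk priority number RPN = S x O x D. All entries are hypothetical.
failure_modes = [
    # (description, severity, occurrence, detection)
    ("Sensor drift gives falsely low result", 9, 4, 7),
    ("Sample tube mislabeled",                8, 2, 5),
    ("Reagent stored above spec temperature", 6, 3, 3),
]

ranked = sorted(failure_modes,
                key=lambda fm: fm[1] * fm[2] * fm[3],
                reverse=True)

for desc, s, o, d in ranked:
    print(f"RPN {s * o * d:4d}  {desc}")
```

The mechanics fit in a dozen lines; the hard part – and the part the CLSI documents needed real contributions for – is supplying honest failure modes and ratings.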

But anyone can also fail to learn it, with the result that a committee can easily go astray. So the CLSI risk management documents are:

EP18A2 – covers the formal techniques of FMEA, fault trees, and FRACAS. The examples in EP18 are poor because no one could contribute real examples, and real examples are what is needed.

EP23A – a deviation from the formal techniques, IMHO because no one knew enough about the formal techniques, and hence they did what seemed OK to them. The example was also poor because it was constructed rather than real.

And now there is the EP23 workbook – the book to explain the book, which is always a bad sign – although I have not seen it yet.