Not a member of the club

May 28, 2011

The NACB (National Academy of Clinical Biochemistry) has published guidelines for glucose meters (subscription may be required). Of the 378 references, my Letter (1) and our review article on glucose meter standards (2) didn't make it. It's not as if these references are off topic or buried in obscure journals.

One of the points we make is that standards should account for 100% of the data (by using an error grid), but the NACB still recommends limits for only 95% of the data. So our publications were essentially ignored. Being ignored doesn't mean you're wrong; it just means you're not a member of the club.
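To make the distinction concrete, here is a minimal sketch in Python contrasting the two criteria; the error values and zone limits are made up for illustration and are not taken from any standard.

    # Contrast a 95%-of-data limit with an error grid covering 100% of data.

    def within_95_criterion(errors, limit):
        """Pass if at least 95% of errors fall within +/- limit;
        this says nothing about how bad the worst 5% are."""
        inside = sum(abs(e) <= limit for e in errors)
        return inside / len(errors) >= 0.95

    def error_grid_zones(errors, a_limit, b_limit):
        """Classify 100% of errors into zones: A (no harm),
        B (minor harm), C (serious harm)."""
        zones = {"A": 0, "B": 0, "C": 0}
        for e in errors:
            if abs(e) <= a_limit:
                zones["A"] += 1
            elif abs(e) <= b_limit:
                zones["B"] += 1
            else:
                zones["C"] += 1
        return zones

    # 96 small errors and 4 large ones: the 95% criterion passes,
    # but the error grid still flags the 4 results in zone C.
    errors = [2, -3, 1, 4, -2] * 19 + [2] + [60, -75, 80, -90]
    print(within_95_criterion(errors, limit=10))            # True
    print(error_grid_zones(errors, a_limit=10, b_limit=30)) # {'A': 96, 'B': 0, 'C': 4}

The 95% criterion declares this performance acceptable while saying nothing about the four large errors; the error grid accounts for them explicitly.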

References

  1. Krouwer JS. Wrong thinking about glucose standards. Clin Chem 2010;56:874-875.
  2. Krouwer JS, Cembrowski GS. A review of standards and statistics used to describe blood glucose monitor performance. J Diabetes Sci Technol 2010;4:75-83.

EP23 – Quality control based on risk management – misses the mark

May 27, 2011

CLSI EP23 looks as if it is about to be published. To recap, EP23 was originally intended to describe how a laboratory would use EP22. But EP22 has been canceled, so EP23 must stand on its own.

The way I review these standards is to go right to the example, which is about a hypothetical automated glucose assay. It seems as if a bunch of people sat around a table asking what can go wrong with a glucose assay, what the manufacturer has done or recommends, and what the laboratory can do. The list that was prepared sounds reasonable, but it isn't. Here's why.

Most of the actions recommended for the laboratory would be done anyway, such as not using expired reagents or sending samples to a proficiency program. Because of this, the list of actions merely documents what a laboratory already does. To stop there and call it risk management is a mistake. Documentation of existing procedures is a step in FMEA, but the more important part of FMEA asks what (if anything) the laboratory needs to do in addition. Another way of looking at what the committee has done is to say that it has little to do with the real world: laboratories don't start from scratch; they have existing procedures.

There are other problems such as:

  1. Deviating from a standard FMEA (for example, there is no Pareto analysis; see the sketch after this list).
  2. Focusing on a single assay, which ignores the generic pre- and post-analytical errors that occur across assays.
  3. Ignoring FRACAS, which is a tool to reduce the rate of observed errors.
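To illustrate what the missing Pareto step looks like, here is a minimal Python sketch; the failure modes and the severity/occurrence/detection scores are hypothetical, not taken from EP23.

    # Hypothetical FMEA fragment for an automated glucose assay. Each
    # failure mode gets severity, occurrence, and detection scores (1-10);
    # the risk priority number (RPN) is their product, and the Pareto
    # ranking directs mitigation effort to the biggest risks first.
    failure_modes = [
        # (failure mode, severity, occurrence, detection)
        ("Expired reagent used",  6, 2, 2),
        ("Short sample / clot",   8, 5, 6),
        ("Wrong patient ID",      9, 3, 7),
        ("Calibrator lot shift",  7, 4, 5),
    ]

    ranked = sorted(
        ((name, s * o * d) for name, s, o, d in failure_modes),
        key=lambda item: item[1],
        reverse=True,
    )

    for name, rpn in ranked:
        print(f"{rpn:4d}  {name}")

The ranking forces exactly the question EP23's example stops short of: which risks deserve additional action, rather than a list of what is already being done.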

Last November I proposed to the EP23 team that most of EP23 be scrapped and what is good merged into EP18. What would remain is a series of real examples of FMEA (real examples are hard to find) and FRACAS (real examples are easier to find), mapped into the language of FMEA and FRACAS. A non-laboratory FRACAS example is the Pronovost work to reduce the rate of infections when placing central lines.
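For contrast with FMEA, here is a minimal FRACAS-style sketch in Python; the failure reports, workload, and timing of the corrective action are all made up.

    # FRACAS loop in miniature: report observed failures, compute a
    # failure rate per period, and check whether a corrective action
    # (taken here after month 3) actually reduced the rate.
    from collections import Counter

    reports = [  # (month, observed failure)
        (1, "hemolyzed sample"), (1, "label mismatch"),
        (2, "label mismatch"),   (2, "hemolyzed sample"),
        (3, "label mismatch"),   (4, "hemolyzed sample"),
        (5, "label mismatch"),
    ]
    tests_per_month = 1000  # assumed constant workload

    failures_by_month = Counter(month for month, _ in reports)
    for month in sorted(failures_by_month):
        print(f"month {month}: {failures_by_month[month] / tests_per_month:.2%}")

    before = sum(n for m, n in failures_by_month.items() if m <= 3) / (3 * tests_per_month)
    after = sum(n for m, n in failures_by_month.items() if m > 3) / (2 * tests_per_month)
    print(f"rate before corrective action: {before:.2%}, after: {after:.2%}")

Unlike FMEA, which anticipates potential errors, this works from errors that have actually been observed – which is why real FRACAS examples are easier to find.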


Patients – the missing voice

May 24, 2011

An important question about any assay – is its performance good enough? – is usually answered by a standards group. An example is the ISO 15197 standard for glucose meters.

The usual participants in standards groups are manufacturers, clinicians, regulators, and laboratorians. Among these groups, manufacturers tend to dominate. This was true for ISO 15197.

But one voice is often missing – that of patients. This is particularly important for glucose, where patients act as clinicians and laboratorians.

The FDA meeting last year did have a patient advocate, and patients have commented on the meeting, here and here.


CLSI Guidelines – the importance of real examples

May 15, 2011

CLSI Evaluation Protocol guidelines often contain statistical procedures, and statistics is challenging for most people. One can think of CLSI documents as having three parts: the explanatory text, the examples, and the appendices. The text is often lacking in spite of many revisions, simply because statistical explanations are hard to follow. The justifications for some of the statistics are in the appendices – which are even harder to follow.

This leaves the examples as an important part of these guidelines. If one understands the examples, then one can do the procedure, even if some of the text can't be followed. This has become a bit less important with the introduction of StatisPro software from CLSI, but some users may choose not to buy StatisPro, and it doesn't cover all guidelines.

Examples can be completely made up or they can use real data, which is much more useful. EP21 (total error) has two examples. One uses real data (LDL cholesterol) and contains a few outliers. During the comment period for EP21, several people wanted to change or delete that example because of the outliers, but outliers happen in the real world. The second example is made up, because I wanted normally distributed data and, although I worked for a manufacturer at the time, I couldn't find a real example of normally distributed data.

So the tradeoff is a made-up example that neatly illustrates the statistical method and yields no-brainer conclusions, or a real example – warts and all – that also illustrates the statistical method but doesn't look appealing or leads to conclusions that require judgment.
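As a small illustration of why the outliers mattered, consider this Python sketch with made-up difference data; it shows the general point, not EP21's actual procedure.

    # A parametric mean +/- 2*SD interval assumes roughly normal data and
    # is inflated by a few outliers; simply counting results beyond the
    # limit treats the outliers at face value.
    import statistics

    differences = [1.2, -0.8, 0.5, -1.5, 0.9, -0.3, 1.1, -0.6, 14.0, -12.0]

    mean = statistics.mean(differences)
    sd = statistics.stdev(differences)
    print(f"parametric interval: {mean - 2 * sd:.1f} to {mean + 2 * sd:.1f}")

    limit = 4.0
    outside = [d for d in differences if abs(d) > limit]
    print(f"{len(outside)} of {len(differences)} results exceed +/-{limit}")

With a made-up normal dataset the two approaches agree and the conclusions are automatic; with the two outliers they diverge, and someone has to judge what the outliers mean.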

This issue occurs in EP27 (error grids) but is much more acute there, because error grids require judgment in their creation and this judgment can seem (and often is) arbitrary. But this is the real world. For example, with glucose, clinicians are still debating the location of the innermost zone of the error grid. The error grids in EP27 are real (blood lead, prothrombin time, and urine albumin). So the comments complain that the error grids seem arbitrary, and the commenters would rather have a made-up, neat example that is an abstraction of an error grid, with clearly defined clinical consequence zones. But that is not the real world and won't help anyone.


CLSI loses its way and then finds itself again

May 8, 2011

Continuing the saga of EP27, the previous post mentioned that the Board of Directors of CLSI had a slew of comments – 201 at first and then an extra 46 thrown in for good measure. One comment that I found rather irritating – not the one in the previous post – got me thinking …

When I was chairholder of the Area Committee on Evaluation Protocols (1999-2004), my recollection of the document development process was as follows (I choose to use the old names):

The subcommittee wrote documents (from the CLSI website: "having primary responsibility for drafting individual consensus documents and for evaluating and addressing comments received during each phase of the consensus process").

The area committee ensured the quality of documents (from the CLSI website: "responsible for the final technical review before publication at the proposed level and/or submission to the Board of Directors for approval to publish at the approved consensus level").

The Board of Directors ensured that the process was followed. I have no memory of the Board of Directors commenting on documents. Examining the 247 Board of Directors comments, many merely correct grammar – that is the job of the CLSI editors. To be fair, some of the comments improved the standard, and that is a good thing. The comments that were technical are the job of the area committee. But most importantly, the Board voted on the document, acting basically as another area committee but with more clout, since a single reject or postpone vote would delay the document (this is the first I had heard of a postpone vote).

It turns out that other subcommittees have complained, and the Board of Directors recently issued a statement that the area committee (now called the consensus committee) is responsible for the final approval of documents.


Allowable assay error and regulations

May 4, 2011

EP27, the CLSI standard about error grids, is about to begin its fifth year without being published as an approved-level standard. The latest roadblock is that the CLSI Board of Directors had a slew of comments – they shouldn't even be commenting, but more on this later. So here is one of the comments:

“Regulatory requirements should not influence grids; the grids are based on risk of harm.”

This statement sounds good but is not helpful. Here's why. The risk of patient harm depends on the amount of assay error. It is often suggested that there is an error limit below which there is no harm and above which harm is certain (the red line in the figure, which is taken from EP27P). This is not true. The risk of harm is zero at zero assay error and rises as assay error rises (the green line in the same figure). And the harm itself can differ: a small glucose error can cause a small error in the amount of insulin given, which is far less harmful than a large glucose error that suggests hyperglycemia in a patient who is actually hypoglycemic.

If you're still with me: error grids have to use the red lines from the figure. That is, they dichotomize the continuous risk of harm into an on/off level for each zone in the error grid. To simplify things, consider only the innermost zone in an error grid, the one that separates no harm from minor harm. Where should this limit be placed? Remembering the green line, the limit should be placed as close to zero error as possible. (It is possible that the green line is "S" shaped rather than as shown, but that doesn't change things.) The problem with placing the limit too close to zero is expense: for example, running each assay in triplicate reduces error but increases cost. The reality is that the socioeconomic practice of medicine dictates where the limit is placed, and that practice is often realized through regulation, which can differ from country to country.
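Here is a sketch of the dichotomization argument in Python; the risk curve, zone limit, and standard deviations are hypothetical and are not taken from the EP27P figure.

    # Continuous risk of harm (the 'green line') versus the dichotomized
    # zone judgment (the 'red line'), plus the cost side of replication:
    # triplicate measurement cuts random error by 1/sqrt(3) but triples cost.
    import math

    def risk_of_harm(error_pct):
        """Hypothetical continuous risk curve: zero risk at zero error,
        rising as assay error rises."""
        return 1 - math.exp(-0.05 * abs(error_pct))

    def zone(error_pct, innermost_limit=10.0):
        """Dichotomized version: no harm vs. minor harm at the zone limit."""
        return "no harm" if abs(error_pct) <= innermost_limit else "minor harm"

    for e in (2, 8, 12, 25):
        print(f"error {e:2d}%: risk {risk_of_harm(e):.2f}, zone: {zone(e)}")

    sd_single = 5.0  # assumed single-measurement SD (%)
    print(f"triplicate SD: {sd_single / math.sqrt(3):.2f}% vs single: {sd_single:.2f}%")

The sketch makes the tension visible: the continuous risk at 8% error and at 12% error differs only modestly, yet the zone judgment flips from "no harm" to "minor harm" across the limit – and where that limit sits is a cost decision, not a purely clinical one.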

So in the real world, regulatory requirements influence error grids.


Medical errors that have happened to me – 1

May 1, 2011

I had an appointment with a urologist as a routine follow-up to having been treated for prostate cancer two years ago. My treating doctor was in another state, and this local urologist had been recommended.

After checking in, the receptionist told me I would be seeing “L”. I assumed L. was going to take my vital signs – he never did – but when he introduced himself as a physician’s assistant, I asked if I was indeed going to see Dr. F, with whom I had made the appointment. He said no, Dr. F. wouldn’t be back until the afternoon. He was surprised because he said it was their policy for new patients (like me) to be seen initially by the doctor.

I have written before about error cascades. There is the initial error (scheduling me when Dr. F was not there), the opportunity to detect the error (both the receptionist and L. detected it), and the recovery – telling me to come back in the afternoon when Dr. F would be there! The recovery failed.

Things got worse. They wanted to increase the frequency of PSA testing and also test me for testosterone. It just so happened that they have their own physician's office immunoassay analyzer – with a limited menu, but one that includes assays for PSA and testosterone. I didn't let them draw my blood, and I won't be back.