The lighter side of trade shows

July 29, 2009


I have been going to the AACC (American Association for Clinical Chemistry) meetings for many years. Each year, about a month before the meeting, I start getting mail from companies describing the products they will feature at the meeting, inviting me to workshops, and giving the location of their booths.

Without fail, the mail keeps coming during the meeting (when I have already left) and after I return home, again announcing products I should see. It makes one wonder what’s going on in these companies. Next year I will show the distribution of mail received vs. date.


CLSI Evaluation Protocol Area Committee – Tough crowd

July 25, 2009


Although I responded to a written comment about EP21 (Total Error), the comment was also presented verbally at the AACC meeting in Chicago. There were some bizarre additional rants from this commentator, which I will ignore, but the basic comment was that EP21 should not attempt to include pre-analytical error and should focus only on analytical error.

So here is my response.

EP21 simply looks at differences between a candidate and comparative method. These differences are typically generated in a method comparison experiment in which one sequentially assays patient samples by both methods. There is no way to know the source of these differences. Most of the time, the same sample will be split in two and analyzed on two instrument systems, so pre-analytical error will not be observed, but there is no guarantee that the same sample will (or in some cases should) be used.

Consider a manufacturer who compares a blood gas pO2 value (syringe drawn from a tonometer) to its reference value (gas tank). Here, the difference is not just analytical error but also any pre-analytical error from room air mixing with the blood in the syringe. This pre-analytical error is dependent on the technique of the operator.

Another example is comparing a Point-of-Care assay that relies on a fingerstick (glucose, PT) to a laboratory comparative method that uses a (venous) serum sample. In this case, if one split the serum sample and used it for the Point-of-Care assay, one would be excluding the possibility of error that might occur routinely (e.g. from an improper fingerstick).

So in one sense, the revised version of EP21 is simply correcting a mistake in the original version, because there is nothing new to either the protocol or the analysis method.

Putting things another way, the “total” in total error will capture all errors that are possible in the experiment that has been performed. If pre-analytical error is possible in the experiment, then it may appear in the differences. In no case will all error sources be sampled, so the “total” applies only to that experiment. For example, for analytical error, usually only one reagent lot is sampled, and most pre-analytical error is not captured when a split sample is used.

Finally, there was another commentator with a reject vote, a rather hostile one I might add (thanks to the several people who came to my defense). One of his comments concerned the LDL cholesterol example in EP21. He said it was a poor example because there were three gross outliers, which should lead to troubleshooting, and that further analysis would be a waste of time.

Now most Evaluation Protocol documents use simulated examples. The LDL example is real data from a clinical laboratory. The commentator has missed the purpose of the experiment and analysis. Data (e.g. outliers) are not deleted in EP21. The data is what it is: a snapshot in time of what the laboratory will see routinely. It is naive to expect that the laboratory has the resources to uncover the sources of the differences. Knowing that some large differences can be expected in routine use is valuable information.


CLSI and conflict of interest

July 24, 2009


CLSI (Clinical and Laboratory Standards Institute), formerly NCCLS, develops consensus-based standards in laboratory medicine.

I have previously commented on a standard that CLSI cancelled (EP11 – Uniformity of Claims) even though it had passed the consensus process.

As head of the working group on revising EP21 (Total Error), I receive all of the consensus comments. One of them, accompanied by a reject vote, is troubling; it comes from the chairholder of the Area Committee on Evaluation Protocols.

The chairholder seems to be unfamiliar with EP21 since he treats it as if it were a new document. It has been out for six years with the current draft a minor revision. One would have hoped that the chairholder would be more familiar with documents he is supposed to manage (e.g. EP21). Moreover, to comment largely about the existing document after the working group has completed its revision is poor management.

The chairholder’s comments made me question how someone with so little knowledge of these topics could be the area committee chair. EP21 is an extremely simple standard. All one does is plot differences between a candidate and comparative method. There is no mathematical modeling, as in more involved total error methods. Yet the comment was that this standard is too complicated. So perhaps this is a tactic: if something is simple, call it complicated. Other comments followed this pattern.

But the real problem is the chairholder’s complaint that EP21 is a departure from Westgard’s method for estimating total error. As a standard, EP21 relies on peer reviewed literature. Apparently, the area committee chair is unfamiliar with this literature (1-8) but the concern is that the commentator’s company has a financial relationship with Westgard’s company. This conflict of interest should not be allowed to occur at CLSI.

References

  1. Krouwer JS. Estimating Total Analytical Error and Its Sources: Techniques to Improve Method Evaluation. Arch Pathol Lab Med 1992;116:726-731.
  2. Krouwer JS, Monti KL. A Simple Graphical Method to Evaluate Laboratory Assays. Eur J Clin Chem Clin Biochem 1995;33:525-527.
  3. Krouwer JS. Evaluation of Assay Systems. Clin Chem News 2001;27:10-14.
  4. Krouwer JS. Setting Performance Goals and Evaluating Total Analytical Error for Diagnostic Assays. Clin Chem 2002;48:919-927.
  5. Krouwer JS. Recommendation to treat continuous variable errors like attribute errors. Clin Chem Lab Med 2006;44:797-798.
  6. Krouwer JS. How to Improve Total Error Modeling by Accounting for Error Sources Beyond Imprecision and Bias. Clin Chem 2001;47:1329-1330.
  7. Krouwer JS. Problems with the NCEP (National Cholesterol Education Program) Recommendations for Cholesterol Analytical Performance. Arch Pathol Lab Med 2003;127:1249.
  8. Krouwer JS. A recommended improvement for specifying and estimating serum creatinine performance. Clin Chem 2007;53:1715-1716.

No free lunches, no magic bullets

July 24, 2009


At the recent AACC meeting, I went to a CLSI (Clinical And Laboratory Standards Institute) meeting about evaluation standards. I chair three such standards which were discussed at this meeting. This entry is about EP18, a standard about risk management.

A lab director commented that no clinical laboratory would ever use this document. He went on to say that medical technologists would not understand EP18, implying that perhaps some changed version of the document (i.e., a magic bullet) would help.

I got the impression that this lab director would never initiate a risk management program. EP18 is simply a standard. It is not a regulatory requirement and I suspect that without some incentive (regulatory or otherwise), not only this lab director but many clinical laboratories would not undertake a formal risk management program.

The need for risk management exists! After the meeting, another lab director told me about a death that was caused by pre-analytical error (patient sample mix up) in his laboratory.

What’s required for clinical laboratories to perform risk management? It’s true that medical technologists are not taught risk management and would benefit by training, hopefully by using EP18. Once people are trained, then the actual risk management must be carried out and while this is not a major project it won’t be completed in ten minutes either (no free lunches).

A general hospital example relates to this discussion (previously covered here). The infection rate for placing central lines was about 11 percent. This was a serious problem leading to patient harm and occurred in accredited hospitals. One clinician acted to do something on his own. He used one of the tools described in EP18 – he of course didn’t use EP18 – and reduced the infection rate to zero.

So CLSI has to decide. It can sit on the sidelines and let improvement in laboratory patient safety proceed without CLSI or it can try to promote the use of risk management. Once again, EP18 is only a standard. The writing can always be improved but there is a limit to how well one can explain an “OR gate”.


EP21 and pre-analytical errors

July 12, 2009


The CLSI document EP21, which is about total error, is being revised. First, some background.

Clinical laboratory evaluations have been hampered for many years by a recommended modeling of errors, which unfortunately leaves some errors out of the picture. The typical model adds two times the standard deviation of a candidate method (evaluated from a precision experiment) to the average bias (evaluated from regression between a candidate and comparative method). This gives an estimate of the location of 95% of the differences between the candidate and comparative method (given that certain assumptions are met). This location can be compared to a set of “acceptable limits.” Because one is estimating parameters (average bias and imprecision), it is considered perfectly acceptable to discard outlier results.
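For concreteness, the modeling calculation reduces to a few lines. This is a minimal sketch; the imprecision, bias, and acceptable limit below are hypothetical numbers chosen purely for illustration:

```python
# Traditional total-error model: |bias| + 2 * SD.
# All numbers here are hypothetical, for illustration only.

sd = 3.0     # imprecision (SD) from a precision experiment
bias = 2.0   # average bias from regression vs. the comparative method

# Estimated location of 95% of the candidate-minus-comparative differences
total_error = abs(bias) + 2 * sd

acceptable_limit = 10.0  # a hypothetical "acceptable limit"
print(total_error, total_error <= acceptable_limit)  # 8.0 True
```

Note that the calculation says nothing about the remaining 5% of differences, which is exactly the problem discussed next.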

The main problem with this method is one would also like to know the location of the remaining 5% of the differences. If these differences are large enough (e.g. beyond the set of acceptable limits), then they might cause patient harm. Since many clinical laboratories report a million results a year, up to 50,000 results could potentially harm patients for an assay that might have been judged acceptable according to the modeling method just discussed.

EP21 changed this by dropping modeling. It simply observes differences between the candidate and comparative method. Particularly useful is the “mountain plot” which shows probability vs. differences. Important to this analysis is that no data can be discarded.
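A mountain plot is essentially a folded empirical distribution of the differences. As a minimal sketch (the differences below are made up for illustration), it can be computed like this:

```python
# Minimal sketch of a mountain plot: each sorted difference is plotted
# against its percentile rank, folded at 50% so the curve peaks near
# the median. The differences below are made up for illustration.

def mountain(differences):
    """Return (difference, folded percentile) pairs for plotting."""
    d = sorted(differences)
    n = len(d)
    pairs = []
    for i, x in enumerate(d, start=1):
        p = 100.0 * i / (n + 1)               # percentile rank
        pairs.append((x, min(p, 100.0 - p)))  # fold at the 50th percentile
    return pairs

# The outlier (15) stays in the plot, visible out in the tail.
diffs = [-4.0, -1.0, 0.0, 1.0, 2.0, 3.0, 15.0]
for x, p in mountain(diffs):
    print(x, p)
```

Because every difference appears in the plot, an outlying observation shows up as a long tail rather than being averaged away.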

The EP21 analysis highlights outlying observations, whereas the modeling method suppresses them. But it is precisely these outlying observations which have the potential to harm patients.

The EP21 revision now includes, in addition to analytical errors, pre-analytical and post-analytical errors. Since there is no change to the data collection protocol, it is probably better to say that the analysis no longer excludes pre-analytical and post-analytical errors. Examples of pre-analytical errors are incorrect sample preparation or mixing up two patient samples. Errors such as mixing up patient samples are not specific to the assay being evaluated. Another type of error occurs when an assayed sample fails to produce a result, as when an instrument detects a fault in the analytical process and suppresses the result. A non-result does not generate a difference, but delayed results are a potential source of patient harm and their frequency can be measured.

This has raised an objection, since traditional evaluations focus on analytical error only. EP21 does allow a subset of errors due to the analytical process alone to be reported, but it is time to estimate errors from all sources.

Reference

Krouwer JS. Estimating Total Analytical Error and Its Sources: Techniques to Improve Method Evaluation. Arch Pathol Lab Med 1992;116:726-731.


Reading the New York Times can be dangerous to your health

July 10, 2009


David Leonhardt has a July 7th, 2009 front page article in the New York Times: “In Health Reform, a Cancer Offers an Acid Test.” His test is how health care reform will affect prostate cancer treatment costs. Unfortunately, his article is ill-informed and biased to agree with his agenda. Here’s why.

He starts by comparing treatments for what he calls “slow-growing, early-stage prostate cancer.” Unfortunately, the terms slow-growing and early-stage don’t always go together. A fast-growing cancer is at some point early-stage, and the main problem – not mentioned by Mr. Leonhardt – is that no one knows how to distinguish slow-growing from fast-growing prostate cancer. Although most prostate cancers are slow growing, some aren’t, and about 30,000 men die each year from prostate cancer.

So Mr. Leonhardt has already set the stage: by framing the discussion around slow-growing, early-stage prostate cancer, the question becomes how to treat a non-threatening disease. He suggests that among the choices are watchful waiting or more “aggressive” options, including proton beam therapy, which “involves a proton accelerator that can be as big as a football field.” The word aggressive implies overtreatment, and big as a football field suggests too much money.

It is somewhat baffling why Mr. Leonhardt would ask Dr. Daniella Perlroth what she would recommend to a family member. Her answer was watchful waiting. Dr. Perlroth, who Mr. Leonhardt said has studied “the data” (which data?), is an infectious disease specialist. Why wouldn’t Mr. Leonhardt ask a urologist or radiation oncologist? Would you ask a baseball player which hockey stick to buy?

Mr. Leonhardt talks about costs: $50,000 for IMRT (a form of photon-based radiation) and $100,000 for proton beam therapy. Here, Mr. Leonhardt is silent about his sources. One of the commentators, Sameer Keole, a radiation oncologist, disputes these figures and says, “The Medicare cost of treating men with prostate cancer with IMRT is approximately $35,000; the cost for protons is $49,000.”

Mr. Leonhardt has a brief discussion of side effects and says, “Imagine if further prostate research showed that a $50,000 dose of targeted radiation did not extend life but did bring fewer side effects, like diarrhea, than other forms of radiation.” Mr. Leonhardt should know that the major side effects to worry about with prostate cancer treatments are incontinence and impotence. Now who wouldn’t want an effective treatment with fewer of these side effects, even if it costs more?

So Mr. Leonhardt would lower the cost of healthcare by providing misinformation to steer people toward a treatment (watchful waiting) that might be OK but might also kill you, because it costs less. He overstates the cost of effective treatments and neglects to mention the side effects that matter. This is an opinion piece, and it does not belong on page one. Actually, because it is so ill-informed, it does not belong in the New York Times at all.