Disagreeing with Myself

February 21, 2011

I had previously written about FMEA event severity. I won’t try to reproduce that entry here. The issue is how items are ranked in a Pareto analysis. For example …

Lab error                                        Lab error Prob.   Effect         Effect Prob.
Patient sample mix-up (glucose assay)            0.01%             Patient harm   0.0001%
Patient sample mix-up (newborn screening assay)  0.07%             Patient harm   0.0000001%

The Effect Prob. column combines the probability of the lab error with the probability that the two patients who were mixed up have values different enough that patient harm is likely and that the clinician gives the wrong treatment based on the incorrect result. The issue is that I had previously argued that, in ranking two items, the item with the higher probability of lab error should be ranked higher even when its probability of patient harm is lower (as is the case for newborn screening). (Assume that the processes for newborn screening and glucose assays are different, so that there are two separate mechanisms for a patient sample mix-up.)
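To make the ranking question concrete, here is a minimal sketch, assuming the Effect Prob. is simply the product of the lab error probability and the probability of patient harm given the error (the conditional probabilities are back-calculated from the table and purely illustrative):

```python
# Numbers taken from the table above; the conditional probabilities are
# back-calculated and purely illustrative.
errors = {
    "Patient sample mix-up (glucose assay)":           {"p_error": 0.0001, "p_effect": 1e-6},
    "Patient sample mix-up (newborn screening assay)": {"p_error": 0.0007, "p_effect": 1e-9},
}

for name, e in errors.items():
    # Effect Prob. = P(lab error) x P(patient harm | lab error)
    p_harm_given_error = e["p_effect"] / e["p_error"]
    print(f"{name}: P(error) = {e['p_error']:.2%}, "
          f"P(harm | error) = {p_harm_given_error:.6%}, "
          f"Effect Prob. = {e['p_effect']:.7%}")

# Ranked by lab error probability, newborn screening comes first;
# ranked by effect probability, the glucose assay comes first
# (the ordering kept in the table).
```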

Having read another blog entry, by Bill Wilson, I was persuaded to keep the ranking as shown in the table. That entry is also a good explanation of the difference between an error and its potential effects.

The change in ranking is minor, since in either case one would try to ensure zero patient sample mix-ups. But eventually the money available runs out, hence the need for a Pareto.


EP21 and Sampling

February 5, 2011

EP21 continues to struggle; it’s going on four years, which seems extreme for what was supposed to be a simple revision. I have complained about obstructionists holding up documents, but there are just too many smart people objecting to EP21 for me to chalk this issue up to obstructionists.

The issue has to do with pre-analytical error and total error. As a glucose meter example, an EP21 evaluation would compare finger-stick glucose meter results against a laboratory glucose method using a venous sample. Assume that the finger stick is performed by non-laboratory personnel, which is what would routinely happen (in this hospital). In this case, some people comment that:

  1. The laboratory has no control over non-laboratory personnel.
  2. The total error results from this evaluation may differ from those obtained in a different hospital (due, among other reasons, to differences in training of the non-laboratory personnel who perform the finger stick).
  3. The laboratory won’t know the ABC glucose meter’s analytical performance (i.e., separated from the error contribution of the finger stick).
  4. There may be variation in the error rate of the non-laboratory personnel who perform the finger stick.

My response is that all of the above are true. EP21’s goal is to provide a distribution of differences between the candidate and comparative methods that is representative of what will occur in routine use. To quote Cuthbert Daniel (one of my mentors): “The observations must be a fair (representative, random) sample of the population about which inferences are desired.” To respond to 1-4:

  1. The laboratory has no control over non-laboratory personnel. Dealing with interdepartmental interfaces may be difficult, but this is no reason to subvert the goal of EP21.
  2. The total error results from this evaluation may differ from those obtained in a different hospital (due, among other reasons, to differences in training of the non-laboratory personnel who perform the finger stick). True, but what matters are the results for the hospital where the study was performed.
  3. The laboratory won’t know the ABC glucose meter’s analytical performance (i.e., separated from the error contribution of the finger stick). True, but this is not the goal of EP21.
  4. There may be variation in the error rate of the non-laboratory personnel who perform the finger stick. The protocol should account for this variation by adequate sampling (see the sketch at the end of this entry).

The quote is from Daniel C., Application of Statistics to Industrial Experimentation, Wiley, New York, 1976, p. 5.
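To illustrate point 4, here is a minimal sketch of why adequate sampling of operators matters. All parameters (number of operators, operator-to-operator bias, meter imprecision) are hypothetical assumptions, not EP21 values or data from any real meter; the point is only that a difference distribution estimated from a single operator can misrepresent routine use:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters (illustrative only):
n_operators = 20        # non-laboratory operators who perform finger sticks
n_per_operator = 25     # paired meter-vs.-laboratory results per operator
operator_bias_sd = 4.0  # mg/dL, spread of per-operator finger-stick technique bias
analytical_sd = 5.0     # mg/dL, meter analytical imprecision

def simulate_differences(n_ops):
    """Simulate meter-minus-laboratory differences for n_ops operators."""
    diffs = []
    for _ in range(n_ops):
        bias = rng.normal(0.0, operator_bias_sd)  # this operator's technique bias
        diffs.append(rng.normal(bias, analytical_sd, n_per_operator))
    return np.concatenate(diffs)

# Compare the difference distribution sampled from one operator vs. many.
for n_ops in (1, n_operators):
    d = simulate_differences(n_ops)
    lo, hi = np.percentile(d, [2.5, 97.5])
    print(f"{n_ops:2d} operator(s): 95% of differences within [{lo:.1f}, {hi:.1f}] mg/dL")
```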


New Names Won’t Help

February 3, 2011

CLSI publishes clinical laboratory standards, but these days the rate at which standards are appearing is painfully slow. So recently, CLSI came out with a revised, streamlined process. From what I can see, the process hasn’t changed at all other than certain names. Thus:

Area committee -> consensus committee
Area committee observer -> Area committee reviewer

Subcommittee -> document development committee
Subcommittee advisor -> document development contributor
Subcommittee observer -> document development contributor

New names won’t help; better leadership is needed. The key individual is the chairholder of the area committee (now consensus committee). When I was head of the Evaluation Protocols area committee (now consensus committee), I got three documents published that had each seen no action for 13 years! And I did this without new names.