Articles accompanied by an editorial

March 16, 2018

Ever notice how in Clinical Chemistry (and other journals) an editorial accompanies an article (or series of articles) in the same issue? The editorial is saying: hey! Listen up, people, these articles are really important. And then the editorial goes on to explain what the article is about and why it’s important. It’s the book explaining the book.


Theranos again

March 15, 2018

I have previously posted about Theranos (here and here) and now Theranos is in the news again, in a bad way (thanks to the AACC Artery for spotting this).

Elizabeth Holmes has been charged with massive fraud by the SEC. It makes me wonder whether the AACC past presidents are happy that they accepted positions on the Theranos board, and also whether AACC regrets its decision to feature Elizabeth Holmes at the 2016 AACC national meeting.

Performance specifications, lawsuits, and irrelevant statistics

March 11, 2018

Readers of this blog know that I’m in favor of specifications that account for 100% of the results. The danger of specifications that cover only 95% or 99% of the results is that an assay can meet the specification while still producing errors that cause serious patient harm. Large and harmful errors are rare, certainly less than 1% of results. But hospitals might not want specifications that account for 100% of results (and remember that hospital clinical chemists populate standards committees). A potential reason: if a large error occurs, a 95% or 99% specification can work to the hospital’s advantage in a lawsuit.

I’m thinking of an example where I was an expert witness. Of course, I can’t go into the details, but this was a case where there was a large error, the patient was harmed, and the hospital lab was clearly at fault (in this case it was a user error). The hospital lab’s defense was that they had followed all procedures and met all standards; in other words, sorry, but stuff happens.

As for irrelevant statistics, I’ve heard two well-known people in the area of diabetes (Dr. David B. Sacks and Dr. Andreas Pfützner) say in public meetings that one should not specify glucose meter performance for 100% of the results because one can never prove that the number of large errors is zero.

That one can never prove that the number of large errors is zero is true but this does not mean one should abandon a specification for 100% of the results.

Here, I’m reminded of blood gas. For blood gas, obtaining a result is critical. Hospital labs realize that blood gas instruments can break down and fail to produce a result. Since this is unacceptable, one can estimate the failure rate and reduce the risk of getting no result through redundancy (meaning using multiple instruments). But no matter how many instruments are used, the probability that all instruments fail at the same time is not zero!
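
As a rough illustration of the redundancy arithmetic (my own sketch, with hypothetical numbers): if each analyzer is unavailable with some probability and failures are assumed independent, the chance that every instrument is down at once shrinks quickly with each added instrument but never reaches zero.

```python
def prob_all_down(p_single_down: float, n_instruments: int) -> float:
    """Probability that every instrument is unavailable at the same time,
    assuming independent failures (a simplifying assumption)."""
    return p_single_down ** n_instruments

# Hypothetical example: each analyzer is down 2% of the time
p = 0.02
for n in (1, 2, 3):
    print(f"{n} instrument(s): P(no result available) = {prob_all_down(p, n):.0e}")
```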

A final problem with not specifying 100% of the results is that it may cause labs to put less thought into procedures that minimize the risk of large errors.

And in industry (at least at Ciba-Corning) we always had specifications for 100% of the results, as did the original version of the CLSI total error document, EP21-A (this was dropped in the A2 version).

A flaw in almost all lab medicine evaluations

January 25, 2018

Anyone who has even briefly ventured into the realm of statistics has seen the standard setup: one states a hypothesis, plans a protocol, collects and analyzes the data, and finally concludes whether the data support or refute the hypothesis.

Yet a typical lab medicine evaluation will state the importance of the assay, present data about precision, bias, and other parameters and then launch into a discussion.

What’s missing is the hypothesis, or in the terms we used in industry, the specifications. For example: assay A should have a CV of 5% or less over the range XX to YY. After data analysis, the conclusion is that assay A met (or didn’t meet) the precision specification.
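
Here is a minimal sketch of what such a pass/fail conclusion could look like in practice (the replicate values and the 5% CV limit are hypothetical, not from any actual evaluation):

```python
import statistics

def meets_cv_spec(results, cv_limit_pct=5.0):
    """Return the observed %CV of replicate results and whether it meets
    the pre-stated precision specification."""
    mean = statistics.mean(results)
    sd = statistics.stdev(results)  # sample SD (n - 1)
    cv_pct = 100.0 * sd / mean
    return cv_pct, cv_pct <= cv_limit_pct

# Hypothetical replicate measurements of one sample within the claimed range
replicates = [101.2, 99.8, 102.5, 100.4, 98.9, 101.7]
cv, passed = meets_cv_spec(replicates)
print(f"Observed CV = {cv:.1f}% -> {'meets' if passed else 'does not meet'} the 5% specification")
```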

These specifications are rarely if ever present in evaluation publications. Try to find a specification the next time you read an evaluation paper. And without specifications, there are usually no meaningful conclusions.

A simple improvement to total error and measurement uncertainty

January 15, 2018

There has been some recent discussion about the differences between total error and measurement uncertainty, regarding which is better and which should be used. Rather than rehash the differences, let’s examine some similarities:

1. Both specifications are probability based.
2. Both are models.

Being probability based is the bigger problem. If you specify limits for a high percentage of results (say 95% or 99%), then 5% or 1% of the results are left unspecified. If all of the unspecified results caused problems, this would be a disaster, given how many tests a lab performs. There are instances of medical errors due to lab test error, but these are (probably?) rare, meaning much less than 5% or 1%. But the point is that a probability-based specification cannot account for 100% of the results, because the limits would have to run from minus infinity to plus infinity.
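
A quick calculation (mine, assuming a Gaussian error model) shows why: the multiplier of the SD needed to cover a given fraction of results grows without bound as that fraction approaches 100%.

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal error model (an assumption)
for coverage in (0.95, 0.99, 0.999, 0.99999):
    k = nd.inv_cdf(0.5 + coverage / 2)  # symmetric limits of +/- k SDs
    print(f"{coverage:.3%} coverage needs limits of +/- {k:.2f} SD")
# No finite limits cover 100% of a normal distribution.
```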

The fact that both total error and measurement uncertainty are models is only a problem because the models are incorrect. Rather than rehash why, here’s a simple solution to both problems.

Add to the specification (either total error or measurement uncertainty) the requirement that zero results are allowed beyond a second, outer set of limits. To clarify, there are two sets of limits: an inner set that should contain 95% or 99% of the results and an outer set that no result should exceed.

Without this addition, one cannot claim that meeting either a total error or measurement uncertainty specification will guarantee quality of results, where quality means that the lab result will not lead to a medical error.
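
Here is a minimal sketch of what such a two-tier check might look like (the function, the limits, and the error values are mine, chosen only for illustration):

```python
def check_two_tier(differences, inner_limit, outer_limit, inner_fraction=0.95):
    """differences: observed result-minus-reference errors.
    Passes only if at least `inner_fraction` of errors fall within
    +/- inner_limit AND no error exceeds +/- outer_limit."""
    n = len(differences)
    within_inner = sum(abs(d) <= inner_limit for d in differences)
    none_beyond_outer = all(abs(d) <= outer_limit for d in differences)
    return (within_inner / n >= inner_fraction) and none_beyond_outer

# Hypothetical glucose errors in mg/dL, inner limit 15, outer limit 40
errors = [3, -7, 12, 5, -14, 2, 9, -4]
print(check_two_tier(errors, inner_limit=15, outer_limit=40))  # True
```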

Do it right the first time – not always the best strategy

December 14, 2017

I watched a remarkable video of wing suit flyers jumping into the open door of a descending plane; it appears that they had tried to accomplish this feat 100 times before succeeding.

On page four of a document that summarizes the quality gurus Crosby, Deming, and Juran, Crosby’s “Do it right the first time” appears. Clearly, this would have been a problem for the wing suit flyers. Crosby’s suggestion is appropriate when the state of knowledge is high. For the wing suit flyers, there were many unknowns, hence the state of knowledge was low. When the state of knowledge is meager, as it was at Ciba Corning when we were designing in vitro diagnostic instruments, we used the test, analyze, and fix (TAAF) strategy as part of reliability growth management and FRACAS. This sounds like the opposite of a sane quality strategy, but in fact it was the fastest way to achieve the reliability goals for our instruments.
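
For readers unfamiliar with reliability growth tracking, here is a small illustration (my own, with hypothetical numbers) of the kind of bookkeeping behind a TAAF program: under a Duane-style model, cumulative MTBF tends to follow a power law in cumulative test time, so a log-log fit gives a growth slope showing whether the test-analyze-and-fix cycle is actually improving the instrument.

```python
import math

# Hypothetical cumulative test hours and cumulative failure counts
test_hours = [100, 300, 700, 1500, 3000]
cum_failures = [12, 25, 40, 58, 80]
cum_mtbf = [t / n for t, n in zip(test_hours, cum_failures)]

# Ordinary least-squares fit of log(MTBF) = a + alpha * log(t)
x = [math.log(t) for t in test_hours]
y = [math.log(m) for m in cum_mtbf]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
alpha = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
print(f"Estimated reliability growth slope: {alpha:.2f}")
```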

Risk based SQC – what does it really mean?

December 4, 2017

Having just read a paper on risk based SQC, here are my thoughts…

CLSI has recently adopted a risk management theme for some of their standards. The fact that Westgard has jumped on the risk management bandwagon is as we say in Boston, wicked smaaht.

But what does this really mean and is it useful?

SQC as described in the Westgard paper is performed to prevent patient results from exceeding an allowable total error (TEa). To recall, the underlying total error model estimates TE = |bias| + 1.65 × SD, and the requirement is that TE not exceed TEa. I have previously commented that this model does not account for all error sources, especially for QC samples. But for the moment, let’s assume that the only error sources are average bias and imprecision. The remaining problem with TEa is that it is always specified for a percentage of results, usually 95%. So if an SQC procedure were to just meet the quality requirement, up to 5% of patient results could exceed TEa and potentially cause medical errors. That is 1 in every 20 results! I don’t see how this is a good thing, even if one were to use a 99% TEa.
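
A rough sketch of the arithmetic behind that 1-in-20 point (my own, with hypothetical numbers): with only average bias and imprecision in the model, an assay that just meets a 95% TEa still leaves close to 5% of results beyond the allowable limit.

```python
from statistics import NormalDist

def estimated_total_error(bias: float, sd: float, z: float = 1.65) -> float:
    """One-sided 95% total error estimate under the bias + imprecision model."""
    return abs(bias) + z * sd

def fraction_exceeding_tea(bias: float, sd: float, tea: float) -> float:
    """Fraction of results expected beyond +/- TEa, assuming normal errors."""
    nd = NormalDist(mu=bias, sigma=sd)
    return nd.cdf(-tea) + (1.0 - nd.cdf(tea))

bias, sd, tea = 2.0, 3.0, 7.0  # hypothetical values, same units
print(f"Estimated TE = {estimated_total_error(bias, sd):.2f} (allowable TEa = {tea})")
print(f"Expected fraction of results beyond TEa = {fraction_exceeding_tea(bias, sd, tea):.1%}")
```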

The problem is one of “spin.” SQC, while valuable, does not guarantee the quality of patient results. The laboratory testing process is like a factory process, and for any such process to be useful it must be in control (meaning in statistical quality control). Thus, SQC helps to guard against an out-of-control process. To be fair, if the process were out of control, patient sample results might exceed TEa.

The actual risk of medical errors due to lab error is a function not only of an out-of-control process but also of all the other error sources not accounted for by QC, such as user errors with patient samples (as opposed to QC samples), patient interferences, and so on. Hence, to say that risk based SQC can address the quality of patient results is “spin.” SQC is a process control tool – nothing more and nothing less.

And the best way of running SQC would be for a manufacturer to assess results from all laboratories.

Now some people might think this is a nit-picking post, but here is an additional point. One might be lulled into thinking that with risk based SQC, labs don’t have to worry about bad results. But interferences can cause large errors that lead to medical errors. For example, in the maltose problem for glucose meters, 6 of 13 deaths occurred after an FDA warning. And recently, there have been concerns about biotin interference in immunoassays. So it’s not good to oversell SQC, since people might lose focus on other important issues.