Looking at a paper about QC procedures (subscription required), I admit I was intrigued by the title: “Selecting Statistical Procedures for Quality Control Planning Based on Risk Management.”
Just reading the abstract and the first few lines informs me that the conclusions are unwarranted, because the authors claim they can estimate the probability of patient harm based on which QC procedure is chosen.
A QC procedure helps to detect problems with the assay process. Patient harm can be caused by an assay process gone astray, but it can also be caused by an assay process that has not gone astray. For example, a patient-sample interference can cause patient harm and will not be detected by QC. Moreover, the authors assume that an out-of-control condition persists in a constant fashion until it is detected by the next QC sample; a shift in results that affects only a limited number of samples can also occur, but it is eliminated from consideration. So even the QC considerations don't include all possible errors.
Ok, I admit that I have stopped reading, but it is clear that whatever the authors estimate (assuming their logic is correct) is an underestimate of the probability of patient harm.
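To make the underestimation argument concrete, here is a toy Monte Carlo sketch. All rates here are made-up, hypothetical numbers (not taken from the paper): a QC-based harm model counts only harm from persistent, QC-detectable analytical shifts, while transient shifts and interferences add harm that such a model never sees.

```python
import random

random.seed(0)

# Hypothetical, illustrative rates -- NOT from the paper under discussion.
P_PERSISTENT = 0.01        # test has a persistent analytical shift (QC-detectable)
P_TRANSIENT = 0.02         # test affected by a transient shift or interference (QC-blind)
P_HARM_GIVEN_ERROR = 0.1   # an erroneous result leads to a wrong, harmful decision

N = 1_000_000
harm_qc_model = 0   # harm that a QC-only model can "see"
harm_total = 0      # all harm, including QC-blind causes

for _ in range(N):
    # Harm from a persistent shift: counted by both the QC model and reality.
    if random.random() < P_PERSISTENT and random.random() < P_HARM_GIVEN_ERROR:
        harm_qc_model += 1
        harm_total += 1
    # Harm from an interference or transient shift: invisible to the QC model.
    if random.random() < P_TRANSIENT and random.random() < P_HARM_GIVEN_ERROR:
        harm_total += 1

print(f"harm rate seen by QC model: {harm_qc_model / N:.4%}")
print(f"true harm rate:             {harm_total / N:.4%}")
```

Under these assumed rates the QC-based figure is roughly a third of the true harm rate; the exact gap depends entirely on the made-up parameters, but the direction of the bias does not.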
That also makes me wonder: of all cases of patient harm caused by wrong medical decisions that were caused by assay error, what percentage is due to an assay process gone bad vs. other causes (e.g., interferences)? For example, searching for the word “interference” in article titles in Clinical Chemistry over the last 10 years yielded 912 results.