Thanks to Sten Westgard, whose website alerted me to an article about analytical performance specifications. Thanks also to Clin Chem Lab Med for making this article available without a subscription.
To recall, the EFLM task group was charged with filling in the details of performance specifications as initially outlined at the Milan conference held in 2014.
Basically, what this paper does is assign analytes (not all analytes that can be measured, but a subset) to one of three categories for how to arrive at analytical performance specifications: clinical outcomes, biological variation, or state of the art. Note that no specifications are provided – only which analytes are in which categories. It doesn’t seem like this should take three years.
And I don’t agree with this paper.
For one, talking about “analytical” performance specifications implies that user error and other mishaps that cause errors are out of scope. This is crazy, because the preferred option is based on the effect of assay error on clinical outcomes – and it makes no sense to exclude errors just because their source is not analytical.
I don’t agree with the second and third options ever playing a role (biological variation and state of the art). My reasoning follows:
If a clinician orders an assay, the result must be useful to the clinician in deciding on treatment. If this is not the case, the only reason a clinician would order such an assay is that he has a boat payment due and needs the funds.
So, for example, say the clinician will provide treatment A (often no treatment) if the result falls within X1-X2, and treatment B if the result is greater than X2. Of course this is oversimplified, since factors other than the assay result are involved. But if the assay result is 10 times X2 while the true value is between X1 and X2, the clinician will make the wrong treatment decision because of laboratory error. I submit that this model applies to all assays and that, by assembling clinician opinion, one can construct error specifications (see last sentence at bottom).
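The decision model above can be sketched in a few lines of code. This is only an illustration: the thresholds X1 and X2, the numeric values, and the treatment labels are hypothetical, not taken from any particular assay.

```python
# Illustrative sketch of the simplified treatment-decision model.
# Thresholds and values below are hypothetical examples.

def treatment_decision(result, x1, x2):
    """Return the treatment the (simplified) clinician would choose."""
    if x1 <= result <= x2:
        return "A"  # often no treatment
    if result > x2:
        return "B"
    return "other"  # below X1: outside the simplified two-zone model

x1, x2 = 70, 180      # hypothetical decision limits
true_value = 120      # truth lies within X1-X2, so treatment A is correct
measured = 10 * x2    # gross assay error: result is 10 times X2

print(treatment_decision(true_value, x1, x2))  # correct decision: A
print(treatment_decision(measured, x1, x2))    # wrong decision caused by error: B
```

The point of the sketch is only that a large enough assay error moves the result across a decision limit, which changes the treatment the clinician selects.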
In the event that outcome studies do not exist, the authors encourage double-blind randomized controlled trials. Get real, people – these studies would never be approved (e.g., feeding clinicians the wrong answer to see what happens)!
The authors also suggest simulation studies; I have previously commented that the premier simulation study they cite (the Boyd-Bruns glucose meter simulations) was flawed.
The Milan 2014 conference rejected the use of clinician opinion to establish performance specifications. I don’t see how clinical chemists and pathologists trump clinicians.