Overinterpretation of results – bad science

June 16, 2017

A recent article (subscription required) in Clinical Chemistry suggests that the results of many accuracy studies are overinterpreted. The authors go on to say that there is evidence of “spin” in the conclusions. All of this is a euphemistic way of saying that the conclusions are not supported by the studies that were conducted, which means the science is faulty.

As an aside, early in the article the authors imply that overinterpretation can lead to false positives, which in turn can lead to overdiagnosis. I have commented before that the word overdiagnosis makes no sense.

But otherwise, I can relate to what the authors are saying – I have many posts of a similar nature. For example…

I have commented that Westgard’s total error analysis, while useful, does not live up to his claims of being able to determine the quality of a measurement procedure.

I commented that a troponin assay was declared “a sensitive and precise assay for the measurement of cTnI” even though, in the results section, the assay failed the ESC-ACC (European Society of Cardiology – American College of Cardiology) guidelines for imprecision.

I published observations that most clinical trials conducted to gain regulatory approval for an assay are biased.

I suggested that a recommendations section should be part of Clinical Chemistry articles. There is something about the action verbs in a recommendation that makes people think twice.

It would have been interesting if the authors had determined how many of the studies were funded by industry; on the other hand, you don’t have to be part of industry to state conclusions that are not supported by the results.