Overinterpretation of results – bad science

June 16, 2017

A recent article (subscription required) in Clinical Chemistry suggests that in many accuracy studies the results are overinterpreted. The authors go on to say that there is evidence of “spin” in the conclusions. All of this is a euphemistic way of saying the conclusions are not supported by the study that was conducted, which means the science is faulty.

As an aside, early in the article, the authors imply that overinterpretation can lead to false positives, which in turn can lead to overdiagnosis. I have commented that the word overdiagnosis makes no sense.

But otherwise, I can relate to what the authors are saying – I have many posts of a similar nature. For example…

I have commented that Westgard’s total error analysis, while useful, does not live up to his claims of being able to determine the quality of a measurement procedure.

I commented that a troponin assay was declared “a sensitive and precise assay for the measurement of cTnI” even though, in the results section, the assay failed the ESC-ACC (European Society of Cardiology – American College of Cardiology) guidelines for imprecision.

I published observations that most clinical trials conducted to gain regulatory approval for an assay are biased.

I suggested that a recommendation section should be part of Clinical Chemistry articles. There is something about the action verbs in a recommendation that makes people think twice.

It would have been interesting if the authors had determined how many of the studies were funded by industry, but on the other hand, you don’t have to be part of industry to state conclusions that are not supported by the results.

 


Why do performance goals change – has human physiology changed?

May 3, 2016


[Photo is Cape Cod Canal] Ok, the title was a rhetorical question. Some examples of the changes:

Blood lead lowest allowable limit:

1960s: 60 ug/dL
1978: 30 ug/dL
1985: 25 ug/dL
1991: 10 ug/dL
2012: 5 ug/dL

 

Glucose meters:

2003: ISO 15197 standard is ±20% above 75 mg/dL
2013: ISO 15197 standard is ±15% above 100 mg/dL
2014: proposed FDA standard is ±10% above 70 mg/dL
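
To make the criteria concrete, here is a minimal sketch, with made-up numbers, of checking a single meter reading against a “within X% above a threshold” rule. The function name and example values are illustrative only; the actual standards also specify absolute-error limits below these thresholds and a required pass rate across many samples, which are omitted here.

```python
def within_criterion(reference_mgdl, measured_mgdl, pct_limit, threshold_mgdl):
    """Return True if the meter reading is within pct_limit percent of the
    reference value, for reference values above the stated threshold."""
    if reference_mgdl <= threshold_mgdl:
        raise ValueError("the below-threshold (absolute error) rule is not modeled here")
    return abs(measured_mgdl - reference_mgdl) <= (pct_limit / 100.0) * reference_mgdl

# Example: a meter reads 180 mg/dL when the laboratory reference is 160 mg/dL
# (a 12.5% error).
print(within_criterion(160, 180, 20, 75))   # 2003 ISO 15197 criterion: True
print(within_criterion(160, 180, 15, 100))  # 2013 ISO 15197 criterion: True
print(within_criterion(160, 180, 10, 70))   # 2014 proposed FDA criterion: False
```

The same 12.5% error passes the 2003 and 2013 criteria but fails the proposed 2014 one, which is the point of the list above: the goal has tightened while human physiology has not changed.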

The players:

Industry – Regulatory affairs professionals participate in standards committees and support each other through their trade organization, AdvaMed. The default position of industry is no standards; when standards are inevitable, their position is to make the standard as unburdensome to industry as possible.

Lab – Clinical chemists and pathologists are knowledgeable about assay performance. ALERT: pathologists are not clinicians. Also, lab people are often beholden to industry, since clinical trials are paid for by industry and conducted in hospitals by clinical chemists or pathologists.

Clinicians – Sometimes, clinicians are part of standards committees, but less often than one might think.

Regulators – People from FDA, CDC, and other organizations have to decide whether to approve or reject assays and are often part of standards groups.

Patients – Patients have a voice sometimes – diabetes is an example.

Medical Knowledge – As the title implies, changes in medical knowledge probably play little role in changing performance goals. For example, the harm of lead exposure is not a recent discovery.

Technology – Improved assay performance due to technical advances probably does play a role in standards: all of a sudden the performance standard is tighter and, coincidentally, assay performance has improved.

Cost – Healthcare is rationed in most countries so cost is always an issue, but it is rarely discussed.

Note that for both of these assays, the earliest standard is at least twice as lenient as the current one (lead: 60 vs. 5 ug/dL; glucose meters: ±20% vs. ±10%).


Biases in clinical trials performed for regulatory approval – update

June 12, 2015


This article is now online (subscription required).


Biases in clinical trials performed for regulatory approval

May 31, 2015


An article with the title of this post has been accepted for publication in the journal Accreditation and Quality Assurance. The article describes common biases and how they might be avoided.


Ultrasensitive troponin and lab reports

February 22, 2014


I attended an NEAACC meeting which featured the pathologist Petr Jarolim speaking about troponin. Since I haven’t followed troponin for a few years, I was humbled to learn how fast things have changed. For example, for ultrasensitive troponin assays:

  • 100% of healthy subjects have a measurable troponin
  • The higher the troponin value, the more likely a cardiac event, even for values below the cutoff

This led to a question from the audience: should healthy people get a baseline troponin value? The speaker is a pathologist, not a clinician, but he thought this was a good idea and has obtained his own baseline value.

This raises an issue about lab reports, which typically do not list the manufacturer of the assay. For an assay such as troponin, if one got serial results over the years without knowing whether they all came from one manufacturer’s assay, the results might be hard to interpret. Maybe healthy people will get serial troponin values, maybe not, but the same lab report problem exists for all assays. The manufacturer/test method should be listed on the lab report.
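
As a rough illustration of the recommendation (field names and values are made up, not from any real laboratory system), a result record that carries the test method/manufacturer allows serial results to be grouped by method before anyone looks for a trend:

```python
from collections import defaultdict

# Hypothetical serial troponin results; the "method" field is the addition
# being argued for above.
results = [
    {"date": "2012-05-01", "analyte": "cTnI", "value": 12, "method": "Assay A"},
    {"date": "2014-06-15", "analyte": "cTnI", "value": 14, "method": "Assay A"},
    {"date": "2016-07-20", "analyte": "cTnI", "value": 25, "method": "Assay B"},
]

# Group by method before trending; a jump that coincides with a method change
# may reflect the assay rather than the patient.
by_method = defaultdict(list)
for r in results:
    by_method[r["method"]].append((r["date"], r["value"]))

for method, series in by_method.items():
    print(method, series)
```

Without the method field, the apparent rise from 14 to 25 could not be distinguished from a change of assay.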


Blood Lead – what is the rationale for the allowable level?

January 22, 2014


I’ve been consulting for a while for a company that makes blood lead assays. It used to be that the lowest allowable level of lead was 10 ug/dL. Below this level, no action was needed, whereas above this level, a repeat assay was prescribed to determine if the source of contamination was still present. The lead level that sparks chelation treatment is 45 ug/dL.

The cut-off of 10 makes one wonder. If a person (usually a child) has a level of 9.9 and another child has an undetectable lead level, do these two kids have the same risk for lead poisoning? (Note: a lead assay measures lead exposure, not lead poisoning.)

But now the CDC has changed the allowable level to 5 ug/dL. This raises some strange possibilities: the parents of a child who previously had a lead level of 6 may not even have been notified of the result, but had the child been tested after the change, they would be.

What has changed? One thing that has not changed is the biological role for lead in humans. There is none! And since higher levels of lead cause severe problems, isn’t it likely that any level of lead is undesirable?


Quantitative results should be reported below the detection limit

January 27, 2013


To recall:

LoB = limit of blank
LoD = limit of detection
LoQ = limit of quantification

Although there are various ways to calculate these quantities, LoB can be estimated by running some samples without analyte and adding 2 SD to the average. LoD is estimated by running some low-concentration samples and adding 2 SD of those results to the LoB. LoQ is estimated by running some low samples and taking the LoQ as the level where the data meet a pre-specified CV. See here for some pictures.
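
Here is a minimal sketch of those estimates in code, using made-up data and the “2 SD” rules described above (CLSI EP17 uses somewhat different multipliers and more elaborate study designs, and the LoQ normally comes from several low-level pools rather than one):

```python
import statistics

blank_results = [0.02, 0.05, 0.00, 0.04, 0.03, 0.01]  # samples without analyte
low_results   = [0.30, 0.38, 0.25, 0.41, 0.33, 0.29]  # low-concentration samples

# LoB: mean of the blank results plus 2 SD of the blank results
lob = statistics.mean(blank_results) + 2 * statistics.stdev(blank_results)

# LoD: LoB plus 2 SD of the low-concentration results
lod = lob + 2 * statistics.stdev(low_results)

# LoQ: the level at which the observed CV meets a pre-specified target
# (20% here); this sketch just checks a single low pool against the target.
cv = statistics.stdev(low_results) / statistics.mean(low_results)
loq = statistics.mean(low_results) if cv <= 0.20 else None

print(f"LoB = {lob:.3f}, LoD = {lod:.3f}, low-pool CV = {cv:.1%}, LoQ = {loq}")
```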

I’m not sure how most laboratories handle reporting values below the detection limit (LoD), but CLSI EP17-A2 suggests that for values between the LoB and LoQ, the value should be reported as “analyte detected, result < LoQ.”

Some of my blog entries (such as this one) come from my consulting experience, which is of course confidential. So here is a made-up example of a tumor marker with a detection limit of 1.0 and some patients who are being serially monitored. As one might expect, no detectable tumor marker is the ideal result.

Patient A: 0.3, 0.6, 0.2, 0.6, 0.3
Patient B: 0.2, 0.5, 0.4, 0.6, 0.8

These quantitative results can be compared with the reports, which for every one of these values would read “analyte detected, result < LoQ.”

Clearly, something is happening with Patient B, and this trend information is lost by the reporting rules.
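
A small sketch makes the point: applying the “analyte detected, result < LoQ” rule to the made-up values above returns the identical string for every result, so Patient B’s upward trend disappears. The LoQ of 1.0 is the detection limit from the example; the LoB of 0.1 is an assumed value added only for illustration.

```python
LOB, LOQ = 0.1, 1.0  # LoB is assumed; LoQ taken as the example's detection limit

def report(value):
    """Apply the reporting rule described above to a single quantitative result."""
    if value < LOB:
        return "analyte not detected"
    if value < LOQ:
        return "analyte detected, result < LoQ"
    return f"{value:.1f}"

patients = {
    "A": [0.3, 0.6, 0.2, 0.6, 0.3],
    "B": [0.2, 0.5, 0.4, 0.6, 0.8],
}

for name, values in patients.items():
    reported = [report(v) for v in values]
    # Every value collapses to the same report string, so B's rise is invisible.
    print(name, set(reported), "quantitative:", values)
```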