PSA and Obamacare

May 29, 2012

Having had a personal experience with prostate cancer, I came across a blog entry that I can’t improve upon, so here it is.


LS MAD and LS MaxAD curves don’t have as much information as an error grid

May 25, 2012

I had seen a paper about LS MAD (locally-smoothed median absolute difference) curves before, but it referenced a paper I didn’t have. Now I have come across a paper that is very clear and explains everything (1). A locally-smoothed median absolute difference curve plots the median absolute difference between the candidate and reference methods against the reference value, with the median taken over a small window of reference values. Glucose is used as the example, and the window is 30 mg/dL wide (± 15).

The problem with this approach is simple: outliers won’t appear on the graph, because a single large error barely moves a median. So if truth is 30 mg/dL and the candidate method reports 300 mg/dL, this life-threatening result won’t show up. Hence, the LS MAD curve has lost information contained in the data.

But the paper accounts for this by including a second curve, the LS MaxAD (locally-smoothed maximum absolute difference) curve, in which the maximum absolute difference is plotted against the reference value over a much smaller window of 2 mg/dL (± 1). The window widths can of course be changed; the ones above are those used by these authors.
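To make the construction concrete, here is a minimal sketch of both curves in Python, assuming paired reference and candidate glucose results in mg/dL. The window widths follow the paper (± 15 mg/dL for LS MAD, ± 1 mg/dL for LS MaxAD); the function name ls_curve and the simulated data are my own, purely illustrative.

```python
import numpy as np

def ls_curve(reference, candidate, half_width, stat):
    """At each observed reference value, apply `stat` (median or max)
    to the absolute differences whose reference values fall within
    +/- half_width of it."""
    ref = np.asarray(reference, dtype=float)
    abs_diff = np.abs(np.asarray(candidate, dtype=float) - ref)
    grid = np.unique(ref)  # already sorted
    smoothed = np.array(
        [stat(abs_diff[np.abs(ref - x) <= half_width]) for x in grid]
    )
    return grid, smoothed

# Simulated paired results: ordinary noise plus one gross outlier
# (truth 30 mg/dL, reported 300 mg/dL).
rng = np.random.default_rng(0)
reference = rng.uniform(30.0, 400.0, 500)
candidate = reference + rng.normal(0.0, 10.0, 500)
reference[0], candidate[0] = 30.0, 300.0

x_mad, ls_mad = ls_curve(reference, candidate, 15.0, np.median)  # +/- 15 mg/dL
x_max, ls_maxad = ls_curve(reference, candidate, 1.0, np.max)    # +/- 1 mg/dL

# The outlier barely moves the median curve but dominates LS MaxAD near 30.
print(f"LS MAD near 30 mg/dL:   {ls_mad[x_mad <= 32].max():.1f}")
print(f"LS MaxAD near 30 mg/dL: {ls_maxad[x_max <= 32].max():.1f}")
```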

Now, if truth is 30 mg/dL and the candidate method reports 300 mg/dL, this life-threatening result will show up. But there are still problems. If truth is 200 mg/dL, the candidate method might report 50 or 350 mg/dL: both are 150 mg/dL errors, but in different directions, and the LS MaxAD curve treats them as the same. A Parkes error grid, however, would place the −150 mg/dL error in zone C and the +150 mg/dL error in zone B. Zone C is a more serious error than zone B, so the LS MaxAD curve has lost information. And the error grid is one graph, whereas LS MAD and LS MaxAD are two.
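A toy demonstration of this point; the zone labels are simply taken from the example above, not computed from the published Parkes grid boundaries.

```python
# Toy illustration, not the published Parkes grid: per the example in
# the text, at a reference of 200 mg/dL a result of 50 falls in zone C
# and a result of 350 falls in zone B.
reference = 200.0
for candidate in (50.0, 350.0):
    signed = candidate - reference
    zone = "C" if signed < 0 else "B"  # per the example above
    print(f"candidate {candidate:5.0f}: |error| = {abs(signed):.0f} mg/dL, "
          f"signed error = {signed:+.0f} mg/dL, zone = {zone}")
# Both results have the same |error| of 150 mg/dL, so LS MaxAD treats
# them identically, yet they land in zones of different severity.
```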

Reference

  1. Kost GJ, Tran NK, Singh H. Mapping point-of-care performance using locally-smoothed median and maximum absolute difference curves. Clin Chem Lab Med 2011;49(10):1637–1646.

The basis of a spec

May 16, 2012

I proposed and was the chairholder of EP27, the CLSI standard about error grids. A while ago, during the document development committee discussions, I suggested that an error limit specification contain two items: the level of error (e.g., ± 10%) and the percentage of results that must meet that limit (e.g., 95%). A committee member strongly objected and said no, the spec should be the level of error only. So I asked: would it be acceptable, for the ± 10% spec, if only 20% of results met it? He said, of course not. So I asked, what about 60%? He again said no, and commented, “I see where you’re going. Yes, the percentage of results is used to determine acceptability, but it is not part of the spec.” I replied that a spec is a set of criteria and an evaluation is conducted to determine whether those criteria have been met, but this line of reasoning didn’t convince him. There might have been more to this story, but then I was unexpectedly and rather unceremoniously thrown off the document development committee.
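As a sketch of what such a two-part spec looks like in practice, here is a hypothetical acceptance check in Python. The function name, the simulated data, and the 4% CV candidate method are all illustrative, not from any CLSI document.

```python
import numpy as np

def meets_spec(reference, candidate, limit_pct=10.0, required_fraction=0.95):
    """True if at least `required_fraction` of results fall within
    +/- limit_pct of the reference value; both criteria are the spec."""
    ref = np.asarray(reference, dtype=float)
    pct_error = 100.0 * np.abs(np.asarray(candidate, dtype=float) - ref) / ref
    fraction_within = np.mean(pct_error <= limit_pct)
    return fraction_within >= required_fraction, fraction_within

# Simulated candidate method with roughly 4% CV.
rng = np.random.default_rng(1)
ref = rng.uniform(50.0, 300.0, 1000)
cand = ref * (1 + rng.normal(0.0, 0.04, 1000))

ok, fraction = meets_spec(ref, cand)
print(f"{fraction:.1%} of results within +/- 10%: "
      f"spec {'met' if ok else 'not met'}")
```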


Peer Review

May 8, 2012

Having read about peer review recently, I offer two types of errors it can make.

False negatives – These are defined as articles that deserve to be published but are not. A good example comes from a Wall Street Journal article about the drug Vioxx: a pharmacist analyzed data, correctly, showing a problem with the drug, but her submission to the New England Journal of Medicine was rejected.

IMHO, false negatives result from the author not being “a member of the club.”

False positives – These are defined as articles that are published but shouldn’t be. The work of Ioannidis is a good place to start, such as “Why Most Published Research Findings Are False.”

IMHO, false positives result from the author being “a member of the club.”

The good part of peer review comes from constructive comments.