The average and the individual – PSA screening and lab statistics

March 22, 2009

PSA has been in the news lately: two studies about the benefits (or lack thereof) of PSA screening have just been completed (recent articles in the NEJM). These studies report what happens on average – how many lives are saved (in one study, 7 per 10,000). Economic implications come with these results, although neither study reached definitive conclusions.

The problem is that an individual cares much more about what happens to him (or her, for cancers other than prostate) than about what happens on average. Being told that, on average, some bad outcome is rare is not very helpful when it has happened to you.

There are similar currents in lab statistics. GUM (the Guide to the Expression of Uncertainty in Measurement) tells you where 95% of results are located. But if you have a result with a large error, GUM is no help to you – large errors are not considered by GUM. Similar arguments apply to Six Sigma.
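To see why a 95% interval says nothing about the rare result, consider a toy simulation (all numbers are invented for illustration): most results are well behaved, but a small fraction carry large errors, say from an interference. The 95% limits look reassuring while the worst-case error is an order of magnitude bigger.

```python
import random

random.seed(1)

# Hypothetical mixture: 99% typical results, 1% rare large errors.
errors = [random.gauss(0, 1) for _ in range(9900)]    # typical results
errors += [random.gauss(0, 20) for _ in range(100)]   # rare large errors

errors.sort()
n = len(errors)
lo, hi = errors[int(0.025 * n)], errors[int(0.975 * n)]  # 95% limits
worst = max(abs(errors[0]), abs(errors[-1]))             # largest error seen

print(f"95% of results fall between {lo:.1f} and {hi:.1f}")
print(f"worst-case error: {worst:.1f}")
```

The 95% limits stay near ±2 because the rare large errors barely move the percentiles – which is exactly the point: the patient who gets the worst-case result is invisible in the interval.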

Moreover, one could perhaps say the same thing about quality control – and this is in the works with equivalent quality control. The thinking is: why run all that quality control when results are rarely out of control?

What happens on average is important but so are the large deviations that are rare. All results are important – the average and each individual result.

Comparative Effectiveness Research may be coming to a lab near you

March 17, 2009

Comparative Effectiveness Research (CER), to the tune of 1.1 billion dollars, is part of the economic stimulus package – see here. Although much of the money will be used to compare medical treatments (drugs and procedures), medical devices are also included in CER. What this means is not yet clear, but one could speculate that questions such as the following may be asked:

· Is pharmacogenomic testing for Coumadin therapy effective?

· Is percent free PSA useful?

· Is point-of-care troponin as effective as a lab troponin?

Implied in these questions are:

What is the dollar cost of the procedure? What information does it provide beyond other procedures (an increase in sensitivity and specificity)? How much does this information add – either to a more accurate medical decision or to quality of life?
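One way such a comparison might be framed is cost per additional case detected. The sketch below is purely hypothetical – the test names, sensitivities, specificities, prevalence, and costs are all invented – but it shows the shape of the calculation:

```python
# Hypothetical CER-style comparison: does a new test's gain in sensitivity
# justify its added cost? Every figure below is invented for illustration.

def evaluate(name, sens, spec, cost, prevalence=0.05, population=10_000):
    diseased = prevalence * population
    healthy = population - diseased
    return {
        "name": name,
        "TP": sens * diseased,            # cases correctly detected
        "FP": (1 - spec) * healthy,       # false alarms
        "total_cost": cost * population,  # cost of testing everyone
    }

old = evaluate("lab test", sens=0.80, spec=0.90, cost=10.0)
new = evaluate("new test", sens=0.90, spec=0.92, cost=25.0)

extra_tp = new["TP"] - old["TP"]                  # additional cases detected
extra_cost = new["total_cost"] - old["total_cost"]
print(f"cost per additional true positive: ${extra_cost / extra_tp:,.0f}")
```

A real CER analysis would of course go further – downstream treatment costs, harms from false positives, quality-of-life adjustments – but the incremental framing (extra benefit per extra dollar) is the common thread.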

If all of this begins to happen, then the biggest requirement is to ensure that unbiased data analysis methods are used. This means that:

· Clear, quantitative goals are required

· Study designs are chosen that are practical and will address the questions

· Standards for data collection and analysis are used

· Results are written such that recommendations and conclusions are supported by the data

Statisticians can play a key role in providing clarity and quality – they need to be included as full members of this effort (part of the design team) and not relegated to “crunching the numbers.”

Beware of quality indicators

March 1, 2009

There is a new standard coming out at CLSI: GP35, “Development and Use of Quality Indicators for Process Improvement and Monitoring of Laboratory Quality; Proposed Guideline.”

Unfortunately, I can’t recommend it – here’s why.

This standard recommends that various aspects of the clinical laboratory be monitored for quality with so-called quality indicators – perhaps 20 or even 50 of them. The first question is: how will goals be determined for each quality indicator? This is no small task. Previous attempts at CLSI to set goals for accuracy failed and the projects were cancelled. Here, there are multiple processes that require goals.

Another problem is that the logic used to select items to monitor is not optimal. FMEA should be used, but it is only mentioned in passing.

Severity is not mentioned in the document. One is supposed to track all of the indicators without any notion of which are the most important. Pareto charts are mentioned, but they are based on frequency alone, not severity.

There is a solution. Use FMEA or FRACAS and reduce the numerous indicators to one or perhaps a few – the error rate of the clinical laboratory. This would be a severity adjusted rate that could be followed using the principles of reliability growth management (1).
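As a toy illustration of a severity-adjusted rate (the indicator names, counts, and severity weights below are all hypothetical), each error type is weighted by how much harm it can cause before the counts are combined and ranked:

```python
# Sketch of collapsing many quality indicators into one severity-adjusted
# error rate. All names, counts, and severity weights are invented.

indicators = [
    # (name, errors observed, opportunities, severity weight 1-10)
    ("wrong patient ID",      2, 10_000, 10),
    ("hemolyzed specimen",   40, 10_000,  3),
    ("delayed turnaround",  100, 10_000,  1),
]

weighted = sum(err * sev for _, err, _, sev in indicators)
opportunities = indicators[0][2]  # same denominator in this toy example
rate = weighted / opportunities   # one severity-adjusted error rate

# Rank by severity-weighted count: a Pareto by impact, not raw frequency.
ranked = sorted(indicators, key=lambda r: r[1] * r[3], reverse=True)

print(f"severity-adjusted error rate: {rate:.4f}")
print("top contributor:", ranked[0][0])
```

Note how the ranking differs from a frequency-only Pareto: the most frequent indicator is not the top contributor once severity is weighed in, which is the point of tracking one severity-adjusted rate over time.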

References

  1. Krouwer JS. Assay Development and Evaluation: A Manufacturer’s Perspective. Washington DC: AACC Press, 2002.