February 22, 2014
I attended an NEAACC meeting which featured the pathologist Petr Jarolim speaking about troponin. Since I haven’t followed troponin for a few years, I was humbled to learn how fast things have changed. For example, for ultrasensitive troponin assays:
- 100% of healthy subjects have a measurable troponin
- The higher the troponin value, the more likely a cardiac event, even for values below the cutoff
This led to a question from the audience: should healthy people get a baseline troponin value? The speaker is a pathologist, not a clinician, but he thought this was a good idea, and he has his own baseline value.
This raises an issue about lab reports, which typically do not list the manufacturer of the assay. For an assay such as troponin, if one received serial results over the years without knowing whether they all came from the same manufacturer, the results might be hard to interpret. Maybe healthy people will get serial troponin values, maybe not, but the same lab report problem exists for all assays. The manufacturer/test method should be listed on the lab report.
February 15, 2014
I was reading a paper that started out by saying that measurement uncertainty estimates are used by clinicians to help them interpret results. This type of general introductory statement is common in papers that then move on to the meat of the paper, which in this case had nothing to do with clinicians.
In any case, I have never seen a lab report that provides measurement uncertainty estimates, and I believe that clinicians pretty much take lab results at face value and would not use measurement uncertainty estimates were they provided. Does anyone know of clinicians who use measurement uncertainty estimates?
February 10, 2014
In my ongoing battle with authors who advocate an incomplete total error model for glucose meters (used in diabetes), my latest contribution is here. The link provides the abstract but the full paper seems also to be available without a subscription, at least for now.
January 22, 2014
I’ve been consulting for a while for a company that makes blood lead assays. It used to be that the lowest allowable level of lead was 10 ug/dL. Below this level, no action was needed, whereas above this level a repeat assay was prescribed to determine whether the source of contamination was still present. The lead level that triggers chelation treatment is 45 ug/dL.
The cut-off of 10 makes one wonder. If a person (usually a child) has a level of 9.9 and another child has an undetectable lead level, do these two kids have the same risk for lead poisoning? (Note that a lead assay measures lead exposure, not lead poisoning.)
But now the CDC has changed the allowable level to 5 ug/dL. This raises some strange possibilities. The parents of a child who previously had a lead level of 6 may not even have been notified of the result, but had the child just been tested, they would be.
What has changed? One thing that has not changed is the biological role for lead in humans: there is none! And since higher levels of lead cause severe problems, isn’t it likely that any level of lead is undesirable?
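The effect of moving the cut-off can be sketched in code. The thresholds (5, 10, and 45 ug/dL) come from this entry, but the function and the action labels are illustrative only, not official CDC terminology:

```python
# Sketch of how changing the reference value alters case handling.
# Thresholds are from the entry; the action labels are illustrative.

def classify(lead_ug_dl: float, reference_value: float) -> str:
    """Classify a blood lead result (ug/dL) against a reference value."""
    if lead_ug_dl >= 45:
        return "chelation treatment considered"
    if lead_ug_dl >= reference_value:
        return "repeat testing / follow-up"
    return "no action"

# A child with 6 ug/dL: no action under the old cut-off of 10,
# but follow-up under the new cut-off of 5.
print(classify(6, reference_value=10))  # -> no action
print(classify(6, reference_value=5))   # -> repeat testing / follow-up
```

The point of the sketch is that nothing about the child changed; only the reference value did.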
January 18, 2014
I was made aware of a new FDA glucose guidance by Sten Westgard. Reading the guidance revealed that I’ve been vindicated. Here’s why.
I’ve been critical of ISO and CLSI glucose standards and have critiqued them in the literature (1-5). I’ve also advocated for better performance standards on CLSI subcommittees that I’ve led, such as EP21 (Total Error) and EP27 (Error Grids). But I was summarily kicked off those subcommittees.
Previously, FDA recommended that companies adhere to ISO glucose guidelines, but a sentence in the new FDA guidance is rather striking: “FDA believes that the criteria set forth in the ISO 15197 standard do not adequately protect patients using BGMS devices in professional settings, and does not recommend using these criteria for BGMS devices.”
As one reads what the FDA does recommend, some things that I have published appear in one form or another, such as:
- The FDA guidance requires error limits on 100% of the data. (ISO and CLSI allow a certain percentage of the data to have unspecified errors).
- User error should not be eliminated – For example, FDA says: “FDA recognizes that most study evaluations performed for pre-market submissions occur in idealized conditions, thereby potentially overestimating the total accuracy of the BGMS device, even when performed in the hands of the intended user. Nonetheless, it is important that you design your study to most accurately evaluate how the device will perform in the intended use population.”
- Related to the previous point, FDA says: “Testing should be performed by the intended POC (point of care) user (e.g., nurses, nurse assistants, etc.) to accurately reflect device performance in POC settings; at least 9 operators should participate in each study (capillary, venous, and arterial).” Readers may remember that my recommendation that user error not be excluded from EP21 was vigorously objected to by some subcommittee members.
1. Krouwer JS. Wrong thinking about glucose standards. Clin Chem, 2010;56:874-875.
2. Krouwer JS and Cembrowski GS. A review of standards and statistics used to describe blood glucose monitor performance. Journal of Diabetes Science and Technology, 2010;4:75-83.
3. Krouwer JS and Cembrowski GS. Towards more complete specifications for acceptable analytical performance – a plea for error grid analysis. Clinical Chemistry and Laboratory Medicine, 2011;49:1127-1130.
4. Krouwer JS. Why specifications for allowable glucose meter errors should include 100% of the data. Clinical Chemistry and Laboratory Medicine, 2013;51:1543-1544.
5. Krouwer JS. The new glucose standard, PCCT12-A3 misses the mark. Journal of Diabetes Science and Technology, 2013;7:1400-1402.
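The difference between an ISO-style criterion (a percentage of results within limits) and a 100%-of-the-data criterion can be illustrated with a small sketch. The ±15 mg/dL limit, the made-up error data, and the function are mine, not from the FDA guidance or ISO 15197:

```python
# Illustrative comparison of an ISO-style "95% within limits" rule vs.
# a 100%-of-data rule. The limit and the error data are made up.

def percent_within(errors, limit):
    """Percent of meter errors (meter minus reference) within +/- limit."""
    within = sum(1 for e in errors if abs(e) <= limit)
    return 100.0 * within / len(errors)

# 100 results: 96 small errors and 4 large outliers.
errors = [5] * 96 + [80, -90, 100, 120]

pct = percent_within(errors, limit=15)
print(pct)          # 96.0
print(pct >= 95)    # True  - passes a "95% within limits" criterion
print(pct == 100)   # False - fails a 100%-of-the-data criterion
```

The four large outliers are exactly the kind of unspecified errors that a percentage-based criterion never examines.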
January 17, 2014
One of the biggest sources of bias for assays in hospitals is reagent lot-to-lot bias. Companies have specifications for allowable error during reagent lot testing.
Suppose reagent lot A and reagent lot B each meet the allowable error spec, but their biases lie at opposite ends of the specification. When a customer switches from reagent A to reagent B, the observed error is the difference between the two biases and can be double the allowable error.
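The doubling effect is simple arithmetic; the ±5% spec and the two biases below are hypothetical numbers chosen to show the worst case:

```python
# Sketch: two reagent lots each meet a bias spec, yet the lot-to-lot
# shift a customer sees is double the spec. Numbers are hypothetical.

allowable_bias = 5.0   # e.g., +/- 5% allowable bias per lot

bias_a = +5.0          # reagent lot A: at the upper spec limit
bias_b = -5.0          # reagent lot B: at the lower spec limit

# Each lot passes its release test on its own...
print(abs(bias_a) <= allowable_bias)  # True
print(abs(bias_b) <= allowable_bias)  # True

# ...but a customer switching from lot A to lot B observes the difference:
observed_shift = abs(bias_b - bias_a)
print(observed_shift)  # 10.0 - double the allowable bias
```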
January 16, 2014
A while back, I heard of a company that withdrew its application to get its assay approved by the FDA. Not knowing the details, I can only speculate – my hunch is that the company was overwhelmed by the process, not that its assay was a lost cause.
In fact, I’ve worked for several companies when things looked pretty bleak. That is, after years of development and a careful submission, the letter that comes back from the FDA makes things look hopeless. There are a huge number of findings – some admittedly minor while others seem insurmountable. But all is not lost and it would be a mistake to give up. One needs to understand the situation and then plan what to do. Keep in mind:
- The fact is that virtually all assays should be approved. I have written about this before. Basically, the harm caused by the unavailability of an assay result (due to the assay being rejected) is usually far greater than harm due to a rare, highly incorrect result.
- One reason findings occur is because that is what reviewers do! For example, being a reviewer for journals, I always have findings, even for good papers. Hence, one should plan for a bunch of findings – this is the current environment.
The plan of action should include:
- Ensuring that there are no misunderstandings. Some findings occur because the reviewer has not understood something, for a variety of reasons. The best way to remedy this is with a conference call.
- Managing how company personnel react to the findings. Typically, the company scientists are experts in the topic compared to the FDA reviewer. If a contested point arises where the company is clearly correct, one needs to be diplomatic – no one likes to be called wrong. Moreover, not all findings are as simple as one view being right and the other wrong. I remember one case where the company was told to redo all of its regressions using a different statistical method. They proposed to argue the point rather than simply redo the analysis, which in all likelihood would not have changed any conclusions.
- Reinforce the value of the assay – i.e., the problems with the unavailability of the information should the assay be rejected. This of course must be done carefully. The problem is that the FDA tends to overweight the risk of harm due to wrong results compared to the risk of harm due to the unavailability of results.
- Stand your ground when work is suggested (or required) that is both unnecessary and would unduly delay the assay.
One thing some companies do is to submit a detailed pre-IDE with a lot of questions. I don’t favor a detailed pre-IDE because it allows for the possibility of arguing over details (e.g., findings) before any actual data has been collected.