Product performance acceptance limits

February 17, 2013


During my consulting career – both while working full time within a company and as an external consultant – one of the most common issues has been a company’s failure to provide useful acceptance limits for a product’s performance parameters.

These included:

  • No limits whatsoever
  • Clearly unachievable limits
  • Limits that could not be evaluated (often non-quantifiable limits)

Limits are important because matching observed performance to limits (assuming the limits are correct) prevents releasing a product too late or too early:

  • Late – Delaying product release
  • Attempted early – Having the product rejected or delayed by the FDA
  • Early – Having the product rejected by the marketplace

Any of the above is bad for the financial health of a company.

So why is it so difficult for this fundamental market requirement to be established? I’m not sure.


More on the detection limit

February 4, 2013


I wasn’t happy with my last blog entry (2/3/2013), so, through the power of the Internet, I have replaced it with this one.

Assume that for an assay, negative values are reported and the blank has an average of 0 and an sd of 0.3. This means the LoB is 0.5 (0 + 0.3×1.645). Several low samples also have an sd of 0.3, which means the LoD is 1.0 (0.5 + 0.3×1.645). This assumes that one replicate will be assayed routinely. Note that if more replicates were routinely run, the LoD could be made as low as one wishes, because the sd of the mean of n replicates shrinks by a factor of √n.
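As a minimal sketch, here is that arithmetic in Python, using the hypothetical blank and low-sample values from this paragraph (the 0.5 and 1.0 in the text are rounded):

```python
from math import sqrt

blank_mean, blank_sd = 0.0, 0.3   # hypothetical blank replicates
low_sd = 0.3                      # sd of the low samples
z = 1.645                         # one-sided 95th percentile of the normal

lob = blank_mean + z * blank_sd   # limit of blank, ~0.49 (rounded to 0.5)
lod = lob + z * low_sd            # limit of detection, ~0.99 (rounded to 1.0)
print(f"LoB = {lob:.2f}, LoD = {lod:.2f}")

# With n routine replicates, the sd of the reported mean shrinks by sqrt(n),
# pulling both limits toward zero:
for n in (1, 4, 16):
    lob_n = blank_mean + z * blank_sd / sqrt(n)
    print(f"n = {n:2d}: LoD = {lob_n + z * low_sd / sqrt(n):.2f}")
```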

What does a result at 0.5 mean? If one knew that it actually was 0.5 (because it was replicated), it would be a detected sample. But because it is a single determination, roughly 90% of the time the observed value would fall between 0 and 1.0 (0.5 ± 1.645×0.3). This means – since the LoB is the criterion for detection – 50% of the time the result would be called detected and 50% of the time the result would be called not detected. Thus, the concept of “reliable” detection. For samples at 1.0, there is a 95% chance that the observed value will be greater than the LoB, where again the LoB is the basis for calling the sample detected.
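The same point as a short calculation – the chance that a single measurement of a sample at a given true concentration exceeds the (rounded) LoB of 0.5 and is therefore called detected:

```python
from statistics import NormalDist

sd, lob = 0.3, 0.5   # rounded LoB from the example above
for true_value in (0.5, 1.0):
    p_detected = 1 - NormalDist(mu=true_value, sigma=sd).cdf(lob)
    print(f"true value {true_value}: called detected {p_detected:.0%} of the time")
# true value 0.5 -> 50%; true value 1.0 (the LoD) -> ~95%
```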

But it is also true that for an assay with an observed value at 0.5, there is a 95% chance that this sample has analyte, as there are only two states: analyte present or analyte absent. An observed result at 0 would have a 50% chance of analyte present, and an observed result at -0.5 would still have a 5% chance of analyte present. The confusion is with the term “reliable detection.” Just because observations are not reliably detected until 1.0 says nothing about the probability that analyte is present.
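These probabilities follow if one treats the single observation as normally distributed around the true value with the same sd, and assumes no other prior information about the sample – an assumption, but one way to make the reasoning above concrete:

```python
from statistics import NormalDist

sd = 0.3
for observed in (0.5, 0.0, -0.5):
    # chance the true value is above zero, i.e. analyte is present
    p_analyte = 1 - NormalDist(mu=observed, sigma=sd).cdf(0.0)
    print(f"observed {observed:+.1f}: P(analyte present) ~ {p_analyte:.0%}")
# 0.5 -> ~95%, 0.0 -> 50%, -0.5 -> ~5%, matching the text
```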

But what is the difference between results at 0.9 vs. 1.1? Or, more specifically, is it right to report 1.1 as that number but to report 0.9 without a number, as not detected? The sample at 0.9 fails the criterion for “reliable” detection, but it would be extremely rare for either of these samples not to have analyte.

I say both should be reported as numbers, with the addition that the 0.9 result could be annotated as below the detection limit. Clearly, these two values are similar with respect to detection.

And what about numbers lower than the LoB? I suggest they should be given numerical values as well, with the annotation that they are less than the LoB.

So for values below the LoD or LoB, reporting the values as numbers is important as it provides more information than “not detected.”
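One hypothetical reporting rule along these lines, using the example LoB and LoD from above (the report function and its annotations are illustrative, not drawn from any standard):

```python
def report(value, lob=0.5, lod=1.0):
    """Always report the number, annotated when below the LoD or LoB."""
    if value < lob:
        return f"{value:.1f} (less than the LoB)"
    if value < lod:
        return f"{value:.1f} (below the detection limit)"
    return f"{value:.1f}"

for x in (1.1, 0.9, 0.3, -0.5):
    print(report(x))
```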
