There is a thought-provoking blog post here, which references this post. The ur-reference is here (a Supreme Court decision). These posts concern a lawsuit involving the drug company Matrixx, whose drug caused problems. In particular, …
“Matrixx contended that the bar was statistical significance, and that anything short of that was not a “material event” that had to be addressed.”
I have commented before that the use of point estimates with confidence intervals provides more information than hypothesis testing. In either case, one must beware that assumptions (distributions, random sampling) may not be met, or that biases may exist in the study, in which case the estimates will be wrong.
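To make the point concrete, here is a minimal sketch (with invented data, using a normal approximation; a t critical value would be more exact at these sample sizes). Two hypothetical studies both give a non-significant p-value, but their confidence intervals say very different things: one rules out any large effect, the other does not.

```python
import math
import statistics

def summarize(diffs):
    """Mean, 95% CI (normal approximation), and two-sided z-test p-value."""
    n = len(diffs)
    mean = statistics.mean(diffs)
    se = statistics.stdev(diffs) / math.sqrt(n)
    z = mean / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p under a normal model
    return mean, (mean - 1.96 * se, mean + 1.96 * se), p

# Invented measurement differences from two hypothetical studies:
precise = [0.1, -0.1, 0.2, 0.0, -0.2, 0.1, 0.0, -0.1, 0.1]  # tight CI near zero
noisy = [3.0, -2.0, 4.0, -1.0, 2.0, -3.0]                    # wide CI

for name, diffs in [("precise", precise), ("noisy", noisy)]:
    mean, (lo, hi), p = summarize(diffs)
    print(f"{name}: mean = {mean:.2f}, 95% CI = ({lo:.2f}, {hi:.2f}), p = {p:.2f}")
```

A hypothesis test reports "not significant" for both studies; only the intervals show that the first study excludes any effect larger than about 0.1 while the second is compatible with effects near 3.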
As a consultant to medical diagnostic companies, I frequently have to correct a statement like the following from a study (especially one based on hypothesis testing): “Compound A was found not to interfere with assay XYZ.” The conclusion should be stated as: “Interference from Compound A was not detected with assay XYZ.”
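The distinction is not pedantic. A sketch with invented numbers (the 3% threshold for clinically meaningful bias is assumed for illustration) shows how a study can fail to detect interference while remaining compatible with meaningful interference:

```python
import math
import statistics

# Hypothetical interference study (invented numbers): percent bias of an
# assay when a candidate interferent is spiked into samples.
bias = [4.0, -2.0, 3.5, 0.0, -1.0, 4.5]

n = len(bias)
mean = statistics.mean(bias)
se = statistics.stdev(bias) / math.sqrt(n)
lo, hi = mean - 1.96 * se, mean + 1.96 * se  # normal-approximation 95% CI

MEANINGFUL_BIAS = 3.0  # assumed clinically meaningful bias, in percent
print(f"mean bias = {mean:.1f}%, 95% CI = ({lo:.1f}%, {hi:.1f}%)")
# The CI covers 0%, so interference "was not detected" -- but it also
# covers biases beyond 3%, so "does not interfere" is not supported.
```

The interval contains zero, so a hypothesis test declares no significant interference; but it also contains biases beyond the assumed 3% threshold, so the data cannot justify the stronger claim.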
But there is another way to view things. For a diagnostic assay, the manufacturer performs a series of relatively short studies designed to estimate relevant performance parameters. If the results are good enough, the data are submitted to the FDA and the assay is approved. After the assay is released, millions of results are reported, and this provides a potential new data stream: namely, adverse events, or better yet, the adverse event rate (the number of adverse events divided by the number of assays run).
Manufacturers try to reproduce adverse events in-house under controlled conditions. This can be difficult because the exact conditions under which the event occurred are often unavailable to the in-house scientists, and there can be allegations that the assay was not used properly. But the point is that more attention should be paid to adverse event rates: they are the real data about assay quality.