**History** – Total error has probably been around for a long time, but the first mention I found is from Mandel (1). Writing about measurement error, he stated:

*error = x – R = (x – mu) + (mu – R), where x = a measurement and R = the reference value*

The term (x – mu) is the imprecision and (mu – R) is the inaccuracy. An implied assumption is that the errors are **IIDN**: independently and identically distributed, following a normal distribution with mean zero and variance sigma squared. For laboratory assays of blood, this is almost never true.
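Mandel's decomposition can be sketched numerically; the values below are hypothetical:

```python
# Mandel's decomposition of a single measurement error into an
# imprecision term and an inaccuracy (bias) term; all values hypothetical.
x = 103.0   # a measurement
mu = 101.0  # the long-run mean of the method
R = 100.0   # the reference value

imprecision_part = x - mu   # (x - mu): random component
inaccuracy_part = mu - R    # (mu - R): bias component
error = x - R               # total error of this measurement

# The two parts sum exactly to the error.
assert error == imprecision_part + inaccuracy_part
```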

**Westgard model** – The Westgard model of total error (2) is the same as Mandel's; namely,

*Total error TE = bias + 2 times imprecision*.

The problem with this model is that it neglects other error sources, perhaps the most important being interfering substances that affect individual samples. Note that it is not just rare, large interferences that this model misses. I described a case where small interferences inflate the total error (3).
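To make the critique concrete, here is a sketch with hypothetical numbers in which a single interfered sample has an error that escapes the Westgard estimate:

```python
# Westgard total error for a hypothetical assay, plus one sample with an
# interference that the model does not see (illustrative numbers only).
bias = 1.0          # average bias (e.g., mg/dL)
sd = 1.5            # imprecision as an SD (same units)
TE = abs(bias) + 2 * sd   # Westgard total error = 4.0

interference = 8.0  # error in one sample from an interfering substance
sample_error = bias + interference

# The interfered sample's error exceeds the modeled total error.
assert sample_error > TE
```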

**Lawton model** – The Lawton model (4) adds interfering substances affecting individual samples.

**Other factors** – I added to the Lawton model (5) by including other factors such as drift, sample carryover, and reagent carryover.

Here’s an example of a problem with the Westgard model. The model assumes that average bias accounts for systematic error and imprecision accounts for random error. Say an assay has linear drift within a 30-minute calibration cycle: it starts with a negative bias, has zero bias at 15 minutes, and ends with a positive bias. The Westgard model would estimate zero bias for the systematic error and attribute the rest to imprecision as random error. But this is not right: there is clearly a systematic bias (as a function of time), and the calculated imprecision (the SD of the observations) is not equal to the random error.
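The drift scenario can be simulated with assumed numbers (a linear drift and no other error at all):

```python
import statistics

# Simulated linear drift over a 30-minute calibration cycle (assumed
# slope; no other error): bias is -3 at t=0, 0 at t=15, +3 at t=30.
times = list(range(31))                  # minutes 0..30
drift = [0.2 * (t - 15) for t in times]  # bias as a function of time

mean_bias = statistics.mean(drift)  # ~0: the model sees no systematic error
sd = statistics.stdev(drift)        # pure drift is mislabeled as imprecision
```

Even though every error here is systematic, the Westgard estimate reports zero bias and a nonzero "imprecision."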

**The problem with Bland-Altman Limits of Agreement** – In this method, one multiplies the SD of the differences between the candidate method and reference (usually by 2) to form limits of agreement. This is an improvement, since interferences and other error sources are included in the SD of the differences. But the differences must be normally distributed, and outliers are allowed to be discarded. Having discarded outliers, one cannot claim total error.
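A minimal sketch of the limits-of-agreement calculation, using hypothetical differences:

```python
import statistics

# Bland-Altman limits of agreement on hypothetical candidate-minus-
# reference differences; the method assumes these are normally distributed.
diffs = [-1.2, 0.4, 0.9, -0.3, 1.5, -0.8, 0.2, 0.6, -1.1, 0.7]
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)
lower = mean_d - 2 * sd_d   # lower limit of agreement
upper = mean_d + 2 * sd_d   # upper limit of agreement
```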

**The problem with measurement uncertainty** – The GUM method (Guide to the Expression of Uncertainty in Measurement) is a bottom-up approach that adds all errors as sources of imprecision. I have critiqued this method (6): bias is not allowed in the model, which does not match what happens in the real world, and errors that cannot be modeled are not captured.
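A sketch of the GUM-style bottom-up combination, with hypothetical component uncertainties:

```python
import math

# GUM-style bottom-up combination: hypothetical standard uncertainties
# (e.g., from calibration, pipetting, temperature) combined in quadrature.
# Note there is no term for bias or for errors that cannot be modeled.
components = [0.8, 0.5, 0.3]  # standard uncertainties, same units
u_combined = math.sqrt(sum(u ** 2 for u in components))
U_expanded = 2 * u_combined   # expanded uncertainty, coverage factor k = 2
```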

**The problem with probability models** – Paradoxically, none of the above models can account for 100% of the results, which makes the word “total” in total error meaningless. These *probability* models will __never__ account for 100% of the results, because the 100% probability error limits stretch from minus infinity to plus infinity (7).
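To illustrate: under a normal model, the coverage of ±k SD limits is below 100% for every finite k:

```python
import math

# Under a normal model, the fraction of results inside +/- k SD is
# 2*Phi(k) - 1 = erf(k / sqrt(2)), which is below 100% for every finite k.
def coverage(k: float) -> float:
    return math.erf(k / math.sqrt(2))

for k in (2, 3, 6):
    print(k, coverage(k))  # always strictly less than 1.0
```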

**Errors that cannot be modeled** – An additional problem is that some errors can occur but really cannot be modeled, such as user errors, software errors, manufacturing mistakes, and so on (7). The Bland-Altman method does not suffer from this problem, since such errors, if they occur in the experiment, appear in the differences; all of the other methods above do.

**A method to account for all results** – The mountain plot (8) is simply a plot (or table) of the differences between the candidate method and reference. No data are discarded, so it is a nonparametric estimate of total error. A limitation is that error sources not present in the experiment may lead to an underestimate of total error.
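The calculation behind a mountain plot can be sketched as folded empirical percentiles of the differences (hypothetical data; note the outlier is kept):

```python
# Folded empirical percentiles behind a mountain plot, on hypothetical
# differences; the outlier (6.5) is kept, not discarded.
diffs = [-2.1, -1.2, -0.4, 0.1, 0.3, 0.5, 0.9, 1.4, 2.0, 6.5]
n = len(diffs)
ranked = sorted(diffs)
percentiles = [100 * (i + 1) / (n + 1) for i in range(n)]
folded = [min(p, 100 - p) for p in percentiles]  # fold at 50%: peak at median
for d, f in zip(ranked, folded):
    print(d, round(f, 1))  # plotting folded vs. ranked gives the "mountain"
```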

**Error Grid Analysis** – One overlays a scatterplot from a method comparison on an error grid and simply tallies the proportion of observations in each error grid zone. This analysis also accounts for all results.
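The tally can be sketched as follows; real grids (e.g., for glucose meters) define zones geometrically, so the relative-error rule below is only a hypothetical stand-in:

```python
# Tally of method-comparison points into error grid zones. The zone()
# rule here is a stub by relative error, for illustration only.
def zone(reference: float, candidate: float) -> str:
    rel = abs(candidate - reference) / reference
    if rel <= 0.15:
        return "A"   # clinically acceptable
    if rel <= 0.30:
        return "B"
    return "C+"      # potentially dangerous

# (reference, candidate) pairs, hypothetical data; every point is counted
pairs = [(100, 104), (150, 139), (80, 95), (200, 170), (120, 190)]
counts = {}
for ref, cand in pairs:
    z = zone(ref, cand)
    counts[z] = counts.get(z, 0) + 1
proportions = {z: c / len(pairs) for z, c in counts.items()}
```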

**The CLSI EP21 story** – The original CLSI total error standard used the Westgard model but required that outliers not be discarded; thus, if outliers were present that exceeded limits, the assay failed the total error requirement – 100% of the results had to meet goals. In the revision of EP21, the statements about outliers were dropped, and the standard simply became the Westgard model. The mountain plot, which was an alternative method in EP21, was also dropped in the revision.

Moreover, I argued that user error had to be included in the experimental setup. This too was rejected, as was the proposed change of the title from *total analytical error* to *total error*.

**References**

1. Mandel J. The Statistical Analysis of Experimental Data. New York: Dover; 1964. p. 105.
2. Westgard JO, Carey RN, Wold S. Criteria for judging precision and accuracy in method development and evaluation. Clin Chem. 1974;20:825-833.
3. Krouwer JS. The danger of using total error models to compare glucose meter performance. J Diabetes Sci Technol. 2014;8:419-421.
4. Lawton WH, Sylvester EA, Young-Ferraro BJ. Statistical comparison of multiple analytic procedures: application to clinical chemistry. Technometrics. 1979;21:397-409.
5. Krouwer JS. Setting performance goals and evaluating total analytical error for diagnostic assays. Clin Chem. 2002;48:919-927.
6. Krouwer JS. A critique of the GUM method of estimating and reporting uncertainty in diagnostic assays. Clin Chem. 2003;49:1818-1821.
7. Krouwer JS. The problem with total error models in establishing performance specifications and a simple remedy. Clin Chem Lab Med. 2016;54:1299-1301.
8. Krouwer JS, Monti KL. A simple graphical method to evaluate laboratory assays. Eur J Clin Chem Clin Biochem. 1995;33:525-527.