March 19, 2015
For no particular reason, I searched for Dr. Getzenberg in Google. For background on previous entries about him, search this blog for EPCA-2 (there is a search form at the top right). I found two rather different entries in Google.
One deals with the seventh retraction of articles written by Dr. Getzenberg.
The other talks about his awards and distinctions and describes him as a senior leader in oncology and urology.
March 16, 2015
There is a new article in Clinical Chemistry about a complicated (to me) analysis of quality targets for A1c, when it would seem that a simple error grid, prepared by surveying clinicians, would fit the bill.
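To show what I mean by an error grid, here is a minimal sketch of the idea: each result is graded by the clinical risk of the measurement error rather than by a single pass/fail cut. The zone boundaries below are hypothetical, purely for illustration, not from any published grid.

```python
# Minimal error-grid idea: grade the difference between a measured and a
# reference A1c value by clinical risk, instead of a single pass/fail cut.
# Zone boundaries here are hypothetical, not from any published grid.
def a1c_error_zone(reference, measured):
    diff = abs(measured - reference)  # % HbA1c units
    if diff <= 0.5:
        return "A"  # no effect on clinical action
    if diff <= 1.0:
        return "B"  # altered action, little or no harm
    return "C"      # altered action, potential for harm

print(a1c_error_zone(7.0, 7.3))  # -> A
print(a1c_error_zone(7.0, 8.4))  # -> C
```

In a real grid the zone limits would come from surveying clinicians about which errors would change treatment and how much harm would result.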
Thus, this paper has several problems:
- The total error model is limited to average bias and imprecision. Error from interferences, user error, or other sources is not included. It is unfortunate to call this “total” error, since there is nothing total about it.
- A pass/fail system is mentioned, which is dichotomous, unlike an error grid, which allows for varying degrees of error according to the severity of harm to patients.
- A hierarchy of possible goals is mentioned, which comes from a 1999 conference. But there is really only one way to set patient-based goals (listed near the top of the 1999 hierarchy): namely, a survey of clinician opinions.
- Discussed in the Clinical Chemistry paper is the use of biological-variation-based goals for quality targets. Someone needs to explain to me how this could ever be useful.
- The analysis is based on proficiency survey materials, which, due to the absence of patient interferences (see the first point above), capture only a subset of total error.
- From what I could tell from their NICE reference (#11) in the paper, the authors have inferred that total allowable error should be 0.46%, but this did not come from surveying clinicians.
- I’m on board with six sigma in its original use at Motorola. But I don’t see its usefulness in laboratory medicine compared to an error grid.
March 12, 2015
I’ve written before that total error means error from any source, not just analytical error. Thus, if a clinician makes an incorrect treatment decision because the test result is wrong due to user error, it is little consolation to know that the analytical system was ok.
All of this applies to SMBG (self-monitoring of blood glucose), where the treating “clinician” and the user are one and the same: the patient.
A Letter in Clinical Chemistry (subscription required) shows that whereas 9 out of 10 glucose meters met performance standards when the tests were performed by expert users, only 6 out of 10 meters met standards when the tests were performed by routine users.
Of interest as well is that the authors cite as performance standards both the ISO 2013 standard and the suggested FDA draft performance standard from 2014.
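For readers unfamiliar with how such standards are applied, here is a sketch of the kind of pass/fail accuracy check involved. The limits are modeled on the ISO 15197:2013 criterion for glucose meters (at least 95% of results within ±15 mg/dL of the reference below 100 mg/dL, or within ±15% at or above); the data pairs are hypothetical, not from the Letter.

```python
# Pass/fail accuracy check modeled on ISO 15197:2013 for glucose meters:
# >= 95% of results within +/-15 mg/dL of reference below 100 mg/dL,
# or within +/-15% at or above 100 mg/dL.
def within_limits(meter, reference):
    if reference < 100:
        return abs(meter - reference) <= 15
    return abs(meter - reference) <= 0.15 * reference

def meets_standard(pairs, required_fraction=0.95):
    ok = sum(within_limits(m, r) for m, r in pairs)
    return ok / len(pairs) >= required_fraction

# Hypothetical (meter, reference) pairs in mg/dL:
pairs = [(92, 90), (110, 118), (250, 240), (70, 88), (160, 150)]
print(meets_standard(pairs))  # -> False (one result misses, 4/5 < 95%)
```

Note that this, too, is dichotomous: a meter either meets the standard or it does not, with no grading of how harmful the failing results would be, which is the same criticism I made of pass/fail systems above.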