I recently reviewed a paper for a journal and my recommendation was “accept with minor modifications”; as is usual, my suggestions for improvement were provided to the authors. Eventually, the paper was resubmitted – my suggestions had been implemented and I recommended “accept.” But I noticed that the other reviewer had recommended reject, with no comments to the authors other than saying the article was of limited interest. As a sometime author, I would have been pretty unhappy with such sparse feedback. I think authors deserve better.
These requests are not coming from Clinical Chemistry or similar journals. Here is a list of journals that have asked me to submit an article in the last two weeks. Many (all?) require fees to publish your article.
Journal of Clinical Research and Ophthalmology
Open Access Journal of Diabetes
Endocrinological diabetes Clinical and Medical Research
Archives of Preventive Medicine
International Journal of Diabetes and Clinical Research
EC Diabetes and Metabolic Research
Journal of International Medical Research
European Journal of Biomedical and Pharmaceutical Sciences
Functional Foods in Health and Disease Journal
International Journal of Engineering Inventions
Journal of Palliative Care
Advances in Mechanical Engineering
Journal Cell Biology & Cell Metabolism
International Journal of Sports and Exercise Medicine
Therapeutic Advances in Endocrinology and Metabolism
International Journal of Computational Engineering Research
Journal of Research in Diabetes & Metabolism
I just came back from the Quality in the Spotlight conference in Antwerp. Many of the presentations were about the Milan conference. After the Antwerp conference, I had an epiphany, so here it is:
Regarding setting and evaluating performance goals in laboratory assays, I believe there are two worlds:
World A consists of the people who were part of the Milan conference or previous meetings, or who have a keen interest in them. These people talk about creating performance specifications based on outcome studies or biological variation; they estimate sigma values for assays or calculate measurement uncertainty. They also praise ISO 15189, the laboratory equivalent of the ISO 9001 quality standard. Such discussions have been going on for a long time. World A people primarily work in hospitals.
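The sigma value mentioned above is a simple calculation. A common form uses the total allowable error (TEa), the assay's bias, and its imprecision (CV), all in percent. A minimal sketch, with hypothetical numbers for a glucose assay (the TEa, bias, and CV values below are illustrative, not from any real product):

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma metric for an assay: (TEa - |bias|) / CV,
    with all three quantities expressed in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical assay: TEa = 10%, bias = 1%, CV = 2%
print(sigma_metric(10.0, 1.0, 2.0))  # 4.5 sigma
```

A sigma of 6 or more is usually described as world-class performance; values below 3 are considered unacceptable.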
World B consists of people who work in industry developing assays. A subset of this group collects and analyzes data to determine whether product performance meets the company’s performance specifications; they also prepare FDA submissions.
Most of the people in World A are not also part of World B. I belong to both worlds, which is rare.
I contend that if you put a group of World B people in a room and explain the ideas of the World A group, the World B group would listen politely and maybe ask a few questions (perhaps a sign that World A ideas are unknown to World B). After the lecture, World B people would go back to work and soon forget everything that was said.
Basically, the impact of World A on World B is zero.
That is not to say that no one has an influence on World B. Changes to FDA regulations have an impact, as do newer statistical tools such as Bland-Altman and Passing-Bablok analysis.
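For readers unfamiliar with the first of these tools: a Bland-Altman analysis looks at the differences between paired results from two methods and reports the mean difference (bias) and the 95% limits of agreement (bias ± 1.96 SD of the differences). A minimal sketch, with made-up paired glucose results:

```python
import statistics

def bland_altman_limits(x, y):
    """Mean difference (y - x) and 95% limits of agreement."""
    diffs = [b - a for a, b in zip(x, y)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired results from a comparison method (x) and a test method (y)
x = [100, 105, 110, 95, 120]
y = [102, 104, 113, 96, 119]
bias, (lower, upper) = bland_altman_limits(x, y)
print(bias, lower, upper)
```

In the full analysis the differences are also plotted against the averages of the pairs, to reveal any concentration-dependent pattern.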
It has been frustrating for me: I have tried to influence World A by suggesting that World B methods be considered, and I have been largely unsuccessful.
So maybe it’s time for World A to ask themselves why they have no influence on World B.
Amazon has been in the news for suing people who write fake reviews of its products. A recent article in the New England Journal of Medicine describes cases where authors supply fake names and email addresses when a journal to which they have submitted a manuscript asks for suggested reviewers. These email addresses belong to the author, who then writes fake reviews so the article gets accepted.
I don’t know if there is a connection, but in the past year or so I have gotten an email about once a week from a different journal asking me to submit an article. I assume that many people get these emails.
And don’t forget this site, which allows one to generate a fake paper.
I’ve written before that total error means error from any source, not just analytical error. Thus, if a clinician makes an incorrect treatment decision because the test result is wrong due to user error, it is little consolation to know that the analytical system was OK.
All of this applies to SMBG (self-monitoring of blood glucose), where the treating “clinician” and the user are both the patient.
A Letter in Clinical Chemistry (subscription required) shows that whereas 9 out of 10 glucose meters met performance standards when the tests were performed by expert users, only 6 out of 10 meters met standards when the tests were performed by routine users.
Of interest as well is that the authors cite as performance standards both the 2013 ISO standard (ISO 15197:2013) and the FDA’s 2014 draft performance standard.
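For readers who want to see what such a check looks like in practice, here is a minimal sketch of the main ISO 15197:2013 accuracy criterion: at least 95% of meter results must fall within ±15 mg/dL of the reference when the reference is below 100 mg/dL, and within ±15% otherwise. The paired values below are made up for illustration:

```python
def meets_iso_15197_2013(meter, reference):
    """Check the ISO 15197:2013 system accuracy criterion:
    >= 95% of results within +/-15 mg/dL (reference < 100 mg/dL)
    or within +/-15% (reference >= 100 mg/dL).
    Returns (pass/fail, fraction within limits)."""
    within = 0
    for m, r in zip(meter, reference):
        limit = 15.0 if r < 100 else 0.15 * r
        if abs(m - r) <= limit:
            within += 1
    frac = within / len(reference)
    return frac >= 0.95, frac

# Hypothetical meter vs. reference results (mg/dL)
ok, frac = meets_iso_15197_2013([90, 130, 240, 88], [80, 120, 200, 90])
print(ok, frac)
```

The full standard has additional requirements (a consensus error grid criterion and user-performed testing), which is exactly why the expert-user versus routine-user distinction in the Letter matters.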
For those who have had some papers rejected over the years (like me), this post is worth reading … http://majesticforest.wordpress.com/2014/08/15/papers-that-triumphed-over-their-rejections/
Glucose meters are an example of unit-use devices, meaning that each time a sample is assayed, a new reagent strip (the unit) must be used. Some years ago, manufacturers of unit-use devices argued that QC is less important for their products because, among other reasons, a more rigorously controlled manufacturing process was used.
I have been doing some work with glucose meters and note that at least twice this year there have been recalls of reagent strips from two different manufacturers. Here are my reasons for why these recalls continue to happen.
- Vendors that supply raw materials have provided different lots from those used to design and evaluate the original reagent strip.
- Vendor processes have changed.
- The glucose meter manufacturer’s processes have changed.
- The process used to release reagent strip lots is imperfect. It is not as rigorous as a full-blown method comparison, and the parameters measured may not reflect all aspects of performance.
- The process parameter limits may not be correct.
- Some key variables may not be measured.
- The sample size may not be adequate.
- And last but not least, people make mistakes!
Speaking as someone who worked for manufacturers, the recall sequence was usually this: the service department received complaints from customers, the complaints were verified in-house, and a recall was initiated.