Allowable limit for blood lead – why does it keep changing?

August 19, 2017

A recent article suggests the CDC limit for blood lead may be lowered again. The logic is to base the limit on the 97.5th percentile of blood lead values in NHANES survey data and to revisit the limit every 4 years. An article in Pediatrics has the details. Basically, the 97.5th percentile for blood lead has been decreasing – it was around 7 µg/dL in 2000. And the Pediatrics article states: “No safe blood lead concentration in children has been identified.” Nor has human physiology changed!
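
To make the mechanics concrete, here is a minimal sketch of how a percentile-based reference value would be computed – this is not the CDC’s actual procedure, the blood lead values below are made up, and real NHANES data also carry survey weights, which are ignored here:

    import numpy as np

    # Hypothetical blood lead values (µg/dL) for surveyed children.
    blood_lead = np.array([0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.5,
                           1.6, 1.8, 2.0, 2.2, 2.4, 2.8, 3.1, 3.5, 4.2, 5.0])

    # The proposed reference value is the 97.5th percentile of the
    # population distribution, revisited as new survey data arrive.
    limit = np.percentile(blood_lead, 97.5)
    print(f"97.5th percentile reference value: {limit:.2f} µg/dL")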

It’s hard to understand the logic behind the limit. If a child had a blood lead of 6 µg/dL in 2011, the child was ok according to the CDC standard, but not ok in 2013. Similarly, a blood lead of 4 µg/dL in 2016 was ok but not in 2017?

Here is a summary of lead standards in the USA through time.

1960s   60 µg/dL
1978    30 µg/dL
1985    25 µg/dL
1991    10 µg/dL
2012     5 µg/dL
2017?    3.48 µg/dL


Blog Review

May 26, 2017

I started this blog 13 years ago in March 2004 – the first two articles are about six sigma, here and here. This post is my 344th entry.

Although the blog has an eclectic range of topics, one unifying theme for many entries is specifications, how to set them and how to evaluate them.

A few years ago, I was working on a hematology analyzer, which has a multitude of reported parameters. The company was evaluating the parameters by the usual means: precision studies and accuracy assessed with regression. I asked them:

  a) what are the limits that, when differences from reference are contained within them, will ensure that no wrong medical decisions (resulting in patient harm) would be made based on the reported result, and
  b) what are the (wider) limits that, when differences from reference are contained within them, will ensure that no wrong medical decisions (resulting in severe patient harm) would be made based on the reported result?

This was a way of asking for an error grid for each parameter. I believe, then and now, that constructing an error grid is the best way to set specifications for any assay.
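
As a sketch of the idea – the zone limits below are invented for illustration, and a real error grid varies its limits with concentration – each result’s difference from reference falls inside or outside the two sets of limits above:

    # Hypothetical zone limits corresponding to the two questions above:
    # INNER bounds differences that cause no wrong medical decision;
    # OUTER bounds differences that would not cause severe patient harm.
    INNER = 5.0
    OUTER = 15.0

    def classify(reported, reference):
        diff = abs(reported - reference)
        if diff <= INNER:
            return "acceptable"             # no patient harm
        elif diff <= OUTER:
            return "unacceptable"           # potential patient harm
        return "severely unacceptable"      # potential severe patient harm

    for reported, reference in [(102, 100), (112, 100), (130, 100)]:
        print(reported, reference, classify(reported, reference))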

As an example of the importance of specifications, there was a case in which I was an expert witness: the lab had produced an incorrect result that led to patient harm. The lab’s defense was that they had followed all procedures; thus, as long as they followed procedures, they were not to blame. But procedures, which contain specifications, are not always adequate. As an example, remember the CMS program “equivalent quality control”?


You get what you ask for

April 13, 2007

I have written before about the difference between horizontal and vertical standards. ISO/TC212 produces standards for the clinical laboratory. The following came from a talk by Dr. Stinshoff, who has headed the ISO/TC212 effort. The emphasis is Dr. Stinshoff’s.

“ISO/TC 212 Strategies:

– Select new projects using the breadth and depth of the expertise gathered in ISO/TC 212; focus on horizontal standards; address topics that are generally applicable to all IVD devices; and, limit the activities of ISO/TC 212 to a level that corresponds to the resources that are available (time and funds of the delegates).

– Assign high preference to standards for developed technologies; assign high preference to performance-oriented standards; take the potential cost of implementation of a standard into consideration; and, solicit New Work Item ideas only according to perceived needs, which should be fully explained and supported by evidence.

– Globalize regional standards that have a global impact”

 

What is meant by performance-oriented standards? “ISO Standardisation – Performance vs. Prescriptive Standards:

Whenever possible, requirements shall be expressed in terms of performance rather than design or descriptive characteristics. This approach leaves maximum freedom to technical development….

(Excerpt of Clause 4.2, ISO/IEC Directives, Part 2, 2004)”

So one reason ISO/TC212 produces horizontal standards is simply that this is their strategy.

 


European and US clinical laboratory quality

April 5, 2007

I am somewhat skeptical about the statement in a recent Westgard essay suggesting that Europeans who use ISO 15189 to help with accreditation are more likely to improve quality in their laboratories than US laboratories, which just try to meet minimum CLIA standards. ISO 15189 is much like ISO 9001, which is used for businesses. I have previously written that ISO 9001 certification plays no role in improving quality for diagnostic companies (1). As an example of ISO 15189 guidance – albeit in the version I have, which is from 2002 – under the section “Resolution of complaints”, ISO 15189 says the laboratory should have a policy and procedures for the resolution of complaints. In ISO 17025, which is a similar standard, virtually the identical passage occurs.

Westgard mentions that clinical laboratories need a way to estimate uncertainty that is more practical than the ISO GUM standard and mentions a CLSI subcommittee that is working on this. A more practical way is unlikely. I was on that subcommittee. I didn’t want to participate at first, since I don’t agree that clinical laboratories should estimate uncertainty according to GUM (2). However, the chairholder wanted me for my contrarian stance, so I joined. I must say that I enjoyed being on the subcommittee, which had a lot of smart people and an open dialog. However, I was unable to convince anyone of my point of view and therefore resigned, because it would make no sense to be both an author of this document and of reference 2. The last version of this document I saw was 80 pages long (half of it an Appendix) with many equations. This version will not be understood by most (any?) clinical laboratories. However, there is a CLSI document that allows one to estimate uncertainty intervals easily, EP21A, although not according to GUM.
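
As a rough sketch of the kind of interval EP21A makes possible – a simplification, not the document’s full protocol, and the paired results below are made up – one can estimate a total error interval directly from the differences between a candidate method and reference, with no GUM error model:

    import numpy as np

    # Hypothetical paired results: candidate method vs. reference.
    rng = np.random.default_rng(1)
    reference = rng.uniform(4.5, 5.5, 100)
    candidate = reference + rng.normal(0.1, 0.2, reference.size)

    # Nonparametric total error interval: the central 95% of the
    # observed differences between the methods.
    diffs = candidate - reference
    lower, upper = np.percentile(diffs, [2.5, 97.5])
    print(f"95% total error interval: ({lower:.2f}, {upper:.2f})")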

What is needed to improve clinical laboratory quality anywhere? Policies that emphasize measuring error rates such as FRACAS (3).

References

  1. Krouwer JS. ISO 9001 has had no effect on quality in the in-vitro medical diagnostics industry. Accred Qual Assur 2004;9:39-43.
  2. Krouwer JS. A critique of the GUM method of estimating and reporting uncertainty in diagnostic assays. Clin Chem 2003;49:1818-1821.
  3. Krouwer JS. Using a learning curve approach to reduce laboratory error. Accred Qual Assur 2002;7:461-467.

 


Better automation for clinical chemistry

March 30, 2007

I first heard Martin Hinckley speak at an AACC forum. That talk was published in Clinical Chemistry, 1997;43:873-879.

A new article is available at  http://www.springerlink.com/content/y5m227582854220k/fulltext.pdf (I suspect this link will work for a limited time).

This article deals with automation and how it has not lived up to the expectation that it would greatly improve quality. Hinckley offers some interesting advice regarding how to improve the implementation of automation.

 


You’re either part of the problem or part of the solution

February 18, 2007

 

Westgard bemoans the current process of establishing performance claims for assays. He states that

“There is one major fault with this approach [precision, accuracy, linear range, reference range(s), etc.]. Manufacturers do not make any claim that a method or test system provides the quality needed for medical application of the test results, i.e., FDA clearance does not require a claim for quality! To do so, a manufacturer would need to state a quality specification, e.g., the allowable total error, the maximum allowable SD, or the maximum allowable bias, then demonstrate that the new method or test system has less error than specified by those allowable limits of error.”

You’re either part of the problem or part of the solution. In this case, Westgard is part of the problem. His suggestion of allowable total error as stated above sounds good, but as I have pointed out many times,

  • Westgard’s maximum allowable total error applies to a specified percentage of results – often 95% – which allows too many results to fail to meet clinical needs
  • Westgard’s suggested testing procedures, as described by his quality control rules, fail to include all contributions to total error

Thus, 5% of a million results means that there could be 50,000 medically unacceptable results – that’s not quality. And when one tests with control samples, one cannot detect interferences, which are often a source of important clinical laboratory errors; so all of Westgard’s quality control algorithms for total error are meaningless – they inform about only a subset of total error.
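
The arithmetic is worth making concrete. A minimal simulation – the performance figures are invented, and the simulated method exactly meets a 95% total error claim – shows the scale of the problem:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000      # a year of results, say
    limit = 1.96       # allowable total error in SD units; by construction
                       # 95% of results fall within this limit

    # Simulated analytical errors for a method that just meets the claim.
    errors = rng.normal(0, 1, n)
    print(np.sum(np.abs(errors) > limit), "unacceptable results")  # ~50,000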

Things are improving. In the FDA draft guidance for CLIA waiver applications, FDA requires use of error grids (such as those in use for glucose) and, in addition to total allowable error, demonstration of the absence of erroneous results as defined by those grids. Many of my essays stress the need to go beyond total allowable error – as used by Westgard – and to put in place procedures to estimate erroneous results (1).

References

  1. Krouwer JS. Recommendation to treat continuous variable errors like attribute errors. Clin Chem Lab Med 2006;44:797-798.

Frequency of Medical Errors II – Where’s the Data?

May 17, 2006

In virtually any tutorial about quality improvement, one is likely to encounter something like Figure 1, which describes a “closed loop” process. The way this works is simple. One has a goal one wishes to meet. One measures data appropriate to the goal. If the results from this measurement fall short, one enters the “closed loop”: one revises the process, measures progress, and continues this cycle until the goal is met. Then one enters a different phase (not shown in Figure 1), where one ensures that the goal will continue to be met.
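
In code, the loop is nothing more than this sketch (the measurement and revision functions are hypothetical placeholders):

    # A minimal rendering of the closed loop in Figure 1.
    def closed_loop(goal, measure, revise_process, max_cycles=100):
        for _ in range(max_cycles):
            rate = measure()         # e.g., observed error rate
            if rate <= goal:
                return rate          # goal met: move on to monitoring
            revise_process()         # change the process, then re-measure
        return rate                  # goal not met within allowed cycles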

Two deficiencies in the patient safety movement are: 1) the lack of clear, quantitative goals; and 2) the lack of data from which one can measure progress. A list of problems with the way goals are often stated is available (1).

An interesting paper that appeared recently discusses wrong site surgery (2). Given the visibility of wrong site surgery, one notable aspect of this paper is that it is one of the few sources with wrong site surgery rates. The wrong site surgery rate was 1 in 112,994, or 8.85 wrong site surgeries per million opportunities. To recall, a 6 sigma process has 3.4 errors per million opportunities, so this rate is about 5.8 sigma. The authors state that the rate is equivalent to an error occurrence once every 5 to 10 years. This corresponds to the lowest frequency ranking in the Veterans Administration scheme of an error occurrence once or less every 5 to 30 years (3).
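
The sigma figure follows from the usual six sigma convention, which adds a 1.5 sigma long-term shift to the normal quantile of the in-spec fraction. A quick check:

    from scipy.stats import norm

    def sigma_level(dpmo):
        # Normal quantile of the in-spec fraction plus the customary
        # 1.5 sigma shift of the six sigma convention.
        return norm.ppf(1 - dpmo / 1e6) + 1.5

    print(round(sigma_level(3.4), 1))    # 6.0, by definition
    print(round(sigma_level(8.85), 1))   # 5.8, the wrong site surgery rate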

Another interesting aspect of the paper is the discussion of the Universal Protocol, which is a series of steps incorporated into the surgical process and designed to prevent wrong site surgery. One of the conclusions of the paper is that the Universal Protocol does not prevent all wrong site surgeries. The Universal Protocol was implemented as the solution to prevent wrong site surgeries. One would hope that a single process change is sufficient to remedy an issue, but often this is not the case. Thus, one must continue to collect data and to add remedies and/or change existing ones until the goal has been met – in other words, continue with the cycle shown in Figure 1. So one criticism of the patient safety movement is the mandated, static nature of corrective actions. The dynamic nature implied in Figure 1 seems to have been bypassed.

The authors lament that the public is likely to overreact to wrong site surgery relative to other surgical errors such as retained foreign bodies. There are several points to be made here.

In classifying the severity of an error, one must examine the effect of the error, which means looking at the consequences of downstream events connected to the error (often facilitated by using a fault tree). Based on the authors’ discussion of actual data, retained foreign bodies are a more severe error than wrong site surgery. This is somewhat of a surprise, but is understandable.

Given that one has classified all error events for criticality (which is severity combined with frequency of occurrence), one has the means to construct a Pareto chart. Since organizations have limited resources and cannot fix all problems, the Pareto chart guides priorities: retained foreign bodies is likely to rank higher than wrong site surgery and deserves more attention.
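
A sketch of the ranking step – the event names and scores below are invented for illustration:

    # Criticality = severity x frequency of occurrence. Rank events by
    # criticality to obtain the Pareto ordering. Hypothetical figures.
    events = {
        "retained foreign body": (4, 3),   # (severity, frequency)
        "wrong site surgery":    (4, 1),
        "medication mix-up":     (3, 4),
    }

    ranked = sorted(events.items(), key=lambda kv: kv[1][0] * kv[1][1],
                    reverse=True)
    for name, (severity, frequency) in ranked:
        print(name, severity * frequency)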

Proposed process changes need to be evaluated with respect to cost and effectiveness. The “portfolio” of proposed process changes can be viewed as a decision analysis problem whereby the “basket” of process changes selected represents the largest cumulative reduction in medical errors (e.g., reduction in cost associated with medical errors) for the lowest cumulative cost. See the essay on preventability.
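
One simple way to frame the selection – a greedy benefit/cost heuristic under a budget, with all names and figures invented (a full knapsack-style optimization would do better in general):

    # Each candidate change: (name, cost, errors prevented per year).
    changes = [("site-marking checklist", 10_000, 40),
               ("bar-coded specimen IDs", 50_000, 120),
               ("independent double read", 20_000, 30)]

    budget, spent, basket = 60_000, 0, []
    # Greedy: take changes in order of errors prevented per dollar.
    for name, cost, benefit in sorted(changes, key=lambda c: c[2] / c[1],
                                      reverse=True):
        if spent + cost <= budget:
            basket.append(name)
            spent += cost
    print(basket, spent)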

I discuss (4) a hypothetical case where two events have identical criticality with respect to patient safety but one is high profile and the other isn’t. Should the high profile event get more attention? The answer is yes, because besides patient safety, there are other error categories for which the high profile event will be more important, such as customer complaints, threat to accreditation, and threat to financial health.

There are other comments that could be made, but perhaps the most important is that studies such as those conducted by these authors are extremely valuable and are the heart of Figure 1; namely, examining error events and currently implemented corrective actions and deciding how to make further improvements.

References

  1. Krouwer JS. Assay Development and Evaluation: A Manufacturer’s Perspective. Washington DC: AACC Press, 2002. pp 33-44.
  2. Kwaan MR, Studdert DM, Zinner MJ, Gawande AA. Incidence, patterns, and prevention of wrong-site surgery. Arch Surg 2006;141:353-357; discussion 357-358. Available at http://archsurg.ama-assn.org/cgi/content/full/141/4/353
  3. Healthcare Failure Mode and Effect Analysis (HFMEA). VA National Center for Patient Safety. http://www.va.gov/ncps/SafetyTopics/HFMEA/HFMEAmaterials.pdf
  4. Krouwer JS. Managing Risk in Hospitals Using Integrated Fault Trees / FMECAs. Washington DC: AACC Press, 2004. pp 17-18.