January 24, 2008
In spending two sessions with groups of people who verify and validate medical device software, I got the impression that most effort is spent on testing code (to the requirements that exist). I based this assessment in part on the number of questions (i.e., audience interest) when code testing was discussed vs. when examining requirements was discussed. Yet, from reviewing recalls, and from my experience in the IVD industry, I suspect that most errors are caused by wrong requirements (see figure).
This makes me recall some definitions.
Bug – A coding error that prevents the software from meeting its stated requirement. A divide-by-zero error is a bug, but if the denominator can never be zero, this bug will never become a failure. "Can never be zero" means the value cannot be zero even without a guard statement such as If X <> 0, Then … If the guard statement were present, there would be no divide-by-zero bug in the first place.
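The bug-versus-failure distinction can be sketched in code. Here is a hypothetical Python version (the definition above writes the guard in BASIC-style pseudocode; the function names are illustrative):

```python
def rate_unguarded(errors: int, tests: int) -> float:
    # Bug: divides by zero when tests == 0. If tests can never
    # actually be zero, this bug will never become a failure.
    return errors / tests

def rate_guarded(errors: int, tests: int) -> float:
    # With the guard (the "If X <> 0, Then ..." of the definition),
    # there is no divide-by-zero bug at all.
    if tests != 0:
        return errors / tests
    return 0.0  # hypothetical fallback when no tests were run
```

Whether the unguarded version ever fails in the field depends entirely on whether a zero denominator can actually occur – which is a question about requirements and inputs, not about the code.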
Failure – Any deviation from customer expectations. This rather liberal statement is similar to the general definition of quality used by ASQ. Each reported failure must be evaluated by the software / product development team to decide whether they agree that it is one, and of course some deviations have non-software causes.
Example – A home glucose meter produces a value over 500 mg/dL. The meter displays ERR1. This is a requirements error: it is known that the value is too high (it could be 501 or 1,000), so the meter should display something like HIGH rather than an error code.
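A minimal sketch of the corrected requirement, assuming a 500 mg/dL upper reporting limit (the function name and the limit are illustrative, not taken from any actual meter):

```python
UPPER_LIMIT = 500  # mg/dL; assumed upper end of the reportable range

def meter_display(glucose_mg_dl: float) -> str:
    # Requirements error would be: display "ERR1" for a known-high reading.
    # Better requirement: tell the user the value is above the range.
    if glucose_mg_dl > UPPER_LIMIT:
        return "HIGH"  # the value is known to be too high (501 or 1,000)
    return f"{glucose_mg_dl:.0f}"
```

The point is that the code correctly implements whatever the requirement says; only the requirement itself decides between an uninformative error code and a useful HIGH message.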
January 4, 2008
I have previously compared FMEA and FRACAS, here. Another simple difference is:
(Successful) FMEA reduces risk.
(Successful) FRACAS reduces failure rates.
Now, one often hears about successful FMEAs. In my experience, these are not FMEAs; they are examples of FRACAS. An example is here. How can one tell that this is FRACAS and not FMEA? It’s simple – what is described is the reduction of a too-high failure rate to a lower rate. With FMEA, the failure rate is zero – the event has not happened. What one does is reduce the risk of this potential failure from some amount to a lower amount. This is perhaps one of the reasons one does not hear much about FMEA successes. As I said before, saying that something that has never happened is now even less likely to happen (due to FMEA) just isn’t too exciting.
Reducing failure rates is a good thing, and it may seem harmless to call this FMEA when it is really FRACAS. However, it is simple to use the correct terms, and if one doesn’t, one might wind up neglecting to perform FMEA when it’s needed.
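The distinction can be put into a toy calculation (all numbers are invented for illustration; risk is computed here as severity times estimated probability, one common convention):

```python
# FRACAS: the failure has occurred; corrective action lowers its observed rate.
failure_rate_before = 12 / 1000   # 12 failures per 1,000 units shipped
failure_rate_after = 2 / 1000     # observed after corrective action

# FMEA: the failure has never occurred (observed rate is zero);
# mitigation lowers the risk of the potential failure.
severity = 9                       # hypothetical 1-10 severity scale
risk_before = severity * 0.02      # estimated probability before mitigation
risk_after = severity * 0.001      # estimated probability after mitigation
```

A FRACAS success shows up as a measured drop in an actual failure rate; an FMEA success only moves an estimate of something that has never happened.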
January 1, 2008
I have spent my career in industry in R&D in a quality role. As I continue to interact with people who deal with quality in the in vitro diagnostics industry, I get the impression that most of these people come not from R&D but from regulatory affairs. What’s the difference? My perception is that regulatory affairs professionals focus more on compliance – I have focused on measuring things. Compliance is often assessed through audits, with documentation making up a large part of those audits. Measuring things forces activities to focus on improving the metric of interest; documentation is of less importance.
What’s another difference? Whenever I write an article for publication on quality, it’s reviewed by regulatory affairs professionals. I can tell from the comments (e.g., they disagree with most of what I say). R&D people agree with me.