Software Validation

When I worked in R&D for diagnostic companies, part of my job was writing software. This included SAS programs to evaluate assay performance. Some results of this work (also performed by others in my group) ended up in the instrument and affected result accuracy; for example, method comparison to a reference was used to establish calibration algorithms that were hard coded in the instrument. I also wrote Excel programs in VBA (Visual Basic for Applications) for the scientific staff so that they could analyze data from the field in near real time. A senior developer wrote the other part of this software (a major effort): communication with field instruments, data acquisition, and transfer to a large database.

For all of this, there was no “formal validation” until the FDA changed the way it inspected sites and required R&D to validate software in the same way that manufacturing had always done. Of course, R&D complied. The purpose of this essay is to point out some misconceptions about formal validation.

Comments on formal validation

Exclude the developer from the validation – Before formal validation, the developer would validate his or her own software, a practice that formal validation avoids. The benefit of validating one’s own software should be viewed in the following perspective: we were not contract or journeyman developers; we were extremely knowledgeable in the subject matter. This meant that we could test the reasonableness of the results and spot problems on that basis. Formal validation usually has no provision for this, and often the people who validate the software have no knowledge of the subject matter.
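As a rough illustration of what a reasonableness check looks like, here is a small Python sketch (Python rather than SAS or VBA purely for illustration; the function, thresholds, and data are invented, not from any actual assay):

    import numpy as np

    def reasonableness_checks(reference, candidate):
        # Invented sanity checks on method-comparison results; the
        # thresholds are illustrative only, not from any actual assay.
        slope, intercept = np.polyfit(reference, candidate, 1)
        r = np.corrcoef(reference, candidate)[0, 1]
        problems = []
        # A method that tracks the reference should have a slope near 1,
        # an intercept near 0, and a high correlation.
        if not 0.9 <= slope <= 1.1:
            problems.append(f"slope {slope:.3f} outside 0.9 to 1.1")
        if abs(intercept) > 0.05 * np.mean(reference):
            problems.append(f"intercept {intercept:.3f} large relative to the mean")
        if r < 0.975:
            problems.append(f"correlation {r:.3f} lower than expected")
        return problems

    print(reasonableness_checks([10, 50, 100, 200], [11, 49, 103, 196]))

A validator without subject-matter knowledge can confirm that the program runs and matches its specification, but has no basis for checks like these.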

Formal validation tends to exclude creativity – Formal validation follows a set of rules. For example, there are test protocols based on the user interface specifications with a variety of inputs. As early users of new software, our group had a reputation for finding bugs in formally validated software. One reason is that we thought up test conditions during testing (e.g., by playing around with the software) that were not covered by the formal validation protocols. There is no mechanism for this in formal validation.
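A small Python sketch of the difference (the dilution function and the test inputs are invented for illustration): a scripted protocol exercises only the inputs named in the specification, while “playing around” throws unanticipated inputs at the code and stumbles onto a case the protocol never listed.

    import random

    def dilution_factor(sample_volume_ul, diluent_volume_ul):
        # Invented example: dilution factor from pipetted volumes.
        return (sample_volume_ul + diluent_volume_ul) / sample_volume_ul

    # A scripted protocol tests the inputs named in the specification.
    for sample, diluent in [(10, 90), (20, 80), (50, 50)]:
        assert dilution_factor(sample, diluent) > 1

    # "Playing around": unanticipated inputs soon hit a case the protocol
    # never listed: a zero sample volume raises ZeroDivisionError.
    for _ in range(10000):
        sample = random.randint(0, 100)
        diluent = random.randint(0, 100)
        try:
            dilution_factor(sample, diluent)
        except ZeroDivisionError:
            print(f"bug found: sample={sample} ul, diluent={diluent} ul")
            break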

One hundred percent coverage is misleading – Formal validation involves the concept of 100% coverage. This is misleading: because of branching and the many possible variations of inputs, true 100% coverage of software cannot be achieved. While this is well known to software professionals, it is often misrepresented to, or by, management.
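A short Python sketch makes the point (the function is invented): two tests can achieve 100% statement and branch coverage, yet with n independent branch points there are up to 2**n distinct paths, before even considering the variations of the numeric inputs.

    def classify(result, flagged, recalibrated, diluted):
        # Invented example with four independent branch points.
        label = "normal"
        if flagged:
            label = "review"
        if recalibrated:
            result *= 1.02
        if diluted:
            result *= 10
        if result > 100:
            label = "high " + label
        return result, label

    # These two tests reach 100% statement and branch coverage:
    classify(50, True, True, True)     # every branch taken
    classify(50, False, False, False)  # every branch skipped

    # Yet the number of distinct paths grows as 2**n with n branch points,
    # so "100% coverage" exercises only a small fraction of the behavior.
    n_branch_points = 4
    print("paths through the branch points (up to):", 2 ** n_branch_points)
    print("test cases actually run:", 2)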

Formal validation may simply be a checkmark – The quality of software validation is not guaranteed simply because it has been checked off as complete, especially when there is pressure to reach that milestone.

The ideal case

Ideally: 1) software validation is conducted by the developer, as before, not as “formal validation” but as a step in development; 2) people who have a talent for finding errors are given this task and proceed by informal methods; and 3) formal validation is then conducted, which has the benefit of discovering additional errors.

A danger is that less effort will be spent on steps 1 and 2 because step 3 is required.
