What it takes to put together a flying video

September 28, 2010

First, of course, comes a flight in a plane. Recently, I flew for about 40 minutes practicing take-offs and landings. I used two cameras (one looking out the front and one trained on the instrument panel) and one audio recorder to capture my conversations with the tower. Here are the steps that follow.

  1. Download the videos from the cameras. I use iMovie 09 on a MacBook; this step took a little under two hours.
  2. I then piece together the video portions. The instrument panel video is an inset on the view out the front, so the two have to be synced correctly. This is the creative part and can also be time-consuming, especially matching things up.
  3. Transfer the audio to a PC.
  4. The audio goes to a PC because I have to chop it up to match the video pieces. The audio is a Windows Media file, and I use the Windows Media file editor to do the chopping. I then use WinFF to convert the smaller files to .wav files; otherwise they won’t be compatible with iMovie 09 (a sketch of this conversion follows the list).
  5. I then sync the audio to the video. Although the tower can’t be heard on the videos, my talking can be faintly heard, which helps with the syncing.
  6. After the movie is made, I have to clean up the 16 GBytes of files on my MacBook, or I would soon run out of disk space. Most of this space is taken up by the two video files. Many of the flights are longer than 40 minutes, and the raw video files can reach 30 GBytes. I transfer these files to a PC over a wireless network, which takes several hours.
  7. I can’t directly offload the files to my external drive because it is formatted as FAT32 (4 GByte file-size limit), so I use another program to split the files into smaller pieces (a sketch of this splitting also follows the list).
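
Since WinFF is a graphical front end for ffmpeg, the conversion in step 4 can also be scripted. Below is a minimal sketch in Python, assuming ffmpeg is installed and on the PATH; the clip name is hypothetical, not one of my actual files.

```python
import subprocess
from pathlib import Path

def wma_to_wav(wma_path: str) -> Path:
    """Convert a Windows Media audio clip to a .wav file that iMovie 09 can import."""
    src = Path(wma_path)
    dst = src.with_suffix(".wav")
    # ffmpeg infers the input and output formats from the file extensions
    subprocess.run(["ffmpeg", "-i", str(src), str(dst)], check=True)
    return dst

if __name__ == "__main__":
    wma_to_wav("tower_audio_clip1.wma")  # hypothetical clip name
```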
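
The splitting program in step 7 isn’t doing anything special: any tool that writes pieces under the 4 GByte FAT32 limit will work. Here is a rough sketch of the idea in Python; the piece size, buffer size, and file name are my own choices for illustration, not the settings of the program I actually use.

```python
from pathlib import Path

PIECE_SIZE = 3 * 1024**3  # 3 GBytes per piece, safely under FAT32's 4 GByte limit
BUFFER = 64 * 1024**2     # copy 64 MBytes at a time to keep memory use modest

def split_file(path: str, piece_size: int = PIECE_SIZE, buffer: int = BUFFER) -> int:
    """Split path into path.part000, path.part001, ... each at most piece_size bytes."""
    src_path = Path(path)
    part = 0
    with open(src_path, "rb") as src:
        while True:
            chunk = src.read(min(buffer, piece_size))
            if not chunk:
                break  # reached the end of the source file
            with open(f"{src_path}.part{part:03d}", "wb") as dst:
                dst.write(chunk)
                written = len(chunk)
                while written < piece_size:
                    chunk = src.read(min(buffer, piece_size - written))
                    if not chunk:
                        break
                    dst.write(chunk)
                    written += len(chunk)
            part += 1
    return part

if __name__ == "__main__":
    pieces = split_file("front_camera_raw.mov")  # hypothetical raw video file
    print(f"Wrote {pieces} pieces")
```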

So the total processing time is much longer than the flight. Here is the video.

Clinical vs. Regulatory Standards for Laboratory Assays

September 19, 2010

In preparing a standard, some discussion has come up regarding different types of standards. Here is an attempt to clarify.

Clinical standards state what is needed whether or not it can be achieved. For example, two cardiology societies specified troponin I performance (1), yet no assay at the time met this performance (2).

The level of performance required in regulatory standards is based on clinical needs and currently achievable performance. Thus, the troponin assays were all approved in spite of not meeting the clinical standard.

This paradox can perhaps be explained by the following table. Regulatory standards must trade off the two risks in the table, whereas the clinical standard does not face the risk in the second row because the assay is already in use.

Risk-benefit analysis for approving or rejecting an assay

                 Benefit                                        Risk
  Approve assay  Information helps clinicians                   Assay errors cause wrong medical decisions
  Reject assay   No wrong medical decisions from assay errors   Lack of information from the assay causes harm

References

  1. The Joint European Society of Cardiology/American College of Cardiology Committee. Myocardial infarction redefined—a consensus document of the joint European Society of Cardiology/American College of Cardiology Committee for the redefinition of myocardial infarction. J Am Coll Cardiol 2000;36:959–69.
  2. Mauro Panteghini, Franca Pagani, Kiang-Teck J. Yeo, Fred S. Apple, Robert H. Christenson, Francesco Dati, Johannes Mair, Jan Ravkilde, and Alan H.B. Wu, on behalf of the Committee on Standardization of Markers of Cardiac Damage of the IFCC. Evaluation of imprecision for cardiac troponin assays at low-range concentrations. Clin Chem 2004;50:327–32.

Problem not solved by redefining terms

September 14, 2010

One of the blogs I read (http://www.medrants.com/) periodically complains about performance measures in medicine, especially P4P (pay for performance). He also says that we should focus on “safety” issues (such as central line infections) rather than performance measures over which the physician often has little control (such as a patient’s A1c values).

Now I can sympathize with this blogger in that I and others (including Jim Westgard) have objected without effect to CMS’s initiative to reduce the frequency of quality control in clinical laboratories. The logical arguments have failed.

But this blogger has also decided to try to address the issue by redefining terms. He wishes to ban the term “quality” from being used in medicine, and this is one reason he uses the term “safety” for errors over which the physician has control, such as central line infections. But redefining terms doesn’t work. If we use the word safety, there can be a safety effort with good or poor quality, so quality creeps right back into things. And many error rates that are within the control of the physician but unrelated to safety (patient harm) can still be measured, and these, too, are matters of quality.

Worrying about the bottom of the Pareto Chart

September 7, 2010

While I was displaying a Pareto chart at a reliability meeting at Ciba Corning, one of the participants suggested we knock off all the little problems at the bottom of the chart. (Maybe he saw some problems that appealed to him.) In any case, this is a bad idea.

Although I commented on the reply to the Letter to the Editor that I wrote, a post from my friends at the Westgard blog makes me think of the Pareto analogy.

So to construct this Pareto chart, take the specification of some limit (the exact limit is what people are debating) and require that 95% of glucose results fall within this limit, where the limit is defined as average bias plus a multiple of the CV. This limit demarcates no harm from minor harm. To set up the Pareto chart, use 1-to-5 scales for severity and for probability of occurrence, with 5 being the most severe harm and 5 the most likely occurrence.
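
As a side calculation (my own illustration, not part of the debate), here is a small Python sketch of what “95% within the limit” means if glucose errors are treated as normally distributed with a given average bias and CV, both expressed in percent. The limit, bias, and CV values below are made up for the example, not a proposed specification.

```python
from math import erf, sqrt

def phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def fraction_within_limit(limit_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Fraction of results whose error falls within ±limit_pct, given bias and CV in percent."""
    return phi((limit_pct - bias_pct) / cv_pct) - phi((-limit_pct - bias_pct) / cv_pct)

if __name__ == "__main__":
    # Hypothetical numbers: a ±10% limit with 2% average bias and 5% CV
    print(f"{fraction_within_limit(10.0, 2.0, 5.0):.1%} of results fall within the limit")
```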

For the 5% of results that exceed this limit, assume that most (4.5%) are close to the limit and cause minor harm, while the remaining 0.5% cause major harm. To classify this:

Minor harm: severity = 1, probability = 3; Pareto rank = 1 × 3 = 3
Major harm: severity = 5, probability = 1; Pareto rank = 5 × 1 = 5

To continue to ignore severe harm in discussions about glucose specifications is a bad idea, and severe harm does occur.

Note: One way of constructing Pareto charts is to rank all severity=5 events by decreasing probability, then list all severity=4 events and so on.
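
To make the note concrete, here is a small Python sketch (my own illustration, not a tool I actually use) that computes the severity-times-probability rank from above and orders the failure modes the way the note describes, using the minor-harm and major-harm classifications from this post.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int     # 1 (least severe) to 5 (most severe harm)
    probability: int  # 1 (least likely) to 5 (most likely)

    @property
    def rank(self) -> int:
        # Pareto rank used above: severity times probability
        return self.severity * self.probability

# The two classifications from the glucose example
modes = [
    FailureMode("Minor harm (results just beyond the limit)", severity=1, probability=3),
    FailureMode("Major harm (results far beyond the limit)", severity=5, probability=1),
]

# Order the chart as the note describes: all severity-5 events by decreasing
# probability, then all severity-4 events, and so on.
chart_order = sorted(modes, key=lambda m: (m.severity, m.probability), reverse=True)

for m in chart_order:
    print(f"{m.name}: severity={m.severity}, probability={m.probability}, rank={m.rank}")
```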