And now for something completely different

December 23, 2009


This is a MapPoint image of the airports that I have landed at mainly while taking flying lessons. To produce this or to produce something like it (e.g. any set of locations), you need

  1. Your own copy of MapPoint (I have the 2006 version)
  2. An Excel file with the latitudes and longitudes of each location you want to display (I got these from
  3. Some name you wish to appear on the map (I used the airport names and identifiers) – save the file in the Excel 2003 (.xls) format
  4. The following VBA code, modified for your situation
    1. To add the code in Excel, press Alt+F11
    2. Add a module
    3. Copy and paste the code
    4. Set a reference to Microsoft MapPoint Control 13.0
  5. Open MapPoint
  6. Run the module
  7. Save the picture

VBA Code

Sub OpenDataSet()
    Dim objApp As MapPoint.Application
    Dim oMap As MapPoint.Map
    Dim objDataSets As MapPoint.DataSets
    Dim objDataSet As MapPoint.DataSet
    Dim zDataSource As String
    Dim objRS As MapPoint.Recordset
    Dim ppin As MapPoint.Pushpin

    ' MapPoint must already be running for GetObject to succeed
    Set objApp = GetObject(, "MapPoint.Application")
    Set oMap = objApp.ActiveMap
    ' This is where your Excel file is located. Use the 2003 format
    zDataSource = "C:\Jan8100\JansData\HomeStuff\Pilot\JKAirports.xls!Sheet1!A1:F23"
    Set objDataSets = oMap.DataSets
    Set objDataSet = objDataSets.ImportData(zDataSource, , _
            geoCountryDefault, _
            geoDelimiterComma)
    ' This is a purple plane. For other symbols, go to
    objDataSet.Symbol = 89
    Set objRS = objDataSet.QueryAllRecords
    Do While Not objRS.EOF
        Set ppin = objRS.Pushpin
        ppin.Highlight = True
        ' The first column in the Excel data set
        ' is the airport name and the fourth column
        ' is the airport identifier
        ppin.Name = objRS.Fields(1).Value & " (" & objRS.Fields(4).Value & ")"
        ppin.BalloonState = geoDisplayName
        objRS.MoveNext
    Loop
End Sub

Another EPCA-2 update

December 9, 2009


It’s time to improve assay specifications

December 7, 2009

Some of my critiques go back almost 20 years.

These standards have one or more of the following problems:

  • Limits are given for only 95% of the data, so 5% of the data are unspecified
  • The wrong model is used (often total error = bias ± 1.96 × imprecision)
  • Outliers are discarded
  • User error is excluded
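To illustrate the first two problems, here is a minimal Python sketch with hypothetical method-comparison differences (the numbers are made up, not from any cited study). The bias ± 1.96 × SD model describes at most 95% of normally distributed data, and when outliers are present in the data, real differences fall outside the modeled limits:

```python
import statistics

# Hypothetical differences (test minus reference); the last two values
# mimic the rare outliers that real data can contain.
diffs = [-0.2, -0.1, 0.0, 0.1, 0.2, -0.15, 0.05, 0.1, -0.05, 0.0,
         1.8, -2.1]

bias = statistics.mean(diffs)
sd = statistics.stdev(diffs)

# The criticized model: total error = bias +/- 1.96 * imprecision (SD)
lower, upper = bias - 1.96 * sd, bias + 1.96 * sd

# Fraction of the observed differences actually inside those limits
inside = sum(lower <= d <= upper for d in diffs) / len(diffs)
```

Here `inside` comes out below 1.0: the two outlier-like differences lie outside the modeled limits, so a specification built on this model leaves part of the data unspecified.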

The ideal specification should have:

  • Limits for 100% of the data, as exemplified by an error grid
  • A protocol for collecting method comparison data. The protocol should not exclude user error
  • An analysis method, whereby no data is thrown out. The analysis could be as simple as tallying the percentage of data in each error grid zone
  • FMEA and fault tree analysis to evaluate the risk of rare errors
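The tallying analysis really can be that simple. A minimal Python sketch, with hypothetical zone boundaries (not from any published error grid) and made-up result pairs:

```python
# Tally method-comparison pairs into error-grid zones.
# The zone boundaries below are hypothetical, chosen only for illustration.
def zone(reference, test):
    """Assign a (reference, test) result pair to a zone by relative error."""
    rel_err = abs(test - reference) / reference
    if rel_err <= 0.10:
        return "A"   # clinically acceptable
    elif rel_err <= 0.25:
        return "B"   # noticeable but low-risk error
    else:
        return "C"   # potentially dangerous error

pairs = [(100, 104), (150, 139), (80, 95), (200, 130), (120, 122)]
counts = {"A": 0, "B": 0, "C": 0}
for ref, tst in pairs:
    counts[zone(ref, tst)] += 1

# Percentage of data in each zone -- covering 100% of the results
percent = {z: 100 * n / len(pairs) for z, n in counts.items()}
```

Every pair lands in some zone, so the summary accounts for all of the data, including any outliers.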


  1. Krouwer JS. Problems with the NCEP (National Cholesterol Education Program) Recommendations for Cholesterol Analytical Performance. Arch Pathol Lab Med 2003;127:1249.
  2. Krouwer JS and Cembrowski GS. A review of standards and statistics used to describe blood glucose monitor performance. Journal of Diabetes Science and Technology 2010;4:75-83.
  3. Krouwer JS. A recommended improvement for specifying and estimating serum creatinine performance. Clin Chem 2007;53:1715-1716.
  4. See:

Appendix – Disagreeing with so many experts

Each of the standards organizations comprises a group of experts, and four groups equals a lot of experts! I know people in these groups and respect their expertise. These experts are much more knowledgeable than I am in the clinical chemistry of each analyte. However, another domain of interest is how to specify and measure the quality of these assays, and I suspect that expertise in this area is underrepresented in these groups.

EPCA-2 Update

December 6, 2009

Go here for a Letter by Dr. Diamandis and the response by Dr. Getzenberg regarding the prostate cancer marker EPCA-2.

Wrong thinking about hemoglobin A1c Standards

December 3, 2009

There will be an article and editorial (subscription required) about 6 of 8 assays that fail the NGSP hemoglobin A1c standard, which is here. As an aside, the NGSP could use a little revision control so that one can understand what is new.

There are problems with this standard. Here’s why. The standard states:

“In order for a commercial method to be considered traceable to the CPRL, the 95% CI of the differences between methods (test method and SRL method) must fall within the clinically significant limits of ±0.85% GHB.”

The problem is that this is a measure of the average difference. While it is true that the 95% CI (confidence interval) will fail if there is too much scatter in the differences, reading further suggests another problem.
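To see why a CI on the average difference is the wrong yardstick, here is a Python sketch with hypothetical A1c differences (the ±0.85 %GHB limit is from the quoted standard; the data are made up). The 95% CI of the mean can sit comfortably inside the limits while individual patient differences do not:

```python
import math
import statistics

# Hypothetical differences (test method minus SRL), in %GHB units
diffs = [0.2, -0.3, 0.9, -1.0, 0.4, -0.5, 0.8, -0.7, 0.1, 0.0]

mean = statistics.mean(diffs)
sd = statistics.stdev(diffs)
n = len(diffs)

# 95% CI of the *mean* difference (z approximation for brevity)
half_width = 1.96 * sd / math.sqrt(n)
ci = (mean - half_width, mean + half_width)

# The CI of the mean can pass the +/-0.85 criterion ...
ci_within_limits = (-0.85 <= ci[0]) and (ci[1] <= 0.85)
# ... even though individual differences exceed the same limits.
individuals_within = all(-0.85 <= d <= 0.85 for d in diffs)
```

With these numbers the method is "traceable" by the standard's criterion, yet two of the ten individual results miss the reference by more than 0.85 %GHB. Averaging divides the scatter by the square root of n, so the criterion gets easier to pass as more specimens are run.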

“All data analysis will be performed by the NETCORE following Bland and Altman Assessment of Agreement. Outliers will be analyzed for informational purposes only; an outlier is defined as > mean + 3SD of the absolute differences between pairs. All outliers will be investigated by the NETCORE to determine if the discrepancy could be due to characteristics of the specimen rather than the assay method. If results show that a discrepancy could be due to characteristics of the specimen, then the manufacturer will be asked to submit a new specimen and the data will be reanalyzed.”

This doesn’t make too much sense to me. An evaluation should try to estimate performance that will be observed under routine conditions.

  1. Routine conditions don’t include a reference assay with which one can calculate differences.

  2. Eliminating data will provide a biased and too favorable performance estimate.

  3. Why should one throw out a result “due to characteristics of the specimen rather than the assay method”? The assay method’s performance is a summation of many things, including how characteristics of the specimen are handled by the assay.
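Point 2 is easy to demonstrate. A Python sketch using the standard's own outlier rule (> mean + 3SD of the absolute differences between pairs) on hypothetical data shows how removing the flagged value shrinks the apparent error:

```python
import statistics

# Hypothetical absolute differences between pairs, with one discrepant value
abs_diffs = [0.1, 0.2, 0.15, 0.05, 0.1, 0.2, 0.25, 0.1, 0.15, 0.2,
             0.1, 0.15, 0.05, 0.2, 3.0]

mean_all = statistics.mean(abs_diffs)
sd_all = statistics.stdev(abs_diffs)

# The standard's outlier definition: > mean + 3SD of absolute differences
cutoff = mean_all + 3 * sd_all

flagged = [d for d in abs_diffs if d > cutoff]
kept = [d for d in abs_diffs if d <= cutoff]

# Reanalysis after discarding the outlier looks better than reality
mean_kept = statistics.mean(kept)
```

Here the 3.0 value is flagged and dropped, and the average absolute difference falls to less than half its true value. The patient whose specimen produced the 3.0 still received that result.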

The Bland-Altman approach requires normally distributed data. If the data are not normal, they must be transformed.
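As a sketch of that transformation step (hypothetical paired data, assuming error proportional to concentration), Bland-Altman limits of agreement can be computed on the log scale and back-transformed into ratio limits:

```python
import math
import statistics

# Hypothetical paired results where the error grows with concentration,
# giving skewed (non-normal) raw differences
method_a = [10, 20, 50, 100, 200]
method_b = [11, 21, 56, 112, 230]

# Raw differences are dominated by the high-concentration pairs ...
raw_diffs = [b - a for a, b in zip(method_a, method_b)]

# ... so work on the log scale, where differences correspond to ratios
log_diffs = [math.log(b) - math.log(a) for a, b in zip(method_a, method_b)]

mean_log = statistics.mean(log_diffs)
sd_log = statistics.stdev(log_diffs)

# Limits of agreement on the log scale, back-transformed to
# ratio limits for method_b / method_a
lower = math.exp(mean_log - 1.96 * sd_log)
upper = math.exp(mean_log + 1.96 * sd_log)
```

Even after a successful transformation, the limits still describe only 95% of the differences, which is the error grid's advantage.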

A simpler specification would be to use an error grid, which accounts for 100% of the data.