Wednesday, August 10, 2011

Not All Computer Aided Detection Methods Are Equal

A recent publication in the Journal of the National Cancer Institute has indicated that computer-aided detection (CAD) technologies do not improve breast cancer detection in x-ray mammography based screening (here is a news article on the publication). The conclusions reached in this study may be flawed, and this article explores some of the reasons why.

The study in question looked at centers that introduced CAD technologies in the United States after the government classified CAD as an insurable, reimbursable medical expense. CAD use increased substantially after that decision, and this study attempted to determine the benefits (or lack thereof) of introducing CAD technologies into the screening process.

The main problem with this research study is that it groups many screening centers that introduced CAD into a single pooled analysis. This blurs together the results of many different CAD technologies produced by different medical technology vendors. Not all CAD technologies were created equal, and so we should expect to see variability between the systems produced by different commercial organizations. Commercial systems can be quite different from one another (here are three different examples: 1, 2, 3). In fact, as a CAD researcher I can safely say that there is an infinite number of possible methods for performing CAD: any given system designer could program the computer to carry out cancer detection in any number of ways. The researchers who performed this study are not CAD scientists and appear to have overlooked this important issue.

Part of the reason this analytical error occurred is that the U.S. government classified all CAD as an insured medical expense (obliging health insurance companies to reimburse individuals who rely on the technology). When the U.S. government made this decision, it did not consider the significant performance differences between different commercial CAD vendors' products. It is quite plausible that funding some CAD technologies was the correct decision, while others may not perform as well and so should not be reimbursable through insurance.

Additionally, I would like to point out that the study in question looked at the sensitivity for detecting breast cancer at screening centers that introduced CAD technology. This sensitivity was evaluated over many years, which can be problematic. After the first year of screening with CAD, the population that a given center continues to screen has changed substantially (provided the CAD system produced a sensitivity increase). If CAD increases sensitivity in the first year of screening, then the population the center is still monitoring has fewer cancers left to find (because of the extra cancers that were detected and removed from the population), which can affect the resulting sensitivities in subsequent years of CAD-enabled screening. Thus, comparing the sensitivity of CAD screening in years 2+ with the sensitivity of CAD screening in the first year can be misleading, because the two sensitivities were computed on populations with a different prevalence (and potentially presentation) of cancers. This effect was well described by Dr. Nishikawa in the journal Radiology.
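To make this depletion effect concrete, here is a minimal simulation sketch. All numbers are invented for illustration (cohort sizes, the "easy"/"hard" split, and the per-screen detection probabilities are hypothetical assumptions, not figures from the study): if a more sensitive first screening round removes the easier-to-detect cancers, the cancers remaining in later rounds skew harder, and the measured sensitivity falls even though the screening technology itself is unchanged.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def screen(pool):
    """One round of screening: each cancer in the pool is detected
    with its own per-screen detection probability."""
    detected, missed = [], []
    for p in pool:
        (detected if random.random() < p else missed).append(p)
    return detected, missed

def simulate(rounds=3, n_incident=100):
    # Hypothetical starting cohort: half "easy" cancers (90% chance of
    # detection per screen) and half "hard" cancers (40% chance).
    pool = [0.9] * 500 + [0.4] * 500
    sensitivities = []
    for _ in range(rounds):
        detected, missed = screen(pool)
        sensitivities.append(len(detected) / len(pool))
        # Missed cancers (disproportionately the hard ones) remain in the
        # screened population; a smaller batch of new cancers arises each year.
        pool = missed + [0.9] * (n_incident // 2) + [0.4] * (n_incident // 2)
    return sensitivities

sens = simulate()
print(sens)  # sensitivity drops after year 1, with no change to the detector
```

Under these invented parameters, first-year sensitivity is roughly the cohort average (~0.65), while later years operate on a pool dominated by previously missed hard cases, so measured sensitivity falls toward ~0.5. This is the kind of prevalence-and-presentation shift that makes pooling multi-year sensitivities hazardous.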

My opinion is that some commercial CAD technologies for mammography are effective and have consistently produced sensitivity improvements of 10% or more in cross-sectional studies. It is also plausible that the U.S. government has forced insurance companies to fund some CAD technologies that do not assist in cancer detection. We don't know which CAD systems perform best and which underperform; this JNCI study would have been much more informative and helpful if the results had been broken down into a separate analysis for each vendor's CAD system.

Jacob Levman, PhD
Imaging Research
Sunnybrook Research Institute