CT scans and machine learning – how can we use these to diagnose disease?
Machine learning (ML) is everywhere, from your Netflix recommendations to self-driving cars. In medicine, too, ML has many applications, including the diagnosis of different diseases – here’s an earlier example featured on this blog, about brain tumors. Recently, medical imaging has seen several algorithms outperform, or perform on par with, human experts – see  for an overview.
To teach an ML algorithm to diagnose disease in a medical scan – for example, a magnetic resonance (MR) or computed tomography (CT or CAT) scan – you need examples of scans with, and without, the disease. The more examples you have, and the more diverse they are – for example, scans from different hospitals – the better you can expect your algorithm to do in the future. Unfortunately, it is more difficult to collect large numbers of medical scans than, say, pictures of cars, and diagnostic performance can suffer as a result.
One way to overcome the problem of limited data in medical imaging is to reuse information extracted from other types of data, known as “transfer learning” [2, 3]. For example, when learning to diagnose lung cancer from CT scans, we could first use brain CT scans to teach the algorithm some “basics” about images, so that fewer lung cancer CT scans are needed to learn the more specific features related to lung cancer. In this case, the brain CT scans are the source data, and the lung CT scans are the target data.
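For readers who like code, here is a minimal sketch of this idea in PyTorch: keep the layers trained on the source data fixed, and train only a small new “head” on the target data. All the details here (the tiny network, the image size, the two classes) are made up for illustration – and the backbone is randomly initialised, where in practice you would load weights actually trained on a source dataset.

```python
import torch
import torch.nn as nn

# Hypothetical "pretrained" backbone: imagine its weights were learned on a
# large source dataset (ImageNet, brain CT scans, ...). Here it is randomly
# initialised purely for illustration.
backbone = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # low-level image "basics"
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                    # global pooling -> 8 features
    nn.Flatten(),
)

# Freeze the source-trained layers so only the new head is updated.
for param in backbone.parameters():
    param.requires_grad = False

# New head for the target task: 2 classes (e.g. lung cancer yes/no).
head = nn.Linear(8, 2)
model = nn.Sequential(backbone, head)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

# One toy fine-tuning step on a fake batch of four 1-channel 64x64 "CT slices".
x = torch.randn(4, 1, 64, 64)
y = torch.tensor([0, 1, 0, 1])
logits = model(x)
loss = nn.functional.cross_entropy(logits, y)
loss.backward()
optimizer.step()
print(logits.shape)  # torch.Size([4, 2])
```

Because only the head’s eight-by-two weights are trained, far fewer target images are needed than if the whole network were learned from scratch – which is exactly the appeal of transfer learning.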
[Figure from reference 5] Overview of transfer learning. First an algorithm is trained on a source dataset to learn the “basics” of images; these images can be medical or non-medical. Then the algorithm is further trained on the target dataset – in this figure, a skin cancer dataset. This reduces the number of skin cancer images needed to train the algorithm. (https://www.tue.nl/en/research/researchers/veronika-cheplygina/)
Around 2014, a surprising discovery was made  – the source images can be quite different from the target images. In this paper, the researchers used a 10-category subset of the ImageNet dataset, with classes like “bird”, “car” and “cat”, to teach an algorithm the “basics” – in other words, to pretrain it on this source dataset. They compared this approach to pretraining on brain CT scans, and it turned out that the natural images were more effective! Their explanation is that brain CT scans lack some of the variation we expect to find in lung scans, whereas natural images provide more of it.
Transfer learning is currently quite popular in medical imaging, especially with ImageNet, possibly because you can simply download an already pretrained algorithm from the internet, saving yourself time and resources. But there is no definite answer yet, and researchers sometimes compare natural and medical source datasets. In my overview, “Cats or CAT scans: transfer learning from natural or medical image source data sets?”, I looked at several such comparisons. Half of the papers achieved better results with natural images, and half with medical images. We do not yet know for sure what is best – but we can be certain that cats are helping algorithms, just a little bit.
Veronika Cheplygina has been an assistant professor in the Medical Image Analysis group at Eindhoven University of Technology since 2017. She received her Ph.D. from the Delft University of Technology in 2015 for her thesis “Dissimilarity-Based Multiple Instance Learning”. As part of her PhD, she was a visiting researcher at the Max Planck Institute for Intelligent Systems in Tübingen, Germany. From 2015 to 2016 she was a postdoc at the Biomedical Imaging Group Rotterdam, Erasmus MC, where she applied machine learning algorithms to medical image analysis problems. Her research interests center on learning scenarios where few labels are available, such as multiple instance learning, transfer learning, and crowdsourcing. Besides research, Veronika blogs about academic life at http://www.veronikach.com.