Postdoc
E-mail: b [dot] d [dot] devos [at] amsterdamumc [dot] nl
Phone: +31 20 566 89 78
LinkedIn; Google Scholar
In 2012, Bob completed his Master's in Biomedical Engineering at the University of Groningen (RUG). During his studies he focused on clinical physics and became interested in medical image processing. After a couple of temporary positions in industry, Bob got the opportunity to pursue a PhD at the ISI. His PhD research focused on finding biomarkers in CT lung cancer screening scans to predict cardiovascular risk. Bob currently holds a position as a postdoctoral researcher; his project focuses on dimensionality reduction for disease analysis in high-dimensional data.
Bob is co-organizer of the MICCAI Challenge on Automatic Coronary Calcium Scoring.
Journal Articles
1. J. Sander, B.D. de Vos, I. Išgum
Automatic segmentation with detection of local segmentation failures in cardiac MRI Journal Article
Scientific Reports, 10 (21769), 2020.
@article{Sander2020b,
title = {Automatic segmentation with detection of local segmentation failures in cardiac MRI},
author = {J. Sander, B.D. de Vos, I. Išgum},
url = {https://www.nature.com/articles/s41598-020-77733-4},
year = {2020},
date = {2020-12-10},
journal = {Scientific Reports},
volume = {10},
number = {21769},
abstract = {Segmentation of cardiac anatomical structures in cardiac magnetic resonance images (CMRI) is a prerequisite for automatic diagnosis and prognosis of cardiovascular diseases. To increase robustness and performance of segmentation methods this study combines automatic segmentation and assessment of segmentation uncertainty in CMRI to detect image regions containing local segmentation failures. Three existing state-of-the-art convolutional neural networks (CNN) were trained to automatically segment cardiac anatomical structures and obtain two measures of predictive uncertainty: entropy and a measure derived by MC-dropout. Thereafter, using the uncertainties another CNN was trained to detect local segmentation failures that potentially need correction by an expert. Finally, manual correction of the detected regions was simulated in the complete set of scans of 100 patients and manually performed in a random subset of scans of 50 patients. Using publicly available CMR scans from the MICCAI 2017 ACDC challenge, the impact of CNN architecture and loss function for segmentation, and the uncertainty measure was investigated. Performance was evaluated using the Dice coefficient, 3D Hausdorff distance and clinical metrics between manual and (corrected) automatic segmentation. The experiments reveal that combining automatic segmentation with manual correction of detected segmentation failures results in improved segmentation and a 10-fold reduction of expert time compared to manual expert segmentation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
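The entropy uncertainty measure used in this paper can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the function names and list-based representation are hypothetical.

```python
import math

def pixel_entropy(probs):
    """Entropy (natural log) of one pixel's softmax class distribution.

    High entropy marks pixels where the segmentation network is
    uncertain, which the paper uses to flag potential local failures.
    """
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def entropy_map(prob_maps):
    """prob_maps: one list of class probabilities per pixel."""
    return [pixel_entropy(p) for p in prob_maps]
```

A confident pixel (e.g. probabilities `[0.98, 0.01, 0.01]`) yields near-zero entropy, while a uniform distribution over K classes yields log(K); thresholding such a map gives candidate regions for expert correction.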
2. J.M.H. Noothout, B.D. de Vos, J.M. Wolterink, E.M. Postma, P.A.M. Smeets, R.A.P. Takx, T. Leiner, M.A. Viergever, I. Išgum
Deep learning-based regression and classification for automatic landmark localization in medical images Journal Article
IEEE Transactions on Medical Imaging, 39 (12), pp. 4011-4022, 2020, ISSN: 1558-254X.
@article{Noothout2020,
title = {Deep learning-based regression and classification for automatic landmark localization in medical images},
author = {J.M.H. Noothout, B.D. de Vos, J.M. Wolterink, E.M. Postma, P.A.M. Smeets, R.A.P. Takx, T. Leiner, M.A. Viergever, I. Išgum},
url = {https://arxiv.org/pdf/2007.05295.pdf},
doi = {10.1109/TMI.2020.3009002},
issn = {1558-254X},
year = {2020},
date = {2020-07-09},
journal = {IEEE Transactions on Medical Imaging},
volume = {39},
number = {12},
pages = {4011-4022},
abstract = {In this study, we propose a fast and accurate method to automatically localize anatomical landmarks in medical images. We employ a global-to-local localization approach using fully convolutional neural networks (FCNNs). First, a global FCNN localizes multiple landmarks through the analysis of image patches, performing regression and classification simultaneously. In regression, displacement vectors pointing from the center of image patches towards landmark locations are determined. In classification, presence of landmarks of interest in the patch is established. Global landmark locations are obtained by averaging the predicted displacement vectors, where the contribution of each displacement vector is weighted by the posterior classification probability of the patch that it is pointing from. Subsequently, for each landmark localized with global localization, local analysis is performed. Specialized FCNNs refine the global landmark locations by analyzing local sub-images in a similar manner, i.e. by performing regression and classification simultaneously and combining the results. Evaluation was performed through localization of 8 anatomical landmarks in CCTA scans, 2 landmarks in olfactory MR scans, and 19 landmarks in cephalometric X-rays. We demonstrate that the method performs similarly to a second observer and is able to localize landmarks in a diverse set of medical images, differing in image modality, image dimensionality, and anatomical coverage. },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
3. G. Litjens, F. Ciompi, J.M. Wolterink, B.D. de Vos, T. Leiner, J. Teuwen, I. Išgum
State-of-the-art deep learning in cardiovascular image analysis Journal Article
JACC: Cardiovascular Imaging, 12 (8 Part 1), pp. 1549-1565, 2019.
@article{Litjens2019,
title = {State-of-the-art deep learning in cardiovascular image analysis},
author = {G. Litjens, F. Ciompi, J.M. Wolterink, B.D. de Vos, T. Leiner, J. Teuwen, I. Išgum},
url = {https://doi.org/10.1016/j.jcmg.2019.06.009},
year = {2019},
date = {2019-08-05},
journal = {JACC: Cardiovascular Imaging},
volume = {12},
number = {8 Part 1},
pages = {1549-1565},
abstract = {Cardiovascular imaging is going to change substantially in the next decade, fueled by the deep learning revolution. For medical professionals, it is important to keep track of these developments to ensure that deep learning can have meaningful impact on clinical practice. This review aims to be a stepping stone in this process. The general concepts underlying most successful deep learning algorithms are explained, and an overview of the state-of-the-art deep learning in cardiovascular imaging is provided. This review discusses >80 papers, covering modalities ranging from cardiac magnetic resonance, computed tomography, and single-photon emission computed tomography, to intravascular optical coherence tomography and echocardiography. Many different machine learning algorithms were used throughout these papers, with the most common being convolutional neural networks. Recent algorithms such as generative adversarial models were also used. The potential implications of deep learning algorithms on clinical practice, now and in the near future, are discussed.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
4. B.D. de Vos, F.F. Berendsen, M.A. Viergever, H. Sokooti, M. Staring, I. Išgum
A deep learning framework for unsupervised affine and deformable image registration Journal Article
Medical Image Analysis, 52, pp. 128-143, 2019.
@article{deVos2019b,
title = {A deep learning framework for unsupervised affine and deformable image registration},
author = {B.D. de Vos and F.F. Berendsen and M.A. Viergever and H. Sokooti and M. Staring and I. Išgum},
url = {http://www.sciencedirect.com/science/article/pii/S1361841518300495},
year = {2019},
date = {2019-02-21},
journal = {Medical Image Analysis},
volume = {52},
pages = {128-143},
abstract = {Image registration, the process of aligning two or more images, is the core technique of many (semi-)automatic medical image analysis tasks. Recent studies have shown that deep learning methods, notably convolutional neural networks (ConvNets), can be used for image registration. Thus far training of ConvNets for registration was supervised using predefined example registrations. However, obtaining example registrations is not trivial. To circumvent the need for predefined examples, and thereby to increase convenience of training ConvNets for image registration, we propose the Deep Learning Image Registration (DLIR) framework for unsupervised affine and deformable image registration. In the DLIR framework ConvNets are trained for image registration by exploiting image similarity analogous to conventional intensity-based image registration. After a ConvNet has been trained with the DLIR framework, it can be used to register pairs of unseen images in one shot. We propose flexible ConvNets designs for affine image registration and for deformable image registration. By stacking multiple of these ConvNets into a larger architecture, we are able to perform coarse-to-fine image registration. We show for registration of cardiac cine MRI and registration of chest CT that performance of the DLIR framework is comparable to conventional image registration while being several orders of magnitude faster.},
keywords = {Deep learning, Unsupervised learning, Affine image registration, Deformable image registration, Cardiac cine MRI, Chest CT},
pubstate = {published},
tppubtype = {article}
}
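The DLIR framework trains registration ConvNets by maximizing an intensity-based image similarity, as conventional registration does. One such similarity, normalized cross-correlation, can be sketched for flattened images; this is only an illustration of the kind of loss involved, with an assumed list representation, not the paper's implementation.

```python
import math

def ncc(a, b):
    """Normalised cross-correlation between two equally sized images,
    given as flattened 1-D lists of intensities.

    Returns a value in [-1, 1]; unsupervised registration training
    would move the transform so as to maximise this similarity.
    """
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)
```

Because the measure compares intensity patterns rather than requiring ground-truth deformations, no predefined example registrations are needed, which is the core idea of the unsupervised setup described above.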
5. B.D. de Vos, J.M. Wolterink, T. Leiner, P.A. de Jong, N. Lessmann, I. Išgum
Direct automatic coronary calcium scoring in cardiac and chest CT Journal Article
IEEE Transactions on Medical Imaging, 34, pp. 123-136, 2019.
@article{deVos2019,
title = {Direct automatic coronary calcium scoring in cardiac and chest CT},
author = {B.D. de Vos, J.M. Wolterink, T. Leiner, P.A. de Jong, N. Lessmann, I. Išgum},
url = {https://ieeexplore.ieee.org/abstract/document/8643342
https://arxiv.org/abs/1902.05408},
year = {2019},
date = {2019-02-21},
journal = {IEEE Transactions on Medical Imaging},
volume = {34},
pages = {123-136},
abstract = {Cardiovascular disease (CVD) is the global leading cause of death. A strong risk factor for CVD events is the amount of coronary artery calcium (CAC). To meet demands of the increasing interest in quantification of CAC, i.e. coronary calcium scoring, especially as an unrequested finding for screening and research, automatic methods have been proposed. Current automatic calcium scoring methods are relatively computationally expensive and only provide scores for one type of CT. To address this, we propose a computationally efficient method that employs two ConvNets: the first performs registration to align the fields of view of input CTs and the second performs direct regression of the calcium score, thereby circumventing time-consuming intermediate CAC segmentation. Optional decision feedback provides insight in the regions that contributed to the calcium score. Experiments were performed using 903 cardiac CT and 1,687 chest CT scans. The method predicted calcium scores in less than 0.3 s. Intra-class correlation coefficient between predicted and manual calcium scores was 0.98 for both cardiac and chest CT. The method showed almost perfect agreement between automatic and manual CVD risk categorization in both datasets, with a linearly weighted Cohen's kappa of 0.95 in cardiac CT and 0.93 in chest CT. Performance is similar to that of state-of-the-art methods, but the proposed method is hundreds of times faster. By providing visual feedback, insight is given in the decision process, making it readily implementable in clinical and research settings.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
6. J. Šprem, B.D. de Vos, N. Lessmann, P.A. de Jong, M.A. Viergever, I. Išgum
Impact of automatically detected motion artifacts on coronary calcium scoring in chest CT Journal Article
Journal of Medical Imaging, 5 (4), pp. 044007, 2018.
@article{Šprem2018,
title = {Impact of automatically detected motion artifacts on coronary calcium scoring in chest CT},
author = {J. Šprem and B.D. de Vos and N. Lessmann and P.A. de Jong and M.A. Viergever and I. Išgum},
url = {https://doi.org/10.1117/1.JMI.5.4.044007},
year = {2018},
date = {2018-12-11},
journal = {Journal of Medical Imaging},
volume = {5},
number = {4},
pages = {044007},
abstract = {The amount of coronary artery calcification (CAC) quantified in CT scans enables prediction of cardiovascular disease (CVD) risk. However, interscan variability of CAC quantification is high, especially in scans made without ECG synchronization. We propose a method for automatic detection of CACs that are severely affected by cardiac motion. Subsequently, we evaluate the impact of such CACs on CAC quantification and CVD risk determination. This study includes 1000 baseline and 585 one year follow-up low-dose chest CTs from the National Lung Screening Trial. 415 baseline scans are used to train and evaluate a convolutional neural network that identifies observer determined CACs affected by severe motion artifacts. Thereafter, 585 paired scans acquired at baseline and follow-up were used to evaluate the impact of severe motion artifacts on CAC quantification and risk categorization. Based on the CAC amount the scans were categorized into four standard CVD risk categories. The method identified CACs affected by severe motion artifacts with 85.2% accuracy. Moreover, reproducibility of CAC scores in scan pairs is higher in scans containing mostly CACs not affected by severe cardiac motion. Hence, the proposed method enables identification of scans affected by severe cardiac motion where CAC quantification may not be reproducible.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
7. N. Lessmann, B. van Ginneken, M. Zreik, P.A. de Jong, B.D. de Vos, M.A. Viergever, I. Išgum
Automatic calcium scoring in low-dose chest CT using deep neural networks with dilated convolutions Journal Article
IEEE Transactions on Medical Imaging, 37 (2), pp. 615-625, 2018.
@article{Lessmann2017,
title = {Automatic calcium scoring in low-dose chest CT using deep neural networks with dilated convolutions},
author = {N. Lessmann, B. van Ginneken, M. Zreik, P.A. de Jong, B.D. de Vos, M.A. Viergever, I. Išgum},
url = {https://arxiv.org/pdf/1711.00349.pdf},
year = {2018},
date = {2018-02-01},
journal = {IEEE Transactions on Medical Imaging},
volume = {37},
number = {2},
pages = {615-625},
abstract = {Heavy smokers undergoing screening with low-dose chest CT are affected by cardiovascular disease as much as by lung cancer. Low-dose chest CT scans acquired in screening enable quantification of atherosclerotic calcifications and thus enable identification of subjects at increased cardiovascular risk. This paper presents a method for automatic detection of coronary artery, thoracic aorta and cardiac valve calcifications in low-dose chest CT using two consecutive convolutional neural networks. The first network identifies and labels potential calcifications according to their anatomical location and the second network identifies true calcifications among the detected candidates. This method was trained and evaluated on a set of 1744 CT scans from the National Lung Screening Trial. To determine whether any reconstruction or only images reconstructed with soft tissue filters can be used for calcification detection, we evaluated the method on soft and medium/sharp filter reconstructions separately. On soft filter reconstructions, the method achieved F1 scores of 0.89, 0.89, 0.67, and 0.55 for coronary artery, thoracic aorta, aortic valve and mitral valve calcifications, respectively. On sharp filter reconstructions, the F1 scores were 0.84, 0.81, 0.64, and 0.66, respectively. Linearly weighted kappa coefficients for risk category assignment based on per subject coronary artery calcium were 0.91 and 0.90 for soft and sharp filter reconstructions, respectively. These results demonstrate that the presented method enables reliable automatic cardiovascular risk assessment in all low-dose chest CT scans acquired for lung cancer screening.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
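Several of the papers above report agreement between automatic and manual risk categorization as a linearly weighted Cohen's kappa. For ordered categories this statistic can be sketched as follows; a minimal illustration with assumed integer-coded labels, not any paper's implementation.

```python
def linear_weighted_kappa(a, b, n_cat):
    """Linearly weighted Cohen's kappa between two raters.

    a, b:  lists of category assignments, coded 0 .. n_cat-1, where
           the categories are ordered (e.g. CVD risk categories).
    Disagreements are weighted by |i - j|, so adjacent-category
    disagreement is penalised less than distant disagreement.
    """
    n = len(a)
    # observed mean weighted disagreement
    obs = sum(abs(i - j) for i, j in zip(a, b)) / n
    # chance-expected disagreement from the two marginal distributions
    pa = [a.count(k) / n for k in range(n_cat)]
    pb = [b.count(k) / n for k in range(n_cat)]
    exp = sum(abs(i - j) * pa[i] * pb[j]
              for i in range(n_cat) for j in range(n_cat))
    return 1.0 - obs / exp
```

Perfect agreement gives kappa = 1, chance-level agreement gives 0, which is why values around 0.9, as reported above, indicate near-interchangeable automatic and manual categorization.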
8. I. Išgum, B.D. de Vos, J.M. Wolterink, D. Dey, D.S. Berman, M. Rubeaux, T. Leiner, P.J. Slomka
Erratum to: Automatic determination of cardiovascular risk by CT attenuation correction maps in Rb-82 PET/CT Journal Article
Journal of Nuclear Cardiology, 25 (6), pp. 2143, 2017.
@article{Isgum2017b,
title = {Erratum to: Automatic determination of cardiovascular risk by CT attenuation correction maps in Rb-82 PET/CT},
author = {I. Isgum, B.D. de Vos, J.M. Wolterink, D. Dey, D.S. Berman, M. Rubeaux, T. Leiner, P. J. Slomka},
url = {http://dx.doi.org/10.1007/s12350-017-0946-4},
year = {2017},
date = {2017-06-06},
journal = {Journal of Nuclear Cardiology},
volume = {25},
number = {6},
pages = {2143},
abstract = {Regrettably an error was introduced in Table 3 during the article’s production. The very first cell (row: Very low 0; column: Very low) should read ‘12’ and not ‘21’ as originally published.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
9. I. Išgum, B.D. de Vos, J.M. Wolterink, D. Dey, D.S. Berman, M. Rubeaux, T. Leiner, P.J. Slomka
Automatic determination of cardiovascular risk by CT attenuation correction maps in Rb-82 PET/CT Journal Article
Journal of Nuclear Cardiology, 25 (6), pp. 2133-2142, 2017.
@article{Isgum2017,
title = {Automatic determination of cardiovascular risk by CT attenuation correction maps in Rb-82 PET/CT},
author = {I. Isgum, B.D. de Vos, J.M. Wolterink, D. Dey, D.S. Berman, M. Rubeaux, T. Leiner, P.J. Slomka},
url = {http://dx.doi.org/10.1007/s12350-017-0866-3},
year = {2017},
date = {2017-03-15},
journal = {Journal of Nuclear Cardiology},
volume = {25},
number = {6},
pages = {2133-2142},
abstract = {BACKGROUND: We investigated fully automatic coronary artery calcium (CAC) scoring and cardiovascular disease (CVD) risk categorization from CT attenuation correction (CTAC) acquired at rest and stress during cardiac PET/CT and compared it with manual annotations in CTAC and with dedicated calcium scoring CT (CSCT). METHODS AND RESULTS: We included 133 consecutive patients undergoing myocardial perfusion 82Rb PET/CT with the acquisition of low-dose CTAC at rest and stress. Additionally, a dedicated CSCT was performed for all patients. Manual CAC annotations in CTAC and CSCT provided the reference standard. In CTAC, CAC was scored automatically using a previously developed machine learning algorithm. Patients were assigned to a CVD risk category based on their Agatston score (0, 1-10, 11-100, 101-400, >400). Agreement in CVD risk categorization between manual and automatic scoring in CTAC at rest and stress resulted in Cohen's linearly weighted κ of 0.85 and 0.89, respectively. The agreement between CSCT and CTAC at rest resulted in κ of 0.82 and 0.74, using manual and automatic scoring, respectively. For CTAC at stress, these were 0.79 and 0.70, respectively. CONCLUSION: Automatic CAC scoring from CTAC PET/CT may allow routine CVD risk assessment from the CTAC component of PET/CT without any additional radiation dose or scan time.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
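The Agatston-score bins used for risk categorization in this paper (0, 1-10, 11-100, 101-400, >400) map directly to a small lookup. A sketch under assumed category labels (the label strings are illustrative, not taken from the paper):

```python
def agatston_risk_category(score):
    """Map an Agatston calcium score to one of the five risk bins
    used in the paper: 0, 1-10, 11-100, 101-400, >400.

    The label names are illustrative placeholders.
    """
    if score < 0:
        raise ValueError("Agatston score cannot be negative")
    for upper, label in ((0, "very low"), (10, "low"),
                         (100, "moderate"), (400, "high")):
        if score <= upper:
            return label
    return "very high"
```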
10. B.D. de Vos, J.M. Wolterink, P.A. de Jong, T. Leiner, M.A. Viergever, I. Išgum
ConvNet-based localization of anatomical structures in 3D medical images Journal Article
IEEE Transactions on Medical Imaging, 36 (7), pp. 1470-1481, 2017.
@article{devos2017,
title = {ConvNet-based localization of anatomical structures in 3D medical images},
author = {B.D. de Vos, J.M. Wolterink, P.A. de Jong, T. Leiner, M.A. Viergever, I. Isgum},
year = {2017},
date = {2017-02-17},
journal = {IEEE Transactions on Medical Imaging},
volume = {36},
number = {7},
pages = {1470-1481},
abstract = {Localization of anatomical structures is a prerequisite for many tasks in medical image analysis. We propose a method for automatic localization of one or more anatomical structures in 3D medical images through detection of their presence in 2D image slices using a convolutional neural network (ConvNet). A single ConvNet is trained to detect presence of the anatomical structure of interest in axial, coronal, and sagittal slices extracted from a 3D image. To allow the ConvNet to analyze slices of different sizes, spatial pyramid pooling is applied. After detection, 3D bounding boxes are created by combining the output of the ConvNet in all slices. In the experiments 200 chest CT, 100 cardiac CT angiography (CTA), and 100 abdomen CT scans were used. The heart, ascending aorta, aortic arch, and descending aorta were localized in chest CT scans, the left cardiac ventricle in cardiac CTA scans, and the liver in abdomen CT scans. Localization was evaluated using the distances between automatically and manually defined reference bounding box centroids and walls. The best results were achieved in localization of structures with clearly defined boundaries (e.g. aortic arch) and the worst when the structure boundary was not clearly visible (e.g. liver). The method was more robust and accurate when localizing multiple structures.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
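The core step of this paper — turning per-slice presence detections along the three orthogonal axes into a 3D bounding box — can be sketched in a few lines. This is a hypothetical illustration, not the authors' code; the function name, threshold, and toy scores are assumptions:

```python
# Hypothetical sketch: assumes a ConvNet has already produced a presence
# probability for each slice along each of the three orthogonal axes.

def bounding_box_from_slice_detections(axial, coronal, sagittal, threshold=0.5):
    """Combine per-slice presence scores into a 3D bounding box.

    Each argument is a list of presence probabilities, one per slice
    along that axis. Returns ((z0, z1), (y0, y1), (x0, x1)) slice-index
    ranges, or None if the structure is not detected on some axis.
    """
    def extent(scores):
        # Indices of slices in which the structure was detected.
        hits = [i for i, p in enumerate(scores) if p >= threshold]
        return (min(hits), max(hits)) if hits else None

    box = tuple(extent(s) for s in (axial, coronal, sagittal))
    return None if any(e is None for e in box) else box

# Toy example: structure detected on axial slices 2-4, coronal 1-3, sagittal 0-2.
box = bounding_box_from_slice_detections(
    [0.1, 0.2, 0.9, 0.8, 0.7, 0.1],
    [0.3, 0.6, 0.9, 0.7, 0.2],
    [0.8, 0.9, 0.6, 0.4],
)
# box == ((2, 4), (1, 3), (0, 2))
```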
Localization of anatomical structures is a prerequisite for many tasks in medical image analysis. We propose a method for automatic localization of one or more anatomical structures in 3D medical images through detection of their presence in 2D image slices using a convolutional neural network (ConvNet). A single ConvNet is trained to detect presence of the anatomical structure of interest in axial, coronal, and sagittal slices extracted from a 3D image. To allow the ConvNet to analyze slices of different sizes, spatial pyramid pooling is applied. After detection, 3D bounding boxes are created by combining the output of the ConvNet in all slices. In the experiments, 200 chest CT, 100 cardiac CT angiography (CTA), and 100 abdomen CT scans were used. The heart, ascending aorta, aortic arch, and descending aorta were localized in chest CT scans, the left cardiac ventricle in cardiac CTA scans, and the liver in abdomen CT scans. Localization was evaluated using the distances between automatically and manually defined reference bounding box centroids and walls. The best results were achieved in localization of structures with clearly defined boundaries (e.g. aortic arch) and the worst when the structure boundary was not clearly visible (e.g. liver). The method was more robust and accurate when localizing multiple structures.
11. S.A.M. Gernaat, I. Isgum, B.D. de Vos, R.A.P. Takx, D.A. Young Afat, N. Rijnberg, D.E. Grobbee, Y. van der Graaf, P.A. de Jong, T. Leiner, H.J.G.D. van den Bongard, J.P. Pignol, H.M. Verkooijen
Automatic coronary artery calcium scoring on radiotherapy planning CT scans of breast cancer patients: reproducibility and association with traditional cardiovascular risk factors Journal Article
PLOS ONE, 11 (12), pp. e0167925, 2016.
@article{Gern16,
title = {Automatic coronary artery calcium scoring on radiotherapy planning CT scans of breast cancer patients: reproducibility and association with traditional cardiovascular risk factors},
author = {S.A.M. Gernaat, I. Isgum, B.D. de Vos, R.A.P. Takx, D.A. Young Afat, N. Rijnberg, D.E. Grobbee, Y. van der Graaf, P.A. de Jong, T. Leiner, H.J.G.D. van den Bongard, J.P. Pignol, H.M. Verkooijen},
year = {2016},
date = {2016-12-09},
journal = {PLOS ONE},
volume = {11},
number = {12},
pages = {e0167925},
abstract = {OBJECTIVES: Coronary artery calcium (CAC) is a strong and independent predictor of cardiovascular disease (CVD) risk. This study assesses reproducibility of automatic CAC scoring on radiotherapy planning computed tomography (CT) scans of breast cancer patients, and examines its association with traditional cardiovascular risk factors. METHODS: This study included 561 breast cancer patients undergoing radiotherapy between 2013 and 2015. CAC was automatically scored with an algorithm using supervised pattern recognition, expressed as Agatston scores and categorized into five categories (0, 1-10, 11-100, 101-400, >400). Reproducibility between automatic and manual expert scoring was assessed in 79 patients with automatically determined CAC above zero and 84 randomly selected patients without automatically determined CAC. Interscan reproducibility of automatic scoring was assessed in 294 patients having received two scans (82% on the same day). Association between CAC and CVD risk factors was assessed in 36 patients with CAC scores >100, 72 randomly selected patients with scores 1-100, and 72 randomly selected patients without CAC. Reliability was assessed with linearly weighted kappa and agreement with proportional agreement. RESULTS: 134 out of 561 (24%) patients had a CAC score above zero. Reliability of CVD risk categorization between automatic and manual scoring was 0.80 (95% Confidence Interval (CI): 0.74-0.87), and slightly higher for scans with breath-hold. Agreement was 0.79 (95% CI: 0.72-0.85). Interscan reliability was 0.61 (95% CI: 0.50-0.72) with an agreement of 0.84 (95% CI: 0.80-0.89). Ten out of 36 (27.8%) patients with CAC scores above 100 did not have other cardiovascular risk factors. CONCLUSIONS: Automatic CAC scoring on radiotherapy planning CT scans is a reliable method to assess CVD risk based on Agatston scores. One in four breast cancer patients planned for radiotherapy has an elevated CAC score. One in three patients with high CAC scores do not have other CVD risk factors and would not have been identified as high risk.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
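The five-way risk categorization used in this abstract (Agatston bins 0, 1-10, 11-100, 101-400, >400) is simple to make concrete. A minimal sketch; the function name and string labels are illustrative, not from the paper:

```python
# Hedged sketch of the CVD risk categorization described in the abstract:
# an Agatston calcium score is binned into one of five categories.

def cvd_risk_category(agatston):
    """Map an Agatston score to one of the five risk categories."""
    if agatston == 0:
        return "0"
    if agatston <= 10:
        return "1-10"
    if agatston <= 100:
        return "11-100"
    if agatston <= 400:
        return "101-400"
    return ">400"

print([cvd_risk_category(s) for s in (0, 7, 150, 999)])
# prints ['0', '1-10', '101-400', '>400']
```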
OBJECTIVES: Coronary artery calcium (CAC) is a strong and independent predictor of cardiovascular disease (CVD) risk. This study assesses reproducibility of automatic CAC scoring on radiotherapy planning computed tomography (CT) scans of breast cancer patients, and examines its association with traditional cardiovascular risk factors. METHODS: This study included 561 breast cancer patients undergoing radiotherapy between 2013 and 2015. CAC was automatically scored with an algorithm using supervised pattern recognition, expressed as Agatston scores and categorized into five categories (0, 1-10, 11-100, 101-400, >400). Reproducibility between automatic and manual expert scoring was assessed in 79 patients with automatically determined CAC above zero and 84 randomly selected patients without automatically determined CAC. Interscan reproducibility of automatic scoring was assessed in 294 patients having received two scans (82% on the same day). Association between CAC and CVD risk factors was assessed in 36 patients with CAC scores >100, 72 randomly selected patients with scores 1-100, and 72 randomly selected patients without CAC. Reliability was assessed with linearly weighted kappa and agreement with proportional agreement. RESULTS: 134 out of 561 (24%) patients had a CAC score above zero. Reliability of CVD risk categorization between automatic and manual scoring was 0.80 (95% Confidence Interval (CI): 0.74-0.87), and slightly higher for scans with breath-hold. Agreement was 0.79 (95% CI: 0.72-0.85). Interscan reliability was 0.61 (95% CI: 0.50-0.72) with an agreement of 0.84 (95% CI: 0.80-0.89). Ten out of 36 (27.8%) patients with CAC scores above 100 did not have other cardiovascular risk factors. CONCLUSIONS: Automatic CAC scoring on radiotherapy planning CT scans is a reliable method to assess CVD risk based on Agatston scores. One in four breast cancer patients planned for radiotherapy has an elevated CAC score. One in three patients with high CAC scores do not have other CVD risk factors and would not have been identified as high risk.
12. J.M. Wolterink, T. Leiner, B.D. de Vos, R.W. van Hamersvelt, M.A. Viergever, I. Isgum
Automatic coronary artery calcium scoring in cardiac CT angiography using paired convolutional neural networks Journal Article
Medical Image Analysis, 34 , pp. 123-136, 2016.
@article{Wolt16b,
title = {Automatic coronary artery calcium scoring in cardiac CT angiography using paired convolutional neural networks},
author = {J.M. Wolterink, T. Leiner, B.D. de Vos, R.W. van Hamersvelt, M.A. Viergever, I. Isgum},
year = {2016},
date = {2016-05-11},
journal = {Medical Image Analysis},
volume = {34},
pages = {123-136},
abstract = {The amount of coronary artery calcification (CAC) is a strong and independent predictor of cardiovascular events. CAC is clinically quantified in cardiac calcium scoring CT (CSCT), but it has been shown that cardiac CT angiography (CCTA) may also be used for this purpose. We present a method for automatic CAC quantification in CCTA. This method uses supervised learning to directly identify and quantify CAC without a need for coronary artery extraction commonly used in existing methods. The study included cardiac CT exams of 250 patients for whom both a CCTA and a CSCT scan were available. To restrict the volume-of-interest for analysis, a bounding box around the heart is automatically determined. The bounding box detection algorithm employs a combination of three ConvNets, where each detects the heart in a different orthogonal plane (axial, sagittal, coronal). These ConvNets were trained using 50 cardiac CT exams. In the remaining 200 exams, a reference standard for CAC was defined in CSCT and CCTA. Out of these, 100 CCTA scans were used for training, and the remaining 100 for evaluation of a voxel classification method for CAC identification. The method uses ConvPairs, pairs of convolutional neural networks (ConvNets). The first ConvNet in a pair identifies voxels likely to be CAC, thereby discarding the majority of non-CAC-like voxels such as lung and fatty tissue. The identified CAC-like voxels are further classified by the second ConvNet in the pair, which distinguishes between CAC and CAC-like negatives. Given the different task of each ConvNet, they share their architecture, but not their weights. Input patches are either 2.5D or 3D. The ConvNets are purely convolutional, i.e. no pooling layers are present and fully connected layers are implemented as convolutions, thereby allowing efficient voxel classification. 
The performance of individual 2.5D and 3D ConvPairs with input sizes of 15 and 25 voxels, as well as the performance of ensembles of these ConvPairs, were evaluated by a comparison with reference annotations in CCTA and CSCT. In all cases, ensembles of ConvPairs outperformed their individual members. The best performing individual ConvPair detected 72% of lesions in the test set, with on average 0.85 false positive (FP) errors per scan. The best performing ensemble combined all ConvPairs and obtained a sensitivity of 71% at 0.48 FP errors per scan. For this ensemble, agreement with the reference mass score in CSCT was excellent (ICC 0.944 [0.918–0.962]). Additionally, based on the Agatston score in CCTA, this ensemble assigned 83% of patients to the same cardiovascular risk category as reference CSCT. In conclusion, CAC can be accurately automatically identified and quantified in CCTA using the proposed pattern recognition method. This might obviate the need to acquire a dedicated CSCT scan for CAC scoring, which is regularly acquired prior to a CCTA, and thus reduce the CT radiation dose received by patients.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
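The two-stage ConvPair idea described in this abstract — a first network discards clearly negative voxels, and a second network with the same architecture but separate weights settles the remaining CAC-like candidates — can be sketched with plain callables standing in for the trained ConvNets. This is an illustrative outline under those assumptions, not the authors' implementation:

```python
# Illustrative two-stage cascade; the predicates below are toy stand-ins
# for the two trained ConvNets (same architecture, separate weights).

def convpair_classify(voxels, first_net, second_net):
    """Return indices of voxels the cascade labels as CAC."""
    # Stage 1: keep only voxels the first network flags as CAC-like,
    # discarding the bulk of obvious negatives (e.g. lung, fat).
    candidates = [i for i, v in enumerate(voxels) if first_net(v)]
    # Stage 2: the second network separates CAC from CAC-like negatives.
    return [i for i in candidates if second_net(voxels[i])]

# Toy stand-ins on raw intensities: above 130 HU counts as CAC-like,
# above 200 HU as CAC (thresholds are made up for the example).
voxels = [50, 140, 250, 90, 180, 300]
cac = convpair_classify(voxels, lambda v: v > 130, lambda v: v > 200)
# cac == [2, 5]
```

The design point the paper makes is efficiency: the cheap first stage runs on every voxel, while the second stage only sees the small candidate set.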
The amount of coronary artery calcification (CAC) is a strong and independent predictor of cardiovascular events. CAC is clinically quantified in cardiac calcium scoring CT (CSCT), but it has been shown that cardiac CT angiography (CCTA) may also be used for this purpose. We present a method for automatic CAC quantification in CCTA. This method uses supervised learning to directly identify and quantify CAC without a need for coronary artery extraction commonly used in existing methods. The study included cardiac CT exams of 250 patients for whom both a CCTA and a CSCT scan were available. To restrict the volume-of-interest for analysis, a bounding box around the heart is automatically determined. The bounding box detection algorithm employs a combination of three ConvNets, where each detects the heart in a different orthogonal plane (axial, sagittal, coronal). These ConvNets were trained using 50 cardiac CT exams. In the remaining 200 exams, a reference standard for CAC was defined in CSCT and CCTA. Out of these, 100 CCTA scans were used for training, and the remaining 100 for evaluation of a voxel classification method for CAC identification. The method uses ConvPairs, pairs of convolutional neural networks (ConvNets). The first ConvNet in a pair identifies voxels likely to be CAC, thereby discarding the majority of non-CAC-like voxels such as lung and fatty tissue. The identified CAC-like voxels are further classified by the second ConvNet in the pair, which distinguishes between CAC and CAC-like negatives. Given the different task of each ConvNet, they share their architecture, but not their weights. Input patches are either 2.5D or 3D. The ConvNets are purely convolutional, i.e. no pooling layers are present and fully connected layers are implemented as convolutions, thereby allowing efficient voxel classification. 
The performance of individual 2.5D and 3D ConvPairs with input sizes of 15 and 25 voxels, as well as the performance of ensembles of these ConvPairs, were evaluated by a comparison with reference annotations in CCTA and CSCT. In all cases, ensembles of ConvPairs outperformed their individual members. The best performing individual ConvPair detected 72% of lesions in the test set, with on average 0.85 false positive (FP) errors per scan. The best performing ensemble combined all ConvPairs and obtained a sensitivity of 71% at 0.48 FP errors per scan. For this ensemble, agreement with the reference mass score in CSCT was excellent (ICC 0.944 [0.918–0.962]). Additionally, based on the Agatston score in CCTA, this ensemble assigned 83% of patients to the same cardiovascular risk category as reference CSCT. In conclusion, CAC can be accurately automatically identified and quantified in CCTA using the proposed pattern recognition method. This might obviate the need to acquire a dedicated CSCT scan for CAC scoring, which is regularly acquired prior to a CCTA, and thus reduce the CT radiation dose received by patients.
13. J.M. Wolterink, T. Leiner, B.D. de Vos, J-L. Coatrieux, B.M. Kelm, S. Kondo, R.A. Salgado, R. Shahzad, H. Shu, M. Snoeren, R.A.P. Takx, L.J. van Vliet, T. van Walsum, T.P. Willems, G. Yang, Y. Zheng, M.A. Viergever, I. Isgum
An evaluation of automatic coronary artery calcium scoring methods with cardiac CT using the orCaScore framework Journal Article
Medical Physics, 43 (5), pp. 2361, 2016.
@article{Wolt16,
title = {An evaluation of automatic coronary artery calcium scoring methods with cardiac CT using the orCaScore framework},
author = {J.M. Wolterink, T. Leiner, B.D. de Vos, J-L. Coatrieux, B.M. Kelm, S. Kondo, R.A. Salgado, R. Shahzad, H. Shu, M. Snoeren, R.A.P. Takx, L.J. van Vliet, T. van Walsum, T.P. Willems, G. Yang, Y. Zheng, M.A. Viergever, I. Isgum},
year = {2016},
date = {2016-03-29},
journal = {Medical Physics},
volume = {43},
number = {5},
pages = {2361},
abstract = {Purpose: The amount of coronary artery calcification (CAC) is a strong and independent predictor of cardiovascular disease (CVD) events. In clinical practice, CAC is manually identified and automatically quantified in cardiac CT using commercially available software. This is a tedious and time-consuming process in large-scale studies. Therefore, a number of automatic methods that require no interaction and semiautomatic methods that require very limited interaction for the identification of CAC in cardiac CT have been proposed. Thus far, a comparison of their performance has been lacking. The objective of this study was to perform an independent evaluation of (semi)automatic methods for CAC scoring in cardiac CT using a publicly available standardized framework. Methods: Cardiac CT exams of 72 patients distributed over four CVD risk categories were provided for (semi)automatic CAC scoring. Each exam consisted of a noncontrast-enhanced calcium scoring CT (CSCT) and a corresponding coronary CT angiography (CCTA) scan. The exams were acquired in four different hospitals using state-of-the-art equipment from four major CT scanner vendors. The data were divided into 32 training exams and 40 test exams. A reference standard for CAC in CSCT was defined by consensus of two experts following a clinical protocol. The framework organizers evaluated the performance of (semi)automatic methods on test CSCT scans, per lesion, artery, and patient. Results: Five (semi)automatic methods were evaluated. Four methods used both CSCT and CCTA to identify CAC, and one method used only CSCT. The evaluated methods correctly detected between 52% and 94% of CAC lesions with positive predictive values between 65% and 96%. Lesions in distal coronary arteries were most commonly missed and aortic calcifications close to the coronary ostia were the most common false positive errors. The majority (between 88% and 98%) of correctly identified CAC lesions were assigned to the correct artery. Linearly weighted Cohen’s kappa for patient CVD risk categorization by the evaluated methods ranged from 0.80 to 1.00. Conclusions: A publicly available standardized framework for the evaluation of (semi)automatic methods for CAC identification in cardiac CT is described. An evaluation of five (semi)automatic methods within this framework shows that automatic per patient CVD risk categorization is feasible. CAC lesions at ambiguous locations such as the coronary ostia remain challenging, but their detection had limited impact on CVD risk determination.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Purpose: The amount of coronary artery calcification (CAC) is a strong and independent predictor of cardiovascular disease (CVD) events. In clinical practice, CAC is manually identified and automatically quantified in cardiac CT using commercially available software. This is a tedious and time-consuming process in large-scale studies. Therefore, a number of automatic methods that require no interaction and semiautomatic methods that require very limited interaction for the identification of CAC in cardiac CT have been proposed. Thus far, a comparison of their performance has been lacking. The objective of this study was to perform an independent evaluation of (semi)automatic methods for CAC scoring in cardiac CT using a publicly available standardized framework. Methods: Cardiac CT exams of 72 patients distributed over four CVD risk categories were provided for (semi)automatic CAC scoring. Each exam consisted of a noncontrast-enhanced calcium scoring CT (CSCT) and a corresponding coronary CT angiography (CCTA) scan. The exams were acquired in four different hospitals using state-of-the-art equipment from four major CT scanner vendors. The data were divided into 32 training exams and 40 test exams. A reference standard for CAC in CSCT was defined by consensus of two experts following a clinical protocol. The framework organizers evaluated the performance of (semi)automatic methods on test CSCT scans, per lesion, artery, and patient. Results: Five (semi)automatic methods were evaluated. Four methods used both CSCT and CCTA to identify CAC, and one method used only CSCT. The evaluated methods correctly detected between 52% and 94% of CAC lesions with positive predictive values between 65% and 96%. Lesions in distal coronary arteries were most commonly missed and aortic calcifications close to the coronary ostia were the most common false positive errors. The majority (between 88% and 98%) of correctly identified CAC lesions were assigned to the correct artery. Linearly weighted Cohen’s kappa for patient CVD risk categorization by the evaluated methods ranged from 0.80 to 1.00. Conclusions: A publicly available standardized framework for the evaluation of (semi)automatic methods for CAC identification in cardiac CT is described. An evaluation of five (semi)automatic methods within this framework shows that automatic per patient CVD risk categorization is feasible. CAC lesions at ambiguous locations such as the coronary ostia remain challenging, but their detection had limited impact on CVD risk determination.
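Linearly weighted Cohen's kappa, the agreement metric reported in several of the abstracts above for ordinal risk categories, can be written out in a few lines. A minimal pure-Python sketch, assuming integer category labels 0..k-1 (the common 1/(k-1) weight normalization cancels between numerator and denominator):

```python
# Hedged illustration of linearly weighted Cohen's kappa for two raters'
# ordinal labels (e.g. the five CVD risk categories, coded 0..4).

def weighted_kappa(a, b, k):
    """Linearly weighted Cohen's kappa between label lists a and b."""
    n = len(a)
    # Observed confusion matrix and its marginals.
    obs = [[0] * k for _ in range(k)]
    for x, y in zip(a, b):
        obs[x][y] += 1
    row = [sum(obs[i]) for i in range(k)]
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # Disagreement weights grow linearly with category distance |i - j|;
    # expected counts under independence are row_i * col_j / n.
    num = sum(abs(i - j) * obs[i][j] for i in range(k) for j in range(k))
    den = sum(abs(i - j) * row[i] * col[j] / n for i in range(k) for j in range(k))
    return 1.0 - num / den

# Perfect agreement yields kappa = 1.
assert weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 4], 5) == 1.0
```

A near-miss between adjacent categories is penalized less than a miss across several categories, which is why this metric suits ordinal risk bins better than unweighted kappa.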
Journal Articles |
|
1. | J. Sander, B.D. de Vos, I. Išgum Automatic segmentation with detection of local segmentation failures in cardiac MRI Journal Article Scientific Reports, 10 (21769 ), 2020. @article{Sander2020b, title = {Automatic segmentation with detection of local segmentation failures in cardiac MRI}, author = {J. Sander, B.D. de Vos, I. Išgum}, url = {https://www.nature.com/articles/s41598-020-77733-4}, year = {2020}, date = {2020-12-10}, journal = {Scientific Reports}, volume = {10}, number = {21769 }, abstract = {Segmentation of cardiac anatomical structures in cardiac magnetic resonance images (CMRI) is a prerequisite for automatic diagnosis and prognosis of cardiovascular diseases. To increase robustness and performance of segmentation methods this study combines automatic segmentation and assessment of segmentation uncertainty in CMRI to detect image regions containing local segmentation failures. Three existing state-of-the-art convolutional neural networks (CNN) were trained to automatically segment cardiac anatomical structures and obtain two measures of predictive uncertainty: entropy and a measure derived by MC-dropout. Thereafter, using the uncertainties another CNN was trained to detect local segmentation failures that potentially need correction by an expert. Finally, manual correction of the detected regions was simulated in the complete set of scans of 100 patients and manually performed in a random subset of scans of 50 patients. Using publicly available CMR scans from the MICCAI 2017 ACDC challenge, the impact of CNN architecture and loss function for segmentation, and the uncertainty measure was investigated. Performance was evaluated using the Dice coefficient, 3D Hausdorff distance and clinical metrics between manual and (corrected) automatic segmentation. 
The experiments reveal that combining automatic segmentation with manual correction of detected segmentation failures results in improved segmentation and to 10-fold reduction of expert time compared to manual expert segmentation.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Segmentation of cardiac anatomical structures in cardiac magnetic resonance images (CMRI) is a prerequisite for automatic diagnosis and prognosis of cardiovascular diseases. To increase robustness and performance of segmentation methods this study combines automatic segmentation and assessment of segmentation uncertainty in CMRI to detect image regions containing local segmentation failures. Three existing state-of-the-art convolutional neural networks (CNN) were trained to automatically segment cardiac anatomical structures and obtain two measures of predictive uncertainty: entropy and a measure derived by MC-dropout. Thereafter, using the uncertainties another CNN was trained to detect local segmentation failures that potentially need correction by an expert. Finally, manual correction of the detected regions was simulated in the complete set of scans of 100 patients and manually performed in a random subset of scans of 50 patients. Using publicly available CMR scans from the MICCAI 2017 ACDC challenge, the impact of CNN architecture and loss function for segmentation, and the uncertainty measure was investigated. Performance was evaluated using the Dice coefficient, 3D Hausdorff distance and clinical metrics between manual and (corrected) automatic segmentation. The experiments reveal that combining automatic segmentation with manual correction of detected segmentation failures results in improved segmentation and to 10-fold reduction of expert time compared to manual expert segmentation. |
2. | J.M.H. Noothout, B.D. de Vos, J.M. Wolterink, E.M. Postma, P.A.M. Smeets, R.A.P. Takx, T. Leiner, M.A. Viergever, I. Išgum Deep learning-based regression and classification for automatic landmark localization in medical images Journal Article IEEE Transactions on Medical Imaging, 39 (12), pp. 4011-4022, 2020, ISSN: 1558-254X. @article{Noothout2020, title = {Deep learning-based regression and classification for automatic landmark localization in medical images}, author = {J.M.H. Noothout, B.D. de Vos, J.M. Wolterink, E.M. Postma, P.A.M. Smeets, R.A.P. Takx, T. Leiner, M.A. Viergever, I. Išgum}, url = {https://arxiv.org/pdf/2007.05295.pdf}, doi = {10.1109/TMI.2020.3009002}, issn = {1558-254X}, year = {2020}, date = {2020-07-09}, journal = {IEEE Transactions on Medical Imaging}, volume = {39}, number = {12}, pages = {4011-4022}, abstract = {In this study, we propose a fast and accurate method to automatically localize anatomical landmarks in medical images. We employ a global-to-local localization approach using fully convolutional neural networks (FCNNs). First, a global FCNN localizes multiple landmarks through the analysis of image patches, performing regression and classification simultaneously. In regression, displacement vectors pointing from the center of image patches towards landmark locations are determined. In classification, presence of landmarks of interest in the patch is established. Global landmark locations are obtained by averaging the predicted displacement vectors, where the contribution of each displacement vector is weighted by the posterior classification probability of the patch that it is pointing from. Subsequently, for each landmark localized with global localization, local analysis is performed. Specialized FCNNs refine the global landmark locations by analyzing local sub-images in a similar manner, i.e. by performing regression and classification simultaneously and combining the results. 
Evaluation was performed through localization of 8 anatomical landmarks in CCTA scans, 2 landmarks in olfactory MR scans, and 19 landmarks in cephalometric X-rays. We demonstrate that the method performs similarly to a second observer and is able to localize landmarks in a diverse set of medical images, differing in image modality, image dimensionality, and anatomical coverage. }, keywords = {}, pubstate = {published}, tppubtype = {article} } In this study, we propose a fast and accurate method to automatically localize anatomical landmarks in medical images. We employ a global-to-local localization approach using fully convolutional neural networks (FCNNs). First, a global FCNN localizes multiple landmarks through the analysis of image patches, performing regression and classification simultaneously. In regression, displacement vectors pointing from the center of image patches towards landmark locations are determined. In classification, presence of landmarks of interest in the patch is established. Global landmark locations are obtained by averaging the predicted displacement vectors, where the contribution of each displacement vector is weighted by the posterior classification probability of the patch that it is pointing from. Subsequently, for each landmark localized with global localization, local analysis is performed. Specialized FCNNs refine the global landmark locations by analyzing local sub-images in a similar manner, i.e. by performing regression and classification simultaneously and combining the results. Evaluation was performed through localization of 8 anatomical landmarks in CCTA scans, 2 landmarks in olfactory MR scans, and 19 landmarks in cephalometric X-rays. We demonstrate that the method performs similarly to a second observer and is able to localize landmarks in a diverse set of medical images, differing in image modality, image dimensionality, and anatomical coverage. |
3. | G. Litjens, F. Ciompi, J.M. Wolterink, B.D. de Vos, T. Leiner, J. Teuwen, I. Išgum State-of-the-art deep learning in cardiovascular image analysis Journal Article JACC: Cardiovascular Imaging, 12 (8 Part 1), pp. 1549-1565, 2019. @article{Litjens2019, title = {State-of-the-art deep learning in cardiovascular image analysis}, author = {G. Litjens, F. Ciompi, J.M. Wolterink, B.D. de Vos, T. Leiner, J. Teuwen, I. Išgum}, url = {https://doi.org/10.1016/j.jcmg.2019.06.009}, year = {2019}, date = {2019-08-05}, journal = {JACC: Cardiovascular Imaging}, volume = {12}, number = {8 Part 1}, pages = {1549-1565}, abstract = {Cardiovascular imaging is going to change substantially in the next decade, fueled by the deep learning revolution. For medical professionals, it is important to keep track of these developments to ensure that deep learning can have meaningful impact on clinical practice. This review aims to be a stepping stone in this process. The general concepts underlying most successful deep learning algorithms are explained, and an overview of the state-of-the-art deep learning in cardiovascular imaging is provided. This review discusses >80 papers, covering modalities ranging from cardiac magnetic resonance, computed tomography, and single-photon emission computed tomography, to intravascular optical coherence tomography and echocardiography. Many different machines learning algorithms were used throughout these papers, with the most common being convolutional neural networks. Recent algorithms such as generative adversarial models were also used. The potential implications of deep learning algorithms on clinical practice, now and in the near future, are discussed.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Cardiovascular imaging is going to change substantially in the next decade, fueled by the deep learning revolution. 
For medical professionals, it is important to keep track of these developments to ensure that deep learning can have meaningful impact on clinical practice. This review aims to be a stepping stone in this process. The general concepts underlying most successful deep learning algorithms are explained, and an overview of the state-of-the-art deep learning in cardiovascular imaging is provided. This review discusses >80 papers, covering modalities ranging from cardiac magnetic resonance, computed tomography, and single-photon emission computed tomography, to intravascular optical coherence tomography and echocardiography. Many different machines learning algorithms were used throughout these papers, with the most common being convolutional neural networks. Recent algorithms such as generative adversarial models were also used. The potential implications of deep learning algorithms on clinical practice, now and in the near future, are discussed. |
4. | B.D. de Vos; F.F. Berendsen; M.A. Viergever; H. Sokooti; M. Staring; I. Išgum A deep learning framework for unsupervised affine and deformable image registration Journal Article Medical Image Analysis, 52 , pp. 128 - 143, 2019. @article{deVos2019b, title = {A deep learning framework for unsupervised affine and deformable image registration}, author = {B.D. de Vos and F.F. Berendsen and M.A. Viergever and H. Sokooti and M. Staring and I. Išgum}, url = {http://www.sciencedirect.com/science/article/pii/S1361841518300495}, year = {2019}, date = {2019-02-21}, journal = {Medical Image Analysis}, volume = {52}, pages = {128 - 143}, abstract = {Deep learning, Unsupervised learning, Affine image registration, Deformable image registration, Cardiac cine MRI, Chest CT\", abstract = \"Image registration, the process of aligning two or more images, is the core technique of many (semi-)automatic medical image analysis tasks. Recent studies have shown that deep learning methods, notably convolutional neural networks (ConvNets), can be used for image registration. Thus far training of ConvNets for registration was supervised using predefined example registrations. However, obtaining example registrations is not trivial. To circumvent the need for predefined examples, and thereby to increase convenience of training ConvNets for image registration, we propose the Deep Learning Image Registration (DLIR) framework for unsupervised affine and deformable image registration. In the DLIR framework ConvNets are trained for image registration by exploiting image similarity analogous to conventional intensity-based image registration. After a ConvNet has been trained with the DLIR framework, it can be used to register pairs of unseen images in one shot. We propose flexible ConvNets designs for affine image registration and for deformable image registration. By stacking multiple of these ConvNets into a larger architecture, we are able to perform coarse-to-fine image registration. 
We show for registration of cardiac cine MRI and registration of chest CT that performance of the DLIR framework is comparable to conventional image registration while being several orders of magnitude faster.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Deep learning, Unsupervised learning, Affine image registration, Deformable image registration, Cardiac cine MRI, Chest CT", abstract = "Image registration, the process of aligning two or more images, is the core technique of many (semi-)automatic medical image analysis tasks. Recent studies have shown that deep learning methods, notably convolutional neural networks (ConvNets), can be used for image registration. Thus far training of ConvNets for registration was supervised using predefined example registrations. However, obtaining example registrations is not trivial. To circumvent the need for predefined examples, and thereby to increase convenience of training ConvNets for image registration, we propose the Deep Learning Image Registration (DLIR) framework for unsupervised affine and deformable image registration. In the DLIR framework ConvNets are trained for image registration by exploiting image similarity analogous to conventional intensity-based image registration. After a ConvNet has been trained with the DLIR framework, it can be used to register pairs of unseen images in one shot. We propose flexible ConvNets designs for affine image registration and for deformable image registration. By stacking multiple of these ConvNets into a larger architecture, we are able to perform coarse-to-fine image registration. We show for registration of cardiac cine MRI and registration of chest CT that performance of the DLIR framework is comparable to conventional image registration while being several orders of magnitude faster. |
5. | B.D. de Vos, J.M. Wolterink, T. Leiner, P.A. de Jong, N. Lessmann, I. Išgum Direct automatic coronary calcium scoring in cardiac and chest CT Journal Article IEEE Transactions on Medical Imaging, 34 , pp. 123-136, 2019. @article{deVos2019, title = {Direct automatic coronary calcium scoring in cardiac and chest CT}, author = {B.D. de Vos, J.M. Wolterink, T. Leiner, P.A. de Jong, N. Lessmann, I. Išgum}, url = {https://ieeexplore.ieee.org/abstract/document/8643342 https://arxiv.org/abs/1902.05408}, year = {2019}, date = {2019-02-21}, journal = {IEEE Transactions on Medical Imaging}, volume = {34}, pages = {123-136}, abstract = {Cardiovascular disease (CVD) is the global leading cause of death. A strong risk factor for CVD events is the amount of coronary artery calcium (CAC). To meet demands of the increasing interest in quantification of CAC, i.e. coronary calcium scoring, especially as an unrequested finding for screening and research, automatic methods have been proposed. Current automatic calcium scoring methods are relatively computationally expensive and only provide scores for one type of CT. To address this, we propose a computationally efficient method that employs two ConvNets: the first performs registration to align the fields of view of input CTs and the second performs direct regression of the calcium score, thereby circumventing time-consuming intermediate CAC segmentation. Optional decision feedback provides insight in the regions that contributed to the calcium score. Experiments were performed using 903 cardiac CT and 1,687 chest CT scans. The method predicted calcium scores in less than 0.3 s. Intra-class correlation coefficient between predicted and manual calcium scores was 0.98 for both cardiac and chest CT. The method showed almost perfect agreement between automatic and manual CVD risk categorization in both datasets, with a linearly weighted Cohen's kappa of 0.95 in cardiac CT and 0.93 in chest CT. 
Performance is similar to that of state-of-the-art methods, but the proposed method is hundreds of times faster. By providing visual feedback, insight is given in the decision process, making it readily implementable in clinical and research settings.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
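For context, the Agatston score that such a regression ConvNet learns to predict is computed per lesion from calcified area and peak attenuation. A simplified single-slice sketch with illustrative values (the lesion mask is assumed given; connected-component extraction and the ≥1 mm² area rule are omitted):

```python
import numpy as np

def agatston_weight(peak_hu):
    """Density weighting factor from the Agatston protocol."""
    if peak_hu >= 400: return 4
    if peak_hu >= 300: return 3
    if peak_hu >= 200: return 2
    if peak_hu >= 130: return 1
    return 0

def agatston_slice_score(hu, lesion_mask, pixel_area_mm2):
    """Score one lesion in one slice: area (mm^2) times density weight."""
    peak = hu[lesion_mask].max()
    area = lesion_mask.sum() * pixel_area_mm2
    return area * agatston_weight(peak)

# Toy 8x8 "slice": a 6-pixel lesion at 250 HU with one 420 HU peak pixel.
hu = np.zeros((8, 8)); hu[2:4, 2:5] = 250; hu[2, 2] = 420
mask = hu >= 130
print(agatston_slice_score(hu, mask, pixel_area_mm2=0.25))  # 6.0
```

The paper's method regresses this quantity directly from the image, skipping the intermediate segmentation step sketched here.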
6. | J. Šprem; B.D. de Vos; N. Lessmann; P.A. de Jong; M.A. Viergever; I. Išgum Impact of automatically detected motion artifacts on coronary calcium scoring in chest CT Journal Article Journal of Medical Imaging, 5 (4), pp. 044007 , 2018. @article{Šprem2018, title = {Impact of automatically detected motion artifacts on coronary calcium scoring in chest CT}, author = {J. Šprem and B.D. de Vos and N. Lessmann and P.A. de Jong and M.A. Viergever and I. Išgum}, url = {https://doi.org/10.1117/1.JMI.5.4.044007}, year = {2018}, date = {2018-12-11}, journal = {Journal of Medical Imaging}, volume = {5}, number = {4}, pages = {044007 }, abstract = {The amount of coronary artery calcification (CAC) quantified in CT scans enables prediction of cardiovascular disease (CVD) risk. However, interscan variability of CAC quantification is high, especially in scans made without ECG synchronization. We propose a method for automatic detection of CACs that are severely affected by cardiac motion. Subsequently, we evaluate the impact of such CACs on CAC quantification and CVD risk determination. This study includes 1000 baseline and 585 one year follow-up low-dose chest CTs from the National Lung Screening Trial. 415 baseline scans are used to train and evaluate a convolutional neural network that identifies observer determined CACs affected by severe motion artifacts. Thereafter, 585 paired scans acquired at baseline and follow-up were used to evaluate the impact of severe motion artifacts on CAC quantification and risk categorization. Based on the CAC amount the scans were categorized into four standard CVD risk categories. The method identified CACs affected by severe motion artifacts with 85.2% accuracy. Moreover, reproducibility of CAC scores in scan pairs is higher in scans containing mostly CACs not affected by severe cardiac motion. 
Hence, the proposed method enables identification of scans affected by severe cardiac motion where CAC quantification may not be reproducible.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
7. | N. Lessmann, B. van Ginneken, M. Zreik, P.A. de Jong, B.D. de Vos, M.A. Viergever, I. Išgum Automatic calcium scoring in low-dose chest CT using deep neural networks with dilated convolutions Journal Article IEEE Transactions on Medical Imaging, 37 (2), pp. 615-625, 2018. @article{Lessmann2017, title = {Automatic calcium scoring in low-dose chest CT using deep neural networks with dilated convolutions}, author = {N. Lessmann, B. van Ginneken, M. Zreik, P.A. de Jong, B.D. de Vos, M.A. Viergever, I. Išgum}, url = {https://arxiv.org/pdf/1711.00349.pdf}, year = {2018}, date = {2018-02-01}, journal = {IEEE Transactions on Medical Imaging}, volume = {37}, number = {2}, pages = {615-625}, abstract = {Heavy smokers undergoing screening with low-dose chest CT are affected by cardiovascular disease as much as by lung cancer. Low-dose chest CT scans acquired in screening enable quantification of atherosclerotic calcifications and thus enable identification of subjects at increased cardiovascular risk. This paper presents a method for automatic detection of coronary artery, thoracic aorta and cardiac valve calcifications in low-dose chest CT using two consecutive convolutional neural networks. The first network identifies and labels potential calcifications according to their anatomical location and the second network identifies true calcifications among the detected candidates. This method was trained and evaluated on a set of 1744 CT scans from the National Lung Screening Trial. To determine whether any reconstruction or only images reconstructed with soft tissue filters can be used for calcification detection, we evaluated the method on soft and medium/sharp filter reconstructions separately. On soft filter reconstructions, the method achieved F1 scores of 0.89, 0.89, 0.67, and 0.55 for coronary artery, thoracic aorta, aortic valve and mitral valve calcifications, respectively. 
On sharp filter reconstructions, the F1 scores were 0.84, 0.81, 0.64, and 0.66, respectively. Linearly weighted kappa coefficients for risk category assignment based on per subject coronary artery calcium were 0.91 and 0.90 for soft and sharp filter reconstructions, respectively. These results demonstrate that the presented method enables reliable automatic cardiovascular risk assessment in all low-dose chest CT scans acquired for lung cancer screening.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
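The dilated convolutions used in this network grow the receptive field exponentially with depth without pooling. A 1D numpy sketch of the mechanism (hypothetical 3-tap kernel, not the paper's architecture): a kernel with dilation d samples inputs (d - 1) positions apart, so its span is (k - 1)·d + 1.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """1D 'valid' convolution whose taps are spaced `dilation` apart."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field of this layer
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.arange(16, dtype=float)
kernel = np.array([1.0, 1.0, 1.0])
# Dilations 1, 2, 4 with a 3-tap kernel give receptive fields 3, 5, 9.
for d in (1, 2, 4):
    print(d, (len(kernel) - 1) * d + 1, dilated_conv1d(x, kernel, d)[:3])
```

Stacking layers with doubling dilation is what lets such networks see large anatomical context at full resolution.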
8. | I. Isgum, B.D. de Vos, J.M. Wolterink, D. Dey, D.S. Berman, M. Rubeaux, T. Leiner, P. J. Slomka Erratum to: Automatic determination of cardiovascular risk by CT attenuation correction maps in Rb-82 PET/CT Journal Article Journal of Nuclear Cardiology, 25 (6), pp. 2143, 2017. @article{Isgum2017b, title = {Erratum to: Automatic determination of cardiovascular risk by CT attenuation correction maps in Rb-82 PET/CT}, author = {I. Isgum, B.D. de Vos, J.M. Wolterink, D. Dey, D.S. Berman, M. Rubeaux, T. Leiner, P. J. Slomka}, url = {http://dx.doi.org/10.1007/s12350-017-0946-4}, year = {2017}, date = {2017-06-06}, journal = {Journal of Nuclear Cardiology}, volume = {25}, number = {6}, pages = {2143}, abstract = {Regrettably an error was introduced in Table 3 during the article’s production. The very first cell (row: Very low 0; column: Very low) should read ‘12’ and not ‘21’ as originally published.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
9. | I. Isgum, B.D. de Vos, J.M. Wolterink, D. Dey, D.S. Berman, M. Rubeaux, T. Leiner, P.J. Slomka Automatic determination of cardiovascular risk by CT attenuation correction maps in Rb-82 PET/CT Journal Article Journal of Nuclear Cardiology, 25 (6), pp. 2133-2142, 2017. @article{Isgum2017, title = {Automatic determination of cardiovascular risk by CT attenuation correction maps in Rb-82 PET/CT}, author = {I. Isgum, B.D. de Vos, J.M. Wolterink, D. Dey, D.S. Berman, M. Rubeaux, T. Leiner, P.J. Slomka}, url = {http://dx.doi.org/10.1007/s12350-017-0866-3}, year = {2017}, date = {2017-03-15}, journal = {Journal of Nuclear Cardiology}, volume = {25}, number = {6}, pages = {2133-2142}, abstract = {BACKGROUND: We investigated fully automatic coronary artery calcium (CAC) scoring and cardiovascular disease (CVD) risk categorization from CT attenuation correction (CTAC) acquired at rest and stress during cardiac PET/CT and compared it with manual annotations in CTAC and with dedicated calcium scoring CT (CSCT). METHODS AND RESULTS: We included 133 consecutive patients undergoing myocardial perfusion 82Rb PET/CT with the acquisition of low-dose CTAC at rest and stress. Additionally, a dedicated CSCT was performed for all patients. Manual CAC annotations in CTAC and CSCT provided the reference standard. In CTAC, CAC was scored automatically using a previously developed machine learning algorithm. Patients were assigned to a CVD risk category based on their Agatston score (0, 1-10, 11-100, 101-400, >400). Agreement in CVD risk categorization between manual and automatic scoring in CTAC at rest and stress resulted in Cohen\'s linearly weighted κ of 0.85 and 0.89, respectively. The agreement between CSCT and CTAC at rest resulted in κ of 0.82 and 0.74, using manual and automatic scoring, respectively. For CTAC at stress, these were 0.79 and 0.70, respectively. 
CONCLUSION: Automatic CAC scoring from CTAC PET/CT may allow routine CVD risk assessment from the CTAC component of PET/CT without any additional radiation dose or scan time.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
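Agreement in this study (and several others above) is reported as Cohen's linearly weighted κ over the ordered risk categories. A small self-contained implementation of that statistic, with made-up category labels purely for illustration:

```python
import numpy as np

def linear_weighted_kappa(a, b, n_categories):
    """Cohen's kappa with linear weights for ordinal categories 0..n-1."""
    a, b = np.asarray(a), np.asarray(b)
    conf = np.zeros((n_categories, n_categories))
    for i, j in zip(a, b):
        conf[i, j] += 1
    conf /= conf.sum()
    # Linear disagreement weight: |i - j| / (n - 1).
    idx = np.arange(n_categories)
    w = np.abs(idx[:, None] - idx[None, :]) / (n_categories - 1)
    # Expected agreement under independent marginals.
    expected = np.outer(conf.sum(axis=1), conf.sum(axis=0))
    return 1 - (w * conf).sum() / (w * expected).sum()

# Illustrative risk-category assignments by two "scorers" (not study data).
rater1 = [0, 1, 2, 3, 4, 4, 2, 1]
rater2 = [0, 1, 2, 4, 4, 3, 2, 0]
print(round(linear_weighted_kappa(rater1, rater2, 5), 3))
```

Linear weights penalize an assignment one category off less than one several categories off, which suits ordered Agatston risk categories better than unweighted kappa.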
10. | B.D. de Vos, J.M. Wolterink, P.A. de Jong, T. Leiner, M.A. Viergever, I. Isgum ConvNet-based localization of anatomical structures in 3D medical images Journal Article IEEE Transactions on Medical Imaging, 36 (7), pp. 1470-1481, 2017. @article{devos2017, title = {ConvNet-based localization of anatomical structures in 3D medical images}, author = {B.D. de Vos, J.M. Wolterink, P.A. de Jong, T. Leiner, M.A. Viergever, I. Isgum}, year = {2017}, date = {2017-02-17}, journal = {IEEE Transactions on Medical Imaging}, volume = {36}, number = {7}, pages = {1470-1481}, abstract = {Localization of anatomical structures is a prerequisite for many tasks in medical image analysis. We propose a method for automatic localization of one or more anatomical structures in 3D medical images through detection of their presence in 2D image slices using a convolutional neural network (ConvNet). A single ConvNet is trained to detect presence of the anatomical structure of interest in axial, coronal, and sagittal slices extracted from a 3D image. To allow the ConvNet to analyze slices of different sizes, spatial pyramid pooling is applied. After detection, 3D bounding boxes are created by combining the output of the ConvNet in all slices. In the experiments 200 chest CT, 100 cardiac CT angiography (CTA), and 100 abdomen CT scans were used. The heart, ascending aorta, aortic arch, and descending aorta were localized in chest CT scans, the left cardiac ventricle in cardiac CTA scans, and the liver in abdomen CT scans. Localization was evaluated using the distances between automatically and manually defined reference bounding box centroids and walls. The best results were achieved in localization of structures with clearly defined boundaries (e.g. aortic arch) and the worst when the structure boundary was not clearly visible (e.g. liver). 
The method was more robust and accurate in localization of multiple structures.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
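The way per-slice presence detections are combined into a 3D bounding box can be sketched directly: given a binary presence decision for every axial, coronal, and sagittal slice, each axis range comes from the slices flagged positive. In this toy sketch a mean-intensity threshold stands in for the ConvNet's per-slice classifier:

```python
import numpy as np

def bounding_box_from_slice_detections(axial, coronal, sagittal):
    """Combine per-slice presence flags along the three orthogonal
    axes into a 3D bounding box as (start, stop) per axis."""
    def axis_range(flags):
        idx = np.flatnonzero(flags)
        return int(idx[0]), int(idx[-1]) + 1
    return tuple(axis_range(f) for f in (axial, coronal, sagittal))

# Toy volume containing a bright "structure".
vol = np.zeros((20, 20, 20)); vol[4:9, 6:15, 2:11] = 1.0
axial    = vol.mean(axis=(1, 2)) > 0   # one decision per axial slice
coronal  = vol.mean(axis=(0, 2)) > 0
sagittal = vol.mean(axis=(0, 1)) > 0
print(bounding_box_from_slice_detections(axial, coronal, sagittal))
# ((4, 9), (6, 15), (2, 11))
```

Because each axis is constrained by an independent set of 2D detections, a few misclassified slices in one plane perturb only one pair of box walls.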
11. | S.A.M. Gernaat, I. Isgum, B.D. de Vos, R.A.P. Takx, D.A. Young Afat, N. Rijnberg, D.E. Grobbee, Y. van der Graaf, P.A. de Jong, T. Leiner, H.J.G.D. van den Bongard, J.P. Pignol, H.M. Verkooijen Automatic coronary artery calcium scoring on radiotherapy planning CT scans of breast cancer patients: reproducibility and association with traditional cardiovascular risk factors Journal Article Plos One, 11 (12), pp. e0167925, 2016. @article{Gern16, title = {Automatic coronary artery calcium scoring on radiotherapy planning CT scans of breast cancer patients: reproducibility and association with traditional cardiovascular risk factors}, author = {S.A.M. Gernaat, I. Isgum, B.D. de Vos, R.A.P. Takx, D.A. Young Afat, N. Rijnberg, D.E. Grobbee, Y. van der Graaf, P.A. de Jong, T. Leiner, H.J.G.D. van den Bongard, J.P. Pignol, H.M. Verkooijen}, year = {2016}, date = {2016-12-09}, journal = {Plos One}, volume = {11}, number = {12}, pages = {e0167925}, abstract = {OBJECTIVES: Coronary artery calcium (CAC) is a strong and independent predictor of cardiovascular disease (CVD) risk. This study assesses reproducibility of automatic CAC scoring on radiotherapy planning computed tomography (CT) scans of breast cancer patients, and examines its association with traditional cardiovascular risk factors. METHODS: This study included 561 breast cancer patients undergoing radiotherapy between 2013 and 2015. CAC was automatically scored with an algorithm using supervised pattern recognition, expressed as Agatston scores and categorized into five categories (0, 1-10, 11-100, 101-400, >400). Reproducibility between automatic and manual expert scoring was assessed in 79 patients with automatically determined CAC above zero and 84 randomly selected patients without automatically determined CAC. Interscan reproducibility of automatic scoring was assessed in 294 patients having received two scans (82% on the same day). 
Association between CAC and CVD risk factors was assessed in 36 patients with CAC scores >100, 72 randomly selected patients with scores 1-100, and 72 randomly selected patients without CAC. Reliability was assessed with linearly weighted kappa and agreement with proportional agreement. RESULTS: 134 out of 561 (24%) patients had a CAC score above zero. Reliability of CVD risk categorization between automatic and manual scoring was 0.80 (95% Confidence Interval (CI): 0.74-0.87), and slightly higher for scans with breath-hold. Agreement was 0.79 (95% CI: 0.72-0.85). Interscan reliability was 0.61 (95% CI: 0.50-0.72) with an agreement of 0.84 (95% CI: 0.80-0.89). Ten out of 36 (27.8%) patients with CAC scores above 100 did not have other cardiovascular risk factors. CONCLUSIONS: Automatic CAC scoring on radiotherapy planning CT scans is a reliable method to assess CVD risk based on Agatston scores. One in four breast cancer patients planned for radiotherapy has an elevated CAC score. One in three patients with high CAC scores does not have other CVD risk factors and would not have been identified as high risk.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
12. | J.M. Wolterink, T. Leiner, B.D. de Vos, R.W. van Hamersvelt, M.A. Viergever, I. Isgum Automatic coronary artery calcium scoring in cardiac CT angiography using paired convolutional neural networks Journal Article Medical Image Analysis, 34 , pp. 123-136, 2016. @article{Wolt16b, title = {Automatic coronary artery calcium scoring in cardiac CT angiography using paired convolutional neural networks}, author = {J.M. Wolterink, T. Leiner, B.D. de Vos, R.W. van Hamersvelt, M.A. Viergever, I. Isgum}, year = {2016}, date = {2016-05-11}, journal = {Medical Image Analysis}, volume = {34}, pages = {123-136}, abstract = {The amount of coronary artery calcification (CAC) is a strong and independent predictor of cardiovascular events. CAC is clinically quantified in cardiac calcium scoring CT (CSCT), but it has been shown that cardiac CT angiography (CCTA) may also be used for this purpose. We present a method for automatic CAC quantification in CCTA. This method uses supervised learning to directly identify and quantify CAC without a need for coronary artery extraction commonly used in existing methods. The study included cardiac CT exams of 250 patients for whom both a CCTA and a CSCT scan were available. To restrict the volume-of-interest for analysis, a bounding box around the heart is automatically determined. The bounding box detection algorithm employs a combination of three ConvNets, where each detects the heart in a different orthogonal plane (axial, sagittal, coronal). These ConvNets were trained using 50 cardiac CT exams. In the remaining 200 exams, a reference standard for CAC was defined in CSCT and CCTA. Out of these, 100 CCTA scans were used for training, and the remaining 100 for evaluation of a voxel classification method for CAC identification. The method uses ConvPairs, pairs of convolutional neural networks (ConvNets). 
The first ConvNet in a pair identifies voxels likely to be CAC, thereby discarding the majority of non-CAC-like voxels such as lung and fatty tissue. The identified CAC-like voxels are further classified by the second ConvNet in the pair, which distinguishes between CAC and CAC-like negatives. Given the different task of each ConvNet, they share their architecture, but not their weights. Input patches are either 2.5D or 3D. The ConvNets are purely convolutional, i.e. no pooling layers are present and fully connected layers are implemented as convolutions, thereby allowing efficient voxel classification. The performance of individual 2.5D and 3D ConvPairs with input sizes of 15 and 25 voxels, as well as the performance of ensembles of these ConvPairs, were evaluated by a comparison with reference annotations in CCTA and CSCT. In all cases, ensembles of ConvPairs outperformed their individual members. The best performing individual ConvPair detected 72% of lesions in the test set, with on average 0.85 false positive (FP) errors per scan. The best performing ensemble combined all ConvPairs and obtained a sensitivity of 71% at 0.48 FP errors per scan. For this ensemble, agreement with the reference mass score in CSCT was excellent (ICC 0.944 [0.918–0.962]). Additionally, based on the Agatston score in CCTA, this ensemble assigned 83% of patients to the same cardiovascular risk category as reference CSCT. In conclusion, CAC can be accurately automatically identified and quantified in CCTA using the proposed pattern recognition method. This might obviate the need to acquire a dedicated CSCT scan for CAC scoring, which is regularly acquired prior to a CCTA, and thus reduce the CT radiation dose received by patients.}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
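The ConvPair idea — a first classifier that cheaply discards obvious negatives so a second one only has to settle the hard candidates — is a classic two-stage cascade. A minimal sketch with stand-in classifiers (simple intensity thresholds in place of the paper's ConvNets):

```python
import numpy as np

def cascade_classify(voxels, stage1, stage2):
    """Two-stage cascade: stage1 discards easy negatives, stage2 runs
    only on the surviving candidates."""
    candidates = stage1(voxels)                 # mask of CAC-like voxels
    labels = np.zeros(len(voxels), dtype=bool)
    labels[candidates] = stage2(voxels[candidates])
    return labels, int(candidates.sum())

# Stand-ins: stage 1 keeps voxels above a HU-like threshold,
# stage 2 keeps only the very bright ones among them.
stage1 = lambda v: v > 130
stage2 = lambda v: v > 300

voxels = np.array([-50.0, 40.0, 150.0, 500.0, 90.0, 320.0])
labels, n_candidates = cascade_classify(voxels, stage1, stage2)
print(labels.tolist(), n_candidates)
# [False, False, False, True, False, True] 3
```

The expensive second stage here sees 3 of 6 voxels; in a full scan the first stage discards the vast majority of lung and fat voxels, which is what makes per-voxel classification tractable.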
13. J.M. Wolterink, T. Leiner, B.D. de Vos, J-L. Coatrieux, B.M. Kelm, S. Kondo, R.A. Salgado, R. Shahzad, H. Shu, M. Snoeren, R.A.P. Takx, L.J. van Vliet, T. van Walsum, T.P. Willems, G. Yang, Y. Zheng, M.A. Viergever, I. Isgum An evaluation of automatic coronary artery calcium scoring methods with cardiac CT using the orCaScore framework Journal Article Medical Physics, 43 (5), pp. 2361, 2016. @article{Wolt16, title = {An evaluation of automatic coronary artery calcium scoring methods with cardiac CT using the orCaScore framework}, author = {J.M. Wolterink, T. Leiner, B.D. de Vos, J-L. Coatrieux, B.M. Kelm, S. Kondo, R.A. Salgado, R. Shahzad, H. Shu, M. Snoeren, R.A.P. Takx, L.J. van Vliet, T. van Walsum, T.P. Willems, G. Yang, Y. Zheng, M.A. Viergever, I. Isgum}, year = {2016}, date = {2016-03-29}, journal = {Medical Physics}, volume = {43}, number = {5}, pages = {2361}, abstract = {Purpose: The amount of coronary artery calcification (CAC) is a strong and independent predictor of cardiovascular disease (CVD) events. In clinical practice, CAC is manually identified and automatically quantified in cardiac CT using commercially available software. This is a tedious and time-consuming process in large-scale studies. Therefore, a number of automatic methods that require no interaction and semiautomatic methods that require very limited interaction for the identification of CAC in cardiac CT have been proposed. Thus far, a comparison of their performance has been lacking. The objective of this study was to perform an independent evaluation of (semi)automatic methods for CAC scoring in cardiac CT using a publicly available standardized framework. Methods: Cardiac CT exams of 72 patients distributed over four CVD risk categories were provided for (semi)automatic CAC scoring. Each exam consisted of a noncontrast-enhanced calcium scoring CT (CSCT) and a corresponding coronary CT angiography (CCTA) scan. The exams were acquired in four different hospitals using state-of-the-art equipment from four major CT scanner vendors. The data were divided into 32 training exams and 40 test exams. A reference standard for CAC in CSCT was defined by consensus of two experts following a clinical protocol. The framework organizers evaluated the performance of (semi)automatic methods on test CSCT scans, per lesion, artery, and patient. Results: Five (semi)automatic methods were evaluated. Four methods used both CSCT and CCTA to identify CAC, and one method used only CSCT. The evaluated methods correctly detected between 52% and 94% of CAC lesions with positive predictive values between 65% and 96%. Lesions in distal coronary arteries were most commonly missed and aortic calcifications close to the coronary ostia were the most common false positive errors. The majority (between 88% and 98%) of correctly identified CAC lesions were assigned to the correct artery. Linearly weighted Cohen’s kappa for patient CVD risk categorization by the evaluated methods ranged from 0.80 to 1.00. Conclusions: A publicly available standardized framework for the evaluation of (semi)automatic methods for CAC identification in cardiac CT is described. An evaluation of five (semi)automatic methods within this framework shows that automatic per-patient CVD risk categorization is feasible. CAC lesions at ambiguous locations such as the coronary ostia remain challenging, but their detection had limited impact on CVD risk determination.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
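The per-patient risk categorization evaluated in this framework reduces to binning a calcium score into one of four categories. A minimal sketch, assuming the commonly used clinical Agatston cut-offs of 10, 100, and 400 (the exact thresholds used by the framework are an assumption here):

```python
def cvd_risk_category(agatston_score: float) -> int:
    """Map an Agatston calcium score to one of four CVD risk categories.

    The cut-offs (10, 100, 400) follow one common clinical convention and
    are an assumption; the framework's exact thresholds may differ.
    """
    if agatston_score <= 10:
        return 1  # low risk
    if agatston_score <= 100:
        return 2  # moderate risk
    if agatston_score <= 400:
        return 3  # high risk
    return 4      # very high risk
```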
Inproceedings
1. J.M.H. Noothout, E.M. Postma, S. Boesveldt, B.D. de Vos, P.A.M. Smeets, I. Išgum Automatic segmentation of the olfactory bulbs in MRI Inproceedings In: SPIE Medical Imaging, pp. 115961J, 2021. @inproceedings{Noothout2021, title = {Automatic segmentation of the olfactory bulbs in MRI}, author = {J.M.H. Noothout, E.M. Postma, S. Boesveldt, B.D. de Vos, P.A.M. Smeets, I. Išgum}, doi = {10.1117/12.2580354}, year = {2021}, date = {2021-02-16}, booktitle = {SPIE Medical Imaging}, volume = {11596}, pages = {115961J}, abstract = {A decrease in volume of the olfactory bulbs is an early marker for neurodegenerative diseases, such as Parkinson’s and Alzheimer’s disease. Recently, asymmetric volumes of olfactory bulbs present in postmortem MRIs of COVID-19 patients indicate that the olfactory bulbs might play an important role in the entrance of the disease into the central nervous system. Hence, volumetric assessment of the olfactory bulbs can be valuable for various conditions. Given that manual annotation of the olfactory bulbs in MRI to determine their volume is tedious, we propose a method for their automatic segmentation. To mitigate the class imbalance caused by the small volume of the olfactory bulbs, we first localize the center of each olfactory bulb (OB) in a scan using convolutional neural networks (CNNs). We use these center locations to extract a bounding box containing both olfactory bulbs. Subsequently, the slices present in the bounding box are analyzed by a segmentation CNN that classifies each voxel as left OB, right OB, or background. The method achieved median (IQR) Dice coefficients of 0.84 (0.08) and 0.83 (0.08), and Average Symmetrical Surface Distances of 0.12 (0.08) and 0.13 (0.08) mm for the left and the right OB, respectively. Wilcoxon Signed Rank tests showed no significant difference between the volumes computed from the reference annotation and the automatic segmentations. Analysis took only 0.20 seconds per scan and the results indicate that the proposed method could be a first step towards large-scale studies analyzing pathology and morphology of the olfactory bulbs.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} }
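The localization step described above, i.e. turning the two predicted OB centers into a single bounding box for the segmentation CNN, can be sketched as follows (the margin is a free parameter of this illustration, not a value from the paper):

```python
import numpy as np

def bounding_box(centers, margin, shape):
    """Return an axis-aligned box (with margin) containing all predicted
    (z, y, x) centers, clipped to the scan extent.

    centers: iterable of voxel coordinates, e.g. the left and right OB centers.
    Returns (lo, hi) so that volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    contains both olfactory bulbs plus the margin.
    """
    centers = np.asarray(centers)
    lo = np.clip(centers.min(axis=0) - margin, 0, None).astype(int)
    hi = np.minimum(centers.max(axis=0) + margin + 1, shape).astype(int)
    return lo, hi
```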
2. J. Sander, B.D. de Vos, I. Išgum Unsupervised super-resolution: creating high-resolution medical images from low-resolution anisotropic examples Inproceedings In: SPIE Medical Imaging, pp. 115960E, 2021. @inproceedings{Sander2021, title = {Unsupervised super-resolution: creating high-resolution medical images from low-resolution anisotropic examples}, author = {J. Sander, B.D. de Vos, I. Išgum}, doi = {10.1117/12.2580412}, year = {2021}, date = {2021-02-16}, booktitle = {SPIE Medical Imaging}, volume = {11596}, pages = {115960E}, abstract = {Although high-resolution isotropic 3D medical images are desired in clinical practice, their acquisition is not always feasible. Instead, lower-resolution images are upsampled to higher resolution using conventional interpolation methods. Sophisticated learning-based super-resolution approaches are frequently unavailable in clinical settings, because such methods require training with high-resolution isotropic examples. To address this issue, we propose a learning-based super-resolution approach that can be trained using solely anisotropic images, i.e. without high-resolution ground truth data. The method exploits the latent space, generated by autoencoders trained on anisotropic images, to increase spatial resolution in low-resolution images. The method was trained and evaluated using 100 publicly available cardiac cine MR scans from the Automated Cardiac Diagnosis Challenge (ACDC). The quantitative results show that the proposed method performs better than conventional interpolation methods. Furthermore, the qualitative results indicate that especially finer cardiac structures are synthesized with high quality. The method has the potential to be applied to other anatomies and modalities and can be easily applied to any 3D anisotropic medical image dataset.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} }
3. T.F.A. van der Ouderaa, I. Išgum, W.B. Veldhuis, B.D. de Vos Deep group-wise variational diffeomorphic image registration Inproceedings In: MICCAI workshop on Thoracic Image Analysis, 2020. @inproceedings{vanderOuderaa2020, title = {Deep group-wise variational diffeomorphic image registration}, author = {T.F.A. van der Ouderaa, I. Išgum, W.B. Veldhuis, B.D. de Vos}, url = {https://arxiv.org/abs/2010.00231}, year = {2020}, date = {2020-10-08}, booktitle = {MICCAI workshop on Thoracic Image Analysis}, journal = {The Second International Workshop on Thoracic Image Analysis, Medical Image Computing and Computer Assisted Intervention (MICCAI 2020)}, abstract = {Deep neural networks are increasingly used for pair-wise image registration. We propose to extend current learning-based image registration to allow simultaneous registration of multiple images. To achieve this, we build upon the pair-wise variational and diffeomorphic VoxelMorph approach and present a general mathematical framework that enables both registration of multiple images to their geodesic average and registration in which any of the available images can be used as a fixed image. In addition, we provide a likelihood based on normalized mutual information, a well-known image similarity metric in registration, between multiple images, and a prior that allows for explicit control over the viscous fluid energy to effectively regularize deformations. We trained and evaluated our approach using intra-patient registration of breast MRI and thoracic 4DCT exams acquired over multiple time points. Comparison with Elastix and VoxelMorph demonstrates competitive quantitative performance of the proposed method in terms of image similarity and reference landmark distances at significantly faster registration.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} }
4. B.D. de Vos, B.H.M. van der Velden, J. Sander, K.G.A. Gilhuijs, M. Staring, I. Išgum Mutual information for unsupervised deep learning image registration Inproceedings In: SPIE Medical Imaging, pp. 113130R, 2020. @inproceedings{deVos2020, title = {Mutual information for unsupervised deep learning image registration}, author = {B.D. de Vos, B.H.M. van der Velden, J. Sander, K.G.A. Gilhuijs, M. Staring, I. Išgum}, url = {https://spie.org/MI/conferencedetails/medical-image-processing#2549729}, doi = {10.1117/12.2549729}, year = {2020}, date = {2020-03-10}, booktitle = {SPIE Medical Imaging}, volume = {11313}, pages = {113130R}, abstract = {Current unsupervised deep learning-based image registration methods are trained with mean squares or normalized cross correlation as a similarity metric. These metrics are suitable for registration of images where a linear relation between image intensities exists. When such a relation is absent, knowledge from conventional image registration literature suggests the use of mutual information. In this work we investigate whether mutual information can be used as a loss for unsupervised deep learning image registration by evaluating it on two datasets: breast dynamic contrast-enhanced MR and cardiac MR images. The results show that training with mutual information as a loss gives on-par performance compared with conventional image registration in contrast-enhanced images, and that it is generally applicable since it has on-par performance compared with normalized cross correlation in single-modality registration.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} }
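The similarity metric at the heart of this work can be illustrated with a simple histogram-based estimate of mutual information between two images. Note that a trainable network loss requires a differentiable estimator (e.g. Parzen windowing), which this sketch deliberately omits:

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram estimate of mutual information between two images.

    Illustrates the similarity metric itself, not the differentiable
    version used as a deep learning loss. High values mean the joint
    intensity distribution is far from independence.
    """
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()               # joint probability
    px = pxy.sum(axis=1, keepdims=True)     # marginal of x
    py = pxy.sum(axis=0, keepdims=True)     # marginal of y
    nz = pxy > 0                            # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Unlike mean squares or cross correlation, this quantity stays high under any one-to-one intensity mapping between the images, which is why it suits multi-modal registration.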
5. S.G.M. van Velzen, B.D. de Vos, H.M. Verkooijen, T. Leiner, M.A. Viergever, I. Išgum Coronary artery calcium scoring: can we do better? Inproceedings In: SPIE Medical Imaging, pp. 113130G, 2020. @inproceedings{vanVelzen2020b, title = {Coronary artery calcium scoring: can we do better?}, author = {S.G.M. van Velzen, B.D. de Vos, H.M. Verkooijen, T. Leiner, M.A. Viergever, I. Išgum}, url = {https://spie.org/MI/conferencedetails/medical-image-processing#2549557}, doi = {10.1117/12.2549557}, year = {2020}, date = {2020-03-10}, booktitle = {SPIE Medical Imaging}, volume = {11313}, pages = {113130G}, abstract = {Conventional coronary artery calcification (CAC) scoring uses a 130 HU threshold, which may lead to under- or over-estimation of the amount of CAC. We propose a method for CAC quantification without the need for thresholding. A CycleGAN is employed to generate synthetic images without CAC from images containing CAC. By subtracting these, a CAC map is created that is used to quantify CAC. As the true amount of CAC cannot be determined, the method is evaluated through scoring reproducibility and compared with clinical CAC-scoring. The method can identify CAC lesions without thresholding and is more reproducible than clinical CAC-scoring.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} }
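The subtraction step can be sketched as below; the minimum-difference cut-off used to suppress synthesis noise and the volume-based burden measure are illustrative assumptions, not the paper's exact post-processing:

```python
import numpy as np

def cac_map(ct, synthetic_cac_free, min_diff=10.0):
    """Subtract the CycleGAN-generated CAC-free image from the original CT;
    the remaining positive differences form the calcium map."""
    diff = np.asarray(ct, float) - np.asarray(synthetic_cac_free, float)
    diff[diff < min_diff] = 0.0  # suppress small synthesis noise
    return diff

def cac_volume(ct, synthetic_cac_free, voxel_volume_mm3=1.0):
    """Quantify CAC as the total volume of non-zero calcium-map voxels."""
    return float((cac_map(ct, synthetic_cac_free) > 0).sum() * voxel_volume_mm3)
```

Because the map is derived from image differences rather than a fixed 130 HU threshold, lesions below the conventional threshold can still contribute to the score.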
6. J. Sander, B.D. de Vos, J.M. Wolterink, I. Išgum Towards increased trustworthiness of deep learning segmentation methods on cardiac MRI Inproceedings In: SPIE Medical Imaging, pp. 1094919, 2019. @inproceedings{2019jsander, title = {Towards increased trustworthiness of deep learning segmentation methods on cardiac MRI}, author = {J. Sander, B.D. de Vos, J.M. Wolterink, I. Išgum}, url = {https://arxiv.org/pdf/1809.10430.pdf}, doi = {10.1117/12.2511699}, year = {2019}, date = {2019-02-17}, booktitle = {SPIE Medical Imaging}, volume = {10949}, pages = {1094919}, abstract = {Current state-of-the-art deep learning segmentation methods have not yet made a broad entrance into the clinical setting in spite of high demand for such automatic methods. One important reason is the lack of reliability caused by models that fail unnoticed and often locally produce anatomically implausible results that medical experts would not make. This paper presents an automatic image segmentation method based on (Bayesian) dilated convolutional networks (DCNN) that generate segmentation masks and spatial uncertainty maps for the input image at hand. The method was trained and evaluated using segmentation of the left ventricle (LV) cavity, right ventricle (RV) endocardium and myocardium (Myo) at end-diastole (ED) and end-systole (ES) in 100 cardiac 2D MR scans from the MICCAI 2017 Challenge (ACDC). Combining segmentations and uncertainty maps and employing a human-in-the-loop setting, we provide evidence that image areas indicated as highly uncertain regarding the obtained segmentation almost entirely cover regions of incorrect segmentations. The fused information can be harnessed to increase segmentation performance. Our results reveal that we can obtain valuable spatial uncertainty maps with low computational effort using DCNNs.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} }
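One of the spatial uncertainty measures used in this line of work, per-voxel entropy of the softmax output, is cheap to compute; a minimal sketch with the class axis first:

```python
import numpy as np

def entropy_map(probs, eps=1e-12):
    """Per-voxel entropy of class probabilities (classes on axis 0).

    High entropy flags voxels where the network is uncertain and the
    segmentation may need expert correction.
    """
    p = np.clip(probs, eps, 1.0)  # guard against log(0)
    return -(p * np.log(p)).sum(axis=0)
```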
7. B.H. van der Velden, B.D. de Vos, C. E. Loo, H.J. Kuijf, I. Išgum, K.G.A. Gilhuijs Response monitoring of breast cancer on DCE-MRI using convolutional neural network-generated seed points and constrained volume growing Inproceedings In: SPIE Medical Imaging, pp. 109500D, 2019. @inproceedings{vandervelden2018, title = {Response monitoring of breast cancer on DCE-MRI using convolutional neural network-generated seed points and constrained volume growing}, author = {B.H. van der Velden, B.D. de Vos, C. E. Loo, H.J. Kuijf, I. Išgum, K.G.A. Gilhuijs}, url = {https://arxiv.org/abs/1811.09063}, doi = {10.1117/12.2508358}, year = {2019}, date = {2019-02-17}, booktitle = {SPIE Medical Imaging}, volume = {10950}, pages = {109500D}, abstract = {Response of breast cancer to neoadjuvant chemotherapy (NAC) can be monitored using the change in visible tumor on magnetic resonance imaging (MRI). In our current workflow, seed points are manually placed in areas of enhancement likely to contain cancer. A constrained volume growing method uses these manually placed seed points as input and generates a tumor segmentation. This method is rigorously validated using complete pathological embedding. In this study, we propose to exploit deep learning for fast and automatic seed point detection, replacing manual seed point placement in our existing and well-validated workflow. The seed point generator was developed in early breast cancer patients with pathology-proven segmentations (N=100), operated shortly after MRI. It consisted of an ensemble of three independently trained fully convolutional dilated neural networks that classified breast voxels as tumor or non-tumor. Subsequently, local maxima were used as seed points for volume growing in patients receiving NAC (N=10). The percentage of tumor volume change was evaluated against semi-automatic segmentations. The primary cancer was localized in 95% of the tumors at the cost of 0.9 false positives per patient. False positives included focally enhancing regions of unknown origin and parts of the intramammary blood vessels. Volume growing from the seed points showed a median tumor volume decrease of 70% (interquartile range: 50%-77%), comparable to the semi-automatic segmentations (median: 70%, interquartile range: 23%-76%). To conclude, a fast and automatic seed point generator was developed, fully automating a well-validated semi-automatic workflow for response monitoring of breast cancer to neoadjuvant chemotherapy.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} }
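Replacing manual seed placement as described comes down to thresholded local-maximum detection on the ensemble's tumor-probability map; a sketch (the threshold and neighbourhood size are illustrative choices, not the study's settings):

```python
import numpy as np
from scipy import ndimage

def seed_points(tumor_prob, threshold=0.5, size=3):
    """Find seed points as local maxima of a tumor-probability map.

    A voxel is a seed if it equals the maximum of its neighbourhood
    (local maximum) and its probability exceeds the threshold. The
    returned coordinates feed the constrained volume-growing step.
    """
    local_max = ndimage.maximum_filter(tumor_prob, size=size) == tumor_prob
    return np.argwhere(local_max & (tumor_prob > threshold))
```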
8. J.M.H. Noothout, B.D. de Vos, J.M. Wolterink, I. Išgum Automatic segmentation of thoracic aorta segments in low-dose chest CT Inproceedings In: SPIE Medical Imaging, pp. 105741S, 2018. @inproceedings{Noothout2018, title = {Automatic segmentation of thoracic aorta segments in low-dose chest CT}, author = {J.M.H. Noothout, B.D. de Vos, J.M. Wolterink, I. Išgum}, url = {https://arxiv.org/abs/1810.05727 https://doi.org/10.1117/12.2293114}, year = {2018}, date = {2018-10-03}, booktitle = {SPIE Medical Imaging}, volume = {10574}, pages = {105741S}, abstract = {Morphological analysis and identification of pathologies in the aorta are important for cardiovascular diagnosis and risk assessment in patients. Manual annotation is time-consuming and cumbersome in CT scans acquired without contrast enhancement and with low radiation dose. Hence, we propose an automatic method to segment the ascending aorta, the aortic arch and the thoracic descending aorta in low-dose chest CT without contrast enhancement. Segmentation was performed using a dilated convolutional neural network (CNN), with a receptive field of 131×131 voxels, that classified voxels in axial, coronal and sagittal image slices. To obtain a final segmentation, the obtained probabilities of the three planes were averaged per class, and voxels were subsequently assigned to the class with the highest class probability. Two-fold cross-validation experiments were performed where ten scans were used to train the network and another ten to evaluate the performance. Dice coefficients of 0.83, 0.86 and 0.88, and Average Symmetrical Surface Distances (ASSDs) of 2.44, 1.56 and 1.87 mm were obtained for the ascending aorta, the aortic arch, and the descending aorta, respectively. The results indicate that the proposed method could be used in large-scale studies analyzing the anatomical location of pathology and morphology of the thoracic aorta.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} }
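The multi-planar fusion step described above, averaging per-class probabilities over the three orthogonal planes and taking the arg-max, can be sketched as follows (inputs are assumed to be probability volumes with the class axis first, already resampled to a common grid):

```python
import numpy as np

def fuse_planes(p_axial, p_coronal, p_sagittal):
    """Average per-class probabilities from the axial, coronal and
    sagittal analyses, then assign each voxel to the class with the
    highest mean probability."""
    mean = (p_axial + p_coronal + p_sagittal) / 3.0
    return mean.argmax(axis=0)
```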
9. | J.M.H. Noothout, B.D. de Vos, J.M. Wolterink, T. Leiner, I. Išgum CNN-based Landmark Detection in Cardiac CTA Scans Inproceedings In: Medical Imaging with Deep Learning (MIDL 2018), 2018. @inproceedings{Noothout2018b, title = {CNN-based Landmark Detection in Cardiac CTA Scans}, author = {J.M.H. Noothout, B.D. de Vos, J.M. Wolterink, T. Leiner, I. Išgum}, url = {https://openreview.net/forum?id=r1malb3jz}, year = {2018}, date = {2018-05-20}, booktitle = {Medical Imaging with Deep Learning (MIDL 2018)}, abstract = {Fast and accurate anatomical landmark detection can benefit many medical image analysis methods. Here, we propose a method to automatically detect anatomical landmarks in medical images. Automatic landmark detection is performed with a patch-based fully convolutional neural network (FCNN) that combines regression and classification. For any given image patch, regression is used to predict the 3D displacement vector from the image patch to the landmark. Simultaneously, classification is used to identify patches that contain the landmark. Under the assumption that patches close to a landmark can determine the landmark location more precisely than patches further from it, only those patches that contain the landmark according to classification are used to determine the landmark location. The landmark location is obtained by calculating the average landmark location using the computed 3D displacement vectors. The method is evaluated using detection of six clinically relevant landmarks in coronary CT angiography (CCTA) scans : the right and left ostium, the bifurcation of the left main coronary artery (LM) into the left anterior descending and the left circumflex artery, and the origin of the right, non-coronary, and left aortic valve commissure. 
The proposed method achieved an average Euclidean distance error of 2.19 mm and 2.88 mm for the right and left ostium respectively, 3.78 mm for the bifurcation of the LM, and 1.82 mm, 2.10 mm and 1.89 mm for the origin of the right, non-coronary, and left aortic valve commissure respectively, demonstrating accurate performance. The proposed combination of regression and classification can be used to accurately detect landmarks in CCTA scans.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |
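The voting scheme described in this abstract (displacement-corrected patch centers, averaged over positively classified patches) can be sketched in a few lines of Python. The function name and data layout below are illustrative assumptions, not the paper's code.

```python
def localize_landmark(patches):
    """Estimate a 3D landmark location from patch predictions.

    `patches` is a list of (center, displacement, contains_landmark) tuples:
    `center` is the patch center, `displacement` the regressed 3D vector from
    patch center to landmark, and `contains_landmark` the classification
    output. Only positively classified patches vote.
    """
    votes = [
        tuple(c + d for c, d in zip(center, disp))
        for center, disp, positive in patches
        if positive
    ]
    if not votes:
        raise ValueError("no patch classified as containing the landmark")
    n = len(votes)
    return tuple(sum(axis) / n for axis in zip(*votes))

# Two positive patches voting near (10, 10, 10); the negative patch is ignored.
estimate = localize_landmark([
    ((8.0, 10.0, 10.0), (2.0, 0.0, 0.0), True),
    ((10.0, 12.0, 10.0), (0.0, -2.0, 0.0), True),
    ((50.0, 50.0, 50.0), (1.0, 1.0, 1.0), False),
])
print(estimate)  # (10.0, 10.0, 10.0)
```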
10. | B.D. de Vos, F.F. Berendsen, M.A. Viergever, M. Staring, I. Isgum End-to-end unsupervised deformable image registration with a convolutional neural network Inproceedings In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: Third International Workshop, DLMIA 2017, and ML-CDS 2017, Held in Conjunction with MICCAI 2017, Quebec City, QC, Canada, September 14, Proceedings, pp. 204–212, 2017. @inproceedings{deVos2017bb, title = {End-to-end unsupervised deformable image registration with a convolutional neural network}, author = {B.D. de Vos, F.F. Berendsen, M.A. Viergever, M. Staring, I. Isgum}, url = {https://arxiv.org/abs/1704.06065}, year = {2017}, date = {2017-10-27}, booktitle = {Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: Third International Workshop, DLMIA 2017, and ML-CDS 2017, Held in Conjunction with MICCAI 2017, Quebec City, QC, Canada, September 14, Proceedings}, pages = {204--212}, abstract = {In this work we propose a deep learning network for deformable image registration (DIRNet). The DIRNet consists of a convolutional neural network (ConvNet) regressor, a spatial transformer, and a resampler. The ConvNet analyzes a pair of fixed and moving images and outputs parameters for the spatial transformer, which generates the displacement vector field that enables the resampler to warp the moving image to the fixed image.
The DIRNet is trained end-to-end by unsupervised optimization of a similarity metric between input image pairs. A trained DIRNet can be applied to perform registration on unseen image pairs in one pass, thus non-iteratively. Evaluation was performed with registration of images of handwritten digits (MNIST) and cardiac cine MR scans (Sunnybrook Cardiac Data). The results demonstrate that registration with DIRNet is as accurate as a conventional deformable image registration method with short execution times.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |
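The resampler and similarity-metric components of the DIRNet pipeline can be illustrated with a minimal pure-Python 2D sketch. This is a toy under assumed conventions (per-pixel `(dy, dx)` displacements, MSE as the similarity), not the published implementation, which uses a differentiable spatial transformer inside a ConvNet.

```python
def bilinear_sample(img, y, x):
    """Sample img (list of rows) at continuous coordinates, with clamping."""
    h, w = len(img), len(img[0])
    y = min(max(y, 0.0), h - 1.0)
    x = min(max(x, 0.0), w - 1.0)
    y0, x0 = int(y), int(x)
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    dy, dx = y - y0, x - x0
    top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx
    bot = img[y1][x0] * (1 - dx) + img[y1][x1] * dx
    return top * (1 - dy) + bot * dy

def warp(moving, dvf):
    """Resample `moving` using per-pixel displacements (dy, dx) in `dvf`."""
    return [
        [bilinear_sample(moving, i + dvf[i][j][0], j + dvf[i][j][1])
         for j in range(len(moving[0]))]
        for i in range(len(moving))
    ]

def mse(a, b):
    """Similarity metric of the kind minimized during unsupervised training."""
    n = len(a) * len(a[0])
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b) for pa, pb in zip(ra, rb)) / n

fixed  = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]         # vertical bar in column 1
moving = [[0, 0, 1], [0, 0, 1], [0, 0, 1]]         # same bar, shifted right
shift_left = [[(0.0, 1.0)] * 3 for _ in range(3)]  # sample one pixel to the right
# Warping the moving image toward the fixed image lowers the dissimilarity.
print(mse(fixed, moving), mse(fixed, warp(moving, shift_left)))
```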
11. | H. Sokooti, B.D. de Vos, F. Berendsen, B.P.F. Lelieveldt, I. Isgum, M. Staring Nonrigid image registration using multi-scale 3D convolutional neural networks Inproceedings In: Medical Image Computing and Computer Assisted Intervention − MICCAI 2017: 20th International Conference, Quebec City, QC, Canada, September 11-13, 2017, Proceedings, Part 1, pp. 232–239, 2017. @inproceedings{Sokoti17, title = {Nonrigid image registration using multi-scale 3D convolutional neural networks}, author = {H. Sokooti, B.D. de Vos, F. Berendsen, B.P.F. Lelieveldt, I. Isgum, M. Staring}, url = {https://link.springer.com/content/pdf/10.1007%2F978-3-319-66182-7_27.pdf}, year = {2017}, date = {2017-09-10}, booktitle = {Medical Image Computing and Computer Assisted Intervention − MICCAI 2017: 20th International Conference, Quebec City, QC, Canada, September 11-13, 2017, Proceedings, Part 1}, volume = {10433}, pages = {232--239}, series = {Lecture Notes in Computer Science}, abstract = {In this paper we propose a method to solve nonrigid image registration through a learning approach, instead of via iterative optimization of a predefined dissimilarity metric. We design a Convolutional Neural Network (CNN) architecture that, in contrast to all other work, directly estimates the displacement vector field (DVF) from a pair of input images. The proposed RegNet is trained using a large set of artificially generated DVFs, does not explicitly define a dissimilarity metric, and integrates image content at multiple scales to equip the network with contextual information. At testing time nonrigid registration is performed in a single shot, in contrast to current iterative methods. We tested RegNet on 3D chest CT follow-up data. The results show that the accuracy of RegNet is on par with a conventional B-spline registration, for anatomy within the capture range. 
Training RegNet with artificially generated DVFs is therefore a promising approach for obtaining good results on real clinical data, thereby greatly simplifying the training problem. Deformable image registration can therefore be successfully cast as a learning problem.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |
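One simple way to produce "artificially generated DVFs" of the kind the abstract mentions is to smooth random displacements into a spatially coherent field. This 1D moving-average generator is an assumption for illustration, not RegNet's actual DVF generator.

```python
import random

def smooth(values, radius=2):
    """Moving-average smoothing to make random displacements spatially coherent."""
    n = len(values)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

def synthetic_dvf(n, max_disp=5.0, radius=3, rng=random):
    """Generate a 1D synthetic displacement field: random, then smoothed."""
    raw = [rng.uniform(-max_disp, max_disp) for _ in range(n)]
    return smooth(raw, radius)

random.seed(0)
dvf = synthetic_dvf(50)
# Neighbouring displacements now differ far less than the raw random range,
# i.e. the field deforms smoothly, as a plausible anatomical deformation would.
max_step = max(abs(a - b) for a, b in zip(dvf, dvf[1:]))
print(len(dvf), max_step < 5.0)
```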
12. | J. Šprem, B.D. de Vos, P.A. de Jong, M.A. Viergever, I. Isgum Classification of coronary artery calcifications according to motion artifacts in chest CT using a convolutional neural network Inproceedings In: SPIE Medical Imaging, 2017. @inproceedings{Šprem2017-3103, title = {Classification of coronary artery calcifications according to motion artifacts in chest CT using a convolutional neural network}, author = {J. Šprem and B.D. de Vos and P.A. de Jong and M.A. Viergever and I. Isgum}, url = {https://doi.org/10.1117/12.2253669}, year = {2017}, date = {2017-02-13}, booktitle = {SPIE Medical Imaging}, series = {10133-27}, abstract = {Coronary artery calcification (CAC) is a strong and independent predictor of cardiovascular events (CVEs). CAC can be quantified in chest CT scans acquired in lung screening. However, in these images the reproducibility of CAC quantification is compromised by cardiac motion artifacts that occur during scanning, which limits the reproducibility of CVE risk assessment. We present a system for detection of severe cardiac motion artifacts affecting CACs by using a convolutional neural network (CNN). This study included 125 chest CT scans from the National Lung Screening Trial (NLST). The images were acquired with CT scanners from four major CT scanner vendors (GE, Siemens, Philips, Toshiba) with varying tube voltage and slice thickness settings, and without ECG synchronization. An observer manually identified CAC lesions and labelled each CAC according to presence of cardiac motion (strongly affected, not affected). A CNN was designed to automatically label the identified CAC lesions according to the presence of cardiac motion by analyzing a patch from the axial CT slice around each CAC lesion. From 125 CT scans, 9201 CAC lesions were analyzed. 8001 lesions were used for training (19% positive) and the remaining 1200 (50% positive) were used for testing.
The CNN achieved a classification accuracy of 85% (86% sensitivity, 84% specificity).}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |
13. | B.D. de Vos, M.A. Viergever, P.A. de Jong, I. Isgum Automatic Slice Identification in 3D Medical Images with a ConvNet Regressor Inproceedings In: Deep Learning and Data Labeling for Medical Applications: First International Workshop, LABELS 2016, and Second International Workshop, DLMIA 2016, MICCAI 2016, Athens, Greece, pp. 161–169, 2016. @inproceedings{devos-slice2016, title = {Automatic Slice Identification in 3D Medical Images with a ConvNet Regressor}, author = {B.D. de Vos, M.A. Viergever, P.A. de Jong, I. Isgum}, year = {2016}, date = {2016-09-27}, booktitle = {Deep Learning and Data Labeling for Medical Applications: First International Workshop, LABELS 2016, and Second International Workshop, DLMIA 2016, MICCAI 2016, Athens, Greece}, pages = {161--169}, abstract = {Identification of anatomical regions of interest is a prerequisite in many medical image analysis tasks. We propose a method that automatically identifies a slice of interest (SOI) in 3D images with a convolutional neural network (ConvNet) regressor. In 150 chest CT scans two reference slices were manually identified: one containing the aortic root and another superior to the aortic arch. In two independent experiments, the ConvNet regressor was trained with 100 CTs to determine the distance between each slice and the SOI in a CT. To identify the SOI, a first order polynomial was fitted through the obtained distances. In 50 test scans, the mean distances between the reference and the automatically identified slices were 5.7mm (4.0 slices) for the aortic root and 5.6mm (3.7 slices) for the aortic arch. The method shows similar results for both tasks and could be used for automatic slice identification.}, howpublished = {Deep Learning and Data Labeling for Medical Applications - Lecture Notes in Computer Science}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |
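The slice-identification step in this abstract (fit a first-order polynomial through the per-slice distance predictions, then read off where it crosses zero) can be reproduced directly; here the ConvNet's outputs are mocked with noisy signed distances.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of ys ≈ a * xs + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def slice_of_interest(distances):
    """Slice index where the fitted signed distance to the SOI crosses zero."""
    a, b = fit_line(list(range(len(distances))), distances)
    return -b / a

# Mocked regressor outputs: noisy signed distances (in slices) to a SOI at slice 4.
preds = [4.2, 3.1, 1.9, 1.1, -0.1, -0.9, -2.1, -3.0, -4.1]
print(round(slice_of_interest(preds), 1))  # 4.0
```

The line fit makes the estimate robust to noise in individual per-slice predictions, which is presumably why the paper uses it rather than the single slice with the smallest predicted distance.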
14. | B.D. de Vos, J.M. Wolterink, P.A. de Jong, M.A. Viergever, I. Isgum 2D image classification for 3D anatomy localization; employing deep convolutional neural networks Inproceedings In: SPIE Medical Imaging, pp. 97841Y-1-97841Y-7, 2016. @inproceedings{devo2016, title = {2D image classification for 3D anatomy localization; employing deep convolutional neural networks}, author = {B.D. de Vos, J.M. Wolterink, P.A. de Jong, M.A. Viergever, I. Isgum}, year = {2016}, date = {2016-03-01}, booktitle = {SPIE Medical Imaging}, volume = {9784}, pages = {97841Y-1-97841Y-7}, abstract = {Localization of anatomical regions of interest (ROIs) is a preprocessing step in many medical image analysis tasks. While trivial for humans, it is complex for automatic methods. Classic machine learning approaches require the challenge of hand crafting features to describe differences between ROIs and background. Deep convolutional neural networks (CNNs) alleviate this by automatically finding hierarchical feature representations from raw images. We employ this trait to detect anatomical ROIs in 2D image slices in order to localize them in 3D. In 100 low-dose non-contrast enhanced non-ECG synchronized screening chest CT scans, a reference standard was defined by manually delineating rectangular bounding boxes around three anatomical ROIs (heart, aortic arch, and descending aorta). The scans were evenly divided into training and test sets. Every anatomical ROI was automatically identified using a combination of three CNNs, each analyzing one orthogonal image plane. While single CNNs predicted presence or absence of a specific ROI in the given plane, the combination of their results provided a 3D bounding box around it. Classification performance of all CNNs, expressed in area under the receiver operating characteristic curve, was >=0.988. Additionally, the performance of localization was evaluated. 
Median Dice scores for automatically determined bounding boxes around the heart, aortic arch, and descending aorta were 0.89, 0.70, and 0.85 respectively. The results demonstrate that accurate automatic 3D localization of anatomical structures by CNN-based 2D image classification is feasible.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |
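Combining per-plane presence/absence classifications into a 3D bounding box, as this abstract describes, reduces per axis to taking the extent of positively classified slices. A sketch with hypothetical classification outputs:

```python
def bounding_box(axial, coronal, sagittal):
    """Combine per-slice presence predictions from three orthogonal planes
    into a 3D bounding box: one (first, last) positive slice range per axis."""
    def positive_range(preds):
        idx = [i for i, p in enumerate(preds) if p]
        if not idx:
            raise ValueError("ROI not detected along one axis")
        return idx[0], idx[-1]
    return tuple(positive_range(p) for p in (axial, coronal, sagittal))

# Hypothetical per-slice CNN outputs (1 = ROI visible in that slice).
axial    = [0, 0, 1, 1, 1, 0]
coronal  = [0, 1, 1, 0, 0, 0]
sagittal = [0, 0, 0, 1, 1, 1]
print(bounding_box(axial, coronal, sagittal))  # ((2, 4), (1, 2), (3, 5))
```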
15. | B.D. de Vos, J. van Setten, P.A. de Jong, W.P. Mali, M. Oudkerk, M.A. Viergever, I. Isgum Genome-Wide Association Study of Coronary and Aortic Calcification in Lung Cancer Screening CT Inproceedings In: SPIE Medical Imaging, pp. 97841L-1-97841L-6, 2016. @inproceedings{devo2016b, title = {Genome-Wide Association Study of Coronary and Aortic Calcification in Lung Cancer Screening CT}, author = {B.D. de Vos, J. van Setten, P.A. de Jong, W.P. Mali, M. Oudkerk, M.A. Viergever, I. Isgum}, year = {2016}, date = {2016-03-01}, booktitle = {SPIE Medical Imaging}, volume = {9784}, pages = {97841L-1-97841L-6}, abstract = {Arterial calcification has been related to cardiovascular disease (CVD) and osteoporosis. However, little is known about the role of genetics and exact pathways leading to arterial calcification and its relation to bone density changes indicating osteoporosis. In this study, we conducted a genome-wide association study of arterial calcification burden, followed by a look-up of known single nucleotide polymorphisms (SNPs) for coronary artery disease (CAD) and myocardial infarction (MI), and bone mineral density (BMD) to test for a shared genetic basis between the traits. The study included a subcohort of Dutch-Belgian lung cancer screening trial comprised of 2,552 participants. The participants underwent baseline CT screening in one of two hospitals participating in the trial. Low-dose chest CT images were acquired without contrast enhancement and without ECG-synchronization. In these images coronary and aortic calcifications were identified automatically. Subsequently, the detected calcifications were quantified using Agatston and volume scores. Genotype data was available for these participants. A genome-wide association study was conducted on 10,220,814 SNPs using a linear regression model. 
To reduce multiple testing burden, known CAD/MI and BMD SNPs were specifically tested (45 SNPs from the CARDIoGRAMplusC4D consortium and 60 SNPs from the GEFOS consortium). No novel significant SNPs were found. Significant enrichment for CAD/MI SNPs was observed in testing Agatston and coronary volume scores, and moreover a significant enrichment of BMD SNPs was shown in aorta volume scores. This may indicate a genetic relation between BMD SNPs and arterial calcification burden.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |
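The association test underlying this study is a linear regression of calcification burden on genotype dosage. A closed-form ordinary-least-squares sketch with made-up dosages and scores (effect estimate only; the study's covariates and significance testing are omitted):

```python
def ols_slope(x, y):
    """Per-allele effect estimate: slope and intercept of y ≈ a*x + b,
    as in a linear-regression association test of genotype dosage vs score."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Hypothetical allele dosages (0/1/2 copies) and log calcium volume scores.
dosage = [0, 0, 1, 1, 2, 2]
score  = [1.0, 1.2, 1.6, 1.8, 2.1, 2.3]
a, b = ols_slope(dosage, score)
print(round(a, 2), round(b, 2))  # 0.55 1.12
```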
16. | I. Isgum, B.D. de Vos, J.M. Wolterink, D. Dey, D.S. Berman, M. Rubeaux, T. Leiner, P.J. Slomka Automatic detection of cardiovascular risk in CT attenuation correction maps in Rb-82 PET/CTs Inproceedings In: SPIE Medical Imaging, pp. 978405-1-978405-6, 2016. @inproceedings{Isgu16, title = {Automatic detection of cardiovascular risk in CT attenuation correction maps in Rb-82 PET/CTs}, author = {I. Isgum, B.D. de Vos, J.M. Wolterink, D. Dey, D.S. Berman, M. Rubeaux, T. Leiner, P.J. Slomka}, year = {2016}, date = {2016-03-01}, booktitle = {SPIE Medical Imaging}, volume = {9784}, pages = {978405-1-978405-6}, abstract = {An algorithm for automatic coronary calcium (CAC) scoring in cardiac CT was applied for CAC quantification in CT attenuation correction (CTAC) images acquired with PET/CT. Potential coronary calcifications were extracted by intensity-based thresholding and 3D-connected component labeling, and described by location, size, shape and intensity features. Classification was performed using an ensemble of randomized decision trees. Each patient was assigned to a cardiovascular risk category based on Agatston score (0-10, 10-100, 100-400, >400). The correct risk category was assigned to 85% of patients (linearly weighted kappa 0.82). Automatic cardiovascular risk based on CAC scoring in CTAC is feasible.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |
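The risk stratification and agreement measure in this abstract are both simple to state in code: map Agatston scores onto the four categories, then compare two category assignments with linearly weighted Cohen's kappa. The half-open boundary convention below is an assumption.

```python
CATEGORIES = [(0, 10), (10, 100), (100, 400), (400, float("inf"))]

def risk_category(agatston):
    """Map an Agatston score to the abstract's four risk categories
    (0-10, 10-100, 100-400, >400), returned as an index 0..3.
    Boundaries are treated as half-open intervals (an assumption)."""
    for i, (lo, hi) in enumerate(CATEGORIES):
        if lo <= agatston < hi:
            return i
    raise ValueError("negative Agatston score")

def linearly_weighted_kappa(a, b, k=4):
    """Linearly weighted Cohen's kappa between two category assignments."""
    n = len(a)
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    observed = sum(w[x][y] for x, y in zip(a, b)) / n
    pa = [a.count(i) / n for i in range(k)]
    pb = [b.count(i) / n for i in range(k)]
    expected = sum(w[i][j] * pa[i] * pb[j] for i in range(k) for j in range(k))
    return 1 - observed / expected

# Toy automatic vs. manual scores; one scan disagrees by one category.
auto   = [risk_category(s) for s in (0, 55, 150, 900, 5)]
manual = [risk_category(s) for s in (0, 55, 420, 900, 5)]
print(auto, manual, round(linearly_weighted_kappa(auto, manual), 2))
```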
17. | N. Lessmann, I. Isgum, A.A.A. Setio, B.D. de Vos, F. Ciompi, P.A. de Jong, M. Oudkerk, W.P.Th.M. Mali, M.A. Viergever, B. van Ginneken Deep convolutional neural networks for automatic coronary calcium scoring in a screening study with low-dose chest CT Inproceedings In: SPIE Medical Imaging, pp. 978511, 2016. @inproceedings{less16, title = {Deep convolutional neural networks for automatic coronary calcium scoring in a screening study with low-dose chest CT}, author = {N. Lessmann, I. Isgum, A.A.A. Setio, B.D. de Vos, F. Ciompi, P.A. de Jong, M. Oudkerk, W.P.Th.M. Mali, M.A. Viergever, B. van Ginneken}, url = {https://doi.org/10.1117/12.2216978 http://188.166.76.74/papers/Lessmann2016_CalciumScoringChestCT_DeepLearning_SPIE.pdf}, year = {2016}, date = {2016-03-01}, booktitle = {SPIE Medical Imaging}, volume = {9785}, pages = {978511}, abstract = {Coronary artery calcium (CAC) scoring can identify subjects at risk of cardiovascular events in screening programs with low-dose chest CT. We present an automatic method for CAC scoring based on deep convolutional neural networks. Candidates are extracted by intensity-based thresholding and subsequently classified by three concurrent networks that analyze three orthogonal 2D image patches per voxel. The networks consist of three convolutional steps and one fully-connected layer. In 231 subjects, this method detected on average 194.3 / 199.8mm3 CAC (sensitivity 97.2%), with 10.3mm3 false-positive volume per scan. Accuracy of cardiovascular risk category assignment was 84.4% (linearly weighted kappa 0.89).}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |
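The candidate-extraction step shared by this and the previous entry (intensity thresholding followed by connected-component labeling) can be sketched with a BFS over 6-connected voxels. The 130 HU threshold is the conventional calcium-scoring cutoff; the toy volume is made up.

```python
from collections import deque

def candidate_lesions(volume, threshold=130):
    """Extract calcification candidates: threshold the HU volume, then group
    6-connected above-threshold voxels into components via BFS labeling."""
    dims = (len(volume), len(volume[0]), len(volume[0][0]))
    seen = set()
    components = []
    for z in range(dims[0]):
        for y in range(dims[1]):
            for x in range(dims[2]):
                if (z, y, x) in seen or volume[z][y][x] < threshold:
                    continue
                comp, queue = [], deque([(z, y, x)])
                seen.add((z, y, x))
                while queue:
                    cz, cy, cx = queue.popleft()
                    comp.append((cz, cy, cx))
                    for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                        nz, ny, nx = cz + dz, cy + dy, cx + dx
                        if (0 <= nz < dims[0] and 0 <= ny < dims[1]
                                and 0 <= nx < dims[2]
                                and (nz, ny, nx) not in seen
                                and volume[nz][ny][nx] >= threshold):
                            seen.add((nz, ny, nx))
                            queue.append((nz, ny, nx))
                components.append(comp)
    return components

# Toy 1x3x4 "CT" slab: two separate bright spots above the 130 HU threshold.
vol = [[[200, 50, 50, 180],
        [190, 50, 50,  40],
        [ 50, 50, 50,  30]]]
print([len(c) for c in candidate_lesions(vol)])  # [2, 1]
```

Each returned component would then be handed to the classifier (decision trees in entry 16, CNNs here) to separate true CAC from other high-density structures.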
18. | M. Zreik, T. Leiner, B.D. de Vos, R.W. van Hamersvelt, M.A. Viergever, I. Isgum Automatic segmentation of the left ventricle in cardiac CT angiography using convolutional neural networks Inproceedings In: IEEE International Symposium on Biomedical Imaging, pp. 40-43, 2016. @inproceedings{zreik:2016-3002, title = {Automatic segmentation of the left ventricle in cardiac CT angiography using convolutional neural networks}, author = {M. Zreik and T. Leiner and B.D. de Vos and R.W. van Hamersvelt and M.A. Viergever and I. Isgum}, year = {2016}, date = {2016-01-01}, booktitle = {IEEE International Symposium on Biomedical Imaging}, pages = {40--43}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |
19. | B.D. de Vos, P.A. de Jong, J.M. Wolterink, R. Vliegenthart, G.V.F. Wielingen, M.A. Viergever, I. Isgum Automatic machine learning based prediction of cardiovascular events in lung cancer screening data Inproceedings In: SPIE Medical Imaging, pp. 94140D, 2015. @inproceedings{deVos2015, title = {Automatic machine learning based prediction of cardiovascular events in lung cancer screening data}, author = {B.D. de Vos and P.A. de Jong and J.M. Wolterink and R. Vliegenthart and G.V.F. Wielingen and M.A. Viergever and I. Isgum}, year = {2015}, date = {2015-02-02}, booktitle = {SPIE Medical Imaging}, volume = {9414}, pages = {94140D}, abstract = {This study investigated whether subjects at risk of a cardiovascular event (CVE) undergoing lung cancer screening can be identified using automatic image analysis and subject characteristics. Coronary and aortic calcifications were automatically identified in 3559 subjects undergoing screening. Number, size and distribution of the detected calcifications were extracted and subject’s age, smoking history and past CVEs were used. A support vector machine classifier using only image features resulted in an Az of 0.69. A combination of image and subject related features resulted in an Az of 0.71. Lung cancer screening participants at risk of CVE can be identified using automatic image analysis.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |
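The Az values reported here (area under the ROC curve) equal the probability that a randomly chosen CVE-positive subject is ranked above a randomly chosen negative one. A direct Mann-Whitney sketch with hypothetical classifier scores:

```python
def az(scores_pos, scores_neg):
    """Area under the ROC curve (Az) via the Mann-Whitney statistic:
    the probability that a positive case scores above a negative one,
    counting ties as half a win."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier outputs for CVE-positive and CVE-negative subjects.
pos = [0.9, 0.8, 0.45, 0.6]
neg = [0.1, 0.4, 0.35, 0.8]
print(az(pos, neg))  # 0.84375
```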
Abstracts
1. M. Oudkerk Pool, B.D. de Vos, J.M. Wolterink, S. Blok, M.J. Schuuring, H. Bleijendaal, D.A.J. Dohmen, I.I. Tulevski, G.A. Somsen, B.J.M. Mulder, Y. Pinto, B.J. Bouma, I. Išgum, M.M. Winter
Distinguishing sinus rhythm from atrial fibrillation on single-lead ECGs using a deep neural network Abstract
In: European Society of Cardiology Conference, 2020.
@booklet{Pool2020,
title = {Distinguishing sinus rhythm from atrial fibrillation on single-lead ECGs using a deep neural network},
author = {M. Oudkerk Pool, B.D. de Vos, J.M. Wolterink, S. Blok, M.J. Schuuring, H. Bleijendaal, D.A.J. Dohmen, I.I. Tulevski, G.A. Somsen, B.J.M. Mulder, Y. Pinto, B.J. Bouma, I. Išgum, M.M. Winter},
url = {https://programme.escardio.org/ESC2020/Abstracts/216331-distinguishing-sinus-rhythm-from-atrial-fibrillation-on-single-lead-ecgs-using-a-deep-neural-network?r=/ESC2020/My-Programme?v=1},
year = {2020},
date = {2020-08-29},
journal = {European Society of Cardiology Conference},
abstract = {Background: The growing availability of mobile phones increases the popularity of portable telemonitoring devices. An atrial fibrillation diagnosis can be reached with a recording of 30s on such telemonitoring devices. However, current commercially available automatic algorithms still require approval by experts. Purpose: In this research we aimed to build an artificial intelligence (AI) algorithm to improve automatic distinction of atrial fibrillation (AF) from sinus rhythm (SR), to ultimately save time and costs, and to facilitate telemonitoring programs. Methods: We developed a deep convolutional neural network (CNN), based on a residual neural network (ResNet), tailored to single-lead ECG analysis. The CNN was trained using publicly available single-lead ECGs from the 2017 PhysioNet/Computing in Cardiology Challenge. This dataset consists of 60% SR, 9% AF, 30% alternative rhythm, and 1% noise ECGs. The 8528 available ECGs were divided into a training (90%) and validation set (10%) for model development and hyperparameter optimization. Results: The trained CNN was applied to an independent set containing single-lead ECGs of 600 patients equally divided into two groups: SR and AF. Both groups comprised 300 unique ECGs (SR: 60% male, 63±11 years; AF: 38% male, 56±14 years). In distinguishing between AF and SR, the method achieved an accuracy of 0.92, an F1-score of 0.91, and an area under the ROC curve of 0.98. Conclusion: The results demonstrate that distinguishing SR and AF by a fully automatic AI algorithm is feasible. This approach has the potential to reduce cost by minimizing expert supervision, especially when extending the algorithm to other heart rhythms, like premature atrial/ventricular contractions and atrial flutter.},
month = {08},
keywords = {},
pubstate = {published},
tppubtype = {booklet}
}
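The abstract above reports accuracy and F1-score for the binary AF-vs-SR task. As an illustrative sketch (not the authors' implementation), these two metrics can be computed from predicted and reference labels as follows, with 1 denoting AF and 0 denoting SR:

```python
# Hypothetical sketch: accuracy and F1-score for a binary AF (1) vs SR (0)
# classifier, computed from reference and predicted label lists.

def accuracy(y_true, y_pred):
    # Fraction of ECGs assigned the correct rhythm label.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred):
    # F1 = harmonic mean of precision and recall for the AF class.
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

The function names and label convention are assumptions for illustration; the abstract does not specify how the metrics were implemented.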
2. J.M.H. Noothout, B.D. de Vos, J.M. Wolterink, R.A.P. Takx, T. Leiner, I. Išgum
Deep learning for automatic landmark localization in CTA for transcatheter aortic valve implantation Abstract
In: Radiological Society of North America, 105th Annual Meeting, 2019.
@booklet{Noothout2019,
title = {Deep learning for automatic landmark localization in CTA for transcatheter aortic valve implantation},
author = {J.M.H. Noothout, B.D. de Vos, J.M. Wolterink, R.A.P. Takx, T. Leiner, I. Išgum},
url = {http://archive.rsna.org/2019/19012721.html},
year = {2019},
date = {2019-12-03},
booktitle = {Radiological Society of North America, 105th Annual Meeting},
abstract = {PURPOSE Fast and accurate automatic landmark localization in CT angiography (CTA) scans can aid treatment planning for patients undergoing transcatheter aortic valve implantation (TAVI). Manual localization of landmarks can be time-consuming and cumbersome. Automatic landmark localization can potentially reduce post-processing time and interobserver variability. Hence, this study evaluates the performance of deep learning for automatic aortic root landmark localization in CTA. METHOD AND MATERIALS This study included 672 retrospectively gated CTA scans acquired as part of clinical routine (Philips Brilliance iCT-256 scanner, 0.9mm slice thickness, 0.45mm increment, 80-140kVp, 210-300mAs, contrast). The reference standard was defined by manual localization of the left (LH), non-coronary (NCH) and right (RH) aortic valve hinge points, and the right (RO) and left (LO) coronary ostia. To develop and evaluate the automatic method, 412 training, 60 validation, and 200 test CTAs were randomly selected. 100/200 test CTAs were annotated twice by the same observer and once by a second observer to estimate intra- and interobserver agreement. Five CNNs with identical architectures were trained, one for the localization of each landmark. For treatment planning of TAVI, distances between landmark points are used; hence, performance was evaluated at subvoxel level with the Euclidean distance between reference and automatically predicted landmark locations. RESULTS Median (IQR) distance errors for the LH, NCH and RH were 2.44 (1.79), 3.01 (1.82) and 2.98 (2.09)mm, respectively. Repeated annotation of the first observer led to distance errors of 2.06 (1.43), 2.57 (2.22) and 2.58 (2.30)mm, and for the second observer to 1.80 (1.32), 1.99 (1.28) and 1.81 (1.68)mm, respectively. Median (IQR) distance errors for the RO and LO were 1.65 (1.33) and 1.91 (1.58)mm, respectively. Repeated annotation of the first observer led to distance errors of 1.43 (1.05) and 1.92 (1.44)mm, and for the second observer to 1.78 (1.55) and 2.35 (1.56)mm, respectively. On average, analysis took 0.3s/CTA. CONCLUSION Automatic landmark localization in CTA approaches second observer performance and thus enables automatic, accurate and reproducible landmark localization without additional reading time. CLINICAL RELEVANCE/APPLICATION Automatic landmark localization in CTA can aid in reducing post-processing time and interobserver variability in treatment planning for patients undergoing TAVI.},
howpublished = {Radiological Society of North America, 105th Annual Meeting},
month = {12},
keywords = {},
pubstate = {published},
tppubtype = {booklet}
}
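The evaluation above rests on per-landmark Euclidean distance errors summarized as median (IQR). A minimal sketch of that computation, assuming 3D landmark coordinates in millimetres (the function names are illustrative, not from the paper):

```python
import numpy as np

def distance_errors(pred, ref):
    """Euclidean distance (mm) between each predicted and reference landmark.

    pred, ref: arrays of shape (n_landmarks, 3) in world coordinates (mm).
    """
    return np.linalg.norm(np.asarray(pred, float) - np.asarray(ref, float), axis=1)

def median_iqr(errors):
    """Summarize errors as median and interquartile range (Q3 - Q1)."""
    q1, med, q3 = np.percentile(errors, [25, 50, 75])
    return med, q3 - q1
```
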
3. S.G.M. van Velzen, J.G. Terry, B.D. de Vos, N. Lessmann, S. Nair, A. Correa, H.M. Verkooijen, J.J. Carr, I. Išgum
Automatic prediction of coronary heart disease events using coronary and thoracic aorta calcium among African Americans in the Jackson Heart study Abstract
In: Radiological Society of North America, 105th Annual Meeting, 2019.
@booklet{vanVelzen2020c,
title = {Automatic prediction of coronary heart disease events using coronary and thoracic aorta calcium among African Americans in the Jackson Heart study},
author = {S.G.M. van Velzen, J.G. Terry, B.D. de Vos, N. Lessmann, S. Nair, A. Correa, H.M. Verkooijen, J.J. Carr, I. Išgum},
url = {http://archive.rsna.org/2019/19006976.html},
year = {2019},
date = {2019-12-01},
booktitle = {Radiological Society of North America, 105th Annual Meeting},
abstract = {PURPOSE Coronary artery calcium (CAC) and thoracic aorta calcium (TAC) are predictors of CHD events. Given that CAC and TAC identification is time-consuming, methods for automatic quantification in CT have been developed. Hence, we investigate whether subjects who will experience a CHD event within 5 years from acquisition of cardiac CT can be identified using automatically extracted calcium scores. METHOD AND MATERIALS We included 2532 participants (age 59±11, 31% male) of the Jackson Heart Study without CHD history: 111 participants had a CHD event within 5 years from CT acquisition, defined by death certificates and medical records. For each subject a cardiac CT scan (GE Healthcare Lightspeed 16Pro, 2.5mm slice thickness, 2.5mm increment, 120kVp, 400mAs, ECG-triggered, no contrast) was available. Per-artery Agatston CAC scores (left anterior descending, left circumflex, right coronary artery) and TAC volume were automatically extracted with a previously developed AI algorithm. Scores were log transformed, combined with age and sex, and all continuous variables were normalized to zero mean and unit variance. We evaluated 3 models with 3-fold cross-validation where subjects were classified according to occurrence of CHD event using LASSO regression with 1) age, sex and CAC scores, 2) age, sex and TAC scores, and 3) all variables. Performance was evaluated with the area under the ROC curve (AUC). RESULTS In 1468 (58%) subjects no CAC and in 1240 (49%) no TAC was found. In remaining scans, median (range) CAC score was 78.7(1.6-5562.1): 49.5(0.0-4569.4), 0.0(0.0-2735.3), 3.9(0.0-3242.7) in the LAD, LCX and RCA, respectively. Median TAC volume was 116.8(4.7-7275.9). Prediction of CHD events using Model 1, 2 and 3 resulted in an AUC (95% CI) of 0.721(0.672-0.771), 0.735(0.686-0.785) and 0.727(0.678-0.776). Differences between the ROC curves were not significant (Model 1 and 2: p=0.80; 1 and 3: p=0.29; 2 and 3: p=0.76). CONCLUSION Identification of subjects at risk of a CHD event can be performed using automatically extracted CAC or TAC scores from cardiac CT. CLINICAL RELEVANCE/APPLICATION Prediction of CHD events from cardiac CT using TAC instead of CAC is feasible and may be advantageous in scans acquired without ECG-triggering or low image resolution.},
howpublished = {Radiological Society of North America, 105th Annual Meeting},
month = {12},
keywords = {},
pubstate = {published},
tppubtype = {booklet}
}
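The abstract above log-transforms the calcium scores and normalizes all continuous variables to zero mean and unit variance before the LASSO regression. A minimal sketch of that preprocessing step, assuming a log(x + 1) transform to accommodate the many zero scores (the abstract only says "log transformed", so the +1 offset is an assumption):

```python
import numpy as np

def preprocess_scores(scores):
    """Log-transform calcium scores and standardize to zero mean, unit variance.

    The log(x + 1) offset is an assumption to handle zero scores; the abstract
    does not state the exact transform used.
    """
    logged = np.log(np.asarray(scores, dtype=float) + 1.0)
    return (logged - logged.mean()) / logged.std()
```
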
4. P. Moeskops, B.D. de Vos, W.B. Veldhuis, A.M. May, S. Kurk, M. Koopman, P.A. de Jong, T. Leiner, I. Išgum
Automatic quantification of 3D body composition from abdominal CT with an ensemble of convolutional neural networks Abstract
In: Radiological Society of North America, 105th Annual Meeting, 2019.
@booklet{Moeskops2019,
title = {Automatic quantification of 3D body composition from abdominal CT with an ensemble of convolutional neural networks},
author = {P. Moeskops, B.D. de Vos, W.B. Veldhuis, A.M. May, S. Kurk, M. Koopman, P.A. de Jong, T. Leiner, I. Išgum},
url = {http://archive.rsna.org/2019/19005120.html},
year = {2019},
date = {2019-12-01},
booktitle = {Radiological Society of North America, 105th Annual Meeting},
abstract = {PURPOSE Analysis of body composition based on CT, primarily comprising quantification of fat and muscles, is an important prognostic factor in cardiovascular disease and cancer. However, manual segmentation is time-consuming and in 3D practically infeasible. The purpose of this study is to investigate the use of a deep learning-based method for automatic segmentation of subcutaneous fat, visceral fat and psoas muscle from full abdomen CT scans. METHOD AND MATERIALS We included a dataset of 20 native CT scans of the entire abdomen (Siemens Somatom Volume Zoom / Siemens Somatom Definition, 120 kVp, 375 mAs, in-plane resolution 0.63-0.75 mm, slice thickness 5.0 mm, slice increment 5.0 mm). Trained observers defined the reference standard by voxel-wise manual annotation of subcutaneous fat, visceral fat and psoas muscle in all slices that visualize the psoas muscle. Images of 10 patients were used to train a dilated convolutional neural network with a receptive field of 131 × 131 voxels to distinguish between the three tissue classes. To ensure robust results, 5 different networks were trained and subsequently ensembled by averaging the probabilistic results. Voxels were assigned to the class with the highest probability. Images from the remaining 10 patients were used to evaluate the performance of the method. Performance was evaluated with Dice coefficients between the manual and automatic segmentations. Additionally, linear correlation coefficients (Pearson's r) were computed between the manual and automatic segmentation volumes. RESULTS The average Dice coefficients over 10 test scans were 0.89 ± 0.02 for subcutaneous fat, 0.92 ± 0.04 for visceral fat, and 0.76 ± 0.05 for psoas muscle. At the L3 vertebral level, the average Dice coefficients were 0.92 ± 0.02 for subcutaneous fat, 0.93 ± 0.05 for visceral fat, and 0.87 ± 0.04 for psoas muscle. Pearson's r between the manual and automatic volumes were 0.996 for subcutaneous fat, 0.997 for visceral fat, and 0.941 for psoas muscle. On average, segmentation of a full scan was performed in about 15 seconds. CONCLUSION The results show that accurate fully automatic segmentation of subcutaneous fat, visceral fat and psoas muscle from full abdominal CT scans is feasible. CLINICAL RELEVANCE/APPLICATION The proposed method allows fast and fully automatic analysis of 3D body composition in abdominal CT that can aid in individualized risk assessment in cardiovascular disease and cancer.},
howpublished = {Radiological Society of North America, 105th Annual Meeting},
month = {12},
keywords = {},
pubstate = {published},
tppubtype = {booklet}
}
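Two steps in the abstract above translate directly to short array operations: averaging the probabilistic outputs of the 5 networks and taking the per-voxel argmax, and scoring the result with the Dice coefficient. An illustrative sketch, not the authors' code:

```python
import numpy as np

def ensemble_labels(prob_maps):
    """Average per-network class probability maps (stacked on axis 0) and
    assign each voxel to the class with the highest mean probability."""
    return np.mean(np.asarray(prob_maps, float), axis=0).argmax(axis=-1)

def dice(mask_a, mask_b):
    """Dice coefficient between two binary segmentation masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```
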
5. B.D. de Vos, N. Lessmann, P.A. de Jong, M.A. Viergever, I. Išgum
Direct coronary artery calcium scoring in low-dose chest CT using deep learning analysis Abstract
In: Radiological Society of North America, 103rd Annual Meeting, 2017.
@booklet{deVos2017b,
title = {Direct coronary artery calcium scoring in low-dose chest CT using deep learning analysis},
author = {B.D. de Vos, N. Lessmann, P.A. de Jong, M.A. Viergever, I. Išgum},
year = {2017},
date = {2017-11-28},
booktitle = {Radiological Society of North America, 103rd Annual Meeting},
abstract = {PURPOSE Coronary artery calcium (CAC) score determined in screening with low-dose chest CT is a strong and independent predictor of cardiovascular events (CVE). However, manual CAC scoring in these images is cumbersome. Existing automatic methods detect CAC lesions and thereafter quantify them. However, precise localization of lesions may not be needed to facilitate identification of subjects at risk of CVE. Hence, we have developed a deep learning system for fully automatic, real-time and direct calcium scoring, circumventing the need for intermediate detection of CAC lesions. METHOD AND MATERIALS The study included a set of 1,546 baseline CT scans from the National Lung Screening Trial. Three experts defined the reference standard by manually identifying CAC lesions that were subsequently quantified using the Agatston score. The designed convolutional neural network analyzed axial slices and predicted the corresponding Agatston score. Per-subject Agatston scores were determined as the sum of per-slice scores. Each subject was assigned to one of five cardiovascular risk categories (Agatston score: 0, 1-10, 10-100, 100-400, >400). The system was trained with 75% of the scans and tested with the remaining 25%. Correlation between manual and automatic CAC scores was determined using the intraclass correlation coefficient (ICC). Agreement of CVD risk categorization was evaluated using accuracy and Cohen's linearly weighted κ. RESULTS In the 386 test subjects, the median (Q1-Q3) reference Agatston score was 54 (1-321). By the reference, 95, 37, 86, 94 and 75 subjects were assigned to the 0, 1-10, 10-100, 100-400 and >400 risk categories, respectively. The ICC between the automatic and reference scores was 0.95. The method assigned 85% of subjects to the correct risk category with a κ of 0.90. The score was determined in <2 seconds per CT. CONCLUSION Unlike previous automatic CAC scoring methods, the proposed method allows for quantification of coronary calcium burden without the need for intermediate identification or segmentation of separate CAC lesions. The system is robust and performs analysis in real-time. CLINICAL RELEVANCE/APPLICATION The proposed method may allow real-time identification of subjects at risk of a CVE undergoing CT-based lung cancer screening without the need for intermediate segmentation of coronary calcifications.},
month = {11},
keywords = {},
pubstate = {published},
tppubtype = {booklet}
}
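The per-subject scoring and risk categorization described above can be sketched in a few lines. This is an illustration under stated assumptions, not the study's code: the category boundaries overlap in the abstract's notation (1-10, 10-100, ...), so the cut-offs below assume upper bounds are inclusive.

```python
def subject_score(per_slice_scores):
    """Per-subject Agatston score as the sum of per-slice CNN predictions."""
    return sum(per_slice_scores)

def risk_category(agatston):
    """Assign one of the five cardiovascular risk categories used in the
    abstract (0, 1-10, 10-100, 100-400, >400). Inclusive upper bounds are
    an assumption; the abstract's bin notation overlaps at 10 and 100."""
    if agatston == 0:
        return "0"
    if agatston <= 10:
        return "1-10"
    if agatston <= 100:
        return "10-100"
    if agatston <= 400:
        return "100-400"
    return ">400"
```
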
6. J. Šprem, B.D. de Vos, R. Vliegenthart, M.A. Viergever, P.A. de Jong, I. Išgum
Increasing the Interscan Reproducibility of Coronary Calcium Scoring by Partial Volume Correction in Low-Dose non-ECG Synchronized CT: Phantom Study Abstract
In: Radiological Society of North America, 2015.
@booklet{Šprem2015,
title = {Increasing the Interscan Reproducibility of Coronary Calcium Scoring by Partial Volume Correction in Low-Dose non-ECG Synchronized CT: Phantom Study},
author = {J. Šprem, B.D. de Vos, R. Vliegenthart, M.A. Viergever, P.A. de Jong, I. Išgum},
year = {2015},
date = {2015-11-29},
booktitle = {Radiological Society of North America},
month = {11},
keywords = {},
pubstate = {published},
tppubtype = {booklet}
}