PhD candidate
Department of Biomedical Engineering & Physics
e-mail: j [dot] m [dot] h [dot] noothout [at] amsterdamumc [dot] nl
Phone: +31 20 56 60226
LinkedIn, Google Scholar
Julia Noothout obtained her Bachelor of Science degree in Medicine from Utrecht University in 2013. In 2017 she received her Master of Science degree in Biomedical Image Sciences. This combination of biomedical training and image-processing research allows her to combine her interest in the functionality of the human body with medical imaging.
Her master's thesis focused on segmentation of the aortic arch in low-dose chest CT using weakly supervised training of convolutional neural networks. In June 2017, Julia started her PhD at the Image Sciences Institute at UMC Utrecht, where she joined the Quantitative Medical Image Analysis Group. In 2019, she moved with the group to Amsterdam UMC (location AMC). Her main research topic is deep transfer learning techniques with an application to cardiac spectral CT.
Journal Articles
1. J.M.H. Noothout, B.D. de Vos, J.M. Wolterink, E.M. Postma, P.A.M. Smeets, R.A.P. Takx, T. Leiner, M.A. Viergever, I. Išgum. Deep learning-based regression and classification for automatic landmark localization in medical images. IEEE Transactions on Medical Imaging, 39(12), pp. 4011-4022, 2020. ISSN: 1558-254X. DOI: 10.1109/TMI.2020.3009002. Preprint: https://arxiv.org/pdf/2007.05295.pdf

Abstract: In this study, we propose a fast and accurate method to automatically localize anatomical landmarks in medical images. We employ a global-to-local localization approach using fully convolutional neural networks (FCNNs). First, a global FCNN localizes multiple landmarks through the analysis of image patches, performing regression and classification simultaneously. In regression, displacement vectors pointing from the center of image patches towards landmark locations are determined. In classification, presence of landmarks of interest in the patch is established. Global landmark locations are obtained by averaging the predicted displacement vectors, where the contribution of each displacement vector is weighted by the posterior classification probability of the patch that it is pointing from. Subsequently, for each landmark localized with global localization, local analysis is performed. Specialized FCNNs refine the global landmark locations by analyzing local sub-images in a similar manner, i.e. by performing regression and classification simultaneously and combining the results. Evaluation was performed through localization of 8 anatomical landmarks in CCTA scans, 2 landmarks in olfactory MR scans, and 19 landmarks in cephalometric X-rays. We demonstrate that the method performs similarly to a second observer and is able to localize landmarks in a diverse set of medical images, differing in image modality, image dimensionality, and anatomical coverage.
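The global localization rule described in the abstract, averaging patch-wise displacement-vector votes weighted by each patch's posterior classification probability, can be illustrated in a few lines. This is a minimal NumPy sketch of that weighting rule, not the authors' code; array names and shapes are assumptions:

```python
import numpy as np

def estimate_landmark(patch_centers, displacements, probs):
    """Probability-weighted average of patch-wise landmark votes.

    patch_centers: (N, 3) centers of the analyzed image patches (voxel coords)
    displacements: (N, 3) predicted vectors from each patch center to the landmark
    probs:         (N,)   posterior probability that the patch contains the landmark
    """
    votes = patch_centers + displacements           # per-patch landmark estimates
    weights = probs / probs.sum()                   # normalize the posteriors
    return (weights[:, None] * votes).sum(axis=0)   # weighted average location

# Toy example: three patches whose votes agree on the same location.
centers = np.array([[10., 10., 10.], [12., 10., 10.], [10., 14., 10.]])
disps   = np.array([[ 2.,  4.,  0.], [ 0.,  4.,  0.], [ 2.,  0.,  0.]])
probs   = np.array([0.9, 0.8, 0.1])
print(estimate_landmark(centers, disps, probs))     # -> [12. 14. 10.]
```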
Conference Proceedings
1. J.M.H. Noothout, E.M. Postma, S. Boesveldt, B.D. de Vos, P.A.M. Smeets, I. Išgum. Automatic segmentation of the olfactory bulbs in MRI. In: SPIE Medical Imaging, vol. 11596, pp. 115961J, 2021. DOI: 10.1117/12.2580354

Abstract: A decrease in volume of the olfactory bulbs (OBs) is an early marker for neurodegenerative diseases, such as Parkinson's and Alzheimer's disease. Recently, asymmetric OB volumes observed in postmortem MRIs of COVID-19 patients suggested that the olfactory bulbs might play an important role in the entrance of the disease into the central nervous system. Hence, volumetric assessment of the OBs can be valuable for various conditions. Given that manual annotation of the OBs in MRI to determine their volume is tedious, we propose a method for their automatic segmentation. To mitigate the class imbalance caused by the small volume of the OBs, we first localize the center of each OB in a scan using convolutional neural networks (CNNs). We use these center locations to extract a bounding box containing both OBs. Subsequently, the slices present in the bounding box are analyzed by a segmentation CNN that classifies each voxel as left OB, right OB, or background. The method achieved median (IQR) Dice coefficients of 0.84 (0.08) and 0.83 (0.08), and Average Symmetrical Surface Distances of 0.12 (0.08) and 0.13 (0.08) mm for the left and right OB, respectively. Wilcoxon signed-rank tests showed no significant difference between the volumes computed from the reference annotations and the automatic segmentations. Analysis took only 0.20 seconds per scan, and the results indicate that the proposed method could be a first step towards large-scale studies analyzing pathology and morphology of the olfactory bulbs.
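The first stage of the pipeline predicts the OB center locations; the second crops a bounding box around both predicted centers for the segmentation CNN. A minimal sketch of the cropping step, assuming (z, y, x) voxel coordinates and an illustrative margin (the actual box size used in the paper is not stated here):

```python
import numpy as np

def crop_ob_region(volume, center_left, center_right, margin=(4, 8, 8)):
    """Extract a bounding box containing both olfactory bulbs.

    volume:             3D MRI as a (z, y, x) NumPy array
    center_left/right:  predicted OB center coordinates, (z, y, x)
    margin:             padding in voxels around the joint box (assumed value)
    """
    centers = np.stack([center_left, center_right])
    lo = np.maximum(centers.min(axis=0) - margin, 0).astype(int)
    hi = np.minimum(centers.max(axis=0) + np.array(margin) + 1,
                    volume.shape).astype(int)
    crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    return crop, lo  # lo maps the segmentation back to the full volume
```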
2. L.D. van Harten, J.M.H. Noothout, J.J.C. Verhoeff, J.M. Wolterink, I. Išgum. Automatic segmentation of organs at risk in thoracic CT scans by combining 2D and 3D convolutional neural networks. In: Proc. of the SegTHOR challenge at the IEEE International Symposium on Biomedical Imaging, 2019. URL: http://ceur-ws.org/Vol-2349/SegTHOR2019_paper_12.pdf

Abstract: Segmentation of organs at risk (OARs) in medical images is an important step in treatment planning for patients undergoing radiotherapy (RT). Manual segmentation of OARs is often time-consuming and tedious. Therefore, we propose a method for automatic segmentation of OARs in thoracic RT treatment planning CT scans of patients diagnosed with lung, breast or esophageal cancer. The method consists of a combination of a 2D and a 3D convolutional neural network (CNN), where both networks have substantially different architectures. We analyse the performance of these networks individually and show that a combination of both networks produces the best results. With this combination, we achieve average Dice coefficients of 0.84±0.05, 0.94±0.02, 0.91±0.02, and 0.93±0.01 for the esophagus, heart, trachea, and aorta, respectively. These results demonstrate potential for automating segmentation of organs at risk in routine radiotherapy treatment planning.
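The abstract does not spell out how the 2D and 3D predictions are combined; one plausible fusion rule is a (weighted) average of the per-voxel class probabilities followed by an argmax. A sketch under that assumption, with illustrative names and shapes:

```python
import numpy as np

def fuse_2d_3d(probs_2d, probs_3d, weight_2d=0.5):
    """Fuse per-voxel class probabilities of a 2D and a 3D CNN (assumed rule).

    probs_2d, probs_3d: (C, z, y, x) softmax outputs over C classes
                        (background, esophagus, heart, trachea, aorta)
    weight_2d:          relative weight of the 2D network; 0.5 = plain averaging
    """
    fused = weight_2d * probs_2d + (1.0 - weight_2d) * probs_3d
    return fused.argmax(axis=0)  # (z, y, x) label map of the combined prediction
```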
3. J.M.H. Noothout, B.D. de Vos, J.M. Wolterink, I. Išgum. Automatic segmentation of thoracic aorta segments in low-dose chest CT. In: SPIE Medical Imaging, vol. 10574, pp. 105741S, 2018. DOI: 10.1117/12.2293114. Preprint: https://arxiv.org/abs/1810.05727

Abstract: Morphological analysis and identification of pathologies in the aorta are important for cardiovascular diagnosis and risk assessment in patients. Manual annotation is time-consuming and cumbersome in CT scans acquired without contrast enhancement and with low radiation dose. Hence, we propose an automatic method to segment the ascending aorta, the aortic arch and the thoracic descending aorta in low-dose chest CT without contrast enhancement. Segmentation was performed using a dilated convolutional neural network (CNN), with a receptive field of 131×131 voxels, that classified voxels in axial, coronal and sagittal image slices. To obtain a final segmentation, the obtained probabilities of the three planes were averaged per class, and voxels were subsequently assigned to the class with the highest class probability. Two-fold cross-validation experiments were performed where ten scans were used to train the network and another ten to evaluate the performance. Dice coefficients of 0.83, 0.86 and 0.88, and Average Symmetrical Surface Distances (ASSDs) of 2.44, 1.56 and 1.87 mm were obtained for the ascending aorta, the aortic arch, and the descending aorta, respectively. The results indicate that the proposed method could be used in large-scale studies analyzing the anatomical location of pathology and morphology of the thoracic aorta.
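The final-segmentation step described in the abstract, per-class averaging of the probabilities from the three image planes followed by assignment to the most probable class, is straightforward to sketch. A minimal NumPy illustration; input names, shapes, and the common-grid resampling are assumptions of this sketch:

```python
import numpy as np

def combine_planes(p_axial, p_coronal, p_sagittal):
    """Average per-class probabilities from the three orthogonal planes and
    assign each voxel to the class with the highest averaged probability.

    Inputs: (C, z, y, x) probability maps over C classes (background,
    ascending aorta, aortic arch, descending aorta), assumed resampled
    to a common voxel grid.
    """
    mean_probs = np.mean(np.stack([p_axial, p_coronal, p_sagittal]), axis=0)
    return np.argmax(mean_probs, axis=0)  # (z, y, x) label map
```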
4. J.M.H. Noothout, B.D. de Vos, J.M. Wolterink, T. Leiner, I. Išgum. CNN-based Landmark Detection in Cardiac CTA Scans. In: Medical Imaging with Deep Learning (MIDL 2018), 2018. URL: https://openreview.net/forum?id=r1malb3jz

Abstract: Fast and accurate anatomical landmark detection can benefit many medical image analysis methods. Here, we propose a method to automatically detect anatomical landmarks in medical images. Automatic landmark detection is performed with a patch-based fully convolutional neural network (FCNN) that combines regression and classification. For any given image patch, regression is used to predict the 3D displacement vector from the image patch to the landmark. Simultaneously, classification is used to identify patches that contain the landmark. Under the assumption that patches close to a landmark can determine the landmark location more precisely than patches further from it, only those patches that contain the landmark according to classification are used to determine the landmark location. The landmark location is obtained by calculating the average landmark location using the computed 3D displacement vectors. The method is evaluated using detection of six clinically relevant landmarks in coronary CT angiography (CCTA) scans: the right and left ostium, the bifurcation of the left main coronary artery (LM) into the left anterior descending and the left circumflex artery, and the origin of the right, non-coronary, and left aortic valve commissures. The proposed method achieved an average Euclidean distance error of 2.19 mm and 2.88 mm for the right and left ostium respectively, 3.78 mm for the bifurcation of the LM, and 1.82 mm, 2.10 mm and 1.89 mm for the origin of the right, non-coronary, and left aortic valve commissures respectively, demonstrating accurate performance. The proposed combination of regression and classification can be used to accurately detect landmarks in CCTA scans.
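Unlike the probability-weighted averaging of the journal article above, here the classification output gates the votes: only patches classified as containing the landmark contribute, and their displacement-based estimates are averaged uniformly. A minimal sketch of that rule, with assumed array names:

```python
import numpy as np

def detect_landmark(patch_centers, displacements, contains_landmark):
    """Average the votes of patches classified as containing the landmark.

    patch_centers:     (N, 3) patch center coordinates in the CCTA scan
    displacements:     (N, 3) predicted 3D vectors from patch to landmark
    contains_landmark: (N,)   boolean classification output per patch
    """
    votes = (patch_centers + displacements)[contains_landmark]
    return votes.mean(axis=0)  # average location over the positive patches
```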
Abstracts
1. J.M.H. Noothout, B.D. de Vos, J.M. Wolterink, R.A.P. Takx, T. Leiner, I. Išgum. Deep learning for automatic landmark localization in CTA for transcatheter aortic valve implantation. In: Radiological Society of North America, 105th Annual Meeting, 2019. URL: archive.rsna.org/2019/19012721.html

PURPOSE: Fast and accurate automatic landmark localization in CT angiography (CTA) scans can aid treatment planning for patients undergoing transcatheter aortic valve implantation (TAVI). Manual localization of landmarks can be time-consuming and cumbersome. Automatic landmark localization can potentially reduce post-processing time and interobserver variability. Hence, this study evaluates the performance of deep learning for automatic aortic root landmark localization in CTA.

METHOD AND MATERIALS: This study included 672 retrospectively gated CTA scans acquired as part of clinical routine (Philips Brilliance iCT-256 scanner, 0.9 mm slice thickness, 0.45 mm increment, 80-140 kVp, 210-300 mAs, with contrast). The reference standard was defined by manual localization of the left (LH), non-coronary (NCH) and right (RH) aortic valve hinge points, and the right (RO) and left (LO) coronary ostia. To develop and evaluate the automatic method, 412 training, 60 validation, and 200 test CTAs were randomly selected. 100 of the 200 test CTAs were annotated twice by the same observer and once by a second observer to estimate intra- and interobserver agreement. Five CNNs with identical architectures were trained, one for the localization of each landmark. Because treatment planning for TAVI uses distances between landmark points, performance was evaluated at subvoxel level with the Euclidean distance between reference and automatically predicted landmark locations.

RESULTS: Median (IQR) distance errors for the LH, NCH and RH were 2.44 (1.79), 3.01 (1.82) and 2.98 (2.09) mm, respectively. Repeated annotation by the first observer led to distance errors of 2.06 (1.43), 2.57 (2.22) and 2.58 (2.30) mm, and by the second observer to 1.80 (1.32), 1.99 (1.28) and 1.81 (1.68) mm, respectively. Median (IQR) distance errors for the RO and LO were 1.65 (1.33) and 1.91 (1.58) mm, respectively. Repeated annotation by the first observer led to distance errors of 1.43 (1.05) and 1.92 (1.44) mm, and by the second observer to 1.78 (1.55) and 2.35 (1.56) mm, respectively. On average, analysis took 0.3 s per CTA.

CONCLUSION: Automatic landmark localization in CTA approaches second observer performance and thus enables automatic, accurate and reproducible landmark localization without additional reading time.

CLINICAL RELEVANCE/APPLICATION: Automatic landmark localization in CTA can aid in reducing post-processing time and interobserver variability in treatment planning for patients undergoing TAVI.
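The reported metric is the median (IQR) of per-scan Euclidean distances between predicted and reference landmark locations. For reference, a minimal computation of that metric (a generic sketch, not code from the study):

```python
import numpy as np

def median_iqr_error(pred, ref):
    """Median (IQR) Euclidean distance between predicted and reference landmarks.

    pred, ref: (N, 3) landmark coordinates in mm for N scans
    """
    d = np.linalg.norm(pred - ref, axis=1)      # per-scan distance error
    q1, med, q3 = np.percentile(d, [25, 50, 75])
    return med, q3 - q1                         # median and interquartile range
```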