Increasing the value of deep learning for high-dimensional, low-sample-size medical image data.

A common challenge in medical image analysis is that large amounts of data are generated per patient (high-dimensional data) while only few patients are available in the dataset (low sample size). Examples include 4D CT perfusion (CTP) imaging, and 3D segmentation of posterior stroke and multiple sclerosis (MS) lesions.

Convolutional Neural Networks (CNNs) have enabled large improvements in a variety of medical image analysis tasks, such as automatic stroke lesion quantification and ASPECTS scoring. However, CNNs require data from a large number of patients to capture the heterogeneity of patient populations and to obtain accurate, generalisable results.

Consequently, research on how to effectively train CNNs on high-dimensional data with a low sample size is important: it reduces the cost of manually annotating the data and allows CNNs to obtain good results on pathologies with a low prevalence.

This project investigates different strategies based on transfer learning, self-supervised learning and weakly supervised learning to address these challenges.
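To illustrate the first of these strategies, the sketch below shows the core transfer-learning idea in a dependency-light form: a feature extractor is pretrained on a large "source" dataset, then frozen, and only a small new output head is fitted on a low-sample-size "target" dataset. All data, network sizes and learning rates here are hypothetical toy choices for illustration, not the project's actual models or datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: a large source dataset and a small target dataset
# (mimicking the low-sample-size setting). Labels depend on the first 4 features.
Xs = rng.normal(size=(500, 32)); ys = (Xs[:, :4].sum(axis=1) > 0).astype(float)
Xt = rng.normal(size=(20, 32));  yt = (Xt[:, :4].sum(axis=1) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Pretrain a one-hidden-layer network on the large source dataset
# (gradient descent on binary cross-entropy; dLoss/dlogit = p - y).
W1 = rng.normal(scale=0.1, size=(32, 16))   # "backbone" weights
w2 = rng.normal(scale=0.1, size=16)         # source-task head
for _ in range(200):
    h = np.tanh(Xs @ W1)
    p = sigmoid(h @ w2)
    grad_out = (p - ys) / len(ys)
    W1 -= 0.5 * (Xs.T @ (np.outer(grad_out, w2) * (1.0 - h**2)))
    w2 -= 0.5 * (h.T @ grad_out)

# Transfer: freeze the backbone and fit only a fresh head on the small target set.
W1_frozen = W1.copy()
w_head = np.zeros(16)
for _ in range(200):
    h = np.tanh(Xt @ W1_frozen)
    p = sigmoid(h @ w_head)
    w_head -= 0.5 * (h.T @ ((p - yt) / len(yt)))

# Training accuracy of the transferred model on the small target set.
acc = ((sigmoid(np.tanh(Xt @ W1_frozen) @ w_head) > 0.5) == yt).mean()
```

Because only the 16-parameter head is updated on the target task, the 20 target samples have far fewer parameters to constrain, which is precisely why transfer learning helps in the low-sample-size regime; in practice the frozen backbone would be a CNN pretrained on a large imaging corpus rather than this toy network.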