Application of Deep Learning to Predict Standardized Uptake Value Ratio and Amyloid Status on 18F-Florbetapir PET Using ADNI Data. AJNR: American Journal of Neuroradiology. Reith F, Koran ME, Davidzon G, Zaharchuk G, Alzheimer's Disease Neuroimaging Initiative. 2020.


BACKGROUND AND PURPOSE: Cortical amyloid quantification on PET by using the standardized uptake value ratio is valuable for research studies and clinical trials in Alzheimer disease. However, it is resource intensive, requiring co-registered MR imaging data and specialized segmentation software. We investigated the use of deep learning to automatically quantify the standardized uptake value ratio and used this for classification.

MATERIALS AND METHODS: Using the Alzheimer's Disease Neuroimaging Initiative dataset, we identified 2582 18F-florbetapir PET scans, which were separated into positive and negative cases by using a standardized uptake value ratio threshold of 1.1. We trained convolutional neural networks (ResNet-50 and ResNet-152) to predict the standardized uptake value ratio and classify amyloid status. We assessed performance based on network depth, number of PET input slices, and use of ImageNet pretraining. We also assessed human performance with 3 readers in a subset of 100 randomly selected cases.

RESULTS: We found that 48% of cases were amyloid positive. The best performance was seen for ResNet-50 using regression before classification, 3 input PET slices, and pretraining, with a standardized uptake value ratio root-mean-square error of 0.054, corresponding to 95.1% correct amyloid status prediction. Using more than 3 slices did not improve performance, but ImageNet initialization did. The best trained network was more accurate than humans (96% versus a mean of 88%, respectively).

CONCLUSIONS: Deep learning algorithms can estimate the standardized uptake value ratio and use it to classify 18F-florbetapir PET scans. Such methods have promise to automate this laborious calculation, enabling rapid quantitative measurements in settings without extensive image processing manpower and expertise.
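The regression-before-classification evaluation described above can be sketched in a few lines: the network predicts a continuous standardized uptake value ratio (SUVR), which is then thresholded at 1.1 (the cutoff reported in the abstract) to assign amyloid status, and regression quality is summarized by root-mean-square error. This is a minimal illustrative sketch, not the study's code; the SUVR values below are made up for demonstration.

```python
import numpy as np

# Amyloid positivity threshold on SUVR, as used in the study (1.1).
AMYLOID_THRESHOLD = 1.1

def classify_amyloid(suvr):
    """Return a boolean array: True where predicted SUVR exceeds the threshold."""
    return np.asarray(suvr, dtype=float) > AMYLOID_THRESHOLD

def rmse(predicted, reference):
    """Root-mean-square error between predicted and reference SUVR values."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((predicted - reference) ** 2)))

# Illustrative values only (not study data): network predictions vs. reference SUVR.
pred = [1.05, 1.32, 0.98, 1.18]
ref = [1.02, 1.29, 1.04, 1.12]

print(rmse(pred, ref))
print(classify_amyloid(pred).tolist())  # [False, True, False, True]
```

The design point is that classification inherits its decision boundary directly from the regression output, so a single trained regressor yields both a continuous SUVR estimate and a binary amyloid status.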

DOI: 10.3174/ajnr.A6573

PubMedID: 32499247