PURPOSE: To compare physicians' ability to read the Alberta Stroke Program Early CT Score (ASPECTS) in patients with a large vessel occlusion within 6 hours of symptom onset when assisted by a machine learning-based automatic software tool versus their unassisted score.

MATERIALS AND METHODS: Fifty baseline CT scans selected from two prior studies (CRISP and GAMES-RP) were read by 3 experienced neuroradiologists who were provided access to a follow-up MRI. The average ASPECTS of these reads was used as the reference standard. Two additional neuroradiologists and 6 non-neuroradiologist readers then read the scans both with and without assistance from the software reader-augmentation program, and reader improvement was determined. The primary hypothesis was that agreement between typical readers and the consensus of the 3 expert neuroradiologists would be higher for software-augmented reads than for unassisted reads. Agreement was measured as the percentage of individual ASPECTS regions (50 cases, 10 regions each; N=500) in which the reader matched the reference standard.

RESULTS: Evaluating without software assistance, typical non-neuroradiologist readers agreed with the expert consensus read in 72% of the 500 ASPECTS regions. The automated software alone agreed in 77%. When the typical readers read the scans in conjunction with the software, agreement improved to 78% (P<0.0001, test of proportions). The software alone achieved correlations for total ASPECTS that were similar to those of the expert readers, who had access to the follow-up MRI to enhance the quality of their reads.

CONCLUSION: Typical readers showed statistically significant improvement in their scoring when scans were read in conjunction with the automated software, achieving agreement rates comparable to those of neuroradiologists.
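The agreement metric above is a simple region-level proportion (matches out of N=500 regions). As a rough illustration, a minimal sketch of an unpaired pooled two-proportion z-test on the aggregate figures (72% vs. 78% of 500 regions) is shown below; note this is not the study's actual analysis, and the reported P<0.0001 presumably reflects a paired, per-reader comparison, so this simplified unpaired test yields a larger p-value.

```python
from math import sqrt, erf

def two_prop_ztest(x1: int, n1: int, x2: int, n2: int):
    """Two-sided pooled two-proportion z-test.

    x1/n1 and x2/n2 are the two observed proportions.
    Returns (z statistic, two-sided p-value).
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                      # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # pooled standard error
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    pval = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, pval

# Illustrative counts from the abstract: 78% (390/500) assisted
# vs. 72% (360/500) unassisted region-level agreement.
z, p = two_prop_ztest(390, 500, 360, 500)
print(f"z = {z:.3f}, p = {p:.4f}")
```

Because each reader scored the same 500 regions with and without assistance, the published analysis would more appropriately treat the data as paired (e.g., a McNemar-style test per reader), which has more power than this unpaired sketch.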
DOI: 10.1016/j.jstrokecerebrovasdis.2021.105829
PubMedID: 33989968