New Artificial Intelligence Tool Improves Diagnoses of Brain Aneurysms

A new artificial intelligence (AI) tool could soon help doctors diagnose brain aneurysms: bulges in the brain’s blood vessels that can leak or rupture, potentially leading to brain damage, stroke, or even death.

In this brain scan, the location of an aneurysm is indicated by HeadXNet using a transparent red highlight. (Image credit: Allison Park)

Developed by Stanford University researchers and described in a paper published June 7, 2019, in JAMA Network Open, the AI tool highlights areas of a brain scan that may contain an aneurysm.

There’s been a lot of concern about how machine learning will actually work within the medical field. This research is an example of how humans stay involved in the diagnostic process, aided by an artificial intelligence tool.

Allison Park, Study Co-Lead Author and Graduate Student, Department of Statistics, Stanford University

The tool, built around an algorithm called HeadXNet, improved clinicians’ ability to correctly detect brain aneurysms, at a level equivalent to finding six additional aneurysms per 100 scans that contain aneurysms. It also improved consensus among the interpreting clinicians. Although these experiments showed HeadXNet to be promising, the researchers, who have expertise in neurosurgery, radiology, and machine learning, cautioned that more study is needed to assess how well the tool generalizes before real-time clinical deployment, given differences in imaging protocols and scanner hardware across hospital centers.

The team is planning to address these issues through multi-center collaboration.

Augmented expertise

Combing brain scans for signs of an aneurysm can mean scrolling through hundreds of images. Aneurysms come in many shapes and sizes and bulge out at awkward angles; some register as no more than a blip within the movie-like sequence of images.

Search for an aneurysm is one of the most labor-intensive and critical tasks radiologists undertake. Given the inherent challenges of complex neurovascular anatomy and potential fatal outcome of a missed aneurysm, it prompted me to apply advances in computer science and vision to neuroimaging.

Kristen Yeom, Study Co-Senior Author and Associate Professor, Department of Radiology, Stanford University

Yeom originally brought the concept to the AI for Healthcare Bootcamp operated by Stanford’s Machine Learning Group, which is headed by Andrew Ng, co-senior author of the paper and adjunct professor of computer science. The main challenge was developing an artificial intelligence tool that can precisely process these massive stacks of 3D images and supplement clinical diagnostic practice.

To train the algorithm, Yeom worked with Christopher Chute, a graduate student in computer science, and Park to outline clinically significant aneurysms detectable on 611 computerized tomography (CT) angiogram head scans.

We labelled, by hand, every voxel—the 3D equivalent to a pixel—with whether or not it was part of an aneurysm. Building the training data was a pretty grueling task and there were a lot of data.

Christopher Chute, Study Co-Lead Author and Graduate Student, Department of Computer Science, Stanford University
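The voxel-wise annotation Chute describes amounts to a binary label volume aligned with the scan: every voxel is marked as aneurysm or not. A minimal sketch of that data structure, using NumPy with purely illustrative dimensions and coordinates (not the study’s actual annotation pipeline):

```python
import numpy as np

# Hypothetical CT angiogram volume: (slices, height, width).
scan = np.zeros((220, 512, 512), dtype=np.float32)

# Voxel-wise label volume: 1 where a voxel belongs to an aneurysm, else 0.
label = np.zeros(scan.shape, dtype=np.uint8)

# An annotator marks a small region as aneurysm (illustrative coordinates).
label[100:108, 250:262, 300:310] = 1

# The labeled fraction is typically tiny, which is part of what makes
# both hand annotation and detection so laborious.
aneurysm_fraction = label.mean()
```

Because aneurysm voxels are such a small fraction of the volume, hand-labeling every voxel across 611 scans is the “grueling task” Chute refers to.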

After training, the algorithm decides, for each voxel of a scan, whether an aneurysm is present. The end result of the HeadXNet tool is the algorithm’s conclusions overlaid as a semi-transparent highlight on top of the scan. This representation of the algorithm’s decision makes it easy for clinicians to still see what the scans look like without HeadXNet’s input.
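HeadXNet’s actual rendering code is not reproduced here, but the semi-transparent red highlight described above is a standard alpha blend of a thresholded per-voxel prediction map onto the image. A sketch for a single slice, assuming NumPy and a hypothetical probability map (function name and parameters are illustrative):

```python
import numpy as np

def overlay_prediction(slice_gray, prob_map, threshold=0.5, alpha=0.4):
    """Blend a semi-transparent red highlight over pixels the model flags
    as aneurysm, leaving the rest of the slice untouched."""
    # Promote the grayscale slice (values in [0, 1]) to an RGB image.
    rgb = np.stack([slice_gray] * 3, axis=-1).astype(np.float32)
    mask = prob_map >= threshold
    red = np.array([1.0, 0.0, 0.0], dtype=np.float32)
    # Alpha-blend the red highlight only where the model predicts aneurysm.
    rgb[mask] = (1 - alpha) * rgb[mask] + alpha * red
    return rgb
```

Unhighlighted pixels pass through unchanged, which is what lets clinicians still see the underlying scan exactly as it would appear without the tool.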

We were interested in how these scans with AI-added overlays would improve the performance of clinicians. Rather than just having the algorithm say that a scan contained an aneurysm, we were able to bring the exact locations of the aneurysms to the clinician’s attention.

Pranav Rajpurkar, Study Co-Lead Author and Graduate Student, Department of Computer Science, Stanford University

Eight clinicians tested HeadXNet by evaluating a set of 115 brain scans for aneurysms, once with the help of the tool and once without. Aided by the tool, the clinicians correctly identified more aneurysms, effectively reducing the “miss” rate, and were more likely to agree with one another.

HeadXNet did not affect how long it took the clinicians to reach a diagnosis or their ability to correctly identify scans without aneurysms, a safeguard against telling people they have an aneurysm when they don’t.

To other tasks and institutions

The machine learning techniques at the core of HeadXNet could likely be trained to detect other diseases, both inside and outside the brain. For instance, Yeom envisions a future version that expedites the detection of aneurysms after they have ruptured, saving valuable time in an emergency. However, a significant barrier remains in integrating any AI medical tool with the day-to-day clinical workflow of radiologists across hospitals.

Current scan viewers are not designed to work with deep learning assistance, so the researchers had to custom-build tools to integrate HeadXNet within the viewers. Similarly, differences between real-world data and the data on which the algorithm was trained and tested could reduce the model’s performance. If the algorithm processes data from different kinds of scanners or imaging protocols, or from a patient population outside its training data, it may not work as expected.

“Because of these issues, I think deployment will come faster not with pure AI automation, but instead with AI and radiologists collaborating,” stated Ng. “We still have technical and non-technical work to do, but we as a community will get there, and AI-radiologist collaboration is the most promising path.”

Additional Stanford co-authors are Joe Lou, undergraduate in computer science; Robyn Ball, senior biostatistician at the Quantitative Sciences Unit (also affiliated with Roam Analytics); graduate students Katie Shpanskaya, Rashad Jabarkheel, Lily H. Kim and Emily McKenna; radiology residents Joe Tseng and Jason Ni; Fidaa Wishah, clinical instructor of radiology; Fred Wittber, diagnostic radiology fellow; David S. Hong, assistant professor of psychiatry and behavioral sciences; Thomas J. Wilson, clinical assistant professor of neurosurgery; Safwan Halabi, clinical associate professor of radiology; Sanjay Basu, assistant professor of medicine; Bhavik N. Patel, assistant professor of radiology; and Matthew P. Lungren, assistant professor of radiology.

Hong and Yeom are also members of Stanford Bio-X, the Stanford Maternal and Child Health Research Institute and the Wu Tsai Neurosciences Institute. Patel is also a member of Stanford Bio-X and the Stanford Cancer Institute. Lungren is a member of Stanford Bio-X, the Stanford Maternal and Child Health Research Institute and the Stanford Cancer Institute.
