New Artificial Intelligence System Enables Accurate Diagnosis and Classification of Intracranial Hemorrhage

Using artificial intelligence, researchers in the Massachusetts General Hospital (MGH) Department of Radiology have created a system that rapidly diagnoses and classifies brain hemorrhages and explains the basis of its decisions, all from relatively small image datasets.

These images show the system’s ability to explain its diagnosis of subarachnoid (left above) and intraventricular (left below) hemorrhage by displaying images with similar appearances (right) from an atlas of images used to train the system. (Image credit: Hyunkwang Lee, Harvard School of Engineering and Applied Sciences, and Sehyo Yune, MD, Massachusetts General Hospital Department of Radiology)

Such a system could become a critical tool for hospital emergency departments evaluating patients with symptoms of a potentially life-threatening stroke, helping them deliver rapid, appropriate treatment. The researchers’ report appears online in Nature Biomedical Engineering.

The availability of large datasets and ever-increasing computational power have greatly advanced machine learning—the process by which computers examine data, detect patterns, and essentially teach themselves to perform a task without direct intervention from a human programmer. Major barriers, however, can keep such systems out of clinical decision making. These include the need for large, well-annotated datasets—earlier imaging-analysis systems that matched physician performance were trained on more than 100,000 images—and the “black box” problem, the inability of such systems to explain how they reached a decision. The U.S. Food and Drug Administration requires any decision-support system to provide data that allow users to evaluate the reasons behind its findings.

“It is somewhat paradoxical to use the words ‘small data’ or ‘explainable’ to describe a study that used deep learning,” stated Hyunkwang Lee, one of the two lead authors of the study and a graduate student at the Harvard School of Engineering and Applied Sciences. “However, in medicine it is especially hard to collect high-quality big data. It is critical to have multiple experts label a dataset to ensure consistency of data. This process is very expensive and time-consuming.”

“Some critics suggest that machine learning algorithms cannot be used in clinical practice, because the algorithms do not provide justification for their decisions. We realized that it is imperative to overcome these two challenges to facilitate the use in health care of machine learning, which has an immense potential to improve the quality of and access to care,” added co-lead author Sehyo Yune, MD, of MGH Radiology.

To train the system, the MGH team began with 904 head CT scans, each comprising roughly 40 individual images, which five MGH neuroradiologists labeled as showing either no hemorrhage or one of five hemorrhage subtypes, based on the location within the brain. To improve the accuracy of this deep-learning system, the team—led by senior author Synho Do, PhD, director of the MGH Radiology Laboratory of Medical Imaging and Computation and an assistant professor of Radiology at Harvard Medical School—built in steps that mimic the way radiologists examine images. These include adjusting parameters such as brightness and contrast to reveal subtle differences that are not immediately apparent, and scrolling through adjacent CT slices to determine whether a finding on a single image reflects a meaningless artifact or a true abnormality.
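The brightness-and-contrast adjustment the team describes corresponds to CT “windowing.” A minimal sketch of how such a preprocessing step might look is below; the window center/width values are common radiology defaults chosen for illustration, not settings reported by the study.

```python
import numpy as np

def apply_window(hu_slice, center, width):
    """Clip a CT slice (in Hounsfield units) to a window and scale to [0, 1]."""
    lo, hi = center - width / 2, center + width / 2
    return (np.clip(hu_slice, lo, hi) - lo) / (hi - lo)

# Stacking several windows as channels exposes subtle density differences
# that a single brightness/contrast setting would hide. These (center, width)
# pairs are common defaults, assumed here for illustration.
WINDOWS = {"brain": (40, 80), "subdural": (80, 200), "bone": (600, 2800)}

def preprocess(hu_slice):
    """Return one multi-channel image, one channel per window setting."""
    return np.stack(
        [apply_window(hu_slice, c, w) for c, w in WINDOWS.values()], axis=-1
    )
```

Feeding all windows at once lets the network see the same slice the way a radiologist would after cycling through several display settings.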

After building the model, the researchers tested it on two separate sets of CT scans: a retrospective set, taken before the system was developed, containing 100 scans with intracranial hemorrhage and 100 without, and a prospective set of 79 scans with hemorrhage and 117 without, taken after the model was developed. On the retrospective set, the system was as accurate at detecting and classifying intracranial hemorrhages as the radiologists who had assessed the scans. On the prospective set, it performed even better than nonexpert human readers.
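Comparisons like this rest on standard detection metrics. As an illustrative sketch (not the paper’s actual evaluation code), sensitivity, specificity, and accuracy for the hemorrhage-versus-no-hemorrhage call can be computed like so:

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy for binary hemorrhage calls.

    y_true / y_pred are sequences of 0 (no hemorrhage) or 1 (hemorrhage).
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    return {
        "sensitivity": tp / (tp + fn),  # fraction of true hemorrhages caught
        "specificity": tn / (tn + fp),  # fraction of normals correctly cleared
        "accuracy": (tp + tn) / len(y_true),
    }
```

For a screening tool in the emergency department, sensitivity is usually the metric that matters most, since a missed hemorrhage is far costlier than a false alarm.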

To address the “black box” issue, the researchers had the system review the training dataset and save the images that most clearly displayed the characteristic features of each of the five hemorrhage subtypes. Drawing on this atlas of distinctive features, the system can display a set of images similar to the CT scan under analysis to explain the basis of its decisions.
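One plausible way to implement such an atlas lookup—offered purely as an assumption-laden sketch, since the paper’s exact mechanism may differ—is to compare the network’s feature vector for the scan under review against stored feature vectors for the atlas images and return the closest matches:

```python
import numpy as np

def nearest_atlas_cases(query_feat, atlas_feats, atlas_ids, k=3):
    """Return the k atlas cases most similar to the query, by cosine similarity.

    query_feat:  1-D feature vector for the scan being analyzed.
    atlas_feats: 2-D array, one row per stored atlas image.
    atlas_ids:   labels/identifiers for the atlas images (hypothetical names).
    """
    q = query_feat / np.linalg.norm(query_feat)
    a = atlas_feats / np.linalg.norm(atlas_feats, axis=1, keepdims=True)
    sims = a @ q                          # cosine similarity to every atlas row
    top = np.argsort(sims)[::-1][:k]      # indices of the k best matches
    return [(atlas_ids[i], float(sims[i])) for i in top]
```

Showing the retrieved atlas images alongside the new scan gives the reader a visual, case-based justification rather than a bare probability.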

Rapid recognition of intracranial hemorrhage, leading to prompt appropriate treatment of patients with acute stroke symptoms, can prevent or mitigate major disability or death. Many facilities do not have access to specially trained neuroradiologists—especially at night or over weekends—which can require non-expert providers to determine whether or not a hemorrhage is the cause of a patient’s symptoms. The availability of a reliable, ‘virtual second opinion’—trained by neuroradiologists—could make those providers more efficient and confident and help ensure that patients get the right treatment.

Michael Lev, MD, Study Co-Author, MGH Radiology.

In addition to providing that much needed virtual second opinion, this system also could be deployed directly onto scanners, alerting the care team to the presence of a hemorrhage and triggering appropriate further testing before the patient is even off the scanner. The next step will be to deploy the system into clinical areas and further validate its performance with many more cases. We are currently building a platform to allow for the widespread application of such tools throughout the department. Once we have this running in the clinical setting, we can evaluate its impact on turnaround time, clinical accuracy, and time to diagnosis.

Shahein Tajmir, MD, Study Co-Author, MGH Radiology.

Mohammad Mansouri, Myeongchan Kim, Claude E. Guerrier, MD, Sarah A. Ebert, MD, Stuart R. Pomerantz, MD, Javier M. Romero, MD, Shahmir Kamalian, MD, and Ramon G. Gonzalez, MD, PhD, all of MGH Radiology, are the other co-authors of the Nature Biomedical Engineering report. The study was supported by National Institutes of Health grant 5U01 EB025153.
