
Real-Time Data for Cost-Effective Nystagmus Diagnosis via Smartphone

Researchers from Florida Atlantic University and collaborators have developed a proof-of-concept deep learning model that uses real-time data to help diagnose nystagmus, a condition marked by involuntary, rhythmic eye movements frequently associated with vestibular or neurological disorders, according to a study published in Cureus.

FAU researchers are also experimenting with a wearable headset equipped with deep learning capabilities to detect nystagmus in real-time. Image Credit: Florida Atlantic University

Artificial intelligence is becoming increasingly important in modern medicine, notably in analyzing medical images to help doctors determine disease severity, make treatment decisions, and track disease progression. Despite these advances, most existing AI models are trained on static data, which limits their flexibility and real-time diagnostic capabilities.

Gold-standard diagnostic methods for detecting nystagmus include videonystagmography (VNG) and electronystagmography (ENG). However, these approaches have significant limitations, including high cost (VNG equipment can exceed $100,000), bulky setups, and patient discomfort during testing. FAU's AI-powered technology offers a cost-effective, patient-friendly alternative for rapid and reliable screening of balance problems and atypical eye movements.

The technology enables patients to record their eye movements with a smartphone, securely submit the video to a cloud-based system, and obtain remote diagnostic analysis from vestibular and balance experts without leaving their homes.

The concept is built around a deep learning framework that analyzes eye movements in real time using facial landmark tracking. The AI system automatically maps 468 facial landmarks and computes slow-phase velocity, a critical parameter for determining nystagmus severity, duration, and direction. It then produces simple graphs and reports that audiologists and other clinicians can easily interpret during virtual consultations.
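
The article does not name the underlying tracker, but a 468-point facial mesh matches Google's open-source MediaPipe Face Mesh, so a minimal sketch of how such a pipeline could extract an eye trace and estimate slow-phase velocity is shown below. Everything here is an assumption for illustration: the library choice, the iris landmark index (MediaPipe's iris refinement adds points 468-477 on top of the base 468-point mesh), the pixel-to-degree calibration, and the saccade threshold.

```python
# Minimal sketch: iris tracking and slow-phase velocity (SPV) estimation from
# smartphone video. Assumes MediaPipe Face Mesh; thresholds and the
# calibration factor are illustrative, not the study's values.
import cv2
import mediapipe as mp
import numpy as np

FAST_PHASE_THRESH = 40.0  # deg/s; assumed cutoff separating saccades from slow drift
PIX_PER_DEG = 10.0        # assumed pixel-to-degree calibration (camera dependent)

def iris_x_trace(video_path: str) -> tuple[np.ndarray, float]:
    """Horizontal iris-center position (pixels) per frame, plus the frame rate."""
    mesh = mp.solutions.face_mesh.FaceMesh(
        static_image_mode=False, max_num_faces=1, refine_landmarks=True)
    cap = cv2.VideoCapture(video_path)
    fps, xs = cap.get(cv2.CAP_PROP_FPS), []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        res = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if res.multi_face_landmarks:
            lm = res.multi_face_landmarks[0].landmark
            # 468 is an iris-center point in the refined mesh (verify which eye).
            xs.append(lm[468].x * frame.shape[1])
        else:
            xs.append(np.nan)  # face lost this frame; filtered out below
    cap.release()
    return np.asarray(xs), fps

def slow_phase_velocity(xs: np.ndarray, fps: float) -> float:
    """Median SPV (deg/s) after discarding fast, saccade-like samples."""
    vel = np.gradient(xs) * fps / PIX_PER_DEG    # per-frame velocity in deg/s
    vel = vel[np.isfinite(vel)]
    slow = vel[np.abs(vel) < FAST_PHASE_THRESH]  # keep slow-drift segments only
    return float(np.median(np.abs(slow)))
```

In a real pipeline the velocity trace would be smoothed and saccades detected more carefully; the hard threshold above only illustrates the idea.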

Results of a pilot study involving 20 participants showed that the AI system's assessments closely matched those obtained with conventional clinical instruments. This early result demonstrates the model's accuracy and its promise of clinical reliability, even at this early stage.
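
The article does not report which statistics quantified that agreement; a conventional way to compare a new instrument against a clinical reference is a correlation coefficient plus Bland-Altman limits of agreement, sketched generically here with hypothetical per-participant SPV pairs:

```python
# Generic agreement check between smartphone-derived and VNG-derived SPV
# values (one pair per participant). Not the study's actual analysis.
import numpy as np
from scipy import stats

def agreement(spv_phone: np.ndarray, spv_vng: np.ndarray) -> dict:
    r, p = stats.pearsonr(spv_phone, spv_vng)   # linear association
    diff = spv_phone - spv_vng
    bias = float(diff.mean())                   # systematic offset
    half_width = 1.96 * diff.std(ddof=1)        # 95% limits of agreement
    return {"pearson_r": float(r), "p_value": float(p),
            "bias": bias, "loa": (bias - half_width, bias + half_width)}
```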

Our AI model offers a promising tool that can partially supplement – or, in some cases, replace – conventional diagnostic methods, especially in telehealth environments where access to specialized care is limited. By integrating deep learning, cloud computing and telemedicine, we’re making diagnosis more flexible, affordable and accessible – particularly for low-income rural and remote communities.

Ali Danesh, Ph.D., Study Principal Investigator, Senior Author and Professor, Department of Communication Sciences and Disorders, Florida Atlantic University

The researchers trained their algorithm on almost 15,000 video frames, using a 70:20:10 ratio for training, testing, and validation. This methodology helped ensure the model was robust and generalizable across a wide range of patient demographics. The AI also applies filtering to remove artifacts such as eye blinks, producing accurate and consistent results.
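
The study's blink filter is not described in the article; a common stand-in is eye-aspect-ratio (EAR) thresholding, in which frames where the eye is mostly closed are masked out before velocities are computed. The landmark ordering and the 0.2 cutoff below are assumptions:

```python
# Sketch of blink-artifact rejection via the eye aspect ratio (EAR).
# EAR thresholding is a standard technique (Soukupova & Cech, 2016); the
# study's actual filter is not specified, and the cutoff here is assumed.
import numpy as np

BLINK_EAR_THRESH = 0.2  # eye treated as closed below this (tunable)

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of one eye's contour landmarks, ordered p1..p6."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical lid-to-lid distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal corner-to-corner distance
    return float((v1 + v2) / (2.0 * h))

def mask_blinks(iris_x: np.ndarray, ears: np.ndarray) -> np.ndarray:
    """Replace gaze samples recorded during (partial) blinks with NaN."""
    out = np.asarray(iris_x, dtype=float).copy()
    out[np.asarray(ears) < BLINK_EAR_THRESH] = np.nan
    return out
```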

The technology is intended to optimize clinical workflows as well as diagnostics. Through telehealth systems, doctors and audiologists can access AI-generated data, compare it with patients' electronic medical records, and create individualized treatment plans. Patients, in turn, benefit from less travel, lower costs, and the convenience of completing follow-up evaluations simply by submitting new recordings from home, which lets clinicians monitor the course of the condition over time.

In addition, FAU researchers are testing a wearable headset equipped with deep learning capabilities to detect nystagmus in real time. Early studies in controlled settings have shown promise, but further development is needed to address challenges such as sensor noise and individual variability.

While still in its early stages, our technology holds the potential to transform care for patients with vestibular and neurological disorders. With its ability to provide non-invasive, real-time analysis, our platform could be deployed widely – in clinics, emergency rooms, audiology centers and even at home.

Harshal Sanghvi, Ph.D., Study First Author, Postdoctoral Fellow, College of Medicine and College of Business, Florida Atlantic University

Sanghvi collaborated closely with his mentors and co-authors on this project, including Abhijit S. Pandya, Ph.D., FAU Department of Electrical Engineering and Computer Science and FAU Department of Biomedical Engineering, and B. Sue Graves, Ed.D., FAU Charles E. Schmidt College of Science, Department of Exercise Science and Health Promotion.

This interdisciplinary initiative brings together collaborators from FAU's College of Business, College of Medicine, College of Engineering and Computer Science, and College of Science, as well as partners from Advanced Research, Marcus Neuroscience Institute (part of Baptist Health) at Boca Raton Regional Hospital, Loma Linda University Medical Center, and Broward Health North. Together, they are working to improve the model's accuracy, expand testing across diverse patient populations, and pursue FDA clearance for broader clinical use.

“As telemedicine becomes an increasingly integral part of health care delivery, AI-powered diagnostic tools like this one are poised to improve early detection, streamline specialist referrals, and reduce the burden on health care providers. Ultimately, this innovation promises better outcomes for patients – regardless of where they live,” added Danesh.

In addition to Pandya and Graves, the study's co-authors include: Shailesh Gupta, M.D., Broward Health North; Kakarla Chalam, M.D., Ph.D., Loma Linda University; Gurnoor S. Gill, FAU College of Medicine; Sandeep K. Reddy, Ph.D., FAU College of Engineering and Computer Science; and Jilene Moxam, Advanced Research LLC.

Research Video of Eye Movements Recorded with a Smartphone

A smartphone was used to record the eye movements of a research participant responding to optokinetic visual stimuli. Parallel recordings were obtained with an infrared camera through a videonystagmography (VNG) system. An AI-based algorithm analyzing the smartphone video replicated the results of the VNG recordings with high accuracy. This method shows promise for remote assessment of abnormal eye movements and could support clinicians in diagnosing patients who are unable to attend in-person appointments. Video Credit: Florida Atlantic University
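
For readers unfamiliar with the stimulus, optokinetic testing typically presents a full-field pattern of stripes drifting at constant velocity. The toy rendering below (arbitrary stripe width, speed, and window size, not the study's protocol) shows the idea:

```python
# Toy optokinetic stimulus: horizontally drifting black-and-white stripes.
# Window size, stripe width, and drift speed are arbitrary demo values.
import cv2
import numpy as np

W, H, STRIPE_PX, SPEED_PX = 800, 480, 80, 4  # frame size, stripe width, px/frame
offset = 0
while True:
    cols = ((np.arange(W) + offset) // STRIPE_PX) % 2  # alternating 0/1 stripes
    frame = np.repeat((cols * 255).astype(np.uint8)[None, :], H, axis=0)
    cv2.imshow("optokinetic stimulus", frame)
    offset += SPEED_PX
    if cv2.waitKey(16) == 27:  # ~60 fps; press Esc to exit
        break
cv2.destroyAllWindows()
```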
