Artificial Intelligence Used to Translate the ‘Language of Behavior’

You may have seen Hollywood actors in “motion capture” suits, performing in full-body costumes speckled with sensors that let a computer transform them into a dragon, a Hulk or an enchanted beast.

The researchers on the project include (from left): Mikhail Kislin, a postdoctoral research associate; Lindsay Willmore, a graduate student; Prof. Joshua Shaevitz; Prof. Sam Wang; Talmo Pereira, a graduate student; and Prof. Mala Murthy. (Photo by Denise Applewhite, Office of Communications)

Now, a partnership between the labs of Princeton professors Mala Murthy and Joshua Shaevitz has gone a step further, using the latest advances in artificial intelligence (AI) to automatically track animals’ individual body parts in ordinary video footage.

Their new tool, LEAP Estimates Animal Pose (LEAP), can be trained within minutes to automatically track an animal’s individual body parts over millions of frames of video with high accuracy, without having to incorporate any physical labels or markers.

The method can be used broadly, across animal model systems, and it will be useful for measuring the behavior of animals with genetic mutations or following drug treatments.

Mala Murthy, Associate Professor of Molecular Biology and the Princeton Neuroscience Institute (PNI)

The paper detailing the new technology will be published in the January 2019 issue of the journal Nature Methods, but its open-access version, released in May, has already led to the software being adopted by a number of other labs.

When the scientists integrate LEAP with other quantitative tools designed in their labs, they can explore what they term “the language of behavior” by detecting patterns in animal body movements, said Shaevitz, a professor of physics and the Lewis-Sigler Institute for Integrative Genomics.

This is a flexible tool that can in principle be used on any video data. The way it works is to label a few points in a few videos and then the neural network does the rest. We provide an easy-to-use interface for anyone to apply LEAP to their own videos, without having any prior programming knowledge.

Talmo Pereira, Study First Author and Graduate Student, PNI

When asked if LEAP worked as well on large mammals as it did on the mice and flies that made up most of the early subjects, Pereira promptly produced a motion-tagged video of a giraffe taken from the live feed at the Mpala Research Centre in Kenya, a field research station where Princeton is a managing partner.

“We took a video of a walking giraffe from the Mpala research station … and labeled points in 30 video frames, which took less than an hour,” Pereira said. “LEAP was then able to track motion from the entire rest of the video (roughly 500 frames) in seconds.”
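For readers who want a concrete picture of that label-a-few-frames-then-predict workflow, here is a minimal, hypothetical Python sketch. It is not the published LEAP code, which ships its own labeling interface and network; the frame size, number of body parts, layer sizes and training settings below are illustrative assumptions, and random arrays stand in for the hand-labeled frames and the confidence maps a tool like LEAP trains on.

import numpy as np
from tensorflow.keras import layers, models

# Illustrative settings (assumptions, not the paper's values)
N_LABELED = 30      # hand-labeled frames, as in the giraffe example
N_PARTS = 12        # number of tracked body parts
H, W = 192, 192     # frame size after cropping/resizing

def build_pose_net():
    """Small encoder-decoder mapping a grayscale frame to one confidence
    map per body part (LEAP-like in spirit, not the exact architecture)."""
    inp = layers.Input(shape=(H, W, 1))
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D()(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D()(x)
    out = layers.Conv2D(N_PARTS, 3, padding="same")(x)
    return models.Model(inp, out)

# Placeholder data: in practice the frames come from the video and the
# confidence maps are Gaussians centred on the user's clicked labels.
frames = np.random.rand(N_LABELED, H, W, 1).astype("float32")
conf_maps = np.random.rand(N_LABELED, H, W, N_PARTS).astype("float32")

model = build_pose_net()
model.compile(optimizer="adam", loss="mse")
model.fit(frames, conf_maps, epochs=15, batch_size=8, verbose=0)

# Predict confidence maps for the rest of the video; the peak of each map
# gives that body part's (row, col) position in each frame.
rest_of_video = np.random.rand(500, H, W, 1).astype("float32")
pred = model.predict(rest_of_video, batch_size=32)
flat = pred.reshape(len(pred), H * W, N_PARTS).argmax(axis=1)
coords = np.stack(np.unravel_index(flat, (H, W)), axis=-1)  # (frames, parts, 2)

The key point the sketch tries to capture is the one Pereira describes: a small network trained on a few dozen labeled frames can then predict body-part positions for the remaining hundreds (or millions) of frames, with the peak of each predicted confidence map giving that part’s location in every frame.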

Previous efforts to develop AI tools that could track human motion have relied on large training sets of manually annotated data. That allowed the software to work robustly on diverse kinds of data, with vastly different backgrounds or lighting conditions.

“In our case, we optimized similar methods to work on data collected in a laboratory setting, in which conditions are consistent across recordings,” said Murthy. “We built a system that allows the user to choose a neural network appropriate for the kind of data that the user collected rather than being constrained by what other researchers or companies have worked on.”

The project came about through a unique partnership between a senior thesis student in the Murthy lab, Diego Aldarondo of the Class of 2018, and his graduate student mentor, Pereira, who is jointly advised by Murthy and Shaevitz.

Diego was exploring the use of deep neural networks for annotating animal behavioral data via one of his computer science classes at Princeton, and over late-night chats in the lab with Talmo, he realized that these methods could be powerfully applied to their own data: videos of fruit flies interacting during their courtship ritual. The collaboration took off from there, and it was incredible fun to work together—Diego and Talmo showed how effective these AI methods can be.

Mala Murthy, Associate Professor of Molecular Biology and the Princeton Neuroscience Institute (PNI)

The research has considerable potential outside of neuroscience as well, said Monica Daley, a senior lecturer at the Structure and Motion Laboratory of the Royal Veterinary College in the United Kingdom, who was not involved in this study.

“Much of my research aims to understand how animals move effectively under different terrain and environmental conditions,” Daley said. “One of the biggest ongoing challenges in the field is pulling meaningful information about animal movement from video footage. We either process videos manually, requiring many hours of tedious work, or focus on very simplistic and limited analysis that can be automated. The algorithms presented in this paper have potential to automate the labor-intensive part of our work more than has been possible previously, which could allow us to study a greater variety of animal locomotor behaviors.”

After they gather a database of motion and behaviors, the neuroscientists on the team can make connections to the neural processes behind them. This will permit scientists “to not only gain a better understanding of how the brain produces behaviors,” said Shaevitz, “but also to explore future diagnostics and therapies that rely on a computer interpreting someone’s actions.”

A similar tool was shared over the summer by a team of Harvard scientists, who used an existing neural network architecture, while the Princeton team developed their own. “Our method and theirs have different advantages,” said Murthy. “This is an incredibly exciting field right now with a lot of activity in developing AI tools for studies of behavior and neural activity.”

“We use a different approach, where smaller, leaner networks can achieve high accuracy by specializing on new datasets quickly,” said Pereira. “More importantly, we show that there are now easy-to-use options for animal pose tracking via AI, and we hope that this encourages the field to begin to adopt more quantitative and precise approaches to measurement of behavior.”

In the last five years, neuroscience has made enormous strides in the technology for observing and manipulating brain activity. Now, automatic classification of behavior adds a critical complement to that technology. Princeton is becoming a central hub in the budding field of computational neuroethology.

Samuel Wang, Study Co-Author and Professor of Molecular Biology and PNI

“Fast animal pose estimation using deep neural networks” by Talmo Pereira, Diego Aldarondo, Lindsay Willmore, Mikhail Kislin, Samuel Wang, Mala Murthy, and Joshua Shaevitz was released online on December 20th and published in the January 2019 issue of Nature Methods. The study was supported by the National Institutes of Health R01-NS104899-01 BRAIN Initiative Award, R01-MH115750 and R01-NS045193; the National Science Foundation through a BRAIN Initiative EAGER Award and GRFP DGE-1148900; the Nancy Lurie Marks Family Foundation; and a Howard Hughes Medical Institute Faculty Scholar Award.

Translating the ‘language of behavior’ with artificially intelligent motion capture

Princeton researchers created LEAP, a flexible motion-capture tool that can be trained in a matter of minutes to track body parts over millions of frames of video with high accuracy, without any physical markers or labels. Here, graduate student Talmo Pereira took giraffe footage from the Mpala Research Centre’s live video feed, labeled 30 frames to train LEAP’s neural network, and LEAP then generated the motion-tracked footage within seconds. (Left: raw video footage courtesy of mpalalive.org. Center and right: courtesy of the researchers.)
