
Artificial Intelligence Used to Detect Anomalies in Power Grid Failures

Finding a fault in a country's power grid can be like trying to locate a needle in an enormous haystack.


Image Credit: metamorworks/Shutterstock.com

Hundreds of thousands of interconnected sensors spread throughout the United States capture data on electric current, voltage, and other critical parameters in real time, often recording several readings per second.

Scientists at the MIT-IBM Watson AI Lab have developed a computationally efficient technique that can automatically identify anomalies in those data streams in real time. The team showed that their artificial intelligence technique, which learns to model the interconnectedness of the power grid, is much better at spotting these problems than several other widely used methods.

Since the machine-learning model they created does not need annotated data on power grid anomalies for training, it is easier to apply in real-world settings where high-quality, labeled datasets are often hard to come by.

The model is also versatile and can be applied to other situations where many interconnected sensors gather and report data, such as traffic monitoring systems. It could, for example, detect traffic bottlenecks or reveal how traffic jams propagate.

In the case of a power grid, people have tried to capture the data using statistics and then define detection rules with domain knowledge to say that, for example, if the voltage surges by a certain percentage, then the grid operator should be alerted. Such rule-based systems, even empowered by statistical data analysis, require a lot of labor and expertise.

Jie Chen, Senior Study Author and Research Staff Member and Manager, MIT-IBM Watson AI Lab

“We show that we can automate this process and also learn patterns from the data using advanced machine-learning techniques,” Jie Chen added.

The co-author is Enyan Dai, an MIT-IBM Watson AI Lab intern and graduate student at the Pennsylvania State University. The study will be presented at the International Conference on Learning Representations (ICLR).

Probing Probabilities

The team started by defining an anomaly as an event that has a low probability of occurring, such as an unexpected spike in voltage. They treat the power grid data as samples from a probability distribution, so if they can estimate the probability densities, they can identify the low-density values in the dataset. The data points that are least likely to occur correspond to anomalies.
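As a rough, hypothetical illustration of that idea (not the authors' code), the sketch below fits a simple Gaussian density to made-up voltage and current readings and flags the lowest-density points as candidate anomalies; the injected spikes, sensor values, and threshold are all assumptions for demonstration, and the paper replaces the plain Gaussian with a far more expressive model.

```python
# Illustrative sketch only: treat sensor readings as samples from an unknown
# distribution, fit a simple density model, and flag the lowest-density points
# as candidate anomalies. All values here are made up for demonstration.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Hypothetical data: 1,000 readings of [voltage, current] around normal operation,
# plus a few injected spikes that should stand out as low-probability events.
normal_readings = rng.normal(loc=[120.0, 10.0], scale=[1.0, 0.5], size=(1000, 2))
spikes = np.array([[135.0, 10.2], [119.5, 18.0], [140.0, 25.0]])
readings = np.vstack([normal_readings, spikes])

# Fit a single Gaussian as the density model (the study uses a normalizing flow,
# which is far more expressive than this).
mu = readings.mean(axis=0)
cov = np.cov(readings, rowvar=False)
density = multivariate_normal(mean=mu, cov=cov)

# Score every reading and flag the lowest-density ones as anomalies.
log_probs = density.logpdf(readings)
threshold = np.quantile(log_probs, 0.005)   # bottom 0.5% of densities
anomalies = np.where(log_probs < threshold)[0]
print("Flagged indices:", anomalies)        # likely includes the injected spikes
```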

Estimating those probabilities is no easy task, particularly since each sample captures multiple time series, and each time series is a set of multidimensional data points recorded over time. Additionally, the sensors that record all that data are dependent on one another, meaning they are linked in a certain configuration and one sensor can sometimes influence others.

To learn the complex conditional probability distribution of the data, the scientists used a special type of deep-learning model called a normalizing flow, which is particularly effective at estimating the probability density of a sample.
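The snippet below is a minimal, assumed sketch of the normalizing-flow idea rather than the paper's model: a single learned affine transform maps toy one-dimensional readings toward a standard normal, and the change-of-variables formula supplies the log-density used to score new readings. The data, layer, and hyperparameters are all illustrative assumptions.

```python
# Minimal normalizing-flow sketch (simplified, assumed setup, not the paper's model):
# learn an invertible map f so that z = f(x) is roughly standard normal, then score
# a reading x by log p(x) = log N(f(x); 0, 1) + log |df/dx|.
import torch

torch.manual_seed(0)

# Toy 1-D "sensor" data drawn from a shifted, scaled distribution.
x = 3.0 + 0.5 * torch.randn(2000, 1)

# A single affine flow layer: z = (x - shift) * exp(-log_scale).
shift = torch.zeros(1, requires_grad=True)
log_scale = torch.zeros(1, requires_grad=True)
optimizer = torch.optim.Adam([shift, log_scale], lr=0.05)

standard_normal = torch.distributions.Normal(0.0, 1.0)

for step in range(500):
    z = (x - shift) * torch.exp(-log_scale)
    # Change of variables: log p(x) = log p(z) + log |dz/dx| = log p(z) - log_scale.
    log_prob = standard_normal.log_prob(z) - log_scale
    loss = -log_prob.mean()                  # maximize the likelihood of the data
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# A reading far from the training distribution receives a much lower density.
def score(value):
    v = torch.tensor([[value]])
    z = (v - shift) * torch.exp(-log_scale)
    return (standard_normal.log_prob(z) - log_scale).item()

print("typical reading :", score(3.0))
print("anomalous spike :", score(7.0))
```

Real normalizing flows stack many such invertible layers so they can represent far more complicated densities, but the scoring principle is the same.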

The researchers augmented the normalizing flow model with a type of graph known as a Bayesian network, which can learn the complex, causal relationship structure between the various sensors. This graph structure lets the scientists spot patterns in the data and estimate anomalies more accurately, Chen explains.

The sensors are interacting with each other, and they have causal relationships and depend on each other. So, we have to be able to inject this dependency information into the way that we compute the probabilities.

Jie Chen, Senior Study Author and Research Staff Member and Manager, MIT-IBM Watson AI Lab

The Bayesian network factorizes, or breaks down, the joint probability of the many time series into less complex conditional probabilities that are much easier to parameterize, learn, and evaluate. This allows the scientists to estimate the likelihood of observing particular sensor readings and to identify those readings that have a low probability of occurring, meaning they are anomalies.
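As a hedged illustration of that factorization, using a made-up three-sensor chain rather than anything from the study, the joint log-probability below is computed as a sum of simple conditional terms; the paper models each conditional with a normalizing flow instead of the plain Gaussians assumed here.

```python
# Hypothetical Bayesian-network factorization over three sensors A -> B -> C:
# p(a, b, c) = p(a) * p(b | a) * p(c | b), so the joint log-probability is a sum
# of simpler conditional terms. All numbers below are illustrative assumptions.
import math

def log_normal(x, mean, std):
    """Log density of a univariate Gaussian."""
    return -0.5 * ((x - mean) / std) ** 2 - math.log(std * math.sqrt(2 * math.pi))

def joint_log_prob(a, b, c):
    # Each conditional here is a simple Gaussian whose mean depends on the parent
    # reading; the study learns much richer conditionals from data.
    return (
        log_normal(a, mean=120.0, std=1.0)        # p(a)
        + log_normal(b, mean=0.1 * a, std=0.5)    # p(b | a)
        + log_normal(c, mean=2.0 * b, std=1.0)    # p(c | b)
    )

print("consistent readings:", joint_log_prob(120.0, 12.0, 24.0))
print("inconsistent spike :", joint_log_prob(120.0, 12.0, 60.0))   # far lower
```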

Their technique is especially powerful because this complex graph structure does not need to be defined in advance; the model can learn the graph on its own, without any supervision.

A Powerful Technique

The team validated the framework by testing how well it could detect anomalies in traffic data, power grid data, and water system data. The test datasets contained anomalies that had already been identified by humans, so the scientists could compare the anomalies their model flagged against the known anomalies in each system.

Their model outperformed all the baselines, detecting a higher percentage of true anomalies in each dataset.
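To give a sense of how such a comparison might be scored (an assumed setup, not the paper's evaluation code), the snippet below computes precision and recall from hypothetical sets of human-labeled and model-flagged anomaly indices.

```python
# Assumed scoring sketch: compare model-flagged time steps against human-labeled
# anomalies and report precision and recall. Indices are hypothetical.
labeled = {120, 455, 890, 1203}          # ground-truth anomaly time steps
flagged = {120, 455, 702, 1203}          # time steps flagged by the model

true_positives = labeled & flagged
precision = len(true_positives) / len(flagged)
recall = len(true_positives) / len(labeled)

print(f"precision = {precision:.2f}, recall = {recall:.2f}")
```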

For the baselines, a lot of them don’t incorporate graph structure. That perfectly corroborates our hypothesis. Figuring out the dependency relationships between the different nodes in the graph is definitely helping us.

Jie Chen, Senior Study Author and Research Staff Member and Manager, MIT-IBM Watson AI Lab

The approach is also flexible: given a large, unlabeled dataset, the model can be fine-tuned to make effective anomaly predictions in other settings, such as traffic patterns.

Once the model is deployed, it would continue learning from a steady stream of new sensor data, adapting to possible drift in the data distribution and maintaining accuracy over time, says Chen.

Though this particular project is nearing its end, he is keen to apply the lessons learned to other areas of deep-learning research, particularly on graphs.

Chen and his colleagues could use this approach to develop models that map other complex, conditional relationships. They also want to investigate how to efficiently learn these models when the graphs become massive, perhaps with millions or billions of interconnected nodes.

Beyond detecting anomalies, the method could also be used to improve the accuracy of forecasts based on such datasets or to streamline other classification techniques.

This study received funding from the MIT-IBM Watson AI Lab and the U.S. Department of Energy.

Reference

Dai, E. and Chen, J. (2022) Graph-Augmented Normalizing Flows for Anomaly Detection of Multiple Time Series. ICLR 2022. https://arxiv.org/abs/2202.07857
