
Researchers Provide a Glimpse into the Diverse “Intelligence” Spectrum Observed in Current AI Systems

Today, machine learning and artificial intelligence (AI) methods such as deep learning have become an integral part of people’s day-to-day lives.

The heatmap clearly shows that the algorithm makes its ship/not-ship decision on the basis of pixels representing water, not on the basis of pixels representing the ship. (Image credit: Nature Communications, copyright and CC BY license)

For example, these algorithms enhance medical diagnostics, enable translation services and digital speech assistants, and are a crucial part of emerging technologies such as autonomous driving. Based on powerful new computer architectures and an ever-growing amount of data, learning algorithms appear to match human capabilities, and at times even surpass them.

However, to date, users still do not know precisely how AI systems reach their conclusions. As a result, it often remains unclear whether the AI’s decision-making behavior is truly “intelligent” or whether its strategies are only moderately successful on average.

Researchers at TU Berlin, Fraunhofer Heinrich Hertz Institute HHI, and Singapore University of Technology and Design (SUTD) have addressed this question and provided a glimpse into the diverse “intelligence” spectrum observed in current AI systems, examining these systems with a novel technology that enables automated analysis and quantification.

The most important prerequisite for this new technology is a technique previously developed by TU Berlin and Fraunhofer HHI, the so-called Layer-wise Relevance Propagation (LRP) algorithm, which makes visible which input variables an AI system uses to reach its decisions.
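
To make the idea concrete, the following is a minimal illustrative sketch of relevance propagation in the spirit of LRP, applied to a tiny fully connected ReLU network. The network, its random weights, and the stabilizer value are made-up assumptions for demonstration purposes; this is not the authors’ implementation.

```python
# Minimal sketch of an LRP-style epsilon rule for a tiny two-layer ReLU
# network. Weights are random placeholders; only the propagation idea matters.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical network: 4 inputs -> 3 hidden units -> 2 output scores.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)

def forward(x):
    a1 = np.maximum(0, x @ W1 + b1)   # hidden activations (ReLU)
    out = a1 @ W2 + b2                # output scores
    return a1, out

def stabilize(s, eps=1e-2):
    # Keep denominators away from zero (simplified epsilon stabilizer).
    return s + eps * np.where(s >= 0, 1.0, -1.0)

def lrp_epsilon(x, target):
    """Propagate the relevance of one output score back to the inputs."""
    a1, out = forward(x)
    # Start with all relevance on the chosen output neuron.
    R_out = np.zeros_like(out)
    R_out[target] = out[target]
    # Output layer -> hidden layer.
    z2 = a1[:, None] * W2                       # per-connection contributions
    R_hidden = (z2 / stabilize(z2.sum(axis=0) + b2) * R_out).sum(axis=1)
    # Hidden layer -> input layer.
    z1 = x[:, None] * W1
    R_input = (z1 / stabilize(z1.sum(axis=0) + b1) * R_hidden).sum(axis=1)
    return R_input                              # one relevance score per input

x = rng.normal(size=4)
print(lrp_epsilon(x, target=0))                 # "heatmap" over the 4 inputs
```

For images, the same per-input relevance scores are rendered as a pixel-wise heatmap, like the ship/water example shown above.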

The new Spectral Relevance Analysis (SpRAy), which extends the LRP algorithm, can detect and quantify a broad spectrum of learned decision-making behavior. In this way, unwanted decision making can now be identified even in extremely large datasets.
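
The core idea, clustering many per-image relevance heatmaps so that recurring decision strategies become visible, can be sketched as follows. The heatmaps here are random placeholders and the clustering settings are arbitrary assumptions; a real analysis would use LRP heatmaps computed from a trained model.

```python
# Illustrative sketch of the idea behind Spectral Relevance Analysis (SpRAy):
# cluster per-image relevance heatmaps to surface groups of images on which
# the model appears to use a similar (possibly suspect) strategy.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)

# Assume one downsampled relevance heatmap per image (here: 500 fake 8x8 maps).
heatmaps = rng.random((500, 8, 8))
features = heatmaps.reshape(len(heatmaps), -1)   # flatten each map to a vector

# Group heatmaps with similar relevance structure; 5 clusters is an arbitrary
# choice for this sketch.
clustering = SpectralClustering(n_clusters=5, affinity="nearest_neighbors",
                                n_neighbors=10, random_state=0)
labels = clustering.fit_predict(features)

# Inspecting a few images from each cluster shows whether a cluster reflects a
# legitimate strategy or an artifact such as a watermark or the background.
for k in range(5):
    print(f"cluster {k}: {np.sum(labels == k)} heatmaps")
```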

According to Dr Klaus-Robert Müller, Professor for Machine Learning at TU Berlin, the so-called “explainable AI” is one of the most important steps toward the practical application of AI.

“Specifically in medical diagnosis or in safety-critical systems, no AI systems that employ flaky or even cheating problem-solving strategies should be used,” he stated.

With their newly developed algorithms, the researchers were finally able to put any existing AI system to the test and derive quantitative information about it: a whole spectrum emerges, ranging from cheating strategies and naive problem-solving behavior up to highly elaborate “intelligent” strategic solutions.

We were very surprised by the wide range of learned problem-solving strategies. Even modern AI systems have not always found a solution that appears meaningful from a human perspective, but sometimes used so-called ‘Clever Hans Strategies’.

Dr Wojciech Samek, Group Leader, Fraunhofer HHI.

Around 1900, a horse named Clever Hans, which could supposedly count and calculate, was regarded as a scientific sensation. As was discovered later, Hans could not really do math, but in about 90% of cases he was able to derive the correct answer from the reaction of the questioner.

The research group around Wojciech Samek and Klaus-Robert Müller found analogous “Clever Hans” strategies in a number of AI systems. For example, an AI system that won several international image classification competitions a few years ago pursued a strategy that can be considered naive from a human point of view: it classified images primarily on the basis of context. Images were assigned to the category “ship” when there was a lot of water in the picture, and other pictures were categorized as “train” when rails were present. Still other images were assigned the correct category via their copyright watermark. The actual task, namely detecting the concepts of ships or trains, was therefore not solved by this AI system, even though it classified the majority of images correctly.
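
One simple way to make such behavior measurable is to check how much of an image’s relevance mass falls on the object itself versus on its surroundings. The sketch below uses a synthetic heatmap and a hypothetical segmentation mask purely for illustration; it is not part of the study’s method.

```python
# Hedged sketch: quantify a "Clever Hans" symptom by measuring what share of
# the relevance (e.g. from LRP) lies inside the object region. Heatmap and
# mask are synthetic placeholders, not data from the study.
import numpy as np

rng = np.random.default_rng(0)

heatmap = rng.random((224, 224))             # per-pixel relevance for "ship"
object_mask = np.zeros((224, 224), bool)     # hypothetical ship segmentation
object_mask[80:140, 60:180] = True

total = np.abs(heatmap).sum()
on_object = np.abs(heatmap[object_mask]).sum()
print(f"relevance on object: {on_object / total:.1%}")
# A very low share suggests the model keys on context (water, rails,
# watermarks) rather than on the object it is supposed to recognize.
```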

In addition, the researchers found these kinds of faulty problem-solving strategies in some state-of-the-art AI algorithms, the so-called deep neural networks, which until now were considered immune to such lapses. These networks based their classification decisions in part on artifacts created during image preparation that have nothing to do with the actual image content.

Such AI systems are not useful in practice. Their use in medical diagnostics or in safety-critical areas would even entail enormous dangers. It is quite conceivable that about half of the AI systems currently in use implicitly or explicitly rely on such ‘Clever Hans’ strategies. It’s time to systematically check that, so that secure AI systems can be developed.

Dr Klaus-Robert Müller, Professor for Machine Learning, TU Berlin.

Using their novel technology, the researchers were also able to identify AI systems that, rather unexpectedly, have learned “smart” strategies. Examples include systems that have learned to play the Atari games Pinball and Breakout.

“Here the AI clearly understood the concept of the game and found an intelligent way to collect a lot of points in a targeted and low-risk manner. The system sometimes even intervenes in ways that a real player would not,” stated Wojciech Samek.

Beyond understanding AI strategies, our work establishes the usability of explainable AI for iterative dataset design, namely for removing artefacts in a dataset which would cause an AI to learn flawed strategies, as well as helping to decide which unlabeled examples need to be annotated and added so that failures of an AI system can be reduced.

Alexander Binder, Assistant Professor, Singapore University of Technology and Design.

Our automated technology is open source and available to all scientists. We see our work as an important first step in making AI systems more robust, explainable and secure in the future, and more will have to follow. This is an essential prerequisite for general use of AI.

Dr Klaus-Robert Müller, Professor for Machine Learning, TU Berlin.
