
Explanation Processes Can Make Artificial Intelligence Understandable

Studying X-ray images, sifting through job applications, suggesting new playlists: human-machine interaction has become an important part of contemporary life.


Image Credit: Peshkova/shutterstock.com

Algorithmic decision-making is the basis for such artificial intelligence (AI) processes. But since these processes are usually hard to interpret, they generally prove less useful than expected.

Now, scientists from Paderborn University and Bielefeld University are planning to change this and are debating how the explainability of AI can be enhanced and adapted to the requirements of human users.

The researchers’ study was recently published in the renowned journal IEEE Transactions on Cognitive and Developmental Systems. The team has described explanation as a social practice, wherein both parties jointly build the process of understanding.

Explainability Research

Artificial systems have become complex. This is a serious problem – particularly when humans are held accountable for computer-based decisions.

Philipp Cimiano, Professor and Computer Scientist, Bielefeld University

According to Cimiano, it is important to understand how machine-based decisions are made, particularly in legal contexts or medical prognosis.

Cimiano further pointed out that while certain techniques addressing the explainability of these systems already exist, they do not go far enough. Katharina Rohlfing, a professor at Paderborn University, agreed that more action is urgently required.

Citizens have the right for algorithmic decisions to be made transparent. There are good reasons why this issue is specifically mentioned in the European Union’s General Data Protection Regulation.

Katharina Rohlfing, Professor, Paderborn University

The objective of making algorithms accessible is central to so-called "explainable artificial intelligence" (XAI).

"In explainability research, the focus is currently on the desired outcomes of transparency and interpretability," added Rohlfing, explaining the new study.

Understanding How Decisions are Made

The researchers involved in this study have gone one step further and are studying computer-based explanations from different standpoints. They started from the hypothesis that explanations are understandable to users only if they are not simply presented to them but are created with their active involvement.

As we know from many everyday situations, good explanations are worth nothing if they do not take account of the other person’s knowledge and experience. Anyone who wonders why their application was rejected by an algorithm is not generally interested in finding out about the technology of machine learning, but asks instead about how the data was processed with regard to their own qualifications.

Katharina Rohlfing, Professor, Paderborn University

"When people interact with one another, the dialogue between them ensures that an explanation is tailored to the understanding of the other person. The dialogue partner asks questions for further explanation or can express incomprehension, which is then resolved. In the case of artificial intelligence, there are limitations to this because of the limited scope for interaction," added Professor Rohlfing.

To address this problem, economists, psychologists, sociologists, media researchers and computer scientists are working closely together in an interdisciplinary group.

Together, these experts are studying computer models and complex AI systems, as well as roles in communicative interaction.

Explanation as a Social Practice

The researchers from Paderborn University and Bielefeld University have created a conceptual framework for the social design of explainable AI systems.

Professor Rohlfing added, “Our approach enables AI systems to answer selected questions in such a way that the process can be configured interactively. In this way, an explanation can be tailored to the dialogue partner, and social aspects can be included in decision-making.”

According to the researchers, an explanation is a series of actions brought together by both parties in a kind of social practice. The aim is to guide this process through "scaffolding" and "monitoring," terms that originally come from developmental research.

"To put it simply, scaffolding describes a method in which learning processes are supported by prompts and guidance and are broken down into partial steps. Monitoring means observing and evaluating the reactions of the other party," Professor Rohlfing explained. The team is now aiming to apply these principles to AI systems.
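As a purely illustrative sketch, not taken from the paper, the following Python snippet shows how scaffolding and monitoring might shape an interactive explanation loop: the explanation is broken into partial steps, and after each step the system checks the partner's reaction before deciding whether to elaborate. The step texts, the understood() check and the elaboration strategy are assumptions made for this example, not the authors' implementation.

```python
# Minimal sketch (illustrative only): an explanation delivered in partial
# steps ("scaffolding"), with the partner's reaction checked after each
# step ("monitoring"). All content and checks here are assumptions.

from dataclasses import dataclass
from typing import List


@dataclass
class ExplanationStep:
    summary: str      # short, high-level version of this partial step
    elaboration: str  # more detailed version, offered only if needed


def understood(reply: str) -> bool:
    """Monitoring: crude check of the partner's reaction (assumed heuristic)."""
    return reply.strip().lower() in {"yes", "y", "ok", "understood"}


def explain_interactively(steps: List[ExplanationStep]) -> None:
    """Scaffolding: present the explanation step by step, monitoring the
    partner's response before moving on or offering more detail."""
    for step in steps:
        print(step.summary)
        reply = input("Does that make sense? [yes/no] ")
        if not understood(reply):
            # Adapt to the partner: offer a more detailed partial step.
            print(step.elaboration)


if __name__ == "__main__":
    # Hypothetical job-application scenario, echoing the example in the article.
    steps = [
        ExplanationStep(
            "Your application was compared against the stated job requirements.",
            "Each requirement was matched against entries in your CV, and a score "
            "was computed from the number and strength of the matches.",
        ),
        ExplanationStep(
            "Your qualification score fell below the shortlisting threshold.",
            "Only the top-ranked applications are forwarded to a human reviewer; "
            "your score was just below that cut-off.",
        ),
    ]
    explain_interactively(steps)
```

In this sketch, the partner's involvement determines how much detail is given, which is the sense in which the explanation is constructed jointly rather than simply delivered.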

New Forms of Assistance

The researchers' goal is to extend the present work and offer new answers to societal challenges relating to AI.

The fundamental assumption is that the only effective way to derive better understanding and better action from an explanation is to involve the dialogue partner in the explanation process. At its core, this is about the participation of humans in socio-technical systems.

"Our objective is to create new forms of communication with genuinely explainable and understandable AI systems, and in this way to facilitate new forms of assistance," concluded Professor Rohlfing.

Journal Reference:

Rohlfing, K. J., et al. (2020) Explanation as a social practice: Toward a conceptual framework for the social design of AI systems. IEEE Transactions on Cognitive and Developmental Systems. doi.org/10.1109/TCDS.2020.3044366.
