Researchers Demonstrate the Flexible Use of AI in Home, Industry, and Healthcare

Is it possible for robots to change their working techniques to complete difficult operations? Scientists at the Chalmers University of Technology in Sweden have created a new form of AI that can adjust to accomplish tasks in a changing environment by monitoring human behavior.


Robot TIAGo is ready to stack cubes. Image Credit: Maximilian Diehl

The goal is that robots with this level of flexibility will be able to collaborate with humans on a much larger scale.

Robots that work in human environments need to be adaptable to the fact that humans are unique, and that we might all solve the same task in a different way. An important area in robot development, therefore, is to teach robots how to work alongside humans in dynamic environments.

Maximilian Diehl, Main Researcher and Doctoral Student, Electrical Engineering, Chalmers University of Technology

When performing a simple task, like setting a table, humans may handle it in a variety of ways, depending on the circumstances. If a chair is in the way, a person can move it or walk around it. People switch between their right and left hands, take pauses and perform a variety of spontaneous actions.

Robots, on the other hand, do not operate in the same way. To achieve the end goal, robots need accurate programming and directions. In contexts where they must always follow the same pattern, like factory processing lines, this strategy is incredibly efficient. On the other hand, in healthcare or customer service sectors, robots will need to acquire much more adaptable ways of working to effectively engage with people.

In the future we foresee robots accomplishing some basic household activities, such as setting and clearing a table, placing kitchen utensils in the sink, or helping to organize groceries.

Karinne Ramirez-Amaro, Assistant Professor, Electrical Engineering, Chalmers University of Technology

The Chalmers University researchers want to see if they can teach a robot to solve problems in more human-like ways — to create an “explainable AI” that gathers general rather than specific knowledge during a demonstration and then plans a flexible and adaptable path to a long-term objective. Explainable AI (XAI) is a form of artificial intelligence whose decisions and conclusions can be understood by humans.

Teaching a Robot to Stack Objects under Changing Conditions

In a virtual reality environment, the researchers asked several participants to perform the same task, stacking piles of small cubes, twelve times each. The task was carried out differently each time, and the participants’ motions were tracked using a system of laser sensors.

When we humans have a task, we divide it into a chain of smaller sub-goals along the way, and every action we perform is aimed at fulfilling an intermediate goal. Instead of teaching the robot an exact imitation of human behavior, we focused on identifying what the goals were, looking at all the actions that the people in the study performed.

Karinne Ramirez-Amaro, Assistant Professor, Electrical Engineering, Chalmers University of Technology

The researchers’ method enabled the AI to extract the intent behind each sub-goal and build a library of alternative actions for achieving it. The AI then constructed a planning tool for a TIAGo robot, a mobile service robot designed for indoor environments.

Even when the surrounding conditions varied, the robot was able to use the tool to autonomously construct a plan for the specific task of stacking cubes on top of one another.

In a nutshell, the robot was given the task of stacking the cubes and then, based on the circumstances, which varied somewhat from attempt to attempt, it chose a combination of several feasible actions to build a sequence that would complete the task. The result was a great success.
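The planning idea described above can be sketched roughly in code: each sub-goal has a library of interchangeable actions learned from different demonstrations, and the planner chains sub-goals by picking any action that is feasible under the current conditions. This is a minimal illustration only; all names, the action library, and the way conditions are represented are assumptions, not the researchers’ actual implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str    # e.g. which arm to use and on which cubes
    effect: str  # the sub-goal this action achieves

# Library: several interchangeable actions per sub-goal, mirroring how
# different human demonstrations achieved the same intermediate goal.
library = {
    "cube_B_on_A": [Action("pick_left_place(B, A)", "cube_B_on_A"),
                    Action("pick_right_place(B, A)", "cube_B_on_A")],
    "cube_C_on_B": [Action("pick_left_place(C, B)", "cube_C_on_B"),
                    Action("pick_right_place(C, B)", "cube_C_on_B")],
}

def plan(sub_goals, achieved=None, blocked=()):
    """Chain sub-goals; for each unmet goal, pick any feasible action."""
    achieved = set() if achieved is None else achieved
    sequence = []
    for goal in sub_goals:
        if goal in achieved:
            continue  # this sub-goal is already satisfied in the scene
        options = [a for a in library[goal] if a.name not in blocked]
        if not options:
            raise RuntimeError(f"no feasible action for {goal}")
        sequence.append(options[0].name)
        achieved.add(goal)
    return sequence

# Conditions vary between attempts: here the left arm is unavailable
# for the first placement, so the planner falls back to the right arm.
steps = plan(["cube_B_on_A", "cube_C_on_B"],
             blocked={"pick_left_place(B, A)"})
print(steps)  # → ['pick_right_place(B, A)', 'pick_left_place(C, B)']
```

Because the plan is assembled from sub-goals rather than copied from any single demonstration, the same library can produce different action sequences as circumstances change, which is the flexibility the article describes.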

“With our AI, the robot made plans with a 92% success rate after just a single human demonstration. When the information from all twelve demonstrations was used, the success rate reached up to 100%,” says Maximilian Diehl.

The research was presented at IROS 2021, one of the most renowned robotics conferences in the world. In the next phase of the project, the researchers will study how robots can communicate with humans and explain what went wrong, and why, if they fail a task.

Industry and Healthcare

The long-term goal is for robots to assist workers with tasks that could create long-term health issues, such as tightening bolts or nuts on truck wheels. In the healthcare industry, it could be tasks such as delivering and collecting medicine or meals.

“We want to make the job of healthcare professionals easier so that they can focus on tasks which need more attention,” says Karinne Ramirez-Amaro.

Maximilian Diehl further states, “It might still take several years until we see genuinely autonomous and multi-purpose robots, mainly because many individual challenges still need to be addressed, like computer vision, control, and safe interaction with humans. However, we believe that our approach will contribute to speeding up the learning process of robots, allowing the robot to connect all of these aspects and apply them in new situations.”

The study was conducted in partnership with Chris Paxton, an NVIDIA research scientist. The Chalmers AI Research Centre (CHAIR) funded this project.

