Scientists at Bielefeld University have developed a grasp system with robot hands that autonomously familiarizes itself with new objects. The system works without knowing the properties of the objects (e.g. tools or pieces of fruit) in advance.
The system was developed as part of the large-scale research project Famula at the Cluster of Excellence Cognitive Interaction Technology (CITEC) at Bielefeld University. The insights gained from this work could, for example, help future service robots independently familiarize themselves with working in new households. CITEC has invested nearly one million euros in Famula. The Famula project coordinators present the new system in a new “research_tv” report from Bielefeld University.
Although the robot hands would be strong enough to crush an apple, they meter their force for a fine-touch grip that does not damage delicate objects. This is made possible by linking tactile sensors developed at CITEC with intelligent software. CREDIT: Bielefeld University.
“Our system learns by trying out and exploring on its own, just as babies approach new objects,” said Dr. Helge Ritter, a professor of neuroinformatics who heads the Famula project together with Prof. Dr. Thomas Schack, a cognitive psychologist and sports scientist, and Dr. Sven Wachsmuth, a Privatdozent in robotics.
The CITEC research team is currently working on a robot with two hands that closely resemble human hands in both shape and mobility. For these hands, the robot’s brain must learn to distinguish everyday objects such as dishes, pieces of fruit, or stuffed animals by their shape and color, and to recognize what matters when trying to grasp them.
The Human Being as the Model
“A button can be pressed, and a banana can be held. The system learns to recognize such possibilities as characteristics, and it constructs a model for interacting with and re-identifying the object,” explained Ritter.
To achieve this, the interdisciplinary project combines research in artificial intelligence with work from other disciplines. The research group headed by Thomas Schack, for example, investigated which characteristics study participants considered important when performing grasping actions. In one study, test subjects compared the similarity of more than 100 objects.
It was surprising that weight hardly plays a role. We humans rely mostly on shape and size when we differentiate objects.
Professor Dr. Thomas Schack, Cognitive Psychologist and Sports Scientist
In another study, the test subjects’ eyes were covered and they were asked to grasp cubes that differed in shape, weight, and size. Infrared cameras recorded their hand movements.
Through this, we find out how people touch an object, and which strategies they prefer to use to identify its characteristics. Of course, we also find out which mistakes people make when blindly handling objects.
Dirk Koester, a Member of Schack’s research team
System Puts Itself in the Position of Its “Mentor”
Dr. Robert Haschke, a colleague of Helge Ritter, stands in front of a large metal cage containing the two robot arms and a table with a range of test objects. Acting as a human learning mentor, Dr. Haschke helps the system get acquainted with new objects, telling the robot hands which object on the table to inspect next. To do this, Haschke points to individual objects or gives spoken hints, such as the direction (e.g. “behind, at left”) in which the robot can find an interesting object. Two monitors display how the system perceives its surroundings through color cameras and depth sensors, and how it responds to instructions from humans.
In order to understand which objects they should work with, the robot hands have to be able to interpret not only spoken language but also gestures. They also have to be able to put themselves in the position of a human and ask whether they have understood correctly.
Sven Wachsmuth, CITEC’s Central Labs
In addition to being responsible for the system’s language capabilities, Wachsmuth and his colleagues have also given the system a face. One monitor shows Flobi, a stylized robot head that follows the movements of the hands and the researchers’ instructions, complementing the robot’s actions and language with facial expressions. At present, Flobi is in use in the Famula system in its virtual version.
Understanding Human Interaction
With the Famula project, the CITEC research team is conducting fundamental research that can benefit self-learning robots of the future in both households and industry.
We want to literally understand how we learn to ‘grasp’ our environment with our hands. The robot makes it possible for us to test our findings in reality and to rigorously expose the gaps in our understanding. In doing so, we are contributing to the future use of complex, multi-fingered robot hands, which today are still too costly or complex to be used, for instance, in industry.
Dr. Helge Ritter, Neuroinformatics Professor and Head of the Famula project
Famula is an acronym for “Deep Familiarization and Learning Grounded in Cooperative Manual Action and Language: From Analysis to Implementation.” The project has been running since 2014 and is currently funded through October 2017. Eight research groups from the Cluster of Excellence CITEC are involved. Famula is one of four large-scale projects at CITEC; the others are the walking robot Hector, a robot service apartment, and the virtual coaching space ICSpace. CITEC is funded by the state and federal governments (EXC 277) as part of the Excellence Initiative of the Deutsche Forschungsgemeinschaft (German Research Foundation, DFG).