
New Algorithm Enables Robots to Ask Questions

Brown undergraduate Eric Rosen lays out items on a table for Baxter the robot to retrieve. A new algorithm Rosen helped to develop lets Baxter ask a question if it is unsure which item it is supposed to fetch. Credit: Nick Dentamaro / Brown University

Researchers at Brown University have developed an algorithm that enables robots to ask a question when they are confused, helping them fetch objects more reliably, an important capability for future robot assistants.

If a person is asked to grab a wrench from a table covered with wrenches of different sizes, they will probably stop and ask, “Which one?” A team of researchers from Brown University has developed an algorithm that lets robots do the same thing: ask for clarification when they are not sure what a person wants.

The research, from Brown’s Humans to Robots Lab headed by computer science professor Stefanie Tellex, will be presented this spring at the International Conference on Robotics and Automation in Singapore. Tellex’s work focuses on human-robot collaboration, that is, on building robots that can be good helpers to people in the workplace and at home.

Fetching objects is an important task that we want collaborative robots to be able to do. But it’s easy for the robot to make errors, either by misunderstanding what we want, or by being in situations where commands are ambiguous. So what we wanted to do here was come up with a way for the robot to ask a question when it’s not sure.

Stefanie Tellex, Computer Science Professor, Brown University

An algorithm previously developed by Tellex’s lab enables robots to take information from human gestures in addition to speech commands. This mirrors the way people interact all the time: when someone asks for an object, they often point to it as they ask. Tellex and her team showed that robots interpret user commands more accurately when they combine speech with gesture, as in the sketch below.
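
To make the idea concrete, here is a minimal, hypothetical sketch (not the lab’s actual code) of how a speech-based score and a pointing-gesture score might be fused into a single belief over candidate objects. The function names, the toy keyword-matching language model, and the scoring rules are all assumptions for illustration only.

```python
# Hypothetical sketch: fusing speech and pointing-gesture evidence
# into a normalized belief over candidate objects.
import numpy as np

def speech_score(utterance, obj_name):
    # Toy language model: boost objects whose name appears in the utterance,
    # but keep a small floor so no object is ruled out entirely.
    return 1.0 if obj_name in utterance.lower() else 0.1

def gesture_score(point_vector, obj_position, arm_origin):
    # Score objects by how well they line up with the pointing direction.
    point_vector = np.asarray(point_vector, dtype=float)
    to_obj = np.asarray(obj_position, dtype=float) - np.asarray(arm_origin, dtype=float)
    cos_angle = point_vector.dot(to_obj) / (
        np.linalg.norm(point_vector) * np.linalg.norm(to_obj) + 1e-9)
    return max(cos_angle, 0.0) + 1e-3  # keep every score positive

def fused_belief(utterance, point_vector, arm_origin, objects):
    # objects: list of (name, position) pairs; returns normalized probabilities.
    scores = np.array([
        speech_score(utterance, name) * gesture_score(point_vector, pos, arm_origin)
        for name, pos in objects
    ])
    return scores / scores.sum()
```

The point of combining the two signals is that each resolves ambiguity the other leaves behind: speech narrows the request to a type of object, and the pointing direction narrows it to a location.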

The system is not perfect, however. It runs into trouble when several similar objects sit close together, as on a crowded workshop table. Asking for “a wrench” is not precise enough, and even a pointing gesture gives an unclear picture of which wrench is meant when many wrenches are clustered near one another.

What we want in these situations is for the robot to be able to signal that it’s confused and ask a question rather than just fetching the wrong object.

Stefanie Tellex, Computer Science Professor, Brown University

The new algorithm makes that possible by letting the robot quantify how certain it is about what a user wants. When its certainty is high, the robot simply hands over the requested object. When it is unsure, it makes its best guess about what the person wants, then asks for confirmation by hovering its gripper over the object and asking, “This one?”
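
A rough sketch of that decision rule might look like the following. The fixed confidence threshold, the function name, and the return values are illustrative assumptions; the published system presumably handles the trade-off between acting and asking more formally than a hard cutoff.

```python
# Hypothetical sketch: act when confident, ask "This one?" when not.
def act_on_belief(belief, objects, confidence_threshold=0.8):
    # belief: normalized probabilities over candidate objects (same order as objects).
    best_idx = max(range(len(belief)), key=lambda i: belief[i])
    if belief[best_idx] >= confidence_threshold:
        return ("hand_over", objects[best_idx])  # certain: just fetch it
    return ("ask", objects[best_idx])            # uncertain: hover the gripper and ask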

A key feature of the system is that the robot does not ask a question in every single interaction; it asks intelligently, only when asking is worthwhile.

When the robot is certain, we don’t want it to ask a question because it just takes up time, but when it is ambiguous, we want it to ask questions because mistakes can be more costly in terms of time.

Eric Rosen, Undergraduate, Brown University

Although the system asks only an extremely simple question, “it’s able to make important inferences based on the answer,” said David Whitney, a graduate student in Tellex’s lab.

For example, imagine a user asks for a wrench when two wrenches sit on the table. If the user tells the robot that its first guess was wrong, the algorithm deduces that the other wrench must be the one the user wants, and it hands that wrench over without asking another question. Inferences of this kind, known as implicatures, make the algorithm more efficient.
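
In code, the implicature amounts to zeroing out the rejected candidate and renormalizing the belief over the remaining objects, roughly as in this hypothetical sketch (the function name and data layout are assumptions for illustration):

```python
# Hypothetical sketch: after the user says "no" to a guess,
# rule that candidate out and renormalize over the rest.
def update_after_no(belief, rejected_idx):
    updated = list(belief)
    updated[rejected_idx] = 0.0
    total = sum(updated)
    return [b / total for b in updated] if total > 0 else updated
```

With only two wrenches on the table, a single “no” pushes all of the remaining belief onto the other wrench, so the robot can hand it over without asking again.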

The researchers tested the system by bringing untrained participants into the lab to work with Baxter, a widely used research and industrial robot. Participants asked Baxter for objects under a variety of scenarios, while the team set the robot either to ask questions only when uncertain, to ask a question every time, or never to ask questions.

The trials showed that asking questions intelligently using the new algorithm was significantly better, in terms of both speed and accuracy, than either of the other two conditions.

In fact, the system worked so well that participants believed the robot had capabilities it did not actually possess. The researchers used a very simple language model for the study, one that understood only the names of objects.

Yet participants told the researchers they thought the robot could understand prepositional phrases such as “closest to me” or “on the left,” which it could not. Some also assumed the robot was tracking their eye gaze, which it was not. The system was simply making smart inferences after asking an extremely simple question.

Tellex and her team plan to combine the algorithm with more robust speech recognition systems, which could further improve its speed and accuracy.

Ultimately, Tellex says, she hopes systems like this one will help robots become useful collaborators both at work and at home.

The work was partially funded by grants from NASA and the Defense Advanced Research Projects Agency (W911NF-15-1-0503 and D15AP00102).

The paper is titled “Reducing Errors in Object-Fetching Interactions through Social Feedback.”
