Much like “The Little Engine That Could” and its chant of “I think I can, I think I can,” robots may one day talk themselves through a challenge. Researchers sponsored by the Office of Naval Research have been striving to give robots and other autonomous systems the ability to assess themselves.
Xiang Zhi Tan, a PhD student at Carnegie Mellon University’s Robotics Institute, works with a Baxter robot, part of a multi-university research program to enable robots to assess their own performance at tasks. (Image credit: Carnegie Mellon University)
According to Aaron Steinfeld, associate research professor in Carnegie Mellon University’s Robotics Institute, most robots today cannot predict how well they will carry out a task, sense whether a task is going as planned, or judge, once a task is complete, whether they performed it well.
People use several strategies to assess whether, or how well, they can do something, such as drilling a hole in a wall, turning a valve, or throwing a baseball. As robots become increasingly autonomous, Steinfeld added, they will need this ability as well.
Scientists from Carnegie Mellon, Brigham Young University, Tufts University, and the University of Massachusetts Lowell are collaborating in a $7.5 million, five-year Multidisciplinary University Research Initiative (MURI) program to devise methods and metrics for machine self-assessment.
This self-assessment could be as simple as a robot detecting whether a task has been completed satisfactorily, or it could extend to evaluating its own proficiency. In some cases, it could also involve the robot explaining its performance to a human.
“You’d like the robot to be able to explain why it can or why it can’t do a task,” Steinfeld said. A self-driving car, for example, might explain to its occupants why it cannot drop them off at their requested destination. Recognizing a lack of strength, dexterity, or knowledge could help a robot work more effectively with human partners.
Holly Yanco, a professor of computer science at UMass Lowell, said the researchers will investigate self-assessment strategies using dexterous search tasks for robots, such as manipulating objects to reveal their contents, maneuvering limbs to examine obscured items, and adversarial manipulation.
She added that these search tasks scale readily to applications such as urban search and rescue, micro-drone swarms sent to map buildings, and robots deployed for emergency repairs.
In addition to Steinfeld and Yanco, the research team includes Jacob Crandall, associate professor of computer science, and Michael Goodrich, chair of computer science, both of Brigham Young; Matthias Scheutz, professor of cognitive and computer science at Tufts; and Henny Admoni, assistant professor of robotics at CMU. Reid Simmons, research professor of robotics and computer science at CMU, will join the team this fall. Experiments will be carried out at Carnegie Mellon’s Robotics Institute and UMass Lowell’s New England Robotics Validation and Experimentation (NERVE) Center.
The study was one of 24 awards issued this year by the Department of Defense’s highly competitive MURI program.