Eight computer science professors in Oregon State University’s College of Engineering have received a $6.5 million grant from the Defense Advanced Research Projects Agency to make artificial-intelligence-based systems like autonomous vehicles and robots more trustworthy.
The success of the deep neural networks branch of artificial intelligence has enabled significant advances in autonomous systems that can perceive, learn, decide and act on their own.
The problem is that these neural networks function as black boxes. Instead of humans explicitly coding a system's behavior using traditional programming, in deep learning the program learns on its own from many examples. Potential dangers arise from depending on a system that not even its developers fully understand.
The four-year grant from DARPA will support the development of a paradigm to look inside that black box, by getting the program to explain to humans how decisions were reached.
“Ultimately, we want these explanations to be very natural – translating these deep network decisions into sentences and visualizations,” said Alan Fern, principal investigator for the grant and associate director of the College of Engineering’s recently established Collaborative Robotics and Intelligent Systems Institute.
Developing such a system that communicates well with humans requires expertise in a number of research fields. In addition to having researchers in artificial intelligence and machine learning, the team includes experts in computer vision, human-computer interaction, natural language processing, and programming languages.
To begin developing the system, the researchers will use real-time strategy games such as StarCraft, a staple of competitive electronic gaming, to train artificial-intelligence “players” that will explain their decisions to humans.
Later stages of the project will move on to applications provided by DARPA that may include robotics and unmanned aerial vehicles.
Fern said the research is crucial to the advancement of autonomous and semi-autonomous intelligent systems.
“Nobody is going to use these emerging technologies for critical applications until we are able to build some level of trust, and having an explanation capability is one important way of building trust,” he said.
The Oregon State researchers were selected for funding under DARPA’s highly competitive Explainable Artificial Intelligence program. Other major universities chosen include Carnegie Mellon, Georgia Tech, the Massachusetts Institute of Technology, Stanford, the University of Texas and the University of California, Berkeley.