
Scientists Develop New Artificial Intelligence Agent that Glimpses Around and Infers its Environment

Computer scientists at The University of Texas at Austin have taught an artificial intelligence agent to do something that normally only humans can do: take a few quick glimpses around and infer its whole environment.

An artificial intelligence agent learns to infer its whole environment from a few quick glimpses. (Image credit: Jenna Luecke/University of Texas at Austin.)

This skill is needed to develop effective search-and-rescue robots that could one day improve the effectiveness of dangerous missions. The researchers, led by Professor Kristen Grauman, former PhD candidate Dinesh Jayaraman (now at the University of California, Berkeley), and PhD candidate Santhosh Ramakrishnan, recently published their results in the journal Science Robotics.

Most AI agents, the computer systems that could endow robots and other machines with intelligence, are trained for very specific tasks, such as recognizing an object or estimating its volume, in an environment they have experienced before, like a factory. The agent developed by Grauman and Ramakrishnan, by contrast, is general purpose, gathering visual information that can then be used for a wide range of tasks.

We want an agent that’s generally equipped to enter environments and be ready for new perception tasks as they arise. It behaves in a way that’s versatile and able to succeed at different tasks because it has learned useful patterns about the visual world.

Kristen Grauman, Professor, Department of Computer Science, The University of Texas at Austin

The researchers used deep learning, a type of machine learning inspired by the brain's neural networks, to train their agent on a large collection of 360-degree images of different environments.

Now, when the agent is presented with a scene it has never seen before, it uses its experience to choose a few glimpses, like a tourist standing in the middle of a cathedral taking a few photographs in different directions, that together add up to less than 20% of the full scene. What makes the system so effective is that it does not simply take pictures in random directions: after each glimpse, it chooses the next shot that it predicts will add the most new information about the whole scene. This is much like a person in a grocery store they have never visited before; seeing apples, they would expect to find oranges nearby, but to locate the milk they might look the other way. From its glimpses, the agent then infers what it would have seen had it looked in all the other directions, reconstructing a full 360-degree image of its surroundings.

Just as you bring in prior information about the regularities that exist in previously experienced environments—like all the grocery stores you have ever been to—this agent searches in a nonexhaustive way. It learns to make intelligent guesses about where to gather visual information to succeed in perception tasks.

Kristen Grauman, Professor, Department of Computer Science, The University of Texas at Austin
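
The look-around loop described above can be illustrated with a minimal, hypothetical sketch: take a glimpse, update a reconstruction of the panorama, and pick the next viewing direction expected to reveal the most new information. The grid of viewing directions, the averaging "completion model," and the farthest-view heuristic below are stand-ins chosen for illustration; the actual system relies on learned deep networks and a learned glimpse-selection policy.

```python
import numpy as np

# Sketch of active "look-around" perception, assuming the panorama is
# discretized into an N_ELEV x N_AZIM grid of viewing directions. The
# completion model and glimpse heuristic are illustrative stand-ins,
# not the learned networks used in the research.

rng = np.random.default_rng(0)

N_ELEV, N_AZIM = 4, 8      # candidate viewing directions
VIEW_DIM = 16              # feature size of one view
BUDGET = 6                 # glimpses allowed (6 of 32 views, i.e. under 20%)

# Hypothetical ground-truth scene: one feature vector per viewing direction.
scene = rng.normal(size=(N_ELEV, N_AZIM, VIEW_DIM))


def complete_scene(observed, mask):
    """Stand-in for the learned completion network: fill unseen directions
    with the mean of the observed views."""
    mean_view = observed[mask].mean(axis=0)
    return np.where(mask[..., None], observed, mean_view)


def next_glimpse(mask):
    """Stand-in for the learned glimpse policy: pick the unseen direction
    farthest (on the view grid, with azimuth wrap-around) from everything
    already observed, as a proxy for expected new information."""
    obs = np.argwhere(mask)
    scores = np.full(mask.shape, -1.0)
    for e in range(N_ELEV):
        for a in range(N_AZIM):
            if mask[e, a]:
                continue
            d_az = np.minimum(np.abs(obs[:, 1] - a), N_AZIM - np.abs(obs[:, 1] - a))
            scores[e, a] = (np.abs(obs[:, 0] - e) + d_az).min()
    return np.unravel_index(np.argmax(scores), scores.shape)


observed = np.zeros_like(scene)
mask = np.zeros((N_ELEV, N_AZIM), dtype=bool)

e, a = N_ELEV // 2, 0                                # first glimpse: straight ahead
for _ in range(BUDGET):
    observed[e, a], mask[e, a] = scene[e, a], True   # "take a photo" in direction (e, a)
    e, a = next_glimpse(mask)                        # choose the next-best direction

recon = complete_scene(observed, mask)
err = np.linalg.norm(recon - scene) / np.linalg.norm(scene)
print(f"observed {mask.sum()} of {mask.size} views; relative completion error {err:.2f}")
```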

One of the main challenges the researchers set for themselves was to design an agent that can work under tight time constraints, which would be critical in a search-and-rescue application. For example, in a burning building a robot would need to quickly find people, locate flames and hazardous materials, and relay that information to firefighters.

For now, the agent works like a person standing in one spot: it can point a camera in any direction but cannot move to a new position. Equivalently, the agent could gaze at an object it is holding and decide how to turn the object to inspect its other side. The team is now developing the system further to work on a fully mobile robot.

Using supercomputers at UT Austin's Texas Advanced Computing Center and Department of Computer Science, it took about a day to train the agent with an artificial intelligence technique called reinforcement learning. Under Ramakrishnan's leadership, the team also developed a method to speed up training: building a second agent, called a sidekick, to assist the primary agent.

Using extra information that’s present purely during training helps the [primary] agent learn faster.

Santhosh Ramakrishnan, PhD Candidate, Department of Computer Science, The University of Texas at Austin
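
One way to picture the sidekick idea is as training-time reward shaping: because the sidekick can see the full 360-degree scene during training, it can rate how informative each viewing direction is and give the primary agent a bonus for choosing informative views, a signal that disappears at test time. The scoring rule and weighting below are assumptions for illustration, not the published method.

```python
import numpy as np


def sidekick_scores(full_scene):
    """Hypothetical sidekick: with full access to the panorama at training
    time, rate each viewing direction by how much it stands out from the
    scene's average view (a crude proxy for how informative it is)."""
    center = full_scene.reshape(-1, full_scene.shape[-1]).mean(axis=0)
    return np.linalg.norm(full_scene - center, axis=-1)


def shaped_reward(task_reward, chosen_view, full_scene, weight=0.1):
    """Training-time reward for the primary agent: its usual task reward plus
    a bonus when the chosen view is one the sidekick rates as informative.
    At test time the sidekick (and its bonus) is unavailable, so the agent
    must rely on what it has learned."""
    scores = sidekick_scores(full_scene)
    bonus = scores[chosen_view] / (scores.max() + 1e-8)
    return task_reward + weight * bonus


# Example: a random 4x8 panorama of 16-dimensional view features.
scene = np.random.default_rng(1).normal(size=(4, 8, 16))
print(shaped_reward(task_reward=1.0, chosen_view=(2, 3), full_scene=scene))
```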

The study was partly supported by the U.S. Air Force Office of Scientific Research, the U.S. Defense Advanced Research Projects Agency, Sony Corp., and IBM Corp.
