
Robotic Learning - Simulation-Based Behavior Learning

Simulation-based behavior learning is a technique by which an autonomous mobile robot recognizes its environment from its own actions. The robot moves around in many different environments using embedded behaviors. The action sequences obtained in this way are categorized by self-organizing maps (SOMs), from which the environment structure can be identified.

Basically, the robot performs a series of actions using information sensed from its surrounding environment. These actions are driven by behaviors such as wall-following. In each environment, the action sequences obtained are recorded and converted into environment vectors. The robot then compares new vectors against this stored set to identify environments that are already familiar to it.
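
As a rough sketch of this pipeline, the Python example below converts action sequences into fixed-length environment vectors (here, histograms of consecutive action pairs) and categorizes them with a small one-dimensional self-organizing map. The action names, the vector encoding, and the from-scratch SOM are illustrative assumptions, not the specific implementation used in this research.

import numpy as np

# Hypothetical action vocabulary produced by the embedded behaviors.
ACTIONS = ["forward", "turn_left", "turn_right", "follow_wall"]

def environment_vector(action_sequence):
    """Convert an action sequence into a fixed-length environment vector:
    a normalized histogram of consecutive action pairs."""
    idx = {a: i for i, a in enumerate(ACTIONS)}
    vec = np.zeros(len(ACTIONS) ** 2)
    for a, b in zip(action_sequence, action_sequence[1:]):
        vec[idx[a] * len(ACTIONS) + idx[b]] += 1
    return vec / max(vec.sum(), 1)

class SOM:
    """Minimal self-organizing map: a 1-D grid of weight vectors."""
    def __init__(self, n_units, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.random((n_units, dim))

    def train(self, data, epochs=100, lr=0.5, radius=2.0):
        for epoch in range(epochs):
            decay = 1.0 - epoch / epochs  # shrink learning rate and radius over time
            for x in data:
                # Best-matching unit: the closest weight vector to the input.
                bmu = int(np.argmin(np.linalg.norm(self.weights - x, axis=1)))
                for j in range(len(self.weights)):
                    # Pull the BMU and its grid neighbors toward the input.
                    h = np.exp(-((j - bmu) ** 2) / (2 * (radius * decay + 1e-9) ** 2))
                    self.weights[j] += lr * decay * h * (x - self.weights[j])

    def categorize(self, x):
        return int(np.argmin(np.linalg.norm(self.weights - x, axis=1)))

# Action sequences recorded in two hypothetical environments.
corridor = ["forward", "forward", "follow_wall", "forward"] * 5
open_room = ["forward", "turn_left", "forward", "turn_right"] * 5

som = SOM(n_units=4, dim=len(ACTIONS) ** 2)
som.train([environment_vector(corridor), environment_vector(open_room)])
print(som.categorize(environment_vector(corridor)))   # SOM unit for this environment
print(som.categorize(environment_vector(open_room)))  # unit for the other environment

In this encoding, two environments that elicit similar action patterns map to nearby SOM units; that is the sense in which the map categorizes environment structure.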

Basic Principle

The basic principle of behavior-based control is ‘think the way you act’. This approach is based on a situated, embodied systems design. In behavior-based systems, a behavior is defined as an observable activity pattern arising from robot-environment interactions. Such systems comprise a group of survival behaviors, e.g., obstacle-avoidance. These behaviors couple the actions of the robot directly to its sensory inputs.

More complex behaviors, viz., target-chasing, wall-following, homing, and exploration, are also part of these systems. These behaviors are introduced incrementally into the system until the interactions lead to the desired results. Behavior-based systems have many layers, which share a similar representation and time-scale. Behavior-based robot controllers are capable of storing representations in a distributed way, which ultimately enables deliberation and thus learning.
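
To make the coupling of sensing to action and the layering concrete, here is a minimal priority-based arbitration scheme in Python. The sensor fields, thresholds, and behavior set are hypothetical; real behavior-based controllers (e.g., subsumption architectures) coordinate far richer behavior repertoires.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Sensors:
    front_distance: float  # meters to the nearest obstacle ahead (hypothetical)
    right_distance: float  # meters to the wall on the right (hypothetical)

def avoid_obstacle(s: Sensors) -> Optional[str]:
    """Survival behavior: highest priority, fires only when needed."""
    return "turn_left" if s.front_distance < 0.3 else None

def follow_wall(s: Sensors) -> Optional[str]:
    """Keep a wall on the right at roughly 0.5 m."""
    if s.right_distance > 1.0:
        return None  # no wall in range; defer to lower layers
    return "steer_right" if s.right_distance > 0.5 else "steer_left"

def explore(s: Sensors) -> Optional[str]:
    """Default behavior when nothing else is active."""
    return "forward"

# Layers in priority order: a higher layer suppresses all layers below it.
LAYERS = [avoid_obstacle, follow_wall, explore]

def control_step(sensors: Sensors) -> str:
    for behavior in LAYERS:
        action = behavior(sensors)
        if action is not None:
            return action
    return "stop"

print(control_step(Sensors(front_distance=2.0, right_distance=0.6)))  # steer_right
print(control_step(Sensors(front_distance=0.2, right_distance=0.6)))  # turn_left

Each layer either proposes an action or defers, so survival behaviors such as obstacle-avoidance suppress the more complex ones, and exploration acts as the default.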

The ultimate goal of a robotic learning system is to optimize its performance over its lifetime. Information for robotic control is encoded in the behavior representation, and this information can be stored in a range of data structures that support behavioral learning. Robots capable of learning and adapting are a fundamental demonstration of machine intelligence and will continue to evolve into highly sophisticated systems for multiple applications.

Though once a fictional concept, research developments are making it possible to teach robots concepts, sensor usage, motor skills, information gathering, navigation, and even the expression of emotions. Learning is enabled by many different techniques, which use previous experience to deliver better, more effective performance. Different robots may need different types of learning. For instance, industrial robots are designed to repeat the same task on a daily basis, whereas a mobile office assistant robot will need to learn new tasks every day and adapt to emerging situations.

Types of Learning

A major design decision is whether robotic learning should happen in real time while performing tasks or offline in a simulated environment. For some tasks, the robot may have enough time to learn in real time; for other crucial, time-limited tasks, the robot must master basic skills beforehand and then perform the task. A hybrid approach is to start the training offline and then continue learning in real time on the task, testing new strategies as they arise.

The following are the different types of learning strategies involved:

Artificial Neural Networks – This is supervised learning carried out by adjusting the connection weights between the nodes of a neural network.

Reinforcement Learning – This involves unsupervised learning in which robots learn through trial and error, guided by reward signals.

Evolutionary Learning – This is also unsupervised learning, but here controllers are derived by evolving an initial population of program code.

Learning by Imitation – This kind of learning is biologically inspired and uses a developmental paradigm to help the robot learn by emulation.

Artificial Neural Networks – These networks are algorithms loosely based on the phenomenon of spreading activation. Such networks can accurately encode knowledge imparted during training in the form of connection weights between nodes. Input nodes receive stimulation, which spreads through the network layers and gives rise to an output; this output is then assessed by a trainer, who reinforces or corrects it and thereby changes the future response of the network. Such training is often called robot shaping and essentially involves human-robot interaction. Similar to the human brain, neural networks allow robots to develop and condition their knowledge through repeated experience.
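
The sketch below shows the weight-adjustment idea on a tiny two-layer network trained by gradient descent, with the trainer's feedback represented as target outputs. The sensor-to-steering task and all dimensions are made-up illustrations, not a specific robot-shaping system.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 3 sensor readings -> 1 steering command,
# with the target outputs supplied by a human trainer ("robot shaping").
X = rng.random((100, 3))                  # sensor inputs
y = (X[:, 0] - X[:, 2]).reshape(-1, 1)    # trainer's desired steering

# One hidden layer; learning means adjusting the connection weights.
W1 = rng.normal(0.0, 0.5, (3, 8))
W2 = rng.normal(0.0, 0.5, (8, 1))
lr = 0.1

for epoch in range(500):
    h = np.tanh(X @ W1)   # stimulation spreads through the hidden layer
    out = h @ W2          # the network's current response
    err = out - y         # trainer's assessment of the response
    # Backpropagation: nudge the weights to reduce the error next time.
    grad_W2 = h.T @ err / len(X)
    grad_W1 = X.T @ ((err @ W2.T) * (1 - h ** 2)) / len(X)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print(float(np.mean(err ** 2)))  # the error shrinks with repeated experience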

Reinforcement Learning – In this type of learning, robots use an automated critic that punishes or reinforces actions. The aim of this learning method is to map situations to actions in a way that maximizes reinforcement. Reinforcement learning changes the nature of the programming done by the designer. A basic challenge in this type of learning is how to train a robot to orchestrate its own learning process and select the actions that will yield reward.
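
A minimal tabular Q-learning sketch illustrates the idea: the reward signal plays the role of the automated critic, and the robot learns by trial and error a mapping from states to reward-maximizing actions. The five-cell corridor task and all parameter values are toy assumptions.

import random

random.seed(0)

# Toy task (hypothetical): a 5-cell corridor; reaching cell 4 yields reward.
N_STATES, ACTIONS = 5, [-1, +1]          # move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Trial and error: mostly exploit the best-known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0   # the automated critic
        # Update the state-action value toward reward plus discounted future value.
        best_next = max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy maps each state to the reinforcement-maximizing action.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])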

Evolutionary Learning – This method is inspired by the processes of biological evolution. It involves unsupervised learning processes that use crossover reproduction, in which fit individuals are selected and combined to produce successive generations of controllers. Evolutionary computing comprises several computational approaches, such as evolutionary strategies, genetic algorithms, genetic programming, and classifier systems. Of these, genetic algorithms are the earliest form of evolutionary computing.
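
The sketch below shows the genetic-algorithm flavor of this idea: a population of bit-string "controllers" is evaluated, the fitter half is selected, and crossover plus mutation produces the next generation. The genome encoding and fitness function are toy stand-ins for evaluating a real controller in simulation.

import random

random.seed(0)

GENOME_LEN, POP_SIZE, GENERATIONS = 16, 20, 40

def fitness(genome):
    """Toy stand-in for evaluating a controller in simulation:
    here, fitter genomes simply contain more 1-bits."""
    return sum(genome)

def crossover(a, b):
    """Single-point crossover: combine two fit parents."""
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Selection: keep the fitter half of the population as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    # Reproduction: crossover plus mutation yields the next controller generation.
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(fitness(max(population, key=fitness)))  # fitness of the best evolved controller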

Learning by Imitation – Imitative learning can involve any of the learning approaches mentioned above, though it follows a unique methodology. It assumes that robots can be shown how to behave in given situations. Researchers at MIT have developed robots that can learn by imitating actions demonstrated to them by humans.
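
As a minimal illustration of the imitation idea (not the MIT researchers' method), the sketch below records hypothetical human demonstrations as observation-action pairs and has the robot reproduce the demonstrated action whose observation best matches the current one, i.e., one-nearest-neighbor behavioral cloning.

import numpy as np

# Hypothetical demonstrations: (sensor observation, demonstrated action) pairs
# recorded while a human operator drives the robot.
demos = [
    (np.array([0.9, 0.1]), "turn_left"),    # obstacle close on the right
    (np.array([0.1, 0.9]), "turn_right"),   # obstacle close on the left
    (np.array([0.1, 0.1]), "forward"),      # path clear
]

def imitate(observation):
    """Reproduce the demonstrated action whose observation is most similar."""
    dists = [np.linalg.norm(observation - obs) for obs, _ in demos]
    return demos[int(np.argmin(dists))][1]

print(imitate(np.array([0.8, 0.2])))  # -> "turn_left", imitating the first demo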

One major issue in robotic learning is the need to impart domain knowledge without allowing the robot to over-specialize in any one domain. Robots need the correct balance of coded and learned knowledge, which largely depends on the variety of tasks they are required to perform: the more coded knowledge a robot has, the more constrained it will be to a specific set of tasks.

Sources and Further Reading

  • Liu D., Wang L., Tan K.C. Design and Control of Intelligent Robotic Systems. Berlin, Germany: Springer, 2009.
  • Thrun S. An Approach to Learning Mobile Robot Navigation. Universität Bonn, Institut für Informatik III. Robotics and Autonomous Systems, special issue on Robotic Learning, 1995.
  • Conforth M., Meng Y. An Artificial Neural Network Based Learning Method for Mobile Robot Localization. In: Robotics, Automation and Control, edited by Pavla Pecherková, Miroslav Flídr and Jindřich Duník. Vienna, Austria: I-Tech, October 2008. ISBN 978-953-7619-18-3.
  • Matarić M.J. Learning in behavior-based multi-robot systems: policies, models, and other agents. Cognitive Systems Research 2001; 81–93.
