Introduction
Genetic algorithms are used to train several neural networks by selecting a population of robots according to a fitness function that rewards a robot's ability to explore its environment. This approach is used because hand-programmed strategies are difficult to make work across diverse environments. It is applied to building and updating maps from limited sensor information for tasks such as floor cleaning, as performed by the Roomba robot.
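The selection loop described above can be sketched as a minimal genetic algorithm over controller weight vectors. This is an illustrative assumption, not the actual system: the genome length, truncation selection, and the placeholder fitness function (a stand-in for "area explored", which a real system would measure by running the robot) are all hypothetical.

```python
import random

GENOME_LEN = 8       # number of controller weights (assumed)
POP_SIZE = 20
GENERATIONS = 30

def fitness(genome):
    # Placeholder for "environment coverage": a real system would run
    # the robot and measure the area it explored.
    return sum(abs(w) for w in genome)

def mutate(genome, rate=0.1):
    # Perturb each weight with small Gaussian noise at the given rate.
    return [w + random.gauss(0, 0.5) if random.random() < rate else w
            for w in genome]

def crossover(a, b):
    # One-point crossover between two parent genomes.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]      # truncation selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
```

Because the top half of each generation is carried over unchanged, the best fitness in the population never decreases between generations.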
Genetic algorithms have also been used to acquire a single competence, phototaxis, allowing physical robots to perform a task while modifying their genome strings through communication with one another. In the simulate-and-transfer approach, a controller is first evolved in a simulator with a genetic algorithm, and the resulting solution is then transferred to the robot, for example to push a box towards a light source. Such algorithms are also called physically embedded genetic algorithms because they run on the real robots themselves rather than being executed offline on a host computer.
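The string-modification-via-communication idea can be illustrated with a single encounter step: when two robots meet, they may exchange genetic material by recombining their bit-string genomes. The encoding, the `encounter` function, and the crossover probability are assumptions for illustration, not the published protocol.

```python
import random

def encounter(genome_a, genome_b, crossover_prob=0.5):
    """When two robots meet, probabilistically recombine their strings
    with one-point crossover; otherwise both keep their own genome."""
    if random.random() < crossover_prob:
        cut = random.randrange(1, len(genome_a))
        return (genome_a[:cut] + genome_b[cut:],
                genome_b[:cut] + genome_a[cut:])
    return genome_a, genome_b

# Two robots with random 16-bit genomes meet and (here, always) recombine.
a = [random.randint(0, 1) for _ in range(16)]
b = [random.randint(0, 1) for _ in range(16)]
a2, b2 = encounter(a, b, crossover_prob=1.0)
```

Each position of an offspring genome comes from one of the two parents, so no new gene values are invented by the exchange itself; mutation (not shown) would be a separate step.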
Basic Principle - Working Examples of Learning Robots Using Genetic Algorithms - The ZORC Project
The major goal of the ZORC project was to train a physical robot to walk. The robot learns to control its servo motors and process sensor inputs through a software system developed using the Genetic Programming paradigm.
The Genetic Programming system evolves virtual machine-code programs built from basic commands together with simple arithmetic operators (+, -, *, /) and conditional operators.
After interpretation, the virtual programs were executed in a physics simulator controlling a 3D model of ZORC. A fitness function evaluates each program's behaviour to determine its effect on the robot's movement and control.
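A toy interpreter can make the "virtual machine-code program" idea concrete. The sketch below assumes a linear register-machine encoding over the arithmetic operators mentioned above (with protected division); the register count, instruction format, and the convention that register 0 holds the servo command are hypothetical, not ZORC's actual encoding, and the conditional operators are omitted for brevity.

```python
import operator
import random

# Instruction set: +, -, * and protected division (divide-by-zero -> 1.0),
# a common Genetic Programming convention to keep every program valid.
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul,
       '/': lambda a, b: a / b if b != 0 else 1.0}

def run_program(program, sensors, n_regs=4):
    """Interpret a linear program; registers are seeded from sensor inputs.

    Each instruction is a tuple (op, dst, src1, src2) meaning
    regs[dst] = op(regs[src1], regs[src2]).
    """
    regs = list(sensors)[:n_regs] + [0.0] * max(0, n_regs - len(sensors))
    for op, dst, src1, src2 in program:
        regs[dst] = OPS[op](regs[src1], regs[src2])
    return regs[0]   # register 0 taken as the servo command (assumed)

def random_instruction(n_regs=4):
    # Random instructions like these form the initial GP population.
    return (random.choice(list(OPS)),
            random.randrange(n_regs), random.randrange(n_regs),
            random.randrange(n_regs))

prog = [random_instruction() for _ in range(6)]
out = run_program(prog, sensors=[0.3, -0.7, 1.2, 0.0])
```

A fitness function would then score `run_program` outputs against the simulated robot's resulting motion, as described above.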
Once these programs perform successfully in simulation, they are transferred to the real robot: the interpreter stops controlling the simulated robot and instead drives the sensors and servos of the actual robot. Eventually, a robust walking gait is achieved on the robot.
Future Work
A large number of robots would have to be tested to determine whether this distributed learning algorithm benefits larger colonies; the optimal colony size must also be taken into account. Experimental results show that the robots acquire either the capability to choose between different behaviours or sensor-motor competences. These experiments need to be extended so that the robots acquire both management and behavioural strategies. It is also necessary to analyse whether the experiments can be scaled up as the robots' state space is expanded.