
Researchers Develop Self-Identifying Robot

Body image is not always precise or realistic, but it is a crucial piece of information that shapes how we behave in the world, as any athlete or fashion-conscious person can attest.

Image Credit: Columbia Engineering

The brain is continuously preparing for movement while we play ball or get dressed so that it is possible to move our body without bumping, tripping, or falling.

As infants, humans develop models of their own bodies, and robots are now doing the same. A team from Columbia Engineering stated that they had developed a robot that is, for the first time, capable of learning a model of its whole body from scratch, without human assistance.

In a new article published in Science Robotics, the researchers describe how their robot built a kinematic model of itself and used that model to plan movements, accomplish goals, and avoid obstacles in a range of scenarios. It even automatically detected damage to its body and then compensated for it.

Robot Watches Itself Like an Infant Exploring Itself in a Hall of Mirrors

The researchers placed a robotic arm inside a circle of five streaming video cameras. Through the cameras, the robot observed itself as it moved freely. Like a baby discovering itself for the first time in a hall of mirrors, the robot squirmed and twisted to learn precisely how its body moved in response to various motor inputs.

After roughly three hours, the robot halted. Its onboard deep neural network had finished learning how the robot's movements related to how much space it occupied in its surroundings.
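The article does not give implementation details, but a mapping like the one described, from motor inputs to occupied space, can be sketched as an implicit occupancy model: a network that, given a joint configuration and a 3D query point, predicts whether the body occupies that point. The sketch below is a minimal stand-in with a tiny randomly initialized network; the joint count, layer sizes, and function names are all assumptions for illustration, not the researchers' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

N_JOINTS = 4   # assumed arm with 4 degrees of freedom
HIDDEN = 32    # assumed hidden-layer width

# Random weights stand in for what the real network would learn
# from three hours of camera footage.
W1 = rng.normal(size=(N_JOINTS + 3, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(size=HIDDEN)
b2 = 0.0

def occupancy(joint_angles, point):
    """Predicted probability that `point` (x, y, z) lies inside the
    robot's body when the arm is at `joint_angles`."""
    x = np.concatenate([joint_angles, point])   # condition on pose + query point
    h = np.tanh(x @ W1 + b1)                    # hidden layer
    logit = h @ W2 + b2                         # scalar logit
    return float(1.0 / (1.0 + np.exp(-logit))) # sigmoid -> probability

q = np.array([0.1, -0.4, 0.7, 0.0])   # example joint configuration
p = np.array([0.2, 0.0, 0.5])         # example query point (metres)
prob = occupancy(q, p)
print(f"predicted occupancy: {prob:.3f}")
```

Querying the model over a grid of points for a fixed pose would yield the kind of "flickering cloud" visualization the researchers describe below.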

We were really curious to see how the robot imagined itself. But you can’t just peek into a neural network; it is a black box.

Hod Lipson, Professor, Mechanical Engineering and Director, Creative Machines Lab, Columbia University

The self-image eventually came into being as the researchers experimented with numerous visualization approaches.

Lipson added, “It was a sort of gently flickering cloud that appeared to engulf the robot’s three-dimensional body. As the robot moved, the flickering cloud gently followed it.”

The robot’s self-model was accurate to within 1% of its workspace.
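The simplest way a self-model like this supports the obstacle avoidance described above is as a collision check: candidate joint configurations are kept only if the model predicts the body does not occupy the obstacle's location. The sketch below uses a toy geometric stand-in for the learned model (a sphere whose position depends on one joint); the obstacle position, safety margin, and sampling scheme are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def occupancy(joint_angles, point):
    """Toy stand-in for a learned self-model: the 'body' is a sphere of
    radius 0.3 m whose centre swings with the first joint angle."""
    centre = np.array([np.cos(joint_angles[0]), np.sin(joint_angles[0]), 0.5])
    return 1.0 if np.linalg.norm(point - centre) < 0.3 else 0.0

OBSTACLE = np.array([1.0, 0.0, 0.5])  # hypothetical obstacle point (metres)

def is_safe(joint_angles, margin=0.5):
    # Safe if the model says the body does not occupy the obstacle's location.
    return occupancy(joint_angles, OBSTACLE) < margin

# Rejection-sample random configurations, keeping only collision-free ones --
# the most basic use of a self-model for motion planning.
candidates = rng.uniform(-np.pi, np.pi, size=(200, 4))
safe = [q for q in candidates if is_safe(q)]
print(f"{len(safe)} of {len(candidates)} sampled configurations are collision-free")
```

A real planner would chain such checks along an entire trajectory rather than testing isolated poses, but the filtering principle is the same.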

Self-Modeling Robots Will Lead to More Self-Reliant Autonomous Systems

There are many reasons robots should be able to build models of themselves without help from engineers. Doing so not only reduces labor costs, but also lets a robot keep track of its own wear and tear, and detect and compensate for damage.

The authors contend that this capability is crucial as autonomous systems are increasingly expected to operate independently. For example, an industrial robot could notice that something is not moving properly and compensate, or call for assistance.

We humans clearly have a notion of self. Close your eyes and try to imagine how your own body would move if you were to take some action, such as stretch your arms forward or take a step backward. Somewhere inside our brain we have a notion of self, a self-model that informs us what volume of our immediate surroundings we occupy, and how that volume changes as we move.

Boyuan Chen, Study First Author and Assistant Professor, Duke University

Self-Awareness in Robots

The research is a part of Lipson’s decades-long effort to discover strategies for granting robots a semblance of self-awareness.

“Self-modeling is a primitive form of self-awareness. If a robot, animal, or human has an accurate self-model, it can function better in the world, it can make better decisions, and it has an evolutionary advantage,” stated Lipson.

The researchers acknowledge the limitations, risks, and controversies associated with granting robots greater autonomy through self-awareness.

The level of self-awareness shown in this study is, as Lipson noted, “trivial compared to that of humans, but you have to start somewhere. We have to go slowly and carefully, so we can reap the benefits while minimizing the risks.”

Video: Can robots be sentient? (at Jeff Bezos' MARS 2022)

A robot arm learns to reach a target sphere while avoiding the cuboid obstacle using the learned visual self-model. Video Credit: Boyuan Chen, Jane Nisselson/Columbia Engineering

Journal Reference:

Chen, B., et al. (2022) Full-body visual self-modeling of robot morphologies. Science Robotics. doi:10.1126/scirobotics.abn1944.
