By Andy Choi
A research team at Purdue University's Visual Perception Lab, led by Zygmunt Pizlo, has been conducting critical research in machine vision. The team's newly developed technology will be licensed and commercialized shortly.
The team has developed a robot named Čapek that can observe and mimic the movements and activities of the research team members. The major objective of this research is to simulate visual perception so that the robot can "see" like humans.
Pizlo, a professor in Purdue's Department of Psychological Sciences, said that the most challenging task in artificial intelligence and robotics is making robots and other machines capable of 3-D visualization similar to that of humans. So far, robotic vision research has concentrated on recording and analyzing 2-D images rather than on 3-D visual perception. The new approach gives the robot the ability to comprehend the 3-D scene in front of it, along with any object present in its field of view.
Pizlo, who has over three decades of expertise in the field, has been working with postdoctoral research associates Yunfeng Li and Tadamasa Sawada. Sawada notes that humans routinely carry out computationally complex cognitive functions, and that making a robot capable of accomplishing such intricate functions is a real challenge.
Figure-ground perception is the human visual system's ability to simplify a picture or scene into a main object while cognitively shifting everything else into the background.
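As a toy illustration of what figure-ground separation means computationally, the sketch below separates a bright "figure" from a darker "ground" with a simple global intensity threshold. This is an assumption made purely for illustration, not the Purdue team's method, which involves far more sophisticated 3-D perception.

```python
# Toy figure-ground separation by global thresholding.
# NOTE: illustrative only; not the Purdue team's actual algorithm.

def figure_ground_mask(image):
    """Return a binary mask: 1 where a pixel belongs to the figure
    (brighter than the mean intensity), 0 where it belongs to the ground."""
    pixels = [p for row in image for p in row]
    threshold = sum(pixels) / len(pixels)  # crude global threshold
    return [[1 if p > threshold else 0 for p in row] for row in image]

# A 4x4 grayscale image: a bright 2x2 square on a dark background.
image = [
    [10,  10,  10, 10],
    [10, 200, 200, 10],
    [10, 200, 200, 10],
    [10,  10,  10, 10],
]

mask = figure_ground_mask(image)
for row in mask:
    print(row)
```

Real vision systems use far richer cues (edges, depth, motion, learned priors), but the essential idea is the same: label each region of the scene as figure or ground.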
According to Pizlo, conventional robotic vision technology employs multiple cameras combined with sensors and laser range finders for detection. Although capable of basic object recognition, these existing systems do not mimic human 3-D perceptual capabilities.