
Stanford Researchers Design New 4D Camera with Extra-Wide Field of View

Based on a technology first described by Stanford researchers more than two decades ago, a new camera could produce the kind of information-rich images that robots need to navigate the world. The camera creates a four-dimensional image and can capture nearly 140 degrees of information.

Assistant Professor Gordon Wetzstein, left, and postdoctoral research fellow Donald Dansereau with a prototype of the monocentric camera that captured the first single-lens panoramic light fields. (Image credit: L.A. Cicero)

We want to consider what would be the right camera for a robot that drives or delivers packages by air. We’re great at making cameras for humans but do robots need to see the way humans do? Probably not.

Donald Dansereau, a Postdoctoral Fellow in Electrical Engineering

With robotics in mind, Dansereau and Gordon Wetzstein, Assistant Professor of Electrical Engineering, along with colleagues from the University of California, San Diego, have developed the first single-lens, wide-field-of-view light field camera, which they are showcasing at the computer vision conference CVPR 2017 on July 23rd.

With current technology, robots have to move around and collect multiple views if they want to understand certain aspects of their environment, such as the movement and material composition of objects. This camera could let them gather much of the same information in a single image. The researchers also see it being used in augmented and virtual reality technologies as well as in autonomous vehicles.

“It’s at the core of our field of computational photography,” said Wetzstein. “It’s a convergence of algorithms and optics that’s facilitating unprecedented imaging systems.”

From a peephole to a window

The difference between viewing through a conventional camera and the new design is like the difference between looking through a peephole and a window, the researchers said.

A 2D photo is like a peephole because you can’t move your head around to gain more information about depth, translucency or light scattering. Looking through a window, you can move and, as a result, identify features like shape, transparency and shininess.

Donald Dansereau, a Postdoctoral Fellow in Electrical Engineering

That extra information comes from a type of photography called light field photography, first described in 1996 by Stanford professors Marc Levoy and Pat Hanrahan. Light field photography captures the same image as a conventional 2D camera, plus information about the direction and distance of the light hitting the lens, producing what is known as a 4D image. A well-established feature of light field photography is that it lets users refocus images after they are captured, because the images include information about the position and direction of the light. Robots could use this to see through rain and other conditions that would otherwise obscure their vision.
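The after-capture refocusing the article describes is commonly done with a shift-and-sum algorithm over the 4D light field: each sub-aperture view is shifted in proportion to its offset from the center, so objects at a chosen depth line up and stay sharp while everything else blurs. The sketch below is a minimal illustration of that general technique, not the Stanford/UCSD camera's actual pipeline; the array layout and the `refocus` function are assumptions for demonstration.

```python
import numpy as np

def refocus(lightfield, shift):
    """Synthetically refocus a 4D light field by shift-and-sum.

    lightfield: array of shape (U, V, H, W) -- a U x V grid of
        sub-aperture views, each an H x W grayscale image.
    shift: pixels of parallax correction per unit of aperture offset;
        varying this value moves the synthetic focal plane.
    """
    U, V, H, W = lightfield.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0  # central view coordinates
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the
            # central view, then average: points at the chosen depth
            # align and stay sharp, others blur out.
            du = int(round((u - cu) * shift))
            dv = int(round((v - cv) * shift))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```

Sweeping `shift` over a range of values produces a focal stack from a single captured light field, which is what makes after-the-fact refocusing possible.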

The very wide field of view, which covers nearly a third of the circle around the camera, comes from a specially designed spherical lens. However, this lens also created a significant hurdle: how to translate a spherical image onto a flat sensor. Earlier approaches to this problem had been bulky and error-prone, but combining the optics and fabrication expertise of UC San Diego with the signal processing and algorithmic expertise of Wetzstein’s lab yielded a digital solution that not only creates these extra-wide images but improves them.
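To see why a flat sensor is a hurdle for a wide-angle lens, consider how any projection must flatten ray directions spanning 140 degrees onto a plane. The sketch below uses the standard equidistant fisheye model (sensor radius proportional to the angle off the optical axis); it is purely illustrative of the general problem and is not the monocentric-lens mapping the Stanford/UCSD team devised.

```python
import math

def fisheye_project(x, y, z, f):
    """Map a 3D ray direction onto flat sensor coordinates using the
    equidistant fisheye model, r = f * theta. Illustrative only --
    a conventional pinhole model (r = f * tan(theta)) diverges as
    theta approaches 90 degrees, which is why ultra-wide designs
    need a different mapping.
    """
    theta = math.acos(z / math.sqrt(x * x + y * y + z * z))  # angle off axis
    phi = math.atan2(y, x)                                   # azimuth
    r = f * theta                                            # equidistant radius
    return r * math.cos(phi), r * math.sin(phi)
```

Under this model, even a ray 90 degrees off-axis lands at the finite radius `f * pi / 2`, so a nearly hemispheric view fits on a bounded sensor; recovering the image then requires the kind of computational unwarping the article alludes to.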

Robotics up close

This camera system’s wide field of view, potential compact size and comprehensive depth information are all necessary features for imaging systems integrated into wearables, autonomous vehicles, robotics and augmented and virtual reality.

“It could enable various types of artificially intelligent technology to understand how far away objects are, whether they’re moving and what they’re made of,” said Wetzstein. “This system could be helpful in any situation where you have limited space and you want the computer to understand the entire world around it.”

Although it can also function like a conventional camera at long distances, this camera is engineered to excel at close-up images. It would be particularly useful for landing drones, robots that must navigate small spaces and self-driving cars. As part of an augmented or virtual reality system, its depth information could enable smoother renderings of real scenes and better integration between those scenes and virtual components.

The camera is currently a proof of concept, and the team plans to develop a compact prototype next. That version would ideally be small and light enough to test on a robot; a camera that humans could wear might follow.

Many research groups are looking at what we can do with light fields but no one has great cameras. We have off-the-shelf cameras that are designed for consumer photography. This is the first example I know of a light field camera built specifically for robotics and augmented reality. I’m stoked to put it into people’s hands and to see what they can do with it.

Donald Dansereau, a Postdoctoral Fellow in Electrical Engineering
