Current video surveillance technology produces a
two-dimensional image, while a camera with a pair of lenses can deliver
3-D images. Although 3-D offers more detail and better depth information, new
research into camera lenses could provide something dramatically better
still.

A Stanford electronics research team is working on an
image sensor that not only packs in more pixels but also incorporates multiple
lenses into the sensor substrate itself, creating a sensor made up of many
smaller sensors. The result is a 3-megapixel image sensor comprising 12,616
individual on-chip cameras, each combining 256 pixels of 0.7 microns topped
by a single lens. Importantly, the technology can do away with current lens
technology, which means cameras could offer more picture for less money.
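The quoted figures are internally consistent: 12,616 micro-cameras at 256 pixels apiece multiply out to roughly 3.2 megapixels. (The 16 x 16 grid layout in the comment is my assumption; the article only states 256 pixels per camera.)

```python
# sanity check on the sensor's pixel budget
cameras = 12_616
pixels_per_camera = 256        # e.g. a 16 x 16 grid of 0.7-micron pixels (layout assumed)
total_pixels = cameras * pixels_per_camera
print(total_pixels)            # 3,229,696 -- about 3.2 megapixels
```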

If this doesn’t sound like much, consider that a camera
with these specifications would be able to measure the exact distance between a
subject’s eyes, nose, ears and chin, as well as provide 3-D models of all
objects in a scene.

According to the researchers, the camera will allow every
part of an image stream to be in perfect focus, yet the hardware would look
essentially the same as current technology.

It works like this: the lens of the multi-aperture
camera focuses its image about 40 microns (a micron is a millionth of a meter)
above the image sensor arrays. As a result, every point in a scene
is captured by four of the tiny cameras on the image sensor, producing
multiple views with slightly different perspectives.
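Those overlapping views can be converted to depth by triangulation: the apparent shift (disparity) of a feature between two neighboring micro-cameras is inversely proportional to its distance. A toy sketch in Python — the simple shift search and all numbers here are illustrative assumptions, not the Stanford team's actual algorithm:

```python
import numpy as np

def estimate_disparity(view_a, view_b, max_shift=8):
    """Find the integer pixel shift that best aligns view_b to view_a."""
    best_shift, best_err = 0, float("inf")
    for s in range(1, max_shift + 1):
        err = np.mean((view_a[:-s] - view_b[s:]) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

def depth_from_disparity(disparity, baseline, focal):
    # classic triangulation: depth = baseline * focal / disparity
    return baseline * focal / disparity

# two synthetic 1-D views of the same bright edge, offset by 3 pixels
scene = np.zeros(64)
scene[20:30] = 1.0
view_a = scene
view_b = np.roll(scene, 3)

d = estimate_disparity(view_a, view_b)
print(d)  # 3
```

With disparity in hand, `depth_from_disparity` maps it to distance; larger shifts mean closer objects.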

According to the researchers, this detail creates
a depth map that is stored electronically alongside the image and essentially
represents a virtual model of the target area. The beauty of this modeling is
that it allows the image to be manipulated: users could choose to see only
objects at a particular distance, or from a particular perspective, and
nothing else.
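Because every pixel carries a depth value, showing only objects at a chosen distance reduces to masking the image against the depth map. A minimal sketch, assuming a per-pixel depth map in meters (the function name and all values are hypothetical):

```python
import numpy as np

def select_at_depth(image, depth_map, target, tolerance):
    """Keep only pixels whose recorded depth is within tolerance of target."""
    mask = np.abs(depth_map - target) <= tolerance
    return np.where(mask, image, 0)

image = np.array([[10.0, 20.0],
                  [30.0, 40.0]])
depth = np.array([[1.0, 2.0],
                  [2.1, 5.0]])   # meters, one value per pixel

near = select_at_depth(image, depth, target=2.0, tolerance=0.2)
# only the two pixels recorded near 2 m survive; the rest are zeroed
```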

According to the researchers, Professors El Gamal, Fife and Wong, the multi-aperture image sensor has some
key advantages. It’s small and doesn’t require lasers, bulky camera gear,
multiple photos or complex calibration, and they say it has excellent color
quality. Each of the 256 pixels in a given array detects the same color. In
an ordinary digital camera, red pixels may sit next to green pixels,
leading to undesirable “crosstalk” between the pixels that degrades
color.

The sensor can also take advantage of smaller pixels in a
way that an ordinary digital camera cannot, El Gamal explains. Current lenses
are approaching the optical limit of the smallest spot they can resolve. A
pixel smaller than that spot will not yield a better image, but the
multi-aperture sensor’s smaller pixels produce far more
information.

Another key element of the technology is that it may
represent a step toward the gigapixel camera, a device that looks set to
offer 140 times the pixels of today’s 7-megapixel systems, which works out
to roughly a billion pixels on a single sensor.
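A quick check of the arithmetic behind that figure:

```python
# 140x the pixel count of a 7-megapixel sensor
megapixels_today = 7_000_000
factor = 140
total = megapixels_today * factor
print(total)  # 980,000,000 -- on the order of a billion pixels
```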