In search of the artificial retina

April 1, 2007

“There is a big difference between the way the silicon industry implements computation and the way brains behave,” says Tobias Delbrück of the Institute of Neuroinformatics at the University of Zurich and ETH Zurich (Zurich, Switzerland). “Logic synthesis tools have made it relatively easy to produce chips with millions of transistors, as long as you have lots of money, are willing to stick to the paradigm of synchronous logic, and have a large semiconductor company behind you to support you with the latest tools and libraries. Despite this, there are alternatives to this approach for many applications, particularly in areas such as vision. And much of the inspiration for these designs, their organization, and the way we think about them derives from studies of the human nervous system.”

“The notion of frame-based imaging is practically taken for granted in machine vision,” says Delbrück. “Frame-based imagers have advantages such as small pixels and easy interfacing to computers, but they also have disadvantages such as small dynamic range and a fixed, uniform sample rate. Nowadays, a garden-variety CMOS imager has an intrascene dynamic range of about 40-50 dB, which is a factor of only 100-300 in luminance. And usually all the pixels from every frame must be processed to extract meaning. Retinas have fantastic local gain control, they reduce redundancy dramatically, and ganglion cells only spike when they have something to say.”
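
To put those decibel figures in perspective, sensor dynamic range is conventionally quoted as 20·log10 of the ratio between the brightest and darkest luminance a pixel can resolve within the same scene. The short Python sketch below simply works out that arithmetic for the numbers Delbrück cites; it is an illustration of the conversion, not code from his group.

```python
def dynamic_range_ratio(db):
    """Convert a dynamic range quoted in decibels to a brightest:darkest
    luminance ratio, using the 20*log10 convention common for image sensors."""
    return 10 ** (db / 20.0)

# A garden-variety CMOS imager at 40-50 dB spans roughly a 100-300:1 luminance
# ratio; a sensor with 120 dB of range spans about 1,000,000:1.
for db in (40, 50, 120):
    print(f"{db} dB -> {dynamic_range_ratio(db):,.0f}:1")
```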

Delbrück’s research at the Institute for Neuroinformatics centers on using neuromorphic design principles to make practical vision sensors.

Delbrück and his colleagues have built a silicon retina that captures the key properties of biological vision as expressed in the transient pathway of the retina. The retina chip pixels respond with precisely timed events to relative intensity change. Movement of the scene or of an object with constant reflectance and illumination causes relative intensity change; thus, the pixels are intrinsically invariant to scene illumination and directly encode scene reflectance changes. “For many dynamic vision problems this information is useful,” says Delbrück. Integrated with a USB 2 interface, the device can be controlled using a viewer application that renders events in a variety of formats.
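
Sensors of this kind typically deliver their output as a stream of address-events, each carrying a pixel location, a polarity (intensity went up or down), and a timestamp. The sketch below shows, in generic terms, how a viewer might accumulate such events over a short time window into a displayable image. The tuple format, function name, and sensor size are assumptions made for the example; this is not Delbrück's actual driver or viewer code.

```python
import numpy as np

def accumulate_events(events, width, height, t_start, t_window_us):
    """Render a window of address-events as a signed 2D histogram.

    events: iterable of (timestamp_us, x, y, polarity) tuples, where
            polarity is +1 (got brighter) or -1 (got darker).
    Returns a (height, width) array that is positive where ON events
    dominate, negative where OFF events dominate, and zero where
    nothing changed, which in a typical scene is most of the image.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    t_end = t_start + t_window_us
    for t, x, y, pol in events:
        if t_start <= t < t_end:
            frame[y, x] += pol
    return frame

# Hypothetical events from a 128 x 128 sensor: two ON events at one pixel,
# one OFF event elsewhere, all within a 50-microsecond window.
events = [(10, 5, 7, +1), (25, 5, 7, +1), (40, 60, 80, -1)]
img = accumulate_events(events, 128, 128, t_start=0, t_window_us=50)
```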

Work on event-based imaging is also being pursued by the E-Lab at Yale University, the Neuroengineering Lab at the University of Pennsylvania, CSEM, and the Computational Neuroengineering Laboratory at the University of Florida. At CSEM, for example, research has led to the development of a miniature vision sensor system that enables broad-based monitoring and interpretation of visual data in real time and under any lighting conditions. The low-cost system, ViSe, allows OEMs to develop application-specific image analysis and response systems. The ViSe camera comprises a vision sensor and a DSP that runs identification algorithms. The system speeds throughput of visual data by having the vision sensor chip extract the key image features needed for interpretation before sending them to the DSP for software processing.

“The conventional approach involves acquiring an image with a camera, converting the intensity distribution to a digital representation, and processing it,” says Christian Enz, vice president of CSEM’s microelectronics division. “This poses problems in certain applications, where vision tasks must be executed in real time in environments with uncontrolled lighting levels.”

The ViSe sensor chip captures a detailed image but extracts contrast strength and orientation, so it passes on only the key features of the scene needed for analysis. Reducing the amount of data transmitted off the sensor chip in this way enables rapid postprocessing of images in real time. A dynamic range of more than 120 dB allows the sensor to function in natural scenes with a wide range of illumination.
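
The article does not describe the ViSe chip's internal circuitry, but the idea of shipping contrast strength and orientation instead of raw pixel values can be sketched in software with simple gradient operators. The example below is a hypothetical software analogue, not the ViSe algorithm: it keeps only pixels whose local contrast exceeds a threshold, illustrating the kind of data reduction that lets the downstream DSP run identification algorithms in real time.

```python
import numpy as np

def contrast_features(image, threshold=0.1):
    """Extract contrast strength and orientation from a grayscale image.

    Returns the row/column positions, gradient magnitudes (contrast
    strength), and orientations in radians of the pixels whose contrast
    exceeds the threshold; everything else is discarded, shrinking the
    data that has to be handed to later processing stages.
    """
    gy, gx = np.gradient(image.astype(float))  # finite-difference gradients
    magnitude = np.hypot(gx, gy)               # contrast strength
    orientation = np.arctan2(gy, gx)           # local edge orientation
    keep = magnitude > threshold
    rows, cols = np.nonzero(keep)
    return rows, cols, magnitude[keep], orientation[keep]

# Hypothetical example: a synthetic 64 x 64 image with one vertical edge.
img = np.zeros((64, 64))
img[:, 32:] = 1.0
rows, cols, mag, ang = contrast_features(img)
print(f"{len(rows)} feature pixels kept out of {img.size}")
```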

Websites to watch

Institute of Neuroinformatics, University of Zurich/ETH Zurich
Zurich, Switzerland
siliconretina.ini.uzh.ch/

E-Lab at Yale Univ.
New Haven, CT, USA
www.eng.yale.edu/elab

Neuroengineering Lab at the University of Pennsylvania
Philadelphia, PA, USA
www.neuroengineering.upenn.edu

CSEM
Neuchatel, Switzerland
www.devise.ch/index.php

Computational Neuroengineering Laboratory at the University of Florida
Gainesville, FL, USA
www.cnel.ufl.edu

“Brains in Silicon” at Stanford University
Stanford, CA, USA
www.stanford.edu/group/brainsinsilicon
