Event-based cameras target surveillance, autonomous vehicle applications

May 1, 2019

Many of today’s imaging systems, such as those used in security and surveillance, retail automation, and autonomous vehicles, use cameras with frame-based image sensors. Paired with convolutional neural networks, these cameras can identify and classify objects, including people, animals, vehicles, road signs, and various other obstacles.

However, such a setup is expensive and requires power-hungry CPUs or GPUs. Additionally, cameras with frame-based sensors capture visual information at a predetermined frame rate, and each frame conveys information from every pixel in the sensor, all sampled at the same instant. This increases data throughput requirements for image processing, regardless of whether anything in the scene has actually changed.

Seeking an approach that requires less power and less data, startup Kelzal (San Diego, CA, USA; www.kelzal.com) developed event-based camera systems called “Perception Appliances.” Available in two models, the systems utilize third-generation neural networks for identification and classification applications.

“These systems were inspired by how the brain works, and how the human and mammal visual system works,” says Olivier Coenen, CTO, Founder & Interim CEO. “This is what led us to develop these products. We are trying to come up with a more efficient way to process visual information for machines and computers.”

The company’s cameras are based on the principle of event-based imaging, in which the image sensor tracks only changes in a scene; anything that remains constant in the camera’s field of view is not transmitted by the sensor. Unlike traditional cameras, there are no frames: each pixel independently accumulates changes in light intensity.

When the accumulated change passes a certain threshold, the pixel emits an event carrying a binary value that indicates whether the intensity change was positive or negative.
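To illustrate this event-generation principle, the following minimal Python sketch models a single pixel. The log-intensity formulation and the specific threshold value are conventions common in the event-camera literature and are assumptions for illustration, not details of Kelzal’s design.

```python
import math

class EventPixel:
    """Toy model of one event-camera pixel: it tracks log intensity and
    emits an event (+1 or -1) whenever the accumulated change crosses a
    contrast threshold. Illustrative only; real sensors implement this
    in analog circuitry, not software."""

    def __init__(self, threshold=0.15):
        self.threshold = threshold  # log-intensity contrast threshold (assumed value)
        self.ref = None             # log intensity at the last emitted event

    def update(self, intensity, timestamp):
        """Return a list of (timestamp, polarity) events for a new sample."""
        log_i = math.log(intensity)
        if self.ref is None:        # first sample just sets the reference
            self.ref = log_i
            return []
        events = []
        # Emit one event per threshold crossing, stepping the reference
        # each time, so a large change produces a burst of events.
        while log_i - self.ref >= self.threshold:
            self.ref += self.threshold
            events.append((timestamp, +1))
        while self.ref - log_i >= self.threshold:
            self.ref -= self.threshold
            events.append((timestamp, -1))
        return events

# A static scene produces no output; only changes are transmitted.
pixel = EventPixel()
for t, i in enumerate([100, 100, 100, 160, 160, 90]):
    for ts, polarity in pixel.update(i, t):
        print(f"t={ts} polarity={polarity:+d}")
```

Note that the constant samples generate no events at all, which is the source of the data and power savings the company describes.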

The “fast” recognition system developed by the company, the Ultra-Fast Perception Appliance, can record the trajectory of a bullet without special equipment such as a high-bandwidth hard drive, and processing can be done as the data is captured, according to Coenen.

“A major benefit of event-based imaging is that the sensors can capture movements at speeds of up to 1000x faster than a camera using a frame-based image sensor for detection and tracking,” he says. “This is ultimately both useful and necessary for an application such as self-driving cars.”

The “fast” recognition system targets object recognition and classification for autonomous vehicles and robots and is based on a 1 MPixel CMOS image sensor. The Ultra-Low Power Perception Appliance is based on a nearly-QVGA CMOS image sensor and typically requires just a few milliwatts or less to run. Designed for surveillance and retail automation, these systems will reportedly run on a single CRV3 battery for multiple years.

Each sensor is coupled with either an FPGA or a neuroprocessor—a class of processors designed to accelerate neural network algorithms—on which the neural network software runs. The company has developed proprietary algorithms for neural network object and activity recognition.
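To make the processing stage concrete, here is one sketch of how an event stream might be handed to a neural network. Accumulating events into a two-channel count image is a common technique in the event-vision field and is used here purely as an illustration; Kelzal’s actual algorithms are proprietary and not described in the article.

```python
import numpy as np

def events_to_frame(events, width, height):
    """Accumulate a batch of (x, y, polarity) events into a 2-channel
    image: channel 0 counts positive events, channel 1 negative ones.
    One common way to feed an event stream to a conventional neural
    network; illustrative, not Kelzal's proprietary method."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    for x, y, polarity in events:
        channel = 0 if polarity > 0 else 1
        frame[channel, y, x] += 1.0
    return frame

# Example: three events from a small 4x4 sensor.
events = [(1, 2, +1), (1, 2, +1), (3, 0, -1)]
print(events_to_frame(events, width=4, height=4))
```

Because only active pixels contribute, the amount of work scales with scene activity rather than with sensor resolution and frame rate.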

While the “fast” recognition system leverages existing high-speed event-based sensors, the ultra-low-power device is based on a novel technology, developed in academia, that focuses on reducing power, according to Coenen.

“The design of the Ultra-Low Power pixel is different than what you find in conventional frame-based cameras and also different than the event-based vision sensors,” he says. “A typical event-based sensor has, at each pixel, an amplifier. This makes it possible to respond very quickly.”

“To keep this amplifier on,” says Coenen, “you need power, and it takes up space around the pixel. The Ultra-Low Power approach does not use a continuous amplifier. Other techniques are employed, which dramatically reduce power. The net effect is that the pixel size is one of the smallest that we know of that exists today.”

The company recently secured $3 million in a seed financing round led by Motus Ventures (Redwood City, CA, USA; www.motusventures.com) to fund commercialization and deployment of the cameras.

As part of this funding, Gioia Messinger, former founder and CEO of Avaak, Inc. (now Arlo; San Jose, CA, USA; www.arlo.com) and inventor of the popular wire-free, battery-operated smart home cameras, joins the Kelzal board of directors and will participate in the company’s daily activities as an executive consultant.

“With these sensors, we are focusing on specific markets and applications, because we think they will have the biggest impact early on,” says Coenen. “But in the future, I think that event-based sensors like these may even replace a lot of conventional frame-based cameras in a lot of applications.”

“Wherever a machine must have visual perception,” Coenen continues, “this type of event-based sensor will become an integral part of vision processing.”
