Design tools model camera architectures

Aug. 1, 2008

To optimize a camera design to meet specific performance criteria, different types of integrated circuits can be used. For example, to perform Bayer interpolation in real time, many camera designers use on-board FPGAs. In smart-camera designs, adding a DSP allows the camera to perform programmable functions such as blob analysis.

The emergence of electronic system-level design tools such as CoFluent Studio from CoFluent Design (San Jose, CA, USA; www.cofluentdesign.com) is now allowing systems to be modeled in software and their performance characteristics determined. For example, when designing a camera system for automotive and security applications, Sensata Technologies (Attleboro, MA, USA; www.sensatatechnologies.com) used these tools to select and optimize the camera architecture. A model of the camera system was created and its behavior simulated; architectural choices were studied and hardware/software partitioning alternatives explored. “For each architecture option, local memory requirements, potential traffic bottlenecks, execution times, and the complexity of image processing functions were also studied and analyzed,” says Qing Song, DSP systems-on-chip architect at Sensata.

In the camera design, a 642 × 482 12-bit imager was selected to capture images at 60 frames/s in 8-bit/primary RGB format. Using CoFluent Studio, the image-sensing and display functions are described as a video data source and sink (see figure). The video source function reads test image files from the hard disk during simulation and sends the data to image-processing functions that include defective-pixel removal, white balance, demosaicing, and image sharpening. The video sink then displays the received data in RGB format. These four sub-functions are pipelined, and message queues are used to model FIFO channels between stages of the pipeline. This enables independent and asynchronous communication between the different pipelined stages.

After initialization, each stage enters an infinite loop that waits to receive a frame from stage N-1 through ChannelIn, processes the frame, and sends the result to stage N+1 through ChannelOut. C algorithms are then added to perform each color-processing function. Because the color-processing stages share this repetitive structure, the same generic stage model is instantiated four times, each instance containing one of the four algorithms.
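The pattern lends itself to a compact illustration. The sketch below is a minimal stand-in for what the CoFluent model describes: a bounded FIFO channel plus a generic stage loop, written with POSIX threads. The names (channel_t, stage_thread, and so on) are hypothetical and do not reflect CoFluent Studio's actual API.

```c
/* Minimal sketch of the pipeline pattern described above: a bounded FIFO
 * channel and a generic stage loop. Illustrative only; not CoFluent's API. */
#include <pthread.h>
#include <stdlib.h>

#define FIFO_DEPTH 4                 /* assumed channel depth */

typedef struct { int id; } frame_t;  /* pixel payload omitted for brevity */

typedef struct {                     /* bounded FIFO modeling one channel */
    frame_t *slot[FIFO_DEPTH];
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t not_empty, not_full;
} channel_t;

void channel_init(channel_t *c)
{
    c->head = c->tail = c->count = 0;
    pthread_mutex_init(&c->lock, NULL);
    pthread_cond_init(&c->not_empty, NULL);
    pthread_cond_init(&c->not_full, NULL);
}

void channel_send(channel_t *c, frame_t *f)   /* blocks while FIFO is full */
{
    pthread_mutex_lock(&c->lock);
    while (c->count == FIFO_DEPTH)
        pthread_cond_wait(&c->not_full, &c->lock);
    c->slot[c->tail] = f;
    c->tail = (c->tail + 1) % FIFO_DEPTH;
    c->count++;
    pthread_cond_signal(&c->not_empty);
    pthread_mutex_unlock(&c->lock);
}

frame_t *channel_receive(channel_t *c)        /* blocks while FIFO is empty */
{
    pthread_mutex_lock(&c->lock);
    while (c->count == 0)
        pthread_cond_wait(&c->not_empty, &c->lock);
    frame_t *f = c->slot[c->head];
    c->head = (c->head + 1) % FIFO_DEPTH;
    c->count--;
    pthread_cond_signal(&c->not_full);
    pthread_mutex_unlock(&c->lock);
    return f;
}

typedef struct {                  /* generic pipeline stage N */
    channel_t *in, *out;          /* ChannelIn / ChannelOut   */
    void (*algorithm)(frame_t *); /* e.g., white balance or demosaicing */
} stage_t;

void *stage_thread(void *arg)
{
    stage_t *s = arg;
    for (;;) {                                /* infinite loop after init */
        frame_t *f = channel_receive(s->in);  /* wait on stage N-1 */
        s->algorithm(f);                      /* run this stage's C algorithm */
        channel_send(s->out, f);              /* forward to stage N+1 */
    }
    return NULL;
}
```

Wiring four such stages together with five channels reproduces the source-to-sink pipeline: because each stage blocks only on its own channels, the stages run asynchronously, and a slow stage simply back-pressures its upstream neighbor through a full FIFO.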

Using CoFluent Studio, image-processing functions are described as video data sources and sinks. In designing a digital camera, Sensata used these function blocks to describe defective-pixel removal, white balance, demosaicing, and image-sharpening functions, pipelined with message queues that model the FIFO channels between stages of the pipeline.

“Since CoFluent models are timed, durations of computations and I/Os also need to be defined,” explains Song. “For example, the duration of an algorithm operation equals the number of cycles per pixel × the number of pixels per frame. Similarly, image-capture duration is set at 16 ms for outputting images at 60 frames/s, ChannelIn and ChannelOut are set to 10 ns for send, and 1 cycle/pixel transfer × the number of pixels/frame for receive.”
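Plugging in the camera's numbers makes these budgets concrete. In the short check below, the 642 × 482 frame size and the 1 cycle/pixel receive cost come from the article, while the 3 cycles/pixel algorithm and 100-MHz clock are illustrative assumptions chosen only to show the arithmetic:

```c
/* Worked example of the timing formulas quoted above. The 3 cycles/pixel
 * and the 100-MHz clock are assumptions, not Sensata's actual figures. */
#include <stdio.h>

int main(void)
{
    const double pixels_per_frame = 642.0 * 482.0;  /* 309,444 pixels */
    const double clock_hz         = 100e6;          /* assumed 100 MHz */
    const double cycles_per_pixel = 3.0;            /* assumed for one stage */

    /* algorithm duration = cycles/pixel x pixels/frame */
    double alg_cycles = cycles_per_pixel * pixels_per_frame;
    double alg_ms     = alg_cycles / clock_hz * 1e3;

    /* channel receive duration = 1 cycle/pixel x pixels/frame */
    double rx_ms = 1.0 * pixels_per_frame / clock_hz * 1e3;

    printf("algorithm:        %.0f cycles = %.2f ms/frame\n", alg_cycles, alg_ms);
    printf("channel receive:  %.2f ms/frame (send modeled as 10 ns)\n", rx_ms);
    printf("frame budget at 60 frames/s: %.2f ms (capture set to 16 ms)\n",
           1000.0 / 60.0);
    return 0;
}
```

At these assumed figures, a 9.28-ms processing stage plus a 3.09-ms receive fits inside the 16.67-ms frame period; checking exactly this kind of budget across every stage is what the timed model automates.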

To calculate the pipeline latency for each frame, additional time-stamp fields can be added to each frame message. For its camera, Sensata wanted to study how two separate components, a DSP with RAM and an FPGA with RAM for data buffering, could be merged into a single system-on-chip or partitioned between separate devices. In the simulation, the elements are linked by pixel data paths of either 12 or 24 bits.
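One way to realize those time-stamp fields is sketched below, under the assumption that the source stamps each frame and the sink subtracts. Field and function names are hypothetical; CoFluent attaches timing to its own message types rather than to user structs like this one:

```c
/* Sketch of per-frame latency measurement via a time-stamp field. */
#include <stdio.h>
#include <time.h>

typedef struct {
    double t_captured;        /* stamped by the video source */
    /* ... pixel payload omitted ... */
} frame_msg_t;

static double now_ms(void)    /* wall-clock stand-in for simulation time */
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e3 + ts.tv_nsec / 1e6;
}

int main(void)
{
    frame_msg_t f = { .t_captured = now_ms() };

    /* ...frame would pass through the four pipeline stages here... */

    double latency_ms = now_ms() - f.t_captured;  /* computed at the sink */
    printf("pipeline latency: %.3f ms\n", latency_ms);
    return 0;
}
```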

The CoFluent Design software provides generic hardware components to model computing, communication, and storage resources. In CoFluent Studio, ASICs, FPGAs, and software-based computing units such as DSPs, CPUs, and MCUs are all called processors, while communication nodes are characterized as buses, routing networks, or point-to-point interfaces.

To model its camera, Sensata created and characterized three hardware configurations that ran the color-image-processing functions on an FPGA, on a DSP, or split between the two. In all three configurations, the display, communications, and monitoring functions were partitioned onto the DSP and the imager-control functions onto the FPGA.
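Such partitionings can be tabulated as data for side-by-side comparison. In the sketch below, the fixed assignments restate the article, while the exact three-way placement of the color pipeline is an assumed reading and every name is illustrative:

```c
/* The three candidate partitionings expressed as data. Display,
 * communications, and monitoring stay on the DSP and imager control on
 * the FPGA in every configuration, per the article; the three-way split
 * of the color pipeline below is an assumed reading. */
#include <stdio.h>

typedef enum { ON_FPGA, ON_DSP, SPLIT_FPGA_DSP } placement_t;

typedef struct {
    const char *name;
    placement_t color_pipeline;   /* where the four color stages run */
} config_t;

int main(void)
{
    const config_t configs[] = {
        { "config 1", ON_FPGA },
        { "config 2", ON_DSP },
        { "config 3", SPLIT_FPGA_DSP },
    };
    const char *where[] = { "FPGA", "DSP", "split across FPGA and DSP" };

    for (size_t i = 0; i < sizeof configs / sizeof configs[0]; i++)
        printf("%s: color processing on %s\n",
               configs[i].name, where[configs[i].color_pipeline]);
    return 0;
}
```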

Results of these simulations allowed Sensata to examine the price/performance trade-off of each camera design before committing to any hardware design. “Although the optimal architecture was achieved through iterating various hardware configurations, the research and data gathering of the performance and processing capability of algorithms and hardware were key to performing a successful simulation,” says Song.
