This issue is atypical for Vision Systems Design: it contains six feature-length articles instead of the usual three or four. First, Editor James Carroll takes an early look at some of the trends and technologies emerging as focus areas for VISION 2018, the world’s largest trade fair for machine vision and imaging, held from November 6-8 in Stuttgart, Germany. These include hyperspectral imaging and lighting, as well as deep learning and embedded vision (see page 9). Then, on page 15, I write about how machine vision system designers are employing a new line of industrial machine vision cameras based on hybrid CCD-in-CMOS Time Delay Integration (TDI) sensors in applications such as film digitization, electronics inspection and semiconductor inspection. These cameras combine the high TDI sensitivity of CCD gates with the high-speed and system integration benefits of CMOS technology.
Next, on page 18, European Editor Andrew Wilson describes the cameras, lighting and software used in a real-world machine vision system that performs high-speed inspection of molded tube-like containers called cuvettes. These containers are produced by Carclo Technical Plastics using a plastic injection molding process at speeds as fast as two parts per second. Cuvettes are straight-sided tubes with a circular or square cross section, sealed at one end and made of a clear, transparent material such as plastic. Because they are designed to hold samples such as blood for spectroscopic measurement, they must be free of defects such as blemishes or tinting that may occur during the molding process. In this article, Wilson chronicles how Envisage Systems has developed two automated imaging systems, each employing five imaging stations, to inspect the parts for correct dimensions, color and any blemishes that may be present, and the benefits this brings to the manufacturing process.
Next, we have two contributed articles. On page 22, Ryan Johnson, Lead Engineer – Computer Vision at Twisthink, describes a design approach that allows developers to iteratively adjust imaging subsystem performance while increasing the fidelity of the detection algorithm. In this article, Johnson discusses how developing camera-based detection systems requires iteration. He also covers the importance of understanding the impact of sampling and sharpness on image quality, and how to use a dataset to evaluate system performance. On page 26, Greg Hollows, Vice President of the Imaging Business Unit at Edmund Optics, kicks off a three-part article series reviewing the different stages of developing an imaging lens, what is required for success, and how to mitigate undesired outcomes. Part one begins with specification development and design considerations.
Finally, in our product focus on page 28, Andrew Wilson examines how smart cameras simplify systems integration tasks by combining lighting, imagers, software and I/O. As always, I hope you enjoy this issue.