Eliminating blind spots in industrial machine vision
Machine vision has played an undeniable role in driving up the quality of in-line inspection systems, and not just for a subset of processes but for a nearly endless set.
Improvements in transmission standards, and the bandwidth they provide, enable ever more advanced features to be delivered: from GigE Vision's use of the IEEE 1588 precision time protocol, which synchronizes the firing of camera modules, lighting, robotics, and so on, to features that capture extra information such as wide dynamic range.
And, of course, processing power has improved in line with Moore’s Law.
The blind spot
Most machine vision systems inspect only a part's surface, with the component held by a manipulator or mounted on a guidance system. This limits the analysis to a particular segment of the 3D geometry, or requires mechanical changes for each new batch, which must also be free of mixtures of different part types.
On display for the first time at VISION 2018 was a prototype that claims to offer a new way of working. The Instituto Tecnológico de Informática (ITI; Valencia, Spain; https://www.iti.es/en/), in conjunction with Sony Europe's Image Sensing Solutions division (Weybridge, UK; www.image-sensing-solutions.eu), is developing what they call "the industry's most versatile machine vision inspection system," dubbed Zero Gravity 3D.
The system removes the blind spot by launching an object vertically into an imaging chamber and precisely capturing it from multiple angles while in flight. This means the geometry and entire surface of even highly complex components can be captured from 360° without any blind spots. The technology has successfully completed proof-of-concept testing, and ITI and Sony will continue to collaborate to bring the system to market.
ITI's prototype uses 16 Sony XCG-CG510C GigE camera modules; ITI can also scale the imaging chamber's size, with the number of cameras changing as it scales. Each of the 16 modules outputs 5.1 MPixel images at a frame rate of 23 fps.
The output of these cameras is transmitted over the GigE interface and stitched together digitally, enabling an operator to rotate the reconstructed image and examine any potential flaws picked up by the system.
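A back-of-the-envelope check shows why GigE is a sensible fit for these data rates. The sketch below assumes 8-bit raw pixel output and ignores protocol overhead; neither detail is stated in the article, so treat the figures as illustrative only.

```python
# Rough per-camera bandwidth estimate for a 5.1 MPixel, 23 fps GigE module.
# Assumptions (not from the article): 8-bit raw pixels, no protocol overhead.
MEGAPIXELS = 5.1e6       # pixels per frame
FPS = 23                 # frames per second
BYTES_PER_PIXEL = 1      # 8-bit raw output (assumption)
GIGE_PAYLOAD = 125e6     # ~1 Gbit/s = 125 MB/s, ignoring overhead

throughput = MEGAPIXELS * FPS * BYTES_PER_PIXEL  # bytes/s per camera
print(f"{throughput / 1e6:.0f} MB/s per camera "
      f"({100 * throughput / GIGE_PAYLOAD:.0f}% of GigE payload)")
```

Under those assumptions each camera needs roughly 117 MB/s, close to but within a dedicated gigabit link's capacity, which is consistent with each module having its own GigE connection.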
Objects enter the imaging chamber from a conveyor belt connected to a linear actuator, which fires the object vertically into a polyhedral structure. Once captured and in free fall, the object is caught by the same linear actuator, which moves to match the object's speed and prevent impact, and is then off-loaded to a second conveyor belt or tray.
With the object moving at relatively high speed, synchronization is essential; any mistimed firing results in a corrupted image. To achieve this, the modules use the IEEE 1588 precision time protocol, with firing timed to coincide with the top of the object's flight. Using IEEE 1588 also allows the imaging chamber's LED lighting to be synchronized with the modules' firing.
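The timing logic can be sketched with basic kinematics: a vertically launched object is momentarily stationary at the apex of its flight, so scheduling every camera (against the shared IEEE 1588 clock) to fire at the apex time minimizes motion blur. The launch speed, timestamp, and function below are illustrative, not taken from ITI's implementation.

```python
# Hypothetical apex-timed trigger schedule for a vertically launched object.
# All numeric values are illustrative; the shared clock is assumed to be
# IEEE 1588-synchronized across cameras and lighting.
G = 9.81  # gravitational acceleration, m/s^2

def apex_schedule(launch_speed, launch_timestamp):
    """Return (fire_time, apex_height) for a vertical launch.

    launch_speed     -- initial vertical speed in m/s
    launch_timestamp -- PTP-synchronized launch time in seconds
    """
    t_apex = launch_speed / G            # time to reach the top of flight
    h_apex = launch_speed**2 / (2 * G)   # height of the imaging position
    return launch_timestamp + t_apex, h_apex

# Example: a 3 m/s launch peaks ~0.46 m up, ~0.31 s after launch.
fire_time, height = apex_schedule(3.0, 100.0)
```

Because every module shares the same PTP timebase, a single broadcast of `fire_time` is enough to trigger all 16 cameras and the LED lighting together.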
The XCG-CG510 is one of a small group of camera modules that support IEEE 1588 master functionality, enabling any module to be dynamically assigned as the master should a device fail, giving ITI the reliability it needs.
The proof of concept technology has been tested running at 50 parts per minute with a single linear actuator, or 80 parts per minute in a dual-actuator imaging system.
This process allows manufacturers to run multiple types of components for analysis in a single batch, and to easily switch the components being captured without mechanical changes.
Applications include three-dimensional surface reconstruction with textural analysis, as well as surface defect detection, be it a scratch, stain, crack, corrosion or geometric alteration.
Editor's note: This article was written by Arnaud Destruels, European Visual Communication Product Manager, Sony Europe Ltd.
T-CUP camera captures ten trillion frames per second
Nothing beats a clear image, says Institut National de la Recherche Scientifique (INRS; Quebec City, QC, Canada; http://www.inrs.ca/english) professor and ultrafast imaging specialist Jinyang Liang. He and his colleagues, led by California Institute of Technology's (Pasadena, CA, USA; www.caltech.edu) Lihong Wang, have developed what they call T-CUP: the world's fastest camera, capable of capturing ten trillion fps. This new camera makes it almost possible to freeze time to see phenomena, and even light, in extremely slow motion.
In recent years, the junction between innovations in nonlinear optics and imaging has opened the door for new methods of microscopic analysis of dynamic phenomena in biology and physics. But to harness the potential of these methods, there needs to be a way to record images at very short temporal resolution, in a single exposure.
Using current imaging techniques, measurements taken with ultrashort laser pulses must be repeated many times, which is appropriate for some types of inert samples, but impossible for other more fragile ones. For example, laser-engraved glass can tolerate only a single laser pulse, leaving less than a picosecond to capture the results. In such a case, the imaging technique must be able to capture the entire process in real time. Compressed ultrafast photography (CUP) was a good starting point. At 100 billion fps, this method approached, but did not meet, the specifications required to integrate femtosecond lasers. To improve on the concept, the T-CUP system was developed based on a femtosecond streak camera that also incorporates a data acquisition type used in applications such as tomography.
"We knew that by using only a femtosecond streak camera, the image quality would be limited," says Wang, the Bren Professor of Medical Engineering and Electrical Engineering at Caltech and Director of the Caltech Optical Imaging Laboratory. "So to improve this, we added another camera that acquires a static image. Combined with the image acquired by the femtosecond streak camera, we can use what is called a Radon transformation to obtain high-quality images while recording ten trillion frames per second."
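The Radon-transform idea Wang describes can be sketched in miniature: each projection of a scene is a set of line integrals along one viewing angle, and combining back-projected views localizes features that no single projection could. The toy example below uses just two orthogonal projections of a point source; it is a conceptual sketch, not T-CUP's actual reconstruction pipeline, and all values are illustrative.

```python
import numpy as np

# Toy Radon-transform sketch: a point source and its two orthogonal
# projections. Summing along rows gives the 0-degree view, summing along
# columns the 90-degree view. (Illustrative only; T-CUP's inversion combines
# streak-camera data with a static image and is far more involved.)
scene = np.zeros((64, 64))
scene[20, 40] = 1.0                        # illustrative point source

proj_0 = scene.sum(axis=0)                 # one line integral per column
proj_90 = scene.sum(axis=1)                # one line integral per row

# Unfiltered back-projection: smear each projection back across the image
# and add them. The overlap peaks at the source position.
backprojection = proj_0[None, :] + proj_90[:, None]
recovered = np.unravel_index(backprojection.argmax(), backprojection.shape)
```

In practice, reconstruction uses many projection angles and filtered inversion, but the principle is the same: projections from different views jointly pin down structure that each view alone leaves ambiguous.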
This camera makes it possible to analyze interactions between light and matter at an unparalleled temporal resolution. The first time it was used, the camera captured the temporal focusing of a single femtosecond laser pulse in real time. This process was recorded in 25 frames taken at an interval of 400 femtoseconds and detailed the light pulse’s shape, intensity, and angle of inclination.
"It's an achievement in itself," says Liang, "but we already see possibilities for increasing the speed to up to one quadrillion (10¹⁵) frames per second!" Speeds like that are sure to offer insight into as-yet undetectable secrets of the interactions between light and matter. View more information: http://bit.ly/VSD-TCUP.
Editor’s note: This article was written by Gail Overton, Editor, Laser Focus World, and originally appeared on the Laser Focus World website.