Machine vision components that provide 3D images require specialized software for the creation and processing of 3D data, as well as application software for the specialized tasks that 3D imaging enables, such as metrology, inspection, and guidance. As component offerings and capabilities in 3D machine vision grow, potential use cases for the technology and related software offerings in the market also expand.
Image formation and system configuration
Most 3D imaging component systems for machine vision include dedicated software drivers that execute the acquisition of an image and the conversion of native 2D greyscale information into a 3D image and data (a notable exception being Time-of-Flight cameras, where the pixel data is depth information by default). In practice, the imaging functions may take place onboard the imaging system itself, on the host computer, or both, depending on the component.
The task is not trivial by any means: it involves the control of one or more cameras, usually tightly integrated with and calibrated to a light source specific to the 3D imaging technique employed by the device. In some cases, multiple images must be acquired—with the camera/object in motion or static depending on the technology—and combined. The resulting 3D image data available for analysis varies by component but will likely include a point map (or point cloud) or a depth map, and perhaps rectified (spatially correct) 2D images in greyscale, RGB, or other 3D datasets.
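To make the relationship between a depth map and a point cloud concrete, here is a minimal sketch of pinhole-model back-projection in NumPy. The intrinsic parameters (fx, fy, cx, cy) and the function name are illustrative assumptions, not a specific vendor's API; real drivers apply calibrated intrinsics and lens-distortion correction.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) into an Nx3 point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Example: a flat 4x4 depth map, everything 1 m from the camera
depth = np.full((4, 4), 1.0)
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (16, 3)
```

A flat depth map yields a planar cloud; in practice the driver performs this conversion with the component's factory or field calibration.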
Along with the low-level software that executes image formation, most 3D component manufacturers provide a graphical front end (HMI) that facilitates important configuration of the system and sometimes provides calibration functionality. A software development kit (SDK) often accompanies the configuration software application, providing low-level programming support related to image capture and system configuration/calibration for those who wish to create their own applications using the component. These SDKs do not, however, address specific applications.
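The acquisition pattern these SDKs expose tends to follow an open/configure/grab/close flow. The sketch below illustrates that pattern with a stand-in stub class; the class, method names, and the "exposure_us" parameter are all hypothetical, since real SDK APIs are vendor-specific.

```python
import numpy as np

class Stub3DCamera:
    """Stand-in for a vendor SDK camera handle (hypothetical API)."""
    def open(self): pass
    def set_parameter(self, name, value): pass
    def grab_depth_map(self):
        # A real driver would trigger acquisition and return sensor data.
        return np.random.rand(480, 640).astype(np.float32)
    def close(self): pass

def acquire(camera):
    """Typical SDK acquisition flow: open, configure, grab, close."""
    camera.open()
    camera.set_parameter("exposure_us", 8000)  # hypothetical parameter
    try:
        return camera.grab_depth_map()
    finally:
        camera.close()

depth = acquire(Stub3DCamera())
print(depth.shape)  # (480, 640)
```

The try/finally ensures the device is released even if a grab fails, a common requirement when a camera handle is an exclusive hardware resource.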
Software for 3D applications
Key to the recent growth of 3D imaging in industrial automation is the proliferation of application software that uses 3D images for specific tasks. While precise image formation remains critical in every application, software drives application capability in automation environments.
3D systems targeting metrology and inspection typically provide software with configurable tools that support accurate location of geometric features and provide the means to make precision measurements of those features and their physical relationships to each other. The application software might also identify very localized variations to detect small defects or incorrectly formed parts. In some implementations, configuration of the system or comparison of features might be accomplished using CAD models of the part being inspected.
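As a simple illustration of the kind of geometric measurement involved, the sketch below fits a plane to a measured surface and reports its peak-to-valley flatness deviation. This is a generic least-squares approach on synthetic data, not any particular product's measurement tool.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit via SVD: returns centroid and unit normal."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]  # smallest singular vector = plane normal

def flatness(points):
    """Peak-to-valley deviation of points from their best-fit plane."""
    centroid, normal = fit_plane(points)
    d = (points - centroid) @ normal  # signed distances to the plane
    return d.max() - d.min()

# Synthetic part surface: a 10x10 grid on z = 0 with one 0.05 mm bump
xs, ys = np.meshgrid(np.linspace(0, 9, 10), np.linspace(0, 9, 10))
pts = np.stack([xs.ravel(), ys.ravel(), np.zeros(100)], axis=1)
pts[0, 2] = 0.05
print(f"flatness deviation: {flatness(pts):.3f} mm")
```

The same plane-fit primitive underlies many inspection tools: once features are located, their distances, angles, and form deviations can be computed and compared against tolerances.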
Some of the most visible developments in 3D software have been in 3D vision guided robotics, particularly for 3D random part picking. This use case has been widely publicized and is in high demand. The difficult tasks for such software include segmenting individual, randomly presented objects and providing a usable position (pose) in a calibrated 3D workspace that a robot can use to grasp the object. Various approaches are available, including primitive feature analysis, CAD model matching, and even deep learning. In addition, these software packages often provide additional capabilities related to the robotic pick, including grip optimization and interference avoidance.
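One simple way to estimate an object pose from a segmented point cloud is principal component analysis: place the pose origin at the centroid and align the axes with the directions of greatest spread. This is only a sketch of one primitive-analysis approach; commercial packages may instead match CAD models or use learned models, as noted above.

```python
import numpy as np

def principal_axes_pose(points):
    """Estimate a pose for a segmented object cloud: origin at the
    centroid, rotation from PCA of the points (longest axis first)."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)          # 3x3 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    R = eigvecs[:, ::-1].copy()                  # descending-variance axes
    if np.linalg.det(R) < 0:                     # enforce right-handed frame
        R[:, -1] *= -1
    return centroid, R

# Synthetic elongated part: points spread along x with slight noise
rng = np.random.default_rng(0)
pts = np.stack([rng.uniform(-5, 5, 200),
                rng.normal(0, 0.1, 200),
                rng.normal(0, 0.1, 200)], axis=1)
centroid, R = principal_axes_pose(pts)
print(np.abs(R[:, 0]))  # first axis is approximately [1, 0, 0] (the long axis)
```

The centroid and rotation together define a pose that, after transformation into the robot's calibrated workspace, could serve as a candidate grasp frame; real systems layer grip optimization and collision checks on top of such an estimate.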