To increase productivity and eliminate the errors made by human inspectors, many manufacturing companies now deploy machine vision systems. Initially, these systems performed simple verification tasks such as bar-code reading, but as the technology has evolved, so too have the applications in which vision systems are used.
Today, vision systems can be found in such diverse applications as semiconductor analysis, medical imaging, forensics and print and packaging inspection. All of these well-defined applications present a clear opportunity for today's suppliers of OEM machine vision components such as cameras, frame grabbers, lighting and software.
Notably, all of these applications operate in constrained environments where electro-mechanical systems can be tightly controlled. In the future, however, emerging markets in humanoid robotics, embedded systems, agriculture and automated transportation will require systems that operate in unconstrained environments.
Designing vision-guided robotic systems for agricultural tasks such as planting, tending and harvesting crops, for example, requires systems that operate in environments where lighting and weather conditions vary, as you will read on page 21 of this issue. Such applications demand integrated systems that combine visible, infrared and 3D imaging with robotics, ultrasound, inertial guidance and GPS.
Developing systems to operate in such unconstrained environments will prove far more difficult than, say, designing a system to inspect fill levels in containers. To do so, developers will need a firm understanding of numerous imaging, sensing and positioning techniques, of imaging and machine vision software, and of robotics. These challenges will be compounded when system requirements include reduced size, weight and power (SWaP).
Beyond mastering these new technologies, the developers of such systems will have to write software that allows the subsystems to communicate with one another effectively. Many systems integrators who build robot-guided vision inspection systems today already know how difficult it can be to pass commands reliably between vision and robotic systems. The future holds even greater challenges for software developers, who will need to interface multiple disparate systems so that they can perform their vision tasks quickly and easily.
Whether existing OEM suppliers choose to enter these emerging markets remains to be seen, since doing so may demand an even greater investment in software development on their part. With luck, those in the vision industry who recognize these future challenges will develop software that eases the integration of the many additional hardware components such systems will require.