Vision Systems Design provides a webcast devoted entirely to the subject of how to choose the correct camera for a machine vision application. Since 2015, the number of cameras on the market has exploded. Each year, Vision Systems Design publishes our Worldwide Industrial Camera Directory. The November 2018 edition of the directory listed more than 150 camera vendors, with key specifications for more than 1,000 different camera products.
There are the aforementioned area array and line scan cameras, with different models available in both monochrome and color versions. There are high-resolution cameras, and cameras dedicated to high-speed applications.
There are a variety of data output protocols such as USB, which now includes USB 2, USB 3, and USB 3.1 versions. Gigabit Ethernet is a popular data output method, with dual GigE and 10 GigE technology also available. Camera Link and Camera Link HS interfaces offer high-bandwidth capability. CoaXPress is a newer standard well suited to high-resolution and high-speed cameras, with CoaXPress 2.0 rising in popularity. In the midst of these technological evolutions, older FireWire and analog cameras are still available.
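A quick back-of-the-envelope check can show whether a given interface has headroom for an uncompressed video stream. The sketch below is illustrative only: the link rates are nominal signaling rates (usable payload bandwidth is lower after protocol overhead), and the example resolution and frame rate are invented.

```python
# Nominal raw link rates in Gbit/s; real payload throughput is lower
# because of protocol overhead. Values are illustrative.
NOMINAL_LINK_RATE_GBPS = {
    "USB 3.0": 5.0,
    "GigE": 1.0,
    "10 GigE": 10.0,
    "CoaXPress 2.0 (CXP-12, one connection)": 12.5,
}

def required_gbps(width, height, fps, bits_per_pixel):
    """Raw data rate of an uncompressed video stream in Gbit/s."""
    return width * height * fps * bits_per_pixel / 1e9

# Hypothetical example: 5-megapixel monochrome camera, 8 bits/pixel, 60 frames/s
rate = required_gbps(2448, 2048, 60, 8)
for name, link in NOMINAL_LINK_RATE_GBPS.items():
    verdict = "OK" if link > rate else "too slow"
    print(f"{name}: {verdict} ({rate:.2f} Gb/s needed, {link} Gb/s nominal)")
```

In this example the stream needs roughly 2.4 Gb/s, which rules out single-link GigE but fits comfortably within the faster interfaces.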
Generating the digital video and image data conveyed by these protocols depends on the ability to light an object and capture its image. Most machine vision cameras on the market capture images in the visible wavelengths of light, roughly 400 to 700 nanometers. There are also cameras available that image across a wider spread of the electromagnetic spectrum, including x-ray, ultraviolet, and short-wave, mid-wave, and long-wave infrared. Imagers even exist for the terahertz region of the spectrum.
The following discussion is aimed at machine vision applications that image within the visible light range, and how to select the proper camera for those applications. However, the questions we’ll be concerned with would also apply to infrared imaging applications.
Under the lens of a basic machine vision camera are an image sensor and a set of boards. This back-end circuitry can serve a number of functions, including analog-to-digital conversion, amplification, clocks for exposure and frame rate timing, readout circuitry, a Field Programmable Gate Array (FPGA), and buffer memory to store frames coming off the sensor.
To begin selecting the appropriate camera for an application, first document the application's constraints. Size, weight, and power are important considerations. Take, for example, a vision system being developed for deployment on an unmanned aerial vehicle. Drones are increasingly being used for crop monitoring to help farmers irrigate or apply fertilizers, herbicides, and pesticides more judiciously. In this application, the smallest and lightest cameras would be preferable so as not to weigh down the UAV.
Power is also an important, if less obvious, consideration. To continue the example, the more power a UAV-mounted camera requires, the shorter the drone's flight time before it must land to recharge or refuel.
Machine vision applications also have environmental considerations. Will the camera be used outdoors? Does it need an enclosure? Will it be subjected to vibration, or will it be in a well-controlled environment? Many cameras list relevant protection ratings among their specifications.
Knowing the speed of the object to be imaged is also important. If the object will be moving very quickly, and it will be necessary to freeze the action of the object in order to analyze fine details, high-speed cameras are required.
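One way to quantify "freezing the action" is to estimate the longest exposure that keeps motion blur within one pixel in object space. The sketch below is a simplified illustration with invented numbers; real applications must also account for lens magnification, lighting, and sensor characteristics.

```python
# Simplified sketch: longest exposure (in seconds) at which a moving object
# blurs by no more than one pixel, working in object-space units.
# All example values below are hypothetical.

def max_exposure_s(fov_width_mm, horizontal_pixels, object_speed_mm_s):
    """Exposure time at which the object moves one pixel's worth of distance."""
    mm_per_pixel = fov_width_mm / horizontal_pixels
    return mm_per_pixel / object_speed_mm_s

# Example: 100 mm field of view across 2048 pixels, object moving at 1 m/s
t = max_exposure_s(100.0, 2048, 1000.0)
print(f"Max exposure: {t * 1e6:.1f} microseconds")  # ~48.8 us
```

A result in the tens of microseconds signals that the application needs a camera supporting very short exposures and, typically, intense lighting to compensate.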
As previously stated, imaging is not possible without light. One therefore must understand the lighting conditions under which the application will take place. Will there be plenty of bright light available, or must the machine vision system be deployed within a confined space that limits the number of lights that can be installed? If the latter, one needs to consider cameras with greater light sensitivity.
Imaging in monochrome is frequently all an application requires. In some cases, however, producing color images is important, for instance when detecting the color of a sample in a clinical analyzer. Color and monochrome imaging have different lighting considerations and requirements.
Will the application images be recorded, or is a live video feed required? If the application is analyzing the data as the video is streaming and determining actionable items, retaining the video may not be required. High-speed frequency analysis, on the other hand, may require saving all the captured video.
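Deciding to retain video has concrete storage implications that are easy to estimate. The following sketch uses invented example numbers to show how quickly uncompressed high-speed recording consumes disk space.

```python
# Illustrative sketch: storage needed to retain an uncompressed recording.
# The resolution, frame rate, and duration below are hypothetical examples.

def storage_gb(width, height, fps, bits_per_pixel, seconds):
    """Gigabytes required to store an uncompressed video recording."""
    bytes_total = width * height * fps * (bits_per_pixel / 8) * seconds
    return bytes_total / 1e9

# Example: one minute of 1920 x 1080 monochrome 8-bit video at 500 frames/s
print(f"{storage_gb(1920, 1080, 500, 8, 60):.1f} GB")  # ~62.2 GB
```

At roughly a gigabyte per second in this example, retaining all captured video demands substantial storage, which is one reason streaming analysis without retention is attractive when the application allows it.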
Finally, on top of these other considerations, budget may determine the final camera selection. Knowing all these constraints beforehand will allow you to narrow down your camera selection more efficiently. Again, Vision Systems Design has made a webcast available that is devoted entirely to this subject and we encourage you to avail yourself of the resource once you are ready to learn more.
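The narrowing-down process described above can be sketched as a simple filter over documented constraints. The camera entries and field names below are entirely invented for illustration; a real shortlist would draw on vendor specification sheets or a directory such as the one mentioned above.

```python
# Toy sketch of narrowing a camera shortlist by documented constraints.
# All camera entries, field names, and limits here are hypothetical.

cameras = [
    {"model": "A", "weight_g": 90,  "power_w": 2.5, "mono": True,  "max_fps": 120},
    {"model": "B", "weight_g": 300, "power_w": 8.0, "mono": False, "max_fps": 60},
    {"model": "C", "weight_g": 60,  "power_w": 1.8, "mono": True,  "max_fps": 500},
]

# Example constraints for a weight- and power-limited, high-speed application
constraints = {"max_weight_g": 100, "max_power_w": 3.0, "min_fps": 200}

shortlist = [
    c for c in cameras
    if c["weight_g"] <= constraints["max_weight_g"]
    and c["power_w"] <= constraints["max_power_w"]
    and c["max_fps"] >= constraints["min_fps"]
]
print([c["model"] for c in shortlist])  # ['C']
```

Documenting constraints in this structured way makes the trade-offs explicit: relaxing one limit (say, allowable weight) immediately shows which additional candidates come back into play.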