Extracting features from an image is an important part of any software application written for the intelligent transportation system (ITS) market. Fortunately, a number of tools developed for industrial machine vision can be put to use.
Blob detection can be employed to detect points and/or regions in the image that differ in brightness or color from their surroundings. Using blob analysis, it is possible to count the number of vehicles in a particular set of images. Before this analysis can be performed, however, a background image of the environment may need to be separated from the foreground images of the vehicles. To calibrate such images, the dimensions of static features can be used. In the rotary shown in Fig. 1, for example, the dimensions of the garden in the center can be used to determine the size of the vehicles in the image.
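The background-subtraction and blob-counting steps above can be sketched in NumPy. This is a minimal illustration, not production code: the frame size, the 50-level threshold, and the rectangular "vehicles" are all invented for the example.

```python
import numpy as np

def count_blobs(mask):
    """Count 4-connected foreground regions with a simple flood fill."""
    mask = mask.copy()
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                count += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x]:
                        mask[y, x] = False
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return count

# Static background plus a frame containing two bright "vehicles".
background = np.zeros((60, 80), dtype=np.uint8)
frame = background.copy()
frame[10:20, 5:25] = 200    # vehicle 1
frame[35:45, 40:70] = 180   # vehicle 2

# Subtract the background and threshold to isolate the foreground.
foreground = np.abs(frame.astype(int) - background.astype(int)) > 50
print(count_blobs(foreground))  # 2
```

A production system would use a library routine (e.g., connected-component labeling in OpenCV or SciPy) rather than a Python flood fill, but the structure -- subtract, threshold, label, count -- is the same.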
Pattern matching is also commonly used in traffic applications. For example, normalized correlation -- a computationally intensive mathematical approach to pattern matching and target tracking -- uses a known template image of a vehicle to locate similar vehicles in a target image.
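A minimal normalized cross-correlation matcher can be sketched as follows. The image contents are random, and using an exact cut-out of the image as the template is an illustrative assumption that guarantees a perfect match:

```python
import numpy as np

def ncc_match(image, template):
    """Slide template over image; return (score, row, col) of best match."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best = (-1.0, 0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            window = image[r:r + th, c:c + tw]
            w = window - window.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            if denom == 0:
                continue  # flat window: correlation undefined
            score = (w * t).sum() / denom
            if score > best[0]:
                best = (score, r, c)
    return best

rng = np.random.default_rng(0)
image = rng.integers(0, 255, size=(40, 40)).astype(float)
template = image[12:20, 25:33].copy()   # cut out a known "vehicle" patch
score, row, col = ncc_match(image, template)
print(score, row, col)  # score ~1.0 at (12, 25)
```

The nested loops make the cost obvious -- every window position requires a full dot product -- which is why real systems accelerate this with FFT-based correlation or optimized library calls.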
An alternative is to use a geometric matcher based on the generalized Hough transform. This can be used to identify the positions of arbitrary shapes -- most commonly circles or ellipses, but sometimes other features extracted from the image. In certain situations, geometric pattern matching will exhibit higher performance than methods that use normalized correlation. In others -- typically where very busy backgrounds are present -- normalized correlation will outperform its geometric counterpart.
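A fixed-radius circular Hough transform -- a simple special case of the generalized Hough transform -- can be sketched as below. The synthetic edge image, the known radius, and the vote resolution are all assumptions made for the example:

```python
import numpy as np

def hough_circle_center(edges, radius):
    """Each edge pixel votes along a circle of the given radius in
    accumulator space; the accumulator peak is the likely center."""
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(acc.argmax(), acc.shape)

# Synthetic edge image: a circle of radius 10 centered at (30, 40).
edges = np.zeros((60, 80), dtype=bool)
angles = np.linspace(0, 2 * np.pi, 200, endpoint=False)
edges[np.round(30 + 10 * np.sin(angles)).astype(int),
      np.round(40 + 10 * np.cos(angles)).astype(int)] = True

cy, cx = hough_circle_center(edges, radius=10)
print(cy, cx)  # peak near (30, 40)
```

Searching over unknown radii adds a third accumulator dimension, and the generalized form replaces the circle equation with an arbitrary edge-gradient lookup table -- the voting idea stays the same.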
Pattern matching can also be used in ITS applications to stitch images together into an image mosaic, or to measure the velocity of an object by capturing two images of it as it moves through the camera's field of view. However, these pattern-matching tasks are challenging, since no two vehicles are alike.
An image-processing system must capture one frame of an image, look for features of interest, find them in a new frame, and discard any artifacts in the image that may have been caused by temporal aliasing. In some cases, however, there may be redundancy in an image that needs to be taken into account. In the image of the car shown in Fig. 2, the area between the two lines will change very little as the car moves slightly, making velocity or movement estimation problematic.
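One standard way to estimate the displacement between two frames -- assuming the scene moves as a roughly rigid whole -- is phase correlation. The synthetic frames and the 3-pixel shift below are invented for illustration:

```python
import numpy as np

def phase_correlate(a, b):
    """Return (dy, dx) such that a is approximately np.roll(b, (dy, dx))."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12   # keep only the phase difference
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    # Map peaks past the midpoint to negative shifts.
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(1)
frame1 = rng.random((64, 64))
frame2 = np.roll(frame1, shift=(0, 3), axis=(0, 1))  # "car" moved 3 px right

shift = phase_correlate(frame2, frame1)
print(shift)  # (0, 3)
```

Dividing the pixel shift by the inter-frame time and the pixels-per-meter calibration factor then yields a velocity estimate. Note that this technique degrades exactly where the article warns it will: on frames whose overlapping regions barely change, the correlation peak becomes broad and ambiguous.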
Repetitive features may also occur in images. On the side of a boxcar, for instance, there are a number of vertical ribs. Using pattern matching to identify features on the boxcar -- capturing one or more of these ribs and differentiating one from another -- is complex; it demands application-specific knowledge and custom software.
The beginning, the middle, and the end
Detecting when a vehicle enters and leaves the field of view of the imaging system, or detecting gaps between vehicles or between tractors and their trailers, is also complex. One way to solve such problems is to forgo the use of a vision system altogether in favor of inductive loops, pressure pads, lasers, acoustic sensors, or radar systems. By employing a vision-based solution with techniques such as pattern-based video triggering, however, designers can avoid the use of many of these additional sensors.
There are many issues to be considered when programming a system to perform this type of triggering function. These include changes in the weather as the vehicle passes the field of view of the camera, unrelated objects that may be moving in the field of view, and vehicles that have empty regions, which can mislead the system into concluding that no vehicle is present. A complete solution to this problem may require image subtraction, averaging, pattern matching, and even segmenting the pattern analysis across different parts of the image.
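The subtraction-and-averaging portion of such a trigger can be sketched as follows. The adaptation rate, the thresholds, and the synthetic scene are invented values; a fielded system would tune them per site:

```python
import numpy as np

ALPHA = 0.05             # background adaptation rate
TRIGGER_FRACTION = 0.02  # fire when more than 2% of pixels change

def process_stream(frames):
    """Return a per-frame trigger decision against a running-average
    background model."""
    background = frames[0].astype(float)
    triggers = []
    for frame in frames[1:]:
        diff = np.abs(frame - background) > 40
        triggers.append(diff.mean() > TRIGGER_FRACTION)
        # Slowly fold each frame into the background model so gradual
        # lighting changes (weather, dusk) do not fire the trigger.
        background = (1 - ALPHA) * background + ALPHA * frame
    return triggers

empty = np.full((48, 64), 90.0)
with_car = empty.copy()
with_car[20:30, 10:40] = 200.0   # a vehicle enters the scene

frames = [empty, empty, with_car, with_car, empty]
triggers = process_stream(frames)
print(triggers)  # [False, True, True, False]
```

This addresses slow lighting drift but not the article's other caveats -- unrelated moving objects, or vehicles with large empty regions -- which is why a complete solution may also need pattern matching and per-region analysis on top of simple differencing.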
Over the years, vision-based systems have become increasingly used in metrology applications to capture images of parts and components from which the system can then extract dimensional features—often with micron resolution. In ITS applications, however, such accuracy is unnecessary; coarse measurements on the dimensions of objects within the image will often suffice.
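Such coarse measurement reduces to simple scaling once a static feature of known size -- such as the center garden of the rotary in Fig. 1 -- has been measured in pixels. The 20 m garden diameter and all pixel values below are invented numbers used only to show the arithmetic:

```python
# Derive a pixels-per-meter scale from a static feature of known size,
# then apply it to a vehicle measured in the same image plane.
garden_diameter_m = 20.0
garden_diameter_px = 400.0
scale_px_per_m = garden_diameter_px / garden_diameter_m   # 20 px per meter

vehicle_length_px = 90.0
vehicle_length_m = vehicle_length_px / scale_px_per_m
print(vehicle_length_m)  # 4.5
```

A single scale factor like this assumes the vehicle and the reference feature lie at roughly the same distance and orientation to the camera; that assumption is acceptable for the coarse, non-metrology accuracy the application requires.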
Optical character recognition and verification
Optical character recognition (OCR) and verification (OCV) are as important in traffic applications as they are in machine vision. However, in the vehicle and transportation marketplace, the deployment of OCR and barcode equipment is not straightforward.
In the United States alone, there are more than 80 different license plate fonts in use (see Fig. 3). As if recognizing all of these were not complex enough, developing a system for the international marketplace requires that character sets such as Arabic be recognized, a task that may prove even more computationally intensive. Compounding the issue, prepayment tags (such as E-Z Pass transponders) on vehicles may frequently be moving at high speed or may be obscured, dented, or scratched. This requires the tag to be located, filtered, and enhanced before it can be recognized.
It is important to be aware of the role that hardware acceleration can play in image-processing applications. Most systems today have multicore CPUs and graphics processors that can be harnessed in parallel. In traffic applications, multicore systems are deployed to capture and process images at high speed, performing pattern recognition to assist with velocity measurement and stitching, 30 frames/sec OCR, and image enhancement.
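The per-frame parallelism described above can be sketched with the Python standard library. A thread pool is used here for simplicity; heavy CPU-bound stages such as OCR would typically run in separate processes or on a GPU, and the brightness "analysis" below is a purely illustrative stand-in for real per-frame work:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def analyse_frame(frame):
    # Placeholder for per-frame work such as OCR or pattern matching.
    return float(frame.mean())

def analyse_batch(frames, workers=4):
    """Fan frames out across a worker pool; results keep frame order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(analyse_frame, frames))

frames = [np.full((8, 8), v, dtype=np.uint8) for v in (10, 20, 30)]
results = analyse_batch(frames)
print(results)  # [10.0, 20.0, 30.0]
```

Because `pool.map` preserves input order, downstream stages such as stitching or velocity measurement can consume the results as an ordinary frame sequence.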
Right from the outset
One of the most important aspects in the development of any traffic application is to determine the nature of the application right from the start of a project. Then the system must be carefully documented prior to undertaking any development work.
Aspects of the design that may seem immaterial to a customer -- such as vehicle speed or whether the system is deployed inside a building or outdoors -- are not trivial issues. Indeed, should even one of these parameters need to be modified during the course of system development, it may completely change the nature of the equipment that is used.
It is absolutely vital to consider the lighting of the scene before choosing optics and cameras. Before engaging in any software development, it is imperative to consider the weather during the day and at night -- most importantly, the hours around dawn, dusk, noon, and midnight. It is at these times that the variation in lighting is most dramatic, and the software will need to handle the varying conditions in which images are taken.
Because lighting, weather, and the condition of vehicles can vary, numerous images must be captured at the site where the system is to be installed. These must then be compiled in a database from which they can be analyzed before the algorithms required to process the images are developed. Many images are required to enable software developers to validate that their code will operate correctly in the field. Such images also allow engineers to perform regression testing -- software testing that seeks to uncover new software bugs, or regressions, in existing systems -- when a new version of the software is deployed in the field.
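A regression run over such an image database can be sketched as below: each captured image is paired with a verified ground-truth vehicle count, and every new software build must reproduce all of them. The file name, the counting stub, and the ground-truth value are invented for the example:

```python
import numpy as np

def count_vehicles(image):
    # Stand-in for the real detection pipeline: threshold the image and
    # naively count bright rows (purely illustrative, 10 rows per vehicle).
    return int((image.max(axis=1) > 128).sum() // 10)

# "Database" of site images paired with verified ground-truth counts.
image = np.zeros((30, 40), dtype=np.uint8)
image[5:15, 10:20] = 255            # one vehicle, 10 rows tall
database = {"dawn_0001": (image, 1)}

def run_regression(db):
    """Return the names of all images the current build gets wrong."""
    return [name for name, (img, expected) in db.items()
            if count_vehicles(img) != expected]

failures = run_regression(database)
print(failures)  # [] means no regressions
```

In practice the database would hold thousands of images spanning the lighting and weather conditions discussed above, and a non-empty failure list would block deployment of the new build.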