Sorting Color-Coded Cubes

Feb. 1, 2010
Machine vision and custom color image-processing combined with pneumatic actuators sort clothing identification tags at 300 parts per minute

Andrew Wilson, Editor

To identify the type and size of clothing purchased from large department stores, identification tags are often attached to an item’s coat hanger. After the consumer purchases the clothing, these round, colored identification tags known as “cubes” are removed by the shopkeeper and returned to a central depot for re-use. This process is environmentally friendly and saves the department store money since these identification tags are recycled.

“In the past,” says Earl Yardley, director of Industrial Vision Systems, “this process was performed manually. At one central depot, for example, 15 workers were assigned the task of sorting these clothes tags into the appropriate bins. With 50 variants of cubes with different colors, numbers, and letters that vary depending on the size and type of clothing, this was a very time-consuming and repetitive process.”

Automated sorting

To automate sorting of the identification tags, IVS and RNA Automation have developed a system that leverages IVS’s vision and RNA Automation’s mechanical design expertise (see Fig. 1). At first the sorting may seem like an easy process, but the variety of numbers, colors, and letters that appear on each cube makes this task more difficult.

FIGURE 1. To sort 50 different types of clothes tags at rates of 300 parts/min, IVS has developed a system that combines color analysis, OCR, and neural networks.

“In fact,” says Yardley, “there is no correlation between the number, color, or letters that appear on each tag. A red tag may be printed with the letters XL or XXL, or the number 10, for example, while a yellow tag may be printed with ‘3’ or any other combination of letters and numbers. The only consistent feature is that both letters and numbers are printed in white on each of the tags.”

In operation, the automatic sorting system must identify each of the 50 different variants of tags, sort them, and then bin them for re-use. To accomplish this, tens of thousands of tags are first placed into a hopper and transported using a vertical conveyor into a vibrating bowl feeder. This bowl feeder causes the parts to move up a helical track in the bowl to an indexing tool that separates the parts at equal intervals onto a horizontal conveyor belt (see video below).

“Since the orientation of each part is not known,” says Yardley, “the machine-vision system must accommodate parts that are rotated in every direction and may be both right-side up and upside down.” As the tags pass along the conveyor, they are imaged using two 656 × 494-pixel NCS-10C3 FireWire RGB CCD cameras from Neurocheck, which are placed 270° apart (see Fig. 2).

FIGURE 2. As the tags pass along the conveyor, they are imaged using two FireWire cameras placed 270° apart. After parts are identified, they are blown into 50 separate bins by pneumatic actuators.

Since the white characters and numbers are printed twice around the circumference of the tags, only two cameras are required to capture the complete details of each part. “Because the field of view of the cameras is approximately 1 × 0.75 in.,” says Yardley, “the cameras are both fitted with a fixed-focus 35-mm lens using a 5-mm extension tube.”

Color capture

Since the color of each part and its characters must be properly identified, lighting played a critical role in the design of the system. The tags are all colored, so a linear white LED diffused toplight from Spectrum Illumination was placed above the conveyor. This diffused light achieved an even illumination of each part with no hot spots occurring in the captured images.

As parts pass along the conveyor, the LED light is triggered asynchronously and two 24-bit RGB images are captured and transferred to the host PC over two FireWire interfaces. After the images are stored in the host PC, they are first transformed from RGB color space to the more perceptually uniform L*a*b* color space using Neurocheck software.

In the L*a*b* color space, it is easier to discriminate between colors such as pink and red that may appear similar in RGB space. Using color segmentation, the white characters or numbers on each part can then be separated from the colored background. The features of each white number or character, now isolated on a black background, are measured to determine which of the two images contains the most discernible features.
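The RGB-to-L*a*b* transform the article describes follows standard colorimetry. The sketch below implements the textbook sRGB → XYZ → L*a*b* formulas (D65 white point) in plain Python; it is an illustration of the idea, not the Neurocheck implementation, whose internals are not public.

```python
# Sketch of an sRGB -> CIE L*a*b* conversion using the standard formulas
# and the D65 white point. Illustrates the perceptually uniform space in
# which red and pink separate more cleanly than in raw RGB.

def srgb_to_lab(r8, g8, b8):
    # 1. Undo the sRGB gamma to get linear RGB in [0, 1].
    def linearize(c8):
        c = c8 / 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    r, g, b = (linearize(c) for c in (r8, g8, b8))

    # 2. Linear RGB -> CIE XYZ (standard sRGB matrix, D65 illuminant).
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b

    # 3. XYZ -> L*a*b*, normalized by the D65 white point.
    xn, yn, zn = 0.95047, 1.0, 1.08883

    def f(t):
        delta = 6 / 29
        return t ** (1.0 / 3.0) if t > delta ** 3 else t / (3 * delta ** 2) + 4 / 29

    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

# A pure white pixel (such as the printed characters) maps to roughly
# L* = 100, a* = 0, b* = 0.
L, a, b = srgb_to_lab(255, 255, 255)
```

In practice the conversion runs per pixel over the whole frame; segmenting the white print then reduces to thresholding on high L* with near-zero a* and b*.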

After this step, the image with the more discernible features is thresholded and filtered to extract the number or character on the tag, which is then identified using the optical character recognition (OCR) tools within the Neurocheck software package. Should a tag be upside down, the image is first rotated and then OCR is performed.
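The threshold-then-rotate preparation can be sketched on a toy image patch. The helper names below are illustrative, not part of the Neurocheck package:

```python
# Toy version of the extraction step: binarize a grayscale patch so the
# white print becomes 1s on a 0 background, and flip a patch 180 degrees
# when the tag presents upside down. Helper names are invented.

def binarize(gray, thresh=128):
    """Map pixels above `thresh` (white print) to 1, the background to 0."""
    return [[1 if px > thresh else 0 for px in row] for row in gray]

def rotate_180(img):
    """Rotate an image patch by 180 degrees (reverse rows, then columns)."""
    return [row[::-1] for row in reversed(img)]

patch = [
    [10, 200,  10],
    [10, 200,  10],
    [10, 200, 200],
]
mask = binarize(patch)       # white glyph pixels -> 1
upright = rotate_180(mask)   # undo an upside-down presentation before OCR
```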

Neural networks

“Because there are a finite number of tags with different colors, letters, and characters,” says Yardley, “a specific number of tags can be grouped together by color and number. For example, tags of different colors bearing the characters XL, XXL, and XXXL could all be grouped into one.”

By categorizing the colors and features into 28 groups such as these, a lookup table can be built and the results of the OCR and color analysis matched to specific known patterns within each group using a neural network algorithm in the Neurocheck software. This is important in reducing the processing time because the system is expected to process 300 tags/min.
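The group-then-match idea can be illustrated with a plain lookup table. The group names, sizes, and bin numbers below are invented for the example; the article states only that 28 groups feed 50 bins, and the real system uses a neural-network matcher inside Neurocheck rather than a hand-coded dictionary:

```python
# Illustrative two-stage lookup: OCR text narrows a tag to a size group,
# then (group, color) resolves a concrete bin. Group names, sizes, and
# bin numbers are invented for the sketch.

SIZE_GROUPS = {
    "XL": "extra-large", "XXL": "extra-large", "XXXL": "extra-large",
    "3": "numeric", "10": "numeric", "12": "numeric",
}

BIN_TABLE = {
    ("extra-large", "red"): 7,
    ("extra-large", "yellow"): 8,
    ("numeric", "red"): 21,
    ("numeric", "yellow"): 22,
}

def classify(ocr_text, color):
    """Return the destination bin, or None if the tag is unrecognized."""
    group = SIZE_GROUPS.get(ocr_text)
    return BIN_TABLE.get((group, color))

bin_no = classify("XXL", "red")   # resolves to a concrete bin
reject = classify("??", "red")    # unreadable tags fall through to None
```

Restricting the match to patterns within one group is what keeps the per-tag decision cheap enough for the 300 tags/min target.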

After the parts are properly identified, they must be sorted into 50 different bins. To accomplish this, an encoder on the conveyor belt is used to track the location of each part as it moves out of the imaging system’s field of view.

As each part passes along the belt, a series of 25 pneumatic actuators and escape tubes located on either side of the conveyor are actuated to blow each part down the correct tube. These pneumatically driven actuators are controlled from a programmable logic controller (PLC) that is in turn interfaced to the PC using a 16-bit digital I/O board, also from Neurocheck.
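The encoder-based tracking can be sketched as a queue of pending parts, each stamped with the encoder count at which its actuator should fire. The class, counts, and actuator indices below are invented to illustrate the scheme, not taken from the PLC program:

```python
# Toy model of encoder-based part tracking: each classified part is queued
# with the encoder count at which it reaches its assigned actuator; a poll
# routine fires any actuator whose part has arrived. Assumes parts pass
# their actuators in the order they were queued. All values are invented.

from collections import deque

class SortTracker:
    def __init__(self):
        self.pending = deque()  # (fire_at_count, actuator_index)

    def enqueue(self, current_count, counts_to_actuator, actuator_index):
        """Register a part identified at `current_count` on the belt."""
        self.pending.append((current_count + counts_to_actuator, actuator_index))

    def poll(self, current_count):
        """Return the actuators to fire now that the belt has advanced."""
        fired = []
        while self.pending and self.pending[0][0] <= current_count:
            fired.append(self.pending.popleft()[1])
        return fired

tracker = SortTracker()
tracker.enqueue(current_count=100, counts_to_actuator=40, actuator_index=7)
tracker.enqueue(current_count=110, counts_to_actuator=40, actuator_index=21)
early = tracker.poll(120)   # no part has reached its actuator yet -> []
now = tracker.poll(145)     # first part arrives at actuator 7 -> [7]
```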

Parts fall down these tubes onto another conveyor that carries them into the correct bin. Should a part fail to be classified, it travels to the end of the conveyor, where it is collected and either fed back into the system or discarded.

In addition to performing clothes-tag sorting at rates of 300 parts/min, the system can also be used to track the number and types of parts processed. This data can then be transferred from the Neurocheck software to database software where the number and sizes of individual garment types can be analyzed by the department store management.

Company Info

Industrial Vision Systems
Kingston Bagpuize, UK
www.industrialvision.co.uk

Neurocheck
Stuttgart, Germany
www.neurocheck.com

RNA Automation
Birmingham, UK
www.rna-uk.com

Spectrum Illumination
Montague, MI, USA
www.spectrumillumination.com
