Vision and neural nets inspect disk platters

Aug. 1, 2005

By Andrew Wilson

To reduce the cost of manufacturing disk platters used in hard drives, IBM and other manufacturers use automated processes to transport the raw platters through the different manufacturing stages, including coating, hardening, and polishing. To move numerous platters simultaneously, the platters are placed in plastic cassettes and moved by a series of conveyors through the production process. “Typically,” says Art Gaffin, CEO of Sightech (San Jose, CA, USA; www.sightech.com), “a robot unloads platters from the cassettes and loads the wafers into sputtering systems for coating. After each manufacturing phase is complete, platters are robotically replaced into the cassettes for transport to the next station for further processing.”

During the reloading process, however, platters can be returned to the cassette incorrectly. This can leave some slots of the cassette empty while others hold multiple disks in the same slot. In some cases, cross-slotting can occur, where a disk is placed between two slots in the cassette. “This misplacement must be detected before cassettes are transported to the next station or production may be disrupted,” says Gaffin. In the system installed at IBM, a central CPU controls all movement of the cassettes (containing platters) between stations and routes the cassettes through an inspection station, where a check is performed for missing, crossed, and/or doubled platters.

The Sightech PC Eyebot uses neural-network-based software. Two ROIs within the camera's field of view are visually trained to determine whether the angle (top) and location (bottom) of the platter are correct.

To do this inspection, a DFW-X710 FireWire camera from Sony (Park Ridge, NJ, USA; www.sony.com/videocameras) is positioned horizontally to image the cassette basket and wafers. Image data from the camera are then processed by the PC Eyebot from Sightech, a machine-vision system with self-learning technology that can be trained to inspect products in seconds simply by viewing them. Unlike many vision systems that use feature-extraction algorithms to process such data, the PC Eyebot uses a neural RAM-based algorithm that extracts, learns, and inspects visual features within each image and can learn 13 million features per second. “Each type of feature within the image determines the PC Eyebot’s perception of the product inspected and thus controls what kind of information will be learned,” Gaffin explains.
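The PC Eyebot's algorithm is proprietary, but a classic RAM-based ("weightless") neural network such as WiSARD illustrates the general idea of learning visual features by memorizing bit-tuple addresses rather than adjusting numeric weights. The sketch below is an illustrative assumption, not Sightech's implementation; all class and parameter names are invented for the example.

```python
# Minimal sketch of a WiSARD-style RAM-based discriminator (not the PC Eyebot's
# actual algorithm). Learning writes bit-tuple addresses into RAM nodes;
# recognition counts how many RAM nodes have seen an image's addresses before.
import numpy as np

class RamDiscriminator:
    """One discriminator: a bank of RAM nodes, each addressed by a bit tuple."""
    def __init__(self, n_bits, tuple_size, rng):
        self.tuple_size = tuple_size
        # Random but fixed mapping of image bits onto RAM nodes.
        self.mapping = rng.permutation(n_bits)
        self.rams = [set() for _ in range(n_bits // tuple_size)]

    def _addresses(self, bits):
        shuffled = bits[self.mapping]
        for i in range(len(self.rams)):
            tup = shuffled[i * self.tuple_size:(i + 1) * self.tuple_size]
            yield i, tuple(tup.tolist())

    def train(self, bits):
        # "Learning" is simply writing each observed address into its RAM node.
        for i, addr in self._addresses(bits):
            self.rams[i].add(addr)

    def score(self, bits):
        # Response = number of RAM nodes that recognize this image's addresses.
        return sum(addr in self.rams[i] for i, addr in self._addresses(bits))

def binarize(image, threshold=128):
    """Turn a grayscale ROI into the flat bit vector the RAM nodes address."""
    return (np.asarray(image).ravel() > threshold).astype(np.uint8)
```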

To determine whether platters are missing, crossed, and/or doubled within each cassette slot, no C or GUI-based programming is required. Instead, the system developer presents the vision system with a number of examples that include correct and incorrect platter positioning within the cassette. The system then compiles a database of these features and their associations. After correct and incorrect images have been presented, the system compares newly captured images against this feature database.
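Continuing the sketch above, such example-driven training might look like the following: one discriminator is trained on correctly loaded cassettes, another on misloaded ones, and a new frame is judged by comparing their responses. The training image lists and the decision rule here are hypothetical placeholders.

```python
# Hypothetical training workflow built on RamDiscriminator/binarize above.
rng = np.random.default_rng(0)
n_bits = 64 * 64                      # e.g. a 64x64 grayscale ROI
good = RamDiscriminator(n_bits, tuple_size=8, rng=rng)
bad = RamDiscriminator(n_bits, tuple_size=8, rng=rng)

for img in correctly_loaded_examples:  # hypothetical list of "good" ROIs
    good.train(binarize(img))
for img in misloaded_examples:         # hypothetical missing/crossed/doubled ROIs
    bad.train(binarize(img))

def inspect(img):
    # Classify by whichever discriminator responds more strongly.
    bits = binarize(img)
    return "pass" if good.score(bits) >= bad.score(bits) else "fail"
```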

“It was necessary for the system to learn two regions of interest (ROIs) within each captured image,” says Gaffin. The first ROI is positioned to image the bottom of the cassette and determines how the platter sits within the groove. “Since the platters can vibrate within the grooves,” says Gaffin, “this ROI determines whether each platter is situated correctly.” The second ROI, positioned at the top of the cassette, checks the angle of the platter within the cassette. “If the angle is too high, for example, the platters may be crossed in the groove of the cassette,” he says. Should the system detect mispositioned or missing platters, a serial interface on the PC Eyebot triggers a warning stop light at the inspection station.
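A rough sketch of that two-ROI check, again building on the discriminators defined earlier, is shown below. The ROI coordinates, serial port, and byte value sent to the stop light are assumptions for illustration only; the article does not describe the actual serial protocol.

```python
# Hypothetical two-ROI check with a serial-triggered warning light (pyserial).
import serial  # pip install pyserial

BOTTOM_ROI = (slice(400, 464), slice(100, 164))  # platter seating in the groove
TOP_ROI = (slice(20, 84), slice(100, 164))       # platter angle at cassette top

def inspect_cassette(frame, bottom_good, bottom_bad, top_good, top_bad, port):
    """frame: 2-D grayscale image; the four discriminators are trained as above."""
    ok = []
    for roi, good, bad in ((BOTTOM_ROI, bottom_good, bottom_bad),
                           (TOP_ROI, top_good, top_bad)):
        bits = binarize(frame[roi])
        ok.append(good.score(bits) >= bad.score(bits))
    if not all(ok):
        # Drive the stop light over the serial interface (assumed byte value).
        port.write(b'\x01')
    return all(ok)

# Example: port = serial.Serial('COM1', 9600)
```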

Gaffin sees opportunities in other areas, such as food inspection. “Most produce is mechanically harvested. Although this is efficient, such systems have a tendency to harvest soda cans!” In the past, sorting out this material has been done manually. But at the recent Vision Show West (San Jose, CA, USA; May 2005), Sightech showed how its self-learning machine-vision technology could be used to distinguish foreign material from different-colored chili peppers using the spatial relationship between color sets.
