Robots embed off-the-shelf imaging parts for intelligence

March 1, 2000

Using low-cost imaging components, intelligent robots are being readied for urban search and rescue missions.

By R. Winn Hardin, Contributing Editor

Microrovers for urban search and rescue operations need to be small enough to navigate the rubble of the Oklahoma City, OK, federal building, for example, and inexpensive enough to discard in the event of problems that may endanger human lives. To meet these goals, robots need to use low-cost cameras, image-acquisition systems, and motor controllers.

Graduate students Brian Minten (right) and Mark Powell (left) of the University of South Florida pose with Silver Bullet, a robot designed for urban search and rescue operations. Armed with a computer, a global positioning system, sonar, and two video cameras, this mother robot tracks a smaller daughter robot (Bujold) by deducing the distance between the robots with high accuracy.

To develop such an object-tracking and navigation system, researchers at the University of South Florida (USF; Tampa, FL) have built a pair of robots that use standard CCD cameras, CPUs, and a variation on a coordinate-transform algorithm originally used to classify skin cancer. Led by USF professor Robin Murphy, the group's system serves as an automated visual navigation and docking program for two robots capable of search-and-rescue or urban-exploration missions (see photo).

Armed with a global positioning system (GPS), sonar, an on-board computer, and two video cameras, the mother robot, dubbed Silver Bullet, tracks a smaller daughter robot (Bujold) by deducing the distance between the robots with the same accuracy as more-expensive and computationally complex stereovision systems.

Several standard approaches exist for locating objects and measuring distance in three-dimensional space. Stereovision systems triangulate the distance to an object from the small differences between image pairs captured by two cameras. Because triangulation requires two images, such systems are more computation-intensive than single-camera approaches. Stereovision can provide finer detail, but only if the baseline distance between the two cameras is about 1 m, which is too large for a small, maneuverable robot. Laser rangefinders can provide significantly higher spatial resolution, but at higher cost.
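For comparison, the triangulation underlying stereovision reduces to the relation Z = f x b / d, where f is the focal length in pixels, b is the camera baseline, and d is the disparity between the two images. The following minimal sketch uses illustrative numbers only; none of the values come from the USF system:

```c
#include <stdio.h>

/* Depth from stereo disparity: Z = f * b / d, with f the focal length in
   pixels, b the camera baseline in meters, and d the disparity in pixels.
   (Illustrative sketch; the values below are not from the USF system.) */
double stereo_depth(double f_px, double baseline_m, double disparity_px)
{
    if (disparity_px <= 0.0)
        return -1.0; /* no disparity: object at infinity or no match */
    return f_px * baseline_m / disparity_px;
}

int main(void)
{
    /* An 800-pixel focal length, 1-m baseline, and 133-pixel disparity
       place the object at roughly 6 m. */
    printf("depth = %.2f m\n", stereo_depth(800.0, 1.0, 133.0));
    return 0;
}
```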

FIGURE 1. In the SCT algorithm, the blue vertex is placed at the top of the color triangle because the blue cones of the human eye can discriminate more shades of blue than the red or green cones can. Regions near the white point, where B is close to its mean value, are smaller than those nearer the perimeter, where B is near its minimum or maximum. This is analogous to human vision, in which colors near the white point of the color triangle are more distinguishable than colors that lie closer to the outer perimeter.

To track the daughter robot, the USF team applies a color-segmentation approach that uses a spherical-coordinate-transform (SCT) algorithm rather than algorithms that depend on hue, saturation, and intensity (HSI). According to Mark Powell, a computer-vision graduate student at USF, "HSI is a textbook algorithm used to separate colors from brightness or intensity within an image. But SCT is better because it handles very light and very dark colors more robustly, which means it can handle changes in intensity better as it travels through unstructured environments. We also found that it lends itself to easier implementation."

In the SCT algorithm, each RGB pixel is mapped to three values:

L = √(R² + G² + B²), A = cos⁻¹(B/L), B = tan⁻¹(G/R)

Here, L is the intensity band, and A and B determine color independent of intensity. If a particular color is plotted as a point in RGB space, then the norm of the vector from the origin to the point is L, the angle between the vector and the blue axis is A, and the angle between the red axis and the projection of the vector onto the red-green plane is B. The resulting color space can be represented as a color triangle. The color of any pixel is defined in two dimensions as the point within the triangle that the vector passes through (see Fig. 1).
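In C, the transform reduces to a few library calls. The following is a minimal sketch based on the geometric description above; the function name and calling convention are ours, not taken from CVIPtools:

```c
#include <math.h>

/* Spherical coordinate transform of one RGB pixel, following the geometry
   described above: l is the vector norm (intensity), angle_a is the angle
   from the blue axis, and angle_b is the angle between the red axis and
   the vector's projection onto the red-green plane.
   (Sketch only; not the CVIPtools implementation.) */
void rgb_to_sct(double r, double g, double b,
                double *l, double *angle_a, double *angle_b)
{
    *l = sqrt(r * r + g * g + b * b);
    *angle_a = (*l > 0.0) ? acos(b / *l) : 0.0; /* 0 on the blue axis */
    *angle_b = atan2(g, r);                     /* 0 on the red axis  */
}
```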

For color segmentation, the color triangle can be partitioned into a number of sub-areas determined by calculating the minimum and maximum of the A and B values; these extremes define a subspace within the color triangle that is then divided into sub-areas of equal angular increments, as sketched below. As the number of sub-areas increases, the segmentation separates out a greater number of different colors. All of the pixels whose colors fall inside a particular sub-area are labeled uniquely with the mean of the RGB values of the pixels in that sub-area. This ability to create color sub-areas leads to better performance in varied lighting conditions.
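In outline, the partitioning step quantizes the two angles into equal increments between their observed extremes. The sketch below illustrates one way to compute a sub-area label; the parameter names and indexing scheme are assumptions, not the CVIPtools code:

```c
/* Map a pixel's (angle_a, angle_b) pair into one of n_a * n_b sub-areas
   of the color triangle. The [min, max] bounds are the extremes observed
   in the training images; every pixel in a given sub-area is later
   relabeled with the mean RGB value of that sub-area.
   (Illustrative sketch; the names are ours.) */
int sct_subarea(double angle_a, double angle_b,
                double a_min, double a_max,
                double b_min, double b_max,
                int n_a, int n_b)
{
    int ia = (int)((angle_a - a_min) / (a_max - a_min) * n_a);
    int ib = (int)((angle_b - b_min) / (b_max - b_min) * n_b);

    /* Clamp pixels that fall exactly on, or just outside, the bounds. */
    if (ia < 0) ia = 0;
    if (ia > n_a - 1) ia = n_a - 1;
    if (ib < 0) ib = 0;
    if (ib > n_b - 1) ib = n_b - 1;

    return ia * n_b + ib; /* unique label for this sub-area */
}
```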

The spherical coordinate transform is also more intuitive to human operators because it mimics the human visual system. According to Powell, two elements of the algorithm emulate characteristics of human visual perception. First, the blue cones of the human eye can discriminate more shades of blue than the red or green cones can; for this reason, the blue vertex was placed at the top of the color triangle, and sub-areas near the blue vertex are more finely distributed than those near the red and green vertices. Second, regions near the center of the triangle (the white point), where B is close to its mean value, are smaller than those nearer the perimeter, where B is near its minimum or maximum. The perceptual analog of this property is that, in human vision, colors near the white point of the color triangle are more distinguishable than colors that lie close to the outer perimeter.

On-board hardware

The initial training and testing of the SCT algorithm was conducted using still photographs scanned into a workstation from Sun Microsystems (Palo Alto, CA). Written in C as part of the CVIPtools package developed by Scott Umbaugh at Southern Illinois University (Edwardsville, IL), the algorithm was then loaded onto the robot's CPU.

FIGURE 2. For image acquisition, the Silver Bullet robot uses a camcorder for wide-angle gross tracking of the daughter robot, Bujold. The SCT algorithm, applied to the camcorder images, provides fine control of the pan-servo motor on a second camera, which supplies the images for distance determination.

Aboard Silver Bullet, image processing is performed on a Windows-based motherboard with a 500-MHz AMD K6-3 processor and 64 Mbytes of RAM. Attached to the motherboard is a RangeLAN2 wireless-Ethernet card from Proxim (Sunnyvale, CA) that keeps the mother robot in contact with a Proxim base-station transceiver. A Jupiter GPS receiver from Conexant Systems (Newport Beach, CA) and a custom-built array of six sonars from Polaroid (Cambridge, MA) are also attached to the CPU through serial-port connections (see Fig. 2).

For image acquisition, Silver Bullet uses a camcorder from JVC (Wayne, NJ) for wide-angle gross tracking of Bujold. The SCT algorithm performed on the camcorder video images provides fine control of the pan-servo motor on the second camera. This 2200 series CCD camera from Cohu (San Diego, CA) provides the images for distance determination. Both cameras feed analog video output to a pair of Meteor frame grabbers from Matrox (Dorval, Quebec, Canada).

The frame grabber attached to the Cohu camera reduces images from standard NTSC resolution to 256 x 240 pixels for color segmentation, direction, and distance determination. To move the camera, Silver Bullet uses a pan servo from Omnitech Robotics (Englewood, CO) connected to the Cohu camera and controlled by a CanPC (Omnitech) PCI-bus card.

Bujold, a 0.75 x 1.5-ft microrobot from Inuktun Services (Cedar, BC, Canada), is supplied with its own NTSC video camera, which will eventually allow Bujold to locally determine its distance from Silver Bullet and return to Silver Bullet's carrying compartment. To give Silver Bullet a fiducial mark to track, an orange ball was placed on an antenna wire attached to the robot. Communication between the mother and daughter robots is conducted through a 100-ft self-feeding tether. Distance and direction determination begins with images from the JVC camcorder.

To extract image data matching the orange ball, the SCT algorithm is applied to captured images. The location of the ball in the camcorder's frame of reference provides control of the pan servo attached to the base of the Cohu camera. The SCT algorithm then processes images from the Cohu camera, and distance is determined from the apparent size of the ball using a look-up table of size versus distance, as sketched below. Using the AMD K6-3 processor, Silver Bullet is able to update Bujold's location three times per second. Future versions may use more powerful processors to increase this refresh rate.
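That look-up step can be pictured as linear interpolation in a small size-versus-distance table. In the sketch below, the table entries are invented purely for illustration; the actual calibration values were not published:

```c
/* Interpolate distance (m) from the apparent ball size (pixels) using a
   size-versus-distance look-up table. Sizes are sorted in descending
   order because a nearer ball appears larger. The entries below are
   invented for illustration, not the USF calibration data. */
static const double lut_size_px[] = { 120.0, 60.0, 30.0, 20.0, 15.0 };
static const double lut_dist_m[]  = {   1.0,  2.0,  4.0,  6.0,  8.0 };
enum { LUT_N = 5 };

double ball_distance(double size_px)
{
    if (size_px >= lut_size_px[0])
        return lut_dist_m[0]; /* closer than the nearest table entry */

    for (int i = 1; i < LUT_N; i++) {
        if (size_px >= lut_size_px[i]) {
            /* Linear interpolation between table entries i-1 and i. */
            double t = (size_px - lut_size_px[i - 1]) /
                       (lut_size_px[i] - lut_size_px[i - 1]);
            return lut_dist_m[i - 1] + t * (lut_dist_m[i] - lut_dist_m[i - 1]);
        }
    }
    return lut_dist_m[LUT_N - 1]; /* smaller than the farthest entry: clamp */
}
```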

Experimental data showed average errors in distance measurement of within 6 cm at 6-m separation, an accuracy commensurate with more-expensive stereovision systems. Murphy's group is currently adding more colors to the fiducial marker to allow orientation determination, similar to methods used by NASA on the Mars rover. This work will also lead to an adaptive SCT algorithm that lets Silver Bullet better cope with lighting changes. Other planned improvements include the addition of a tilt servo and automated focus to the Cohu camera to extend the resolution and the separation distance between the two robots. A wireless connection between mother and daughter will also eliminate the need for a tether.

Company Information

AMD
Sunnyvale, CA 94088
(800) 538-8450
Web: www.amd.com

Cohu
San Diego, CA 92186-5623
(858) 277-6700
Web: www.cohu.com

Conexant Systems
Newport Beach, CA 92658-8902
(949) 483-4600
Fax: (949) 483-4078
Web: www.conexant.com

Inuktun Services
Cedar, BC, Canada V9X 1W1
(250) 722-2209
Fax: (250) 722-2031
Web: www.inuktun.com

JVC
Wayne, NJ 07470
(800) 582-5825
Web: www.jvc.com

Matrox
Dorval, Quebec, Canada H9P 2T4
(514) 685-7230
Fax: (514) 685-2853
Web: www.matrox.com/imaging

Omnitech Robotics
Englewood, CO 80110
(303) 922-7773
Fax: (303) 922-7775
Web: www.omnitech.com

Polaroid
Cambridge, MA 02139
(781) 386-2000
Web: www.polaroid.com

Proxim
Sunnyvale, CA 94086
(408) 731-2700
Fax: (408) 731-3675
Web: www.proxim.com

Southern Illinois University Edwardsville
Edwardsville, IL 62026
(618) 650-2000
Web: www.siue.edu

Sun Microsystems
Palo Alto, CA 94303
Web: www.sun.com

University of South Florida (USF)
Computer Science and Engineering
Tampa, FL 33620
(813) 974-4756
Web: www.csee.usf.edu/~murphy
