Teleoperated search-and-rescue robot operators gain accuracy with practice

May 6, 2011
Urban search and rescue (USAR) task forces are essential for locating, stabilizing, and extricating people who become trapped in confined spaces following a catastrophic event. Sometimes the search area is too unstable for a live rescue team, so rescuers have turned to robots that incorporate vision systems and video cameras.

Most recently, USAR robots have been employed by rescuers following the Japanese earthquake and tsunami. The rescuers control, or teleoperate, the robots from a safe location. However, teleoperation can be problematic because robots frequently become stuck, which can destabilize the search area and hinder rescue operations.

“The World Trade Center site was the first major real-world evaluation of robots as tools for USAR,” says Keith Jones, a human factors and ergonomics (HF/E) researcher at Texas Tech University (Lubbock, TX, USA). “Overall, the robots performed well. One problem that did surface, however, was that the robots got stuck, a lot.” Jones, with coauthors Brian Johnson and Elizabeth Schmidlin, published a study of USAR robot teleoperation in a special issue of the Journal of Cognitive Engineering and Decision Making on human-robot interaction.

In a series of experiments, Jones and colleagues asked participants to drive a USAR robot through openings in various structures. Successful navigation depended on both the size of the robot and the operator’s driving skill. Surprisingly, even untrained operators could accurately judge the robot’s size relative to an opening. However, operators overestimated their ability to guide the robot through the opening, a judgment that factors in the robot’s size, the size of the aperture, and the operator’s driving skill. Jones et al. did find that, with practice, participants improved these driveability judgments.

“Our research seeks to understand why operators are getting their robots stuck,” says Jones. “With that knowledge, hopefully, we can reduce the problem, and increase the amount of time that operators spend searching for survivors.”

SOURCE: The Human Factors and Ergonomics Society

--Posted by Vision Systems Design
