Expert system tools enable robot deployment by physically disabled

Sept. 1, 1998
Conventional programming languages, such as FORTRAN and C, are designed and optimized for the procedural manipulation of data such as numbers and arrays. And although object-oriented languages such as C++ and Java broaden what can be represented, these conventional languages are still not good at "brainstorming" to solve symbolic problems as people do.

One key result of artificial-intelligence (AI) research has been the development of techniques that allow the modeling of information at higher levels of abstraction. These techniques are embodied in languages or tools that can build programs that closely resemble human logic in their implementation and are therefore easier to develop and maintain.

These AI languages can describe real-world states and the relationships between them in the form of rules. Once the start and goal states of a system are specified, AI tools can use "inference engines" to reach the goal state from the start state. Unlike procedural methods, this approach frees programmers from specifying the sequence of required actions.
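As a rough illustration of this rule-based style (the fact and rule names here are hypothetical, not taken from Playbot), a few CLIPS rules can declare a start state and let the inference engine chain toward a goal:

    ;; Hypothetical start state: robot at the door, toy on the table.
    (deffacts start-state
       (robot-at door)
       (toy-at table))

    ;; Move toward the toy whenever robot and toy are in different places.
    (defrule move-to-toy
       ?pos <- (robot-at ?here)
       (toy-at ?there&:(neq ?there ?here))
       =>
       (retract ?pos)
       (assert (robot-at ?there)))

    ;; Once robot and toy share a location, the goal state is reached.
    (defrule grasp-toy
       (robot-at ?place)
       (toy-at ?place)
       (not (holding toy))
       =>
       (assert (holding toy))
       (printout t "Goal reached: holding toy" crlf))

Entering (reset) and then (run) at the CLIPS prompt fires these rules as their conditions become true; the programmer never spells out the sequence of actions.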

Expert-system tools, such as CLIPS, developed by the Software Technology Branch of the NASA Lyndon B. Johnson Space Center (Houston, TX), have greatly reduced the effort and cost involved in developing an expert robotic system at the Computer Vision Group of the University of Toronto (Toronto, Ontario, Canada). Dubbed Playbot, the robot is designed to enable physically disabled children to access and manipulate toys. "Current robots for the disabled rely on the user's visual system to be an integral part of a closed-loop control system," says Gilbert Verghese, who heads the university project. "In one class of robotic aids, specialized sensors are developed for fingers, eyebrows, or eye movements. The user decides what the robot manipulator should grasp and then, through a long series of microactivations of the robot, visually guides the manipulator to the target object," he adds.

"Playbot is designed with the goal of `short-circuiting` this control loop," says Verghese. While the user`s visual system is still needed to determine how objects are manipulated, the robot`s visual system takes the place of user-interaction in the closed-loop control of the robot during the execution of the task

Consisting of a robot platform, a color stereo active robotic head, a robot arm with a two-jointed gripper, and a mouse-driven screen interface, Playbot is controlled by a network of computers in a client-server architecture. Each robot component is controlled by a computer that acts as a server. Each server is then mapped to client computers that send requests to the servers to perform actions or return status information.
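The request routing between clients and component servers can be sketched in CLIPS with invented names (the actual Playbot servers communicate over a network, which this fragment does not attempt to model):

    ;; One client request addressed to one component server.
    (deftemplate request
       (slot client)
       (slot server)
       (slot action))

    ;; Hypothetical roster: one server per robot component.
    (deffacts component-servers
       (server head)
       (server arm)
       (server gripper)
       (server platform))

    ;; Dispatch a request only to a server that actually exists.
    (defrule dispatch-request
       ?req <- (request (client ?c) (server ?s) (action ?a))
       (server ?s)
       =>
       (printout t ?s " server executes " ?a " for client " ?c crlf)
       (retract ?req))

Asserting (request (client planner) (server arm) (action grasp)) and running the engine then stands in for a single client-to-server call.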

To digitize images and perform object-recognition and tracking tasks, Verghese and his group use two IC-STD-COMP PCI-bus frame grabbers from Imaging Technology (Bedford, MA) in a dual-processor Pentium II-400 MMX system. A Precision MicroControl (Carlsbad, CA) DCX-PC100 servo-motor controller in the same system controls the robot components. "We have recently added voice-command recognition using the IBM Corp. ViaVoice SDK and expanded the vision system to recognize human faces and interpret some static hand gestures," says Verghese.

"In addition to C and C++ application programmer interfaces," notes Verghese, "all client functions are complied into the CLIPS programming language. As an object-oriented LISP-like interpreted language with inference, pattern matching, and tracing capabilities, Playbot`s planner server is written in CLIPS and has client connections of its own to all other servers. In operation, the planner server receives commands and goals directly from the user interface and can pre-empt any goal or robot action in progress."
