Everyday human-human interactions rely on multiple modes of communication, including spoken language, facial expressions, body pose, and gestures, allowing humans to convey large amounts of information in a short time. Researchers at the Technical University of Munich are mimicking this approach to improve human-robot interactions.
Frank Wallhoff, Christoph Mayer, and Tobias Rehrl are focusing on nonverbal communication such as gestures and facial expressions, which reveal a human's emotional state, signal agreement or disagreement, serve as greetings, and can augment or even replace information passed on by spoken language. They have built a real-time-capable gesture-interaction interface for human-robot interaction that evaluates head and hand gestures as well as facial expressions.
To date, the real-time-capable framework supports human-robot interaction in two scenarios: an assistive household and an industrial hybrid assembly station. In the assistive-household scenario, humans are an integrated part of the environment, which makes this setup a good starting point for research. In the industrial context, the motivation for gesture recognition is mainly that reliable speech recognition is not always available, since ambient and unpredictable factory noise hinders the speech-recognition process.
The researchers have presented a demonstrator at trade fairs, scientific conferences, laboratory tours, and on TV. They say the major drawback so far is that the classifiers are trained offline in advance. Online learning of gestures and facial expressions would enable adaptation to either a single human or a specific environment.
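To illustrate the distinction, online learning updates a classifier one sample at a time as new data arrives, rather than retraining on a fixed dataset. The sketch below is a minimal, illustrative online perceptron on toy two-dimensional "gesture feature" vectors; the class name, feature values, and learning rate are assumptions for the example and are not taken from the researchers' system.

```python
class OnlinePerceptron:
    """Binary classifier updated one sample at a time (online learning).

    Illustrative sketch only -- not the TUM researchers' actual model.
    """

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features  # weight per feature
        self.b = 0.0                 # bias term
        self.lr = lr                 # learning rate

    def predict(self, x):
        score = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if score >= 0 else -1

    def update(self, x, y):
        """Adapt weights from a single labeled sample (y in {-1, +1})."""
        if self.predict(x) != y:
            self.w = [wi + self.lr * y * xi for wi, xi in zip(self.w, x)]
            self.b += self.lr * y


# Toy stream of hypothetical 2-D gesture features with labels +1 / -1.
stream = [([2.0, 1.0], 1), ([1.5, 2.0], 1),
          ([-1.0, -2.0], -1), ([-2.0, -0.5], -1)] * 5

clf = OnlinePerceptron(n_features=2)
for x, y in stream:
    clf.update(x, y)  # each new sample refines the model in place

print(clf.predict([2.0, 1.5]))    # → 1
print(clf.predict([-1.5, -1.0]))  # → -1
```

Because each update touches only one sample, such a model could keep adapting to a specific user or environment at deployment time, which is exactly the capability the researchers identify as missing from their offline-trained classifiers.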
The full article is available from SPIE.