Researchers led by professor Giovanna Sansoni at the Laboratory of Optoelectronics in the Department of Engineering at the University of Brescia (Brescia, Italy) have developed a vision-based robotic bartender that can serve customers different varieties of beer. Commissioned by Denso Europe (Weesp, the Netherlands), the system is designed to demonstrate how vision and robotic systems can be used in tandem.
The robot's left and right arms are two Denso VP-6242G robots embedded in a thorax-shaped case, atop which a PC-based supervisory control system has been placed (see Fig. 1). The left arm carries a pneumatic parallel gripper that picks beer bottles from a conveyor and pours drinks into glasses; the right arm is equipped with a plastic hand and a bottle opener.
|FIGURE 1. To demonstrate the effectiveness of vision-guided robotics, professor Giovanna Sansoni and her colleagues at the University of Brescia have developed a service robot "bartender" that can identify the location and orientation of glasses before filling them with beer.|
Beneath the robot sits a table with a tray that can be rotated 180°. When customers place empty glasses on the table, the robotic bartender detects their presence and uses its right arm to rotate the table so the glasses can be filled. The table is then rotated again to present the filled glasses to the customer.
To capture images of the bottles and glasses, the vision system uses two CMOS digital uEye 1540-M 1280 × 1024-pixel USB 2.0 cameras from IDS Imaging (Obersulm, Germany), one of which is mounted on the top of the control system monitor and the other at the end of the robot's right arm.
Both cameras are interfaced to the PC-based supervisory control system running Halcon 9.1 software from MVTec Software (Munich, Germany). This software is integrated with a suite of motion-control software routines written in VB.NET using Denso's robotic Orin interface. It controls table rotation; picking, uncorking, and filling operations; and the conveyor, as well as a robotic disposal mechanism.
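To make the division of labor concrete, the supervisory loop can be sketched as below. This is a minimal illustration only: the actual system is written in VB.NET against Denso's ORiN interface and Halcon, and every function name here (`detect_glasses`, `rotate_table`, `classify`, and so on) is a hypothetical stand-in for those routines, not a real API call.

```python
def serve_round(io):
    """One serving cycle. `io` is any object exposing the hypothetical
    vision/motion operations used below; all names are illustrative."""
    glasses = io.detect_glasses()        # camera above the control monitor
    if not glasses:
        return 0
    io.rotate_table()                    # bring the glasses to the robot side
    for glass in glasses:
        bottle = io.next_object()        # next item arriving on the conveyor
        while io.classify(bottle) is None:
            io.discard(bottle)           # unknown object: remove it
            bottle = io.next_object()
        io.open_bottle(bottle)           # right arm: bottle opener
        io.pour(bottle, glass)           # left arm: pneumatic gripper
    io.rotate_table()                    # present the filled glasses
    return len(glasses)
```

The sketch mirrors the coordination described in the article: vision results gate each motion step, and an unrecognized object is disposed of before serving continues.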
After the system is initialized and calibrated, an image of the table is captured by the camera above the controller. Blob analysis counts the number of glasses. Image erosion and area filtering then discriminate glasses that can be filled from those that are upside down.
When a predefined number of glasses have been detected, the table is moved to the back of the system, after which the coordinates of each glass's center are determined prior to filling.
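The glass-detection steps above can be sketched in pure Python. The production system uses Halcon's blob-analysis operators; this simplified version only illustrates the idea on a binary image (1 = glass pixel, 0 = background): erode to suppress thin structures, label connected blobs, filter by area, and report the centroid of each surviving glass.

```python
from collections import deque

def label_blobs(img):
    """Return a list of blobs, each a list of (row, col) pixels (4-connectivity)."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for r in range(h):
        for c in range(w):
            if img[r][c] and not seen[r][c]:
                q, blob = deque([(r, c)]), []
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                blobs.append(blob)
    return blobs

def erode(img):
    """One erosion pass with a 3x3 square structuring element."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            if all(img[r + dy][c + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[r][c] = 1
    return out

def fillable_glass_centers(img, min_area):
    """Erode, label, keep blobs above min_area, and return their centroids."""
    blobs = [b for b in label_blobs(erode(img)) if len(b) >= min_area]
    return [(sum(y for y, _ in b) / len(b), sum(x for _, x in b) / len(b)) for b in blobs]
```

The returned centroids correspond to the glass-center coordinates the robot needs before filling; the area threshold is the knob that separates fillable glasses from upside-down ones.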
After the bottles appear on the conveyor, the camera on the end of the robot's right arm is oriented to acquire images of each bottle. Since a variety of objects could be placed on the conveyor, an object detection and classification routine determines the nature of each object (see Fig. 2).
|FIGURE 2. As the bottles travel down a feed conveyor to the robotic station, an object detection algorithm determines what type of beer they contain, or whether they are unrecognizable.|
If the object is a beer bottle, the software determines the brand of beer it contains by comparing the image of the unknown bottle on the conveyor against a library of known bottle templates. Once the bottle is identified as a beer of the correct type, it is removed from the conveyor, opened, and its contents poured into a glass.
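A toy version of this template-library comparison is sketched below. The real system relies on Halcon's matching operators; here a minimal sum-of-absolute-differences (SAD) matcher scores a captured region against each known template. The brand names and score threshold in the usage are illustrative assumptions, not values from the article.

```python
def sad_score(image, template):
    """Sum of absolute pixel differences; lower means a closer match."""
    return sum(abs(a - b)
               for row_i, row_t in zip(image, template)
               for a, b in zip(row_i, row_t))

def classify_bottle(image, templates, max_score):
    """Return the best-matching brand, or None if no template scores
    below max_score (i.e., the object is unrecognizable)."""
    best_brand, best = None, max_score
    for brand, tmpl in templates.items():
        score = sad_score(image, tmpl)
        if score < best:
            best_brand, best = brand, score
    return best_brand
```

Returning `None` for unmatched objects is what lets the system fall through to the disposal path described next.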
The vision-guided system also handles unknown objects placed on the conveyor: when one is detected, the robotic arm removes it.
Although the developers of the robotic bartender have successfully demonstrated that the system performs as expected, it is unlikely that the service robot will be replacing human bar staff; the system was developed for the sole purpose of demonstrating the effectiveness of vision-guided robots in industrial environments.