February 2017 snapshots: Robots for food delivery, greenhouse harvesting, disaster relief, and autonomous vehicles

Feb. 14, 2017
In the February 2017 snapshots, read about vision-guided robots developed for purposes such as food delivery, tomato harvesting, disaster relief, and autonomous vehicles.

Autonomous robot successfully delivers takeout for the first time

Navigating the sidewalks of London using a multi-component vision system, an autonomous robot developed by Starship Technologies (London, UK; www.starship.xyz), a startup launched by Skype co-founders Ahti Heinla and Janus Friis, has successfully delivered takeout, the first time such a task has been completed by a robot.

Just Eat, a UK food delivery service, sent the robot to Turkish restaurant Taksim Meze in London to pick up an order of falafel and lamb cutlets. The food then made its way back to the customer in the robot's locked compartment.

First shown at the Mobile World Congress in February 2016, the robot features a multi-part vision system: at least nine cameras in total, comprising three stereo pairs of visible-spectrum cameras (six cameras) and three 3D time-of-flight cameras. An Nvidia Tegra K1 mobile processor handles the machine vision and autonomous driving functions, and the robot also carries ultrasonic sensors providing a 360° view, GPS, and an IMU/accelerometer. Together, these components provide navigation, situational awareness, and reaction control for the robot.

GPS on the robot is accurate to around 30 m, which the company says is not enough, so the robot uses computer vision algorithms to build a 3D map of the neighborhood before it can operate autonomously. Once the map is built, the robot compares what it sees in its field of view to what it mapped previously, enabling it to determine where it is located. To avoid obstacles, it maintains what Starship Technologies calls a 'situational awareness bubble,' created using the robot's vision system and ultrasonic sensors. Topping out at only 4 mph, the robot can sense potential obstacles and stop at a safe distance.
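Starship has not published its localization code; as a hedged illustration of the map-versus-view matching idea, the sketch below (pure NumPy, with a toy gradient-histogram descriptor standing in for the real feature pipeline) localizes a camera view by finding the best-matching keyframe in a prebuilt map:

```python
import numpy as np

def descriptor(img):
    """Toy appearance descriptor: a normalized histogram of horizontal gradients.
    A real system would use robust local features instead."""
    gx = np.diff(img.astype(float), axis=1)
    hist, _ = np.histogram(gx, bins=16, range=(-255.0, 255.0))
    return hist / max(hist.sum(), 1)

def build_map(keyframes):
    """Pre-drive mapping pass: store one descriptor per surveyed location."""
    return [descriptor(f) for f in keyframes]

def localize(view, map_descriptors):
    """Return the index of the mapped location that best matches the current view."""
    d = descriptor(view)
    distances = [np.abs(d - m).sum() for m in map_descriptors]
    return int(np.argmin(distances))
```

The design choice mirrors the description above: an expensive mapping pass happens once, and the online step is a cheap nearest-descriptor lookup against the stored map.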

Weighing nearly 40 lbs., the robot can carry about 22 lbs. in its 16 x 13.5 x 13 in. cargo bay. It measures 27 x 22 x 22 in. and its battery lasts for two hours. Additionally, the robot can climb curbs and has a braking distance of nearly 1 ft.

With business headquarters in London and an R&D office in Estonia, the company began testing its robots in early 2016 in the US, UK, Estonia, and 13 other countries. In July 2016, the company launched a number of pilot programs with commercial partners, testing the robots and robotic delivery platforms in Europe and the US.

Commercial deployments are planned for 2017. The company's business model is to perform last-mile deliveries on behalf of partners or to help run delivery services by providing robots as a 'platform-as-a-service.'

On January 12, The Wall Street Journal reported that multinational automotive company Daimler had led a new $17.2 million funding round in Starship Technologies to help bring the robots to sidewalks around the globe.

Vision-guided robot trims leaves off tomatoes in greenhouse

To create an automated, cost-effective alternative to manually de-leafing tomato crops grown in greenhouses, engineers at Priva (De Lier, The Netherlands; www.priva.com) developed a vision-guided robot fitted with telescopic cutters that handles the process autonomously.

Called the Kompano Deleaf-Line, Priva's robot travels on tube rails down greenhouse lanes populated with tomato plants spaced at intervals on either side of the track. The robot moves from plant to plant sequentially, identifying and removing leaves from each one. To pick out the small green leaves that need removal from among the other green foliage in the greenhouse, and to operate under varying lighting conditions, Priva built the system with a pair of stereoscopic cameras.

The two stereo cameras were custom built from two pairs of FLIR Integrated Imaging Solutions (formerly Point Grey; Richmond, BC, Canada; www.ptgrey.com) Chameleon3 cameras. These feature the 1.3 MPixel ON Semiconductor (Phoenix, AZ, USA; www.onsemi.com) PYTHON 1300 CMOS image sensor, which has a 4.8 µm pixel size and can acquire images at up to 149 fps. The cameras capture a wide field of view from both the left and right sides of each tomato plant.

"To enable each pair of stereo cameras to capture reliable images of the tomato plants regardless of the lighting conditions in the greenhouse, the system employs a Xenon strobe light which illuminates the plant. As the strobe emits light every two seconds, the strobe triggers the stereo cameras to expose images at 30 microsecond intervals. This enables the system to capture a uniform set of images each time," said Dr. Tomas de Boer, the Priva engineer responsible for the design of the system.

As images are captured by the two stereo cameras, they are transferred over a USB interface to a PC running the open-source Ubuntu operating system and the Robot Operating System (ROS), an open-source software framework hosted by the Open Source Robotics Foundation, Inc. (OSRF) for creating robotic applications across a variety of platforms.

At this point, according to FLIR Integrated Imaging Solutions, custom image processing algorithms built on OpenCV process the images from both camera pairs to locate leaves on the tomato plants within a height range previously defined by the tomato growers. Once the leaves are located, the software calculates the exact 3D coordinates of the petioles (the stalks that attach tomato leaf blades to their stems), which must be cut by the end effector of the robotic arm.
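The article doesn't detail how the 3D coordinates are computed; as a generic sketch of triangulation for a rectified stereo pair (the focal length, baseline, and principal point in the example below are made-up illustrations, not Priva's calibration):

```python
import numpy as np

def triangulate_point(x_left, x_right, y, focal_px, baseline_m, cx, cy):
    """Recover the 3D position (in meters, left-camera frame) of a point seen
    at pixel column x_left in the left image and x_right in the right image
    of a rectified stereo pair.

    focal_px   : focal length in pixels
    baseline_m : distance between the two camera centers in meters
    cx, cy     : principal point (image center) in pixels
    """
    disparity = x_left - x_right            # larger disparity = closer point
    z = focal_px * baseline_m / disparity   # depth from similar triangles
    x = (x_left - cx) * z / focal_px
    y3d = (y - cy) * z / focal_px
    return np.array([x, y3d, z])
```

For example, with an 800-pixel focal length, a 10 cm baseline, and a 160-pixel disparity, the matched point lies 0.5 m in front of the cameras.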

Coordinate data generated by the software is passed to ROS, which transfers it to a set of intelligent servo drives powering the motors that drive the telescopic robot arm to the correct location on the plant, where it cuts the petioles and removes the leaves. Because the cameras sit on the same platform as the robotic arm, they move with it, capturing new images of the plant from a different perspective. Additional leaves are then identified and removed until the system can no longer detect any.
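The capture-cut-recapture cycle described above can be sketched as a simple loop; the `robot` interface and `detect_petioles` function here are hypothetical stand-ins, not Priva's actual API:

```python
def deleaf(robot, detect_petioles):
    """Repeat until no petioles remain in view: capture a stereo image pair,
    locate cut points, and cut the first one. Each cut moves the arm (and the
    cameras mounted on it), so the next capture sees a new perspective.
    Returns the number of cuts performed."""
    cuts = 0
    while True:
        petioles = detect_petioles(robot.capture_stereo())
        if not petioles:
            return cuts
        robot.cut(petioles[0])
        cuts += 1
```

Cutting one petiole per iteration, rather than all detected petioles at once, matches the behavior described: the scene changes after every cut, so detections are refreshed each cycle.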

Priva's team is currently working with members of a tomato-growing consortium in the Netherlands to finalize a pre-production prototype of the system. Depending on the number of robots ordered by the consortium, the system will then become available to other tomato growers starting in June 2017.

Researchers develop construction robot for disaster relief situations

A group of researchers in Japan has developed a prototype of a vision-guided robot that will aid in disaster relief situations.

As part of the Impulsing Paradigm Change through Disruptive Technologies Program's (ImPACT) Tough Robotics Challenge, researchers from Osaka University (Osaka, Japan; www.osaka-u.ac.jp), Kobe University (Kobe, Japan; www.kobe-u.ac.jp/en), Tohoku University (Miyagi, Japan; www.tohoku.ac.jp), The University of Tokyo (Tokyo, Japan; www.u-tokyo.ac.jp/index_e.html), and Tokyo Institute of Technology (Tokyo, Japan; www.titech.ac.jp/English) developed the construction robot. While it looks like an ordinary hydraulic shovel, it is fitted with a number of additional technologies that enable deployment in disaster relief scenarios.

One such technology is a vision system composed of visible cameras and a long-wave infrared (LWIR) camera. While camera types and vendor names are not specified, Osaka University notes that the infrared camera allows the operator to run the robot while assessing the situation even in bad weather conditions such as fog. Additionally, four fish-eye cameras mounted on the robot give the operator an overhead view.

The team worked to develop technology that "quickly and stably controls the heavy power machine with high inertia by achieving target values regarding location and speed through fine tuning and by controlling pressures on a cylinder at high speeds." They also developed a number of other elemental technologies, including vibrotactile feedback, accomplished via a force sensor installed at the robot's end effector that measures high-frequency vibration.

Technology was also developed for estimating the external load in each hydraulic cylinder with multiple degrees of freedom. The estimated force is then used for force control or force feedback to the operator of the robot. Additionally, the researchers developed technology for flying a drone in the proximity of the robot to provide images and assess the area around the robot.

The researchers are also developing additional elemental technologies and making efforts to improve overall technical performance. They are also developing new robots with a double-rotation mechanism and dual arms with the purpose of achieving higher operability and terrain adaptability.

Two large investments into autonomous vehicles announced

Both Intel (Santa Clara, CA, USA; www.intel.com) and BlackBerry (Waterloo, ON, Canada; www.blackberry.com) have announced major investments in the development of autonomous vehicle technologies.

First, at the LA Auto Show AutoMobility conference on November 15, Intel CEO Brian Krzanich announced that Intel Capital is targeting more than $250 million of new investments over the next two years to make fully autonomous driving a reality.

"We are committed to providing end-to-end solutions that drive insights and create value from data," said Krzanich in a blog post. "Let Intel be your trusted partner as the world moves toward fully automated driving, and together data will improve safety, mobility and efficiency."

In his address, Krzanich said the automotive industry is on the cusp of a transformation that demands unprecedented levels of computing, intelligence, and connectivity, including technology such as sensors, sonar, LIDAR, and vision embedded in autonomous cars. He highlighted the need for the industry to prepare for a deluge of more than 4,000 GB of data coming from a single car each day.
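To put the 4,000 GB/day figure in perspective, a quick back-of-the-envelope calculation (assuming decimal gigabytes and a full 24-hour day of driving and sensing) gives the sustained per-car data rate:

```python
# 4,000 GB generated per car per day, spread evenly over 24 hours
gb_per_day = 4_000
bytes_per_day = gb_per_day * 1e9
seconds_per_day = 24 * 60 * 60          # 86,400 s

mb_per_second = bytes_per_day / seconds_per_day / 1e6
print(f"{mb_per_second:.1f} MB/s")      # roughly 46.3 MB/s sustained
```

That is, each car would continuously produce on the order of 46 MB of sensor data every second, which illustrates why Krzanich framed this as a computing and connectivity challenge.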

In addition, BlackBerry announced the opening of its BlackBerry QNX Autonomous Vehicle Innovation Centre (AVIC) in Ottawa, ON, Canada, developed to accelerate the realization of connected and self-driving cars through software development.

Specifically, according to BlackBerry, the aim is to develop production-ready software independently and in collaboration with partners in the private and public sectors. As part of the initiative, BlackBerry QNX will recruit and hire local software engineers to work on ongoing and emerging engineering projects for connected and autonomous cars. The Ministry of Transportation of Ontario recently approved BlackBerry QNX to test autonomous vehicles on Ontario roads as part of a pilot program. One of the first projects will be supporting this pilot, as well as BlackBerry QNX's work with the University of Waterloo (Waterloo, ON, Canada; https://uwaterloo.ca), PolySync (Portland, OR, USA; www.polysync.io), and Renesas Electronics (Tokyo, Japan; www.renesas.com) to build an autonomous concept vehicle.

At the official unveiling of the center, John Chen, Executive Chairman and CEO of BlackBerry Limited, and Canadian Prime Minister Justin Trudeau offered their comments.

"Autonomous vehicles require software that is extremely sophisticated and highly secure," said Chen. "Our innovation track record in mobile security and our demonstrated leadership in automotive software make us ideally suited to dominate the market for embedded intelligence in the cars of the future."

Trudeau added, "With the opening of its innovation centre in Ottawa, BlackBerry is helping to establish our country as the global leader in software and security for connected car and autonomous vehicle development. This centre will create great middle-class jobs for Canadians, new opportunities for recent university graduates, and further position Canada as a global hub for innovation."
