Amazon Testing Robotic Arm that Identifies Individual Products

Dec. 21, 2022
The robot can manipulate 65% of Amazon’s sortable inventory of more than 100 million products, not including large products such as appliances.

Amazon (Seattle, WA, USA; https://www.amazon.com/) is using computer vision and artificial intelligence (AI) to automate its logistics operation with robotics, and it’s touting its endeavors publicly.

It has unveiled several experimental robots in 2022 and showcased a production robot that sorts packages.

In November 2022, Amazon introduced Sparrow, which the company is testing in the field. The core of the robotic system is an arm (M-20iD/25) from FANUC (Yamanashi, Japan; https://www.fanuc.co.jp/eindex.html) that Amazon customized.

Sparrow uses suction cups to grip and then move individual products, such as vitamins or board games, from one tote, or bin, to another.

Leveraging computer vision and AI, “Sparrow is the first robotic system in our warehouses that can detect, select, and handle individual products in our inventory,” Amazon says in the November news release.

In a video Amazon created in its robotics lab near Boston, Massachusetts, Sparrow sorts products of different shapes and sizes, moving them from a yellow tote to one of four gray totes. 

Machine Vision and AI

While not referring specifically to Sparrow, Amazon says it uses cameras positioned at different angles combined with machine learning to help its robots visualize individual objects within a crowded scene and determine how to pick them up, according to a 2022 article posted on Amazon Science.1

In the Amazon-created video of Sparrow, which we have posted here with the company's permission, three cameras are visible—one mounted on the ceiling, one on the robotic arm, and one on the wall.

David Dechow, Machine Vision Expert and Vision Systems Design Contributing Editor, reviewed the video of Sparrow in action. He has designed similar applications and uses that knowledge to speculate on how Sparrow works.

Here’s how he breaks it down:

  • A 3D camera mounted on the ceiling takes an image of the items in the yellow tote after the robotic arm clears that tote with an object in its gripper. The image provides information on the basic shapes of objects in the yellow bin, allowing the vision application to determine how to pick up the next object.
  • There are eight vacuum tubes in the suction-type gripper. By retracting some of the tubes, the robotic arm can vary how many of the tubes it uses to pick up a given item. The number of tubes selected would depend on the available surface area and shape of an item. “They've done a really nice job with this gripper because it gives them the flexibility to pick all of those really different shapes and different sizes,” Dechow says.
  • Two 2D cameras (one mounted on the robot and a second on the wall) take an image of an object after it is in Sparrow’s grip. The arm pauses at this point, and Dechow speculates that this allows the vision application time to determine what the item is and which of the four gray totes it should be placed in. “If they are using AI for the identification, that’s interesting,” Dechow says.
  • The illumination for the application appears to be LED. Dechow notes that the lighting devices have multiple segments, giving Amazon the flexibility to determine which segment to flash in each situation.
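The sequence Dechow describes can be sketched in code. Everything below is an illustrative assumption based on his speculation, not Amazon's implementation: the function names, the eight-tube gripper model, the per-tube area threshold, and the category-to-tote mapping are all hypothetical.

```python
# Hypothetical sketch of the pick-and-place sequence Dechow speculates on.
# All names, thresholds, and data shapes are illustrative assumptions.

def select_vacuum_tubes(surface_area_cm2, total_tubes=8, area_per_tube_cm2=12.0):
    """Retract tubes so the active count roughly matches the graspable area."""
    usable = int(surface_area_cm2 // area_per_tube_cm2)
    return max(1, min(total_tubes, usable))

def pick_and_place(items, destination_totes):
    """items: dicts with 'name', 'surface_area_cm2', 'category' (as the 3D
    ceiling camera might report them). destination_totes maps a category to
    a tote id, standing in for the four gray totes."""
    plan = []
    for item in items:
        # Step 1: the ceiling 3D camera's shape data drives the grip choice.
        tubes = select_vacuum_tubes(item["surface_area_cm2"])
        # Step 2: the pause for the two 2D cameras would happen here,
        # identifying the gripped item and choosing its destination.
        tote = destination_totes[item["category"]]
        plan.append((item["name"], tubes, tote))
    return plan
```

For example, a small item with 40 cm² of graspable surface would activate three of the eight tubes under these assumed numbers, while a large flat box would use all eight.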

Challenges with Robotic Bin Picking at Scale

Tom Brennan, President of Artemis Vision (Denver, CO; https://www.artemisvision.com), says the operation depicted in photos and videos accompanying the news release—picking an item from one tote and moving it to another tote—is a challenging machine vision application.

It is difficult enough to develop an application in which a robotic arm takes a single known part out of a tote and places it on a production line, Brennan notes. It is much more challenging to develop a vision system that can recognize and pick as many arbitrary items as would be necessary in Amazon's environment, he says.

“I’ve seen tons of robotic arms ‘doing’ this at tradeshows in constrained environments but not really at distribution centers,” Brennan says.

However, it would be important to know what error rate Sparrow has logged during testing to fully evaluate its potential, he adds. Nonetheless, “if it works on a very broad set of products, that’s pretty amazing,” Brennan concludes.

Dechow says that other organizations are experimenting with this type of robotic process, but two stumbling blocks often prevent these applications from moving successfully to a production environment:

  • Accurately recognizing all objects, particularly those with few unique features or little identifying information, such as printed words or logos on a label.
  • Picking and placing fast enough to keep up with production rates.

Field Testing

Amazon is field testing Sparrow at a site in Texas. In those tests, “Sparrow is working on a process known as inventory consolidation within our fulfillment process,” explains Amazon Spokesperson Xavier Van Chau. “It is handling items from one tote and consolidating them in another tote, densely packing that tote to ensure it is completely full and helping maximize inventory we can hold within an operating site. This means Sparrow has to deal with a great deal of clutter and have the ability to pick items in a crowded tote while also densely packing them in another bin,” he says.

Van Chau adds that Sparrow can manipulate 65% of Amazon’s sortable inventory of more than 100 million products, not including large products like appliances.

In 2021, Amazon “picked, stowed, or packed approximately 5 billion packages” worldwide, according to the news release.

Summing up the testing with Sparrow, Van Chau says, “We’re excited by the progress we are making, but it’s too soon to share our plans for broader deployment.”

However, Dechow says that there are at least two other points in a logistics operation where Sparrow could be deployed: picking customers’ orders or sorting stock for storage after it arrives at a warehouse from a manufacturer’s operation.

Many Robots at Amazon

At Amazon, numerous robots are in various stages of development or production use.

Robin is another robotic arm from FANUC (M-20iD/25) that Amazon has customized. Robin, which also has a suction-type gripper, sorts parcels and mailers as they move down a conveyor belt. To do so, Robin grabs each package, rotates it, and scans the label to read the ZIP code. Robin also removes packages with rips, tears, or illegible addresses from the conveyor, so that employees can fix those issues before the packages are shipped to customers, according to articles posted on Amazon Science.2, 3
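The sorting behavior described above can be sketched as a simple routing function. The route table, ZIP prefixes, and record fields below are illustrative assumptions, not Amazon's system; the only behavior taken from the article is routing by the scanned ZIP code and diverting damaged or unreadable packages for manual handling.

```python
# Hypothetical sketch of Robin's described sorting logic: route by the
# scanned ZIP code, divert damaged or unreadable packages for manual review.
# The route table and field names are illustrative assumptions.

ROUTES = {"0": "northeast-lane", "6": "midwest-lane", "9": "west-lane"}

def route_package(pkg):
    """pkg: dict with 'zip' (str, or None if the label was illegible)
    and 'damaged' (bool, e.g. rips or tears detected)."""
    if pkg["damaged"] or not pkg["zip"]:
        return "manual-review"  # pulled off the conveyor for employees to fix
    return ROUTES.get(pkg["zip"][0], "default-lane")
```

A package with a Seattle ZIP (98101) would go to the assumed west lane, while one with a torn label would be diverted regardless of destination.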

Robin sorts packages at Amazon's fulfillment centers. (Courtesy of Amazon)

Currently, “We have deployed over 1,000 Robin robotic systems across our operations network,” Van Chau says.   

For safety reasons, Robin operates in restricted areas of Amazon’s fulfillment centers.

In an article in Amazon Science from 2021 describing Robin, Amazon says Robin “brings many core technologies to new levels and acts as a glimpse into the possibilities of combining vision, package manipulation, and machine learning.” 2

Training Robots

Training robots in visual perception tasks requires a dataset of annotated images to teach robots how to distinguish between types of packages (a mailer vs. a box, for example) or types of products (such as different shampoos and conditioners).

Since Amazon has found that publicly available image datasets aren’t good enough for training robots on the products shipped from the company’s fulfillment centers, it’s developing an in-house trove that the company believes will cut the “setup time required to develop vision-based machine learning solutions from between six to 12 months to just one or two,” Amazon explains in a November 10, 2022, blog post on Amazon Science.4  
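To make the dataset point concrete, an annotated training example typically pairs an image with a class label and a bounding box, and the collection is split into training and validation sets before model development. The schema and labels below are generic illustrations, not Amazon's internal format.

```python
# Illustrative sketch of the kind of annotated records an image-training
# dataset contains; the schema and labels are assumptions, not Amazon's.
import random

def make_record(image_path, label, bbox):
    """One annotation: image file, class label, bounding box (x, y, w, h)."""
    return {"image": image_path, "label": label, "bbox": bbox}

def train_val_split(records, val_fraction=0.2, seed=0):
    """Shuffle deterministically and hold out a validation slice."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * val_fraction)
    return shuffled[n_val:], shuffled[:n_val]
```

Collecting and labeling such records in-house, rather than relying on public datasets, is the effort the blog post describes.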

Next Steps

While Sparrow and Robin use suction to grip items, Amazon also is experimenting with pinch grasping, which more closely mimics the actions of human hands, the company explains in the 2022 article posted on Amazon Science.1  

Sparrow and Robin aren’t the only robots Amazon has discussed publicly this year. The company also says it’s experimenting with an autonomous robot, Proteus, which uses perception and navigation technology developed in-house to move in and around Amazon’s employees, meaning it isn’t restricted to robot-only areas of the fulfillment centers. 

Proteus will initially move GoCarts, which are tall, wheeled shelving systems, around the outbound area of its fulfillment centers. However, the goal is for the robot to bring the shelving units directly to employees situated at workstations in the fulfillment centers. Currently, employees move the shelving units around manually, Amazon says.

Overall, the company says it has more than 185 fulfillment centers, 50 of which incorporate more than 520,000 individual robotic drive units.

Amazon says the impetus for embracing computer vision and machine learning is not only to improve operational efficiency but also to reduce employees' risk of injury and provide them with jobs that are more satisfying.

“The design and deployment of robotics and technology across our operations have created over 700 new categories of jobs that now exist within the company—all because of the technology we’ve introduced into our operations,” the company says in the November press release introducing Sparrow.  

Resources:

1. “Pinch-Grasping Robot Handles Items with Precision,” Amazon Science. https://www.amazon.science/latest-news/pinch-grasping-robot-handles-items-with-precision

2. “Amazon’s Robot Arms Break Ground in Safety and Technology,” Amazon Science. https://www.amazon.science/latest-news/amazon-robotics-see-robin-robot-arms-in-action

3. “Robin Deals with a World Where Things Are Changing All Around It,” Amazon Science. https://www.amazon.science/latest-news/robin-deals-with-a-world-where-things-are-changing-all-around-it

4. “How a Universal Model is Helping One Generation of Amazon Robots Train the Next,” Amazon Science. https://www.amazon.science/latest-news/how-a-universal-model-is-helping-one-generation-of-amazon-robots-train-the-next

About the Author

Linda Wilson | Editor in Chief

Linda Wilson joined the team at Vision Systems Design in 2022. She has more than 25 years of experience in B2B publishing and has written for numerous publications, including Modern Healthcare, InformationWeek, Computerworld, Health Data Management, and many others. Before joining VSD, she was the senior editor at Medical Laboratory Observer, a sister publication to VSD.         
