Michigan State University's Vision System Sorts and Grades Sweet Potatoes

June 5, 2024
Machine vision system deploys YOLOv8 algorithms to analyze images of sweet potatoes.

Researchers at Michigan State University used a LiDAR camera and convolutional neural network to inspect and grade sweet potatoes moving on a roller conveyor system.

Sweet potatoes are a popular vegetable in the United States: 126,300 acres were harvested in 2023. They are typically inspected and graded by hand in packing houses based on size, shape, and quality standards specified by the U.S. Department of Agriculture, although many packing houses first use electronic optical sizers to sort the potatoes by size.

The manual inspection process is time-consuming and expensive, as the Michigan State University (East Lansing, MI, USA) researchers explain in a journal article in Computers and Electronics in Agriculture (bit.ly/3VcnQvp). However, automating quality assessment of sweet potatoes is more challenging than it is for many other fruits and vegetables because of the potatoes' naturally occurring shape irregularities, the researchers note.

Creating a Deep Learning-Enabled Vision System to Inspect Potatoes

“This study represents, to the best of our knowledge, the first effort to develop innovative machine vision technology powered by state-of-the-art AI for real-time, online quality grading of sweet potatoes,” the researchers wrote.

The automated process uses YOLOv8-based (You Only Look Once) algorithms. Built on a convolutional neural network architecture, YOLO is a one-stage detector, meaning it processes each image in a single pass rather than first proposing candidate regions and then classifying them.
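
The article does not include the researchers' code, but a minimal sketch of single-pass YOLOv8 instance segmentation with the open-source ultralytics Python package looks like the following; the weights file and image name are placeholders, not artifacts from the study.

```python
# Minimal sketch: one-pass YOLOv8 instance segmentation with the
# ultralytics package. The weights file and image path are placeholders.
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")               # pretrained segmentation weights
results = model("sweet_potato_frame.jpg")    # one forward pass per image

for r in results:
    print(r.boxes.cls, r.boxes.conf)         # predicted classes and confidences
    if r.masks is not None:
        print(len(r.masks.xy), "segmentation polygons")
```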

The system developed by the researchers grades sweet potatoes based on size and surface defects, snapping images of the potatoes as they move and rotate at a rate of 5 cm/s on a custom-designed roller conveyor system. Researchers manually load the potatoes onto the conveyor, which uses a chain-gear rack system to rotate the potatoes, enabling the system to collect images of the entire surface of each sweet potato.

The machine vision system also includes a RealSense L515 LiDAR camera from Intel Corp. (Santa Clara, CA, USA), a computer, and a monitor. The Intel camera, positioned above the conveyor and pointed downward, captures color video at 30 fps with a resolution of 1920 x 1080 pixels.

Although the camera produces RGB and depth images, the researchers used only the RGB images.
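
For readers who want to replicate the capture setup, a short sketch using Intel's pyrealsense2 library to stream only the 1920 x 1080, 30-fps color feed from an L515 might look like this; it is illustrative and not the authors' acquisition code.

```python
# Sketch: stream 1920 x 1080 color frames at 30 fps from a RealSense L515
# using pyrealsense2. Only the RGB stream is enabled, mirroring the study's
# use of color images alone. Illustrative, not the authors' code.
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 1920, 1080, rs.format.bgr8, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    color = np.asanyarray(frames.get_color_frame().get_data())  # H x W x 3 BGR array
finally:
    pipeline.stop()
```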

The computer is equipped with an Intel Core i7-13650HX processor (2.6 GHz) and an NVIDIA (Santa Clara, CA, USA) GeForce RTX 4060 GPU.

On the software side, the researchers integrated a tracking algorithm with YOLOv8, allowing them to both segment and track the sweet potatoes in real time. They also developed an algorithm to estimate size using OpenCV (the Open Source Computer Vision Library), which contains more than 2,500 computer vision and machine learning algorithms.
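
The paper's exact tracking and sizing code is not reproduced in the article; the sketch below shows one plausible way to pair YOLOv8 segmentation with the ultralytics built-in tracker and derive a length and width estimate from each mask with OpenCV's minimum-area rectangle. The video path and the pixel-to-millimeter calibration factor are assumptions for illustration.

```python
# Sketch: YOLOv8 segmentation + tracking, with a rough per-detection size
# estimate from the mask's minimum-area bounding rectangle (OpenCV).
# The video path and MM_PER_PIXEL calibration are placeholders.
import cv2
import numpy as np
from ultralytics import YOLO

MM_PER_PIXEL = 0.5                          # placeholder calibration factor
model = YOLO("yolov8n-seg.pt")

for result in model.track("conveyor_video.mp4", persist=True, stream=True):
    if result.masks is None or result.boxes.id is None:
        continue
    for track_id, polygon in zip(result.boxes.id, result.masks.xy):
        rect = cv2.minAreaRect(polygon.astype(np.float32))
        w, h = rect[1]                      # rectangle side lengths in pixels
        length_mm = max(w, h) * MM_PER_PIXEL
        width_mm = min(w, h) * MM_PER_PIXEL
        print(int(track_id), round(length_mm, 1), round(width_mm, 1))
```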

Evaluating the Inspection System's Performance

To test their system, the researchers purchased 123 sweet potatoes at local grocery stores. Trained people manually labeled and evaluated the potatoes based on their length, width, and surface defects. “This manual evaluation is necessary for establishing a ground truth for supervised learning,” explains Yuzhen Lu, a study author and assistant professor in the Department of Biosystems and Agricultural Engineering at Michigan State University.

Lu and a second author, Research Associate Jiajun Xu, divided the potatoes into batches and then loaded them manually onto the conveyor to capture video of them. They created 19 separate videos.

They randomly divided the videos into a training set of 13 videos representing 87 potatoes and a testing set of six videos with images of 36 potatoes.

Researchers extracted 285 still images from the videos. Because more than one instance of a potato typically appeared in an image, they ended up with 1,564 instances of the potatoes, which were divided into two groups: 1,080 for training and 484 for testing.

Researchers manually drew bounding polygons around each sweet potato instance and assigned a quality grade based on visual inspection of the images. This round of manually derived labels and grading was “necessary for both training and evaluating the instance segmentation model for defect grading,” Lu says.

During testing, samples moved forward on the conveyor while rotating. The algorithms estimated each potato's length and width, detected surface defects, and then determined quality grades for individual instances in the still frames. Based on the accumulated data from all images of a single potato, each sample was then assigned a final quality grade.
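
The article does not specify how per-image grades are combined into a final grade; purely as an illustration, the sketch below assigns each tracked potato the most severe grade observed across its frames, using hypothetical category names.

```python
# Illustrative only: combine per-frame grades for a tracked potato into a
# final grade by taking the most severe grade observed. The rule and the
# category names are assumptions, not the aggregation method from the study.
from collections import defaultdict

GRADE_ORDER = ["premium", "grade_1", "grade_2", "cull"]   # hypothetical labels

per_potato = defaultdict(list)     # track_id -> grades observed in each frame

def add_observation(track_id: int, grade: str) -> None:
    per_potato[track_id].append(grade)

def final_grade(track_id: int) -> str:
    # most severe grade seen across all frames of this potato
    return max(per_potato[track_id], key=GRADE_ORDER.index)
```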

The final grading achieved an overall accuracy of 91.7% in assigning the potatoes to one of four quality categories.

Researchers noted several limitations of their study. First, the samples were purchased from grocery stores, so their quality range was narrower than what would be encountered in a packing facility. Second, the 5 cm/s conveyor speed is likely too slow for grading large quantities of sweet potatoes.

The work continues. “We have already integrated the computer vision algorithms with sorting mechanisms, which creates a more complete system,” says Lu.  “Next steps are to optimize the system for better grading and sorting performance and test it at more practically meaningful conveyor speeds (e.g., 0.5 m/s or higher),” he adds.

Another avenue the researchers are exploring is using multispectral imaging. The idea is to acquire color and near-infrared (NIR) images simultaneously. “It will conceivably deliver more information relevant to quality evaluation of sweet potatoes, as some defects can be better resolved in NIR images,” Lu says.

About the Author

Linda Wilson | Editor in Chief

Linda Wilson joined the team at Vision Systems Design in 2022. She has more than 25 years of experience in B2B publishing and has written for numerous publications, including Modern Healthcare, InformationWeek, Computerworld, Health Data Management, and many others. Before joining VSD, she was the senior editor at Medical Laboratory Observer, a sister publication to VSD.         
