Machine vision a focus at Cornell Cup 2013

May 30, 2013
In its second year, Cornell Cup USA saw teams of college students compete in an embedded design competition intended to empower them to become inventors of pioneering technologies. At this year’s event, innovative machine vision applications were showcased by a number of teams looking to break into the industry.

Here is a look at some of the Cornell Cup entries:

Robot feeding arm

One such entry was ARM (Autonomous Robotic Mechanism), from a University of Massachusetts Lowell team. The team aims to give independence to people who are unable to feed themselves with a robotic feeding arm that can be controlled by push buttons, head movements, and facial recognition. Using OpenCV, a free, open-source library of computer vision algorithms, the team enabled the arm to locate food, pick it up from a bowl or plate, and deliver it to the user’s mouth, even if the user’s head is moving.
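
As a rough, hedged illustration of the kind of face tracking involved, the Python sketch below uses OpenCV’s stock Haar cascade to find the user’s face and estimate a mouth position for the arm to servo toward. The camera index, cascade file, and lower-third mouth heuristic are assumptions for illustration, not details of the UMass Lowell design.

import cv2

# Frontal-face Haar cascade bundled with the opencv-python package
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # assumed camera index
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Approximate the mouth as a point in the lower part of the
        # face box; an arm controller would servo toward this target.
        mouth = (x + w // 2, y + int(0.8 * h))
        cv2.circle(frame, mouth, 5, (0, 255, 0), -1)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()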

Intelligent shopping cart

Another machine vision-related entry was the “Mengbaolity,” an intelligent shopping cart developed by a University of California, Berkeley team. In keeping with the theme of helping those who are unable to perform certain tasks, the “Intelligent Cart” is designed to automatically follow its owner inside a supermarket. By combining computer vision, wireless networking, and automation control, the team says, the Intelligent Cart will work with a smartphone app that lets the user command the cart to retrieve selected items under human supervision and return to the customer automatically.
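
To illustrate person-following in general terms, here is a minimal Python sketch that uses OpenCV’s built-in HOG pedestrian detector to pick a steering command from where the largest detected person appears in the frame. The detector choice and the three-zone steering rule are assumptions made for this example; the Berkeley team’s actual method is not described in this article.

import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def steering_command(frame):
    """Return 'stop', 'left', 'right', or 'forward' based on where
    the largest detected person sits in the image."""
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(rects) == 0:
        return "stop"                      # nobody in view
    x, y, w, h = max(rects, key=lambda r: r[2] * r[3])
    center = x + w / 2                     # horizontal center of person
    third = frame.shape[1] / 3             # split image into three zones
    if center < third:
        return "left"
    if center > 2 * third:
        return "right"
    return "forward"

In a real cart, a routine like steering_command would run in a loop on frames from an onboard camera and feed the motor controller.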

Self-powered robotic table

The Ouroboros submission from Columbia University proposed Alfred: an intuitively controlled, mobile, self-balancing table with voice-over-IP capability and a lifting capacity of up to 50 lbs. The group explains that force transducer arrays around the platform’s circumference translate a touch input into omnidirectional motion or a change in platform height, letting Alfred’s controls navigate the platform both autonomously and intuitively. In addition, an infrared camera allows Alfred to detect and follow the user where space is constrained. Once again, the application was designed to provide intuitive assistance to those who need it.
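
As a back-of-the-envelope sketch of how a ring of force sensors might be mapped to platform motion, the Python snippet below sums each sensor’s reading along its radial direction to get a net push. The sensor count, dead-band threshold, and the rule that a roughly uniform press becomes a height command are all illustrative assumptions, not Columbia’s actual control law.

import math

NUM_SENSORS = 8  # assumed: sensors evenly spaced around the circumference

def motion_from_forces(readings):
    """Combine one force reading per sensor into either a planar
    motion command or a height command."""
    fx = fy = 0.0
    for i, f in enumerate(readings):
        angle = 2 * math.pi * i / NUM_SENSORS
        # A push on one side produces a net horizontal force; the
        # sign convention (push moves the platform away) is assumed.
        fx -= f * math.cos(angle)
        fy -= f * math.sin(angle)
    magnitude = math.hypot(fx, fy)
    if magnitude < 0.5:  # assumed dead-band: horizontal forces cancel
        # A uniform press has no net horizontal component, so treat
        # it as a request to change the platform height instead.
        return ("height", sum(readings))
    return ("translate", fx / magnitude, fy / magnitude)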

Read more about the 2013 Cornell Cup entries.

Also check out:

Robot inspects power lines

Vision-guided robotic 'bartender' serves drinks

Share your vision-related news by contacting James Carroll, Senior Web Editor, Vision Systems Design

About the Author

James Carroll

Former VSD Editor James Carroll joined the team in 2013. Carroll covered machine vision and imaging from numerous angles, including application stories, industry news, market updates, and new products. In addition to writing and editing articles, Carroll managed the Innovators Awards program and webcasts.
