
[Page 2] Q&A: Machine vision software, advancing algorithms

Jan. 18, 2016
4 min read

Editor's note: This article is continued from page one.

Is there anything vision-related that you’ve seen recently, in any particular market, that you thought was particularly cool?

There are a lot of really cool products coming out in the drone market, which, from our point of view, is of great interest since a drone can be thought of as a camera in 3D space. There are many applications that can leverage this freedom of movement, especially when working with larger objects and when localization (i.e., knowing exactly where the drone is) is possible.

Depth sensing is still a hot topic, with improvements over the original Kinect providing more accurate measurements at the same cost. We have started to see these devices being used in commercial settings, where the accuracy of product counting and quality assurance is greatly improved by adding depth. While the depth 'image' is fundamentally different from a visual image, the same techniques one uses for RGB images can help analyze depth images, so there is a lot of functionality overlap.
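To illustrate that overlap, here is a minimal sketch, assuming OpenCV 4, NumPy, and a depth camera that reports distances in millimetres, of counting products on a conveyor by thresholding the depth map with the same tools one would use on an ordinary grayscale image. The function name and parameters are hypothetical, not part of any particular product.

```python
import cv2
import numpy as np

def count_items_by_depth(depth_mm, belt_distance_mm, min_height_mm=20, min_area_px=500):
    # depth_mm: single-channel depth image with values in millimetres
    # belt_distance_mm: camera-to-belt distance, measured once with an empty belt
    height_above_belt = belt_distance_mm - depth_mm.astype(np.int32)

    # Threshold and clean up exactly as one would with a grayscale image
    mask = np.where(height_above_belt > min_height_mm, 255, 0).astype(np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Count sufficiently large blobs as products (OpenCV 4 findContours signature)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return sum(1 for c in contours if cv2.contourArea(c) >= min_area_px)
```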

Why do you have so many hardware-specific modules?

When RoboRealm first started more than 10 years ago, we focused primarily on developing vision modules. It quickly became apparent that once you are able to process an image, you need to do something with that extracted knowledge.

What you need to do with that information varies tremendously depending on your project, but it will always require some form of integration. Because integration is typically the most time-consuming and difficult part of any project, adding custom modules that speak the specific protocols of particular hardware was far more convenient than hand-coding the integration between systems. With those modules built right into RoboRealm, data flows much more quickly and problems are simpler to solve.
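As a rough illustration of the glue code this avoids, here is a minimal sketch assuming the pyserial package and a made-up JSON-over-serial protocol; it is not based on any actual RoboRealm module, and the port and wire format are assumptions.

```python
import json
import serial  # pyserial; the port and wire format below are illustrative assumptions

def send_detection(x, y, label, port="/dev/ttyUSB0", baud=115200):
    # Package one vision result as a JSON line and push it to a microcontroller.
    # Every real device (PLC, servo controller, autopilot) speaks its own protocol,
    # which is why ready-made hardware modules save so much integration time.
    message = json.dumps({"label": label, "x": x, "y": y}) + "\n"
    with serial.Serial(port, baud, timeout=1) as link:
        link.write(message.encode("ascii"))
```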

What modules are you working on for 2016?

RoboRealm already has almost 500 modules, but our to-do list includes at least another 500 and grows every month! While there are many new modules slated for development in 2016, there is a common strategy we are starting to leverage in some of them. That strategy is artificial intelligence.

While one can argue that image analysis is already a form of artificial intelligence, we are talking more about leveraging image analysis functions in learning models such as Neural Nets and Genetic Algorithms. We have found that we can produce an unbelievable number of statistics from images that tell us various things about the image, but it can be very time consuming to incorporate those features into a desired outcome.
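As a concrete illustration of the kind of statistics meant here, the sketch below, assuming OpenCV and NumPy, computes a small feature vector for one region of an image. The particular features and the function name are illustrative, not a description of RoboRealm's modules.

```python
import cv2
import numpy as np

def region_statistics(image_bgr, roi):
    # roi is (x, y, w, h), e.g. one parking spot in a fixed camera view
    x, y, w, h = roi
    patch = image_bgr[y:y + h, x:x + w]
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)

    mean_b, mean_g, mean_r = patch.reshape(-1, 3).mean(axis=0)  # color statistics
    texture = float(gray.std())                                 # crude texture measure
    edges = cv2.Canny(gray, 50, 150)
    edge_density = float((edges > 0).mean())                    # shape/structure proxy

    return np.array([mean_b, mean_g, mean_r, texture, edge_density])
```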

Car counting, for example, is a common vision application. In a trivial implementation, one can use image subtraction to determine whether a car is present in part of the image, but this can fail once the lighting changes (e.g., a cloud passes by). A better approach is to train an automated solution on the same spot in the image, provide it with all possible statistics (e.g., color, texture, shape), and let it work out the best way to determine the presence or absence of a car. The beauty of this solution is that, with only minor adjustments, the same technique can be used to count people!
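Sticking with that example, here is a minimal sketch of both approaches, assuming OpenCV, NumPy, and scikit-learn. It reuses the hypothetical region_statistics() helper from the previous sketch, and the model choice and thresholds are assumptions rather than RoboRealm's implementation.

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier  # stand-in for any learning model

# Trivial approach: frame differencing against an empty-spot reference image.
def spot_changed(reference_gray, current_gray, roi, threshold=25):
    x, y, w, h = roi
    diff = cv2.absdiff(reference_gray[y:y + h, x:x + w],
                       current_gray[y:y + h, x:x + w])
    # A passing cloud changes brightness just like a car does, so this is fragile.
    return float(diff.mean()) > threshold

# Learned approach: classify the spot from its statistics.
# X is a matrix of region_statistics() vectors gathered over many frames;
# y labels each row 1 (car present) or 0 (empty).
def train_spot_classifier(X, y):
    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X, y)
    return model

# Relabelling the training data for person/no-person reuses the exact same
# pipeline for people counting, as noted above.
```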

Our goal is to create more complex modules but with a simpler user configuration interface.

You are involved with the FIRST (For Inspiration and Recognition of Science and Technology) robotics competition. Can you tell us more about that?

RoboRealm has always been a supporter of STEM/STEAM organizations as that's the best way to secure our future not only in the USA, but also in the rest of the world. We are in our fourth year of officially supporting FIRST by donating licenses to each of the almost 3,000 student teams that participate in FIRST worldwide.

Each year the competition challenges kids in mechanical, electrical, sensor, and programming areas to create a robot specific to that year's challenge. This includes using cameras during the competition to identify targets of various kinds in order to aim the robot or its projectiles. RoboRealm is included as one of the tools teams can choose to identify these targets, and it was selected for inclusion after several teams found it easy to use back in 2010! The challenge is to be announced this weekend and triggers the start of the six-week build period. This is an exciting time for us, as the unexpected ways in which the software gets used are always unique!
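For a sense of what such target identification can look like in code (independent of RoboRealm), here is a minimal sketch assuming OpenCV, NumPy, and a bright green target such as LED-illuminated retroreflective tape, a common FIRST setup. The HSV bounds are assumptions that would be tuned on the actual field.

```python
import cv2
import numpy as np

def find_target_center(frame_bgr):
    # Isolate bright green pixels, then return the centroid of the largest blob.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([45, 80, 80]), np.array([90, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    M = cv2.moments(largest)
    if M["m00"] == 0:
        return None
    # Pixel coordinates of the target centre, used to aim the robot or shooter.
    return int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"])
```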

Share your vision-related news by contacting James Carroll, Senior Web Editor, Vision Systems Design


About the Author

James Carroll

Former VSD Editor James Carroll joined the team in 2013. Carroll covered machine vision and imaging from numerous angles, including application stories, industry news, market updates, and new products. In addition to writing and editing articles, Carroll managed the Innovators Awards program and webcasts.
