Focus on Vision: August 26, 2022

In this edition of Focus on Vision, Chris Mc Loone covers a recent webinar, autonomous vehicle technology, and Vision Systems Design's most recent On Vision column.

On Demand Webinar: Robotic Bin Picking

During this webinar, Jared Glover, CEO of CapSen Robotics, will provide an overview of recent advancements in 3D computer vision software for environments where clutter is either unavoidable or too expensive to eliminate with special-purpose hardware such as bowl feeders or shake tables, and for applications that must meet stringent cycle times.

Specifically, Glover will discuss the state of the art in 3D object pose estimation and 3D object detection in clutter, with applications including robotic bin picking, machine tending, and repacking. He will cover bin picking and precise placement of small, entangled objects; tightly packed, flat automotive and medical parts; and the real-life methods used to accomplish these tasks, including optimized geometric and machine learning algorithms. In repacking applications, the ability to handle safety-critical components without damaging them is important, and the vision, motion planning, and end-effector technology all play vital roles in achieving this.


Depth Cameras Can Fill LiDAR’s Autonomous Vehicle Blind Spots—Here’s How

LiDAR stands a good chance of finding a defining place in the AV market. But many believe that depth vision systems, integrated alongside LiDAR, are well suited to removing the remaining blind spots inside and outside the vehicle. In so doing, 3D imaging could be the last piece needed to make autonomous driving a reality on the world’s streets and highways.

Depth cameras use RGBD (color plus depth) technology. They are typically implemented with a dual-camera stereo pair that enables depth perception of the surrounding area, including the position, distance, and speed of nearby objects, together with an RGB (color) camera for added texture.
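The stereo principle behind such dual-camera systems can be sketched with the standard triangulation relation: depth is proportional to focal length times baseline divided by disparity. The function and example values below are illustrative, not from the article.

```python
# Minimal sketch of stereo depth estimation, the principle behind
# dual-camera RGBD systems: depth Z = (focal length * baseline) / disparity.
# Function name and example numbers are hypothetical, for illustration only.

def depth_from_disparity(disparity_px: float,
                         focal_length_px: float,
                         baseline_m: float) -> float:
    """Return depth in meters for one matched feature.

    disparity_px: horizontal pixel shift of the same feature between
                  the left and right camera images.
    focal_length_px: camera focal length expressed in pixels.
    baseline_m: distance between the two camera centers, in meters.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Example: 700 px focal length, 12 cm baseline, 20 px disparity.
print(depth_from_disparity(20.0, 700.0, 0.12))  # -> 4.2 (meters)
```

Note the inverse relationship: small disparities correspond to distant objects, which is why depth precision degrades with range and why stereo systems complement, rather than replace, LiDAR at long distances.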

On Vision: Gage R & R Studies in Machine Vision

Installers of vision systems for measuring tasks are often caught out by assuming that the pixel calibration task is complete once they divide the number of pixels by the measurement value. This does provide a calibration constant, but it’s not the whole story. After all, vision systems are based on pixels whose effective size depends on sensor resolution, field of view, presentation angle, lighting, and optical quality. It’s essential that any measurements are validated and confirmed from shift to shift and day to day, and that these measurements are repeatable and accurate.
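The point above can be made concrete with a small sketch: derive a calibration constant from a reference artifact, then check repeatability across repeated readings, in the spirit of a Gage R&R study. All names and values here are hypothetical and for illustration only.

```python
import statistics

# Illustrative sketch (not the column's method): calibrate mm-per-pixel
# from a known reference part, then estimate repeatability from
# repeated measurements of the same feature across shifts.

def calibration_constant(known_length_mm: float, measured_pixels: float) -> float:
    """mm per pixel, from one measurement of a reference artifact."""
    return known_length_mm / measured_pixels

def to_mm(pixels: float, mm_per_px: float) -> float:
    """Convert a pixel measurement to millimeters."""
    return pixels * mm_per_px

# Calibrate on a 50.00 mm gauge block imaged at 1000.0 px wide.
mm_per_px = calibration_constant(50.0, 1000.0)  # 0.05 mm/px

# Hypothetical repeated readings of the same part, in pixels.
readings_px = [812.0, 811.6, 812.4, 811.8, 812.2]
readings_mm = [to_mm(p, mm_per_px) for p in readings_px]

mean_mm = statistics.mean(readings_mm)
spread_mm = statistics.stdev(readings_mm)  # repeatability estimate
print(f"mean = {mean_mm:.3f} mm, stdev = {spread_mm:.4f} mm")
```

The calibration constant alone says nothing about this spread; only repeated measurements over time reveal whether the system is stable enough for the tolerance being checked.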

About the Author

Chris Mc Loone | Editor in Chief

Former Editor in Chief Chris Mc Loone joined the Vision Systems Design team as editor in chief in 2021. Chris has been in B2B media for over 25 years. During his tenure at VSD, he covered machine vision and imaging from numerous angles, including application stories, technology trends, industry news, market updates, and new products.

Voice Your Opinion

To join the conversation, and become an exclusive member of Vision Systems Design, create an account today!