When former Editor Andy Wilson first asked me to host a Webinar on 3D vision for Vision Systems Design, I had just published an article on a structured light scanner in an optics publication and, as a shameless self-promoter, had asked the editor of Prosilica’s monthly newsletter to include an article on the system in its next issue. On the very day that newsletter was released, Andy contacted me from Germany. As it was explained to me, Vision Systems Design had never had a 3D vision Webinar, and it was a goal of the editors to introduce one. Having read my article, Andy thought I seemed like a good fit. Well, here we are 12 years later, and I continue to present the Webinar on an annual basis. Looking over the many PowerPoint slides, I am amazed at how the technology has evolved. In nearly every presentation, I have described a technology in the early stages of university study, only to see a product on the market a year or two later.
Today, 3D vision has become commonplace, as users of machine vision systems can buy scanners and time-of-flight (ToF) sensors off the shelf. Thanks to publications like Vision Systems Design, as well as their exceptional series of Webinars, customers are more savvy and know the differences between passive and active stereo vision. It’s nice to think that machine vision applications drove much of this growth, but much of the imaging technology we take for granted today was actually developed for video gaming. This is certainly true of ToF cameras, whose lineage runs from 3DV Systems, which originally set out to design a depth sensor for Sony’s PlayStation, through Microsoft’s Kinect V2 and now the Azure Kinect. Incidentally, lidar systems have seen a similar evolution but, given their cost, needed an application like self-driving cars to justify the expense of the research and development. Certainly, machine vision has benefited greatly from these markets, as hardened ToF cameras are available from machine vision manufacturers at prices that shock me to this day.
My own research with structured light was largely sidelined by the introduction of PrimeSense’s first-generation cameras and further supplanted by Intel’s RealSense line of cameras. This isn’t to say that my work is no longer of value; quite the contrary, as I address problems with multipath interference as well as moiré interference between the square sampling grids of the camera and the projector. Machine vision structured light systems still achieve orders of magnitude better resolution than RGB-D cameras. However, there is a certain loss of pizzazz that existed early on, when laypersons were wowed by the videos I created of moving targets. Obviously, we see the same thing happening with artificial intelligence (AI) today: early projects wowed people with what could be achieved, but as the technology became more pervasive, the wow factor receded, and things have become almost mundane.
Of course, the obvious question now is what technology or industry is going to drive the evolution of machine vision for the next 12 years. For those who have attended my presentations, it was my firm belief that augmented reality would drive the future of imaging, but I’ve since changed my mind. Instead, I think the future of machine vision is in hyperspectral imaging. The number of technologies for acquiring hyperspectral data is increasing, and prices are coming down to the point where they are manageable for some applications. And while hyperspectral imaging is not a new concept, advances in AI are making it much easier to handle the volume of data associated with hyperspectral cameras. Hand in hand with this, the number of applications for hyperspectral imaging is expanding rapidly, which drives further improvements in AI and brings hardware costs down, again feeding the cycle.
Only time will tell, of course, and I’m certainly looking forward to it. I’ve already got a slide deck ready to go.