If only machine vision vendors would go on tour, maybe their audience feedback would be greater.
This summer, my son Douglas decided to get a job so he could “invest” in more of the latest high-tech musical equipment for his ongoing home recording-studio project. After about two weeks of heavy manual labor and $250 in the bank, he quit. But he still went to the local music store to purchase a brand-new $500 Vox ToneLab SE, a large-format multi-effect modeling pedal-board processor with what some reviewers have called “the awesome sonic power and warm and fat distortion of pure tube sound.”
What impressed me the most about the unit was not the sound but the technology used. Because, you see, the unit includes what Vox calls a Valve Reactor that uses a 12AX7/ECC83 dual triode as a power amp tube. Among other things, it emulates a number of different “classic” amplifiers in sound, feel, distortion, and presence. For old folks, the warm orange glow of the glass tube also provides a sad reminder of the days before the transistor became king.
Perhaps the most important feature of the Vox ToneLab SE is its ease of use. Although there is a small LCD, the user configures the unit using a bank of switches that in a moment can make a Fender Stratocaster playing through a Fender amp sound like a Gibson Les Paul being thrashed through stacks of Marshall amps. As you can imagine, I am now even more popular with my neighbors.
In the consumer marketplace, you can always tell when a product is ready for prime time. Design engineers carefully consider how the customer will interact with the product, making ease of use a priority. Perhaps the most successful example of this in the consumer-electronics market was the introduction of Apple’s iPod, a product soon to be copied by Microsoft, which will no doubt introduce a Windows CE-based look-alike.
In configuring machine-vision systems for the industrial-automation market, however, engineers are faced with some tough challenges in the areas of optics, illumination, cameras, frame grabbers, and software. To overcome these challenges, many companies make smart cameras that attempt to alleviate part of the system-integration problem (see this issue, p. 45). Not surprisingly, and unlike Vox’s ToneLab SE, none of these products feature tube technology!
Some years ago, however, I was introduced to a machine-vision product that took a similar approach, incorporating switches and buttons. Developed by Sightech (San Jose, CA, USA; www.sightech.com), the neural-network-based Eyebot machine-vision system used a number of buttons and switches to allow the user to load, learn, test, view, and display images. Perhaps this product was ahead of its time, because, in its latest incarnation, these switches and buttons have been replaced by a software user interface.
After being given a demonstration of the device, I was impressed with how easy it was to set up and operate, much like the Vox ToneLab SE. Not all machine-vision systems, of course, are based on neural networks that “learn” specific features within images and then extract data from similar images to perform pattern recognition. Many are based on a number of algorithms that include point, filtering, and global operators. To build systems based on these algorithms, developers must combine them in a number of ways. One is left to wonder, however, whether system developers really need to know that a Gaussian filter may be used to achieve the effect of, say, image smoothing. Certainly, the folks at Vox don’t think so. Nowhere does their manual describe the functions used to produce specific effects, just the effects themselves.
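For readers curious about what hides behind an “image smoothing” button, here is a minimal sketch of the kind of Gaussian filtering involved. This is an illustrative pure-Python example (the function and variable names are my own, not from any product mentioned here): it builds a normalized one-dimensional Gaussian kernel and convolves it across a row of pixel values, clamping at the edges.

```python
import math

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 1-D Gaussian kernel of odd length `size`."""
    half = size // 2
    k = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-half, half + 1)]
    total = sum(k)
    return [v / total for v in k]

def smooth_row(row, kernel):
    """Convolve one row of pixel values with the kernel (edges clamped)."""
    half = len(kernel) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for j, w in enumerate(kernel):
            # Clamp out-of-range indices to the nearest valid pixel.
            idx = min(max(i + j - half, 0), len(row) - 1)
            acc += w * row[idx]
        out.append(acc)
    return out

# A hard step edge: smoothing softens the transition between 0 and 255.
row = [0, 0, 0, 0, 255, 255, 255, 255]
print(smooth_row(row, gaussian_kernel()))
```

The point of the editorial stands either way: a well-designed system would expose the *effect* (smoothing, sharpening, edge detection) and keep the kernel mathematics under the hood.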
It’s a lesson that vendors of image-processing software, machine-vision systems, and smart cameras could learn. Perhaps it will be a step in the right direction for future imaging and machine-vision systems that will need to encompass higher-level image-understanding algorithms. Only then can more sophisticated user interfaces finally bring machine vision to the mass market.