Silicon Software Q&A: Embedded vision, deep learning, and 20 years of frame grabbers and software
This year, Silicon Software—a 2017 Innovators Awards gold-level honoree—celebrates its 20th year as a vision company providing frame grabbers and imaging software solutions. To celebrate its anniversary, managing directors Dr. Ralf Lay and Dr. Klaus-Henning Noffz looked back at the company’s beginnings, while offering a look at current trends and growth areas.
While Silicon Software published their own Q&A with Dr. Ralf Lay and Dr. Klaus-Henning Noffz (listed as RL and KHN), we wanted to ask a few specific questions of the company’s managing directors to get a better idea of where they stand on a few specific "hot topics." Below are the questions Vision Systems Design asked:
Embedded vision is a technology that is gaining more and more popularity. Why do you think this is, and what is your company’s role in the technology?
KHN: Embedded vision expands the possibilities for image processing into new areas such as autonomous driving, surveillance, transport, logistics, and service robots, industries in which PCs are neither suitable nor desired. Low system costs, higher mobility, better integration, and reduced system size also support this approach.
However, due to the complexity and heterogeneity of these systems (components from different manufacturers), the technology requires new concepts and standards. For this reason, the Embedded Vision Study Group, under my chairmanship, has been developing new embedded vision standards (OPC UA Vision Companion Specification and Embedded GenICam) since the end of 2015 in the areas of process technology, software compatibility, and integration into automation (Industry 4.0, IIoT). Complementing this is the parameterization of vision devices and the representation of their complex functions and formats in GenICam.
Our own approach focuses on intelligent, virtually self-sufficient embedded image processing components based on FPGA technology, which greatly reduces the data load on CPUs. The FPGA in a way represents the control center of embedded cameras and vision sensors; it is particularly suited for image and signal processing, communication with peripherals and IT systems, and real-time processing, programmed via the graphical user interface of VisualApplets. With graphical FPGA programming of vision devices, it is possible to quickly equip complete product lines with partially autonomous capabilities at an accelerated time to market. Through this concept, embedded vision components can achieve performance comparable to large solutions in many embedded applications.
Deep learning and artificial intelligence are also becoming more popular. How, if at all, does your company plan to become involved in this, if it is not already?
RL: Machine learning is an important topic for us, as it provides new approaches to many new applications with clearly better results. Convolutional neural networks (CNNs) are very well suited for pattern recognition by training and for complex classifications via the linear combination of convolutions, e.g. to detect defects. FPGAs are perfectly suited for CNNs due to their natively parallel processing architecture and because they provide a single step from image acquisition to classification result. FPGAs have much lower power consumption than GPUs and are the best choice for the high volume of data to be processed in machine learning. They can also be used in embedded vision devices, bringing machine learning into the machine vision environment. Machine learning has become a high priority for us: a CNN operator within VisualApplets as well as a new frame grabber, both specifically adapted for this purpose, are in development. A further benefit for our partners is the use of our CNN operator functionality within their VisualApplets Ready devices.
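To make the idea concrete: the building block of a CNN is the convolution, a sliding weighted sum whose filter responds strongly to certain local patterns. The sketch below (a hypothetical, pure-Python illustration, not Silicon Software code; a trained CNN stacks many learned filters rather than one hand-picked kernel) shows how a single convolution plus a threshold can already flag a local defect such as a bright spot on an otherwise uniform surface.

```python
# Hypothetical sketch: one convolution pass as a crude defect detector.
# In a real CNN the kernel weights are learned from labeled examples.

def convolve2d(image, kernel):
    """Valid-mode 2D convolution (no padding), on lists of lists."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# Laplacian-style kernel: near zero on flat regions, large on intensity spikes.
KERNEL = [[ 0, -1,  0],
          [-1,  4, -1],
          [ 0, -1,  0]]

def has_defect(image, threshold=2.0):
    """Classify a patch as defective if any filter response exceeds threshold."""
    response = convolve2d(image, KERNEL)
    return any(abs(v) > threshold for row in response for v in row)

# A flat 5x5 patch vs. one with a bright spot in the middle:
clean = [[1.0] * 5 for _ in range(5)]
flawed = [[1.0] * 5 for _ in range(5)]
flawed[2][2] = 3.0
```

The per-pixel multiply-accumulate loops are independent of one another, which is exactly why this workload maps so well onto the parallel architecture of an FPGA.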
Now that you are celebrating your 20th anniversary, what would you like to see your company do in the next 20 years?
KHN: We would like to continue establishing VisualApplets as the market standard for FPGA programming. Since the graphical user interface of VisualApplets makes FPGA programming easy for everyone, without any HDL (hardware description language) knowledge, we will further enhance our software according to user and market needs with special functionalities, e.g. for machine and deep learning. Looking ahead, we envision VisualApplets covering all the most important vision applications and requirements by providing task-specific applets or operators, and being used by a large number of hardware and software developers as well as application engineers. We foresee VisualApplets running in many varied FPGA-based devices with versatile applications, from extremely compact devices to huge image processing systems.
The same objectives apply to our frame grabbers. We will cover all relevant future high-speed camera interfaces, connected via quick configuration of the system components, so that demanding vision applications in all industries can be addressed. Frame grabbers will still be needed, and we will remain a technology pioneer for our industry.
In conclusion, my vision for the next 20 years of our company is to expand our hardware and software business by providing the market with concise, tailor-made solutions that meet the requirements of our customers.
As mentioned above, Silicon Software also published their own Q&A, which provides a look at the start of the company and brings us up to the present day. That Q&A can be downloaded here.
Pictured: Dr. Ralf Lay (left) and Dr. Klaus-Henning Noffz (right)
View more information on Silicon Software.