Sentinel Vision and Zebra Technologies Launch AI, 3D Automated Inspection System

Sentinel Vision and Zebra Technologies have developed an AI-driven 3D inspection system that improves automotive door assembly quality by replacing manual checks with automated, scalable solutions, reducing errors and increasing efficiency.
March 30, 2026
7 min read

Key Highlights

  • The system uses a dual-camera, single-laser 3D architecture to effectively capture complex geometries and reduce blind spots in inspection.
  • It replaces traditional manual and legacy machine vision methods.
  • By combining 3D and 2D sensing with AI, the system provides comprehensive inspection coverage, reducing false positives and improving defect detection.
  • The solution minimizes commissioning time through pre-configured setup based on sample door assemblies.

Sentinel Vision (Rubi, Barcelona, Spain) has partnered with Zebra Technologies Corporation (Lincolnshire, IL, USA) to develop an AI-powered 3D vision inspection system. The system utilizes Zebra’s AltiZ Series 3D profile sensor and Sentinel’s Vision Core AI software and KAI Maker platform. 
Designed for scalability and fast integration, the system has proven effective for manufacturing inspection in a variety of industrial sectors, including automotive, pharmaceutical, and food and beverage. Most recently, the system was successfully deployed for a manufacturer that makes car doors.
The system is a Poka-Yoke—Japanese for mistake proofing/error prevention—inspection system for automotive door panels. Modern car doors can have more than 80 components of various sizes, shapes, and configurations per panel. This system was designed to ensure that each door panel matches the expected build specification for the vehicle on which it will be installed. 
Until recently, inspections for car doors were performed by human inspectors and/or legacy machine vision systems that could require using up to 500 mechanical sensors. 
By contrast, Sentinel’s system requires a single dual-camera, single-laser 3D sensor plus eight 2D industrial cameras, with no human inspectors.
VSD wanted to learn more about the system and its features and capabilities, so we reached out to Álvaro Oliveira, Sentinel Vision’s co-founder and chief technology officer.
Editor’s note: This Q&A may have been edited for clarity and/or style.
Vision Systems Design (VSD): What solution is the system providing and what is it replacing?
Álvaro Oliveira (AO): The system is designed to verify that every component has been assembled correctly, in the correct position, with the correct part reference, and with the intended functional state. 
It replaces a process that is typically dependent on manual visual checks, operator experience, fixture-based validation, and partial downstream functional verification. In many plants, these controls are either inconsistent, difficult to scale across multiple door variants, or only capable of detecting certain error types after the assembly has progressed further downstream.
Our system brings these checks into a single automated in-line verification step, allowing manufacturers to detect missing parts, incorrect parts, mispositioned components, and functional assembly errors earlier and more reliably. The goal is not only to reduce escapes, but also to make quality assurance more repeatable, traceable, and less dependent on individual operator judgement.
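The verification Oliveira describes reduces to comparing what the cameras detect against the expected build specification for that door variant. The sketch below illustrates that Poka-Yoke logic in Python; the component names, data model, and 2 mm position tolerance are invented for illustration and are not Sentinel's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Component:
    part_ref: str   # part reference (expected or as read by the vision system)
    x_mm: float     # position on the panel
    y_mm: float

def verify_panel(expected: dict[str, Component],
                 detected: dict[str, Component],
                 pos_tol_mm: float = 2.0) -> list[str]:
    """Return human-readable defect messages; an empty list means pass."""
    defects = []
    for slot, exp in expected.items():
        det = detected.get(slot)
        if det is None:
            defects.append(f"{slot}: missing component")
        elif det.part_ref != exp.part_ref:
            defects.append(f"{slot}: wrong part {det.part_ref} (expected {exp.part_ref})")
        elif (abs(det.x_mm - exp.x_mm) > pos_tol_mm
              or abs(det.y_mm - exp.y_mm) > pos_tol_mm):
            defects.append(f"{slot}: mispositioned")
    return defects

# Hypothetical door variant: the latch is found in tolerance, the speaker is absent.
expected = {"latch": Component("LATCH-A1", 120.0, 45.0),
            "speaker": Component("SPK-16", 300.0, 210.0)}
detected = {"latch": Component("LATCH-A1", 121.1, 44.6)}

print(verify_panel(expected, detected))  # ['speaker: missing component']
```

Organizing the check per component slot, rather than per whole panel, is what lets the same logic scale across door variants that share parts.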
VSD: What are the components (hardware and software) of the system?
AO: The system combines 3D and 2D vision to deliver both geometric verification and detailed component-level inspection. 
On the hardware side, we use one Zebra AltiZ 3D camera to capture the three-dimensional structure of the door assembly, together with eight FLIR (Wilsonville, OR, USA) 2D cameras to inspect the presence, position, identity, and visual condition of the various components. For illumination, we use flat dome lighting, which provides highly diffuse illumination and helps reduce reflections across the varied materials and surface finishes typically found in automotive door panels.
The system utilizes Sentinel’s proprietary software ecosystem, namely Vision Core, KAI Maker, and HMIBuilder. An advantage of this software ecosystem is that data flows automatically between the different Sentinel platforms. This reduces engineering overhead, simplifies deployment, and helps shorten commissioning time.
VSD: How is the system set up? 
AO: In the case of this Poka-Yoke for door panels, the system is delivered as a fully functional machine built on top of Sentinel’s broader software ecosystem, rather than as a loose collection of components that the customer must integrate themselves. The customer sends us representative door assemblies, and based on those samples, we configure the inspection strategy, adjust the different vision checks, and train the system before deployment. Much of the application tuning is completed before the system reaches the customer’s production floor, which helps reduce integration risk and shortens commissioning time.
VSD: Why a dual-camera, single-laser 3D sensor architecture? What were some technical challenges in developing and implementing the system and how were they solved? Does geometry/occlusion handling improve materially over single-camera systems for door assemblies?
AO: We chose a dual-camera, single-laser 3D architecture because automotive door assemblies present exactly the kind of geometry where a single-view 3D setup can struggle. There are recessed areas, steep surface transitions, and reflective regions that can lead to occlusions or lower-quality 3D capture.
The dual-view approach gives us two perspectives of the same laser profile, which helps reduce blind spots and improves digitization quality in the more difficult areas of the part. 
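The principle behind the dual-view approach can be shown with a toy example: each camera yields a height profile of the same laser line, with gaps where the surface is occluded from that viewpoint, and fusing the two fills most blind spots. This is a minimal sketch of the idea only, not Zebra's actual AltiZ fusion algorithm.

```python
import numpy as np

def merge_profiles(view_a: np.ndarray, view_b: np.ndarray) -> np.ndarray:
    """Fuse two height profiles of the same laser line.

    NaN marks points occluded from that camera. Where only one camera sees
    the surface, take its value; where both do, average the two readings.
    """
    merged = np.where(np.isnan(view_a), view_b, view_a)
    both = ~np.isnan(view_a) & ~np.isnan(view_b)
    merged[both] = (view_a[both] + view_b[both]) / 2.0
    return merged

# A recessed region hidden from camera A but visible to camera B:
a = np.array([10.0, 10.2, np.nan, np.nan, 10.1])
b = np.array([10.0, np.nan, 9.4, 9.5, 10.1])
print(merge_profiles(a, b))  # no gaps remain where at least one view sees the part
```

A point stays unmeasured only when it is occluded from both cameras, which is why the dual-view layout materially improves coverage on recessed, steep, or reflective regions.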
Another challenge was making the system fast to deploy in a real production environment. We wanted to avoid a situation where achieving good AI performance required collecting and labelling very large image datasets, because that would slow commissioning significantly. We designed the system so that AI models could be trained effectively with a limited number of images, supported by a broader inspection framework that reduces dependence on purely data-driven methods. 
VSD: Before this system, what were some of the major pain points in door assembly inspections?
AO: Previously, the main pain points came from the fact that many inspection approaches were too rigid for the reality of automotive door assembly. Mechanical sensors, for example, often struggled to cope with normal product tolerances, making it difficult to maintain reliable inspection performance without excessive adjustment.
Flexibility was another issue. Whenever a new door reference was introduced, the machines typically had to be reconfigured, sometimes extensively. That made changeovers slower and increased the engineering burden each time the product mix evolved.
In addition, inspection was often only partial, rather than covering the full panel consistently.
Finally, legacy smart camera systems were no longer keeping pace with the increasing complexity of the product. The result was a process that was harder to adapt, harder to scale, and less robust than manufacturers needed.
VSD: What are some unique technical features/advantages of the system?
AO: One advantage is that it can achieve useful performance with relatively few training samples. That is a significant benefit in production environments, because it reduces the effort needed to commission the system and makes it easier to bring new applications online quickly.
Another advantage is the full-panel digitalization in both 3D and 2D. Instead of limiting inspection to selected points or relying on a single sensing approach, the system creates a much richer digital representation of the entire door panel. That allows us to choose the most effective inspection method for each requirement.
The system also connects directly to the customer’s ERP environment, which makes the solution much more scalable in high-variant manufacturing environments.
VSD: How modular is the system for customers who already use other vision software?
AO: Different types of cameras can be added depending on the inspection requirements, but the inspection logic and vision application itself are implemented using Sentinel software.
This approach is intentional, because it allows us to maintain consistency across acquisition, inspection orchestration, AI workflow, traceability, and operator interaction. It also helps us reduce integration complexity and keep commissioning times under control.
At the same time, the system is designed to integrate well with the customer’s broader production environment.
VSD: Automotive doors can include more than 80 components per panel with many variants—how do you ensure robust performance and avoid overfitting to specific variants? What happens to performance when lighting or surface finishes change?
AO: The key is that we do not treat the problem as one large end-to-end AI task. Instead, we combine controlled image acquisition, structured inspection logic, 2D and 3D sensing, and AI only where it provides clear value. That architecture is important because it reduces the risk of overfitting to specific door variants.
We also organize the inspection around components, zones, and assembly expectations, rather than trying to learn every variant only as a whole finished product. This makes the system more scalable and more robust when new references are introduced, especially when they are built from already known components.
To improve generalization, the AI models are trained with synthetic data variation and augmentation, which helps them remain robust against realistic production differences. In parallel, the machine operates in a controlled lighting environment, so one of the biggest sources of variability is already minimized at acquisition level.
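The augmentation Oliveira mentions can be sketched as spawning several synthetic variants from each real sample, each with production-like variation in illumination, noise, and orientation. The specific transforms and ranges below are illustrative assumptions, not Sentinel's actual training pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray, n_variants: int = 8) -> list[np.ndarray]:
    """Generate synthetic training variants of one grayscale image."""
    variants = []
    for _ in range(n_variants):
        img = image.astype(np.float32)
        img *= rng.uniform(0.8, 1.2)               # illumination/gain jitter
        img += rng.normal(0.0, 2.0, img.shape)     # simulated sensor noise
        if rng.random() < 0.5:                     # mirrored fixture/handedness
            img = np.fliplr(img)
        variants.append(np.clip(img, 0, 255).astype(np.uint8))
    return variants

sample = rng.integers(0, 256, (64, 64), dtype=np.uint8)  # stand-in for a real image
dataset = augment(sample)
print(len(dataset))  # 8 synthetic variants from a single captured sample
```

Because the machine already controls lighting at acquisition, augmentation only has to cover the residual variation, which is part of why a limited number of real images can suffice.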
When lighting conditions or surface finishes change, performance is protected by that combination of controlled illumination, diffuse lighting, and multimodal inspection. Because the system uses both 2D and 3D information, it is less sensitive than a single-modality approach to changes in reflectivity or appearance.

Related: Next Gen Machine Vision: Anomaly Detection For High Speed, Hi Mix Production Environments

Related: How to Build an AI-Enabled Industrial Vision System

About the Author

Jim Tatum

Senior Editor

VSD Senior Editor Jim Tatum has more than 25 years’ experience in print and digital journalism, covering business/industry/economic development issues, regional and local government/regulatory issues, and more. In 2019, he transitioned from newspapers to business media full time, joining VSD in 2023.
