• Choosing the Right Lens for a Machine Vision Application

    The following is a Q&A with Edmund Optics’ Nicholas Sischka about choosing the right lens for your machine vision application.
    June 24, 2025
    6 min read

    Nicholas Sischka is the Director of Imaging Product Development at Edmund Optics’ Cherry Hill, NJ, USA office. Sischka specializes in optics for vision systems and supports imaging and machine vision customers with application knowledge and design assistance for modification or customization requirements. Additionally, Sischka serves as the Association for Advancing Automation NextGen Chair and teaches on the A3 Education Committee. He holds a BS in Optical Sciences and Engineering from the University of Arizona.

    Related: Podcast: Interview with Nick Sischka

    What is an important point to consider when choosing a lens for a machine vision application?

There's a really good rule of thumb that we like to use when developing machine vision lenses and when designing them into applications.

And that is, if you want to maximize your performance and minimize the price, you want to aim for a working distance to field of view ratio of about four to one.

    Related: Lens Specs Matter—Do Your Homework

What if you already have a solution designed, but it doesn’t fit that 4:1 ratio? Can’t you just decrease or increase the working distance to achieve the same goal?

You can go to a shorter working distance and maintain that same field of view, but you'll wind up needing a wider-angle lens, which is more difficult to design, requires more glass, and will inevitably cost you performance. You're also going to pay more money.

On the flip side, you might ask, well, why don't I go to, say, 8:1 on my working distance to field of view ratio? But you can still wind up paying more for the lens, because now the lens is so far away from the object that its optics need to grow just to collect enough light to see it. So it's not really an image quality problem; it's a light collection issue, and that's why that 4:1 ratio is so helpful for us.

    That's the single most important thing that I try to pay attention to—how close can we get to that ratio.
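As a back-of-the-envelope sketch of the rule of thumb described above, a few lines of Python can turn a required field of view into a starting-point working distance. The function name and the 4:1 default are illustrative only, not something from the interview:

```python
def suggested_working_distance(field_of_view_mm: float, ratio: float = 4.0) -> float:
    """Rule-of-thumb starting point: working distance of roughly 4x the field of view.

    `ratio` is the working-distance-to-field-of-view ratio; 4:1 is the
    sweet spot recommended in the interview (a hypothetical helper).
    """
    return ratio * field_of_view_mm

# A 100 mm field of view suggests starting near a 400 mm working distance.
print(suggested_working_distance(100.0))  # 400.0
```

Treat the result as a first guess to sanity-check a layout, not a substitute for a real lens selection.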

    What are some other challenges that need to be considered and addressed?

When it comes to choosing components, I think the lighting is the most difficult part. I've been doing this for about a decade and a half, and lighting is consistently the most challenging thing: can you or can't you resolve the object you're trying to look at, at the specific contrast you care about?

There are some cases that can get tricky, such as needing to see inside of things or around things, but for the most part, you're trying to see an object at a specific contrast. So you want to maximize the contrast on the object and the feature you care about, because it doesn't matter whether it looks good to us; what matters is whether the machine vision system can pick up that contrast. From the lens perspective, we want to make sure we're matched to the camera sensor we're using and that we're paying attention to everything holistically.

    Related: Fundamentals of Imaging Lenses

    What’s a common pitfall when it comes to choosing a lens for a machine vision application?

The single biggest pitfall I've seen is that full systems get developed and the lens and camera are often left until the very end. I realize there can be instances where machines are getting retrofitted, or there are specific package constraints and things like that, but when you artificially introduce envelope constraints into an imaging system, you're immediately putting boundary conditions around your price and performance.

    So, from a design standpoint, should you start with the lens first?

It does make a lot of sense to start with the lens. Given how high-performing today's camera sensors are, with incredible image quality relative to even what existed half a decade ago, the lens is now the bottleneck in vision system performance.

So focusing on the object you need to resolve, and the detail you need to resolve on it, will dictate what your field of view needs to be. Once you know that, it will inform the type of lens you need, which in turn helps inform the type of camera.
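The chain from smallest feature to field of view to camera can be sketched with a common sizing guideline: budget about two pixels across the smallest feature you must resolve (a Nyquist-style rule of thumb, not a figure from the interview). The helper below is a hypothetical illustration of that arithmetic:

```python
import math

def min_sensor_pixels(field_of_view_mm: float, smallest_feature_mm: float,
                      pixels_per_feature: float = 2.0) -> int:
    """Estimate the pixel count needed across one axis of the sensor.

    Assumes the common guideline of ~2 pixels spanning the smallest
    feature of interest (an illustrative helper, not the interviewee's
    exact method).
    """
    return math.ceil(pixels_per_feature * field_of_view_mm / smallest_feature_mm)

# Resolving 0.1 mm features over a 100 mm field of view needs
# roughly 2000 pixels across that axis.
print(min_sensor_pixels(100.0, 0.1))  # 2000
```

A number like this narrows the candidate sensors, which is exactly why picking the lens and camera together, as described below, tends to give a better-optimized pairing.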

Usually, I like to pick the lens and the camera at the same time, as opposed to just focusing on the lens. When a camera is arbitrarily chosen first, it makes our life easier as a lens manufacturer, because I just need to choose a lens that gets as close as possible for that particular sensor. But it also dramatically limits the available choices, and you're almost always going to make a non-optimized choice at that point.

    What are some other common pitfalls people can stumble over in the design process?

Making sure that everything designed into a system was actually intended for industrial use is really important. For example, there are all these sensors out there now that were designed for consumer electronics, sensors going into, say, smartphones. Those sensors can look really attractive at first because of their super tiny pixels and high megapixel counts. And generally, they're not all that expensive, because they're manufactured at consumer electronics volumes.

But we're not making millions of lenses for an iPhone; our manufacturing techniques are a lot different. All of those consumer product sensors have a very specific type of optics geometry that goes onto them, and so they usually have a specification for the chief ray angle. It's really steep, because they're designed to have the optics close to the sensor plane to meet package constraints within a consumer device. We can't hit those types of chief ray angles at the rear of a machine vision system, for example.

    So, if you had to give one over-arching piece of advice to someone who is designing a machine vision application about choosing a lens, what would you say?

Start thinking about the vision portion of your system as soon as you possibly can, try to stay close to that 4:1 ratio of working distance to field of view, and if possible, go monochromatic; you'll immediately get a 20% performance bump for free.


    About the Author

    Jim Tatum

    Senior Editor

VSD Senior Editor Jim Tatum has more than 25 years' experience in print and digital journalism, covering business, industry, and economic development issues, regional and local government and regulatory issues, and more. In 2019, he transitioned from newspapers to business media full time, joining VSD in 2023.
