Explained: Trifecta of lens f-stop, wavelength and Airy disc

In this blog we tackle a set of issues well known to experts. The topic is complex enough to be non-obvious, but easy enough to understand through this short tutorial. And it’s better learned via a no-cost article than through trial and error.

As an alternative to reading on, let us help you get the optics right for your application. Or read on and then let us help you anyway. Helping machine vision customers choose optimal components is what we do. We’ve staked our reputation on it.

Aperture size and F-stop

Most understand that the F-stop on a lens specifies the size of the aperture. Follow that last link to reveal the arithmetic calculations, if you like, but the key thing to keep in mind at the practical level is that F-stop values are inversely correlated with the size of the aperture. So a large F-number like f/8 indicates a narrow aperture, while a small F-number like f/1.4 corresponds to a large aperture. Some lens designs span a wider range of F-numbers than others, but the inverse correlation always applies.
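If you like to see the relationship in numbers, here is a minimal sketch using the standard definition of the F-number (N = f/D, so D = f/N), with a hypothetical 25mm lens chosen purely for illustration:

```python
# Minimal sketch: the F-number is the focal length divided by the effective
# aperture diameter (N = f / D), so the diameter is D = f / N.
def aperture_diameter_mm(focal_length_mm: float, f_number: float) -> float:
    return focal_length_mm / f_number

# Hypothetical 25mm lens, purely for illustration:
for n in (1.4, 2.8, 8.0):
    print(f"f/{n}: aperture diameter ≈ {aperture_diameter_mm(25.0, n):.1f} mm")
# f/1.4 -> ~17.9 mm, f/2.8 -> ~8.9 mm, f/8.0 -> ~3.1 mm
```

Same lens, same glass: the larger the F-number, the smaller the opening the light must pass through.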

Iris controls the aperture – Courtesy Edmund Optics

Maximizing contrast might seem to suggest a large aperture

For machine vision it’s always important to maximize contrast. The target object can only be discerned when it is sufficiently contrasted against the background or other objects. Effective lighting and lensing are crucial, in addition to a camera sensor that’s up to the task.

“Maximizing light” (without over-saturating) is often a challenge, unless one adds artificial light. That would tend to suggest using a large aperture to let more light pass while still keeping exposure time short enough to “freeze” motion or maximize frames per second.

So for the moment, let’s hold the thought that a large aperture sounds promising. Spoiler alert: we’ll soften that position in light of what follows.

Depth of Field – DoF

While a large aperture seems attractive so far, one argument against that is depth of field (DoF). In particular, the narrowest effective aperture maximizes depth of field, while the largest aperture minimizes DoF.

Correlation of aperture size and depth of field – Courtesy Edmund Optics

Depending on the lens design, the difference in DoF between largest vs. smallest aperture may vary from as little as a few millimeters to as great as many centimeters. Your applications knowledge will inform you how much wiggle room you’ve got on DoF.
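For readers who like to estimate rather than guess, here is a rough sketch using one common first-order DoF approximation, DoF ≈ 2·N·c·(m+1)/m², where N is the working F-number, c the permissible blur (circle of confusion, often taken as one pixel), and m the magnification. The pixel size and magnification below are assumptions chosen for illustration, so treat the outputs as ballpark figures rather than datasheet values:

```python
# Rough first-order depth-of-field estimate often used in machine vision:
#   DoF ≈ 2 * N * c * (m + 1) / m**2
# N = working f-number, c = permissible blur (circle of confusion),
# m = magnification. All inputs below are illustrative assumptions.
def depth_of_field_mm(f_number: float, blur_mm: float, magnification: float) -> float:
    return 2.0 * f_number * blur_mm * (magnification + 1.0) / magnification**2

c = 0.00345  # assume a 3.45 µm pixel as the allowable blur, expressed in mm
m = 0.25     # assumed magnification
for n in (1.4, 4.0, 8.0):
    print(f"f/{n}: DoF ≈ {depth_of_field_mm(n, c, m):.2f} mm")
# Stopping down from f/1.4 to f/8 grows the estimated DoF by about 5.7x.
```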

So what’s the sweet spot for aperture?

Barring further arguments to the contrary, the largest aperture that still provides sufficient depth of field is a good rule of thumb.

Where do diffraction limits and the Airy disc come into it?

Optics is a branch of physics. And just like absolute zero in the realm of temperature, Boyle’s law with respect to gases, etc., there are certain constraints and limits that apply to optics.

Whenever light passes through an aperture, diffraction occurs – the bending of waves around the edge of the aperture. The pattern from a ray of light that falls upon the sensor takes the form of a bright circular area surrounded by a series of weakening concentric rings. This is called the Airy disc. Without going into the math, the Airy disc is the smallest point to which a beam of light can be focused.
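If you do want a peek at the math, the Airy disc diameter is commonly approximated as 2.44 × wavelength × F-number. A quick sketch, using green light at 550 nm as an assumed example:

```python
# Airy disc diameter (out to the first dark ring) is commonly approximated as
#   d ≈ 2.44 * wavelength * f-number
def airy_disc_um(wavelength_nm: float, f_number: float) -> float:
    return 2.44 * (wavelength_nm / 1000.0) * f_number  # result in µm

for n in (1.4, 2.8, 8.0):
    print(f"f/{n}: Airy disc ≈ {airy_disc_um(550, n):.1f} µm at 550 nm")
# f/1.4 -> ~1.9 µm, f/2.8 -> ~3.8 µm, f/8.0 -> ~10.7 µm: at f/8 the spot
# is already several times larger than typical modern pixels.
```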

And while stopping down the aperture increases the DoF, our stated goal, it has the negative impact of increasing diffraction.

Diffraction limits

As the focused patterns from adjacent features you want to discern draw nearer to each other, they begin to overlap. This creates interference, which in turn reduces contrast.

Every lens, no matter how well it is designed and manufactured, has a diffraction limit: the maximum resolving power of the lens, expressed in line pairs per millimeter. And if the Airy discs produced by adjacent real-world features grow larger than the sensor’s pixels, the all-important contrast will not be achieved.
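As a ballpark, the cutoff frequency of an ideal, diffraction-limited lens (the spatial frequency at which contrast falls to zero) is commonly approximated as 1 / (wavelength × F-number). A quick sketch with an assumed 550 nm wavelength:

```python
# Diffraction-limited cutoff frequency, in line pairs per mm:
#   f_cutoff ≈ 1 / (wavelength * f-number), with wavelength in mm
def cutoff_lp_per_mm(wavelength_nm: float, f_number: float) -> float:
    wavelength_mm = wavelength_nm * 1e-6
    return 1.0 / (wavelength_mm * f_number)

for n in (2.8, 8.0, 16.0):
    print(f"f/{n}: cutoff ≈ {cutoff_lp_per_mm(550, n):.0f} lp/mm at 550 nm")
# f/2.8 -> ~649 lp/mm, f/8 -> ~227 lp/mm, f/16 -> ~114 lp/mm: stopping down
# buys depth of field at the cost of resolvable fine detail.
```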

And wavelength’s a factor too?

Indeed wavelength is also a contributor to contrast and the Airy disc. As beings who see, we tend to default to thinking of light as white light or daylight, which is a composite segment of the spectrum running from violet through blue, green, yellow, orange, and red, spanning roughly 380 nm to 780 nm. Below 380 nm we find ultraviolet light (UV) in the next segment of the spectrum. Above 780 nm the next segment is infrared (IR).

Monochrome light better than white light

An additional consideration relative to the Airy disc is that monochrome light is better than white light. When light passes through a lens, it refracts (bends) by an amount that depends on the wavelength. This is referred to as chromatic aberration.

Transverse and longitudinal chromatic aberration – Courtesy Edmund Optics

If a given point on your imaged object reflects or emits light at two or more wavelengths, the focal point of one wavelength might land on a different sensor pixel than the other, creating blur and ambiguity in how to resolve the point.

An easy way to completely overcome chromatic aberration is to use a single monochromatic wavelength! If your target object reflects or emits a given wavelength, to which your sensor is responsive, the lens will refract the light from a given point very precisely, with no wavelength-induced shifts.

The moral of the story

The takeaway is that the trifecta of aperture (F-stop), wavelength, and the Airy disc are interrelated: aperture and wavelength each have a bearing on the Airy disc, and one wants to choose and configure the optics and lighting to optimize it. This leads to effective application performance, a must-have. But it can also lead to cost savings, as lower-cost lenses, lighting, and sensors, optimally configured, may perform better than higher-cost components chosen without sufficient understanding of these principles.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about.

EO HPI + Fixed Focal Length Lenses

HPI+ Fixed Focal Length Lenses – Courtesy Edmund Optics

Front-loading the article by unpacking the acronyms:

EO = Edmund Optics, longstanding innovators in machine vision lensing

HP = High Performance

I = Denotes “instrumentation” – Streamlined mechanical designs and fixed apertures

+ = Targeted for larger 4th-gen SONY Pregius sensors: the 24.5MP 1.2” IMX530 and IMX540

Fixed Focal Length Lenses… ok no acronym to unpack there… but worth noting that fixed focal length lenses, with fewer moving parts, offer high performance with lower manufacturing costs. Which translates to a compelling value proposition.

With 18 members in the EO HPI+ Fixed Focal Length Lens family, it’s possible to get the optimal fit in focal length and F-stop. These industrial lenses are built for exceptional performance in demanding factory automation (FA) and machine vision environments. The locking focus and iris rings prevent accidental adjustments.

Contact us for a quote

SONY Pregius sensors – once more with feeling

While not the only player in the sensor space, SONY remains one of the most innovative and respected manufacturers. They regularly supersede their own prior releases through incremental and disruptive innovation. As we write this, there are four generations of SONY Pregius sensors. The 4th generation Pregius S captures up to 4x as much light as Sony’s own highly-praised 2nd generation Pregius from just a few years ago!

Surface- vs back-illuminated image sensors – courtesy SONY Semiconductor Solutions Corporation

24.5MP 1.2” SONY IMX530 and SONY IMX540

Consider the SONY IMX540 sensor for a moment. It’s designed into at least 17 different camera models carried by 1stVision, across three different camera manufacturers: Allied Vision, IDS Imaging, and JAI.

First few rows of 1stVision’s camera offerings using Sony IMX540 sensor

At almost 25MP, with 2.74µm square pixels, yet only a 1.2″ diagonal, it’s suited to the C-mount lens format. That’s a robust mount design that’s widely popular in machine vision, so adopters of cameras with this sensor and mount have a wide range of lenses from which to choose. That in turn offers a range of choices along the price/performance spectrum.
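As a quick sanity check on that 1.2″ figure, here is a short sketch assuming the commonly published IMX540 resolution of 5328 × 3040 pixels; if your datasheet lists slightly different numbers, substitute those:

```python
# Sanity-check the sensor dimensions from pixel count and pixel pitch.
# Assumed resolution: 5328 x 3040 at 2.74 µm (commonly published IMX540 figures).
import math

h_px, v_px, pitch_um = 5328, 3040, 2.74
width_mm  = h_px * pitch_um / 1000.0           # ~14.6 mm
height_mm = v_px * pitch_um / 1000.0           # ~8.3 mm
diag_mm   = math.hypot(width_mm, height_mm)    # ~16.8 mm
print(f"{width_mm:.1f} x {height_mm:.1f} mm, diagonal ≈ {diag_mm:.1f} mm")
# A ~16.8 mm diagonal corresponds to the 1.2" optical format quoted above.
```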


EO HPI+ FFL Lens Performance

Machine vision pros know that lens performance is often characterized by the modulation transfer function (MTF), the magnitude of the optical transfer function. The shape and position of the curve say a lot about lens quality and performance. It’s also useful when comparing lenses from different manufacturers – or even lenses from different product families by the same manufacturer.
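For the curious, each point on an MTF curve is simply the measured modulation (contrast) of a line-pair pattern at a given spatial frequency. A minimal sketch, with made-up intensity values purely for illustration:

```python
# Modulation (contrast) of an imaged line-pair pattern:
#   M = (I_max - I_min) / (I_max + I_min)
def modulation(i_max: float, i_min: float) -> float:
    return (i_max - i_min) / (i_max + i_min)

# Assumed pixel intensities across bright and dark bars of a test target:
print(modulation(200, 40))   # ~0.67 -> pattern is well resolved
print(modulation(130, 110))  # ~0.08 -> pattern is nearly washed out
```

Plot that modulation against spatial frequency (line pairs per millimeter) and you have the MTF curve.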

Here’s the MTF curve for one of the Edmund Optics lenses:

25mm, f/2.8: Identical to 29-278 – Courtesy Edmund Optics

That’s just a representative example. We’ve got the MTF curves for each lens… either on our website datasheets or on request.


Edmund Optics C-Series Fixed Focal Length SWIR Lenses

Ideal when paired with SONY IMX990 or SONY IMX991 sensors, Edmund Optics’ C-Series fixed focal length SWIR lenses support a 2.8µm pixel pitch, far smaller than classic SWIR pixel sizes in the 5 – 15µm range.

Fixed focal lengths help the lens designers achieve great performance while minimizing production costs due to fewer parts.

Industry-insider tip

Certain sensors marketed as Vis-SWIR (Visible plus SWIR spectrum coverage) are far less expensive than those traditionally designed for SWIR alone – and perform really well in the SWIR range (900 – 1700nm). The SONY IMX990 and SONY IMX991 are two such sensors, the former available in AVT Goldeye 130, and the latter in AVT Alvium 1800. So are SONY IMX992 and SONY IMX993, as featured in AVT Alvium cameras with diverse interface options.

So while certain users buy those sensors for applications that generate an image in both the visible and SWIR portions of the spectrum, MOST buyers are purchasing these sensors “just” to do SWIR applications in a cost-effective way.

It’s a bit like buying a dual-function toaster oven and never using one of the functions – but if it creates a valuable solution for you, who cares about the feature not used?

Edmund Optics saw the opportunity to create a lens series for the customers using the sensors referenced above to do dedicated SWIR applications. So they created their C-Series fixed focal length SWIR family, with 7 members, and focal lengths from 6 – 50mm.

Did we mention performance?

Recall that lens performance is typically expressed by the Modulation Transfer Function (MTF). Below is the MTF chart for the 6mm FL at 1.3µm wavelength, from the Edmund Optics C-Series fixed focal length lenses. All members of the family show comparable performance – see spec sheets for details.

MTF graph for the 6mm FL at 1.3µm wavelength – Courtesy Edmund Optics

Shorter focal lengths not always easy to find

With fixed focal lengths at 6mm, 8.5mm, 12mm, 16mm, 25mm, 35mm, and 50mm, knowledgeable customers may note that especially the shorter focal length offerings are not that common in the machine vision optical market.

Compact and cost-effective

As fixed focal length lenses, each member of this lens series needs only a focus adjustment – fine tuning – which is lockable against vibration slippage. They do NOT need the complexity of a varifocal lens. That means fewer glass elements and less metal, yielding a smaller form factor, handy if space is an issue.

It also means the lenses are less expensive to manufacture, a savings passed on to the user, who gets a cost-effective way to achieve good performance in the SWIR spectrum.

Built as a variation on another lens series

It’s worth noting this SWIR-optimized lens series piggybacks on Edmund Optics’ visible-spectrum C-Series fixed focal length lenses. The key difference is that the new series is optically coated for the SWIR spectrum. The benefit to the user is that Edmund Optics could spin a variant of an existing lens series, which is cost-effective for the customer as well.

Optimized for factory automation applications

Both the visible and SWIR versions of the C-Series lenses have been optimized with factory automation in mind, particularly with respect to working distance (WD), size, and cost.


Tips on selecting a telecentric lens

Why might I want a telecentric lens?

Metrology, when done optically, requires that an object’s representation be invariant to its distance and position in the field of view. Telecentric lenses deliver precisely that capability. Telecentric lenses only “pass” incoming light rays that are parallel to the optical axis of the lens. That’s helpful because the spacing between those parallel rays lets us measure objects without touching them.

Telecentric lens eliminates the parallax effect – Courtesy Edmund Optics

Parallax effect

Human vision and conventional lenses have angular fields of view. That can be very useful, especially for depth perception. Our ability to safely drive a car in traffic derives in no small part from not just identifying the presence of other vehicles and hazards, but also from gauging their relative nearness to our position. In that context parallax delivers perspective, and is an asset!

But with angular fields of view we can only guess at the size of objects. Sure, if we see a car and a railroad engine side by side, we might guess that the car is about 5 feet high and the railroad engine perhaps 15 or 16 feet. In metrology we want more precision than to the nearest foot! In detailed metrology such as precision manufacturing we want to differentiate to sub-millimeter accuracy. Telecentric lenses to the rescue!

Assorted telecentric lenses – Courtesy Edmund Optics

Telecentric Tutorial

Telecentric lenses only pass incoming light rays that are parallel to the optical axis of the lens. It’s not that the oblique rays don’t reach the outer edge of the telecentric lens. Rather, it’s about the optical design of the lens in terms of what it passes on through the other lens elements and onto the sensor focal plane.

Let’s get to an example. In the image immediately below, labeled “Setup”, we see a pair of cubes positioned with one forward of the other. This image was made with a conventional (entocentric) lens, whereby all three dimensions appear much the same as for human vision. It looks natural to us because that’s what we’re used to. And if we just wanted to count how many orange cubes are present, the lens used to make the setup image is probably good enough.

Courtesy Edmund Optics.

But suppose we want to measure the X and Y dimensions of the cubes, to see if they are within rigorous tolerance limits?

An object-space telecentric lens focuses the light without the perspective of distance. Below, the image on the left is the “straight on” view of the same cubes positioned as in “Setup” above, taken with a conventional lens. The forward cube appears larger, when in fact we know it to be exactly the same size.

The rightmost image below was made with a telecentric lens, which effectively collapses the Z dimension, while preserving X and Y. If measuring X and Y is your goal, without regard to Z, a telecentric lens may be what you need.

Courtesy Edmund Optics.
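To see why this matters for measurement, here is a toy model comparing the two lens types. The focal length, working distances, and 0.5x magnification are illustrative assumptions, not the specs of any particular lens:

```python
# Toy comparison: with a conventional (entocentric) lens the apparent size
# scales roughly as focal length / working distance (pinhole approximation);
# with an object-space telecentric lens the magnification is fixed by design.
def entocentric_image_mm(object_mm: float, focal_mm: float, distance_mm: float) -> float:
    return object_mm * focal_mm / distance_mm

def telecentric_image_mm(object_mm: float, magnification: float) -> float:
    return object_mm * magnification  # independent of working distance

cube_mm = 20.0
for wd in (100.0, 110.0):  # the rear cube sits 10 mm farther from the lens
    e = entocentric_image_mm(cube_mm, 25.0, wd)
    t = telecentric_image_mm(cube_mm, 0.5)
    print(f"WD {wd:.0f} mm: entocentric {e:.2f} mm, telecentric {t:.2f} mm")
# The entocentric image shrinks ~9% over those 10 mm; the telecentric image does not move.
```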

How to select a telecentric lens?

As with any engineering challenge, start by gathering your requirements. Let’s use an example to make it real.

Object of interest is the circled chip – Image courtesy Edmund Optics

Object size

What is your object size? What is the size of the surrounding area in which successive instances of the target object will appear? This will determine the Field of View (FOV). In the example above, the chip is 6mm long and 4mm wide, and from board to board it always presents within about 4mm of the same position. So we’ll specify a 12mm FOV to allow a little margin.

Pixels per feature

In theory, one might get away with just two pixels per feature. In practice it’s best to allow 4 pixels per feature. This helps to identify separate features by permitting space between features to appear in contrast.

Minimum feature size

The smallest feature we need to identify is the remaining critical variable to set up the geometry of the optical parameters and imaging array. For the current example, we want to detect features as small as 25µm. That 25µm feature might appear anywhere in our 12mm FOV.

Example production image

Before getting into the calculations, let’s take a look at an ideal production image we created after doing the math, and pairing a camera sensor with a suitable telecentric lens.

Production image of the logic chip – Courtesy Edmund Optics

The logic chip image above was obtained with an Edmund Optics SilverTL telecentric lens – in this case the 0.5X model. More on how we got to that lens choice below. The key point for now is “wow – what a sharp image!”. One can not only count the contacts, but knowing our geometry and optical design, we can also inspect them for length, width, and feature presence/absence using the contrast between the silver metallic components and the black-appearing board.

Resuming “how to choose a telecentric lens?”

So you’ve got an application in mind for which telecentric lens metrology looks promising. How do we take the requirement figures determined above and map them to a camera sensor and a corresponding telecentric lens?

Method 1: Ask us to figure it out for you.

It’s what we do. As North America’s largest stocking distributor, we represent multiple camera and lens manufacturers – and we know all the products. But we work for you, the customer, to get the best fit to your specific application requirements.

Click to contact
Give us some brief idea of your application and we will contact you to discuss camera options.

Method 2: Take out your own appendix

Let’s define a few more terms, do a little math, and describe a “fitting” process. Please take a moment to review the terms defined in the following graphic, as we’ll refer to those terms and a couple of the formulas shortly.

Telecentric lens terms and formulas – Courtesy Edmund Optics

For the chip inspection application we’re discussing, we’ve established the three required variables:

H = FOV = 12mm

p = # pixels per feature = 4

µ = minimum feature size = 25µm

Let’s crank up the formulas indicated and get to the finish line!

Determine the required array size (image sensor)

Array size formula for the chip inspection example – Courtesy Edmund Optics

So we need roughly 1900 pixels horizontally. As with lens selection, unless one designs a custom component, choosing an off-the-shelf part that’s close enough is usually the reasonable thing to do.
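Here is the same arithmetic spelled out, using the three requirement values established above:

```python
# Required horizontal pixel count from FOV, minimum feature size,
# and pixels per feature.
fov_mm         = 12.0
px_per_feature = 4
min_feature_um = 25.0

features_across = fov_mm * 1000.0 / min_feature_um  # 480 resolvable features
pixels_needed   = features_across * px_per_feature  # 1920 pixels
print(f"{features_across:.0f} features -> {pixels_needed:.0f} pixels horizontally")
```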

Reviewing a catalog of candidate area scan cameras with horizontal pixel counts around 1900, we find Allied Vision Technologies’ (AVT) Manta G-319B, where G indicates a GigE Vision interface and B means black-and-white, as in monochrome (vs. the C model, which would be color). This camera uses a sensor with 2064 pixels in the horizontal dimension, so that’s a pretty close fit to our 1920 calculation.

Determine horizontal size of the sensor

H’ is the horizontal dimension of the sensor – Courtesy Edmund Optics

Per the Manta G-319B specs, each pixel is 3.45µm wide, so 2064 × 3.45µm ≈ 7.1mm sensor width.

Determine magnification requirements

The last formula tells us the magnification factor to fit the values for the other variables:

Magnification = sensor width / FOV – Courtesy Edmund Optics
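Plugging in the sensor width and FOV from above:

```python
# Required magnification = sensor width / field of view
sensor_width_mm = 7.1   # Manta G-319B width calculated above
fov_mm = 12.0
print(f"required magnification ≈ {sensor_width_mm / fov_mm:.2f}x")  # ~0.59x
```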

Choose a best-fit telecentric lens

Back to the catalog. Consider the Edmund Optics SilverTL Series. These C-mount lenses work with 1/2″, 2/3″, and 1/1.8″ sensors, and with pixels as small as 2.8µm, so that’s a promising fit for the 1/1.8″, 3.45µm-pixel sensor found in the Manta G-319B. Scrolling down the SilverTL Series specs, we land on the 0.50X SilverTL entry:

Some members of the SilverTL telecentric lens series – Courtesy Edmund Optics

The 0.5x magnification is not a perfect fit to the 0.59x calculated value. Likewise the 14.4mm FOV is slightly larger than the 12mm calculated FOV. But for high-performance ready-made lenses, this is a very close fit – and should perform well for this application.
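As a quick cross-check of that fit, with this particular sensor the 0.5x lens delivers roughly:

```python
# Realized FOV with the 0.5x lens and the ~7.1 mm wide sensor; the lens
# datasheet's 14.4 mm figure reflects the full sensor format it supports.
print(f"realized FOV ≈ {7.1 / 0.5:.1f} mm")  # ~14.2 mm vs. the 12 mm requirement
```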

Optics fitting is part science and part experience – and of course one can “send in samples” or “test drive” a lens to validate the fit. Take advantage of our experience in helping customers match application requirements to lens and camera selection, as well as lighting, cabling, software, and other components.
