Explained: Trifecta of lens f-stop, wavelength and Airy disc

In this blog we tackle a set of issues well-known to experts. It’s complex enough to be non-obvious, but easy enough to understand through this short tutorial. And better to learn via a no-cost article rather than through trial and error.

As an alternative to reading on, let us help you get the optics right for your application. Or read on and then let us help you anyway. Helping machine vision customers choose optimal components is what we do. We’ve staked our reputation on it.

Aperture size and F-stop

Most people understand that the F-stop on a lens specifies the size of the aperture. Follow that last link to reveal the arithmetic, if you like, but the key thing to keep in mind at the practical level is that F-stop values are inversely correlated with the size of the aperture. So a large F-number like f/8 indicates a narrow aperture, while a small F-number like f/1.4 corresponds to a large aperture. Some lens designs span a wider range of F-numbers than others, but the inverse correlation always applies.

Iris controls the aperture – Courtesy Edmund Optics
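For the numerically inclined, here is a minimal sketch of that relationship: the F-number is the focal length divided by the effective aperture diameter, so a bigger F-number means a smaller opening. The 25 mm focal length below is just an example value, not a recommendation.

```python
# F-number N = focal_length / aperture_diameter, so diameter = focal_length / N.
# Illustrative only; 25 mm is an arbitrary example focal length.
focal_length_mm = 25.0

for n in (1.4, 2, 2.8, 4, 5.6, 8, 11, 16):
    print(f"f/{n}: aperture diameter = {focal_length_mm / n:.1f} mm")
# f/1.4 -> ~17.9 mm (wide open); f/16 -> ~1.6 mm (stopped down)
```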

Maximizing contrast might seem to suggest a large aperture

For machine vision it’s always important to maximize contrast. The target object can only be discerned when it is sufficiently contrasted against the background or other objects. Effective lighting and lensing are crucial, in addition to a camera sensor that’s up to the task.

“Maximizing light” (without over-saturating) is often a challenge, unless one adds artificial light. That would tend to suggest using a large aperture to let more light pass while still keeping exposure time short enough to “freeze” motion or maximize frames per second.
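To make “short enough to freeze motion” concrete, here is a rough back-of-the-envelope calculation: keep motion blur under about one pixel by limiting exposure time to the object-space pixel size divided by the object speed. The field of view, resolution, and speed below are example numbers, not recommendations.

```python
# Rough motion-blur budget: exposure <= (object-space pixel size) / (object speed)
fov_width_mm = 100.0       # horizontal field of view (example)
image_width_px = 1920      # sensor columns (example)
object_speed_mm_s = 500.0  # how fast the part moves through the scene (example)

pixel_mm = fov_width_mm / image_width_px          # ~0.052 mm per pixel in object space
max_exposure_s = pixel_mm / object_speed_mm_s     # ~104 microseconds
print(f"Max exposure for <1 px blur: {max_exposure_s * 1e6:.0f} us")
```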

So for the moment, let’s hold the thought that a large aperture sounds promising. Spoiler alert: we’ll soften our position as further considerations come into play.

Depth of Field – DoF

While a large aperture seems attractive so far, one argument against that is depth of field (DoF). In particular, the narrowest effective aperture maximizes depth of field, while the largest aperture minimizes DoF.

Correlation of aperture size and depth of field – Courtesy Edmund Optics

Depending on the lens design, the difference in DoF between the largest and smallest aperture may vary from as little as a few millimeters to as much as many centimeters. Your knowledge of the application will tell you how much wiggle room you’ve got on DoF.
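For a feel of how strongly the F-number drives DoF, here is a hedged sketch using the common thin-lens approximation DoF ≈ 2·N·c·s²/f², valid when the working distance is much shorter than the hyperfocal distance. The focal length, working distance, and circle of confusion below are example values only.

```python
# Approximate total depth of field: DoF ~ 2 * N * c * s^2 / f^2
# N = f-number, c = circle of confusion, s = working distance, f = focal length.
focal_mm = 25.0       # example lens focal length
distance_mm = 500.0   # example working distance
coc_mm = 0.01         # example "blur budget" of ~10 um (roughly pixel-scale)

for n in (1.4, 2.8, 5.6, 11):
    dof_mm = 2 * n * coc_mm * distance_mm**2 / focal_mm**2
    print(f"f/{n}: total DoF ~ {dof_mm:.0f} mm")
# roughly 11 mm wide open at f/1.4 vs. 88 mm stopped down to f/11
```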

So what’s the sweet spot for aperture?

Barring further arguments to the contrary, the largest aperture that still provides sufficient depth of field is a good rule of thumb.

Where do diffraction limits and the Airy disc come into it?

Optics is a branch of physics. And just like absolute zero in the realm of temperature, Boyle’s law with respect to gases, etc., there are certain constraints and limits that apply to optics.

Whenever light passes through an aperture, diffraction occurs – the bending of waves around the edge of the aperture. The pattern from a ray of light that falls upon the sensor takes the form of a bright circular area surrounded by a series of weakening concentric rings. This is called the Airy disk. Without going into the math, the Airy disk is the smallest point to which a beam of light can be focused.

And while stopping down the aperture increases the DoF, our stated goal, it has the negative impact of increasing diffraction.

Diffraction limits

As the focused patterns from adjacent features – the details in your application that you want to discern – move closer together, they start to overlap. This creates interference, which in turn reduces contrast.

Every lens, no matter how well it is designed and manufactured, has a diffraction limit – the maximum resolving power of the lens, expressed in line pairs per millimeter. And if the Airy disk patterns from adjacent real-world features grow larger than the sensor’s pixels, those features can no longer be resolved as separate points, and the all-important contrast will not be achieved.
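To put numbers on this, the Airy disc diameter (out to the first dark ring) is approximately 2.44 × wavelength × F-number, and the diffraction cutoff – the spatial frequency where contrast falls to zero – is approximately 1/(wavelength × F-number). The sketch below assumes 550 nm light and a 3.45 µm pixel purely as example values.

```python
# Airy disc diameter ~ 2.44 * wavelength * N; cutoff frequency ~ 1 / (wavelength * N).
wavelength_um = 0.55   # 550 nm mid-spectrum light (example)
pixel_um = 3.45        # a common machine vision pixel size (example)

for n in (2.8, 5.6, 8, 11):
    airy_um = 2.44 * wavelength_um * n
    cutoff_lp_mm = 1000.0 / (wavelength_um * n)   # line pairs per mm at zero contrast
    note = "larger than a pixel" if airy_um > pixel_um else "within one pixel"
    print(f"f/{n}: Airy disc ~{airy_um:.1f} um ({note}), cutoff ~{cutoff_lp_mm:.0f} lp/mm")
```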

And wavelength’s a factor too?

Indeed wavelength is also a contributor to contrast and the Airy disc. As sighted beings, we tend to default to thinking of light as white light or daylight, which is really a composite segment of the spectrum spanning violet, indigo, blue, green, yellow, orange, and red – from about 380 nm to 780 nm. Below 380 nm we find ultraviolet light (UV) in the next segment of the spectrum. Above 780 nm the next segment is infrared (IR).
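The same Airy disc estimate from above also shows why wavelength matters: at a fixed aperture, shorter (bluer) wavelengths produce a smaller disc than longer (redder) ones. The f/5.6 aperture below is just an example.

```python
# Airy disc size scales linearly with wavelength at a fixed f-number.
f_number = 5.6
for name, wavelength_um in (("blue 450 nm", 0.45), ("green 550 nm", 0.55), ("red 650 nm", 0.65)):
    print(f"{name}: Airy disc ~ {2.44 * wavelength_um * f_number:.1f} um")
# blue ~6.1 um, green ~7.5 um, red ~8.9 um
```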

Monochrome light better than white light

An additional consideration relative to the Airy disc is that monochrome light is better than white light. When light passes through a lens, it refracts (bends) by a different amount depending on its wavelength. This is referred to as chromatic aberration.

Transverse and longitudinal chromatic aberration – Courtesy Edmund Optics

If a given point on your imaged object reflects or emits light at two or more wavelengths, the focal point of one wavelength might land in a different sensor pixel than the other, creating blur and ambiguity in how to resolve the point.

An easy way to completely overcome chromatic aberration is to use a single monochromatic wavelength! If your target object reflects or emits a given wavelength, to which your sensor is responsive, the lens will refract the light from a given point very precisely, with no wavelength-induced shifts.

The moral of the story

The takeaway is that aperture (F-stop) and wavelength each have a bearing on the Airy disc – the trifecta named in the title – and that one wants to choose and configure the optics and lighting to keep the Airy disc appropriately small. This leads to effective application performance – a must-have. But it can also lead to cost savings, as lower-cost lenses, lighting, and sensors, optimally configured, may perform better than higher-cost components chosen without sufficient understanding of these principles.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about.

FPD-Link III vs GMSL2 vs CSI-2 vs USB considerations for deployment

New interface options arrive so frequently that trying to keep up can feel like drinking water from a fire hose. While data transfer rates are often the first characteristic identified for each interface, it’s important to also note distance capabilities, power requirements, EMI reduction, and cost.

Which interfaces are we talking about here?

This piece is NOT about GigE Vision or Camera Link. Those are both great interfaces suitable for medium to long-haul distances, are well understood in the industry, and don’t require further explanation here.

We’re talking about embedded and short-haul interface considerations

Before we define and compare the interfaces, what’s the motivation? Declining component costs and rising performance are driving innovative vision applications such as driver assistance cameras and other embedded vision systems. There is “crossover” from formerly specialized technologies into machine vision, with new camera families and capabilities, and it’s worth understanding the options.

Alvium camera with FPD-Link or GMSL interface – Courtesy Allied Vision Technologies

How shall we get a handle on all this?

Each interface has standards committees, manufacturers, volumes of documentation, conferences, and catalogs behind it. One could go deep on any of this. But this is meant to be an introduction and overview, so we take the following approach.

  • Let’s identify each of the 4 interfaces by name, acronym, and a few characteristics
  • While some of the links jump to a specific standard’s full evolution (e.g. FPD-Link Gen 1, 2, and 3), per the blog title it’s the current generations as of Fall 2024 that are compelling for machine vision applications: CSI-2, GMSL2, and FPD-Link III
  • Then we compare and contrast, with a focus on rules of thumb and practical guidance

If at any point you’ve had enough reading and prefer to just talk it through, contact us.

FPD-Link III – Flat Panel Display Link

A free and open standard, FPD-Link has classically been used to connect a graphics processing unit (GPU) to a laptop screen, LCD TV, or similar display.

FPD-Link automotive applications schematic – Courtesy Texas Instruments

FPD-Link has subsequently become widely adopted in the automotive industry, for backup cameras, navigation systems, and driver-assistance systems. FPD-Link exceeds the automotive standards for temperature ranges and electrical transients, making it attractive for harsh environments. That’s why it’s interesting for embedded machine vision too.

GMSL2 – Gigabit Multimedia Serial Link

GMSL – Courtesy Analog Devices

GMSL is widely used for video distribution in cars. It is an asymmetric full-duplex technology: it is designed to move large volumes of data downstream and smaller volumes upstream, while also carrying power and control data bi-directionally. Cable length can be up to 15m.

CSI-2 – Camera Serial Interface (Gen. 2)

CSI-2 registered logo – Courtesy MIPI Alliance

As the Mobile Industry Processor Interface (MIPI) standard for communications between a camera and a host processor, CSI-2 is the sweet spot among the CSI standards for embedded camera applications. CSI-2 is attractive for its low power requirements and low electromagnetic interference (EMI). Cable length is limited to about 0.5m between camera and processor.

USB – USB3 Vision

USB3 Vision registered logo – Courtesy Association for Advancing Automation

USB3 Vision is an imaging standard for industrial cameras, built on top of USB 3.0. USB3 Vision has the same plug-and-play characteristics as GigE Vision, including power over the cable and GenICam compliance. Passive cable lengths are supported up to 5m (greater distances with active cables).

Compare and contrast

In the spirit of keeping this piece blog-length, in this compare-and-contrast segment we call out some highlights and rules of thumb. That, together with engaging us in dialogue, may well be enough guidance to help you find the right interface for your application. Our business is based upon adding value through our deep knowledge of machine vision cameras, interfaces, software, cables, lighting, lensing, and applications.

CABLE LENGTHS COMPARED(*):

  • CSI-2 is limited to 0.5m
  • USB3 Vision passive cables to 5m
  • FPD-Link distances may be up to 10m
  • GMSL cables may be up to 15m

(*) The above guidance is rule-of-thumb. There can be variances between manufacturers, system setup, and intended use, so check with us for an overall design consultation. There is no cost to you – our sales engineers are engineers first and foremost.

BANDWIDTH COMPARED(#):

  • USB3 to 3.6 Gb/sec
  • FPD-Link to 4.26 Gb/sec
  • GMSL to 6 Gb/sec
  • CSI-2 to 10 Gb/sec

(#) Bandwidth can also vary by manufacturer and configuration, especially for MIPI and SerDes (Serializer/Deserializer) links, and per chipset choices. Check with us for details before finalizing your choices.
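As a quick sanity check before choosing an interface, estimate the raw throughput your camera will generate and compare it to the rule-of-thumb ceilings above, leaving headroom for protocol overhead. The resolution, frame rate, and bit depth below are example numbers.

```python
# Required raw throughput = width * height * frames/sec * bits/pixel
width, height = 1920, 1200   # pixels (example sensor)
fps = 60                     # frames per second (example)
bits_per_pixel = 8           # 8-bit mono (example)

required_gbps = width * height * fps * bits_per_pixel / 1e9
print(f"Raw pixel data: {required_gbps:.2f} Gb/s")   # ~1.11 Gb/s

# Compare against the approximate ceilings quoted in this post, with ~20% margin for overhead.
for name, ceiling in (("USB3", 3.6), ("FPD-Link III", 4.26), ("GMSL2", 6.0), ("CSI-2", 10.0)):
    print(f"{name:12s}: {'fits' if required_gbps < 0.8 * ceiling else 'too tight'}")
```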

RULES OF THUMB:

  • CSI-2 often ideal if you are building your own instrument(s) with short cable length
  • USB3 is also good for building one’s own instruments when longer distances are needed
  • FPD-Link has great EMI characteristics
  • GMSL is also a good choice for EMI performance
  • If torn between FPD-Link vs. GMSL, note that there are more devices in the GMSL universe, so that might skew towards easier sourcing for other components
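If it helps to see those rules of thumb side by side, here is a toy selector that encodes the cable-length and bandwidth figures from this post. It is a first-pass sketch only; real designs depend on chipset, cabling, and environment, so treat the numbers as approximate.

```python
# Rule-of-thumb limits from this post: (max cable length in m, approx. bandwidth in Gb/s)
INTERFACES = {
    "CSI-2":        (0.5, 10.0),
    "USB3 Vision":  (5.0, 3.6),
    "FPD-Link III": (10.0, 4.26),
    "GMSL2":        (15.0, 6.0),
}

def candidate_interfaces(cable_m, required_gbps):
    """Return interfaces whose rule-of-thumb limits cover the request."""
    return [name for name, (max_len, max_bw) in INTERFACES.items()
            if cable_m <= max_len and required_gbps <= max_bw]

print(candidate_interfaces(cable_m=8, required_gbps=2.0))  # ['FPD-Link III', 'GMSL2']
```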


Artificial intelligence in machine vision – today

This is not some blue-sky puff piece about how AI may one day be better / faster / cheaper at doing almost anything at least in certain domains of expertise. This is about how AI is already better / faster / cheaper at doing certain things in the field of machine vision – today.

Classification of screw threads via AI – Courtesy Teledyne DALSA

Conventional machine vision

There are classical machine vision tools and methods, like edge detection, for which AI has nothing new to add. If the edge detection algorithm is working fine as programmed in your vision software, who needs AI? If it ain’t broke, don’t fix it. Presence / absence detection, 3D height calculation, and many other imaging techniques work just fine without AI. Fair enough.

From image processing to image recognition

As any branch of human activity evolves, the fundamental building blocks serve as foundations for higher-order operations that bring more value. Civil engineers build bridges, confident the underlying physics and materials science lets them choose among arch, suspension, cantilever, or cable-stayed designs.

So too with machine vision. As the field matures, value-added applications can be created by moving up the chunking level. The low-level tools still include edge-detection, for example, but we’d like to create application-level capabilities that solve problems without us having to tediously program up from the feature-detection level.

Traditional methods (left) vs. AI classification (right) – Courtesy Teledyne DALSA
Traditional Machine Vision Tools:
  • Can’t discern surface damage vs. water droplets
  • Are challenged by shading and perspective changes

AI Classification Algorithm:
  • Ignores water droplets
  • Invariant to surface changes and perspective
For the application images above, AI works better than traditional methods – Courtesy Teledyne DALSA

Briefly in the human cognition realm

Let’s tee this up with a scenario from human image recognition. Suppose you are driving your car along a quiet residential street. Up ahead you see a child run from a yard, across the sidewalk, and into the street.

The rods and cones in your retina, your visual cortex, and your brain may well use edge detection to process contrasting image segments, arriving at “biped mammal – child” and on to evaluating risk and hitting the brakes. But that isn’t how we usually talk about defensive driving. We just think in terms of accident avoidance, situational awareness, and braking or swerving – at a very high level.

Applications that behave intelligently

That’s how we increasingly would like our imaging applications to behave – intelligently and at a high level. We’re not claiming it’s “human equivalent” intelligence, or that the AI method is the same as the human method. All we’re saying is that AI, when well-managed and tested, has become a branch of engineering that can deliver effective results.

So as autonomous vehicles come to market, of course we want to be sure sufficient testing and certification is completed, as a matter of safety. But whether the safe-driving outcome is based on “AI” or “vision engineering”, or a melding of the two, what matters is the continuous sequence of system outputs like “increase following distance”, “swerve left 30 degrees”, and “brake hard”.

Neural Networks

One branch of AI, neural networks, has proven effective in many “recognition” and categorization applications. Is the thing being imaged an example of what we’re looking for, or can it be dismissed? If it is the sort of thing we’re looking for, is it of sub-type x, y, or z? “Good” item – retain. “Bad” item – reject. You get the idea.

From training to inference

With neural networks, instead of programming algorithms at a granular feature analysis level, one trains the network. Training may include showing “good” vs. “bad” images – without having to articulate what makes them good or bad – and letting the network infer the essential characteristics. In fact it’s sometimes possible to train only with “good” examples – in which case anomaly detection flags production images that deviate from the trained pool of good ones.

Deep Neural Network (DNN) example – Courtesy Teledyne DALSA
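For readers who do come from a coding background, the sketch below shows the same train-on-labeled-images idea in generic PyTorch/torchvision code. To be clear, this is not Astrocyte code and not how Astrocyte is used (Astrocyte is a no-code GUI); the folder layout and training settings are hypothetical examples.

```python
# Generic "good vs. bad" image classification sketch (NOT Teledyne DALSA Astrocyte).
# Assumes a hypothetical folder layout: dataset/good/*.png and dataset/bad/*.png
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("dataset", transform=tfm)   # two classes: bad, good
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

# Start from a pretrained backbone; retrain the final layer for 2 classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                      # a handful of epochs for a small demo set
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()

model.eval()   # then run new production images through model(...) for inference
```

The point of a tool like Astrocyte is precisely that you do not have to write or maintain code like this.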

Enough theory – what products actually do this?

Teledyne DALSA Astrocyte software creates a deep neural network to perform a desired task. More accurately – Astrocyte provides a graphical user interface (GUI) and a neural network framework, such that an application-specific neural network can be developed by training it on sample images. With a suitable collection of images, Teledyne DALSA Astrocyte can create an effective AI model in under 10 minutes!

Gather images, Train the network, Deploy – Courtesy Teledyne DALSA

Mix and match tools

In the diagram above, we show an “all DALSA” tools view, for those who may already have expertise in either Sapera or Sherlock SDKs. But one can mix and match. Images may alternatively be acquired with third party tools – paid or open source. And one may not need rules-based processing beyond the neural network. Astrocyte builds the neural network at the heart of the application.

Contact us

User-friendly AI

The key value proposition with Teledyne DALSA Astrocyte is that it’s user-friendly AI. The GUI used to configure the training and to validate the model requires no programming. And one doesn’t need special training in AI. Sure, it’s worth reading about the deep learning architectures supported. They include: Classification, Anomaly Detection, Object Detection, and Segmentation. And you’ll want to understand how the training and validation work. It’s powerful – it’s built by Teledyne DALSA’s software engineers standing on the shoulders of neural network researchers – but you don’t have to be a rocket scientist to add value in your field of work.


Machine vision lights as important as sensors and optics

Lighting matters as much as, or more than, camera (sensor) selection and optics (lensing). A sensor and lens that are “good enough”, when used with good lighting, are often all one needs. Conversely, a superior sensor and lens, with poor lighting, can underperform. Read further for clear examples of why machine vision lights are as important as sensors and optics!

Assorted white and color LED lights – courtesy of Advanced Illumination

Why is lighting so important? Contrast is essential for human vision and machine vision alike. Nighttime hiking isn’t very popular – for a reason – it’s not safe and it’s no fun if one can’t see rocks, roots, or vistas. In machine vision, for the software to interpret the image, one first has to obtain a good image. And a good image is one with maximum contrast – such that pixels corresponding to real-world features are saturated, unsaturated, or “in between”, with the widest usable spread of intensities.

Only with contrast can one detect edges, identify features, and effectively interpret an image. Choosing a camera with a good sensor is important. So is an appropriately matched lens. But just as important is good lighting, well-aligned – to set up your application for success.
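When comparing two lighting setups on the bench, it can help to put a number on “contrast”. One hedged option is the Michelson contrast of the captured grayscale image; the snippet below uses synthetic arrays as stand-ins for real captures, and other metrics (RMS contrast, local edge contrast) may suit your application better.

```python
# Michelson contrast: (Imax - Imin) / (Imax + Imin), with percentiles to ignore outliers.
import numpy as np

def michelson_contrast(img):
    lo, hi = np.percentile(img.astype(np.float64), [1, 99])
    return (hi - lo) / (hi + lo) if (hi + lo) > 0 else 0.0

# Synthetic stand-ins for two captures of the same target (0-255 grayscale):
flat_lighting = np.random.normal(120, 10, (480, 640)).clip(0, 255)     # washed out
good_lighting = np.random.choice([30, 220], (480, 640)).astype(float)  # strong light/dark split

print(f"flat lighting: {michelson_contrast(flat_lighting):.2f}")   # low, ~0.2
print(f"good lighting: {michelson_contrast(good_lighting):.2f}")   # high, ~0.76
```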

What’s the best light source? Unless you can count on the sun or ambient lighting (or have no other option), you may choose from several types of light:

  • Fluorescent
  • Quartz Halogen – Fiber Optics
  • LED – Light Emitting Diode
  • Metal Halide (Mercury)
  • Xenon (Strobe)
Courtesy of Advanced Illumination

By far the most popular light source is LED, as it is affordable, available in diverse wavelengths and shapes (bar lights, ring lights, etc.), stable, long-life, and checks most of the key boxes.

The other light types each have their place, but those places are more specialized. For comprehensive treatment of the topics summarized here, see “A Practical Guide to Machine Vision Lighting” in our Knowledgebase, courtesy of Advanced Illumination.

Download whitepaper

Lighting geometry and techniques: There’s a tendency among newcomers to machine vision to underestimate lighting design for an application. Buying an LED and lighting up the target may fill up sensor pixel wells, but not all images are equally useful. Consider images (b) and (c) below – the bar code in (c) shows high contrast between the black bars and the white field. Image (b) is somewhere between unusable and marginally usable, with reflection obscuring portions of the target, and portions of the (should-be) white field appearing more grey than white.

Courtesy of Advanced Illumination

As shown in diagram (a) of Figure 22 above, understanding bright field vs. dark field concepts, as well as the specular qualities of the surface being imaged, can lead to radically different outcomes. A little bit of lighting theory, together with some experimentation and tuning, is well worth the effort.

Now for a more complex example – below we could characterize images (a), (b), (c) and (d) as poor, marginal, good, and superior, respectively. Component cost is invariant, but the outcomes sure are different!

Courtesy of Advanced Illumination

To learn more, download the whitepaper or call us at (978) 474-0044.

Contact us

Color light – above we showed monochrome examples – black and white… and grey levels in between. Many machine vision applications are in fact best addressed in the monochrome space, with no benefit from using color. But understanding which wavelengths a surface will reflect or absorb is crucial to optimizing outcomes – regardless of whether working in monochrome, color, infrared (IR), or ultraviolet (UV).

Beating the same drum throughout, it’s about maximizing contrast. Consider the color wheel shown below. The most contrast is generated by taking advantage of opposing colors on the wheel. For example, green light best suppresses red reflection.

Courtesy of Advanced Illumination
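A toy calculation illustrates the rule. Suppose a red feature sits on a white background; the reflectance values below are invented purely for the sketch. Under red light the feature and background both reflect strongly (little contrast), while under opposing green light the red feature goes dark against the bright background.

```python
# Invented reflectance fractions, purely to illustrate the color-wheel rule.
reflectance = {
    "red feature":      {"red": 0.85, "green": 0.10},
    "white background": {"red": 0.90, "green": 0.90},
}

for light in ("red", "green"):
    feat = reflectance["red feature"][light]
    bg = reflectance["white background"][light]
    contrast = (bg - feat) / (bg + feat)   # Michelson contrast, feature vs. background
    print(f"{light} light: contrast = {contrast:.2f}")
# red light:   0.03  (red feature nearly vanishes against white)
# green light: 0.80  (red feature stands out dark against white)
```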

One can use actual color light sources, or white light together with well-chosen wavelength “pass” or “block” filters. This is nicely illustrated in Fig. 36 below. Take a moment to correlate the configurations used for each of images (a) – (f) with the color wheel above. Depending on one’s application goals, sometimes there are several possible combinations of sensor, lighting, and filters that achieve the desired result.

Courtesy of Advanced Illumination

Filters – can help. Consider images (a) and (b) in Fig. 63 below. The same plastic 6-pack holder is shown in both images, but only image (b) reveals stress fields that, were the product to be shipped, might cause dropped product and reduced consumer confidence in one’s brand. By designing in polarizing filters, this can be the basis for a value-added application, automating quality control in a way that might not otherwise have been achievable – or not at such a low cost.

Courtesy of Advanced Illumination

For more comprehensive treatment of filter applications, see either or both Knowledgebase documents:


Powering the lights – should they be voltage-driven or current-driven? How are LEDs powered? When should one strobe vs. run in continuous mode? How does one integrate the light controller with the camera and software? These are all worth understanding – or worth having someone on your team, whether in-house or a trusted partner, who does.

For comprehensive treatment of the topics summarized here, see Advanced Illumination’s “A Practical Guide to Machine Vision Lighting” in our Knowledgebase:

Download whitepaper

This blog is intended to whet the appetite for interest in lighting – but it only skims the surface. Machine vision lights are as important as sensors and optics. Please download the guide linked just above to deepen your knowledge. Or if you want help with a specific application, you may draw on the experience of our sales engineers and trusted partners.
