How to calculate line rate on a line scan camera based on conveyor speed

Unless you calculate and set the line rate correctly, you risk blurred images and sub-optimal performance. You also risk purchasing a line scan camera that’s not up to the task – or one that’s overkill and costs more than you needed to spend.

Line Scan concept – Courtesy Teledyne DALSA

Optional line scan review or introduction

Skip to the next section if you know line scan concepts already. Otherwise…

Perhaps you know about area scan imaging, where a 2D image is generated with a global shutter, exposing all pixels on a 2D sensor concurrently. And you’d like to understand line scan imaging by way of comparing it to area scan. See our blog What is the difference between an Area Scan and a Line Scan Camera?

30 minute informative overview of Line Scan imaging – Courtesy Teledyne DALSA

Maybe you prefer seeing a specific high-end product overview and application suggestions, such as the Teledyne DALSA 16k TDI line scan camera with 1MHz line rate. Or a view to tens of different line scan models, varying not only by manufacturer, but by sensor size and resolution, interface, and whether monochrome or color.

Either you recall how to determine resolution requirements in terms of pixel size relative to defect size, or you’ve chased the link in this sentence for a tutorial. So we’ll keep this blog as simple as possible, dealing with line rate calculation only.

Line scan cameras – Courtesy Teledyne DALSA

Calculate the line rate

Getting the line rate right is the application of the Goldilocks principle to line scanning.

Line rate too slow: blurred image due to an overly long exposure, and/or missed segments due to skipped “slices”
Line rate too fast: oversampling can create confusion by identifying the same feature as two distinct features
Why we need to get the line rate right

A rotary encoder is typically used to synchronize the motion of the conveyor or web with the line scan camera (and the lighting, if pulsed). Naturally the system cannot be operated faster than the maximum line speed, but it may sometimes operate more slowly. This may happen during ramp-up or slow-down phases – when one may still need to obtain imaging – or by operator choice, to conserve energy or avoid stressing mechanical systems.

Naming the variables … with example values

Resolution A = object-space size of one pixel: FOV / pixel count; e.g. with a 550mm FOV and a 2k sensor, A = 550 / 2000 = 0.275 mm per pixel

Transport speed T = speed of the conveyor or web in mm per sec; e.g. 4000 mm/sec

Sampling frequency F = T / A; for the example values above, F = 4000 / 0.275 = 14,545 Hz ≈ 14.5 kHz; spelled out: Frequency = Transport_speed / Pixel_spatial_resolution (what 1 pixel equals in target space)

For the example figures used above, a line scan camera with 2k resolution and a line scan frequency of about 14.5 kHz will be sufficient.
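The arithmetic is simple enough to script. Here is a minimal Python sketch of the calculation above; the variable names and example values mirror this blog, not any particular camera SDK.

```python
# Line rate calculation for a line scan camera, using the example values above.

fov_mm = 550.0                  # field of view across the web / conveyor, in mm
sensor_pixels = 2000            # "2k" sensor, treated as 2000 pixels in the example
transport_speed_mm_s = 4000.0   # conveyor / web speed, in mm per second

# Resolution A: object-space size of one pixel (mm per pixel)
pixel_size_mm = fov_mm / sensor_pixels

# Sampling frequency F = T / A, in lines per second (Hz)
line_rate_hz = transport_speed_mm_s / pixel_size_mm

print(f"Pixel spatial resolution: {pixel_size_mm:.3f} mm per pixel")                  # 0.275
print(f"Required line rate: {line_rate_hz:.0f} Hz (~{line_rate_hz / 1000:.1f} kHz)")  # ~14545 Hz
```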

Download spreadsheet with labeled fields and examples:

Just click here, or on the image below, to download the spreadsheet calculator. It includes clearly labeled fields and examples, as the companion piece to this blog:

Not included here… but happy to show you how

We’ve kept this blog intentionally lean, to avoid information overload. Additional values may also be calculated, of course, such as:

Data rate in MB/sec: useful to confirm the camera interface can sustain the data rate

Frame time: the amount of time to acquire each scanned frame – and hence the budget for processing it. Important to be sure the PC and image processing software are up to the task, based on empirical experience or by conferring with your software provider.
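For readers who like to see the arithmetic, here is a minimal Python sketch of those two follow-on values, continuing the example above. The 8-bit monochrome pixels and 1,024-line frame height are illustrative assumptions, not values from the blog.

```python
# Follow-on values for the line scan example above (illustrative assumptions).

sensor_pixels = 2000       # pixels per line (the 2k example)
line_rate_hz = 14545.0     # required line rate from the calculation above
bytes_per_pixel = 1        # assuming 8-bit monochrome output
lines_per_frame = 1024     # assumed acquisition block height; substitute your own value

# Data rate = pixels per line x bytes per pixel x lines per second
data_rate_mb_s = sensor_pixels * bytes_per_pixel * line_rate_hz / 1e6

# Frame time = time to accumulate one frame's worth of lines
frame_time_ms = 1000.0 * lines_per_frame / line_rate_hz

print(f"Data rate: {data_rate_mb_s:.1f} MB/sec")                                # ~29.1 MB/sec
print(f"Frame time: {frame_time_ms:.1f} ms per {lines_per_frame}-line frame")   # ~70.4 ms
```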

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about.

Explained: Trifecta of lens f-stop, wavelength and Airy disc

In this blog we tackle a set of issues well-known to experts. It’s complex enough to be non-obvious, but easy enough to understand through this short tutorial. And better to learn via a no-cost article rather than through trial and error.

As an alternative to reading on, let us help you get the optics right for your application. Or read on and then let us help you anyway. Helping machine vision customers choose optimal components is what we do. We’ve staked our reputation on it.

Aperture size and F-stop

Most understand that the F-stop on a lens specifies the size of the aperture. Follow that last link to reveal the arithmetic calculations, if you like, but the key thing to keep in mind at the practical level is that F-stop values are inversely correlated with the size of the aperture. So a large F-number like f/8 indicates a narrow aperture, while a small F-number like f/1.4 corresponds to a large aperture. Some lens designs span a wider range of F-numbers than others, but the inverse correlation always applies.

Iris controls the aperture – Courtesy Edmund Optics
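As a quick worked example of that inverse relationship – a sketch only, using a nominal 25mm focal length rather than any specific lens – the f-number is the ratio of focal length to aperture diameter:

```python
# F-number N = focal_length / aperture_diameter, so aperture_diameter = focal_length / N.
focal_length_mm = 25.0   # a nominal 25 mm lens, chosen only for illustration

for f_number in (1.4, 2.8, 5.6, 8.0):
    aperture_diameter_mm = focal_length_mm / f_number
    print(f"f/{f_number}: aperture diameter ≈ {aperture_diameter_mm:.1f} mm")

# Note the inverse correlation: f/1.4 opens to ~17.9 mm, while f/8 opens to only ~3.1 mm.
```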

Maximizing contrast might seem to suggest a large aperture

For machine vision it’s always important to maximize contrast. The target object can only be discerned when it is sufficiently contrasted against the background or other objects. Effective lighting and lensing are crucial, in addition to a camera sensor that’s up to the task.

“Maximizing light” (without over-saturating) is often a challenge, unless one adds artificial light. That would tend to suggest using a large aperture to let more light pass while still keeping exposure time short enough to “freeze” motion or maximize frames per second.

So for the moment, let’s hold that thought that a large aperture sounds promising. Spoiler alert: we’ll soften our position on this point in light of forthcoming points.

Depth of Field – DoF

While a large aperture seems attractive so far, one argument against that is depth of field (DoF). In particular, the narrowest effective aperture maximizes depth of field, while the largest aperture minimizes DoF.

Correlation of aperture size and depth of field – Courtesy Edmund Optics

Depending on the lens design, the difference in DoF between largest vs. smallest aperture may vary from as little as a few millimeters to as great as many centimeters. Your applications knowledge will inform you how much wiggle room you’ve got on DoF.

So what’s the sweet spot for aperture?

Barring further arguments to the contrary, the largest aperture that still provides sufficient depth of field is a good rule of thumb.
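If you want a rough feel for the numbers, a commonly used approximation is DoF ≈ 2·N·c·(m+1)/m², where N is the f-number, c the circle of confusion (often taken as the pixel size), and m the magnification. The sketch below uses illustrative values; treat it as a sanity check, not a substitute for the lens manufacturer’s data.

```python
# Rough depth-of-field estimate: DoF ≈ 2 * N * c * (m + 1) / m**2
# N = f-number, c = circle of confusion (here, the pixel size), m = magnification.

pixel_size_mm = 0.00345   # e.g. a 3.45 µm pixel, used as the circle of confusion
magnification = 0.1       # illustrative: a 10 mm object feature maps to 1 mm on the sensor

for f_number in (1.4, 2.8, 5.6, 8.0):
    dof_mm = 2 * f_number * pixel_size_mm * (magnification + 1) / magnification**2
    print(f"f/{f_number}: estimated DoF ≈ {dof_mm:.1f} mm")

# Stopping down from f/1.4 to f/8 increases the estimated DoF by roughly 5.7x.
```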

Where do diffraction limits and the Airy disc come into it?

Optics is a branch of physics. And just like absolute zero in the realm of temperature, Boyle’s law with respect to gases, etc., there are certain constraints and limits that apply to optics.

Whenever light passes through an aperture, diffraction occurs – the bending of waves around the edge of the aperture. The pattern from a ray of light that falls upon the sensor takes the form of a bright circular area surrounded by a series of weakening concentric rings. This is called the Airy disc. Without going into the math, the Airy disc is the smallest point to which a beam of light can be focused.

And while stopping down the aperture increases the DoF, our stated goal, it has the negative impact of increasing diffraction.

Correlation of aperture to diffraction pattern – Courtesy Edmund Optics

Diffraction limits

As the focused patterns – containing details in your application that you want to discern – draw nearer to each other, they start to overlap. This creates interference, which in turn reduces contrast.

Every lens, no matter how well it is designed and manufactured, has a diffraction limit – the maximum resolving power of the lens, expressed in line pairs per millimeter. And if the Airy disc patterns generated from adjacent real-world features grow larger than the sensor’s pixels, the all-important contrast will not be achieved.
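To put rough numbers on it, the Airy disc diameter is commonly approximated as 2.44·λ·N, and the diffraction-limited cutoff as 1/(λ·N) line pairs per mm, where λ is the wavelength and N the f-number. The Python sketch below compares the Airy disc against a pixel size; the 550 nm wavelength and 3.45 µm pixel are illustrative values, not taken from any specific datasheet.

```python
# Airy disc diameter ≈ 2.44 * wavelength * f_number
# Diffraction-limited cutoff ≈ 1 / (wavelength * f_number), in line pairs per mm

wavelength_um = 0.550    # green light (550 nm), expressed in micrometers
pixel_size_um = 3.45     # illustrative sensor pixel size

for f_number in (2.8, 5.6, 8.0, 11.0):
    airy_diameter_um = 2.44 * wavelength_um * f_number
    cutoff_lp_mm = 1000.0 / (wavelength_um * f_number)   # 1000 µm per mm
    note = "larger than a pixel!" if airy_diameter_um > pixel_size_um else "within a pixel"
    print(f"f/{f_number}: Airy disc ≈ {airy_diameter_um:.1f} µm, "
          f"cutoff ≈ {cutoff_lp_mm:.0f} lp/mm -> {note}")
```

Note that even at a moderate f/2.8, the Airy disc for green light is already about the size of a small modern pixel – exactly the trade-off against depth of field discussed above.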

And wavelength’s a factor too?

Indeed wavelength is also a contributor to contrast and the Airy disc. As beings who see, we tend to default to thinking of light as white light or daylight, which is a composite segment of the spectrum spanning violet, indigo, blue, green, yellow, orange, and red – from about 380 nm to 780 nm. Below 380 nm we find ultraviolet (UV) light in the next segment of the spectrum. Above 780 nm the next segment is infrared (IR).

Monochrome light better than white light

An additional topic relative to the Airy disc is that monochrome light is better than white light. When light passes through a lens, it refracts (bends) differently in correlation with the wavelength. This is referred to as chromatic aberration.

Transverse and longitudinal chromatic aberration – Courtesy Edmund Optics

If a given point on your imaged object reflects or emits light at two or more of those wavelengths, the focal point of one might land in a different sensor pixel than the other, creating blur and confusion about how to resolve the point.

An easy way to completely overcome chromatic aberration is to use a single monochromatic wavelength! If your target object reflects or emits a given wavelength, to which your sensor is responsive, the lens will refract the light from a given point very precisely, with no wavelength-induced shifts.

Or call us at 978-474-0044

The moral of the story

The takeaway point is that aperture (F-stop), wavelength, and the Airy disc form a trifecta: the first two each have a bearing on the Airy disc, and one wants to choose and configure the optics and lighting to optimize it. This leads to effective application performance – a must-have. But it can also lead to cost savings, as lower-cost lenses, lighting, and sensors, optimally configured, may perform better than higher-cost components chosen without sufficient understanding of these principles.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about.

FPD-Link III vs GMSL2 vs CSI-2 vs USB considerations for deployment

New interface options arrive so frequently that trying to keep up can feel like drinking water from a fire hose. While data transfer rates are often the first characteristic identified for each interface, it’s important to also note distance capabilities, power requirements, EMI reduction, and cost.

Which interfaces are we talking about here?

This piece is NOT about GigE Vision or Camera Link. Those are both great interfaces suitable for medium to long-haul distances, are well-understood in the industry, and don’t require any new explaining at this point.

We’re talking about embedded and short-haul interface considerations

Before we define and compare the interfaces, what’s the motivation? Declining component costs and rising performance are driving innovative vision applications such as driver assistance cameras and other embedded vision systems. There is “crossover” from formerly specialized technologies into machine vision, with new camera families and capabilities, and it’s worth understanding the options.

Alvium camera with FPD-Link or GMSL interface – Courtesy Allied Vision Technologies

How shall we get a handle on all this?

Each interface has standards committees, manufacturers, volumes of documentation, conferences, and catalogs behind it. One could go deep on any of this. But this is meant to be an introduction and overview, so we take the following approach.

  • Let’s identify each of the 4 interfaces by name, acronym, and a few characteristics
  • While some of the links jump to a specific standard’s full evolution (e.g. FPD-Link including Gen 1, 2, and 3), per the blog header it’s the current standards as of Fall 2024 that are compelling for machine vision applications: CSI-2, GMSL2, and FPD-Link III, respectively
  • Then we compare and contrast, with a focus on rules of thumb and practical guidance

If at any point you’ve had enough reading and prefer to just talk it through:

FPD-Link III – Flat Panel Display Link

A free and open standard, FPD-Link has classically been used to connect a graphics processing unit (GPU) to a laptop screen, LCD TV, or similar display.

FPD-Link automotive applications schematic – Courtesy Texas Instruments

FPD-Link has subsequently become widely adopted in the automotive industry, for backup cameras, navigation systems, and driver-assistance systems. FPD-Link exceeds the automotive standards for temperature ranges and electrical transients, making it attractive for harsh environments. That’s why it’s interesting for embedded machine vision too.

GMSL2 – Gigabit Multimedia Serial Link

GMSL – Courtesy Analog Devices

GMSL is widely used for video distribution in cars. It is an asymmetric full duplex technology. Asymmetric in that it’s designed to move larger volumes of data downstream, and smaller volumes upstream. Plus power and control data, bi-directionally. Cable length can be up to 15m.

CSI-2 – Camera Serial Interface (Gen. 2)

CSI-2 registered logo – Courtesy MIPI Alliance

As the Mobile Industry Processor Interface (MIPI) standard for communications between a camera and host processor, CSI-2 is the sweet spot for applications in the CSI standards. CSI-2 is attractive for low power requirements and low electromagnetic interference (EMI). Cable length is limited to about 0.5m between camera and processor.

USB – USB3 Vision

USB3 Vision registered logo – Courtesy Association for Advancing Automation

USB3 Vision is an imaging standard for industrial cameras, built on top of USB 3.0. USB3 Vision has the same plug-and-play characteristics as GigE Vision, including power over the cable, and GenICam compliance. Passive cable lengths are supported up to 5m (greater distances with active cables).

Compare and contrast

In the spirit of keeping this piece as a blog, in this compare-and-contrast segment we call out some highlights and rules of thumb. That, together with engaging us in dialogue, may well be enough guidance to help most users find the right interface for their application. Our business is based upon adding value through our deep knowledge of machine vision cameras, interfaces, software, cables, lighting, lensing, and applications.

CABLE LENGTHS COMPARED(*):

  • CSI-2 is limited to 0.5m
  • USB3 Vision passive cables to 5m
  • FPD-Link distances may be up to 10m
  • GMSL cables may be up to 15m

(*) The above guidance is rule-of-thumb. There can be variances between manufacturers, system setup, and intended use, so check with us for an overall design consultation. There is no cost to you – our sales engineers are engineers first and foremost.

BANDWIDTH COMPARED(#):

  • USB3 to 3.6 Gb/sec
  • FPD-Link to 4.26 Gb/sec
  • GMSL to 6 Gb/sec
  • CSI-2 to 10 Gb/sec

(#) Bandwidth can also vary by manufacturer and configuration, especially for MIPI and SerDes [Serializer/Deserializer] implementations, and per chipset choices. Check with us for details before finalizing your choices.

RULES OF THUMB:

  • CSI-2 often ideal if you are building your own instrument(s) with short cable length
  • USB3 is also good for building one’s own instruments when longer distances are needed
  • FPD-Link has great EMI characteristics
  • GMSL is also a good choice for EMI performance
  • If torn between FPD-Link vs. GMSL, note that there are more devices in the GMSL universe, which might skew towards easier sourcing for other components – these rules of thumb are pulled together in the sketch after this list
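If it helps to see the rules of thumb in one place, the short Python sketch below encodes the approximate cable-length and bandwidth figures quoted above. The numbers are the same rule-of-thumb values from this blog, so treat the output as a starting point for a conversation rather than a final selection.

```python
# Rule-of-thumb interface screening, using the approximate figures quoted above.

INTERFACES = {
    "CSI-2":        {"max_cable_m": 0.5,  "max_bandwidth_gbps": 10.0},
    "USB3 Vision":  {"max_cable_m": 5.0,  "max_bandwidth_gbps": 3.6},
    "FPD-Link III": {"max_cable_m": 10.0, "max_bandwidth_gbps": 4.26},
    "GMSL2":        {"max_cable_m": 15.0, "max_bandwidth_gbps": 6.0},
}

def candidate_interfaces(cable_length_m, required_gbps):
    """Return interfaces whose rule-of-thumb limits cover the requested length and bandwidth."""
    return [name for name, spec in INTERFACES.items()
            if spec["max_cable_m"] >= cable_length_m
            and spec["max_bandwidth_gbps"] >= required_gbps]

# Example: a camera 3 m from the processor, streaming about 2.5 Gb/sec
print(candidate_interfaces(cable_length_m=3.0, required_gbps=2.5))
# -> ['USB3 Vision', 'FPD-Link III', 'GMSL2']
```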

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about.

Artificial intelligence in machine vision – today

This is not some blue-sky puff piece about how AI may one day be better / faster / cheaper at doing almost anything at least in certain domains of expertise. This is about how AI is already better / faster / cheaper at doing certain things in the field of machine vision – today.

Classification of screw threads via AI – Courtesy Teledyne DALSA

Conventional machine vision

There are classical machine vision tools and methods, like edge detection, for which AI has nothing new to add. If the edge detection algorithm is working fine as programmed in your vision software, who needs AI? If it ain’t broke, don’t fix it. Presence / absence detection, 3D height calculation, and many other imaging techniques work just fine without AI. Fair enough.

From image processing to image recognition

As any branch of human activity evolves, the fundamental building blocks serve as foundations for higher-order operations that bring more value. Civil engineers build bridges, confident the underlying physics and materials science lets them choose among arch, suspension, cantilever, or cable-stayed designs.

So too with machine vision. As the field matures, value-added applications can be created by moving up the chunking level. The low-level tools still include edge-detection, for example, but we’d like to create application-level capabilities that solve problems without us having to tediously program up from the feature-detection level.

Traditional methods (left) vs. AI classification (right) – Courtesy Teledyne DALSA
Traditional Machine Vision Tools: can’t discern surface damage vs. water droplets; are challenged by shading and perspective changes
AI Classification Algorithm: ignores water droplets; invariant to surface changes and perspective
For the application images above, AI works better than traditional methods – Courtesy Teledyne DALSA

Briefly in the human cognition realm

Let’s tee this up with a scenario from human image recognition. Suppose you are driving your car along a quiet residential street. Up ahead you see a child run from a yard, across the sidewalk, and into the street.

It may well be that the rods and cones in your retina, your visual cortex, and your brain use something like edge detection to process contrasting image segments to arrive at “biped mammal – child”, and then go on to evaluating risk and hitting the brakes. But that isn’t how we usually talk about defensive driving. We just think in terms of accident avoidance, situational awareness, and braking/swerving – at a very high level.

Applications that behave intelligently

That’s how we increasingly would like our imaging applications to behave – intelligently and at a high level. We’re not claiming it’s “human equivalent” intelligence, or that the AI method is the same as the human method. All we’re saying is that AI, when well-managed and tested, has become a branch of engineering that can deliver effective results.

So as autonomous vehicles come to market of course we want to be sure sufficient testing and certification is completed, as a matter of safety. But whether the safe-driving outcome is based on “AI” or “vision engineering”, or the melding of the two, what matters is the continuous sequence of system outputs like: “reduce following distance”, “swerve left 30 degrees”, and “brake hard”.

Neural Networks

One branch of AI, neural networks, has proven effective in many “recognition” and categorization applications. Is the thing being imaged an example of what we’re looking for, or can it be dismissed? If it is the sort of thing we’re looking for, is it of sub-type x, y, or z? “Good” item – retain. “Bad” item – reject. You get the idea.

From training to inference

With neural networks, instead of programming algorithms at a granular feature analysis level, one trains the network. Training may include showing “good” vs. “bad” images – without having to articulate what makes them good or bad – and letting the network infer the essential characteristics. In fact it’s sometimes possible to train only with “good” examples – in which case anomaly detection flags production images that deviate from the trained pool of good ones.

Deep Neural Network (DNN) example – Courtesy Teledyne DALSA
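For readers who like to see the train-then-infer pattern in code, here is a deliberately simple Python sketch of the workflow described above: learn from known-good examples, then flag deviations at inference. It is purely conceptual – real tools such as Astrocyte use deep neural networks configured through a GUI, not this toy distance threshold.

```python
# Toy illustration of "train on good examples, then flag anomalies at inference".
import numpy as np

def train(good_feature_vectors):
    """Learn what 'good' looks like: here, simply the mean and spread of the features."""
    data = np.asarray(good_feature_vectors, dtype=float)
    return data.mean(axis=0), data.std(axis=0) + 1e-9

def infer(model, feature_vector, n_sigmas=3.0):
    """Flag an image as anomalous if its features deviate strongly from the good pool."""
    mean, std = model
    z_scores = np.abs((np.asarray(feature_vector, dtype=float) - mean) / std)
    return bool(z_scores.max() > n_sigmas)   # True = anomaly ("bad"), False = "good"

# Training phase: features extracted from known-good production images (made-up numbers)
good_examples = [[0.52, 0.31], [0.49, 0.29], [0.51, 0.33], [0.50, 0.30]]
model = train(good_examples)

# Inference phase: new images arriving from the line
print(infer(model, [0.50, 0.31]))   # False – consistent with the good pool
print(infer(model, [0.95, 0.05]))   # True  – deviates strongly, flag for rejection
```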

Enough theory – what products actually do this?

Teledyne DALSA Astrocyte software creates a deep neural network to perform a desired task. More accurately – Astrocyte provides a graphical user interface (GUI) and a neural network framework, such that an application-specific neural network can be developed by training it on sample images. With a suitable collection of images, Teledyne DALSA Astrocyte can create an effective AI model in under 10 minutes!

Gather images, Train the network, Deploy – Courtesy Teledyne DALSA

Mix and match tools

In the diagram above, we show an “all DALSA” tools view, for those who may already have expertise in either Sapera or Sherlock SDKs. But one can mix and match. Images may alternatively be acquired with third party tools – paid or open source. And one may not need rules-based processing beyond the neural network. Astrocyte builds the neural network at the heart of the application.

Contact us

User-friendly AI

The key value proposition with Teledyne DALSA Astrocyte is that it’s user-friendly AI. The GUI used to configure the training and to validate the model requires no programming. And one doesn’t need special training in AI. Sure, it’s worth reading about the deep learning architectures supported. They include: Classification, Anomaly Detection, Object Detection, and Segmentation. And you’ll want to understand how the training and validation work. It’s powerful – it’s built by Teledyne DALSA’s software engineers standing on the shoulders of neural network researchers – but you don’t have to be a rocket scientist to add value in your field of work.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards and industrial computers, we can provide a full vision solution! We’re big enough to carry the best cameras, and small enough to care about every image.

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about.