FPD-Link III vs GMSL2 vs CSI-2 vs USB considerations for deployment

New interface options arrive so frequently that trying to keep up can feel like drinking water from a fire hose. While data transfer rates are often the first characteristic identified for each interface, it’s important to also note distance capabilities, power requirements, EMI reduction, and cost.

Which interfaces are we talking about here?

This piece is NOT about GigE Vision or Camera Link. Both are solid interfaces for medium to long-haul distances, are well understood in the industry, and need no further explanation here.

We’re talking about embedded and short-haul interface considerations

Before we define and compare the interfaces, what’s the motivation? Declining component costs and rising performance are driving innovative vision applications such as driver assistance cameras and other embedded vision systems. There is “crossover” from formerly specialized technologies into machine vision, with new camera families and capabilities, and it’s worth understanding the options.

Alvium camera with FPD-Link or GMSL interface – Courtesy Allied Vision Technologies

How shall we get a handle on all this?

Each interface has standards committees, manufacturers, volumes of documentation, conferences, and catalogs behind it. One could go deep on any of this. But this is meant to be an introduction and overview, so we take the following approach.

  • Let’s identify each of the 4 interfaces by name, acronym, and a few characteristics
  • While some of the links jump to a standard’s full evolution (e.g. FPD-Link spanning Gen 1, 2, and 3), per the title it’s the current generations as of Fall 2024 that are compelling for machine vision applications: CSI-2, GMSL2, and FPD-Link III, respectively
  • Then we compare and contrast, with a focus on rules of thumb and practical guidance

If at any point you’ve had enough reading and prefer to just talk it through, contact us.

FPD-Link III – Flat Panel Display Link

A free and open standard, FPD-Link has classically been used to connect a graphics processing unit (GPU) to a laptop screen, LCD TV, or similar display.

FPD-Link automotive applications schematic – Courtesy Texas Instruments

FPD-Link has subsequently become widely adopted in the automotive industry, for backup cameras, navigation systems, and driver-assistance systems. FPD-Link exceeds the automotive standards for temperature ranges and electrical transients, making it attractive for harsh environments. That’s why it’s interesting for embedded machine vision too.

GMSL2 – Gigabit Multimedia Serial Link

GMSL – Courtesy Analog Devices

GMSL is widely used for video distribution in cars. It is an asymmetric full-duplex technology: it moves large volumes of data downstream and smaller volumes upstream, plus power and control data bi-directionally. Cable length can be up to 15m.

CSI-2 – Camera Serial Interface (Gen. 2)

CSI-2 registered logo – Courtesy MIPI Alliance

As the Mobile Industry Processor Interface (MIPI) standard for communications between a camera and host processor, CSI-2 is the sweet spot among the CSI standards. CSI-2 is attractive for its low power requirements and low electromagnetic interference (EMI). Cable length is limited to about 0.5m between camera and processor.

USB – USB3 Vision

USB3 Vision registered logo – Courtesy Association for Advancing Automation

USB3 Vision is an imaging standard for industrial cameras, built on top of USB 3.0. USB3 Vision has the same plug-and-play characteristics as GigE Vision, including power over the cable and GenICam compliance. Passive cable lengths are supported up to 5m (greater distances with active cables).

Compare and contrast

In the spirit of keeping this piece a blog, in this compare-and-contrast segment we call out some highlights and rules of thumb. That, together with engaging us in dialogue, may well be enough guidance to help most users find the right interface for their application. Our business is based upon adding value through our deep knowledge of machine vision cameras, interfaces, software, cables, lighting, lensing, and applications.

CABLE LENGTHS COMPARED(*):

  • CSI-2 is limited to 0.5m
  • USB3 Vision passive cables to 5m
  • FPD-Link distances may be up to 10m
  • GMSL cables may be up to 15m

(*) The above guidance is rule-of-thumb. There can be variances between manufacturers, system setup, and intended use, so check with us for an overall design consultation. There is no cost to you – our sales engineers are engineers first and foremost.

BANDWIDTH COMPARED#:

  • USB3 to 3.6 Gb/sec
  • FPD-Link to 4.26 Gb/sec
  • GMSL to 6 Gb/sec
  • CSI-2 to 10 Gb/sec

(#) Bandwidth can also vary by manufacturer and configuration, especially for MIPI and SerDes (Serializer/Deserializer) devices, and per chipset choices. Check with us for details before finalizing your choices.

RULES OF THUMB:

  • CSI-2 often ideal if you are building your own instrument(s) with short cable length
  • USB3 is also good for building one’s own instruments when longer distances are needed
  • FPD-Link has great EMI characteristics
  • GMSL is also a good choice for EMI performance
  • IF torn between FPD-Link vs. GMSL, note that there are more devices in the GMSL universe, so that might skew towards easier sourcing for other components
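The rules of thumb above can be captured in a tiny screening helper. This is only a sketch using the blog’s own ballpark figures; actual limits depend on chipset, cable quality, and system configuration, so treat the output as a starting point for discussion, not a final answer.

```python
# Rule-of-thumb interface screening, encoding the figures quoted above.
# These numbers are ballpark values, not guarantees -- always verify
# against the specific camera, chipset, and cable you intend to use.

INTERFACES = {
    #  name            max cable (m)    max bandwidth (Gb/s)
    "CSI-2":        {"cable_m": 0.5,  "gbps": 10.0},
    "USB3 Vision":  {"cable_m": 5.0,  "gbps": 3.6},
    "FPD-Link III": {"cable_m": 10.0, "gbps": 4.26},
    "GMSL2":        {"cable_m": 15.0, "gbps": 6.0},
}

def candidates(cable_m: float, gbps: float) -> list[str]:
    """Return interfaces whose rule-of-thumb limits cover the request."""
    return [
        name for name, spec in INTERFACES.items()
        if spec["cable_m"] >= cable_m and spec["gbps"] >= gbps
    ]

# Example: a camera 8 m from the processor, streaming about 4 Gb/s.
print(candidates(cable_m=8, gbps=4.0))  # FPD-Link III and GMSL2 qualify
```

A filter like this only narrows the field; EMI behavior, power delivery, and component availability (the GMSL ecosystem point above) still decide the final choice.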

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards and industrial computers, we can provide a full vision solution!

About you: We want to hear from you! We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics. What would you like to hear about? Drop a line to info@1stvision.com with the topics you’d like to know more about.

New Alvium cameras with Sony SenSWIR InGaAs sensors

Short Wave Infrared (SWIR) imaging enables applications in a segment of the electromagnetic spectrum we can’t see with the human eye – or traditional CMOS sensors. See our whitepaper on SWIR camera concepts, functionality, and application fields.

Until recently, SWIR imaging tended to require bulky cameras, sometimes with cooling, which were not inexpensive. Cost-benefit analysis still justified such cameras for certain applications, but made it challenging to conceive of high-volume or embedded systems designs.

Enter Sony’s IMX992/993 SenSWIR InGaAs sensors. Now in Allied Vision Technologies’ Alvium camera families. These sensors “see” both SWIR and visible portions of the spectrum. So deploy them for SWIR alone – as capable, compact, cost-effective SWIR cameras. Or you can design applications that benefit from both visible and SWIR images.

Alvium configuration and interface options – Courtesy Allied Vision Technologies

Camera models and options first

The same two sensors, the 5.3 MP Sony IMX992 and the 3.2 MP Sony IMX993, are available in the Allied Vision Alvium 1800 series with USB3 or MIPI CSI-2 interfaces, as well as in the Alvium G5 series with 5GigE interfaces.

Per the Alvium Flex option, in addition to the housed version available for all 3 interfaces, the USB3 and CSI-2 models may also be ordered in bare-board or open-back configurations, ideal for embedded designs.

Broken out by part number the camera models are:

More about the Sony IMX992 / IMX993 sensors

The big brother IMX992 at 5.3 MP and sibling IMX993 at 3.2 MP share the same underlying design and features. Both have 3.45 µm square pixels. Both are sensitive across a wide spectral range from 400 nm to 1700 nm with impressive quantum efficiencies. Both provide high frame rates: up to 84 fps at 5.3 MP, and up to 125 fps at 3.2 MP.
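As a back-of-envelope check on what those frame rates imply for interface choice, here is a quick data-rate estimate. The 12-bit pixel format is an assumption for illustration (bit depth is a configuration choice), and real links add protocol overhead not modeled here.

```python
# Rough raw data rate: pixels/frame x frames/s x bits/pixel.
# 12-bit depth is an assumed example configuration, not a camera default.

def raw_rate_gbps(megapixels: float, fps: float, bits_per_px: int = 12) -> float:
    """Uncompressed sensor output rate in Gb/s, ignoring protocol overhead."""
    return megapixels * 1e6 * fps * bits_per_px / 1e9

print(f"IMX992: {raw_rate_gbps(5.3, 84):.1f} Gb/s")   # IMX992: 5.3 Gb/s
print(f"IMX993: {raw_rate_gbps(3.2, 125):.1f} Gb/s")  # IMX993: 4.8 Gb/s
```

Numbers in this range hint at why the interface (USB3, CSI-2, or 5GigE) must be matched to the frame rate and bit depth you actually intend to run.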

Distinctive features HCG and DRRS

Sony provides numerous sensor features to the camera designer, which Allied Vision in turn makes available to the user. Two new features of note include High-Conversion-Gain (HCG) and Dual-Read-Rolling-Shutter (DRRS). Consider the images below, to best understand these capabilities:

Illustrating the benefits of HCG and DRRS modes – Courtesy Sony

With the small pixel size of 3.45 µm, an asset in terms of compact sensor size, Sony innovated noise control features to enhance image quality. Consider the three images above.

The leftmost was made with Sony’s previously released IMX990. It’s been a popular sensor and is still suitable for certain applications. But it has neither the HCG nor the DRRS features.

The center image utilized the IMX992 High-Conversion-Gain feature. HCG reduces noise by amplifying the signal immediately after light is converted to an electrical signal. This is ideal when shooting in dark conditions. In bright conditions one may use Low-Conversion-Gain (LCG), essentially “normal” mode.

The rightmost image was generated using Dual-Read-Rolling-Shutter mode in addition to HCG. DRRS mode delivers a pair of images. The first contains the imaging signal together with the embedded noise. The second contains just the noise components. The camera designer can subtract the latter from the former to deliver a synthesized image with approximately 3/4 of the noise eliminated.
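The subtraction step the camera designer performs can be sketched with synthetic arrays. This is an idealized illustration of the DRRS idea, not the sensor’s internal implementation: in the real device only the correlated portion of the noise cancels (roughly 3/4, per Sony), while here the subtraction is perfect for clarity.

```python
import numpy as np

# Synthetic stand-ins for the two DRRS readouts. In a real system these
# would be the frames delivered by the sensor, and only the correlated
# noise component would be common to both.
rng = np.random.default_rng(0)
scene = rng.uniform(100, 1000, size=(4, 4))   # "true" signal, digital numbers
noise = rng.normal(0, 20, size=(4, 4))        # correlated read-noise pattern

frame_signal = scene + noise   # first DRRS readout: signal + embedded noise
frame_noise = noise            # second DRRS readout: noise components only

# Subtract, clipping at zero so the synthesized image stays physical
synthesized = np.clip(frame_signal - frame_noise, 0, None)
```

The payoff is that the synthesized frame approaches the noise-free scene, which is what the rightmost image above demonstrates photographically.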

Alvium’s SWaP+C characteristics ideal for OEM systems

With small Size, low Weight, low Power requirements, and low Cost, Alvium SWIR cameras fit the SWaP+C requirements. OEM system builders need or value each of those characteristics to build cost-effective embedded and machine vision systems.


Teledyne DALSA Linea 9K Line scan NUV+VIS

Some applications require line scan cameras, where the continuously moving “product” is passed below a sensor that is wide in one dimension and narrow in the other, and fast enough to keep up with the pace of motion. See our piece on area scan vs. line scan cameras for an overview.

Teledyne DALSA’s new Linea HS 9k BSI near-ultraviolet (NUV) / visible camera is such a line scan camera, with 9216 x 192 resolution and line rates to 400 kHz (mono mode) and 200 kHz (HDR mode).

Linea HS 9k BSI (NUV) / visible camera – Courtesy Teledyne DALSA
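To see what “keeping up with the pace of motion” means in numbers, here is a small sizing sketch. The transport speed and on-object resolution below are hypothetical, chosen only to show the arithmetic.

```python
# Line scan sizing: the line rate must match the transport speed so that
# each scanned line covers one object-pixel of travel (square pixels on
# the object). The example numbers are hypothetical.

def required_line_rate_khz(speed_mm_s: float, object_pixel_mm: float) -> float:
    """Lines per second (in kHz) needed for square pixels on the object."""
    return speed_mm_s / object_pixel_mm / 1e3

# e.g. a web moving at 2 m/s imaged at 10 um/pixel on the object:
rate = required_line_rate_khz(speed_mm_s=2000, object_pixel_mm=0.010)
print(f"{rate:.0f} kHz")  # 200 kHz, within the camera's 400 kHz mono limit
```

Working backwards the same way tells you the fastest transport speed a given camera and optical setup can support.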

Visible spectrum as well as Near Ultraviolet (NUV)

The camera uses Teledyne DALSA’s own charge-domain CMOS TDI sensor with a 5×5 μm pixel size. In addition to the visible spectrum from 400 nm to 700 nm, the sensor delivers good quantum efficiency down to 300 nm, enabling near-ultraviolet (NUV) applications as well.

Backside illumination enhances performance

Backside illumination (BSI) improves quantum efficiency (QE) in both the UV and visible wavelengths, boosting the signal-to-noise ratio.

Interface

The Linea HS 9k BSI camera uses the CLHS (Camera Link High Speed) data interface to provide a single-cable solution for data, power, and strobe. Active optical cable (AOC) connectors support distances up to 100m, avoiding the need for a repeater while maintaining data reliability and cost control. See an overview of the Camera Link standards, or see all of 1stVision’s Camera Link HS cameras.

Applications

Delivering high-speed, high-sensitivity images in low-light conditions, the Linea HS 9k is used in applications such as:

  • PCB inspection
  • Wafer inspection
  • Digital pathology
  • Gene sequencing
  • FPD inspection

Linea HS 9k suitable for diverse applications – Courtesy Teledyne DALSA

Request a quote

The part number for the Linea HS 9k BSI camera is DALSA HL-HM-09K40H.

Lots of line scan cameras to choose from

Teledyne DALSA’s Linea families have a variety of interfaces, resolutions, frame rates, pixel sizes, and options. So if the new model isn’t the right one for your needs, browse the link at the start of this sentence, or ask us to guide you among the many choices.


Tips on selecting a telecentric lens

Why might I want a telecentric lens?

Metrology, when done optically, requires that an object’s representation be invariant to its distance and position in the field of view. Telecentric lenses deliver precisely that capability. Telecentric lenses only “pass” incoming light rays that are parallel to the optical axis of the lens. That’s helpful because by measuring the distance between those parallel rays, we can measure objects without touching them.

Telecentric lens eliminates the parallax effect – Courtesy Edmund Optics

Parallax effect

Human vision and conventional lenses have angular fields of view. That can be very useful, especially for depth perception. Our ability to safely drive a car in traffic derives in no small part from not just identifying the presence of other vehicles and hazards, but also from gauging their relative nearness to our position. In that context parallax delivers perspective, and is an asset!

But with angular fields of view we can only guess at the size of objects. Sure, if we see a car and a railroad engine side by side, we might guess that the car is about 5 feet high and the railroad engine perhaps 15 or 16 feet. In metrology we want more precision than to the nearest foot! In detailed metrology such as precision manufacturing we want to differentiate to sub-millimeter accuracy. Telecentric lenses to the rescue!

Assorted telecentric lenses – Courtesy Edmund Optics

Telecentric Tutorial

Telecentric lenses only pass incoming light rays that are parallel to the optical axis of the lens. It’s not that the oblique rays don’t reach the outer edge of the telecentric lens. Rather, it’s about the optical design of the lens in terms of what it passes on through the other lens elements and onto the sensor focal plane.

Let’s get to an example. In the image immediately below, labeled “Setup”, we see a pair of cubes positioned with one forward of the other. This image was made with a conventional (entocentric) lens, whereby all three dimensions appear much the same as for human vision. It looks natural to us because that’s what we’re used to. And if we just wanted to count how many orange cubes are present, the lens used to make the setup image is probably good enough.

Courtesy Edmund Optics.

But suppose we want to measure the X and Y dimensions of the cubes, to see if they are within rigorous tolerance limits?

An object-space telecentric lens focuses the light without the perspective of distance. Below, the image on the left is the “straight on” view of the same cubes positioned as in “Setup” above, taken with a conventional lens. The forward cube appears larger, when in fact we know it to be exactly the same size.

The rightmost image below was made with a telecentric lens, which effectively collapses the Z dimension, while preserving X and Y. If measuring X and Y is your goal, without regard to Z, a telecentric lens may be what you need.

Courtesy Edmund Optics.

How to select a telecentric lens?

As with any engineering challenge, start by gathering your requirements. Let’s use an example to make it real.

Object of interest is the circled chip – Image courtesy Edmund Optics

Object size

What is your object size? And how large is the surrounding area in which successive instances of the target object will appear? This determines the Field of View (FOV). In the example above, the chip is 6mm long and 4mm wide, and its position on successive boards always varies within 4mm. So we’ll assert a 12mm FOV to add a little margin.

Pixels per feature

In theory, one might get away with just two pixels per feature. In practice it’s best to allow 4 pixels per feature. This helps to identify separate features by permitting space between features to appear in contrast.

Minimum feature size

The smallest feature we need to identify is the remaining critical variable to set up the geometry of the optical parameters and imaging array. For the current example, we want to detect features as small as 25µm. That 25µm feature might appear anywhere in our 12mm FOV.

Example production image

Before getting into the calculations, let’s take a look at an ideal production image we created after doing the math, and pairing a camera sensor with a suitable telecentric lens.

Production image of the logic chip – Courtesy Edmund Optics

The logic chip image above was obtained with an Edmund Optics SilverTL telecentric lens, in this case the 0.5X model. More on how we got to that lens choice below. The key point for now is: wow, what a sharp image! Not only can we count the contacts; knowing our geometry and optical design, we can also inspect them for length, width, and feature presence/absence using the contrast between the silver metallic components and the black-appearing board.

Resuming “how to choose a telecentric lens?”

So you’ve got an application in mind for which telecentric lens metrology looks promising. How to take the requirements figures we determine above, and map those to camera sensor selection and a corresponding telecentric lens?

Method 1: Ask us to figure it out for you.

It’s what we do. As North America’s largest stocking distributor, we represent multiple camera and lens manufacturers – and we know all the products. But we work for you, the customer, to get the best fit to your specific application requirements.

Give us a brief idea of your application and we will contact you to discuss camera options.

Method 2: Take out your own appendix

Let’s define a few more terms, do a little math, and describe a “fitting” process. Please take a moment to review the terms defined in the following graphic, as we’ll refer to those terms and a couple of the formulas shortly.

Telecentric lens terms and formulas – Courtesy Edmund Optics

For the chip inspection application we’re discussing, we’ve established the three required variables:

H = FOV = 12mm

p = # pixels per feature = 4

µ = minimum feature size = 25µm

Let’s crank up the formulas indicated and get to the finish line!

Determine required array size = image sensor

Array size formula for the chip inspection example – Courtesy Edmund Optics

So we need about 1920 pixels horizontally, give or take. Unless one commissions a custom sensor or lens, choosing an off-the-shelf component that’s close enough is usually the reasonable thing to do.

Reviewing a catalog of candidate area scan cameras with horizontal pixel counts around 1920, we find Allied Vision Technologies’ (AVT) Manta G-319B, where G indicates a GigE Vision interface and B means black-and-white, as in monochrome (vs. the C model that would be color). This camera uses a sensor with 2064 pixels in the horizontal dimension, so that’s a pretty close fit to our 1920 calculation.

Determine horizontal size of the sensor

H’ is the horizontal dimension of the sensor – Courtesy Edmund Optics

Per the Manta G-319 specs, each pixel is 3.45µm wide, so 2064 × 3.45µm ≈ 7.1mm sensor width.

Determine magnification requirements

The last formula tells us the magnification factor to fit the values for the other variables:

Magnification = sensor width / FOV = 7.1mm / 12mm ≈ 0.59x – Courtesy Edmund Optics
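The three steps above can be chained in a few lines, using the values from the worked example:

```python
# The worked example's three formulas, chained. Inputs come straight
# from the requirements and camera specs discussed in the text.

fov_mm = 12.0              # H: field of view
pixels_per_feature = 4     # p
min_feature_mm = 0.025     # 25 um minimum feature
pixel_size_mm = 0.00345    # camera pixel pitch (3.45 um)
h_pixels = 2064            # camera horizontal resolution

# 1) Required array size (pixels across the FOV)
required_pixels = fov_mm / min_feature_mm * pixels_per_feature
print(f"required array: {required_pixels:.0f} px")   # required array: 1920 px

# 2) Sensor horizontal dimension for the chosen camera
sensor_width_mm = h_pixels * pixel_size_mm
print(f"sensor width: {sensor_width_mm:.1f} mm")     # sensor width: 7.1 mm

# 3) Required magnification
magnification = sensor_width_mm / fov_mm
print(f"magnification: {magnification:.2f}x")        # magnification: 0.59x
```

Swapping in your own FOV, feature size, and candidate sensor specs reproduces the same fitting process for any telecentric application.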

Choose a best-fit telecentric lens

Back to the catalog. Consider the Edmund Optics SilverTL Series. These C-mount lenses work with 1/2″, 2/3″, and 1/1.8″ sensors, and pixels as small as 2.8µm, so that’s a promising fit for the 1/1.8″ sensor with 3.45µm pixels found in the Manta G-319B. Scrolling down the SilverTL Series specs, we land on the 0.50X SilverTL entry:

Some members of the SilverTL telecentric lens series – Courtesy Edmund Optics

The 0.5x magnification is not a perfect match for the calculated 0.59x. Likewise the lens’s 14.4mm FOV is slightly larger than the required 12mm. But for a high-performance ready-made lens, this is a very close fit, and it should perform well for this application.
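A quick sanity check on the 0.5x choice, sketched with the figures above, confirms the delivered FOV and shows what happens to feature sampling when the magnification drops from 0.59x to 0.5x:

```python
# Sanity-checking the 0.5x lens against the requirements. The camera
# figures (2064 px, 3.45 um pitch) come from the worked example.

sensor_width_mm = 2064 * 0.00345     # ~7.1 mm sensor width
magnification = 0.5

# FOV actually delivered by the 0.5x lens with this sensor
fov_mm = sensor_width_mm / magnification
print(f"delivered FOV: {fov_mm:.1f} mm")   # 14.2 mm, covers the 12 mm need

# Pixel footprint on the object, and sampling of the 25 um feature
object_pixel_mm = 0.00345 / magnification
pixels_on_feature = 0.025 / object_pixel_mm
print(f"pixels across a 25 um feature: {pixels_on_feature:.1f}")  # ~3.6
```

Note the smallest feature now spans about 3.6 pixels, slightly under the 4-pixel rule of thumb. That is exactly the kind of tradeoff worth validating with sample images or an applications engineer before committing.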

Optics fitting is part science and part experience – and of course one can “send in samples” or “test drive” a lens to validate the fit. Take advantage of our experience in helping customers match application requirements to lens and camera selection, as well as lighting, cabling, software, and other components.
