What are the factors in 3D laser triangulation line rates?

When designing an application, one likes to read the specifications to determine whether a candidate solution will satisfy the application's requirements. Let's say you want to design an application to do laser profiling of your continuously moving target(s). You know Teledyne DALSA is well-regarded for their Z-Trak 3D Laser Profiler. In the specifications you may see that up to 3.3K profiles per second are achievable, but what factors could influence that rate?

What factors affect the line rate?

When choosing a pickup truck or SUV, cubic displacement and horsepower matter. But so does whether you plan to tow a trailer of a certain weight, and whether the terrain is hilly or flat.

With an area scan camera, maximum frame rate is specified for reading out all pixels when operating at full resolution. Faster rates can be achieved by reading out fewer rows with a reduced area of interest. One must match camera and interface capabilities to application requirements.

Laser triangulation is an effective 3D technique

Here too one must read the specifications – and think about application requirements.

Figure 1: Key laser profiler terms and concepts in relation to each other – Courtesy Teledyne DALSA

What considerations affect 3D triangulation laser profilers?

Data volume: With reference to Figure 2 below, the number of pixels per row (X) and the frequency of scans in the Y dimension, together with the number of bytes per pixel, determine the data volume. Ultimately you need what you need, and may purchase a line scanner with a wider or narrower field of view, a faster or slower interface, or a more intense laser light, accordingly. Required resolution has a bearing on data volumes too, and that's the key consideration we'll go into further below.
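To make the numbers concrete, here is a minimal back-of-envelope sketch in Python, using illustrative values rather than any particular Z-Trak model's specifications:

```python
# Back-of-envelope data volume for a laser profiler.
# Illustrative numbers, not from any specific spec sheet.

pixels_per_profile = 2048   # X: points per scan line
line_rate_hz = 3300         # Y: profiles per second
bytes_per_pixel = 2         # e.g., 16-bit Z values

bytes_per_second = pixels_per_profile * line_rate_hz * bytes_per_pixel
print(f"{bytes_per_second / 1e6:.1f} MB/s")  # ~13.5 MB/s
```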

Figure 2: Each laser profile scan delivers X pixels’ Z values to build Y essentially continuous slices – Courtesy Teledyne DALSA

Resolution has a bearing on data volumes and application performance

Presumably it's clear that your application's performance will require certain precision in resolution. In the Y dimension, how frequently do you need each successive data slice in order to track feature changes over time? In the Z dimension, how finely must you resolve changes in object height? And in the X dimension, how many points must be captured, at what resolution?

While you might be prepared to negotiate resolution tolerances as an engineering tradeoff on performance or cost or risk, generally speaking you’ve got certain resolutions you are aiming for if the technology and budget can achieve it.

We’re warming up to the key point of this article – how line rate varies according to application features. Consider Figure 3 below, noting the trapezoidal shape for 3 respective fields of view, in correlation with working distance.

Figure 3: Working distance in which Z dimension may vary also impacts resolution achievable for each value in the X dimension – Courtesy Teledyne DALSA.

Trapezoid bottom width and required X dimension resolution

To drive this final point home, consider both Figure 2 and Figure 3. Figure 2, among other things, reminds us that we need to capture each successive scan from the Y dimension at precisely timed intervals. Otherwise how would we usefully track the changes in height in the Z dimension as the target moves down the conveyance?

That means that regardless of target height, each scan must always take exactly the same time as each other scan – it cannot vary. But per Figure 3, regardless of whether using a short, medium, or longer working distance, X pixels correlating to target values found high up in the trapezoidal FoV will yield a de facto higher resolution than the same X pixels lower down.

Suppose the top of the trapezoid is 50cm wide, and the bottom of the trapezoid is 100cm wide. For any given short span along a line in the X dimension, the real-world space mapped into a sensor pixel will be 2x as wide for targets sampled at the bottom of the FoV.
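A two-line calculation makes the effect concrete (the 2048-pixel row is a hypothetical sensor width, not a specific model):

```python
# Real-world width covered per pixel at the top vs. bottom of the
# trapezoidal FoV, per the 50cm/100cm example above.

pixels_x = 2048
top_width_mm = 500       # 50 cm across the top of the trapezoid
bottom_width_mm = 1000   # 100 cm across the bottom

res_top = top_width_mm / pixels_x        # ~0.24 mm per pixel
res_bottom = bottom_width_mm / pixels_x  # ~0.49 mm per pixel
print(res_bottom / res_top)              # 2.0: half the resolution at the bottom
```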

Since the required minimum resolution and precision is an application requirement, the whole system must be configured for sufficient resolution when sampling at the bottom of the trapezoid. So one must purchase a system that covers the required resolution, and deploy it in such a way that the "worst case" sampling at the limits of the system stays within the requirements. One must sample as many points as needed at the bottom of the FoV, and that impacts line scan rate.

Height of object matters too

It's not only the position of the object in the FoV that matters, but also the maximum height of any object whose Z dimension you need to detect. Let's illustrate the point:

Figure 4. The maximum height anticipated matters too – Courtesy Teledyne DALSA

Consider the item labeled Object in Figure 4. Your application's object(s) may of course be shaped differently, but this generic object serves discussion purposes just fine. In this conceptual application, there's a continuous conveyor belt (the dark grey surface) moving at constant speed in the Y dimension. Whenever no Object is present, i.e. in the gaps between Object_N and Object_N+1, we expect the profiler to deliver a Z value of 0 for each pixel. But when an Object is present, we anticipate positive values corresponding to the height of the object. That's the whole point of the 3D application.

Important note re. camera sensor in 2D

While the laser emits a flat line as it exits the projector, the reflection sensed inside the camera is two-dimensional. The camera sensor is a rectangular grid or array of pixels, typically in a CMOS chip, similar to that used in an area-scan camera. If one needs all the data from the sensor, the higher data volume takes longer to transfer than if one only needs a subset. If you know your application’s design well, you may be able to achieve optimized performance by avoiding the transfer of “empty” data.

Now let’s do a thought experiment where we re-imagine the Object towards two different extremes:

Extreme 1: Imagine the Object flattened down to a few sheets of paper in a tight stack, or perhaps the flap of a cardboard box.

Extreme 2: Imagine the Object is stretched up to the height of a full box, as high in the Z dimension as in the X dimension shown.

If the Object would never be higher than Extreme 1, only a few pixel rows in the camera sensor will register non-zero values. These can be read out quickly, without bothering to read out the unused rows, yielding a relatively faster line rate.

But if the Object(s) will sometimes be at Extreme 2, many or most of the pixel rows in the camera sensor will register non-zero values, as the reflected laser line ranges up to the full height of the Object. Consequently more rows must be read out from the camera sensor in order to build the laser profile.
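As a rough back-of-envelope model (assuming readout time scales with the number of rows read out, and ignoring the fixed overheads real cameras add), the effect on line rate looks like this:

```python
# Rough readout-limited line-rate model: fewer sensor rows read out
# means faster scans. Values are illustrative only.

full_rows = 1024      # total rows on the camera sensor
full_rate_hz = 3300   # line rate when reading out every row

def approx_line_rate(rows_needed: int) -> float:
    return full_rate_hz * full_rows / rows_needed

print(approx_line_rate(64))    # flat object (Extreme 1): ~52,800 Hz
print(approx_line_rate(1024))  # tall object (Extreme 2): 3,300 Hz
```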

Summary points regarding object height

1. The application must be designed to perform for the tallest anticipated Object, as well as the width of the Object in the X dimension and the speed of motion in the Y dimension.

2. All other things being equal, shorter objects, using less camera sensor real estate, will support faster line rates than taller objects.

By carefully planning your FoV, knowing your timing constraints, and selecting a laser profiler model operated within its performance range, you can optimize your outcomes.


Also consider: interface capacity and exposure time

Just as with area scan cameras, output rates may be limited by interface limits, exposure duration, or data volumes.

Interface limits: Whether using GigE Vision, USB3 Vision, Camera Link HS, or another standard, the interface standard, camera settings, cable, and PC adapter card together determine a maximum throughput, typically expressed in Gigabits per second (Gbps). Your intended data volume is a function of resolution, bit depth, and line rate or frame rate. Be sure to understand maximum practical throughput, choosing components accordingly.

Exposure duration: Even setting aside readout timing (whether readout overlaps the start of the next exposure, or readout n completes before exposure n+1 begins), if there are, say, 100 exposures per second, one cannot receive more than 100 datasets per second – even if the camera is capable of faster rates.

That may seem obvious to experienced machine vision application designers, but it bears mentioning for anyone new to this. Every application needs to achieve good contrast between the imaging subject and its background. And if lighting and lensing are optimized, exposure time is the last variable to control. Ideally, lighting and lensing, together with the camera sensor, permit exposures brief enough that exposure time meets application objectives.

But whether manually parameterized or under auto-exposure control, one has to do the math and/or the empirical testing to ensure your achievable line rates aren't exposure-limited.
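One way to "do the math" is to take the minimum of the camera's rated rate, the exposure-imposed ceiling, and the interface-imposed ceiling. The numbers below are illustrative, not from any spec sheet:

```python
# Sanity-check which factor caps the achievable line rate:
# the camera, the exposure time, or the interface.

camera_max_hz = 3300
exposure_s = 0.0005        # 500 us exposure allows at most 2000 lines/s
interface_gbps = 5.0       # e.g., a USB3-class link
bits_per_line = 2048 * 16  # pixels per line x bit depth

exposure_limit_hz = 1.0 / exposure_s
interface_limit_hz = interface_gbps * 1e9 / bits_per_line

achievable = min(camera_max_hz, exposure_limit_hz, interface_limit_hz)
print(f"achievable line rate ~ {achievable:.0f} Hz")  # exposure-limited: 2000 Hz
```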

Planning for your laser profiler application

Some months ago we wrote a blog which summarizes Teledyne DALSA's Z-Trak line scan product families. Besides highlighting the characteristics of three distinct product families, we provided a worksheet to help users identify key application requirements for line scanning. It's worth offering that same worksheet again below. Consider printing the page or copying it into a spreadsheet, and fill in the values for your known or evolving application.

3D application key attributes

The moral of the story…

The takeaway is that the scan rate you’ll achieve for your application is more complex to determine than just reading a spec sheet about a laser profiler’s maximum performance. Your application configuration and constraints factor into overall performance.

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about.

Plug’n Stream IDS Imaging μEye SCP / μEye SLE industrial dashcams

Unpacking the μEye SCP / μEye SLE product names to preview what's on offer, the "S" stands for Streaming. The rest of the product names come from IDS Imaging's popular GigE Vision camera families μEye CP and μEye LE, respectively. So the value proposition is a bundled streaming solution piggybacked on top of an established product platform. This creates economies of scale for the manufacturer and the customer alike.

uEye SCP / SLE single-device solution for process monitoring – Courtesy IDS Imaging

Continuous monitoring with event-triggered video recording

Anybody running complex systems has to monitor them for performance and quality control – and/or to recover from breakdowns or detected concerns. Traditionally one had two options:

  • Wait for a breakdown and try to deduce what went wrong, or
  • Construct a video monitoring system from the constituent components… and program as needed

The "wait and see" option is attractively inexpensive on the face of it. But it risks expensive losses from the period prior to detecting the failures. Worse, it may not be possible to determine what went wrong if one missed the event that triggered the failure.

Constructing a video monitoring system from scratch is possible – and many have done it. Until now it generally required sourcing camera, lens, and PC, and writing complex software capable of episodic streaming, recording, event detection, and logging.

IDS μEye SCP / μEye SLE provide Plug’n Stream no-PC-needed solution

Systems evolution in many fields, including machine vision, periodically takes what once had to be programmed to something that need only be configured. The system provider helpfully packages the algorithms into parameterized controls that are user-friendly to the deployer. That way one can focus on the application domain, event management, and process control.

IDS Imaging has done exactly that to create μEye SCP / μEye SLE – think “industrial dashcam” – with both housed and board-level options. The comprehensive 9 minute video below provides a great introduction to the product, its capabilities, and some applications examples.

9 minute introduction and overview – Courtesy IDS Imaging

Event Recording

The system streams continuously to internal persistent memory, periodically overwriting previous streams that were not part of any events deemed worth saving. This creates a recorded stream for a defined period from x seconds prior to an event, through the event, and to y seconds afterwards, where x and y are user-definable.

That documents machine malfunctions or failures, making it easier to analyze process errors – and address them for system improvement.
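Conceptually, pre/post event recording works like the sketch below. This is our own illustration of the ring-buffer idea, not IDS firmware code:

```python
from collections import deque

FPS = 30
PRE_S, POST_S = 5, 10   # the user-definable x and y seconds

ring = deque(maxlen=PRE_S * FPS)   # oldest frames drop off automatically

def record_event_clip(frames):
    """frames: iterable of (frame, event_fired) pairs from the camera."""
    frames = iter(frames)
    for frame, event_fired in frames:
        ring.append(frame)
        if event_fired:
            clip = list(ring)              # the x seconds leading up to the event
            for _ in range(POST_S * FPS):  # plus y seconds afterwards
                frame, _ = next(frames)
                clip.append(frame)
            return clip                    # hand off for storage
```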

No PC needed – System on a Chip (SoC)

With System on a Chip (SoC) from Ambarella, the camera has the onboard smarts to directly process and evaluate image data.

The user need only configure the parameters that define an “event”, the duration to capture before and after the event, which of several formats to record, and whether to operate standalone or integrated into other systems.

Use cases

Just to get the juices flowing, consider use cases like the following:

Industrial process monitoring – the human operator has the overview but μEye SCP / SLE can monitor automatically at the detail level – recording events and raising alerts if needed – Courtesy IDS Imaging
Video analysis – for scouting or officiating for example – Courtesy IDS Imaging
Smart city applications – let the system identify pedestrians within a specific field of view – Courtesy IDS Imaging

WebCockpit configuration

With no PC required to operate the system in standalone mode, configuration may be done through a frontend in the browser. The frontend settings control streaming, recording, and video modes.

Optionally, one can use the web service and a REST API to integrate seamlessly into existing systems, for those who prefer or require integration over standalone deployment.
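As a sketch of what such an integration might look like (the endpoint paths and payload fields below are invented for illustration; consult the actual IDS REST API documentation for the real resource names):

```python
import requests

BASE = "http://<camera-ip>"  # the device serves its own web service

# e.g., adjust event-recording parameters from an existing system
requests.put(f"{BASE}/api/recording", json={"pre_s": 5, "post_s": 10})

# e.g., pull the list of recorded event clips
clips = requests.get(f"{BASE}/api/recordings").json()
```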


Edmund Optics C-Series Fixed Focal Length SWIR Lenses

Ideal when paired with SONY IMX990 or SONY IMX991 sensors, Edmund Optics' C-Series fixed focal length SWIR lenses support pixel pitches as small as 2.8µm, far smaller than classic SWIR pixel sizes in the 5 – 15µm range.

Fixed focal lengths help the lens designers achieve great performance while minimizing production costs due to fewer parts.

Industry-insider tip

Certain sensors marketed as Vis-SWIR (Visible plus SWIR spectrum coverage) are far less expensive than those traditionally designed for SWIR alone – and perform really well in the SWIR range (900 – 1700nm). The SONY IMX990 and SONY IMX991 are two such sensors, the former available in AVT Goldeye 130, and the latter in AVT Alvium 1800. So are SONY IMX992 and SONY IMX993, as featured in AVT Alvium cameras with diverse interface options.

So while certain users buy those sensors for applications that generate an image in both the visible and SWIR portions of the spectrum, MOST buyers are purchasing these sensors "just" to do SWIR applications in a cost-effective way.

It’s a bit like buying a dual-function toaster oven and never using one of the functions – but if it creates a valuable solution for you, who cares about the feature not used?

Edmund Optics saw the opportunity to create a lens series for the customers using the sensors referenced above to do dedicated SWIR applications. So they created their C-Series fixed focal length SWIR family, with 7 members, and focal lengths from 6 – 50mm.

Did we mention performance?

Recall that lens performance is typically expressed by the Modulation Transfer Function (MTF). Below is the MTF chart for the 6mm FL member of the Edmund Optics C-Series fixed focal length lenses, at 1.3µm wavelength. All seven members of the family show comparable performance – see spec sheets for details.

MTF graph for the 6mm FL at 1.3µm wavelength – Courtesy Edmund Optics

Shorter focal lengths not always easy to find

With fixed focal lengths at 6mm, 8.5mm, 12mm, 16mm, 25mm, 35mm, and 50mm, knowledgeable customers may note that especially the shorter focal length offerings are not that common in the machine vision optical market.

Compact and cost-effective

As fixed focal length lenses, each member of this lens series needs only a focus adjustment – fine tuning – which is lockable against vibration slippage. They do NOT need the complexity of a varifocal lens. That means fewer glass elements and less metal, yielding a smaller form factor, handy if space is an issue.

It also means the lenses are less expensive to manufacture, a savings passed on to the user as a cost-effective way to get good performance in the SWIR spectrum.

Built as a variation on another lens series

It's worth noting this SWIR-optimized lens series piggybacks on Edmund Optics' visible-spectrum C-Series fixed focal length lenses. The key difference is that the new lens series is optically coated for the SWIR spectrum. The benefit is that Edmund Optics could spin a new series from an existing one, which is cost-effective for the customer as well.

Optimized for factory automation applications

Both the visible and SWIR versions of the C-Series lenses have been optimized with factory automation in mind, particularly with respect to working distance (WD), size, and cost.


Teledyne DALSA 16k TDI line scan camera 1 MHz line rate

Product innovation continues to serve machine vision customers well. Clever designs are built for evolving customer demands and new markets, supported by electronics miniaturization and speed. Long a market leader in line scan imaging, Teledyne DALSA now offers the Linea HS2 TDI line scan camera family.

Linea HS2 16k TDI line scan camera with 1 MHz line rate – courtesy Teledyne DALSA

Video overview

The video below is just over one minute in duration, and provides a nice overview:

Contact us for a quote

Backside illumination enhances quantum efficiency

Early sensors all used frontside illumination, and everybody lived with that until about 10 years ago, when backside illumination was innovated and refined. The key insight was to let the photons hit the light-sensitive surface first, with the sensor's wiring layer on the other side. This greatly improves quantum efficiency, as seen in the graph below:

QE substantially enhanced using backside illumination (BSI) – Courtesy Teledyne DALSA

Applications

This camera series is designed for high-speed imaging in light-starved conditions. Applications include but are not limited to inspecting flat panel displays, semiconductor wafers, and high density interconnects, plus diverse life science uses.

Courtesy Teledyne DALSA

Line scan cameras

You may already be a user of line scan cameras. If you are new to that branch of machine vision, compare and contrast line scan vs. area scan imaging. If you want the concept in a phrase or two: think of a "slice" or line of pixels obtained as the continuous wide target passes beneath the camera, repeated indefinitely. Line scan can be used to monitor quality, detect defects, and/or tune controls.

Time Delay Integration (TDI)

Perhaps you even use Time Delay Integration (TDI) technology already. TDI builds on top of “simple” line scan by tracking how a pixel appears across several successive time slices, turning motion blur into an asset through hardware or software averaging and analysis.
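A toy simulation conveys the core idea, though it is not Teledyne DALSA's implementation: summing N aligned line exposures grows the signal N-fold while uncorrelated noise grows only by the square root of N:

```python
import numpy as np

stages, width = 8, 2048
rng = np.random.default_rng(0)
scene_line = rng.uniform(10, 20, width)  # "true" intensity of one scene line

# Each TDI stage sees the same scene line (motion-aligned), plus its own noise
observations = scene_line + rng.normal(0, 5, (stages, width))

tdi_line = observations.mean(axis=0)         # aligned sum, normalized
print(np.std(observations[0] - scene_line))  # ~5.0  single-stage noise
print(np.std(tdi_line - scene_line))         # ~1.8  (5 / sqrt(8))
```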

Maybe you already have one or more of Teledyne DALSA's prior-generation Linea HS line scan cameras. The new Linea HS2 series features the same pixel size, optics, and cables, so with a 2.5x speed increase the Linea HS2 provides a seamless upgrade. The Linea HS2 also offers an optional cooling accessory to enhance thermal stability.

Frame grabber

The Linea HS2 utilizes Camera Link High Speed (CLHS) to match the camera’s data output rate with an interface that can keep up. Teledyne DALSA manufactures not just the camera, but also the Xtium2-CL MX4 Camera Link Frame Grabber.

Xtium2-CL MX4 Camera Link Frame Grabber – Courtesy Teledyne DALSA

The Xtium2-CL MX4 is built on next generation CLHS technology and features:

  • 16 Gigapixels per second
  • dual CLHS CX4 connectors
  • drives active optical cables
  • supports parallel data processing in up to 12 PCs
  • allows cable lengths over 100 meters with complete EMI immunity

Which camera to choose?

As this blog is released, the Linea HS2, with 16k / 5μm resolution, provides an industry-leading maximum line rate of 1 MHz – roughly 16,000 pixels per line at 1,000,000 lines per second, or 16 Gigapixels per second of data throughput. Do you need the speed and sensitivity of this camera? Or is one of the "kid brother" models enough – they were already highly performant before the new arrival. We can help you sort out the specifications according to your application requirements.


Conquer the glare: CCS LFXV Flat Dome Light for Machine Vision

While the endless parade of new CMOS sensors gets plenty of attention, each bringing new efficiency or features, lighting and lensing are too often overlooked. The classic three-legged stool metaphor is an apt reminder that each of sensor, lighting, and lensing is critical to achieving optimal outcomes.

LFXV flat dome lights – courtesy CCS Inc.

Lighting matters

If you haven’t investigated the importance of lighting, or want a refresher, see our Knowledge Base resources on lighting. In those illustrated articles, we review the importance of contrast for machine vision, and how lighting is so critical. By choosing the best type of light, the optimal wavelength, and the right orientation, the difference in outcomes can be remarkable. In fact, sometimes with the right lighting design one can utilize less expensive sensors and lenses, achieving great results by letting the lighting do the work.

Pictures worth a thousand words

Before digging into product details on CCS LFXV flat dome lights, let’s take a look at examples achieved without… and with… the selected models.

Consider an example from electronics parts identification:

Hairline surface of capacitor makes text difficult to read, despite a diffuse ring light (red), a seemingly reasonable lighting choice – courtesy CCS Inc.
Using LFXV-25RD (red) flat dome light, hairline finish is essentially eliminated, creating much better contrast – courtesy CCS Inc.

Here’s an example reading 2-D codes from contact lens packages:

Wavy and glossy surface makes 2-D code hard to discern with red ring light – courtesy CCS Inc.
LFXV50RD red flat dome light creates ideal contrast to read 2-D code – courtesy CCS Inc.

Consider identifying foreign materials in food products, for either automated removal or quality control logging:

Foreign object amidst tea leaves is barely discernible using white dome light – courtesy CCS Inc.
LFXV200IR infrared flat dome light creates contrast to easily identify the foreign object – courtesy CCS Inc.

More about wavelength

In the images above, you may have noticed various wavelengths were used – with better or worse outcomes. Above we showed “just” white light, red light, and infrared, but blue, green, and UV are also candidates, not to mention SWIR and LWIR. Light wavelength choice affects contrast – not just when using dome lights – see wavelengths overview in our knowledge base.

Key concepts

By way of contrast, let’s first look at the way a traditional dome light works:

Traditional dome light design – courtesy CCS Inc.

Notice the camera is mounted to the top of a traditional dome light. The reflective diffusion panel coats all the inside surfaces of the dome – except where the camera is mounted. The diffusion pattern created is pretty good in general – but not perfect at hiding the camera hole entirely. If the target object is highly reflective and tends towards flat, one gets a dark spot in the center of the image… and the application underperforms on the surface inspection one hoped to achieve.

So who needs newfangled flat dome lights?

There’s nothing wrong with conventional dome lights per se, if you’ve got the space for them, and they do the job.

Three downsides to traditional dome lights

1. A traditional dome light may leave a dark spot – if the target is flat and highly reflective

2. A traditional dome light takes up a lot of space

Conventional dome light on left vs. flat dome light on right – courtesy CCS Inc.

Notice how much space the conventional dome light takes up, compared to a “see through” LED flat dome light. But space-savings aren’t the only benefit to flat dome lights….

3. Working distance is “fixed” by a traditional dome light

Most imaging professionals know all about camera working distance (WD) and how to set up the optics for the camera sensor, a matching lens, and the object to be imaged, to get the optical geometry right.

Now let’s take a look at light working distance (LWD). Consider the following can-top inspection scenarios:

By varying the light working distance (LWD), easily done with see-through flat LED dome lights, one can emphasize or de-emphasize features, according to application objectives – courtesy CCS Inc.

Wondering how to light your application?

Send us your sample(s)! If you can ship it, we can set up lighting in our labs to do the work for you.


IDS Ensenso B-Series: 3D Vision at Close Range

IDS has developed and released the Ensenso B-Series, ideal for short object distances as close as 21cm. Even that close it achieves a Field of View (FOV) of 30 x 26 cm and depth values accurate to 0.1 mm. While 3D machine vision isn’t new, this camera series is.

Ensenso B compact 3D camera – Courtesy IDS Imaging

Ensenso family of cameras

We introduced IDS’ Ensenso 3D cameras in 2023, bringing new stereo and structured light solutions to the portfolio. Then later in 2023 we announced IDS Ensenso C Series, which added color capabilities. That rounded out the lineup with differentiated offerings under each of the following identifiers: C, N, S, X, and XR. See all Ensenso models.

The new Ensenso B-Series

This blog focuses on the new Ensenso B-Series. The cameras are ultra-compact, and can work at close range, still delivering a large FOV.

Ensenso B mounted on robotic arm – Courtesy IDS Imaging

The compact unit contains the stereo cameras as well as the bright pattern projector used to support stereo 3D imaging. The durable housing is rated for IP65/67 protection, and is ideal for harsh industrial environments.

Maybe you need Ensenso B

Or perhaps your application would be best served by Series C, N, S, X, or XR?

IDS Imaging Ensenso 3D cameras and camera systems are built for industrial 3D imaging with a GigE interface for ease of setup. Ensenso 3D cameras are suitable for numerous 3D imaging applications including robotics, bin picking, warehouse automation and 3D measurement tasks. They are widely used for many industrial applications such as factory automation, logistics, and quality assurance.

Ensenso 3D cameras have numerous features, benefits, and options.

Please contact us for more information. We can provide you with additional technical information and help you choose the right 3D camera system for your 3D imaging application.


Sony STARVIS 2 sensors in IDS Imaging uEye cameras

Sony has evolved their successful STARVIS high-sensitivity back-illuminated sensor to the next generation STARVIS 2 sensors. This brings even wider dynamic range, and is available in three specific resolutions of 4MP, 5MP, and 12.5MP. The sensor models are respectively Sony IMX664, IMX675, and IMX676. And IDS Imaging has in turn put these sensors into their uEye cameras.

uEye USB3 C-mount camera available with any of the three Sony STARVIS 2 sensors – Courtesy IDS Imaging

Camera overview before deeper dive on the sensors

The new sensors, responsive in low ambient light to both visible and NIR, are available in IDS' compact, cost-effective uEye XCP and uEye XLS cameras: the XCP housed cameras with C-mount optics and USB3 interface, and the XLS board-level format with C/CS-mount, S-mount, and no-mount options, also with the USB3 interface.

Choose the XCP models if you want the closed zinc die-cast housing, the screwable USB micro-B connector, and the C-mount lens adaptor for use with a wide range of multi-megapixel lenses. Digital I/O connections plus trigger and flash pins may also be connected.

uEye XCP – Courtesy IDS Imaging

If you prefer a board-level camera for embedded designs, and even lower weight (from 3 – 20 grams), select one of the XLS formats. Options include C/CS-mount, S-mount, or no-mount.

XLS board level models – Courtesy IDS Imaging

All models across both camera families are Vision Standard compliant (U3V / GenICam), so you may use the IDS Peak SDK or any other compliant software.

Deeper dive on the sensors themselves

To motivate the technical discussion, let’s start with side-by-side images, only one of which was obtained with a STARVIS 2 sensor:

Left image with IMX236; right image with Sony IMX585 STARVIS 2 sensor – Courtesy Sony.

How is such a dramatic improvement possible over Sony's earlier sensors? The key is switching from traditional front-illuminated sensors to STARVIS' back-illuminated design. The back-illuminated approach collects 4.6 times more incident light by positioning the photo diodes on top of the wiring layer.

Substantially more light makes it to the photo diodes using back-illumination architecture – Courtesy Sony

See also a compelling 4 minute video showing images and streaming segments generated with and without STARVIS 2 sensors.

NIR as well as VIS sensitivity

The STARVIS 2 sensors are capable of not only conventional visible spectrum (VIS) performance, but also do well in the NIR space. If the sensor's NIR sensitivity is sufficient, one may avoid or reduce the need for supplemental NIR lighting. This is useful for license plate recognition, security, or other uses where lighting in certain spectra or intensities would disturb humans.

Left image from sensor with no NIR response; right image with STARVIS 2 sensor – Courtesy Sony.

Performance and feature highlights

The 4 MP Sony IMX664 delivers up to 48.0 fps, at 2688 x 1536 pixels, with USB3 delivering 5 Gbps. It pairs with lenses matched for up to 1/1.8″.

Sony’s IMX675, with 2592 x 1960 pixels, provides 5 MP at frame rates to 40.0 fps, via the same USB3 interface.

Finally, the 12.62 MP Sony IMX676 is ideal for microscopy with its square format of 3552 x 3552 pixels, but can still deliver up to 17.0 fps for applications with limited motion.

While there are diverse sensor features to explore in the data sheets for both the uEye XCP and uEye XLS cameras, one particularly worth noting is the High Dynamic Range (HDR) feature. These feature controls are made available in the camera, permitting bright scene segments to experience short exposures, while darker segments get longer exposure. This yields a more actionable dynamic range for your application to process.
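A toy sketch of the dual-exposure idea (the cameras perform this internally; the numbers below are invented): dark regions are taken from the long exposure and bright regions from the short one, merged onto a common linear scale:

```python
import numpy as np

ratio = 8                               # long exposure is 8x the short one
short = np.array([2, 15, 110, 240])     # bright pixels remain valid here
long_ = np.clip(short * ratio, 0, 255)  # dark pixels lifted; bright ones clip

# Use the long exposure wherever it did not saturate, else the short one
fused = np.where(long_ < 255, long_ / ratio, short)
print(fused)  # one linear scale, with dark values backed by 8x more signal
```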

No HDR in left image; with HDR feature enabled in right image – Courtesy Sony.

Direct links to the cameras

In the table below one finds each camera by model number, family, and sensor, with link to respective landing page for full details, spec sheets, etc.

Model      | Family          | Sensor
U3-34E0XCP | uEye XCP housed | SONY IMX664
U3-34F0XCP | uEye XCP housed | SONY IMX675
U3-34L0XCP | uEye XCP housed | SONY IMX676
U3-34E1XLS | uEye XLS board  | SONY IMX664
U3-34F1XLS | uEye XLS board  | SONY IMX675
U3-34L2XLS | uEye XLS board  | SONY IMX676
IDS Imaging uEye housed and board-level cameras with Sony STARVIS 2 sensors


Opto Engineering HC 360° Hypercentric Lenses

Inspect the inner sides and bottom of hollow objects simultaneously with Opto Engineering’s HC 360° hypercentric lenses.

HC 360° hypercentric lenses – Courtesy Opto Engineering

The optical rays pass through the narrow openings of hollow objects (pipes, bottles, cans, vials, etc.) without the need to rotate the object, use a probe, or use multi-camera configurations. HC hypercentric lenses are used in diverse inspection applications including the beverage, pharmaceutical, and cosmetics industries.

Courtesy Opto Engineering

See landing page for all 8 members of the Opto Engineering HC family

…including part number, image circle size and sensor pairings, FOV, and spec sheet links. And corresponding quote-request links.

Example of glass bottle inspection with HCSI lens – Courtesy Opto Engineering
Contact us for a quote

If one didn't know about 360° hypercentric lenses…

… one might attempt a multi-camera or line scan solution. But there are drawbacks to each of those approaches.

Drawbacks of a multicamera solution – Courtesy Opto Engineering

OK, what about linescan? Linescan is known to be good for high resolution images of elongated objects. Yes, but one would need a separate camera for the sides vs. the bottom of the object. Most significant, however, is the motion requirement essential to a linescan design: the camera or object must rotate to expose all "slices", while the object is concurrently progressing down the line.

Linescan continuous motion requirement not compatible with 360° view requirement – Courtesy Opto Engineering

Opto Engineering 360° lenses check all the boxes

Since line scan really isn’t a solution, and a multicamera approach is complex at best, for comprehensive inspection of the inner sides and bottom of hollow objects, these Opto Engineering 360° lenses offer an attractive solution.

Pros and cons of different approaches when 360° view is required – Courtesy Opto Engineering


See these other blogs on Opto products!


FPD-Link III vs GMSL2 vs CSI-2 vs USB considerations for deployment

New interface options arrive so frequently that trying to keep up can feel like drinking water from a fire hose. While data transfer rates are often the first characteristic identified for each interface, it’s important to also note distance capabilities, power requirements, EMI reduction, and cost.

Which interfaces are we talking about here?

This piece is NOT about GigE Vision or Camera Link. Those are both great interfaces suitable for medium to long-haul distances, are well-understood in the industry, and don’t require any new explaining at this point.

We’re talking about embedded and short-haul interface considerations

Before we define and compare the interfaces, what’s the motivation? Declining component costs and rising performance are driving innovative vision applications such as driver assistance cameras and other embedded vision systems. There is “crossover” from formerly specialized technologies into machine vision, with new camera families and capabilities, and it’s worth understanding the options.

Alvium camera with FPD-Link or GMSL interface – Courtesy Allied Vision Technologies

How shall we get a handle on all this?

Each interface has standards committees, manufacturers, volumes of documentation, conferences, and catalogs behind it. One could go deep on any of this. But this is meant to be an introduction and overview, so we take the following approach.

  • Let’s identify each of the 4 interfaces by name, acronym, and a few characteristics
  • While some of the links jump to a specific standard’s full evolution (e.g. FPD-Link including Gen 1, 2, and 3), per the blog header it’s the current standards as of Fall 2024 that are compelling for machine vision applications: CSI-2, GMSL2, and FPD-Link III, respectively
  • Then we compare and contrast, with a focus on rules of thumb and practical guidance

If at any point you've had enough reading and prefer to just talk it through, contact us.

FPD-Link III – Flat Panel Display Link

A free and open standard, FPD-Link has classically been used to connect a graphics processing unit (GPU) to a laptop screen, LCD TV, or similar display.

FPD-Link automotive applications schematic – Courtesy Texas Instruments

FPD-Link has subsequently become widely adopted in the automotive industry, for backup cameras, navigation systems, and driver-assistance systems. FPD-Link exceeds the automotive standards for temperature ranges and electrical transients, making it attractive for harsh environments. That’s why it’s interesting for embedded machine vision too.

GMSL2 – Gigabit Multimedia Serial Link

GMSL – Courtesy Analog Devices

GMSL is widely used for video distribution in cars. It is an asymmetric full duplex technology: asymmetric in that it's designed to move larger volumes of data downstream and smaller volumes upstream, plus power and control data, bi-directionally. Cable length can be up to 15m.

CSI-2 – Camera Serial Interface (Gen. 2)

CSI-2 registered logo – Courtesy mipi alliance

As the Mobile Industry Processor Interface (MIPI) standard for communications between a camera and host processor, CSI-2 is the sweet spot for applications in the CSI standards. CSI-2 is attractive for low power requirements and low electromagnetic interference (EMI). Cable length is limited to about 0.5m between camera and processor.

USB – USB3 Vision

USB3 Vision registered logo – Courtesy Association for Advancing Automation

USB3 Vision is an imaging standard for industrial cameras, built on top of USB 3.0. USB3 Vision has the same plug-and-play characteristics of GigE Vision, including power over the cable, and GenICam compliance. Passive cable lengths are supported up to 5m (greater distances with active cables).

Compare and contrast

In the spirit of keeping this piece as a blog, in this compare-and-contrast segment we call out some highlights and rules-of-thumb. That, together with engaging us in dialogue, may well be enough guidance to help most users find the right interface for your application. Our business is based upon adding value through our deep knowledge of machine vision cameras, interfaces, software, cables, lighting, lensing, and applications.

CABLE LENGTHS COMPARED(*):

  • CSI-2 is limited to 0.5m
  • USB3 Vision passive cables to 5m
  • FPD-Link distances may be up to 10m
  • GMSL cables may be up to 15m

(*) The above guidance is rule-of-thumb. There can be variances between manufacturers, system setup, and intended use, so check with us for an overall design consultation. There is no cost to you – our sales engineers are engineers first and foremost.

BANDWIDTH COMPARED(#):

  • USB3 to 3.6 Gb/sec
  • FPD-Link to 4.26 Gb/sec
  • GMSL to 6 Gb/sec
  • CSI-2 to 10 Gb/sec

(#) Bandwidth can also vary by manufacturer and configuration, especially for MIPI and SerDes [Serializer/Deserializer] designs, and per chipset choices. Check with us for details before finalizing your choices.
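The cable-length and bandwidth rules of thumb above can be folded into a tiny first-pass filter. This is our own illustration using the ballpark figures from this post; verify against the specific chipset and manufacturer before committing:

```python
INTERFACES = {  # name: (max cable length in m, max bandwidth in Gb/s)
    "CSI-2":        (0.5, 10.0),
    "USB3 Vision":  (5.0,  3.6),
    "FPD-Link III": (10.0, 4.26),
    "GMSL2":        (15.0, 6.0),
}

def candidates(cable_m: float, needed_gbps: float) -> list:
    return [name for name, (max_m, max_gbps) in INTERFACES.items()
            if cable_m <= max_m and needed_gbps <= max_gbps]

print(candidates(3.0, 2.0))   # ['USB3 Vision', 'FPD-Link III', 'GMSL2']
print(candidates(12.0, 5.0))  # ['GMSL2']
```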

RULES OF THUMB:

  • CSI-2 often ideal if you are building your own instrument(s) with short cable length
  • USB3 is also good for building one’s own instruments when longer distances are needed
  • FPD-Link has great EMI characteristics
  • GMSL is also a good choice for EMI performance
  • If torn between FPD-Link vs. GMSL, note that there are more devices in the GMSL universe, which might skew towards easier sourcing for other components


New Alvium cameras with Sony SenSWIR InGaAs sensors

Short Wave Infrared (SWIR) imaging enables applications in a segment of the electromagnetic spectrum we can’t see with the human eye – or traditional CMOS sensors. See our whitepaper on SWIR camera concepts, functionality, and application fields.

Until recently, SWIR imaging tended to require bulky cameras, sometimes with cooling, which were not inexpensive. Cost-benefit analysis still justified such cameras for certain applications, but made it challenging to conceive of high-volume or embedded systems designs.

Enter Sony's IMX992/993 SenSWIR InGaAs sensors, now in Allied Vision Technologies' Alvium camera families. These sensors "see" both SWIR and visible portions of the spectrum. So deploy them for SWIR alone, as capable, compact, cost-effective SWIR cameras. Or design applications that benefit from both visible and SWIR images.

Alvium configuration and interface options – Courtesy Allied Vision Technologies

Camera models and options first

Both sensors, the 5.3 MP Sony IMX992 and the 3.2 MP Sony IMX993, are available in the Allied Vision Alvium 1800 series with USB3 or MIPI CSI-2 interfaces, as well as in the Alvium G5 series with 5GigE interfaces.

And per the Alvium Flex option, besides the housed presentation available for all 3 interfaces, both the USB3 and CSI-2 versions may be ordered with bare board or open-back configuration, ideal for embedded designs.

Broken out by part number the camera models are:

More about the Sony IMX992 / IMX993 sensors

The big brother IMX992 at 5.3 MP and sibling IMX993 at 3.2 MP share the same underlying design and features. Both have 3.45 µm square pixels. Both are sensitive across a wide spectral range from 400 nm – 1700 nm with impressive quantum efficiencies. Both provide high frame rates – to 84 fps for the 5.3 MP camera, and to 125 fps at 3.2 MP.

Distinctive features HCG and DRRS

Sony provides numerous sensor features to the camera designer, which Allied Vision in turn makes available to the user. Two new features of note include High-Conversion-Gain (HCG) and Dual-Read-Rolling-Shutter (DRRS). Consider the images below, to best understand these capabilities:

Illustrating the benefits of HCG and DRRS modes – Courtesy Sony

With the small pixel size of 3.45 µm, an asset in terms of compact sensor size, Sony innovated noise control features to enhance image quality. Consider the three images above.

The leftmost was made with Sony's previously-released IMX990. It's been a popular sensor and it's still suitable for certain applications. But it doesn't have the HCG or DRRS features.

The center image utilized the IMX992 High-Conversion-Gain feature. HCG reduces noise by amplifying the signal immediately after light is converted to an electrical signal. This is ideal when shooting in dark conditions. In bright conditions one may use Low-Conversion-Gain (LCG), essentially “normal” mode.

The rightmost image was generated using Dual-Read-Rolling-Shutter mode in addition to HCG. DRRS mode delivers a pair of images. The first contains the imaging signal together with the embedded noise. The second contains just the noise components. The camera designer can subtract the latter from the former to deliver a synthesized image with approximately 3/4 of the noise eliminated.
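Conceptually, the designer-side subtraction looks like the idealized sketch below. Synthetic arrays stand in for the two reads; on a real sensor only part of the noise is common to both reads, hence the roughly 3/4 figure rather than total elimination:

```python
import numpy as np

rng = np.random.default_rng(1)
scene = rng.uniform(100, 200, (4, 4))     # "true" image
shared_noise = rng.normal(0, 10, (4, 4))  # noise captured by both reads

read_signal = scene + shared_noise  # first read: signal plus noise
read_noise = shared_noise           # second read: noise components only

clean = read_signal - read_noise    # subtraction in the designer's pipeline
print(np.allclose(clean, scene))    # True in this idealized toy
```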

Alvium’s SWaP+C characteristics ideal for OEM systems

With small Size, low Weight, low Power requirements, and low Cost, Alvium SWIR cameras fit the SWaP+C requirements. OEM system builders need or value each of those characteristics to build cost-effective embedded and machine vision systems.


Teledyne DALSA Linea 9K Line scan NUV+VIS

Some applications require line scan cameras, where the continuously moving “product” is passed below a sensor that is wide in one dimension and narrow in the other, and fast enough to keep up with the pace of motion. See our piece on area scan vs. line scan cameras for an overview.

Teledyne DALSA’s new Linea HS 9k BSI Near ultraviolet (NUV) / visible camera is such a line scan camera, at 9216 x 192 resolution, and speeds to 400 kHz (mono mode) and 200 kHz (HDR mode).

Linea HS 9k BSI (NUV) / visible camera – Courtesy Teledyne DALSA

Visible spectrum as well as Near Ultraviolet (NUV)

The camera uses Teledyne DALSA's own charge-domain CMOS TDI sensor with a 5×5 μm pixel size. In addition to the visible spectrum of 400 nm – 700 nm, the sensor delivers good quantum efficiency down to 300 nm, qualifying it for Near Ultraviolet (NUV) applications as well.

Backside illumination enhances performance

Backside illumination (BSI) improves quantum efficiency (QE) in both the UV and visible wavelengths, boosting the signal-to-noise ratio.

Interface

The Linea HS 9k BSI camera uses the CLHS (Camera Link High Speed) data interface to provide a single-cable solution for data, power, and strobe, and Active Optical Cable (AOC) connectors support distances up to 100m. That avoids the need for a repeater while achieving data reliability and cost control. See an overview of the Camera Link standards, or see all of 1stVision's Camera Link HS cameras.

Applications

Delivering high-speed, high-sensitivity images in low light conditions, the Linea HS 9k is used in applications such as:

  • PCB inspection
  • Wafer inspection
  • Digital pathology
  • Gene sequencing
  • FPD inspection

Linea HS 9k suitable for diverse applications – Courtesy Teledyne DALSA

Request a quote

The part number for the Linea HS 9k BSI camera is DALSA HL-HM-09K40H.

Lots of line scan cameras to choose from

Teledyne DALSA’s Linea families have a variety of interfaces, resolutions, frame rates, pixel sizes, and options. So if the new model isn’t the right one for your needs, browse the link at the start of this sentence, or ask us to guide you among the many choices.


Tips on selecting a telecentric lens

Why might I want a telecentric lens?

Metrology, when done optically, requires that an object’s representation be invariant to the distance and position in the field of view. Telecentric lenses deliver precisely that capability. Telecentric lenses only “pass” incoming light rays that are parallel to the optical axis of the lens. That’s helpful because we measure the distance between those parallel rays to measure objects without touching them.

Telecentric lens eliminates the parallax effect – Courtesy Edmund Optics

Parallax effect

Human vision and conventional lenses have angular fields of view. That can be very useful, especially for depth perception. Our ability to safely drive a car in traffic derives in no small part from not just identifying the presence of other vehicles and hazards, but also from gauging their relative nearness to our position. In that context parallax delivers perspective, and is an asset!

But with angular fields of view we can only guess at the size of objects. Sure, if we see a car and a railroad engine side by side, we might guess that the car is about 5 feet high and the railroad engine perhaps 15 or 16 feet. In metrology we want more precision than to the nearest foot! In detailed metrology such as precision manufacturing we want to differentiate to sub-millimeter accuracy. Telecentric lenses to the rescue!

Assorted telecentric lenses – Courtesy Edmund Optics

Telecentric Tutorial

Telecentric lenses only pass incoming light rays that are parallel to the optical axis of the lens. It’s not that the oblique rays don’t reach the outer edge of the telecentric lens. Rather, it’s about the optical design of the lens in terms of what it passes on through the other lens elements and onto the sensor focal plane.

Let’s get to an example. In the image immediately below, labeled “Setup”, we see a pair of cubes positioned with one forward of the other. This image was made with a conventional (entocentric) lens, whereby all three dimensions appear much the same as for human vision. It looks natural to us because that’s what we’re used to. And if we just wanted to count how many orange cubes are present, the lens used to make the setup image is probably good enough.

Courtesy Edmund Optics.

But suppose we want to measure the X and Y dimensions of the cubes, to see if they are within rigorous tolerance limits?

An object-space telecentric lens focuses the light without the perspective of distance. Below, the image on the left is the “straight on” view of the same cubes positioned as in “Setup” above, taken with a conventional lens. The forward cube appears larger, when in fact we know it to be exactly the same size.

The rightmost image below was made with a telecentric lens, which effectively collapses the Z dimension, while preserving X and Y. If measuring X and Y is your goal, without regard to Z, a telecentric lens may be what you need.

Courtesy Edmund Optics.

How to select a telecentric lens?

As with any engineering challenge, start by gathering your requirements. Let’s use an example to make it real.

Object of interest is the circled chip – Image courtesy Edmund Optics

Object size

What is your object size? What is the size of the surrounding area in which successive instances of the target object will appear? This will determine the Field of View (FOV). In the example above, the chip is 6mm long and 4mm wide, and the boards always present within 4mm. So we’ll assert 12mm FOV to add a little margin.

Pixels per feature

In theory, one might get away with just two pixels per feature. In practice it’s best to allow 4 pixels per feature. This helps to identify separate features by permitting space between features to appear in contrast.

Minimum feature size

The smallest feature we need to identify is the remaining critical variable to set up the geometry of the optical parameters and imaging array. For the current example, we want to detect features as small as 25µm. That 25µm feature might appear anywhere in our 12mm FOV.

Example production image

Before getting into the calculations, let’s take a look at an ideal production image we created after doing the math, and pairing a camera sensor with a suitable telecentric lens.

Production image of the logic chip – Courtesy Edmund Optics

The logic chip image above was obtained with an Edmund Optics SilverTL telecentric lens – in this case the 0.5X model. More on how we got to that lens choice below. The key point for now is “wow – what a sharp image!”. One can not only count the contacts, but knowing our geometry and optical design, we can also inspect them for length, width, and feature presence/absence using the contrast between the silver metallic components against the black-appearing board.

Resuming “how to choose a telecentric lens?”

So you've got an application in mind for which telecentric lens metrology looks promising. How do you take the requirements figures we determined above, and map those to camera sensor selection and a corresponding telecentric lens?

Method 1: Ask us to figure it out for you.

It’s what we do. As North America’s largest stocking distributor, we represent multiple camera and lens manufacturers – and we know all the products. But we work for you, the customer, to get the best fit to your specific application requirements.


Method 2: Take out your own appendix

Let’s define a few more terms, do a little math, and describe a “fitting” process. Please take a moment to review the terms defined in the following graphic, as we’ll refer to those terms and a couple of the formulas shortly.

Telecentric lens terms and formulas – Courtesy Edmund Optics

For the chip inspection application we’re discussing, we’ve established the three required variables:

H = FOV = 12mm

p = # pixels per feature = 4

µ = minimum feature size = 25µm

Let’s crank up the formulas indicated and get to the finish line!

Determine required array size = image sensor

Array size formula for the chip inspection example – Courtesy Edmund Optics

So we need about 1900 pixels horizontally: (12 mm ÷ 25 µm) × 4 pixels per feature = 1920. With lens selection, unless one designs a custom lens, choosing an off-the-shelf lens that's close enough is usually a reasonable thing to do.

Reviewing a catalog of candidate area scan cameras with horizontal pixel counts around 1900, we find Allied Vision Technologies' (AVT) Manta G-319B, where G indicates a GigE Vision interface and B means black-and-white, as in monochrome (vs. the C model that would be color). This camera uses a sensor with 2064 pixels in the horizontal dimension, so that's a pretty close fit to our 1920 calculation.

Determine horizontal size of the sensor

H’ is the horizontal dimension of the sensor – Courtesy Edmund Optics

Per the Manta G-319 specs, each pixel is 3.45µm wide, so 2064 × 3.45µm ≈ 7.1mm sensor width.

Determine magnification requirements

The last formula tells us the magnification factor to fit the values for the other variables:

Magnification = sensor width / FOV – Courtesy Edmund Optics

Choose a best-fit telecentric lens

Back to the catalog. Consider the Edmund Optics SilverTL Series. These C-mount lenses support 1/2″, 2/3″, and 1/1.8″ sensors, with pixels as small as 2.8µm – a promising fit for the Manta G-319B’s 1/1.8″ sensor and 3.45µm pixels. Scrolling down the SilverTL Series specs, we land on the 0.50X SilverTL entry:

Some members of the SilverTL telecentric lens series – Courtesy Edmund Optics

The 0.5x magnification is not a perfect fit to the 0.59x calculated value. Likewise the 14.4mm FOV is slightly larger than the 12mm calculated FOV. But for high-performance ready-made lenses, this is a very close fit – and should perform well for this application.

Optics fitting is part science and part experience – and of course one can “send in samples” or “test drive” a lens to validate the fit. Take advantage of our experience in helping customers match application requirements to lens and camera selection, as well as lighting, cabling, software, and other components.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about.

Color models join Teledyne DALSA AxCIS Line Scan Series

As anticipated when Teledyne DALSA’s AxCIS Line Scan Series was introduced a few months ago, color models have now been released. The “CIS” in the product name stands for Contact Image Sensor. In fact a CIS doesn’t actually contact the object being imaged – but it’s so close to touching that the term has become vision industry jargon to help us orient to the category.

Courtesy Teledyne DALSA

What can CIS do for me?

Think “specialized line scan”. Line scan in that it’s a linear array of sensors (vs. an area scan camera), requiring motion to create each successive slice. And “specialized” in that the CIS is positioned very close to the target, with low power requirements and excellent price-performance characteristics.

Why is the new color offering interesting?

Just as with area scan imaging, if the application can be solved with monochrome sensors, that’s often preferred – since monochrome sensors, lensing, and lighting are simpler. If one just needs edge detection and contrast achievable with monochrome – stay monochrome! BUT sometimes color is the sole differentiator for an application, so the addition of color members to the AxCIS family can be a game changer.

Why Teledyne DALSA AxCIS in particular?

A longtime leader in line scan imaging, Teledyne DALSA introduced the AxCIS series in 2023 and continues to release new models and features. Vision Systems Design named the AxCIS family of high-speed, high-resolution integrated imaging modules a 2024 Gold Honoree.

Courtesy Vision Systems Design

AxCIS Series Key Attributes

  • Compact modules integrating sensors, lenses and lights
  • Option to customize the integrated lighting for specific CRI to aid in color measurement.
  • Current width choices 400mm (16 inches) or 800mm (32 inches)
  • Customizable lengths coming, in addition to the 400mm and 800mm models
  • CIS covers the entire FOV – no missing pixels and no interpolation – allowing accurate measurements. Competing designs have gaps between sensors, leaving unimaged areas that cannot be measured properly
  • Selectable resolutions up to 900dpi
  • Gradient-index lenses eliminate parallax and are essentially telecentric (great for gauging applications)
  • Binning support – summed pixels provide brighter images
  • 4 available AOIs
  • CameraLink HS interface
  • Up to 120 kHz line rates … and cable lengths to 300m (see the back-of-envelope data-rate math after this list)
  • No alignment or calibration required – lighting and sensors are pre-aligned
  • HDR imaging with dual exposure mode


See specs for specific models in the Teledyne DALSA AxCIS Series.

Contact us for a quote

HDR – a closer look

HDR Imaging – High Dynamic Range – Courtesy Teledyne DALSA

By using two adjacent rows of sensors, one row may take a short exposure to capture the rapidly saturating (bright) portions of an image. The second row takes a longer exposure, capturing nuanced pixel values in areas that would otherwise be underexposed. The two are then combined into a composite image with a wider dynamic range and more useful information for the processing algorithms.

Applications

While not limited to the following, popular applications include:

Popular AxCIS applications – Courtesy Teledyne DALSA

Want to see other Teledyne DALSA imaging products?

Teledyne DALSA is long-recognized as a leader and innovator across a diverse range of imaging products – click here to see all Teledyne DALSA products.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about. 

AT – Automation Technology XCS 3D Sensor Laser Profiler

Ideal for industrial applications requiring precision, reliability, high speed, and high resolution, AT – Automation Technology’s XCS 3D sensor laser profiler 3070 WARP achieves speeds up to 200 kHz with the dual head model. Even the single head can achieve 140 kHz. The key innovations in the XCS series lie in its laser-line projection technology.

XCS 3D sensor laser profiler – Courtesy AT – Automation Technology

Aren’t all 3D sensor laser profilers similar?

Many indeed share underlying similarities. Often they use triangulation to make their measurements. The output is a 3D profile (or point cloud) of a target, built up from rapid, stepwise laser “slices” across the X dimension as the target (or sensor) moves in the Y dimension. Triangulation determines variances in the Z dimension based on how the laser angle reflects from the target surface coordinate onto the sensor. For a brief refresher on the concepts, see our overview article and illustrations.

What’s special about AT – Automation Technology’s XCS Series?

Key attributes are shown in the video and called out in the following text.

30 second overview of XCS series

Homogeneous thickness laser line

Using special optics, the XCS series projects a laser line of homogeneous thickness across the target surface. AT – Automation Technology uses Field Curvature Correction (FCC) to create the uniform projection, overcoming the so-called line “bow” effect. This enables precise scanning of even small structures – regardless of whether such features are in the middle or edge of the laser line. What’s the benefit for the customer? It enables applications with high repeatability and accuracy – such as for ball grid arrays (BGAs), pin grid arrays (PGAs), and surface mount devices (SMDs).

Clean Beam Technology

The XCS Series utilizes AT – Automation Technology’s own Clean Beam function to ensure a precisely focused laser beam, effectively suppressing side-lobe noise interference. Clean Beam also assures a uniform intensity distribution, which contributes to the reliably consistent results.

Scanning a pin-grid array (PGA) – Courtesy AT – Automation Technology

Optional Dual Head to avoid occlusion

  • X FOV: approx. 53mm
  • X resolution: approx. 13µm
  • Z range: up to 20mm
  • Z resolution: down to 0.4µm

GigE Vision interface, GenICam compliant

For plug and play configuration with networking cables and adapter cards familiar to many, the GigE Vision interface is one of the most popular machine vision standards. And GenICam compliance means you can use AT – Automation Technology’s software or diverse 3rd party SDKs.

Additional features

Automatic RegionTracking, Automatic RegionSearch, Multiple Regions, MultiPart, AutoStart, History Buffer, Multi-Slope, MultiPeak

contact us

Is the XCS 3D sensor laser profiler best for your application?

AT – Automation Technology is confident there are demanding users for whom the XCS 3D laser profiler delivers just the right value proposition. Is that what your application requires? If not, AT also offers three other laser profiler families – the CS Series, the MCS Series, and the ECS Series. It all comes down to speed and resolution requirements, field of view (FOV), and cost.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about. 

Machine vision software → Sapera Processing

Why read this article?

Generic reason: Compact overview of machine vision software categories and functionality.

Cost-driven reason: Discover that powerful software comes bundled at no cost to users of Teledyne DALSA cameras and frame grabbers. Not just a viewer and SDK – though of course those – but select image processing software too.

contact us

Software – build or buy?

Without software, machine vision is nowhere. The whole point of machine vision is to acquire an image and then process it with an algorithm that achieves something of value.

Whether it’s presence/absence detection, medical diagnostics, thermal imaging, autonomous navigation, pick-and-place, automated milling, or myriad other applications, the algorithm is expressed in software.

You might choose a powerful software library needing “just” parameterization by the user – or AI – or a software development kit (SDK) permitting nearly endless scope for programming innovation. Whichever you choose, it’s the software that does the processing and delivers the results.

In this article, we survey build vs. buy arguments for several types of machine vision software. We make a case for Teledyne DALSA’s Sapera Software Suite – but it’s a useful read for anyone navigating machine vision software choices – wherever you choose to land.

Sapera Vision Software Suite – Courtesy Teledyne DALSA

Third-party or vision library from same vendor?

Third-party software

If you know and love some particular third-party software, such as LabView, HALCON, MATLAB, or OpenCV, you may have developed code libraries and in-house expertise on which it makes sense to double-down. Even if there are development or run time licensing costs. Do the math on total cost of ownership.

Same vendor for camera and software

Unless the third-party approach described above is your clear favorite, consider the benefits of one-stop shopping for your camera and your software. Benefits include:

  • License pricing: SDK and run-time license costs are structured to favor the customer who sources cameras and software from the same provider.
  • Single-source simplicity: Since the hardware and software come from the same manufacturer, it just works. They’ve done all the compatibility validation in-house. And the feature naming used to control the camera fully aligns with the function calls used in the software.
  • Technical support: When it all comes from one provider, if you have support questions there’s no finger pointing.

For clarity on the terminology: you – the customer/client – are the first party; it’s all about you. The camera manufacturer, whose camera and sensor are at the heart of image acquisition, is the second party. Whether licensed software should come from a third party or from the camera manufacturer is the question we’ve just weighed.

contact us

Types/functions of machine vision software

While there are all-in-one and many-in-one packages, some software is modularized to fulfill certain functions, and may come free, bundled, discounted, open-source, or priced, according to market conditions and a developer’s business model. Before we get into commercial considerations, let’s briefly survey the functional side, including each of the following categories in turn:

  • Viewer / camera control
  • Acquisition control
  • Software development kit (SDK)
  • Machine vision library
  • AI training/learning as an alternative to programming

Point of view: Teledyne DALSA’s Sapera software packages by capability

Viewer / camera control – included in Sapera LT

When bringing a new camera online, after attaching the lens and cable, one initially needs to configure and view. Regardless of whether using GigE Vision, CameraLink, CameraLink HS, USB3 Vision, CoaXPress, or other standards, one must typically assign the camera a network address and set some camera parameters to establish communication.

A graphical user interface (GUI) viewer / camera-control-tool makes it easy to quickly get the camera up and running. The viewer capability permits an image stream so one can get the camera aligned, adjust aperture, focus, and imaging modes.

Every camera manufacturer and software provider offers such a tool. Teledyne DALSA calls theirs CamExpert, and it’s part of Sapera LT. It’s free for users of Teledyne DALSA 2D/3D cameras and frame grabbers.

CamExpert – Courtesy Teledyne DALSA

Acquisition control – included in Sapera LT

The next step up the chain is referred to as acquisition control. On the camera side this is about controlling the imaging modes and parameters to get the best possible image before passing it to the host PC. So, one might select a color mode, whether to use HDR or not, gain controls, framerate or trigger settings, and so on.

On the communications side, one optimizes depending on whether a single camera is on the databus or bandwidth is being shared. Any vendor offering acquisition control software must provide all these controls.

Controlling image acquisition with GUI tools – Courtesy Teledyne DALSA

Those with Sapera LT can utilize Teledyne DALSA’s patented TurboDrive, realizing speed gains of 1.5x to 3x under the GigE Vision protocol. This driver brings added bandwidth without needing special programming.

Software development kit (SDK) – included in Sapera LT

GUI viewers are great, but often one needs at least a degree of programming to fully integrate and control the acquisition process. Typically one uses a software development kit (SDK) for C++, C#, .NET, and/or Standard C. And one doesn’t have to start from scratch – SDKs almost always include programming examples and projects one may adapt and extend, to avoid re-inventing the wheel.

Teaser subset of code samples provided – Courtesy Teledyne DALSA

Sapera Vision Software allows royalty-free run-time licenses for select image processing functions when combined with Teledyne DALSA hardware. If you’ve just got a few cameras, that may not be important to you. But if you are developing systems for sale to your own customers, this can bring substantial economies of scale.

Machine vision library

So you’ve got the image hitting the host PC just fine – now what? One needs to programmatically interpret the image. Unless you’ve thought up a totally new approach to image processing, there’s an excellent chance your application will need one or more of edge detection, bar code reading, blob analysis, flipping, rotation, cross-correlation, frame-averaging, calibration, or other standard methods.

A machine vision library is a toolbox containing many of these functions pre-programmed and parameterized for your use. It allows you to marry your application-specific insights with proven machine vision processes, so that you can build out the value-add by standing on the shoulders of machine vision developers who provide you with a comprehensive toolbox.

No surprise – Teledyne DALSA has an offering in this space too. It’s called Sapera Processing. It includes all we’ve discussed above in terms of configuration and acquisition control – and it adds a suite of image processing tools. The suite’s tools are best understood across three categories:

  • Calibration – advanced configuration including compensation for geometric distortion
  • Image processing primitives – convolution functions, geometry functions, measurement, transforms, contour following, and more
  • Blob analysis – uses contrast to segment objects in a scene; determine centroid, length and area; min, max, and standard deviation; thresholding, and more
Just some of the free included image processing primitives – Courtesy Teledyne DALSA

So unless you skip ahead to the AI training/learning features of Astrocyte (next section), Sapera Processing is the programmer’s comprehensive toolbox to do it all. Viewer, camera configuration, acquisition control, and image evaluation and processing functions. From low-level controls if you want them, through parameterized machine vision functions refined, validated, and ready for your use.

AI training/learning as an alternative to programming

Prefer not to program if possible? Thanks to advances in AI, many machine vision applications may now be trained on good vs. bad images, such that the application learns. This enables production images to be correctly processed based on the training sets and the automated inference engine.

No coding required – Courtesy Teledyne DALSA

Teledyne DALSA’s Astrocyte package makes training simple and cost-effective. Naturally one can combine it with parameterized controls and/or SDK programming, if desired. See our recent overview of AI in machine vision – and Astrocyte.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection.  With a large portfolio of cameraslensescablesNIC cards and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about. 

Lens extension tube or close up ring increases magnification

Summary at a glance:

Need a close-up image your preferred sensor and lens can’t quite deliver? A glass-free extension tube or close up ring can change the optics to your advantage.

C-mount extension tube kit – Courtesy Edmund Optics

What’s an extension tube?

An extension tube is a metal tube one positions between the lens and the camera mount. It comes with the appropriate threads for both the lens and camera mount, so mechanically it’s an easy drop-in procedure.

By moving the lens away from the image plane (the sensor), the magnification is increased. Sounds like magic! Well, almost. A little optical calculation is required – or use of formulas or tables prepared by others. It’s not the case that any tube of any length will surely yield success – one needs to understand the optics, or bring in an expert who does.

S-mount extension tube kit – Courtesy Edmund Optics

Note: One can also just purchase a specific length extension tube. We’ve shown images of kits to make it clear there are lots of possibilities. And some may want to own a kit in order to experiment.

Example

Sometimes an off-the-shelf lens matched to the sensor and camera you prefer suits your optical needs as well as your available space requirements. By available space we mean clearance from moving parts, or ability to embed inside an attractively sized housing. Lucky you.

But you might need more magnification than one lens offers, yet not as much as the next lens in the series. Or you want to move the camera and lens assembly closer to the target. Or both. Read on to see how extension rings at varying step sizes can achieve this.

Navigating the specifications

Once clear on the concept, it’s often possible to read the datasheets and accompanying documentation, to determine what size extension tube will deliver what results. Consider, for example, Moritex machine vision lenses. Drilling in on an arbitrary lens family, look at Moritex ML-U-SR Series 1.1″ Format Lenses, then, randomly, the ML-U1217SR-18C.

ML-U1217SR-18C 12mm lens optimized for 3.45um pixels and 12MP sensors – Courtesy Moritex

If you’ve clicked onto the page last linked above, you should see a PDF icon labeled “Close up ring list“. It’s a rather large table showing which extension tube lengths may be used with which members of the ML-U-SR lens series, to achieve what optical changes in the Field-Of-View (FOV). Here’s a small segment cropped from that table:

Field-Of-View changes with extension tubes of differing lengths – Courtesy Moritex

Compelling figures from the chart above:

Consider the f12mm lens in the rightmost column, and we’ll call out some highlights.

Extension tube length (mm) | WD far (mm) | Magnification
0                          | 100         | 0.111x
2                          | 58.2        | 0.164x
5                          | 13.5        | 0.414x
5mm tube yields 86% closer WD and 4x magnification!

Drum roll here…

Let’s expand on that table caption above for emphasis. For this particular 12mm lens, by using a 5mm extension tube, we can move the camera 86% closer to the target than with the unaugmented lens. And we nearly quadruple the magnification, from 0.111x to 0.414x. If you are constrained to a tight space, whether for a one-off system, or while building systems you’ll resell at scale, those can be game-changing factors.

contact us

Any downside?

As is often the case with engineering and physics, there are tradeoffs one should be aware of. In particular:

  • The light reaching the focal plane is reduced, per the inverse square law – if you have sufficient light, this may have no negative consequences at all. But pushed to the limit, resolution can be impacted by diffraction.
  • Reduced depth of field – does the Z dimension have a lot of variance for your application? Is your application working with the center segment of the image or does it also look at the edge regions where field curvature and spherical aberrations may appear?

We do this

Our team are machine vision veterans, with backgrounds in optics, hardware, lighting, software, and systems integration. We take pride in helping our customers find the right solution – and they come back to us for project after project. You don’t have to get a graduate degree in optics – we’ve done that for you.

Give a brief idea of your application and we’ll provide options.


contact us

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about. 

Monochrome light better for machine vision than white light

Black and white vs. color sensor? Monochrome or polychrome light frequencies? Visible or non-visible frequencies? Machine vision systems builders have a lot of choices – and options!

Let’s suppose you are working in the visible spectrum. You recall the rule of thumb to favor monochrome over color sensors when doing measurement applications – for same sized sensors.

So you’ve got a monochrome sensor that’s responsive in the range 380 – 700 nm. You put a suitable lens on your camera matched to the resolution requirements and figure “How easy, I can just use white light!”. You might have sufficient ambient light. Or you need supplemental LED lighting and choose white, since your target and sensor appear fine in white light – why overthink it? – you think.

Think again – monochrome may be better

Polychromatic (white) light is composed of all the colors of the ROYGBIV visible spectrum – red, orange, yellow, green, blue, indigo, and violet – including all the hues within each of those segments. We humans perceive it as simple white light, but glass lenses and CMOS sensor pixels see things a bit differently.

Chromatic aberration is not your friend

Unless you are building prisms intended to separate white light into its constituent color groups, you’d prefer a lens that performs “perfectly” to focus light from the image onto the sensor, without introducing any loss or distortion.

Lens performance in all its aspects is a worthwhile topic in its own right, but for purposes of this short article, let’s discuss chromatic aberration. The key point is that when light passes through a lens, it refracts (bends) differently in correlation with the wavelength. For “coarse” applications it may not be noticeable; but trace amounts of arsenic in one’s coffee might go unnoticed too – inquiring minds want to understand when it starts to matter.

Take a look at the following two-part illustration and subsequent remarks.

Transverse and longitudinal chromatic aberration – Courtesy Edmund Optics

In the illustrations above:

  • C denotes red light at 656 nm
  • d denotes yellow light at 587 nm
  • F denotes blue light at 486 nm

Figure 1, showing transverse chromatic aberration, tells us that refraction varies by wavelength, shifting the focal point(s). If a given point on your imaged object reflects or emits light at two or more of the wavelengths, the focal point of one might land in a different sensor pixel than the other, creating blur and confusion about how to resolve the point. One wants the optical system to honor the real-world geometry as closely as possible – we don’t want a scatter plot where a single point could be attained.

Figure 2 shows longitudinal chromatic aberration, which is another way of telling the same story. The minimum blur spot is the span between whatever outermost rays correspond to wavelengths occurring in a given imaging instance.

We could go deeper, beyond single lenses to compound lenses; dig into advanced optics and how lens designers try to mitigate for chromatic aberration (since some users indeed want or need polychromatic light). But that’s for another day. The point here is that chromatic aberration exists, and it’s best avoided if one can.

So what’s the solution?

The good news is that a very easy way to completely overcome chromatic aberration is to use a single monochromatic wavelength! If your target object reflects or emits a given wavelength, to which your sensor is responsive, the lens will refract the light from a given point very precisely, with no wavelength-induced shifts.

Making it real

The illustration below shows that certain materials reflect certain wavelengths. Utilize such known properties to generate contrast essential for machine vision applications.

Red light reflects well from gold, copper, and silver – Courtesy CCS Inc.

In the illustration we see that blue light reflects well from silver (Ag) but not from copper (Cu) or gold (Au). Red light, by contrast, reflects well from all three elements. The moral of the story is to use a wavelength matched to what your application is looking for.

Takeaway – in a nutshell

Per the carpenter’s guidance to “measure twice – cut once”, approach each new application thoughtfully to optimize outcomes:

Click to contact
Give us an idea of your application and we will contact with lighting options and suggestions.


1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about. 

LWIR – Long Wave Infrared Imaging – Problems Solved

What applications challenges can LWIR solve?

LWIR is the acronym of choice, as it reminds us where on the electromagnetic spectrum we’re focused – wavelengths of roughly 8 – 14 micrometers (8,000 – 14,000 nm). More descriptive is the term “thermal imaging”, which tells us we’re sensing temperature not with a contact thermometer, but with non-contact sensors detecting emitted or radiated heat.

Remember COVID? Pre-screening for fever. Courtesy Teledyne DALSA.

Security, medical, fire detection, and environmental monitoring are common applications. More on applications further below. But first…

How does an LWIR camera work?

Most readers probably come to thermal imaging with some prior knowledge or experience in visible imaging. Forget all that! Well not all of it.

For visible imaging using CMOS sensors, photons enter pixel wells and generate a voltage. The array of adjacent pixels is read out as a digital representation of the scene passed through the lens onto the sensor, according to the optics of the lens and the resolution of the sensor. Thermal camera sensors work differently!

Thermal cameras use a sensor that’s a microbolometer. The helpful part of the analogy to a CMOS sensor is that we still have an array of pixels, which determines the resolution of the camera and yields a 2D digital representation of the scene’s thermal characteristics.

But unlike a CMOS sensor, whose pixels react to photons, a microbolometer’s upper pixel surface – the detector – consists of an IR-absorbing material such as vanadium oxide. The detector is heated by the IR exposure, and the intensity of exposure in turn changes its electrical resistance. The change in resistance is measured and passed by an electrode to a silicon substrate and readout integrated circuit.

Vanadium oxide (VOx) pixel structure – Courtesy Teledyne DALSA

Just as with visible imaging, for machine vision it’s the digital representation of the scene that matters, as it’s algorithms that “consume” the image in order to take some action: danger vs. safe; good part vs. bad part; steer left, straight, or right – or brake; etc. Generating a pseudo-image for human consumption may well be unnecessary – or at least secondary.

Applications in LWIR

Applications include but are not limited to:

  • Security e.g. intrusion detection
  • Health screening e.g. sensing who has a fever
  • Fire detection – detect heat from early combustion before smoke is detectable
  • Building heat loss – for energy management and insulation planning
  • Equipment monitoring e.g. heat signature may reveal worn bearings or need for lubrication
  • Food safety – monitor whether required cooking temperatures were attained before serving

You get the idea – if the thing you care about generates a heat signature distinct from the other things around it, thermal imaging may be just the thing.

What if I wanted to buy an LWIR camera?

We could help you with that. Does your application’s thermal range lie between -25C and +125C? Would a frame rate of 30fps do the job? Does a GigEVision interface appeal?

It’s likely we’d guide you to Teledyne DALSA’s Calibir GX cameras.

Calibir GX front and rear views – Courtesy Teledyne DALSA
Contact us

Precision of Teledyne DALSA Calibir GX cameras

Per factory calibration, one already gets precision to +/- 3 degrees Celsius. For more precision, use a black body radiator and manage your own calibration to +/- 0.5 degrees Celsius!

Thresholding with LUT

Sometimes one wants to emphasize only regions meeting certain criteria – in this case heat-based criteria. Consider the following image:

Everything between 38 and 41°C shown as red – Courtesy Teledyne DALSA

Teledyne DALSA Calibir GX control software lets users define their own lookup tables (LUTs). One may optionally show regions meeting certain temperatures in color, leaving the rest of the image in monochrome.

Dynamic range

The “expressive power” of a camera is characterized by dynamic range. Just as the singers Enrico Caruso (opera) and Freddie Mercury (rock) were lauded for their range as well as their precision, in imaging we value dynamic range. Consider the image below of an electric heater element:

“Them” (left) vs. us (right) – Courtesy Teledyne DALSA

The left side of the image is from a 3rd party thermal imager – it’s pretty crude, essentially showing just hot vs. not-hot, with no continuum. The right side was obtained with a Teledyne DALSA Calibir GX – there we see very hot, hot, warm, slightly warm, and cool – a helpfully nuanced range. Enabled by a 21-bit ADC, the Teledyne DALSA Calibir GX is capable of a dynamic range across 1500°C.

In this short blog we’ve called out just a few of the available features – call us at 978-474-0044 to tell us more about your application goals, and we can guide you to whichever hardware and software capabilities may be most helpful for you.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about. 

Artificial intelligence in machine vision – today

This is not some blue-sky puff piece about how AI may one day be better / faster / cheaper at doing almost anything at least in certain domains of expertise. This is about how AI is already better / faster / cheaper at doing certain things in the field of machine vision – today.

Classification of screw threads via AI – Courtesy Teledyne DALSA

Conventional machine vision

There are classical machine vision tools and methods, like edge detection, for which AI has nothing new to add. If the edge detection algorithm is working fine as programmed in your vision software, who needs AI? If it ain’t broke, don’t fix it. Presence / absence detection, 3D height calculation, and many other imaging techniques work just fine without AI. Fair enough.

From image processing to image recognition

As any branch of human activity evolves, the fundamental building blocks serve as foundations for higher-order operations that bring more value. Civil engineers build bridges, confident the underlying physics and materials science lets them choose among arch, suspension, cantilever, or cable-stayed designs.

So too with machine vision. As the field matures, value-added applications can be created by moving up the chunking level. The low-level tools still include edge-detection, for example, but we’d like to create application-level capabilities that solve problems without us having to tediously program up from the feature-detection level.

Traditional methods (left) vs. AI classification (right) – Courtesy Teledyne DALSA
Traditional Machine Vision Tools                    | AI Classification Algorithm
– Can’t discern surface damage vs. water droplets   | – Ignores water droplets
– Are challenged by shading and perspective changes | – Invariant to surface changes and perspective
For the application images above, AI works better than traditional methods – Courtesy Teledyne DALSA

Briefly in the human cognition realm

Let’s tee this up with a scenario from human image recognition. Suppose you are driving your car along a quiet residential street. Up ahead you see a child run from a yard, across the sidewalk, and into the street.

It may well be that the rods and cones in your retina, your visual cortex, and your brain used edge detection on contrasting image segments to arrive at “biped mammal” – child! – and went on to evaluate risk and hit the brakes. But that isn’t how we usually talk about defensive driving. We just think in terms of accident avoidance, situational awareness, and braking/swerving – at a very high level.

Applications that behave intelligently

That’s how we increasingly would like our imaging applications to behave – intelligently and at a high level. We’re not claiming it’s “human equivalent” intelligence, or that the AI method is the same as the human method. All we’re saying is that AI, when well-managed and tested, has become a branch of engineering that can deliver effective results.

So as autonomous vehicles come to market of course we want to be sure sufficient testing and certification is completed, as a matter of safety. But whether the safe-driving outcome is based on “AI” or “vision engineering”, or the melding of the two, what matters is the continuous sequence of system outputs like: “reduce following distance”, “swerve left 30 degrees”, and “brake hard”.

Neural Networks

One branch of AI, neural networks, has proven effective in many “recognition” and categorization applications. Is the thing being imaged an example of what we’re looking for, or can it be dismissed? If it is the sort of thing we’re looking for, is it of sub-type x, y, or z? “Good” item – retain. “Bad” item – reject. You get the idea.

From training to inference

With neural networks, instead of programming algorithms at a granular feature analysis level, one trains the network. Training may include showing “good” vs. “bad” images – without having to articulate what makes them good or bad – and letting the network infer the essential characteristics. In fact it’s sometimes possible to train only with “good” examples – in which case anomaly detection flags production images that deviate from the trained pool of good ones.

Deep Neural Network (DNN) example – Courtesy Teledyne DALSA

Enough theory – what products actually do this?

Teledyne DALSA Astrocyte software creates a deep neural network to perform a desired task. More accurately – Astrocyte provides a graphical user interface (GUI) and a neural network framework, such that an application-specific neural network can be developed by training it on sample images. With a suitable collection of images, Teledyne DALSA Astrocyte can create an effective AI model in under 10 minutes!

Gather images, Train the network, Deploy – Courtesy Teledyne DALSA

Mix and match tools

In the diagram above, we show an “all DALSA” tools view, for those who may already have expertise in either Sapera or Sherlock SDKs. But one can mix and match. Images may alternatively be acquired with third party tools – paid or open source. And one may not need rules-based processing beyond the neural network. Astrocyte builds the neural network at the heart of the application.

Contact us

User-friendly AI

The key value proposition with Teledyne DALSA Astrocyte is that it’s user-friendly AI. The GUI used to configure the training and to validate the model requires no programming. And one doesn’t need special training in AI. Sure, it’s worth reading about the deep learning architectures supported. They include: Classification, Anomaly Detection, Object Detection, and Segmentation. And you’ll want to understand how the training and validation work. It’s powerful – it’s built by Teledyne DALSA’s software engineers standing on the shoulders of neural network researchers – but you don’t have to be a rocket scientist to add value in your field of work.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution! We’re big enough to carry the best cameras, and small enough to care about every image.

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about. 

Kowa FC24M C-mount lens series

With 9 members in the Kowa FC24M lens series, focal lengths range from 6.5mm through 100mm. Ideal for sensors like the 1.1″ Sony IMX183, IMX530/540, IMX253, and IMX304, these C-mount lenses cover any sensor up to 14.1mm x 10.6mm with no vignetting. Their design is optimized for sensors with pixels as small as 2.5µm – but of course they work fine on larger pixels as well.

Kowa FC24M C-mount lenses – Courtesy Kowa

Lens selection

Machine vision veterans know that lens selection ranks right up there with camera/sensor choice, and lighting, as determinants in application success. For an introduction or refresher, see our knowledge base Guide to Key Considerations in Machine Vision Lens Selection.

Click to contact
Give us a brief idea of your application and we will contact you with options.

Noteworthy features

Particularly compelling across the Kowa FC24M lens series is the floating mechanism system. Kowa’s longer name for this is the “close distance aberration compensation mechanism.” It creates stable optical performance at various working distances. Internal lens groups move independently of each other, which optimizes alignment compared to traditional lens design.

Kowa FC24M lenses render sharp images with minimal distortion – Courtesy Kowa

Listing all the key features together:

  • Floating mechanism system (described above)
  • Wide working range… and as close as 15 cm MOD
  • Durable construction … ideal for industrial applications
  • Wide-band multi-coating – minimizes flare and ghosting from VIS through NIR
High resolution down to pixels as small as 2.5um – Courtesy Kowa

Video overview shows applications

Applications include manufacturing, medical, food processing, and more. View short one-minute video:

Kowa FC24M key features and example applications – Courtesy Kowa

What’s in a family name?

Let’s unpack the Kowa FC24M lens series name:

F is for fixed. With focal lengths at nine steps from 6.5mm to 100mm, lens design is kept simple and pricing is correspondingly competitive.

C is for C-mount. It’s one of the most popular camera/lens mounts in machine vision, with many camera manufacturers offering diverse sensors designed into C-mount housings.

24M is for 24 Megapixels. Not so long ago it was cost-prohibitive to consider sensors larger than 20MP. But as with most things in electronics, the price:performance ratio keeps moving in the user’s favor. Many applications benefit from sensors of this size.

And the model names?

Model names include LM6FC24M, LM8FC24M, …, LM100FC24M. The focal length is specified by the digits just before the family name, e.g. the LM8FC24M has a focal length of 8mm. In fact that particular model is technically 8.5mm, but per industry convention one rounds or truncates to common de facto sizes.

LM8FC24M 8.5mm focal length – Courtesy Kowa

See the full brochure for the Kowa FC24M lens series, or call us at 978-474-0044.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution! We’re big enough to carry the best cameras, and small enough to care about every image.

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about. 

Machine vision problems solved with SWIR lighting

Some problems best solved outside the visible spectrum

Most of us think about vision with a human bias, since most of us are normally sighted with color stereo vision. We perceive distance, hues, shading, and intensity for materials that emit or reflect light at wavelengths of 380 – 750 nm. Many machine vision problems can also be solved using monochrome or color light and sensors in the visible spectrum.

Human visible light – marked VIS – is just a small portion of what sensors can detect – Courtesy Edmund Optics

Many applications are best solved – or even only solved – in wavelengths that we cannot see with our own eyes. There are sensors that react to wavelengths in these other parts of the spectrum. Particularly interesting are short wave infrared (SWIR) and ultraviolet (UV). In this blog we focus on SWIR, with wavelengths in the range 0.9 – 1.7µm.

Examples in SWIR space

The same apple with visible vs. SWIR lighting and sensors – Courtesy Effilux

Food processing and agricultural applications are possible with SWIR. Consider the images above, where the visible image shows what appears to be a ripe apple in good condition. With SWIR imaging, a significant bruise is visible – SWIR detects higher densities of water, which render as black or dark grey. Supplier yields determine profits, losses, and reputations. Apple suppliers benefit by automatically sorting apples that will travel to grocery shelves from lightly bruised fruit that can be profitably juiced or sauced.

Even clear fluids in opaque bottles render dark in SWIR light – Courtesy Effilux

Whether controlling the filling apparatus or quality controlling the nominally filled bottles, SWIR light and sensors can see through glass or opaque plastic bottles and render fluids dark while air renders white. The detection side of the application is solved!

Hyperspectral imaging

Yet another SWIR application is hyperspectral imaging. By identifying the spectral signature of every pixel in a scene, we can use light to discern the unique profile of substances. This in turn can identify the substance and permit object identification or process detection. Consider also multi-spectral imaging, an efficient sub-mode of hyperspectral imaging that only looks for certain bands sufficient to discern “all that’s needed”.

Multispectral and hyperspectral imaging – Courtesy Allied Vision Technologies

How to do SWIR imaging

The SWIR images shown above are pseudo-images, where pixel values in the SWIR spectrum have been re-mapped into the visible spectrum along grey levels. But that’s just to help our understanding, as an automated machine vision application doesn’t need to show an image to a human operator.

In machine vision, an algorithm on the host PC interprets the pixel values to identify features and make actionable determinations. Such as “move apple to juicer” or “continue filling bottle”.

Components for SWIR imaging

You’ll need three component groups: SWIR sensors and cameras, SWIR lighting, and SWIR lenses. For cameras and sensors, consider Allied Vision’s Goldeye series:

Goldeye SWIR cameras – Courtesy Allied Vision

Goldeye SWIR cameras are available as compact, rugged industrial models, or as advanced scientific versions. The former has optional thermoelectric cooling (TEC), while the latter is only available in cooled versions.

Contact us

For SWIR lighting, consider Effilux bar and ring lights. Effilux lights come in various wavelengths for both the visible and SWIR applications. Contact us to discuss SWIR lighting options.

EFFI-FLEX bar light and EFFI-RING ring light – Courtesy Effilux

By emitting light in the SWIR range, directed to reflect off targets known to reveal features in the SWIR spectrum, one builds the components necessary for a successful application.

Hyperspectral bar lights – Courtesy Effilux

And don’t forget the lens. One may also need a SWIR-specific lens, or a hybrid machine vision lens that passes both visible and SWIR wavelengths. Consider Computar VISWIR Lite Series Lenses or their VISWIR Hyper-APO Series Lenses. It’s beyond the scope of this short blog to go into SWIR lensing. Read our recent blog on Wide Band SWIR Lensing and Applications or speak with your lensing professional to be sure you get the right lens.

Takeaway

Whether SWIR or UV (more on that another time), the key point is that some machine vision problems are best solved outside the human visible portions of the spectrum. While there are innovative users and manufacturers continuing to push the boundaries – these areas are sufficiently mature that solutions are predictably creatable. Think beyond the visible constraints!

Call us at 978-474-0044. Or follow the contact us link below to provide your information, and we’ll call you.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Depth of Field – a balancing act

Most who are involved with imaging have at least some understanding of depth of field (DoF). DoF is the distance between the nearest and furthest points that are acceptably in focus. In portrait photography, one sometimes seeks a narrow depth of field to draw attention to the subject, while intentionally blurring the background to a “soft focus”. But in machine vision, it’s often preferred to maximize depth of field – that way if successive targets vary in their Z dimension – or if the camera is on a moving vehicle – the imaging system can keep processing without errors or waste.

Making it real

Suppose you need to see small features on an item that has various heights (Z dimension). You may estimate you need a 1″ depth of field. You know you’ve got plenty of light. So you set the lens to f11 because the datasheet shows you’ll reach the depth of field desired. But you can’t resolve the details! What’s up?

So I should maximize DoF, right?

Well generally speaking, yes – to a point. The point where diffraction limits negatively impact resolution. If you read on, we aim to provide a practical overview of some important concepts and a rule of thumb to guide you through this complex topic without much math.

Aperture, F/#, and Depth of Field

Aperture size and F/# are inversely correlated: a low f/# corresponds to a large aperture, and a high f/# signifies a small aperture. See our blog on F-Numbers aka F-Stops for how F-numbers are calculated and some practical guidance.

Per the illustration below, a large aperture restricts DoF, while a small aperture maximizes the DoF. Please take a moment to compare the upper and lower variations in this diagram:

Correlation between aperture and Depth of Field – Courtesy Edmund Optics

If we maximize depth of field…

So let’s pursue maximizing depth of field for a moment. Narrow the aperture to the smallest setting (the largest F-number), and presto you’ve got maximal DoF! Done! Hmm, not so fast.

First challenge – do you have enough light?

Narrowing the aperture sounds great in theory, but for each stop one narrows the aperture, the amount of light is halved. The camera sensor needs to receive sufficient photons in the pixel wells, according to the sensor’s quantum efficiency, to create an overall image with contrast necessary to process the image. If there is no motion in your application, perhaps you can just take a longer exposure. Or add supplemental lighting. But if you do have motion or can’t add more light, you may not be able to narrow the aperture as far as you hoped.

Second challenge – the Airy disk and diffraction

When light passes through an aperture, diffraction occurs – the bending of waves around the edge of the aperture. The pattern from a ray of light that falls upon the sensor takes the form of a bright circular area surrounded by a series of weakening concentric rings. This is called the Airy disk. Without going into the math, the Airy disk is the smallest point to which a beam of light can be focused.

And while stopping down the aperture increases the DoF, our stated goal, it has the negative impact of increasing diffraction.

Diffraction increases as the aperture becomes smaller – Courtesy Edmund Optics

Diffraction limits

As focused patterns – the details in your application that you want to discern – come near each other, they start to overlap. This creates interference, which in turn reduces contrast.

Every lens, no matter how well it is designed and manufactured, has a diffraction limit – the maximum resolving power of the lens, expressed in line pairs per millimeter. If the Airy disk patterns cast by adjacent real-world features grow larger than the sensor’s pixels, the all-important contrast will not be achieved.

Contact us

High magnification example

Suppose you have a candidate camera with 3.45um pixels, and you want to pair it with a machine vision lens capable of 2x, 3x, or 4x magnification. You’ll find the Airy disk is 9um across! Something must be changed – a sensor with larger pixels, or a different lens.

As a rule of thumb, 1um resolution with machine vision lenses is about the best one can achieve. For higher resolution, there are specialized microscope lenses. Consult your lensing professional, who can guide you through sensor and lens selection in the context of your application.

Lens data sheets

Just a comment on lens manufacturers and provided data. While there are many details in the machine vision field, it’s quite transparent in terms of standards and performance data. Manufacturers’ product datasheets contain a wealth of information. For example, take a look at Edmund Optics lenses, then pick any lens family, then any lens model. You’ll find a clickable datasheet link like this, where you can see MTF graphs showing resolution performance like LP/mm, DOF graphs at different F#s, etc.

Takeaway

Per the blog’s title, Depth of Field is a balancing act between sharpness and blur. It’s physics. Pursue the links embedded in the blog, or study optical theory, if you want to dig into the math. Or just call us at 978-474-0044.

Contact us

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Helios2 Ray Outdoor Time of Flight camera by Lucid Vision Labs

Helios2 Outdoor ToF camera – Courtesy Lucid Vision Labs

Time of Flight

The Time of Flight (ToF) method for 3D imaging isn’t new. Lucid Vision Labs is a longstanding leader in 3D ToF imaging. To brush up on ToF vs. other 3D methods, see a prior blog on Types of 3D imaging: Passive Stereo, Structured Light, and Time of Flight (ToF).

Helios2 Ray 3D camera

What is new are the Helios2 Ray 3D ToF outdoor* camera models. With working distances (WD) from 0.3 meters up to 8.3 meters, exterior applications like infrastructure inspection, environmental monitoring, and agriculture may be enabled – or enhanced – with these cameras. That WD in imperial units is from 1 foot up to 27 feet, providing tremendous flexibility to cover many applications.

(*) While rated for outdoor use, the Helios2 3D camera may also be used indoors, of course.

The camera uses a Sony DepthSense IMX556 CMOS back-illuminated ToF image sensor. It provides its own laser lighting via 940nm VCSEL laser diodes, which operate in the infrared (IR) spectrum, beyond the visible spectrum. So it’s independent of the ambient lighting conditions, and self-contained with no need for supplemental lighting.

Operating up to 30 fps, the camera and computer host build 3D point clouds your application can act upon. Dust and moisture protection to the IP67 standard is assured, with robust shock, vibration, and temperature performance as well. See specifications for details.

Example – Agriculture

Outdoor plants imaged in visible spectrum with conventional camera – Courtesy Lucid Vision Labs
Colorized pseudo-image from 3D point cloud – Courtesy Lucid Vision Labs

Example – Industrial

Visible spectrum image with sunlight and shadows – Courtesy Lucid Vision Labs
Pseudo-image from point cloud via Helios2 Ray – Courtesy Lucid Vision Labs

Arena SDK

The Arena SDK makes it easy to configure and control the camera and the images. It provides 2D and 3D views. With the 2D view one can see the intensity and depth of the scene. The 3D view shows the point cloud, and can be rotated by the user in real time. Of course the point cloud data may also be processed algorithmically, to record quality measurements, control robot arm or vehicle guidance, etc.

Call us at 978-474-0044. Or follow the contact us link below to provide your information, and we’ll call you.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Teledyne Dalsa Linea2 4k 5GigE camera

The new Linea2 4k color camera with a 5GigE interface delivers RGB images at a max line rate of 42 kHz x 3. That’s 5x the bandwidth of the popular Linea GigE (1 GigE) cameras.

Linea2 4k color cameras with 5GigE – courtesy Teledyne Dalsa

Perhaps you already use the Linea GigE cameras, at 1 GigE, and seek an upgrade path to higher performance in an existing application. Or you may have a new application for which Linea2 performance is the right fit. Either way, Linea2 builds on the foundation of Teledyne DALSA’s Linea family.

Why line scan?

While area scan is the right fit for certain applications, compare area scan to line scan for the hypothetical application illustrated below:

Area scan vs. Line scan – courtesy Teledyne DALSA

If one were to implement an area scan solution, multiple cameras would be needed to cover the field of view (FOV), and lighting and framerate would have to be managed to avoid smear and frame overlaps. With line scan, one gets high resolution without smear from a single camera – ideal for inspecting a moving surface.

Call us at 978-474-0044 to tell us about your application, and we can guide you to a suitable line scan or area scan camera for your solution. Of course we also have the right lenses, lighting, and other components.

Sensor

The Trilinear CMOS line scan sensor is Teledyne’s own 4k color design, with outstanding spectral responsivity as shown below:

Linea2 Color responsivity – courtesy Teledyne DALSA

The integrated IR-cut filters ensure true-color response is delivered on the native RGB data outputs.

Interface

With a 5GigE Vision interface, the Linea2 provides 5x the bandwidth of the conventional GigE interface, but can use the same Cat5e or Cat6 network cables – and does not require a frame grabber.
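
A quick back-of-envelope check (our own arithmetic, assuming 8 bits per color channel) shows why 5GigE is the right match for this line rate:

    # Linea2 4k color: 4096 pixels x 3 channels x 1 byte, at a 42 kHz line rate
    pixels, channels, line_rate_hz = 4096, 3, 42_000
    data_rate_mb_s = pixels * channels * line_rate_hz / 1e6
    print(data_rate_mb_s, "MB/s")  # ~516 MB/s
    # 5GigE carries roughly 600 MB/s of payload, so this fits;
    # conventional 1GigE (~115 MB/s) could not keep up at full rate.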

Software

Sapera LT software development kit is recommended, featuring:

  • Intuitive CamExpert graphical user interface for configuration and setup
  • Trigger-To-Image Reliability tool (T2IR) for system monitoring

Sapera LT has over 500,000 installations worldwide. Thanks to the 5GigE Vision interface, popular third party software is of course also compatible.

Applications

Application examples – courtesy Teledyne DALSA

While not limited to those listed below, known and suggested uses include:

  • Printing inspection
  • Web inspection
  • Food, recycling, and material sorting
  • Printed circuit board inspection
  • etc.

Call us at 978-474-0044. Or follow the contact us link below to provide your information, and we’ll call you.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Webcam vs. machine vision camera

Webcams aren’t (yet) found in Cracker Jack boxes, but they are very inexpensive. And they seem to perform ok for Zoom meetings or rendering a decent image of an office interior. So why not just use a webcam as the front end for a machine vision application?

Before we dig in to analysis and rationale, let’s motivate with the following side-by-side images of the same printed circuit board (PCB):

Machine vision camera and lens vs. webcam – Courtesy 1stVision

Side-by-side images

In the image pair above, the left image was generated with a 20MP machine vision camera and a high resolution lens. The right image used a webcam with a consumer sensor and optics.

Both were used under identical lighting, and optimally positioned within their specified operating conditions, etc. In other words we tried to give the webcam a fair chance.

Even in the above image, the left image looks crisp with good contrast, while the right image has poor contrast – that’s clear even at a wide field of view (FOV). But let’s zoom in:

Clearly readable labeling and contact points (left) vs. poor contrast and fuzzy edges (right)

Which image would you prefer to pass to your machine vision software for processing? Exactly.

Machine vision cameras with lens mounts that accept lenses for different applications

Why is there such a big difference in performance?

We’re all so used to smartphones that take (seemingly) good images, and webcams that support our Zoom and Teams meetings, that we may have developed a bias towards thinking cameras have become both inexpensive and really good. It’s true that all cameras continue to trend less expensive over time, per megapixel delivered – just as with Moore’s law in computing power.

As for the seemingly-good perception, if the images above haven’t convinced you, it’s important to note that:

  1. Most webcam and smartphone images are wide angle large field of view (FOV)
  2. Firmware algorithms may smooth values among adjacent pixels to render “pleasing” images or speed up performance

Most machine vision applications, on the other hand, demand precise details – so firmware-smoothed regions may look nice on a Zoom call but could totally miss the very defect your application is designed to discover!

Software

Finally, software (or the lack thereof) is at least as important as the image quality determined by lens and sensor. With a webcam, one just gets an image burped out, but nothing more.

Conversely, with a machine vision camera, not only is the camera image better, but one gets a software development kit (SDK). With the SDK, one can:

  • Configure the camera’s parameters relative to bandwidth and choice of image format, to manage performance requirements
  • Choose between streaming vs. triggering exposures (via hardware or software trigger) – trigger allows synchronizing to real world events or mechanisms such as conveyor belt movement, for example
  • Access machine vision library functions such as edge detection, blob analysis, occlusion detection, and other sophisticated image analysis software – see the sketch below
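
As a flavor of that last point, here is a minimal sketch of edge detection and blob analysis on a captured frame. It's generic OpenCV, not any particular vendor's SDK, and the filename is a placeholder:

    import cv2

    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder frame from the camera

    # Edge detection
    edges = cv2.Canny(img, 50, 150)
    print("edge pixels:", (edges > 0).sum())

    # Simple blob analysis: threshold, then count connected contours
    _, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    print("blobs found:", len(contours))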

Proprietary SDKs vs. 3rd party SDKs

Speaking of SDKs, the camera manufacturers’ own offerings are often very powerful and user-friendly. Just to name a few, Teledyne DALSA offers Sapera, Allied Vision provides Vimba, and IDS Imaging supports both IDS Lighthouse and IDS Peak.

Compare to Apple or Microsoft in the computing sector – they provide bundled software like Safari and Edge, respectively. They work hard on interoperability of their laptops, tablets, and smartphones, to make it attractive for users to see benefits from staying within a specific manufacturer’s product families. Machine vision camera companies do the same thing – and many users like those benefits.

Vision standards – Courtesy Association for Advancing Automation

Some users prefer 3rd party SDKs that help maintain independence to choose cameras best-suited to a given task. Thanks to machine vision industry standards like GigE Vision, USB3 Vision, Camera Link, GenICam, etc., 3rd party SDKs like MATLAB, OpenCV, Halcon, Labview, and CVB provide powerful functionality that is vendor-neutral relative to the camera manufacturer.


For a deeper dive into machine vision cameras vs. webcams, including the benefits of lens selection, exposure controls, and design-in availability over time, see our article: “Why shouldn’t I buy a $69 webcam for my machine vision application?” Or just call us at 978-474-0044.

In summary, yes, a webcam is a camera. For a sufficiently “coarse” area scan application, such as presence/absence detection at low resolution, a webcam might be good enough. Otherwise note that machine vision cameras – like most electronics – are declining in price over time for a given resolution, and the performance benefits – including software controls – are very compelling.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

ECS cost efficient 3D sensor series

Automation Technology GmbH announces the ECS series, where ECS means Eco Compact Sensor. Using less expensive optics and sensors, the standardized pre-configured offering is more than good enough for many applications. And priced to pass the lower-cost component savings on to the customer.

ECS 3D sensors – Courtesy Automation Technology GmbH

Ideal for applications in food, logistics, and robot vision, the sweet spot is performance that’s good enough to add value and get the job done, without having to purchase components needed for even higher performance. ECS sensors use the principle of laser triangulation to create a 3D point cloud.

Resolution and speed

ECS delivers 2048 points per profile, at up to 43 kHz. Compare that to AT’s higher end scanners with up to 4096 points per profile, and speeds to 204 kHz.
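
To put those figures in perspective, multiply points per profile by profile rate to get raw 3D throughput (our arithmetic, assuming both run flat out):

    # Raw 3D point throughput: points per profile x profiles per second
    ecs = 2048 * 43_000    # ~88 million points/s for the ECS
    c6 = 4096 * 204_000    # ~836 million points/s for a high-end AT scanner
    print(ecs / 1e6, "M points/s vs.", c6 / 1e6, "M points/s")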

Field of View (FoV)

Initial ECS series members are offered at 100 or 160mm FoVs, with other options planned for release.

Compact design

At only 0.65 kg – about 1.4 lb – in weight, ECS 3D compact sensors can be easily integrated into many applications.

Software integration

Automation Technology’s AT Solution package makes it easy to configure your sensor. The SDK provides options for C, C++, and Python. The GigE Vision / GenICam interface means users may also choose any software compliant with those popular industry standards.

Applications

As mentioned above, food/beverage, logistics, packaging, and robotics are just a few of the suggested application areas.

Images above courtesy of Automation Technology

Three 3D sensor families: Value, Performance, and Modular

To put it in perspective, Automation Technology GmbH has expanded its 3D sensor portfolio with price-performance offerings at each of:

  • Value: ECS Series – compact, pre-calibrated, IP54 protection class, low cost
  • Performance: C6 Series – high-performance, pre-configured, IP67 protection, mid-priced
  • Modular: MCS Series – high-performance flexible configuration, IP67 protection

Comparing 3D Sensor Series – Courtesy Automation Technology GmbH

See an expanded comparison table at our website. But at a high level think of ECS as the value series. The C6 models offer high performance at a choice of resolutions. And the MCS is a modular unbundling of the C6 products – high-performance with flexible configuration.

What matters to you of course is your own application. And that’s what matters to us, too. As an independent distributor, we work for you. Tell us about your application, and we’ll guide you to the best-fit technology for your needs. Call us at 978-474-0044.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

First of its kind! GigE Frame Grabber

A GigE frame grabber? What’s that about? Those who work with Camera Link or CoaXPress cameras need frame grabbers for frame transfer, but GigE?

Frame grabbers use an industry standard PCI Express expansion bus to deliver high speed access to host memory for images. They get the image from the camera, via the cabling and frame grabber, at high speed, into the host, for processing.

But I already do GigE Vision without this, so why might I want one?

  • Avoid corrupted images arising from lost packets
  • Reduce CPU load
  • Synchronize images from multiple cameras
  • Perform color conversion in the frame grabber rather than the host

The full name of DALSA’s GigE frame grabber series is Xtium2-XGV PX8. It’s available in both dual and quad configurations, as shown in the image below.

Dual and quad Xtium2-XGV PX8 frame grabbers – courtesy Teledyne DALSA

More than an adapter card

The Xtium2-XGV PX8 image acquisition cards use a real-time depacketization engine to create a ready-to-use image from the GigE Vision image packets. With packet resend logic built in, image transfer reliability is enhanced. And host CPU load is reduced. So already we see two benefits.

But wait there’s more!

Supporting up to 32 cameras, these boards aggregate up to 4 GB/s of input bandwidth and deliver up to 6.8 GB/s of output bandwidth to host memory. They can also perform on-board format conversions like Bayer to RGB, Bi-color to RGB, etc.
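
A quick sanity check on those numbers (our own arithmetic): thirty-two GigE Vision cameras at full wire speed roughly account for the stated input figure, and the output path to host memory has headroom for conversions that expand the data, like Bayer to RGB:

    # 32 GigE cameras at ~125 MB/s each, aggregated by the frame grabber
    input_bw = 32 * 125e6    # = 4.0 GB/s total camera input
    output_bw = 6.8e9        # up to 6.8 GB/s to host memory
    print(input_bw / 1e9, "GB/s in;", round(output_bw / input_bw, 1), "x headroom out")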

So it’s really an “Aggregator-conditioner-converter-pre-processor”

Exactly! Which is why we call it a frame grabber for short.

Psst! Wanna see some specs?

Summary of XTIUM2-XGV PX8 key specifications

Free software

Acquisition and control software libraries are included at no charge via Teledyne DALSA’s Sapera LT SDK. Hardware independent by design, Sapera LT offers a rich development ecosystem for machine vision OEMs and system integrators.

Sapera LT SDK screenshots – courtesy Teledyne DALSA

So do you need one or want one?

So an Xtium2-XGV PX8 frame grabber is an aggregator-conditioner-converter-pre-processor. It accepts multi-port GigE Vision inputs, improves reliability, optionally does format conversions, and reduces load on the host PC. If your prototype system is struggling without such a frame grabber, maybe this is the missing link. Or maybe you want to get it right on the first try. Either way, tell us more about your application, and we’ll help you decide if this – or some other approach – can help. We love partnering with our customers to create effective machine vision solutions. Call us at 978-474-0044!

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Automation Technology Solution Package

Automation Technology GmbH, or AT for short, is a leading manufacturer of 3D laser profilers, and also infrared smart cameras. As customary among leading camera suppliers, AT provides a comprehensive software development kit (SDK), making it easy for customers to deploy AT cameras. AT’s Solution Package is available for both Windows and Linux. Read on to find out what’s included!

Graphic courtesy of Automation Technology GmbH.

Let’s unpack each of the capabilities highlighted in the above graphic. You can get the overview by video, and/or by our written highlights.

Video overview

Courtesy Automation Technology GmbH

Overview

AT’s Solution Package is designed to make it easy to configure the camera(s), prototype initial setups and trial runs, proceed with a comprehensive integration, and achieve a sustainable solution.

cxExplorer

Configuration of a compact sensor can be easily done with cxExplorer, a graphical user interface provided by AT. With the help of cxExplorer, a sensor can be simply adjusted to the required settings, using easy-to-navigate menus, stepwise “wizards”, image previews, etc.

APIs, Apps, and Tools

The cxSDK tool offers programming interfaces for C, C++, and Python. The same package works with all of Automation Technology’s 3D and infrared cameras.

Product documentation

Of course there’s documentation. Everybody provides documentation. But not all documentation is both comprehensive and user-friendly. This is. It’s illustrated with screenshots, examples, and tutorials.

Metrology Package

Winner of a 2023 “inspect” award, the optional add-on Metrology Package can commission a customer’s new sensor in just 10 minutes, with no programming required. It then goes on to create an initial 3D point cloud, also with little user effort required.

Screenshot of Metrology Explorer – courtesy Automation Technology GmbH

For more information about Automation Technology 3D laser profilers, infrared smart cameras, or the Solution Package SDK, call us at 978-474-0044. Tell us a little about your application, and we can guide you to the optimal products for your particular needs.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

TCSE series hi-res telecentric lenses

Opto Engineering is known and respected for high-performance lenses in machine vision, medical, and related fields. The new TCSE series are telecentric lenses designed for large sensor formats (4/3″, APS-C, APS-H). Each provides high resolution with low distortion.

Who needs a telecentric lens?

Before highlighting some of the TCSE series features, let’s offer readers who aren’t already telecentric-savvy a brief motivation for this category of lens. If you are doing precise gauging applications – measuring via optics and software – your tolerances may require a telecentric lens. A telecentric lens eliminates perspective error and has very low distortion. And, if paired with collimated light, it enhances edge definition.

For a comprehensive read, check out our blog Advantages of Telecentric Lenses in Machine Vision Applications. Not sure if you need a telecentric lens? Call us at 978-474-0044 – tell us a little about your application and we can guide you through any or all of lens, camera, lighting and other choices.

TCSE5EM065-J – Courtesy Opto Engineering

TCSE lenses are available for applications using light in either the visible spectrum or near-infrared (NIR) wavelengths. Currently there are 8 members in the TCSE product family.

Image circle diameter

The TCSE Series offers image circle diameter options from 24 to 45 mm.

Magnification

A key parameter in telecentric imaging is the level of magnification available. The 8 members of the TCSE Series offer magnification ranging from 0.36 through 2.75 times the original object size.

Working distance

The working distance (WD), from the front of the lens to the object being imaged, varies by lens model across the TCSE Series, ranging from 160 mm up to 240 mm. These long working distances allow space for lighting and/or robotic arms.

Courtesy Opto Engineering

Worth noting

While typically “plug and play” once mounted on your camera, it’s worth noting that the TCSE lenses offer back focal length adjustment, should one choose to fine tune.

Summary

Telecentric lenses are the core business for Opto Engineering, who have more than 20 years’ expertise in research, development, and production. 1stVision, North America’s largest stocking distributor, works to understand each customer’s application requirements, to help you select the ideal lens, camera, or other imaging component(s). Call us at 978-474-0044.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Teledyne DALSA AxCIS Contact Image Sensor Modules

Teledyne DALSA has released the AxCIS 800mm mono/HDR, and the AxCIS 400mm mono, the first two members of a new flexible and scalable product family of Contact Image Sensors (CIS). As other members are released, users can choose fields of view (FoV) in 100mm increments, e.g. 400mm, 500mm, 600mm, 700mm, and 800mm.

AxCIS 800mm lighting and scanning – Courtesy Teledyne DALSA
AxCIS Contact Image Sensor showing sensor array – Courtesy Teledyne DALSA

Contact Image Sensor vs. Linescan

Actually that’s a trick heading! A contact image sensor (CIS) is a type of linescan camera. Conventionally, the industry calls it a linescan camera if the sensor uses CMOS or CCD, while it’s called a CIS if it bundles a linear array of detectors, lenses, and lights.

But a CIS is very much a linescan type of camera. With a 2D area scan camera, a comprehensive pixel array captures all of its (X,Y) values in a single exposure. But a Contact Image Sensor requires either the target or the imaging unit to move, as a single exposure is a slice of Y values at a given coordinate X. Motion is required to step across the set of X values.

Two more notes:

  1. The set of X values may be effectively infinite, as with “web inspection” applications
  2. The term “contact” in CIS is a bit of a misnomer. The sensor array is in fact “very close” to the surface, which must thereby be essentially flat in order to sustain collision-free motion. But it doesn’t actually touch.

AxCIS key attributes include:

  • 28 µm pixel size (900 dpi)
  • high speed: 120 kHz using Camera Link HS
  • HDR imaging with dual exposure mode
  • optional LED lighting
  • fiber optic cables immune to EMI
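
Taken together, the pixel size and line rate bound the fastest surface speed you can scan without gaps – a rule-of-thumb calculation (our own, assuming one scan line per 28 µm of travel):

    # Max web speed so that each 28 um of travel gets its own scan line
    pixel_m = 28e-6          # pixel size, meters
    line_rate_hz = 120_000   # max line rate
    print(pixel_m * line_rate_hz, "m/s")  # ~3.4 m/s of web travel at full rate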

Application areas share the characteristics of flat surfaces and motion of either the target or the sensor, since contact image sensing (CIS) is a form of linescan imaging.

Courtesy Teledyne DALSA

HDR imaging

With some targets it is inherently challenging to obtain sufficient signal in the darker regions while avoiding over-saturation in the lighter areas. The multiline sensors used in AxCIS address this with a sensor array in which:

  • One row of the sensor array that can have a longer exposure for dark scenes
  • Another row using a shorter exposure for light scenes

The camera then combines the images, as shown below. The technique is referred to as High Dynamic Range imaging – HDR.

Illustration of HDR Imaging – Courtesy Teledyne DALSA
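
Conceptually, the fusion resembles the numpy sketch below – a simplification for intuition only, not the camera's actual internal algorithm:

    import numpy as np

    # Same line captured twice: long exposure (good in dark regions, may clip)
    # and short exposure (preserves bright regions); the values are made up
    long_exp = np.array([40, 180, 255, 255], dtype=np.float64)
    short_exp = np.array([5, 22, 60, 120], dtype=np.float64)
    ratio = 8.0  # long/short exposure-time ratio

    # Where the long exposure clipped, substitute the scaled short exposure
    hdr = np.where(long_exp >= 255, short_exp * ratio, long_exp)
    print(hdr)  # [ 40. 180. 480. 960.] -- extended dynamic range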

Want to know more about area scan vs line scan? And multifield line scan? And other Teledyne DALSA linescan products, in which they have years of expertise? See our blog “What can multifield linescan imaging do for me?“.

For details on the AxCIS CIS family, please see the product page with detailed specs.

If you’ve had enough reading, and want to speak with a real live engineer, just call us at 978-474-0044.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Telecentric lenses – Edmund Optics SilverTL and CobaltTL Series

While a standard lens is adequate or even ideal for many machine vision applications, there is inherent distortion in a standard lens, often in the range of 1 – 2%. Telecentric lenses achieve distortion of 0.1% or less. They also provide constant magnification and no perspective error.

If you “just” need presence/absence detection, or counting discrete non-occluded objects, a conventional lens may be fine. But if you need highly accurate contactless measurement, telecentric lenses offer remarkable performance.

CobaltTL Telecentric Lens with In-Line Illumination – Courtesy Edmund Optics

Let’s take a brief look at what qualifies a lens as telecentric, and why you might want (or need) one. Subsequently we’ll summarize Edmund Optics SilverTL™ and CobaltTL™ lens series.


Telecentric Tutorial

Telecentric lenses only accept incoming light rays that are parallel to the optical axis of the lens. It’s not that the oblique rays don’t reach the outer edge of the telecentric lens. Rather, it’s about the optical design of the lens in terms of what it passes on through the other lens elements and onto the sensor focal plane.

Hmm, but the telecentric lens must have a narrower Field of View (FoV) – and I have to pay a premium for that? Well yes – and yes. There are certain benefits.

Let’s get to an example. In the image immediately below, labeled “Setup”, we see a pair of cubes positioned with one forward of the other. This image was made with a conventional (entocentric) lens, whereby all three dimensions appear much the same as for human vision. It looks natural to us because that’s what we’re used to. And if we just wanted to count how many orange cubes are present, the lens used to make the setup image is probably good enough.

Courtesy Edmund Optics.

But suppose we want to measure the X and Y dimensions of the cubes, to see if they are within rigorous tolerance limits?

An object-space telecentric lens focuses the light without the perspective of distance. Below, the image on the left is the “straight on” view of the same cubes positioned as in “Setup” above, taken with a conventional lens. The forward cube appears larger, when in fact we know it to be exactly the same size.
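
The size of that effect is easy to estimate: with an entocentric lens, apparent size scales inversely with distance. A worked example with assumed distances (ours, not Edmund Optics' figures):

    # Apparent size change with an entocentric (conventional) lens
    wd = 500.0   # working distance to the rear cube, mm (assumed)
    dz = 50.0    # the forward cube sits 50 mm closer (assumed)
    scale = wd / (wd - dz)
    print(f"forward cube appears {scale:.3f}x larger")  # ~1.111x
    # A 20.00 mm cube would measure ~22.22 mm -- useless for tight tolerances.
    # A telecentric lens holds scale at 1.0 across its telecentric depth.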

The rightmost image below was made with a telecentric lens, which effectively collapses the Z dimension, while preserving X and Y. If measuring X and Y is your goal, without regard to Z, a telecentric lens may be what you need.

Courtesy Edmund Optics.

Depth of Field can be “pushed”

You are likely familiar with Depth of Field (DoF), the range in the Z dimension in which objects in the FoV are in focus. With a conventional lens, if an object moves out of focus, the induced blur is asymmetrical, due to parallax (a.k.a. perspective error).

But with a telecentric lens, there is no parallax error, since the FoV is constant and non-angular. A benefit of this is that even if the target image is somewhat defocused with a telecentric lens, the image may still be perfectly usable.

In the two images below, the “sharp transition” edge is clearly optimal. But when measuring tolerances in a manufacturing environment, with mechanized conveyors, vibration, etc., target objects may not always be ideally positioned. So the “shallow transition” image from the object just out of focus is entirely acceptable to identify the center of mass for the circular object, since the transition is symmetrical at all positions.


Edmund Optics is widely recognized for their range of standard products – and their expertise in custom lens design when needed. The SilverTL™ and CobaltTL™ lens series each offer 10+ members, where all lenses are high-resolution and bi-telecentric. Some additionally offer inline illumination options.

Noteworthy characteristics of both the SilverTL and CobaltTL series include:

  • Aperture controls often not available in competitor products
  • “Fast” ==> lower F# options than in many competitor products (so can work effectively with less light)
  • Conform to narrowly specified engineering tolerances
  • Pricing identical with or without in-line illumination via coax port

Edmund Optics SilverTL™ series

The SilverTL series pairs with C-mount sensors up to 7.5 MegaPixels, ideal with 2.8 µm pixel size. Magnification options range from 0.16X to 4X.

SilverTL series – Courtesy Edmund Optics

Edmund Optics CobaltTL™ series

For C-mount sensors up to 20 MegaPixels, and pixel size 2.2 µm, choose the CobaltTL series.

CobaltTL series – Courtesy Edmund Optics

What type of lens is best for my application?

Machine vision is a broad field, with a lot of variables across wavelengths, application goals, sensor, software, and lens choices. If you are a seasoned veteran, you may know from experience exactly what you need. Or you may want to review our on-line knowledge base or online blogs. Easier yet – just phone us at 978-474-0044. You’ll speak with one of our sales engineers, who put customer success first. Our goal is customers with successful outcomes – who return to us project after project.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

16, 20 and 24 MP: IDS uEye+ cameras with Sony Pregius sensors

IDS Imaging will soon release new members in their uEye+ camera series, utilizing Sony’s 4th generation Pregius S sensors. Included are 16, 20, and 24 MP offerings of the compact uEye+ USB3 cameras.

XLE+ housed and board-level options – Courtesy IDS Imaging

The “S” in Pregius S stands for “stacked”, a sensor architecture that is back-illuminated as well as layered, creating a light-sensitive, low-noise, high-performance sensor. Even the first 3 generations of Sony Pregius sensors broke new ground, but Pregius S is special. Read our dedicated blog on the Sony Pregius four generation offerings, including details on Pregius S.

Sony Pregius S sensor    MP    Format
IMX532                   16    5328 x 3040
IMX531                   20    4512 x 4512
IMX530                   24    5328 x 4608
Sony Pregius S sensors joining IDS uEye+ family

IDS peak SDK : “Configuring instead of programming”

Enhancing the ease of development and deployment for the uEye+ cameras, IDS has released update 2.6 of “IDS peak”, the comprehensive software development kit (SDK), available at no cost. Of course the cameras are Vision Standard compatible (U3V and GenICam), for those preferring third party SDKs, but IDS peak has much to offer IDS’ camera users.

While the SDK naturally includes conventional programming interfaces, IDS includes tools such as histograms, line and pixel views, color and greyscale conversions, useful automatic functions, and bandwidth management. These skew deployment helpfully towards “configuring instead of programming”.

IDS peak is available for both Windows and Linux OS. In addition, IDS peak SDK works not just with IDS USB3 cameras, but also IDS GigE cameras. So multi-camera applications with mixed interfaces are possible. Or your developers can benefit from familiarity with a single SDK across multiple applications, bringing efficiencies to your team. Download IDS SDKs here.

Call us at 978-474-0044. Tell us about your application goals and constraints, and we can guide you to any or all of cameras, lenses, lighting, software, and accessories.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

4 generations of SONY Pregius sensors explained

Newer is better, right? Well yes – if by better one means the very highest performance. More below on that. But the predecessor generations are performant in their own right, and remain cost-effective and appropriate for many applications. We often get the question “What’s the difference?” – in this piece we summarize key differences among the 4 generations of SONY Pregius sensors.

In machine vision, sensors matter. Duh. As do lenses. And lighting. It’s all about creating contrast. And reducing noise. Each term linked above takes you to supporting pieces on those respective topics.

This piece is about the four generations of the SONY Pregius sensor. Why feature a particular sensor manufacturer’s products? Yes, there are other fine sensors on the market, and we write about those sometimes too. But SONY Pregius enjoys particularly wide adoption across a range of camera manufacturers. They’ve chosen to embed Pregius sensors in their cameras for a reason. Or a number of reasons really. Read on for details.

Machine Vision cameras continue to reap the benefits of the latest CMOS image sensor technology since Sony announced the discontinuation of CCDs. We have been testing and comparing various sensors over the years, and frequently recommend Sony Pregius sensors when dynamic range and sensitivity are needed.

If you follow sensor evolution, even passively, you have probably also seen a ton of new image sensor names within the “Generations”. But most users make a design-in sensor and camera choice, and then live happily with that choice for a few years. As we do when choosing a car, a TV, or a laptop. So unless you are constantly monitoring the sensor release pipeline, it’s hard to keep track of all of Sony’s part numbers. We will try to give you some insight into the progression of Sony’s Pregius image sensors used in industrial machine vision cameras.

How can I tell if it’s a Sony Pregius sensor?

Sony’s image sensor prefixes make it easy to identify the sensor family. All Sony Pregius sensors have a prefix of “IMX.” Example: IMX174 – which today is one of the best sensors for dynamic range.

1stVision’s camera selector can be filtered by “Resolution” and you can scroll and see the sensors with a prefix of IMX.  CLICK HERE NOW

What are the differences in the “Generations” of Sony Pregius Image sensors?

Sony Pregius Generation 1:

Gen 1 primarily consisted of a 2.4 MP sensor with 5.86 µm pixels, but with a well depth (saturation capacity) of 30 ke- it remains unique in this regard among the generations. Sony also brought the new generations to market with “slow” and “fast” versions of the sensors at two different price points. In this case, the IMX174 and IMX249 were incorporated into industrial machine vision cameras providing two levels of performance – for example, the DALSA Nano M1940 (52 fps) using the IMX174 vs. the DALSA Nano M1920 (39 fps) using the IMX249, with the IMX249 priced 40% lower.
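
Well depth matters because, together with read noise, it sets dynamic range. A hedged illustration – the 30 ke- figure is from the text above, while the read noise is an assumed ballpark, not a quoted spec:

    import math

    well_depth_e = 30_000   # Gen 1 saturation capacity, electrons (from the text)
    read_noise_e = 7.0      # assumed ballpark read noise, electrons

    dr_db = 20 * math.log10(well_depth_e / read_noise_e)
    print(f"dynamic range ~{dr_db:.0f} dB")  # ~73 dB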

Sony Pregius Generation 2:

Sony’s main goal with Gen 2 was to expand the portfolio of Pregius sensors, which spans VGA to 12 MP image sensors. The pixel size decreased to 3.45 µm, along with well depth to ~10 ke-, but noise also decreased! The smaller pixels allowed smaller-format lenses to be used, saving overall system cost – though this became more taxing on lens resolution, which must resolve the 3.45 µm pixels. In general, Gen 2 offered a great family of image sensors, and in turn an abundance of industrial machine vision cameras at lower cost than CCDs, with better performance.

1stVision’s camera selector can be filtered by “Resolution” AND the pixel size corresponding to one of the generations. You will have a list of cameras from which you can select those starting with IMX – e.g. all Generation 2 sensors will be 3.45 µm – and you can narrow to a desired resolution. CLICK HERE NOW

Sony Pregius Generation 3:

For Gen 3, Sony took the best of both Gen 1 and Gen 2. The pixel size increased to 4.5 µm, increasing the well depth to 25 ke-! This generation has fast data rates, excellent dynamic range, and low noise. The family ranges from VGA to 7.1 MP. Gen 3 sensors started appearing in our machine vision camera lineup in 2018 and have continued to be designed into cameras over the last few years.

Sony Pregius Generation 4:

The 4th generation is denoted Pregius S, and is designed into a range of cameras from 5 through 25 Megapixels. Like the prior generations, Pregius S provides global shutter for active pixel CMOS sensors using Sony Semiconductor’s low-noise structure.

New with Pregius S is a back-illuminated structure – this enables smaller sensor size as well as faster frame rates. The benefits of faster frame rates are self-evident. But why is smaller sensor size so important? If two sensors, with the same pixel count, and equivalent sensitivity, are different in size, the smaller one may be able to use a smaller lens – reducing overall system cost.

Surface- vs back-illuminated image sensors – courtesy SONY Semiconductor Solutions Corporation

Pregius S benefits:

With each Pregius S photodiode closer to the micro-lens, a wider incident angle is created. This admits more light. Which enhances sensitivity. At low incident angles, the Pregius S captures up to 4x as much light as Sony’s own highly-praised 2nd generation Pregius from just a few years ago!

With pixels only 2.74 µm square, one can achieve high resolution even in small cube-size cameras, continuing the evolution of more capacity and performance in less space.

Courtesy Sony Sensors

Fun fact: The “S” in Pregius S is for “stacked” – the layered architecture of the sensor, with the photodiode on top and circuits below, which as noted has performance benefits. It’s such an innovation – despite already high-performing Gens 1, 2, and 3 – that Sony named Gen 4 “Pregius S” to really call out the benefits.

Summary

While Pregius S sensors are very compelling, the prior generation Pregius sensors remain an excellent choice for many applications. It comes down to performance requirements and cost, to achieve the optimal solution for any given application.

Pregius sensors by generation and sizes – Courtesy Sony Sensors

Many Pregius sensors, including Pregius S, can be found in industrial cameras offered by 1stVision. Use our camera selector to find Pregius sensors – any starting with “IMX”. For Pregius S in particular, supplement that prefix with a “5”, i.e. “IMX5”, to find Pregius S sensors like IMX540, IMX541, …, IMX548.

Contact us

Sony Pregius image sensor Comparison Chart

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

3D IDS Ensenso C (Color) Series

Sometimes just the Z-values are enough, no image needed at all. Some applications require pseudo-images generated from a point cloud – whether in monochrome or with color tones mapped to Z values. Yet other applications require – or benefit from – 3D digital point cloud data as well as color rendering. IDS Ensenso’s C Series provides stereo 3D imaging with precise metrics as well as true color rendering.

Two models of Ensenso stereo cameras – Images courtesy of IDS

If you want an overview of 3D machine vision techniques, download our Tech Brief. It surveys laser triangulation, structured light, Time of Flight (ToF), and stereo vision. If you know you want stereo vision, you might like an overview of all IDS Ensenso 3D offerings.

But if you know you want stereo 3D accuracy to 0.1mm, with color rendering, let’s dive in to the IDS Ensenso C Series. If you prefer to speak with us instead of reading further, just call us at 978-474-0044, or request that we follow up via our contact form.

Key differentiator is “projected texture”

In the short video below, we see 3 scene pairs. For each pair, the leftmost images are the unenhanced 3D image. The rightmost images take advantage of the projected texture created by the LED projector and the RGB sensor, augmenting the 3D point cloud with color information. It can be a differentiator for certain applications.

Video courtesy of IDS

Application areas

Let’s start with candidate application areas, from the customer’s perspective, before pointing out specific features. In particular let’s look at application areas including:

  • Detect and recognize
  • Bin picking
  • De-palletizing
  • Test and measure

Detect and recognize

The ability to accurately detect moving objects to select, sort, verify, steer, or count can enhance (or create new) applications. Ensenso C’s high-luminance projector enables high pattern contrast for single-shot images. Video courtesy of IDS.


Bin picking

Regardless of a robot’s gripping sensitivity, speed, and range of motion, 3D imaging accuracy is central to success. Ensenso C’s integrated RGB sensor can make all the difference for color-dependent applications. Video courtesy of IDS.


De-palletize

De-palletizing might seem like a straightforward operation, but the system must detect object size, rotation, and position even with varied and densely stacked goods. Ensenso C supports all those requirements – even from a distance. Video courtesy of IDS.


Test and measure

Automated inspection and measurement of large-volume objects are key for many quality control applications. Precision to the millimeter range can be achieved with Ensenso C at working distances even to 5m. Video courtesy of IDS.


IDS Ensenso C Series

With two models to choose from, Ensenso C supports a range of working distances and focal distances – see specifications.

Both models utilize GigE Vision interface; both embed a 200W LED projector; both use C-mount lenses; both provide IP 65/67 protection. And both models are easy to configure with the Ensenso SDK: Windows or Linux; sample programs including source code; live composition of 3D point clouds from multiple viewing angles; robot eye-hand calibration; and more.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Test your parts in 3D lab

Have you wondered if 3D laser profiling would work for your application? Unless you have experience in 3D imaging, for which laser profiling is one of several popular methods, you may be uncertain of the fit for your application. Yes, one can read a comprehensive Tech Brief on 3D methods, or product specifications, but wouldn’t it be helpful to see some images of your parts taken with an actual 3D Laser Profiler?

Image courtesy Teledyne DALSA.

While prototyping at your facility is of course one option, if your target objects can be shipped, Teledyne DALSA has a Z-Trak Application Lab, whose services we may be able to arrange at no cost to you. Just describe your application requirements to us, and if 3D laser profiling sounds promising, the service works as follows:

  1. Send in representative samples (e.g. good part, bad part)
  2. We’ll configure Z-Trak Application Lab relative to sample size, shape, and applications goals, and run the samples to obtain images and data
  3. We’ll send you data, images, and reports
  4. Together we’ll interpret the results and you can decide if laser profiling is something you want to pursue

Really, just send samples in? Anything goes? Well not anything. It can’t be 50 meters long. Maybe a 15 centimeter subset would be good enough for proof of concept? And if the sample is a foodstuff, it can’t suffer overnight spoilage before it arrives.

A phone conversation that discusses the objects to be inspected, their dimensions, and the applications goal(s) is all we need to qualify accepting your samples for a test. Image courtesy of Teledyne DALSA.


Case study

In this segment, we feature outtakes from a recent use of the Z-Trak Application Lab, for a customer who needs to do weld seam inspections. The objective is to image a metal part with two weld seams using a Z-Trak 3D Laser Profiler and produce 3D images for evaluation of application feasibility. The images and texts shown here are taken from an actual report prepared for a prospective customer, to give you an understanding of the service.

Equipment:

  • Z-Trak LP1-1040-B2
  • Movable X,Y stage
    X-Resolution: ~25 um
    Y-Resolution: 40 um
    WD: ~50 mm

Image courtesy Teledyne DALSA

Conditions:
The metal part was laid flat on the X,Y stage under the Z-Trak. The stage was moved to scan the part.

To the right, see the image generated from a perpendicular scan of the metal part. Image courtesy Teledyne DALSA.

The composite image below requires some explanation. The graphs on the middle column, from top to bottom, show Left-Weld-Length, Right-Weld-Length, and Weld-Midpoint-Width (between the left and right welds), respectively. The green markup arrows help you correlate the measurements to the image on the left. The rightmost column includes summary measurements such as Min, Max, and Mean values.

Image courtesy Teledyne DALSA

Now have a look at a similar screenshot, for Sample #2, which includes a “bad weld”:

Image courtesy Teledyne DALSA

With reference to the image above, the customer report included the following passage:

The top-right image is the left weld seam profile. In the Reporter window the measurement of this seam is 1694.79 mm long. However, a defect can be noted at the bottom of the left weld. In addition to the defect it can be seen from the profile that the weld is not straight in the Z-direction. The weld is closer to the surface at the top and further from the surface at the bottom.

Translation: The automated inspection reveals the defective weld! Naturally one would have to dig in further regarding definitions of “good weld”, “bad weld”, tolerances, where to set thresholds to balance yields and quality standards vs. too many false positives, etc.

Conclusion

The report provided to the customer concluded that “This application is feasible using a Z-Trak 3D Laser Profiler.” While it’s likely that outcome will be achieved if we qualify your samples and application to use the Z-Trak Application Lab service, it’s not a foregone conclusion. We at 1stVision and our partner Teledyne DALSA are in the business of helping customers succeed, so we’re not going to raise false hopes of application success.

Recap

To summarize, the segments above are representative outtakes from an actual report prepared by the Z-Trak Application Lab. The full report contains more images, data, and analysis. Our goal here is to give you a taste for the complimentary service, to help you consider whether it might be helpful for your own application planning process.

Next steps?

To learn more, see a recent blog “Which Z-Trak 3D camera is best for my application?“. Or have a look at the Z-Trak product overview.

If you’d like to send in your parts, please use this “Contact Us” link or the one below. In the ‘Tell us about your project’ field, just write something like “I’d like to have parts sent to the Z-trak lab.” If you want to write additional details, that’s cool – but not required. We’ll call to discuss details at your convenience.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

High-Resolution 360° optics by Opto Engineering

Who needs 360° optics? Granted, it’s specialized stuff. But Opto Engineering’s innovative lens series enable single-camera inspection of objects many users might not have thought possible! For example, a Bi-Telecentric system uses mirrors to image all 4 sides of an object at once, without moving the camera or the object. Or a boroscope gets the optics and a light inside a tight space, creating a panoramic view of the interior.

Even experienced machine vision professionals may never have seen or heard of some of these specialized optics. Unless one knows of such lens systems, one might try to design a multi-camera system for an application, when in fact a single camera could have been used!

In the segments below, we highlight categories for which there are lens series available, together with representative images, diagrams, and texts. The goal here is not a master class in optics – just an overview to raise awareness.

Pericentric lenses

Opto Engineering provides pericentric lenses, allowing 360° by 180° FOV from a position above an object. That provides 360° top and lateral views with a single camera. The PC Series, with five choices, are designed to perform complete inspection of objects up to 60 mm in diameter. Typical applications include bottleneck thread inspection and data matrix reading – the code will always be properly imaged regardless of its position.

Suppose you produce and pack a product in a plastic container such as the one shown here. Quality control inspections may require verifying each container is labeled with print, graphical, and/or coded information. Image courtesy Opto Engineering.

Below we see the top and sides imaged in a single exposure, using a PC lens:

Image generated with a pericentric lens from the PC Series – Courtesy Opto Engineering.

The PCCD Series, with four members, enables the 360° side view of small objects (sample diameter 7 – 35 mm). Perfect for bottle cap and can inspection.

Above, the top image is generated from a lens that uses both reflection and refraction to image the vial’s interior as well as the exterior “shoulder”. The interior check is for any impurities before filling, and the exterior aspect is to obtain OCR characters or bar codes for tracking.


Hole inspection lenses

The PCHI Series includes 10 members, covering a range of sensor sizes, and includes a liquid lens option for adjustable focus control. Unlike a common lens with a flat field of view (FOV), these lenses provide a focused view of both the cavity bottom as well as the interior sidewalls! Perfect for thread inspection or cavity checks for contamination from above the cavity entrance.

PCHI Series hole inspection lenses and applications – Image courtesy Opto Engineering.

Bi-Telecentric lens systems

Many are familiar with telecentric lenses, which hold magnification constant, regardless of an object’s distance or position in the field of view. Consider Opto Engineering’s Bi-Telecentric Series, TCCAGE. Using multiple mirrors, parts can be measured and inspected horizontally from each 90° side, with no rotation required. Two different illumination devices are built into the system to provide either backlight or direct part illumination. In the example to the right, syringes are inspected for length and angle from all 4 directions.

Image courtesy of Opto Engineering.

Boroscopic probes

A boroscope gets the optics into tight spaces, for panoramic cavity imaging from the inside. The PCBP series includes built-in compact illumination. It’s ideal for 360 degree inspection of interiors with static parts.

Image courtesy of Opto Engineering.

Focus controls

In addition to fixed focus and manual focus (with lockring) options, some lenses in the PCHI and PCBP Series include Adjustable Focus (AF) features. With liquid lens technology, using these lenses with varying product sizes and dimensions just got easier. Millisecond repositioning allows extremely fast focus changes, letting you dial in the exact position for multiple product sizes – inspection of an even wider range of SKUs with a single system.


If your imaging application can be solved with more conventional lenses, lucky you. But if your requirements might otherwise be impossible to address, or seemingly need two or more separate cameras, or complex rotation controls and multiple exposures, call us at 978-474-0044. You might not have realized there are specialized optics designed precisely for your type of application!

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Optimize wavelength to maximize contrast

Seasoned machine vision practitioners know that while the sensor and the optics are important, so too is lighting design. Unless an application is so “easy” that ambient light is enough, many or most value-added applications require supplemental light. “White” light is what comes to mind first, since it’s what we humans experience most. But narrow-band light – whether colored light within the visible spectrum, or non-visible light with a sensor attuned to those frequencies – is sometimes the key to maximizing contrast.

Gold contrasts better with red light than either white or blue – Image courtesy of CCS America

In the illustrations above, suppose we have an application to do feature identification for gold contacts. The ideal contrast to create is where gold features “pop” and everything that’s not gold fails to appear at all, or at most very faintly. If the targets that will come into the field of view have known properties, one can often do lighting design to achieve precisely such optimal outcomes.

In this example, consider the white light image in the top left, and then the over-under images created with red and blue light respectively. The white light image shows “everything” but doesn’t really isolate the gold components. The red light does a great job showing just the gold (Au). The blue light emphasizes silver (Ag). The graph to the right shows four common metals relative to how they respond under which (visible) wavelengths. Good to know!
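
The selection logic itself is straightforward once you have reflectance data for your materials: pick the wavelength that maximizes the difference between feature and background. A toy sketch with invented reflectance values (illustrative only, not measured data):

    # Hypothetical reflectance (0-1) at candidate wavelengths (nm)
    reflectance = {
        "gold":      {470: 0.35, 525: 0.60, 625: 0.92},
        "substrate": {470: 0.30, 525: 0.30, 625: 0.25},
    }

    # Choose the wavelength where the gold features stand out most
    best = max(reflectance["gold"],
               key=lambda wl: abs(reflectance["gold"][wl] - reflectance["substrate"][wl]))
    print("illuminate at", best, "nm")  # 625 nm (red): gold pops, background stays dark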


For an illustrated nine-page treatment of how various wavelengths improve contrast for specific materials or applications, download this Wavelength Guide from our Knowledge Base. You may be able to self-diagnose the wavelength ideal for your application. Or you may prefer to just call us at 978-474-0044, and we can guide you to a solution.


To the left we see 5 plastic contact lens packages, in white light. Presence/absence detection is inconclusive. Image courtesy of CCS America.

With UV light, a presence/absence quality control check can be programmed based on a rule that presence = 30% or more of the area in each round renders as black. Image courtesy of CCS America.


It all comes down to the reflection or absorption characteristics of specific properties with respect to certain wavelengths. Below we see a chart showing the peaks of some of the more commonly used wavelengths in machine vision.

Commonly used wavelengths – Image courtesy CCS America

For more details on enhancing contrast via lighting at specific wavelengths, download this Wavelength Guide from our Knowledge Base. Or click on Contact Us so we can discuss your application and guide you. 1stVision has three partners – all in the same business group – offering different lighting geometries and wavelengths to create contrast. CCS America and Effilux offer a variety of wavelengths (UV through NIR) and light formats (ring light, back light, bar light, dome). Gardasoft has full offerings for lighting controls. Tell us about your application and we’ll help you design an optimal solution.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Ensenso – 1stVision expands 3D portfolio with stereo vision

IDS Ensenso 3D cameras
Ensenso 3D Cameras – Courtesy of IDS

Most industries go through waves of technology and product innovation as they mature. In powered flight we had propellers long before jets, though each still has its place. In machine vision, 1D and 2D imaging took several decades to mature before 3D moved from experimentation and early innovation to mature products affordable to many. Download our Tech BriefWhich 3D imaging technique is best for my application?“, if you haven’t yet committed to a particular approach.

Stereo vision is one of the fastest growing approaches to 3D imaging, thanks to Moore’s Law, ever more powerful and compact cameras and processing power, and modularized and turnkey products. 1st Vision is pleased to represent IDS Imaging’s Ensenso series of 3D cameras. In addition to the downloadable Tech Brief linked above, we encourage you to read on for an overview of all four Ensenso 3D camera families, the S, N, C, and X Series, respectively. If you prefer we guide you directly to a best-fit for your application, just give us a call at 978-474-0044.


Before we get to several different stereo vision series, and their respective capabilities, we note that IDS’ Ensenso S Series in fact utilizes the structured light approach rather than stereo vision. Per the Tech Brief linked above, there are several ways to do 3D.

S Series

Ensenso S Series are compact 3D industrial cameras combining AI software with 3D infrared laser point triangulation, generating point clouds to Z dimension accuracy of 2.4 mm at 1 meter distance. They are a cost-effective solution for many budget-conscious and high volume 3D applications. Each is in a zinc housing with IP65/67 protection.

3D imaging via structured light – Courtesy of IDS

Back to stereo vision, IDS Ensenso N, C, X and XR 3D Series are based on the stereo vision principle.

The Stereo Vision principle – Courtesy of IDS

N Series

Ensenso N Series 3D cameras are designed for harsh industrial environments and pre-calibrated for easy setup.  N Series 3D cameras are “TM Plug & Play” certified by Techman Robot, and suitable for many 3D applications such as robotics and factory automation.

The Ensenso N Series 3D camera works for either static or moving objects even in changing or low light conditions.  With IP65/67 protection, and a compact design, the Ensenso N Series 3D cameras fit into tight spaces or in moving components such as robotic arms. There are two variants:

  • N3X: aluminum housing for optimal heat dissipation in extreme environments
  • N4X: cost-effective plastic composite housing

C Series

The Ensenso C Series 3D camera also uses stereo vision, but additionally embeds a color CMOS RGB sensor, pre-calibrated and aligned with the stereo vision system. This allows a “colorized” effect as shown in the video clip below, where one sees 3 adjacent image pairs. Each “right image” is the colorized augmentation on top of the initial stereo point cloud view to its left. Most would agree it lends a more realistic look.

Color sensor lends more realistic look to point cloud – Courtesy IDS

The C Series delivers Z accuracy of 0.1 mm at 1 meter distance with the C-57S, or 0.2 mm at 2 meters with the C-57M.

Ensenso C Series – small or medium option – Courtesy of IDS

X Series

The Ensenso X Series is an ultra-flexible, modular 3D GigE industrial camera system, available in two variants: X30 and X36.

Ensenso X Series – Courtesy IDS

The Ensenso X30 3D camera system is designed to capture moving objects, making it suitable for many industrial applications such as factory automation production lines and bin picking.

For static objects, use the Ensenso X36 3D camera system. FlexView2 greatly increases resolution, producing 3D images with precise detail and definition of the captured objects, even in low light or on reflective surfaces.

The Ensenso X 3D camera system includes a 100 watt LED projector with an integrated GigE power switch. The system can be configured with a choice of GigE uEye cameras using 1.6 or 5 megapixel CMOS monochrome sensors to create your customized 3D imaging system.

Working distances may be up to 5m, and point cloud models may be developed for objects up to 8 cubic meters in volume!


All of the above cameras include the Ensenso SDK software that accelerates the application set up, configuration and development time. Ensenso 3D cameras are ideal for numerous industrial 3D applications including robotics, logistics, factory automation, sorting, and quality assurance.

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

37M and 67M 10GigE cameras | Higher resolution and speed!

Teledyne DALSA 10GigE Genie Nano

In this focused blog we call out 4 specific camera models with 10GigE interfaces, ideal for life sciences and security applications, though of course not limited to those. In particular, there are two 1:1 square-format sensors, with 37 and 67M pixels respectively, each available in monochrome and color.

Teledyne DALSA 10GigE Genie Nano
Genie Nano 10GigE camera : courtesy Teledyne DALSA

Context if you want it: Teledyne DALSA recently augmented its Genie Nano series to include the 10GigE interface, beyond the previous 1, 2.5, and 5GigE offerings, per our overview blog released in June 2023. Of particular interest there is a graphic showing throughput by interface across the whole range of Teledyne area scan cameras. It's a convenient way of understanding how the different levels of GigE interface compare to USB, Camera Link, and CXP interfaces.


Back to the 37 and 67M cameras in particular… What makes these cameras distinctive in the current market?

Non-stitched sensors:

These sensors are non-stitched. While some competitors use dual or quad sensor readout zones to drive framerates, they must work hard to achieve tap balance satisfactory to the user – not easy under certain conditions. With the 10GigE interface, when image quality is paramount for your application, these cameras deliver impressive framerates and images free from any tap-balance artefacts. At full resolution, the 67M camera delivers 15fps, and the 37M provides 20fps.

If you love these two sensors and want even faster framerates than 10GigE can support, note that the same sensors appear in the Falcon4 with the Camera Link High Speed (CLHS) interface. With CLHS these sensors deliver 90 and 120fps, respectively!

Other features of note:

  • 67M model is the most compact on the market at 41 mm x 59 mm x 59 mm
  • Multi-ROI up to 16 regions – further boost framerates by moving only essential image data
  • Robust and performant Teledyne Sapera driver or 3rd party GenICam compliant SDKs
  • 10 – 36V or PoE (single cable for power, data, and control signals)
  • M42 lens mount

Precision Time Protocol (PTP) synchronizes two or more cameras over a GigE network, avoiding the need for hardware triggers and controllers in many applications.
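
To make that concrete, here is a minimal sketch, assuming a GenICam tool such as the open-source harvesters Python library and the standard GigE Vision bootstrap feature name GevIEEE1588. Producer paths vary by SDK, and PTP support and feature naming vary by camera model and firmware, so treat this as illustrative rather than Teledyne DALSA documentation.

```python
# Hedged sketch: opting every camera on the network into PTP via its
# GenICam node map, using the "harvesters" library with a GenTL
# producer (.cti) supplied by your SDK. "GevIEEE1588" is the standard
# GigE Vision feature name, but support varies by model/firmware.
from harvesters.core import Harvester

h = Harvester()
h.add_file('/path/to/your/GenTL_producer.cti')  # illustrative path
h.update()                                      # enumerate cameras

for i in range(len(h.device_info_list)):
    ia = h.create(i)
    ia.remote_device.node_map.GevIEEE1588.value = True  # enable PTP
    ia.destroy()
h.reset()
```

Once the cameras' clocks converge, scheduled or timestamp-based triggering can replace hardware trigger wiring.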

Unique to Teledyne DALSA is their proprietary Trigger to Image Reliability (T2IR):

  • Manage exceptions in a controlled manner
  • Verify critical imaging events such as lost frames, lines, or triggers
  • Tag images for traceability
Trigger to Image Reliability (T2IR) – courtesy Teledyne DALSA

The Teledyne e2v sensors used in these cameras are designed and produced in North America. Not to play politics, since we all participate in global supply chains in our personal and professional lives, but for certain contract approvals or risk assessments it can be beneficial when the country-of-origin question is an easy one to answer.

Sometimes new camera families or models are just "me too" market followers – often worthy but not innovative as such. But the Genie Nano 10G-M/C-6200 and 10G-M/C-8200 cameras are game changers. Call us for a quote! 978-474-0044.

Even if you don't need the resolution or performance of the 37 or 67M Genie Nano 10GigE right now, the Teledyne DALSA Genie Nano families include products using 1, 2.5, and 5GigE, 10GigE, CXP, and CLHS interfaces. That ranges from "fast enough" (and modestly priced) through fast and on to very fast. If you have diverse imaging projects, there are economies of scale, and efficiencies in deployment, by using drivers, SDKs, and features shared by multiple cameras – and mastered by your team.

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

3D Scanning Applications with AT Automation Technology

Previously we’ve introduced AT Automation Technology 3D scanners, which use triangulation – together with precision optics and embedded algorithms – to build a point cloud representation of 3D objects.

AT Automation Technology 3D scanner
– courtesy of Automation Technology

While there are interesting scanning applications in diverse industries, including automotive, food processing, battery production, display inspection, and more, in this piece we focus on the automotive industry. Below we offer a collection of short videos that help to tell the story. Each application utilizes AT Automation Technology 3D laser profilers.

CONTACT US to discuss your application! We have longstanding returning customers who know we like to help you choose the right cameras and components. It’s what we do.

Inspection of brake discs, for surface defects, duration 1 minute 24 seconds:


Inspection of stamped metal parts, duration 37 seconds:


Inspecting asymmetrical objects, duration 50 seconds:


You don't have to be in the automotive industry to take advantage of AT Automation Technology 3D laser scanning! Food processing, display inspection, battery production – indeed all sorts of 3D applications are enabled or enhanced by laser triangulation approaches, building a 3D point cloud for a scanned object and comparing the scan to an idealized perfect object. The difference calculation determines whether the test object is within the defined tolerances.

3D point cloud
From real space to 3D point cloud model – Image courtesy of AT Automation Technology
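
For intuition, here is a minimal sketch of that difference calculation in Python with NumPy and SciPy: compare the scanned cloud to a reference cloud by nearest-neighbor distance and apply a tolerance. The synthetic data and the 0.2 mm tolerance are illustrative assumptions, not AT's actual metrology algorithms.

```python
# Minimal sketch: flag a scanned part whose deviation from the "golden"
# reference cloud exceeds tolerance. Data and tolerance are illustrative.
import numpy as np
from scipy.spatial import cKDTree

reference = np.random.rand(10_000, 3)                          # stand-in for the ideal part
scan = reference + np.random.normal(0, 5e-5, reference.shape)  # stand-in for a 3D scan

distances, _ = cKDTree(reference).query(scan)  # per-point nearest-neighbor deviation

tolerance_m = 0.0002                           # 0.2 mm, application-specific
print(f"max deviation: {distances.max() * 1000:.3f} mm")
print("PASS" if distances.max() <= tolerance_m else "FAIL")
```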

We have videos for other industries and applications available, and sales engineers who can help guide you to a solution for your particular needs. Call us at 978-474-0044.

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

CEI Camera Enclosures and Mounts

First generation cars were open-topped – until motorists demanded creature comforts like protection from rain, a heater, and – later – air-conditioning. While some machine vision applications allow the camera to operate unprotected at ambient temperatures – other applications are more demanding. Components Express Inc. (CEI) provides a range of enclosures and mounts – both standard and custom – to protect your camera.


Standard enclosures may be enough for applications that don't require wash-down, but for many applications the camera, lens, and ports need some protection. Lightweight, low-profile designs are available in various diameters, suitable for cameras from diverse manufacturers. Adjustable mounts with pre-tapped holes are provided. In addition to generic enclosures suitable for diverse camera models, enclosure designs are available for Teledyne DALSA Genie Nano, Allied Vision Mako and Alvium, and cameras from other manufacturers.

55mm-IP67 Series Round – type 2 anodizing – Image courtesy of Components Express, Inc.

Food processing or high-particulates deployments often must be in wash-down configurations.

Stainless Series IP67 – round stainless wash-down design – Image courtesy of Components Express, Inc.

Extruded housing, with IP67 rating, features CEI Integrated Connector Design that keeps cord grips inside the housing:

Video shows stepwise field-assembly of camera and lens into enclosure – courtesy of Components Express, Inc.

Or you may need a sturdy mount to position the camera on a stable base, with adjustable alignment options.

Index Mount™ EN-M4 Adjustable Camera Mount – Image courtesy of Components Express, Inc.

Custom enclosures and mounts are also available for the most demanding applications. Besides all the Components Express Inc. (CEI) mounts and enclosures, some may also be interested in air curtains, windows, and filters by Midwest Optical. These also come in custom sizes, lengths, and focusing solutions. Discuss your requirements with us!

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

TAMRON Wide Band SWIR Lenses and Applications

TAMRON SWIR lenses

Short wave infrared (SWIR) imaging is a specialized segment of machine vision, applying automated imaging outside the human-visible spectrum. Able to see through opaque plastic bottles to verify or control fill levels, inspect fruit for bruising, sort recyclables, inspect silicon wafers, etc., SWIR applications require SWIR-specific sensors, cameras, and lenses.

Visible vs SWIR image of same targets – image courtesy of TAMRON

TAMRON Wide Band SWIR lenses are designed for the Sony IMX990 / IMX991 sensors and other sensors with a 5µm pixel pitch. These wide-band lenses work across a wide range of wavelengths, from the visible range into the short-wave infrared (SWIR), 400 – 1700nm. In addition, TAMRON's proprietary eBAND anti-reflection lens coating technology provides a constant 80% spectral transmittance over the whole visible-to-SWIR range.

Tamron wide-band SWIR lens series – Image courtesy of TAMRON

Vein imaging application overlays SWIR image of veins into visible image of patient forearm –
Image courtesy TAMRON
Monitor moisture levels in crops from airborne drone – Image courtesy TAMRON
Banknote under visible light (left) and SWIR (right) – Images courtesy of Allied Vision
Opaque and clear plastic bottles in visible light (left) and SWIR (right) –
Images courtesy of Allied Vision

Cool applications! Would 1stVision happen to carry any cameras that utilize the Sony sensors for which these TAMRON lenses are designed? Yes, of course:

Allied Vision Technologies’ Alvium USB and CSI cameras, models:

Alvium 1800 U/C-030

Alvium 1800 U/C-130

Allied Vision Technologies’ Goldeye cameras, models:

Goldeye CL-030 TEC1

Goldeye CL-130 TEC1


So far we’ve got SWIR lenses, sensors and cameras, the latter with several interface and performance options. How about SWIR lighting, to create the proper contrast? We’ve got that too. Call us at 978-474-0044.

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

IR Applications for AT – Automation Technology IRSX Smart Cameras 

IRSX Series Smart Cameras

"Smart cameras" for regular machine vision are not new, but a "smart thermal camera" is completely new. Smart cameras attractively reduce or eliminate the need for a host computer for some applications, putting the image processing onboard the camera. That can lower costs and/or speed processing by eliminating components and data transfer time.

AT – Automation Technology provides the IRSX series, bringing smart cameras to the IR space.  

IRSX Series Smart Cameras – courtesy AT – Automation Technology

In this overview, we address in turn each of:

  • Camera attributes
  • Smart camera and communications/control features
  • Example applications

Camera attributes: (physical)

Detector type: Focal Plane Array (FPA), uncooled microbolometer

Range: Thermal measurement range -10°C to +550°C

Precision: Measurement accuracy of +/- 2°C or +/- 2%

ROIs: Supports temperature evaluation based on an unlimited number of ROIs

Rugged: Rugged housing with air purge for the lens

Size: small enough to fit in the tightest of spaces (55 x 55 x 77 mm)

Options: Different models with different resolutions, FoV and frame rates available


IRSX Standards Compliance –
courtesy AT – Automation Technology

Smart camera and communications/control features:

GigE Vision: Complies with the newest GenICam standard

SDK options: Bundled irsxSupportPackage, AT’s comprehensive SDK, includes interfaces to popular third party packages as well

Smart Processing App: Complete functionality to create application solutions for stand-alone operation of the camera

Web interface: Web-based configuration of your measurement task and display of results

IoT communications: Modbus server and client for IoT communication with
external devices


Example Applications: Besides the general purpose ability to monitor industrial infrastructure for early detection of combustion, specific industries taking advantage of IR applications include plastics, iron and steel, food processing, automotive, chemical, oil and gas, and electrical utilities, to name a few.

Do you have warehouses with combustible goods?

Or installations where there is a risk of fire?

Detect and respond to critical conditions BEFORE there’s an outbreak of fire.


Counting, packing, and sorting

Prepared meals: check the sealing of cover foils for defects

Thermal process monitoring during production


Foamed parts (e.g. dashboards): inline inspection for voids in the foam layer

Hot stamping: Monitor temperature distribution before and after forming
to optimize product quality


Are you already using IR imaging, and want to know more about how a smart IR camera could enhance existing applications – or innovate new ways to add value to your product and service offering? Or will your first IR application be with a smart camera? Call us at 978-474-0044 and we’ll be happy to learn more about your unique applications requirements – and how we can help. That’s what we do.

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

AT – Automation Technology 3D Profilers – What makes them different? 

3D laser profiling is widely used in diverse industries and applications. There are a number of mature offerings and periodic next-generation innovations. So what would it take to convince you to take a look at the value proposition of AT – Automation Technology's C6 Series? In particular the C6-3070, the fastest laser triangulation profiler on the market.

AT says that "C6 Series is an Evolution. C6-3070 is a Revolution". Let's briefly review the principles of laser profile scanning, followed by what makes this particular product so compelling.

3D profile scanning components – courtesy Automation Technology

What are the distinguishing characteristics of each item labeled in the above diagram?

  • Target object: An item whose height variations we want to digitally map or profile
  • XYZ guide: The laser line paints the X dimension; each slice is in the Y dimension; height correlates to Z
  • Laser line projector: paints the X dimension across the target object
  • Objective lens: focuses reflected laser light
  • CMOS detector: array of pixel wells, or pixels, such that for each cycle, the electronic value of a pixel scales with the height value of the geometrically corresponding position on the target object
  • FPGA and I/O circuitry: provide the timing, the smarts, and the communications

The key to laser triangulation is that the triangulation angle varies in direct correlation with the height variances on the target object, which reflects the projected laser light through the lens and onto the detector. It's "just geometry" – though packaged efficiently, of course, into embedded algorithms and precisely aligned optics.
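
As a rough illustration of that geometry, the sketch below converts a laser-line shift on the detector into a height change, assuming a simple model with a known optical magnification and triangulation angle. Real profilers bake this into factory calibration; every number here is purely illustrative.

```python
# Simplified triangulation geometry: a height change on the target shifts
# the imaged laser line; the shift scales with sin(triangulation angle).
import math

def height_from_shift(pixel_shift, pixel_size_m, magnification, theta_deg):
    shift_on_sensor = pixel_shift * pixel_size_m      # laser-line shift on the detector
    shift_in_scene = shift_on_sensor / magnification  # the same shift at the target
    return shift_in_scene / math.sin(math.radians(theta_deg))

# 12-pixel shift, 5 µm pixels, 0.2x magnification, 30-degree angle -> 0.6 mm
print(height_from_shift(12, 5e-6, 0.2, 30))
```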

The goal in 3D profile scanning is to build a 3D point cloud representing the height profile of the target object.

Laser line reflections captured to create 3D point cloud of target object – courtesy Automation Technology

Speed and Resolution: 200kHz @ 3k resolution – the fastest on the market, thanks to AT's proprietary WARP (Widely Advanced Rapid Profiling) sensor technology. How does it work?

The C6-3070 imager has on-board pre-processing. In particular, it detects the laser line on the imager, so that only the part of the image around the laser line is transferred to the FPGA for further processing. This massively reduces the volume of data to be transferred, by focusing on just the relevant neighborhood around the laser line. Which means more cycles per second. Which is how 200kHz at 3k resolution is attained.

C6-3070 imager’s pre-processing sends just the portion of the image needed, thereby achieving higher framerates – courtesy Automation Technology
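
A back-of-envelope calculation shows why this matters; the row counts below are illustrative assumptions, not C6-3070 internals.

```python
# If only a narrow band of rows around the detected laser line leaves the
# imager, the per-cycle data volume (and hence cycle time) drops sharply.
full_rows, band_rows = 3000, 64   # hypothetical full frame vs. laser-line band
print(f"~{full_rows / band_rows:.0f}x less data per profile cycle")  # ~47x
```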

Modularity: When Henry Ford introduced the Model T, he is famously credited with saying "You can have it any color you like, as long as it's black." Ford achieved economies of scale with a standardized product, and almost all manufacturers follow principles of standardization for the same reason.

But AT – Automation Technology’s C6 Series is modular by design – each component of an overall system offers standard options. There are no minimum order quantities, no special engineering charges, and lead times are short because the modular components are pre-stocked.

For example:

  • Laser options (blue or red; laser classes 2M, 3R, 3B)
  • X-FOV (Field Of View) from 7mm to 1290 mm
  • Single or dual head sensors
  • Sensor parameters offer customizable Working Distance, Triangulation Angle, and Speed

Software: The cameras may be controlled by many popular third-party software products, as they are GigE Vision / GenICam 3.0 compliant. Or you may download the comprehensive and free AT Solution Package, optimized for use with AT's cameras. The SDK is a C-based API with wrappers for C++, C# and Python.

Besides the SDK itself, users may want to take advantage of the Metrology Package. The Metrology Package provides a toolset for evaluating measurement results.
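
As a sketch of what third-party control can look like, the snippet below acquires one buffer from any GigE Vision / GenICam compliant device using the open-source harvesters Python library. The producer path and the ExposureTime feature usage are assumptions, and harvesters API details vary somewhat by version; consult your SDK for the authoritative workflow.

```python
# Hedged sketch: generic GenICam acquisition via "harvesters".
from harvesters.core import Harvester

h = Harvester()
h.add_file('/path/to/your/GenTL_producer.cti')  # .cti supplied by your SDK
h.update()

ia = h.create(0)                                # open the first enumerated device
ia.remote_device.node_map.ExposureTime.value = 100.0  # SFNC feature, if exposed
ia.start()
with ia.fetch() as buffer:                      # one buffer of image/profile data
    comp = buffer.payload.components[0]
    print(comp.width, comp.height)
ia.stop()
ia.destroy()
h.reset()
```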

Pricing: You might think that a product asserted to be the fastest on the market would come at a premium price. In fact AT's 3D profilers are priced so competitively that they are often price leaders as well. At the time of writing, they certainly lead on price : performance in their class. Call us at 978-474-0044.

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

EFFI Flex2 Lights – What’s New? 

EFFI-Flex2 LED bar lights

EFFI Flex2 LED bar lights are the next generation modular lights with even more flexibility. 

Series 1 vs. EFFI-Flex2 NextGen Series – courtesy of Effilux

Getting the lighting right is as important to machine vision success as choosing the right sensor, camera, and interface. Most lights are "pre-configured" to a specific lighting geometry as well as wavelengths. Lighting is still tricky, and in most cases testing is needed with various configurations. If your samples are small and don't require special handling or equipment, your imaging partner may offer a lighting lab in which they can test your samples to optimize the lighting configuration.

But shipping samples and re-creating your conditions in a partner’s lab isn’t always practical, and you may want to do your own testing and configuration, taking advantage of your domain expertise and home-field advantage. With EFFI Flex2 lights, the standard components and range of settings yield 36 configurations in 1 light! How so? Each unit comes with 3 diffuser windows x 4 lens positions x 3 electronic modes = 36 configurations. Optional additional accessories, like polarizers, take the calculation to more than 100 configurations!

EFFI Flex2 LED modular bar lights are configurable to a wide range of applications.  The user can adapt the optics, the electronics, and the mechanics, thanks to the engineering design.  Perhaps you’ll embed the unit in a “forever” deployment mode, never again re-configuring a specific unit once you lock down the optimal settings.

Or maybe you’ll adapt your light again to repurpose it in another application. With the long service life of LED lights, the light may well outlive the application. Or maybe you do multiple applications, and want a light that’s so versatile you can do all your testing in house, letting the lighting drive the final choice of sensor and camera model.

As with the first generation EFFI-Flex product, EFFI-Flex2 series is designed to provide easy adaptation of:

  • Optics – adjust the lens position and the diffusers
  • Electronics – built-in multimode driver offers 3 operating modes in one light
  • Mechanics – optical lengths from 60mm – 2900mm (factory configured)
One-minute video shows key optical and electronic configuration options

Optically, lens positions may be user-adjusted for emission angles ranging from 90 to 10 degrees. Each unit ships with swappable diffuser windows for clear, semi-diffused, or opaline light. Further, if necessary for your particular application, optional optical accessories are available: polarizer, linescan film, and cylindrical lens.

Electronically, the built-in multimode driver features 3 electronic modes: AutoStrobe with 450% Overdrive, Adjustable Strobe with controlled intensity from 10% to 100%, and Dimmable Continuous with controlled intensity from 10% to 100%. The 450% Overdrive mode is 1.5 times more powerful than the original EFFI-Flex LED bar light in overdrive.

The driver software makes it easy to select among the 3 modes, and the parameters within each mode.

Mechanics: While the length of a given unit cannot be adapted once delivered, one may order lengths from as short as 60mm to as long as 2900mm. If your default units are English rather than metric, that's from less than 3 inches to as much as 9.5 feet!

The EFFI-Flex original series remains in production. If you don't require the flexibility of EFFI-Flex2, with up to 36 configurations per unit shipped, the original series offers great value in its own right. Call us at 978-474-0044 to speak with one of our sales engineers for guidance, or take out your own appendix with this side-by-side comparison diagram:

Series 1 vs. NextGen EFFI-Flex2 – courtesy of Effilux

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

10GigE cameras join Teledyne DALSA Genie Nano Series

Teledyne DALSA 10GigE Genie Nano

Building on 10 Gigabit Ethernet, adapted to the GigE Vision standard, Teledyne DALSA has continued the buildout of the Genie Nano series from 1GigE, 2.5GigE, and 5GigE to now 10GigE.

10GigE Teledyne DALSA Genie Nano – courtesy Teledyne DALSA

The Genie Nano series is now extended from 1, 2.5, and 5GigE with the new 10GigE camera models M/C8200 and M/C6200, based on Teledyne e2v's 67Mp and 37Mp monochrome and color sensors. These high-resolution sensors generate a lot of image data to transfer to the host computer, but at 10GigE speeds they achieve frame rates up to:

  • 15fps – for the 67Mp cameras
  • 20fps – for the 37Mp cameras

There are four new models offered, in color and monochrome versions for each sensor variant. All are GenICam and GigE Vision 2.0 compliant, and support multi-ROI with up to 16 regions of interest. The cameras have all-metal bodies and 3-year warranties.

Further, the M/C8200, at 59 mm x 59 mm, is the industry’s smallest 67M 10GigE Vision camera, for those needing high-resolution and high-performance in a comparatively small form factor.

These 10GigE models share all the other features of the Teledyne DALSA Genie Nano Series, for ease of integration or upgrades. Such features include but are not limited to:

Power over Ethernet (PoE) – single cable solution for power, data, and control

Precision Time Protocol (PTP) synchronization of two or more cameras over GigE network, avoiding the need for hardware triggers and controllers

General Purpose Input Output (GPIO) connectors providing control flexibility

Trigger to Image Reliability (T2IR)

  • Manage exceptions in a controlled manner
  • Verify critical imaging events such as lost frames, lines, or triggers
  • Tag images for traceability
Trigger to Image Reliability (T2IR) – courtesy Teledyne DALSA

Across the wide range of Teledyne DALSA (area scan) cameras shown below, the Genie Nano 10GigE cameras are at the upper end of the already high-performance mid-range.

Genie Nano 10GigE area scan cameras in the Teledyne portfolio – courtesy Teledyne DALSA

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

New Falcon4-M2240 – 2.8Mpix at up to 1200fps!

Teledyne DALSA Falcon4

Who needs another 2.8Mpix camera? In this case it’s not about the pixel count per se, but about the frame rates and the dynamic range.

Falcon™4-CLHS – courtesy Teledyne DALSA

With more common interfaces like GigE and 5GigE, we expect frame rates from a 2.8 Mpix camera in the range of 20 – 120fps. But with the Camera Link High Speed (CLHS) interface, Teledyne DALSA's new Falcon4-M2240 camera can deliver up to 1200fps. If your application demands high-speed performance together with 2.8Mpix resolution, this camera delivers.

Besides speed, an even more remarkable feature of the Falcon4-M2240, based on the Teledyne e2v Lince 2.8 MP sensor, is a pixel well depth, or full well capacity, of ~138 [ke-]. THAT VALUE IS NOT A TYPO!! It really is ~138 [ke-]. Other sensors also thought of as high quality offer pixel well depths of only one tenth this value, so this sensor is a game changer.

Contact us for a quote

Why does pixel well depth matter? Recall the analogy of photons to raindrops, and pixel wells to buckets. With no raindrops the bucket is empty, just as with no photons quantized to electrons the pixel well is empty, and the monochrome pixel would correspond to 0, or full black. When the bucket, or pixel well, becomes exactly full with the last raindrop (electron) it can hold, it has reached its full well capacity – the pixel value would be fully saturated at white (for a monochrome sensor).

The greater the full well capacity, the wider the range of intensity values each pixel can express before charge overflows, and the camera designer calibrates accordingly to the sensor's capabilities. Sensors with higher full well capacity are desirable, since they capture more of the nuances of the imaging target, which in turn gives your software maximum image features to identify.
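
To put a number on it: dynamic range is commonly estimated as the ratio of full well capacity to read noise. The read-noise figure below is an illustrative assumption, not a Falcon4 specification.

```python
# Why ~138 ke- matters: every 10x in full well adds ~20 dB of headroom,
# assuming read noise stays constant (the 10 e- figure is illustrative).
import math

def dynamic_range_db(full_well_e, read_noise_e):
    return 20 * math.log10(full_well_e / read_noise_e)

print(round(dynamic_range_db(138_000, 10), 1))  # ~82.8 dB
print(round(dynamic_range_db(13_800, 10), 1))   # ~62.8 dB for a 1/10th-depth well
```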

Falcon4 cameras offer highest performance – courtesy Teledyne DALSA

This newest member of the Falcon4 family joins siblings with sensors offering 11, 37, and 67 Mpix respectively. The Falcon4 family continues the success of the Falcon2 family, and all share many common features, including:

  • CMOS global shutter
  • High dynamic range
  • 1000x anti-blooming
  • M42 to M95 optics mount
  • Camera Link or Camera Link HS interface
Falcon family members share many features

Even before the new firmware update (V1.02), Falcon4 cameras already offered:

  • Multiple triggering options
  • Multiple exposure control options
  • In sensor binning
  • Gain control
  • In camera Look-up-table (LUT)
  • Pixel correction
  • … and more

Now with Firmware 1.02 the Falcon4 family gets these additional features:

  • Multi-ROI
  • ROI position change by sequencer cycling
  • Digital gain change by sequencer cycling
  • Exposure change by sequencer cycling
  • Sequencer cycling of output pulse
  • Metadata

Multi-ROI

Higher FPS by sending only ROIs needed – courtesy Teledyne DALSA

Region of Interest (ROI) capabilities are compelling when an application has defined regions within a larger field that can be read out, skipping the unnecessary regions, thereby achieving much higher framerates than transferring the full-resolution image from camera to host. It's like having a number of smaller-sensor cameras, each pointed at its own region, but without the complexity of managing multiple cameras. As shown in the image above, the composite-image frame rates are equivalent to the single-ROI speed gains one might have known on other cameras; see the back-of-envelope estimate below.
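
Here is that estimate, assuming frame rate is limited by how many rows must be read out and transferred; real gains depend on the sensor's readout architecture, and all numbers are illustrative.

```python
# If transfer scales with rows sent, three small ROIs instead of the full
# frame raise the achievable rate proportionally. Numbers are illustrative.
full_fps, rows_full = 15, 8192        # e.g. a 67M-class camera at full frame
roi_rows = [512, 512, 1024]           # three regions of interest
print(round(full_fps * rows_full / sum(roi_rows)))  # ~60 fps
```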


Sequencer cycling of ROI position:

Each trigger changes ROI position – courtesy Teledyne DALSA

Cycling the ROI position for successive images might not seem to have obvious benefits – but what if the host computer could process image 1, while the camera acquires and begins transmitting image 2, and so forth? Overall throughput for the system rises – efficiency gains!


Sequencer cycling of output pulse:

Courtesy Teledyne DALSA

For certain applications, it can be essential to take 2 or more exposures of the same field of view, each under different lighting conditions. Under natural light, one might take a short, medium, and long exposure, to hedge on which duration works best, let the camera or object move to the next position, and let the software decide afterwards. Or under controlled lighting, one might image once with white or colored light, then again with an NIR wavelength, knowing that each exposure condition reveals different features relevant to the application.


Metadata:

Metadata structure – courtesy Teledyne DALSA

Metadata may not sound very exciting, and the visuals aren't that compelling. But sending data along for the ride with each image may be critical for quality-control archiving, application analysis and optimization, scheduled maintenance planning, or other reasons of your own choosing. For example, it may be valuable to know at what shutter or gain setting an image was acquired; or to have a timestamp; or to know the device ID of the camera the image came from.


The Falcon2 and Falcon4 cameras are designed for use in industrial inspection, robotics, medical and scientific imaging, as well as a wide variety of other demanding automated imaging and machine vision applications requiring ultra-high-resolution images.

Representative application fields:

Applications for 67MP Genie Nano – courtesy Teledyne DALSA

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Learn how an Allied Vision Mako camera can control your LED light source

camera as controller

In this article we discuss when and why one might want to strobe a light instead of using continuous lighting. While strobing traditionally required a dedicated controller, CCS and AVT have published an Application Note showing how the Allied Vision Mako camera can serve as the controller!

While LED lights are often used for continuous lighting, since that’s an easy mode of deployment, sometimes an application is best served with a well-timed strobe effect. This might be for one or more of the following reasons:

  • to "freeze motion" via light timing rather than shutter control alone
  • to avoid the heat buildup from continuously-on lights
  • to overwhelm ambient lighting
  • to maximize lamp lifetime
Effilux LED lights

Let’s suppose you’ve already decided that you require strobe lighting in your application. You’re past “whether” and on to “how to”.

Since you are moving into the realm of tight timing tolerances, it's clear that the following are going to need to be coordinated and controlled:

  • the strobe light start and stop timing, possibly including any ramp-up delays to full intensity
  • the camera shutter or exposure timing, including any signal delays to start and stop
  • possibly the physical position of real world objects or actuators or sensors detecting these

Traditionally, one used an external controller, an additional device, to control both the camera and the lighting. It's a dedicated device that can be programmed to manage the logical control signals and the appropriate power, in the sequence required. This remains a common approach today – buy the right controller and configure it all, tuning parameters through calculations and empirical testing.

Effilux pulse controller: controls up to 4 lights; output current can reach up to 1A @ 30V in continuous and 10A @ 200V in strobe mode – courtesy Effilux

Call us if you want help designing your application and choosing a controller matched to your camera and lighting requirements.

But wait! Sometimes, thanks to feature-rich lighting equipment and cameras, with the right set of input/output (I/O) connections, and corresponding firmware-supported functionality, one can achieve the necessary control – without a separate controller. That’s attractive if it can reduce the number of components one needs to purchase. Even better, it can reduce the number of manuals one has to read, the number of cables to connect, and the overall complexity of the application.

Let’s look at examples of “controller free” applications, or more accurately, cameras and lights that can effect the necessary controls – without a separate device.

Consider the following timing diagram, which shows the behavior of the Effi-Ring when used in auto-strobe mode. That doesn't mean it strobes randomly at times of its own choosing! Rather, it means that when triggered, it strobes at 300% of continuous intensity until the trigger pulse falls low again OR 2 seconds elapse, whichever comes first. Then it steps down to continuous mode at 100% intensity. This "2 seconds max" limit, far longer than most strobed applications require, is a design feature to prevent overheating.

Courtesy Allied Vision Technologies

OK, cool. So where do we obtain that nice square-wave trigger pulse? Well, one could use a controller as discussed above. But in the illustration below, where's the controller?!? All we see are the host computer, an Allied Vision Mako GigE Vision camera, an Effilux LED, a power supply, and some cabling.

Camera exposure signal controls strobe light – courtesy Allied Vision Technologies

How is this achieved without a controller? In this example, the AVT Mako camera and the Effilux light are “smart enough” to create the necessary control. While neither device is “smart” in the sense of so-called smart cameras that eliminate the host computer for certain imaging tasks, the Mako is equipped with opto-isolated general purpose input output (GPIO) connections. These GPIOs are programmable along with many other camera features such as shutter (exposure), gain, binning, and so forth. By knowing the desired relationship between start of exposure, start of lighting, and end of exposure, and the status signals generated for such events, one can configure the camera to provide the trigger pulse to the light, so that both are in perfect synchronization.
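
A hedged sketch of that configuration using Allied Vision's Vimba Python API is shown below. The SyncOut feature names follow conventions on AVT GigE cameras, but exact names and values vary by model and firmware, so treat them as assumptions and consult the Application Note for the authoritative recipe.

```python
# Hedged sketch: route the camera's "exposing" status to a GPIO line so
# the LED strobes exactly while the sensor exposes. Feature names are
# assumptions based on AVT GigE conventions; verify against your camera.
from vimba import Vimba

with Vimba.get_instance() as vimba:
    cams = vimba.get_all_cameras()
    with cams[0] as cam:
        cam.get_feature_by_name('SyncOutSelector').set('SyncOut1')
        cam.get_feature_by_name('SyncOutSource').set('Exposing')
        cam.get_feature_by_name('ExposureTimeAbs').set(2000.0)  # µs
        frame = cam.get_frame()  # the light fires in step with this exposure
```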

Note: During application implementation, it can be helpful to use an oscilloscope to monitor and tune the timing and duration of the triggers and status signals.

Whether your particular application is best served with a controller, or with a camera that doubles as a controller, depends on the application and the camera options available. 1stVision carries a wide range of Effilux LED lights in bar, ring, backlight, and dome configurations, all usable in continuous or strobe modes.

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Machine vision lights as important as sensors and optics

Lighting matters as much as or more than camera (sensor) selection and optics (lensing). A sensor and lens that are "good enough", when used with good lighting, are often all one needs. Conversely, a superior sensor and lens, with poor lighting, can underperform. Read further for clear examples of why machine vision lights are as important as sensors and optics!

Assorted white and color LED lights – courtesy of Advanced Illumination

Why is lighting so important? Contrast is essential for human vision and machine vision alike. Nighttime hiking isn't very popular – for a reason – it's not safe and it's no fun if one can't see rocks, roots, or vistas. In machine vision, for the software to interpret the image, one first has to obtain a good image. And a good image is one with maximum contrast – such that pixels corresponding to real-world coordinates are saturated, unsaturated, or in between, with the best spread of intensity achievable.

Only with contrast can one detect edges, identify features, and effectively interpret an image. Choosing a camera with a good sensor is important. So is an appropriately matched lens. But just as important is good lighting, well-aligned – to set up your application for success.

What’s the best light source? Unless you can count on the sun or ambient lighting, or have no other option, one may choose from various potential types of light:

  • Fluorescent
  • Quartz Halogen – Fiber Optics
  • LED – Light Emitting Diode
  • Metal Halide (Mercury)
  • Xenon (Strobe)
Courtesy of Advanced Illumination

By far the most popular light source is LED, as it is affordable, available in diverse wavelengths and shapes (bar lights, ring lights, etc.), stable, long-life, and checks most of the key boxes.

The other light types each have their place, but those places are more specialized. For comprehensive treatment of the topics summarized here, see “A Practical Guide to Machine Vision Lighting” in our Knowledgebase, courtesy of Advanced Illumination.

Download whitepaper

Lighting geometry and techniques: There's a tendency among newcomers to machine vision to underestimate lighting design for an application. Buying an LED and lighting up the target may fill up sensor pixel wells, but not all images are equally useful. Consider images (b) and (c) below – the bar code in (c) shows high contrast between the black bars and the white field. Image (b) is somewhere between unusable and marginally usable, with reflection obscuring portions of the target, and portions of the (should-be) white field appearing more grey than white.

Courtesy of Advanced Illumination

As shown in diagram (a) of Figure 22 above, understanding bright field vs dark field concepts, as well as the specular qualities of the surface being imaged, can lead to radically different outcomes. A little bit of lighting theory – together with some experimentation and tuning, is well worth the effort.

Now for a more complex example – below we could characterize images (a), (b), (c) and (d) as poor, marginal, good, and superior, respectively. Component cost is invariant, but the outcomes sure are different!

Courtesy of Advanced Illumination

To learn more, download the whitepaper or call us at (978) 474-0044.

Contact us

Color light – above we showed monochrome examples – black and white… and grey levels in between. Many machine vision applications are in fact best addressed in the monochrome space, with no benefit from using color. But understanding what surfaces will reflect or absorb certain wavelengths is crucial to optimizing outcomes – regardless of whether working in monochrome, color, infrared (IR), or ultraviolet (UV).

Beating the same drum throughout, it’s about maximizing contrast. Consider the color wheel shown below. The most contrast is generated by taking advantage of opposing colors on the wheel. For example, green light best suppresses red reflection.

Courtesy of Advanced Illumination

One can use actual color light sources, or white light together with well-chosen wavelength "pass" or "block" filters. This is nicely illustrated in Fig. 36 below. Take a moment to correlate the configurations used for each of images (a) – (f), relative to the color wheel above. Depending on one's application goals, sometimes there are several possible combinations of sensor, lighting, and filters to achieve the desired result.

Courtesy of Advanced Illumination

Filters can help. Consider images (a) and (b) in Fig. 63 below. The same plastic 6-pack holder is shown in both images, but only image (b) reveals stress fields that, were the product to be shipped, might cause dropped product and reduced consumer confidence in one's brand. By designing in polarizing filters, this can be the basis for a value-added application, automating quality control in a way that might not otherwise have been achievable – or not at such a low cost.

Courtesy of Advanced Illumination

For more comprehensive treatment of filter applications, see either or both Knowledgebase documents:


Powering the lights – should they be voltage-driven or current-driven? How are LEDs powered? When to strobe vs. run in continuous mode? How to integrate the light controller with the camera and software? These are all worth understanding – or worth having someone on your team, whether in-house or a trusted partner, who does.

For comprehensive treatment of the topics summarized here, see Advanced Illumination’s “A Practical Guide to Machine Vision Lighting” in our Knowledgebase:

Download whitepaper

This blog is intended to whet the appetite for interest in lighting – but it only skims the surface. Machine vision lights are as important as sensors and optics. Please download the guide linked just above to deepen your knowledge. Or if you want help with a specific application, you may draw on the experience of our sales engineers and trusted partners.

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

What can you do with 3D from Automation Technology?

Automation Technology GmbH C6 Laser Sensor

When new technologies or product offerings are introduced, it can help get the creative juices flowing to see example applications. In this case, 3D laser triangulation isn't new, and Automation Technology (AT) has more than 20 years' experience developing and supporting their products. But 1stVision has now been appointed by AT as their North American distributor – a strategic partnership for both organizations, bringing new opportunities to joint customers.

Laser Triangulation overview – courtesy Automation Technology

The short video above provides a nice overview of how laser triangulation provides the basis for 3D imaging in Automation Technology GmbH’s C6 series of 3D imagers.

With no ranking implied by the order, we highlight applications of 3D imaging using Automation Technology products in each of: weld inspection, rail tracks and train wheels, and adhesive glue beads.


Weld inspection

Weld inspection is essential for quality control, whether pro-actively for customer assurance and materials optimization or to archive against potential litigation.

Weld inspection – courtesy of Automation Technology
  • 3D Inspections provide robust, reliable, reproducible measured data largely independent of ambient light effects, reflection and the exact positioning of the part to be tested
  • High resolution, continuous inspection of height, width and volume
  • Control of shape and position of weld seams
  • Surface / substrate shine has no influence on the measurement

Optionally combine with an IR inspection system for identification of surface imperfections and geometric defects.


Rail tracks and train wheels

Drive-by 3D maintenance inspection of train wheel components and track condition:

  • Detect missing, loose, or deformed items
  • Precision to 1mm
  • Speeds up to 250km/hr
Train components and rail images – courtesy Automation Technology

Rolling 3D scan of railway tracks:

  • Measure rail condition relative to norms
  • Log image data to GPS position for maintenance scheduling and safety compliance
  • Precision to 1mm
  • Speeds up to 120km/hr

Additional rail industry applications: Tunnel wall inspection; catenary wire inspection.


Adhesive glue beads

Similar in many ways to the weld inspection segment above, automated glue bead application also seeks to document that quality standards are met, optimize materials usage, and maximize effective application rates.

Glue bead – courtesy of Automation Technology

Noteworthy characteristics of 3D inspection and control of glue bead application include:

  • Control shape and position of adhesive bead on the supporting surface
  • Inspect height, width and volume
  • Control both inner and outer contour
  • Application continuity check
  • Volumetric control of dispensing system
  • Delivers robust, reliable, reproducible measured data largely independent of ambient light effects, reflection and exact positioning of the items being tested

Automation Technology C6 3D sensor

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

New IDS XLS cameras – tiny cameras – low-price category

IDS XLS board level cameras

The smallest board-level cameras in the IDS portfolio, the uEye XLS cameras have very low power consumption and heat generation. They are ideal for embedded applications and device engineering. Sensors are available for monochrome, color, and NIR.

XLS board-level with no lens mount; with S-mount; with C-mount – courtesy of IDS

The "S" in the name means "small", as the series is a compact version of the uEye XLE series – as small as 29 x 29 x 7 mm! Each USB3 camera in the series is Vision Standard compliant, has a Micro-B connector, and offers a choice of C/CS lens mount, S-mount, or no-mount DIY.

IDS uEye XLS camera family – courtesy of IDS

Positioned in the low-price portfolio, the XLS cameras are most likely to be adopted by customers requiring high volumes for which basic – but still impressive – functions are sufficient. The XLS launch family of sensors includes the ON Semi AR0234, ON Semi AR0521, ON Semi AR0522, Sony IMX415, and Sony IMX412. These span a wide range of resolutions, framerates, and frequency responses. Each sensor appears in 3 board-level variants, with the last digit in each part number corresponding as follows: 1 = S-mount, 2 = no-mount, 4 = C/CS-mount.

| Sensor | Resolution | Framerate | Monochrome | Color | NIR |
|---|---|---|---|---|---|
| ON Semi AR0234 | 1920 x 1200 | 102 fps | U3-356(1/2/4)XLS-M | U3-356(1/2/4)XLS-C | – |
| ON Semi AR0521 | 2592 x 1944 | 48 fps | U3-368(1/2/4)XLS-M | U3-368(1/2/4)XLS-C | – |
| ON Semi AR0522 | 2592 x 1944 | 48 fps | – | – | U3-368(1/2/4)XLS-NIR |
| Sony IMX415 | 3864 x 2176 | 25 fps | U3-38J(1/2/4)XLS-M | U3-38J(1/2/4)XLS-C | – |
| Sony IMX412 | 4056 x 3040 | 18 fps | – | U3-38L(1/2/4)XLS-C | – |

XLS family spans 5 sensors covering a range of requirements
XLS dimensions, mounts, and connections – courtesy of IDS

Uses are wide-ranging, skewing towards high-volume embedded applications:

Example applications for XLS board-level cameras – courtesy of IDS

In a nutshell, these are cost-effective cameras with basic functions. The uEye XLS cameras are small, easy to integrate with IDS or industry-standard software, cost-optimized, and equipped with the fundamental functions for high-quality image evaluation.

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

How to select an industrial or machine vision camera?

How to select a camera

Why should I read about how to select an industrial camera, when I could just call 1stVision as a distributor of cameras, lenses, lighting, software, and cables, and let you recommend a solution for me?

Well yes, you could – and ultimately we believe a number of you who read this will in fact call us, as have many before. But when you take your car to the mechanic, do you just tell him “sometimes it makes a funny noise”? Or do you qualify the funny noise observation by noting at what speed it happens? When driving straight or turning in one direction? Whether it correlates to the ambient temperature or whether the vehicle is warmed up – or not?

How to select a camera

The best outcomes tend to come from partnerships where both the customer and the provider each bring their knowledge to the table – and work together to characterize the problem, the opportunity, and the solution. In our many years of expertise helping new and returning customers create machine vision solutions, the customers with the best outcomes also make the effort to dig in and understand enough about cameras and other components in order to help us help them.

So how does one in fact choose an industrial or machine vision camera?

An industrial camera is a camera, often embedded in or connected to a system, used for commercial or scientific applications. Machine vision systems are often fully automated, or at least partially automated, with long duty cycles. Applications are many, ranging from surveillance, process control, quality control, pick and place, biomedical, manufacturing, and more.

Further, the camera may be moving – or stationary, or the target might be moving – or stationary. And the wavelengths of light best-suited to achieving intended outcomes may be in the visible spectrum – the same spectrum we see – or the application may take advantage of ultraviolet (UV) or infrared (IR) characteristics.

So where to begin? First we need to characterize the application to be developed. Presumably you know or believe there’s an opportunity to add value by using machine vision to automate some process by applying computer controlled imaging to improve quality, reduce cost, innovate a product or service, reduce risk, or otherwise do something useful.

Now let's dig into each significant consideration, including resolution, sensor selection, frame rate, interface, cabling, lighting, lens selection, software, etc. Within each section we link to more technical details to help you focus on your particular application.

Resolution: This is about the level of detail one needs in the image in order to achieve success. If one just needs to detect presence or absence, a low-resolution image may be sufficient. But if one needs to measure precisely, or detect fine tolerances, one needs far more pixels correlating to the fine-grained features of the real-world details being imaged.

The same real-world test chart imaged with better resolution on the left than on the right, due to sensor characteristics and/or lens quality

A key guideline is that each minimal real-world feature to be detected should appear in a 3×3 pixel grid in the image.  So if the real-world scene is X by Y meters, and the smallest feature to be detected is A by B centimeters, assuming the lens is matched to the sensor and the scene, it’s just a math problem to determine the number of pixels required on the sensor. Read more about resolution requirements and calculations.
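
Here is that math problem as a minimal sketch; the scene and feature sizes are illustrative assumptions, not recommendations for any particular application.

```python
# The 3x3 guideline: pixels needed along one axis so the smallest
# feature spans at least 3 pixels.
def required_pixels(scene_size_m, min_feature_m, pixels_per_feature=3):
    return pixels_per_feature * scene_size_m / min_feature_m

# A 0.5 m wide scene with 1 mm features needs >= 1500 horizontal pixels,
# so a 2448-pixel-wide (5 MP class) sensor would comfortably suffice.
print(required_pixels(scene_size_m=0.5, min_feature_m=0.001))  # 1500.0
```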

Sensor selection: So the required resolution is an important determinant for sensor selection. But so is sensitivity, including concepts like quantum efficiency. Pixel size matters too, as an influencer on sensitivity, as well as determining sensor size overall. Keys to choosing the best image sensor are covered here.

image sensor

Wavelength: Sensor selection is also influenced by the wavelengths being used in the application. Let's assume you've identified the wavelength(s) for the application, which determines whether you'll need:

  • a CMOS sensor for visible light in the 400 – 700nm range
  • a UV sensor for wavelengths below 400nm
  • a Near Infrared sensor for 750 – 900nm
  • or SWIR and XSWIR to even longer wavelengths up to 2.2µm

Monochrome or color? If your application is in the visible portion of the spectrum, many first-timers to machine vision assume color is better, since it would seem to have more “information”. Sometimes that intuition is correct – when color is the distinguishing feature. But if measurement is the goal, monochrome can be more efficient and cost-effective. Read more about the monochrome vs. color sensor considerations.

Area scan vs. line scan? Area scan cameras are generally considered the all-purpose imaging solution, as they use a straightforward matrix of pixels to capture an image of an object, event, or scene. In comparison to line scan cameras, they offer easier setup and alignment. For stationary or slow-moving objects, suitable lighting together with a moderate shutter speed can produce excellent images.

In contrast to an area scan camera, in a line scan camera a single row of pixels is used to capture data very quickly. As the object moves past the camera, the complete image is pieced together in the software line-by-line and pixel-by-pixel. Line scan camera systems are the recognized standard for high-speed processing of fast-moving “continuous” objects such as in web inspection of paper, plastic film, and related applications. An overview of area scan vs. line scan.

Frame rate: If your object is stationary, such as a microscope slide, frame rate may be of little importance to you, as long as the entire image can be transferred from the camera to the computer before the next image needs to be acquired. But if the camera is moving (drive-by mapping, or camera-on-robot-arm) or the target is moving (a fast-moving conveyor belt or a surveillance application), one must capture each image fast enough to avoid pixel blur – and transfer the images fast enough to keep up. How to calculate exposure time?
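
One common rule of thumb caps exposure so the target moves less than about one pixel during the exposure; the sketch below applies it with illustrative numbers.

```python
# Max exposure so motion blur stays under ~1 pixel. Values are illustrative.
def max_exposure_s(speed_m_s, fov_m, sensor_px, max_blur_px=1.0):
    pixel_footprint_m = fov_m / sensor_px          # real-world size of one pixel
    return max_blur_px * pixel_footprint_m / speed_m_s

# Conveyor at 0.5 m/s, 0.4 m field of view on a 2048-pixel axis:
# each pixel covers ~0.195 mm, so keep exposure under ~390 µs.
print(max_exposure_s(0.5, 0.4, 2048))  # ~3.9e-4 s
```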

Interfaces: By what interface should the camera and computer communicate? USB, GigE, Camera Link, or CoaXPress? Each has merits but vary by throughput capacity, cable lengths permitted, and cost. It’s a given that the interface has to be fast enough to keep up with the volume of image data coming from the camera, relative to the software’s capability to process the data. One must also consider whether it’s a single-camera application, or one in which two or more cameras will be integrated, and the corresponding interface considerations.
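
A quick sanity check is to compare the camera's raw data rate against approximate usable interface throughput; the throughput figures below are rough assumptions for comparison only, not standard-guaranteed numbers.

```python
# Does the camera's data rate fit the interface?
def data_rate_MBps(width, height, bytes_per_pixel, fps):
    return width * height * bytes_per_pixel * fps / 1e6

interfaces_MBps = {'USB3': 400, '1GigE': 115, '5GigE': 575, '10GigE': 1100}
rate = data_rate_MBps(2448, 2048, 1, 75)   # 5 MP mono, 8-bit, @ 75 fps
print(f"{rate:.0f} MB/s", {k: rate <= v for k, v in interfaces_MBps.items()})
```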

Cabling: So you’ve identified the interface. The camera and computer budget is set. Can you save a bit of cost by sourcing the cables at Amazon or eBay, compared to the robust ones offered by the camera distributor? Sometimes you can! Sometimes not so much.

Lighting: While not part of the camera per se, for the sensor you're now favoring in a particular camera model, can you get enough photons into the pixel wells to achieve the necessary contrast to discern target from background? While sensor selection is paramount, lighting and lensing are just a half-step behind in terms of bearing on application outcomes. Whether steady LED light or strobed, bright field or dark field, visible or IR or UV, lighting matters. It's worth understanding.

Filters: Twinned closely with the topic of lighting, well-chosen filters can “condition” the light to polarize it, block or pass certain frequencies, and can generally add significant value. Whether in monochrome, color, or non-visible portions of the spectrum, filters can pay for themselves many times over in improving application outcomes.

Lens selection: Depending on resolution requirements, sensors come in various sizes. While always rectangular in shape, they have differing pixel densities, and differing overall dimensions. One needs to choose a lens that “covers” the light-sensitive sections of the sensor, so be sure to understand lens optical format. Not only does the lens have to be the right size, one also has to pay attention to quality. There’s no need to over-engineer and put a premium lens into a low-resolution application, but you sure don’t want to put a mediocre lens into a demanding application. The Modulation Transfer Function, or MTF, is a good characterization of lens performance, and a great way to compare candidate lenses.

Software: In machine vision systems, it’s the software that interprets the image and takes action, whether that be accept/reject a part, actuate a servo motor, continue filling a bottle or vial, log a quality control image, etc. Most camera providers offer complementary software development kits (SDKs), for those who want to code camera control and image interpretation. Or there are vendor-neutral SDKs and machine vision libraries – these aren’t quite plug-and-play – yet – but they often just require limited parameterization to achieve powerful camera configuration and image processing.
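
Whatever the vendor, SDK code tends to follow the same acquire-process-act shape. Here is a minimal runnable sketch of that pattern – every name below is a hypothetical stand-in, not any particular vendor’s API:

    # Hypothetical stand-in for a vendor SDK; real APIs differ in names and details
    class HypotheticalCamera:
        def set(self, feature, value):     # GenICam-style feature access
            print(f"{feature} = {value}")
        def get_frame(self):               # a real SDK would block until an image arrives
            return [[0] * 640 for _ in range(480)]   # dummy image buffer

    def inspect(frame):                    # user-supplied image interpretation
        return sum(map(sum, frame)) == 0   # trivial stand-in accept criterion

    cam = HypotheticalCamera()
    cam.set("ExposureTime", 500.0)         # microseconds
    frame = cam.get_frame()
    print("ACCEPT" if inspect(frame) else "REJECT")   # act on the result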

Accessories: How about camera mounts? Wash-down enclosures for food-processing or dusty environments? If used outdoors, do you need heating or cooling, or condensation management? Consider all aspects for a full solution.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Release of Goldeye G/CL-008 XSWIR Cameras

The recently released Goldeye G/CL-008 XSWIR cameras, with QVGA-resolution extended-range InGaAs sensors, offer two sensitivity options: up to 1.9 µm or up to 2.2 µm.

Goldeye SWIR camera
From SWIR into Extended SWIR. Image courtesy of Allied Vision Technologies.

The Extended Range (ER) InGaAs sensor technology integrated into the new Goldeye XSWIR models provides high imaging performance beyond 1.7 µm.

The cut-off wavelength can be shifted to higher values by increasing the ratio of indium to gallium in the InGaAs compound. A given sensor can only detect light below its cut-off wavelength. The Goldeye XSWIR cameras use four different sensors, with VGA and QVGA resolutions and cut-off wavelengths of 1.9 µm or 2.2 µm, that provide very high peak quantum efficiencies of > 75%.
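
The underlying physics is simple to sketch: a photon is detected only if its energy exceeds the sensor’s bandgap, and the cut-off wavelength relates to bandgap as λc [µm] ≈ 1.24 / Eg [eV]. The bandgap values below are illustrative, chosen to reproduce the stated cut-offs:

    # lambda_c [um] ~= 1.24 / Eg [eV]; bandgap values here are illustrative
    for eg_eV in (0.73, 0.65, 0.56):
        cutoff_um = 1.24 / eg_eV
        print(f"Eg = {eg_eV} eV  ->  cut-off ~ {cutoff_um:.1f} um")
    # Raising the In:Ga ratio narrows the bandgap, pushing the cut-off from
    # ~1.7 um (standard InGaAs) toward 1.9 um and 2.2 um.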

Indium Gallium mix affects cutoff value
Indium : Gallium ratio determines cut-off wavelength; image courtesy of Allied Vision

The new Goldeye XSWIR models are:

Table showing 4 sensor options for Goldeye 008 XSWIR; courtesy of Allied Vision
Contact us for a quote

In these cameras the sensors are equipped with a dual-stage thermo-electric cooler (TEC2) to cool the sensor 60 K below the housing temperature. Also included are image correction capabilities like Non-Uniformity Correction (NUC) and 5×5 Defect Pixel Correction (DPC) to capture high-quality SWIR images beyond 1.7 µm.

Goldeye XSWIR cameras are available with two sensor options. The 1.9 µm version detects light from 1,100 nm to 1,900 nm, and the 2.2 µm version from 1,200 nm to 2,200 nm.

Response curves for two respective sensors; images courtesy of Allied Vision

Industrial-grade solution at an attractive price: Other sensor technologies available to detect light beyond 1,700 nm, based on materials like HgCdTe (MCT), Type-II Superlattice (T2SL), or Colloidal Quantum Dots (CQD), tend to be very expensive. The Goldeye XSWIR Extended Range (ER) InGaAs sensors have several advantages, including cost-effective sensor cooling via TEC, high quantum efficiency, and high pixel operability (> 98.5%).

MCT or T2SL sensor-based SWIR cameras typically require very strong sensor cooling using Stirling coolers or TEC3+ elements. By comparison, the Goldeye XSWIR cameras are available at a relatively low price.

The easy integration and operation of ER InGaAs sensors make them attractive for industrial applications, including but not limited to:

  • Laser beam analysis
  • Spectral imaging in industries like recycling, mining, food & beverages, or agriculture
  • Medical imaging: e.g. tissue imaging due to deeper penetration of longer wavelengths
  • Free Space Optics Communication
  • Surveillance

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Note: All images courtesy of Allied Vision Technologies.

Which Z-Trak 3D camera is best for my application?

So you want to do an in-line measurement, inspection, identification, and/or guidance application in automotive, electronics, semiconductor, or factory automation. Whether it’s a new application or time for an upgrade, you know that Teledyne DALSA’s Z-Trak 3D Laser Profiler balances high performance with a low total cost of ownership.

In this 2nd Edition release we update the Z-Trak family overview with the addition of the new LP2C 4k series, bringing even more options along the price:performance spectrum. From low-cost and good-enough, through higher resolution and faster speeds, all the way to the highest resolution, there is a range of Z-Trak profilers to choose from.

Z-Trak 3D Laser Profiler

The first generation Z-Trak product, the LP1, is the cornerstone of the expanded Z-Trak family, now augmented with the Z-Trak2 group (V-series and the S-series), plus the LP2C 4k series. Each product brings specific value propositions – here we aim to help you navigate among the options.

Respecting the reader’s time, key distinctions among the series are:

  • LP1 is the most economical 3D profiler on the market – contact us for pricing.
  • Z-Trak2 is one of the fastest 3D profilers on the market – with speeds to 45kHz.
  • LP2C 4k provides 4,096 profiles per second at resolution down to 3.5 microns.

To guide you effectively to the product best suited for your application, we’ve prepared the following table, and encourage you to fill in the blanks, either on a printout of the page or via copy-paste into a spreadsheet (for your own planning or to share with us as co-planners).

3D application key attributes

Compare your application’s key attributes from above with some of the feature capacities of the three Z-Trak product families below, as a first-pass at determining fit:

Z-Trak series overview

Unless the fit is obvious – and often it is not – we invite you to send us your application requirements. We love mapping customer requirements, so please send us your application details via the form on this contact link; or you can email us at info@1stvision.com with your answers to the 3D application “key questions” above.

In addition to the parameter-based approach to choosing the ideal Z-Trak model, we also offer an empirical approach – send in your samples. We have a lab set up to inspect customer samples with two or more candidate configurations. System outputs can then be examined for efficacy relative to your performance requirements, to determine how much is enough – without over-engineering.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Note: This is the 2nd edition of a blog originally published December 16, 2022, now augmented with the Z-Trak LP2C 4k series.

How to read an MTF lens curve

We recently published a TechBrief “What is MTF?” to our Knowledge Base. It provides an overview of the Modulation Transfer Function, also called the Optical Transfer Function, and why MTF provides an important measure of lens performance. That’s particularly useful when comparing lenses from different manufacturers – or even lenses from different product families by the same manufacturer. With that TechBrief as the appetizer course, let’s dig in a little deeper and look at how to read an MTF lens curve. They can look a little intimidating at first glance, but we’ll walk you through it and take the mystery out of it.

Figure A. Both images created with lenses nominally for similar pixel sizes and resolution – which would you rather have in your application?

Test charts cluster alternating black and white stripes, or “line pairs”, from coarse to fine gradations, varying the “spatial frequency”, measured in line pairs / mm, in object space. The lens, besides mapping object space onto the much smaller sensor space, must get the geometry right in terms of correlating each x,y point to the corresponding position on the sensor, to the best of the lens’ resolving capacity. Furthermore, one wants at least two pixels, preferably 3 or more, to span any “contrast edge” of a feature that must be identified.

So one has to know the field of view (FOV), the sensor size, the pixel pitch, the feature characteristics, and the imaging goals, to determine optical requirements. For a comprehensive example please see our article “Imaging Basics: How to Calculate Resolution for Machine Vision“.
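
As a minimal sketch of that arithmetic, with illustrative numbers: the object-space pixel size follows from the FOV and pixel count, and the sensor-side Nyquist limit follows from the pixel pitch.

    # Illustrative numbers only
    fov_mm = 100.0          # horizontal field of view on the target
    pixels_across = 2048    # sensor horizontal resolution
    pixel_pitch_um = 3.45   # pixel pitch on the sensor

    obj_pixel_mm = fov_mm / pixels_across         # ~0.049 mm per pixel on target
    nyquist_lp_mm = 1000 / (2 * pixel_pitch_um)   # ~145 lp/mm at the sensor
    min_feature_mm = 3 * obj_pixel_mm             # using 3 pixels per feature
    print(obj_pixel_mm, nyquist_lp_mm, min_feature_mm)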

Figure B. Top to bottom: Test pattern, lens, image from camera sensor, brightness distribution, MTF curve

Unpacking Modulation Transfer Function, let’s recall that “transfer” is about getting photons presented at the front of the lens, coming from some real world object, through glass lens elements and focused onto a sensor consisting of a pixel array inside a camera. In addition to that nifty optical wizardry, we often ask lens designers and manufacturers to provide lens adjustments for aperture and variable distance focus, and to make the product light weight and affordable while keeping performance high. “Any other wishes?” one can practically hear the lens designer asking sarcastically before embarking on product design.

So as with any complex system, when transferring from one medium to another, there’s going to be some inherent lossiness. The lens designer’s goal, while working within the constraints and goals mentioned above, is to achieve the best possible performance across the range of optical and mechanical parameters the user may ask of the lens in the field.

Consider Figure B1 below, taken from the comprehensive Figure B. This shows the image generated from the camera sensor – in effect the optical transfer of the real-world scene through the lens, projected onto the pixel array of the sensor. The widely-spaced black stripes – and the equally-spaced white gaps – look crisp, with seemingly perfect contrast, as desired.

Figure B1: Image of progressively more line pairs per millimeter (lp/mm)

But for the more narrowly-spaced patterns, light from the white zones bleeds into the black zones and substantially lowers the image contrast. Most real world objects, if imaged in black and white, would have shades of gray. But a test chart, at any point position, is either fully black or fully white. So any pixel value recorded that isn’t full black or full white represents some degradation in contrast introduced by the lens.

The MTF graph is a visual representation of the lens’ ability to maintain contrast across a large collection of sampled line pairs of varying widths.
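
In symbols: the modulation at a given spatial frequency is M = (Imax − Imin) / (Imax + Imin), where Imax and Imin are the brightest and darkest intensities recorded across a line pair. Since the test chart itself is fully black and fully white (M = 1), the modulation measured in the image at each spatial frequency is, in effect, the MTF value plotted on the curve.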

Let’s look at Figure B2, an example MTF curve:

Figure B2: Example of MTF graph
  • the horizontal axis denotes spatial frequency in line pairs per millimeter; so near the origin on the left, the line pairs are widely spaced, and progressively become more narrowly spaced to the right
  • the vertical axis denotes the modulation transfer function (MTF), with high values correlating to high contrast (full black or full white at any point), and low values representing undesirable gray values that deviate from full black or full white

The graph in Figure B2 shows only lens-center MTF, for basic discussion; it does not show performance at the edges, nor take into account f-number and working distance. MTF, and optics more generally, are among the more challenging aspects of machine vision, and this blog is just a primer on the topic.

Click to contact
Give us some brief idea of your application or your questions –
we will contact you to assist

In very general terms, we’d like a lens’ MTF plot to be fairly close to the Diffraction Limit – the theoretical best-case achievable in terms of the physics of diffraction. But lens design being the multivariate optimization challenge that it is, achieving near perfection in performance may mean lots of glass elements, taking up space, adding weight, cost, and engineering complexity. So a real-world lens is typically a compromise on one or more variables, while still aiming to achieve performance that delivers good results.

Visualizing correlation between MTF plot and resultant image – MORITEX North America

How good is good enough? When comparing two lenses, likely in different price tiers that reflect the engineering and manufacturing complexity in the respective products, should one necessarily choose the higher performing lens? Often, yes, if the application is challenging and one needs the best possible sensor, lighting and lensing to achieve success.

But sometimes good enough is good enough. It depends. For example, do you “just” need to detect the presence of a hole, or do you need to accurately measure the size of the hole? The system requirements for the two options are very different, and may impact choice of sensor, camera, lens, lighting, and software – but almost certainly sensor and lensing. Any lens can find the hole, but a lens capable of high contrast is needed for accurate measurement.

Here’s one general rule of thumb: the smaller the pixel size, the better the optics need to be to obtain equivalent resolution. As sensor technology evolves, manufacturers are able to achieve higher pixel density in the same area. Just a few years ago the leap from a VGA sensor to 1 or 5 MegaPixels (MP) was considered remarkable. Now we have 20 and 50 MP sensors. That provides fantastic options to systems-builders, creating single-camera solutions where multiple cameras might have been needed previously. But it means one can’t be careless with the optical planning – in order to achieve optimal outcomes.

Not all lens manufacturers express their MTF charts identically, and testing methods vary somewhat. Also, note that many provide two or even three lens families for each category of lenses, in order to give customers performance and pricing tiers that scale to different solution requirements. To see an MTF chart for a specific lens, click first on a lens manufacturer page such as Moritex, then on a lens family page, then on a specific lens. Then find the datasheet link, and scroll within the datasheet PDF to find the MTF curves and other performance details.

Contact us for a quote

Besides the theoretical approach to reading specifications prior to ordering a lens, sometimes it can be arranged to send samples to our lab for us to take sample images for you. Or it may be possible to test-drive a demo lens at your facility under your conditions. In any case, let us help you with your component selection – it’s what we do.

Finally, remember that some universities offer entire degree programs or specializations in optics, and that an advanced treatment of MTF graph interpretation could easily fill a day-long workshop or more – assuming attendees met certain prerequisites. So this short blog doesn’t claim to provide the advanced course. But hopefully it boosts the reader’s confidence to look at MTF plots and usefully interpret lens performance characteristics.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Acknowledgement / Credits: Special thanks to MORITEX North America for permission to include selected graphics in this blog. We’re proud to represent their range of lenses in our product offerings.

Effilux LED bar lights for machine vision – adjustable and modular!

Various LED bar configurations

Effective machine vision outcomes depend upon getting a good image. A well-chosen sensor and camera are a good start. So is a suitable lens. Just as important is lighting, since one needs photons coming from the object being imaged to pass through the lens and generate charges in the sensor, in order to create the digital image one can then process in software. Elsewhere we cover the full range of components to consider, but here we’ll focus on lighting.

While some applications are sufficiently well-lit without augmentation, many machine vision solutions are only achieved by using lighting matched to the sensor, lens, and object being imaged. This may be white light – which comes in various “temperatures”; but may also be red, blue, ultra-violet (UV), infra-red (IR), or hyper-spectral, for example.

LED bar lights are a particularly common choice, able to provide bright field or dark field illumination, according to how they are deployed. The illustrations below show several different scenarios.

Example uses of LED bar lights

LED light bars conventionally had to be factory assembled for specific customer requirements, and could not be re-configured in the field. The EFFI-Flex LED bar breaks free from many of those constraints. Available in various lengths, many features can be field-adapted by the user, including, for example:

  • Color of light emitted
  • Emitting angle
  • Optional polarizer
  • Built-in controller – continuous vs. strobed option
  • Diffuser window opacity: Transparent, Semi-diffusive, Opaline
EFFI-Flex user-configurable LED bar
Contact us for a quote

While the EFFI-Flex offers maximum configurability, sister products like the EFFI-Flex-CPT and EFFI-Flex-IP69K offer IP67 and IP69 protection, respectively, ideal for environments requiring more ruggedized or washdown components.

SWIR LED bar, backlight, and ringlight

Do you have an application you need tested with lights? Contact us and we can get your parts in the lab, test them, and send images back. If your materials can’t be shipped because they are perishable foodstuffs, hazmat items, or such, contact us anyway and we’ll figure out how to source the items or bring lights to your facility.

Test and optimize lighting with customer materials

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Do I need cables designed for machine vision?

Do I really need cables designed specifically for machine vision? As a distributor of machine vision cameras, lenses, camera systems, cables, and accessories, we hear this question many times a day. Why does your GigE or USB3 cable cost so much?  I can just buy a cable online from Amazon, Ebay, etc. for $5 when yours costs $25 or more!

The answer is: You can…  sometimes… but it depends upon many things, and how critical those things are to your application. 

Here are 5 key variables to consider in camera cable selection:

  1. Distance from camera to computer
  2. Data rate at which the camera is transmitting
  3. Importance of application reliability
  4. Structural integrity of the connection at camera and computer
  5. Total cost of your process and/or downtime

From many years of diagnosing industrial imaging problems: after incorrect software setup, BAD CABLES ARE NEXT ON THE LIST of “MY CAMERA DOESN’T WORK” problems! (Inadequate lighting or sub-optimal lensing also come up, but those are topics for another day.)

Distance, the killing factor! If you were to look at a Bode plot of the signal traveling from the camera to the computer, you would see dramatic attenuation of the signal versus distance, and also versus data rate. In fact, at the distance limits, you might wonder how it works at all, the signal is so low!

GigE is rated for 100 meters; however, the signal degrades quite a bit, so cable quality and data rate will be the determining factors. USB3 has no formal cable-length specification, and it is difficult to find consumer-grade cables longer than 2 meters. In fact, we have experienced poor results with consumer cables longer than 1 meter!

Click here for all USB3 Industrial camera cables

Click here for all GigE Industrial camera cables

What are the differences between ‘industrial’ and ‘consumer’ cables?

8 differences are listed below: 

Assorted machine vision cables
  1. Industrial cables are tested to a specification, for each cable. There are no batch-to-batch differences.
  2. That specification usually meets organizational requirements such as IEEE or Automated Imaging Association (AIA) standards
  3. Industrial cables give you consistency from a single manufacturer (when buying online, you are not always sure you are getting the same cable)
  4. Industrial cables have over-molded connectors
  5. Industrial cables have screw locks on the ends
  6. Industrial cables are usually made with larger-gauge wire
  7. Industrial cables typically specify a bend radius
  8. Industrial cables are built to flex requirements (bend cycles they can meet)

When should we consider using an “Industrial cable”?  Here are a few examples to consider:

Example 1: A research lab, using a microscope 1 meter from the computer, running low data rates, not automated.

Distance is small, data rate is low, the chance of someone pulling on the cable is low, and if the data doesn’t get delivered, you can re-acquire the image. There is no big need for a special cable, and you can buy one off the internet.

Example 1a: Let’s change some of these parameters. Now assume you are not in a lab; instead the microscope is in an OEM instrument being shipped all over the world.

If the system fails because you went with an unspecified cable, what is the cost of sending someone to fix this system 3,000 miles away? In this situation, even though the distance is small and the data rate is low, the consequences of a cable failure are very high!

Example 2: GigE cameras running close to full bandwidth, where the distance is not too great (< 10 or 20 meters).

If you don’t need screw-lock connectors, you can probably get by with higher-quality consumer cables. At distances greater than 20 meters, if you care about system reliability, you will definitely want industrial cables.

Example 3: Two to four GigE cameras running close to full bandwidth in a system.

If you need system repeatability, or anything close to determinism, you will need industrial cables. On the other hand, if your application is not sensitive to packet re-sends, a consumer cable should work at under 20 meters.

Example 4: GigE cameras in an instrument. Regular GigE cables are held in the RJ45 jack only by a plastic tab.

If your product is being shipped, you can’t rely on this not to break. You want an industrial cable with screw locks.

Example 5: GigE cameras in a lab.

Save the money and use a consumer cable!

Key takeaways:

  • If you are running USB3 cables at distances of more than 2 meters, DO NOT use consumer cables.
  • If you are running multiple cameras at high speeds, DO NOT use consumer cables.
  • Obviously, if you need to make sure your cables stay connected, and need screw locks on the connectors, you cannot use consumer cables.
  • If you are running at low speed over a short distance, and you can afford to re-transmit your data, consumer cables might be just fine.
Contact us for a quote

Below are additional remarks provided by our cable manufacturing partner Components Express Inc., that help to support the above conclusions. It’s good reading for those who want to understand the value-added processes used to produce specialized cables.

  • Families of connectors for vision systems include right-angle options to address commonly found space constraints and/or avoid overstressing the cable strain relief. Generic cables are typically “straight on” only.
  • The test process for machine vision cables goes beyond routine hi-pot testing to include the electrical testing that ensures conformance with the latest and most stringent machine vision performance standards. Machine vision configurators – using customer application parameters – prevent mis-applying a cable that won’t meet the performance requirements.
Screenshot for GigE cable configurator; also available for USB and other types
  • Machine vision cable families cater to de facto standards. For example, pin-outs vary among ring light makers for the same 5-pin connector, so it is more labor- and cost-intensive to support the permutations of pin-outs across diverse camera and lighting manufacturers.
  • The IP67 versions of standard electrical interfaces can vary by camera body. Machine vision cables have different part numbers for specific camera bodies. For example, a screw-lock Ethernet cable might damage another maker’s camera body if the mold-to-connector nose distance varies.
  • Machine vision Y-cables are a unique breed, typically bought in small quantities. Pin counts are higher and the semi-standard interfaces differ.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

SVS-Vistek HR + SHR cameras: high resolution and speed

SVS-Vistek HR and SHR camera series

Some applications demand high resolution, from 16 MP up to 151 MP. Thanks to dual GigE and 10GigE interfaces, Camera Link, and CoaXPress, getting image data from the camera to the computer can be accomplished at speeds matched to application requirements, using the HR and SHR camera series from SVS-Vistek.

What kind of applications require such resolution? Detail-demanding manufacturing inspection, geo mapping, life science, film industry and other applications require, or benefit from, a high resolution image delivered from the camera directly to the PC. Prior to the convergence of high-resolution sensors and high-speed interfaces, one might have needed multiple smaller-resolution cameras to capture the entire field of view – but with complex optical alignment and image-stitching (in software) to piece together the desired image.

The HR series offers resolutions from 16 – 120MP. The SHR series ranges from 47 – 151MP. While every machine vision camera offers various features designed to enhance ease-of-use or applications outcomes, here are some particular features we highlight from one or both of the HR or SHR series:

  • A minimum of 128 MB internal image memory with burst mode – capture sequences rapidly on the camera and transfer them to the computer before the next event (see the sizing sketch after this list)
  • LED controller for continuous & strobe built into camera – avoids the need to purchase and integrate a separate controller
  • Programmable logic functions, sequencers, and timers – critical for certain applications where programmed capture sequences can be pre-loaded on the camera
  • RS-232 serial data to control exposure, lights or lenses
  • Long exposure times up to 60 seconds (camera model dependent) – useful for low-light applications such as those sometimes found in life sciences or astrophysics
  • Camera Link, CoaXPress and 10GigE interface options (varies by model)
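
As a rough sizing sketch for the burst-memory bullet above – the camera resolution and pixel format here are hypothetical, for illustration only:

    # How many frames fit in a 128 MB burst buffer? (illustrative numbers)
    width, height = 5120, 5120     # hypothetical high-resolution model
    bytes_per_pixel = 1            # 8-bit monochrome
    buffer_MB = 128

    frame_MB = width * height * bytes_per_pixel / 1e6   # ~26 MB per frame
    burst_frames = int(buffer_MB // frame_MB)           # ~4 frames per burst
    print(frame_MB, burst_frames)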

For pricing on the HR / SHR series, follow the family links below to the series table, then click on “Get quote” for a specific model of interest. Or just call us at 978-474-0044 to discuss your application and let us guide you to the best fit.

SVS-Vistek HR series

SVS-Vistek SHR series

The HR series uses a range of CCD and CMOS sensors from Canon, Sony, and ON Semi. The SHR series uses both CCDs from ON Semi and CMOS sensors from Sony. The same sensor choices and feature sets are offered across several popular machine vision interfaces, permitting users to tailor speed to specific application requirements. SVS-Vistek’s precision engineering and manufacturing mounts these high-resolution sensors accurately, giving users distortion-free, high-quality, high-content images.

SVS-Vistek shr661 – 127 megapixel camera

At the time of writing, note the newest member of the SHR series, the shr661. At 127 megapixels, this CMOS camera delivers remarkably high resolution with a global shutter. Its Sony Pregius IMX661 sensor’s back-illuminated technology enables very high light sensitivity and above-average noise behavior, yielding image quality with which even the finest structures can be resolved. The shr661 is one of the most powerful industrial cameras on the market.

Those familiar with high-resolution sensors may know about dual- and quad-tap sensors, whereby higher frame rates are achieved with electronics permitting two or more sections of the sensor’s pixel values to be read out in parallel. A traditional challenge of that approach has been for camera manufacturers to match or balance the taps so that no discernible boundary line is visible in the composite stitched image. SVS-Vistek is an industry leader with its proprietary algorithm for multi-tap image stitching.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

45 Megapixel Computar MPT lens series

Computar MPT series 45 MP lenses

The Computar MPT Series is a compact 45 MP, 1.4″ C-mount lens series engineered to optimize the capabilities of the latest industrial CMOS sensors. These ultra-high-resolution machine vision lenses offer ultra-low distortion in a compact design, and are available in fixed focal lengths of 12mm, 16mm, 25mm, 35mm, and 50mm – a C-mount series for large-format sensors.

Computar MPT 45MP C-Mount series

Designed for sensors up to 1.4″ with densely packed pixels, these compact C-mount lenses enable a more compact overall design and lower optics costs than a large-format lens, while still delivering high performance. High-level aberration correction and centering/alignment technology deliver extraordinary performance from the image center to the corners, even with tiny pixels.

Since the lenses may also be used with popular 1.2″ sensors, one achieves impressive Modulation Transfer Function (MTF) outcomes in such a configuration.

Screenshot from video below highlights MTF performance across working distances

Call us at 978-474-0044 for expert assistance. We can tell you more about these lenses, and help you determine if they are best for your application.

The Computar MPT series lenses deliver superior performance at any working distance, thanks to their floating optical design. This makes them ideal for industrial drones, sports analytics, wide-area surveillance, and other vertical markets.

Vision Systems Design named the Computar MPT lens series a Silver Honoree in its Innovators Awards.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

SONY IMX548 now in Alvium CSI-2, USB3, 5GigE cameras

AVT Alvium housed, board-level, and open options

Allied Vision has integrated the IMX548 into the Alvium family with the Alvium C/U/G5-511 camera models, where the prefix designator:

  • C is CSI-2, the Camera Serial Interface, popular for embedded systems
  • U is USB3, the widely available interface between computers and electronic devices
  • G5 is 5GigE, with up to 100 meter cable runs and 5x the throughput of GigE
AVT Alvium housed, board-level, and open options
AVT Alvium cameras are available in housed, board-level, and open versions

Sony’s IMX548 is a member of the 4th-generation Pregius S sensor family, providing a global shutter for active-pixel CMOS sensors with a low-noise structure, yielding high-quality images. See our illustrated blog for an overview of Pregius-S‘ back-illuminated sensor structure and its benefits.

So why the IMX548 in particular? Readers who follow the sensor market closely may note that the IMX547 looks the same in terms of pixel structure and resolution. Correct! SONY found they could adapt the sensor to a smaller and more affordable package, passing those savings along to the camera manufacturer, and in turn to the customer. As 5.1MP resolution is the sweet spot for many applications, Allied Vision picked up on SONY’s cues and integrated the IMX548 into the Alvium family.

There are nuanced timing differences between the IMX547 and IMX548. For new design-ins, this is of no consequence. If you previously used the IMX547, please check with our sales engineers to see if switching to the IMX548 requires any adjustments – or if it’s simply plug-and-play.

As shown in the photo above, Alvium cameras are very compact, and the same sensor and features are offered in housed, board-level, and open configurations. AVT Alvium is one of the most flexible, compact, and capable camera families in the current market.

Concurrent with the release of this new sensor in the Alvium camera family, Allied Vision has also released Alvium Camera Firmware V 11.00, adding notable new features.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

USB powers and controls LensConnect lenses

If your application enjoys fixed lighting conditions, objects of uniform height – and always at the same working distance from the lens – lucky you! For other imaging applications – where more variables challenge the optical solution – a different approach is needed.

In particular, IF your application exhibits one or more of:

  • Variable lighting due to time of day, cloud coverage (exterior application); or robot in a warehouse with uneven lighting (interior application)
  • Variable height targets (pick-and-place of heterogeneous items, autonomous vehicle continuously re-calculating speed and direction as it moves through the landscape or airspace)
  • Need to adapt to changing working distances while maintaining sharp focus

THEN you may find that a fixed aperture lens with a narrow focal range would yield sub-optimal outcomes, or that you’d have to software-manage two or more cameras each with a different optical configuration.

Those challenges triggered the emergence of motorized lenses, such that one or more of the aperture (a.k.a. iris), the focus, or even varifocal breadth may be software controlled via electro-mechanical features. Early offerings in motorized lenses often used proprietary interfaces or required separate power vs. control cabling.

Thanks to USB, there are now machine vision lenses engineered by Computar – the LensConnect series – whose configuration application software can control continuously through a single USB connection.

Each lens in the LensConnect series provides motorized focus and iris control; some additionally provide varifocal zoom control across a wide working-distance range.
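
Conceptually, control is as simple as writing setpoints over one USB cable. A minimal runnable sketch – the names below are hypothetical stand-ins, not Computar’s actual LensConnect API:

    # Hypothetical stand-in for a USB lens-control API
    class HypotheticalUsbLens:
        def move_focus(self, counts):   # stepping motors give repeatable positions
            print(f"focus -> {counts} counts")
        def set_iris(self, f_number):
            print(f"iris -> f/{f_number}")

    lens = HypotheticalUsbLens()        # power and control over a single USB cable
    lens.move_focus(1200)               # refocus for a new working distance
    lens.set_iris(4.0)                  # open the aperture for a dimmer scene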

All lenses in the series are:

  • Easy to use
  • Plug-and-play
  • Compatible with Windows and Linux
  • Precise through use of stepping motors
Computar LensConnect USB controlled lenses

Vision Systems Design named Computar a Silver Honoree in its Innovators Awards for this lens series.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Ultraviolet (UV) imaging

While we’re all familiar with imaging in the human-visible spectrum, there are also huge opportunities in non-visible portions of the spectrum. Infra-red and its sub-domains NIR, SWIR, MWIR, and LWIR have a range of compelling applications at wavelengths just longer than visible, starting around 800nm. Products that take us to the shorter-than-visible wavelengths, where we find UV, aren’t as well known to many. But there are sensors, cameras, lighting, filters, and best practices already generating value across a wide range of applications.

Just below the lower end of the visible spectrum, from 380nm down to about 10nm, we find the ultraviolet (UV) spectrum.

UV spectrum has wavelengths just-shorter than the visible range

Applications areas include but are not limited to:

  • High-speed material sorting (including recyclables)
  • Biological domains:
    • Food inspection
    • Plant monitoring
    • Fluorescence analysis
  • Glass, gemstone, and liquid inspection
  • Semiconductor process monitoring
  • Power line inspection

Consider the following three-part illustration relative to recyclables sorting:

Differentiating between two types of plastic

In a typical recyclables operation, after magnets pick out ferrous materials and shakers bin the plastics together, one must efficiently separate the plastics by identifying and picking according to materials composition. In the rightmost image above, we see that the visible spectrum is of little help in distinguishing polystyrene from acrylic resin. But per the middle image above – a pseudo-image computationally mapped into the visible spectrum – the acrylic resin appears black while the polystyrene is light gray. The takeaway isn’t for humans to watch the mixed materials, of course, but to enable a machine vision application in which a robot can pick out one class of materials from another.

For the particular example above, the camera, lighting, and lensing are tuned to a wavelength of 365nm, as shown in the leftmost illustration. Acrylic resin blocks that wavelength, appearing black in the calculated pseudo-image, while polystyrene permits some UV light to pass – enough to make it clear it isn’t acrylic resin.
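
A minimal sketch of how such a pseudo-image could drive a pick decision – a simple intensity threshold on the UV image, with purely illustrative values:

    import numpy as np

    uv = np.random.rand(480, 640)   # stand-in for a captured 365nm UV image, scaled 0..1
    ACRYLIC_MAX = 0.15              # illustrative threshold: acrylic blocks 365nm -> dark

    dark_fraction = (uv < ACRYLIC_MAX).mean()   # share of pixels that blocked the UV
    label = "acrylic resin" if dark_fraction > 0.5 else "polystyrene"
    print(label, dark_fraction)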

Different materials block or pass different wavelengths; knowledge of those characteristics, together with the imaging “toolkit” of sensors, lighting, filters, etc., is the basis for effective machine vision applications.

Here’s just one more application example:

Electrical infrastructure inspection

Scenario: we want to inspect components that may need replacing because they are showing electric discharge, as opposed to doing costly scheduled replacements on items that still have service life in them. From a ground-based imaging system, we establish the field of view on the component (marked by the purple rectangle). We take a visible image of the component; also a UV image revealing whether discharge is present; then we computationally create a pseudo-image to either log “all good” or trigger a service action for that component.

As mentioned above, biological applications, glass and fluid inspection, and semiconductor processes are also well-suited to UV imaging – it’s beyond the scope of this piece to show every known application area!

In the UV space, we are pleased to represent SVS-Vistek cameras. While SVS-Vistek specializes in “Beyond the Visible”, in the UV area they offer three distinct cameras. Each features a Sony Pregius UV high-resolution image sensor with high dynamic range and sensitivity in the 200 – 400 nm range. Maximum frame rates, depending on camera model, range from 87fps to 194fps. Interfaces include GigE and CoaXPress.

Tell us about your intended application – we love to guide customers to the optimal solution.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Illustrations in this blog courtesy of SVS Vistek.

What does XSWIR (eXtended SWIR sensitivity) do for me?

Visible imaging, infrared imaging (IR), short wave IR (SWIR), Extended SWIR (XSWIR) – it’s an alphabet soup of acronyms and their correlating concepts. Let’s briefly review each type of imaging to set the stage for the new kid in town – XSWIR – to better understand what each has to offer.

Visible imaging is the shorthand name for machine vision applications in the same portion of the spectral range as human vision, from about 380 – 700 nm. The field of machine vision initially developed largely in the visible space, partly because it’s easiest to conceptualize innovation in a familiar space, but also due to the happy coincidence that CCD and CMOS sensors are photosensitive in the same portion of the spectrum as human sight!

Infrared imaging (IR), including near-infrared (NIR), focuses on wavelengths above 700 nm. NIR runs roughly from 750 nm – 1400 nm. Applications include spectroscopy, hardwood and wood pulp analysis, biomedicine, and more.

Short-wave IR (SWIR) applications have tended to fall in the range 950 nm – 1700 nm. Applications include quality control of electronics boards, plastic-bottle contents inspection, fruit inspection, and more. The camera sensor is typically based not on silicon (Si) but rather on indium gallium arsenide (InGaAs), and special lensing is typically required.

Then there are MWIR (3 – 5 µm) and LWIR (9 – 15 µm). You can guess what M and L stand for by now. MWIR and LWIR are interesting in their own right, but beyond the scope of this short piece.

We draw your attention to a newish development in SWIR, namely Extended SWIR, or simply XSWIR. Some use the term eSWIR instead – it’s all so new there isn’t a dominant acronym yet as we write this – we’ll persist with XSWIR for purposes of this piece. XSWIR pushes the upper limits of SWIR beyond what earlier SWIR technologies could realize.

As mentioned above, SWIR cameras, lenses, and the systems built on such components tended to concentrate on applications with wavelengths in the range 950 – 1700 nm. XSWIR technologies can now push the right end of the response curve to 1900 nm and even 2200 nm.

Big deal, a few hundred more nanometers of responsivity, who cares? Those doing any of the following may care a lot:

  • Spectral imaging
  • Laser beam profiling
  • Life science research
  • Surveillance
  • Art inspection

A camera taking XSWIR to 1900 nm responsivity is Allied Vision Technologies’ Goldeye G-034 XSWIR 1.9. AVT’s sister camera, the Goldeye G-034 XSWIR 2.2, extends responsivity to 2200 nm.

Allied Vision Goldeye XSWIR camera with lens

The Goldeye family was already known for robust design and ease of use, making SWIR accessible. Of particular note in the new Goldeye XSWIR 1.9 and 2.2 models are:

  • Extended SWIR wavelength detection beyond 1,700 nm
  • Multi-ROI selection to speed up processes, especially useful in spectrometer-based sorting and recycling applications
  • Industrial grade solution for an attractive price

Tell us about your intended application – we love to guide customers to the optimal solution.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Components needed for machine vision and industrial imaging systems

Machine vision and industrial imaging systems are used in various applications ranging from automated quality control inspection, bottle filling, robot pick-and-place applications, autonomous drone or vehicle guidance, patient monitoring, agricultural irrigation controls, medical testing, metrology, and countless more applications.

Imaging systems typically include at least a camera and lens, and often also include one or more of: specialized lighting, adapter cards, cables, software, optical filters, a power supply, a mount, or an enclosure.

At 1stVision we’ve created a resource page intended to make sure that nothing in a planned imaging application has been missed. There are many aspects on which 1stVision can provide guidance. The main components to consider are indicated below.

Diverse cameras

Cameras: There are area scan cameras for visible, infrared, and ultraviolet light, used for static or motion situations.  There are line scan cameras, often used for high-speed continuous web inspection.  Thermal imaging detects or measures heat.  SWIR cameras can identify the presence or even the characteristics of liquids.  The “best” camera depends on the part of the spectrum being sensed, together with considerations around motion, lighting, surface characteristics, etc.

An assortment of lens types and manufacturers

Lens: The lens focuses the light onto the sensor, mapping the targeted Field of View (FoV) from the real world onto the array of pixels.  One must consider image format to pair a suitable lens to the camera.  Lenses vary by the quality of their light-passing ability, how close to the target they can be – or how far from it, their weight (if on a robot arm it matters), vibration resistance,  etc.  See our resources on how to choose a machine vision lens.  Speak with us if you’d like assistance, or use the lens selector to browse for yourself.

Lighting: While ambient light is sufficient for some applications, specialized lighting may also be needed, to achieve sufficient contrast.  And it may not just be “white” light – Ultra-Violet (UV) or Infra-Red (IR) light, or other parts of the spectrum, sometimes work best to create contrast for a given application – or even to induce phosphorescence or scatter or some other helpful effect.  Additional lighting components may include strobe controllers or constant current drivers to provide adequate and consistent illumination. See also Lighting Techniques for Machine Vision.

Optical filter: There are many types of filters that can enhance application performance, or that are critical for success. For example, a “pass” filter only lets certain parts of the spectrum through, while a “block” filter excludes certain wavelengths. Polarizing filters reduce glare. And there are many more – for a conceptual overview see our blog on how machine vision filters create or enhance contrast.

Don’t forget about interface adapters like frame grabbers and host adapters; cables; power supplies; tripod mounts; software; and enclosures. See the resource page to review all components one might need for an industrial imaging system, to be sure you haven’t forgotten anything.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Lucid Helios2+ Time of Flight 3D cameras

The Lucid Vision Labs Helios2+ Time of Flight (ToF) 3D camera features a High Dynamic Range (HDR) mode, a high-speed time-of-flight mode, and a Sony DepthSense™ IMX556PLR 1/2″ back-illuminated global shutter CMOS ToF sensor.

Lucid Helios2+ Time of Flight 3D cameras

Do I need a Time of Flight (ToF) 3D camera? It depends. If you can achieve the desired outcome in 2D, by all means stay in 2D, since the geometry is simpler, as are the camera, lensing, lighting, and software requirements. But as discussed in “Types of 3D imaging systems – and benefits of Time of Flight (TOF)”, some applications can only be solved, or innovative offerings created, by working in three-dimensional space.

Robotic pick-and-place, aerial drones, and patient monitoring are three diverse examples of applications that may require 3D ToF imaging. Some 3D systems use structured light or passive stereo approaches to build a 3D representation of the object space – but those approaches are often constrained to short working distances. ToF can be ideal for applications operating at working distances of 0.5m – 5m and beyond, with depth-resolution requirements of 1 – 5mm.
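
The principle behind ToF is round-trip timing: distance d = c·Δt/2. The timing precision this implies is striking, as a quick sketch shows (in practice, many ToF sensors measure the phase shift of modulated light rather than timing individual pulses):

    C_M_PER_S = 3.0e8              # speed of light
    depth_resolution_m = 0.001     # target: 1 mm depth precision

    dt_s = 2 * depth_resolution_m / C_M_PER_S
    print(f"{dt_s * 1e12:.1f} ps of round-trip timing per mm of depth")  # ~6.7 ps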

Lucid Vision Labs has been a recognized leader in 3D ToF systems for some time, and we are proud to represent their Helios2 and new Helios2+ cameras, the latter with high-speed modes achieving frame rates of 100+ fps.

Besides the high-speed mode shown in the video above, another feature is the High Dynamic Range mode, which combines multiple exposures to provide accurate 3D depth information for high-contrast, complex scenes containing both highly reflective and low-reflectivity objects. It supports sensing and depth-measurement applications to sub-mm (< 1mm) precision. Click here to see examples and further details.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Sony Pregius 4th generation continues image sensor excellence

Continuing the tradition of excellence begun in 2013, Sony’s 4th generation of Pregius sensors, designated Pregius S, is now available in a range of cameras. All Pregius sensors, identified by the “IMX” code preceding the sensor model number, provide global shutter pixel technology for active-pixel CMOS image sensors, adopting Sony Semiconductor Solutions Corporation’s low-noise structure to realize high-quality images.

Pregius S brings a back-illuminated structure, enabling smaller sensor size as well as faster frame rates. The faster frame rates speak for themselves, but it’s worth noting that the smaller sensor size has the benefit of permitting smaller lenses, which can reduce overall costs.

Figure 1. Surface-illuminated vs. Back-illuminated image sensors

Let’s highlight some of the benefits offered by Pregius S image sensors:

  • With the photodiode placed closer to the micro-lens, a wider incident angle is created, admitting more light, leading to enhanced sensitivity. At low incident angles, the Pregius S captures up to 4x as much light as Sony’s own highly-praised 2nd generation Pregius sensors from just a few years ago! (See Fig. 1 above)
  • Light collection is further enhanced by positioning wiring and circuits below the photodiode
  • Smaller 2.74µm pixels provide higher resolution in typical compact cube cameras, continuing the evolution of ever more capacity and performance while occupying less space

While Pregius S sensors are very compelling, the prior-generation Pregius sensors remain an excellent choice for many applications. As with many engineering choices, it comes down to performance requirements as well as cost considerations to achieve the optimal solution for any given application. Many of the Pregius S image sensors can be found in industrial cameras offered by 1stVision. Use the “Sensor” pull-down menu on our camera selector to look for the new sensors, whose designations start with IMX5 – e.g. IMX541.

Contact us

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Graphics courtesy of Sony.

AVT Alvium series G1 GigE and G5 5GigE Vision cameras

Supplementing Allied Vision’s ALVIUM Technology – AVT’s ASIC-based camera lineup previously offered with USB3 Vision and MIPI CSI-2 interfaces – ALVIUM offerings now include two speed levels of the GigE Vision interface. While retaining the compact sugar-cube housing format, the ALVIUM G1 and ALVIUM G5 combine the advantages of the established GigE Vision standard with the flexibility of the ALVIUM platform.

ALVIUM G1 GigE and G5 5GigE Vision cameras

As an SoC design tailored for imaging, ALVIUM is highly optimized to balance functionality and performance, unlike cameras built on generic components. And with four interfaces to the ALVIUM platform, users can match application needs by testing different interfaces, each with a similar user experience.

The ALVIUM G1 series are compact GigE cameras with excellent image quality, offering a comprehensive feature set across 14 sensors in the initial release:

  • Resolution: up to 24.6 megapixels
  • Sensors: CMOS global and rolling shutter sensors from Sony and ON Semi
  • Frame rates: up to 276 frames per second
  • Housing: Closed housing
  • Lens mount options: C-Mount, CS-Mount, or S-Mount (M-12)
  • Image colors: Monochrome and color (UV, NIR & SWIR coming soon)
ALVIUM G1

Click here to see all G1 models and get a quote

The ALVIUM G5 series offers an easy upgrade to more performance, also with a comprehensive feature set, and 11 high-performance Sony IMX image sensors at first release:

  • Resolutions: up to 24.6 megapixels
  • Sensors: CMOS global and rolling shutter SONY IMX sensors
  • Frame rates: up to 464 frames per second
  • Housing: Closed housing (60 mm x 29 mm x 29 mm)
  • Lens mount options: C-Mount, CS-Mount, or S-Mount (M-12)
  • Image colors: Monochrome and color (UV, NIR & SWIR coming soon)
ALVIUM G5

Click here to see all G5 models and get a quote

Contact us at 1stVision with a brief idea of your application, and we will contact you to discuss camera options, support, and/or pricing.

Contact us

1st Vision’s sales engineers have an average of 20 years’ experience to assist in your camera selection. Representing the largest portfolio of industry-leading brands in imaging components, we can help you design the optimal vision solution for your application.

About Us | 1stVision

1st Vision is the most experienced distributor in the U.S. of machine vision cameras, lenses, frame grabbers, cables, lighting, and software in the industry.

Microfluidics systems with Opto Imaging Modules

Microfluidics encompasses the control and manipulation of fluids at sub-millimeter scale, with a wide and growing range of industrial and medical applications. Opto now offers microfluidics systems solutions via the Opto digital inverse microscope profile M.

With a pressure-based system to move fluids into the droplet micro-nozzle, through the dosing nozzle (to add active substances to a droplet), and through the tubing, the Opto Inverse Microscope Profile M revolutionizes the ease of creating a microfluidics system.

Five key benefits of the Opto microfluidics system:

  • Integrates camera sensor, lens, and lighting into one module
  • Provides lens magnification choices to match application requirements
  • Light source frequency and color options are available
  • Parameterized software permits droplet size monitoring and control with user-friendly controls and display (see image below)
  • Droplets may be sequentially identified and logged for export and data mining
Microfluidics droplet tracking

The microfluidics systems offering described above is in turn based upon Opto’s innovative Imaging Modules.  These modules each contain an integrated camera sensor, lens and lighting in an “all in one” housing.  A range of imaging modules are available, each configured to optimize component alignment and operations. The end-user may quickly deploy a new system, benefiting from the standardized systems, economies of scale, and expertise of the module builder.

Coming soon are modules to track, count, and analyze fast objects, particles, and droplets at more than 150 fps, making the approach suitable for high-speed biomedical and microfluidics applications.

Key takeaway: Imaging modules relieve the systems builder of the challenges in building an imaging system from scratch, such that the imaging system is a building-block available for integration into (often as the controlling “engine” of) a larger system. The integrator or system builder can focus more at the systems level, connecting the imaging module rather than having to integrate a lens, sensor, and lighting into a custom solution.

In short, it’s buy vs. build – for certain applications areas, Opto’s integrated modules make a compelling value proposition in favor of “buy” – for the imaging features – allowing the integrator or systems builder to add his or her expertise in other aspects of the system build.

Contact us at 1stVision with a brief idea of your application, and we will contact you to discuss camera options, support, and/or pricing.

Contact us

1st Vision’s sales engineers have an average of 20 years’ experience to assist in your camera selection. Representing the largest portfolio of industry-leading brands in imaging components, we can help you design the optimal vision solution for your application.

About Us | 1stVision

1st Vision is the most experienced distributor in the U.S. of machine vision cameras, lenses, frame grabbers, cables, lighting, and software in the industry.

New compact IDS XCP and XLE cameras

IDS offers new compact, budget-friendly industrial cameras in the XCP and XLE families. These versatile cameras are GenICam compliant, and easily programmed with the IDS peak SDK – or with third-party software.

IDS XCP and XLE industrial cameras

Designed for price-sensitive and high-volume applications, these cameras can easily be integrated into a wide variety of image processing systems.

The uEye XCP is the smallest industrial camera with housing and C-mount; members of this family measure only 29 x 29 x 17 mm (W/H/L). With a zinc die-cast full housing and screw-type USB Micro-B connector, the C-mount adapter makes it possible to choose among a wide range of lenses.

For embedded applications, the uEye XLE family offers single-board cameras with or without C-/CS-Mount or S-Mount and USB Type-C interface.

In both the uEye XCP and XLE series, you can currently choose between the 2.3 MP global shutter sensor AR0234 and the 5 MP rolling shutter sensor AR0521 from Onsemi. In addition, the 8.46 MP sensor from the Sony Starvis series will soon be available.

Contact us at 1stVision with a brief idea of your application, and we will contact you to discuss camera options, support, and/or pricing.

Contact us

1st Vision’s sales engineers have an average of 20 years experience to assist in your camera selection.  Representing the largest portfolio of industry leading brands in imaging components, we can help you design the optimal vision solution for your application.

About Us | 1stVision

1st Vision is the most experienced distributor in the U.S. of machine vision cameras, lenses, frame grabbers, cables, lighting, and software in the industry.

We have stock! USB3 machine vision cameras

While “we have stock” might not have been a compelling opening line prior to COVID and supply-chain challenges, in the current context it’s an attention-grabbing headline.

USB3 cameras

It’s widely known throughout the machine vision industry, and many other electronics-dependent sectors, that supply chain shortages of key components like FPGAs have led to months- or years-long backlogs in delivering products that were once continuously stocked – or at least available with short lead times.

Some camera manufacturers use their own ASICs and components that are not plagued by shortages, so many of the most popular models are in stock here at 1stVision, and others are available with short lead times.

We currently have ~50 different USB3 industrial camera models in stock, with resolutions ranging from WVGA to 20.2 MP. Image sensor manufacturers include ON Semi and Sony, with frame rates up to 281 fps. Our latest stock update has good quantities of popular sensor models such as the ON Semi AR0521 and Sony IMX273, IMX183, and IMX265, to name a few. In many cases, 1stVision has cameras in various housed and board-level formats, and we restock regularly.

A subset of the many cameras in stock at 1stVision

Contact us at 1stVision with a brief idea of your application, and we will contact you to discuss camera options, support, and/or pricing.

Contact us

1st Vision’s sales engineers have an average of 20 years experience to assist in your camera selection.  Representing the largest portfolio of industry leading brands in imaging components, we can help you design the optimal vision solution for your application.

About Us | 1stVision

1st Vision is the most experienced distributor in the U.S. of machine vision cameras, lenses, frame grabbers, cables, lighting, and software in the industry.

IDS uEye XC: Webcam alternative for industrial applications

While traditional webcams are notoriously easy to bring online, they are typically only consumer-grade in robustness, and the images they deliver haven’t been standards compliant – meaning machine vision software hasn’t been able to process the data.

Enter the IDS uEye XC, a game-changing USB3 auto-focus camera from the Vision Standard-compliant uEye+ product line. With integrated auto-focus, images – both stills and videos – remain sharp even as working distance varies. Application possibilities include kiosk systems, logistics, and robotics.

The camera pairs a lightweight magnesium housing and dimensions of just 32 x 61 x 19 mm (W x H x D) with a 13 MP OnSemi sensor delivering 20 fps. BSI (Backside Illumination) provides significant improvements in low-light signal-to-noise ratio, visible light sensitivity, and infrared performance.

The IDS uEye XC camera utilizes industrial-grade components and IDS provides a long planned lifecycle, so that customers can confidently do design-ins knowing they can source more cameras for many years to come. Additional features include 24x digital zoom, auto white balance and color correction.

The camera is designed for plug-and-play installation, and in case you want to modify parameter settings, the IDS peak SDK makes it easy to configure the camera for optimal performance in your application.

Contact us at 1stVision with a brief idea of your application, and we will contact you to discuss camera options, support, and/or pricing.

Contact us

1st Vision’s sales engineers have an average of 20 years experience to assist in your camera selection.  Representing the largest portfolio of industry leading brands in imaging components, we can help you design the optimal vision solution for your application.

About Us | 1stVision

1st Vision is the most experienced distributor in the U.S. of machine vision cameras, lenses, frame grabbers, cables, lighting, and software in the industry.

Fujinon HF-XA-1F lenses with unique anti-shock and vibration performance

While conventional machine vision camera lenses exhibit problematic degradation of resolution when the shooting distance or aperture is changed, the Fujinon HF-XA-1F lenses feature high-performance “4D HR” to minimize such degradation. The new lenses maintain highly consistent image sharpness from the center to the edges, even as working distance or aperture changes, enabling consistent delivery of high-resolution images under a wide variety of installation and shooting conditions.

4DHR: With vs. without

The series is designed for “4DHR” (4D High Resolution) and is compatible with the IMX250 high-performance CMOS image sensor (2/3″, 5 megapixels, 3.45 µm pixel pitch). There are five family members, at focal lengths of 8, 12, 16, 25, and 35 mm; each model can be used with optical formats from 1/3″ up through 2/3″, and even with some 1/1.2″ sensors.

Fujinon HF-XA-1F Series

Adjusting the focus is demonstrated in the video below: one ring adjusts focus while the operator monitors the image, and another ring locks in the adjustment:

In addition, the lenses’ unique mechanical design realizes anti-shock and vibration-resistant performance, further contributing to image quality. The lenses are compliant with standard IEC60068-2-6, key test parameters being:

  • Vibration frequency of 10-60 Hz (amplitude of 0.75 mm), 60-500 Hz (acceleration of 100 m/s²)
  • Sweep frequency of 50 cycles

Unusually for lens designs, this family includes iris parts with different F-numbers in the package. These parts enable the user to adjust the F-number to suit the installation conditions and the application. Please refer to the video below for how to replace the iris parts and attach the lens to the camera.

Contact us at 1stVision with a brief idea of your application, and we will contact you to discuss lensing and camera options, support, and/or pricing.

Contact us

1st Vision’s sales engineers have an average of 20 years experience to assist you.  Representing the largest portfolio of industry leading brands in imaging components, we can help you design the optimal vision solution for your application.

About Us | 1stVision

1st Vision is the most experienced distributor in the U.S. of machine vision cameras, lenses, frame grabbers, cables, lighting, and software in the industry.

What is the difference between an Area Scan and a Line Scan Camera?

Examples of area scan and line scan applications

While the differences between the applications for an area scan machine vision camera vs. a line scan camera may often appear subtle, the differences in their technologies and the ways to optimize them in specific use cases are clear. By optimizing we mean relative costs as well as imaging outcomes. This article provides a foundational overview; for additional application engineering assistance, please contact one of our industrial imaging technical consultants and get the support you need.

Definition of an Area Scan Camera:

Area scan cameras are generally considered to be the all-purpose imaging solution as they use a straight-forward matrix of pixels to capture an image of an object, event, or scene. In comparison to line scan cameras, they offer easier setup and alignment. For stationary or slow moving objects, suitable lighting together with a moderate shutter speed can produce excellent images.

Even moving objects can become “stationary” from the perspective of an area scan camera through appropriate strobe lighting and/or a fast shutter speed, so the fact that something is in motion does not necessarily disqualify an area scan solution.

A key feature of an area scan camera is that, when matched with a suitable lens, it provides a fixed resolution. This allows for easy setup in imaging system applications where the cameras will not move after installation. Area scan cameras are also extremely flexible, as a single frame can be segmented into multiple regions of interest (ROI) to look for specific objects rather than having to process the entire image.
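
As a minimal sketch of the idea (assuming a NumPy environment; the frame dimensions and ROI coordinates are illustrative), segmenting a frame into ROIs is just array slicing, so downstream processing touches only the pixels it needs:

import numpy as np

# One full-resolution frame from an area scan camera (zeros as a stand-in for real pixel data)
frame = np.zeros((1080, 1920), dtype=np.uint8)

# Two regions of interest, addressed as simple array slices [rows, cols]
roi_a = frame[100:300, 200:600]      # e.g. a label area
roi_b = frame[700:900, 1200:1700]    # e.g. a fill-level area

print(roi_a.shape, roi_b.shape)      # (200, 400) (200, 500) -- ~0.2 MP instead of the full ~2 MP

Many cameras also support ROI readout on the sensor itself, which additionally raises the achievable frame rate; the sketch shows the software-side equivalent.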

Additionally, some models of area scan cameras are optimized to be sensitive to infrared light, in portions of the spectrum not visible to the human eye. This allows for thermal imaging as well as feature identification applications that can be innovative and cost-effective, opening new opportunities for machine vision.

NIR imaging detects flaws in photovoltaic modules

Definition of a Line Scan Camera:

In contrast to an area scan camera, in a line scan camera a single row of pixels is used to capture data very quickly. As the object moves past the camera, the complete image is pieced together in the software line-by-line and pixel-by-pixel.

Line scan camera systems are the recognized standard for high-speed processing of fast-moving “continuous” objects, such as in web inspection of paper, plastic film, and related applications. Among the key factors impacting their adoption in these systems is that the single row of pixels produced by line scanning allows the image processing system to build continuous images unlimited by a specific vertical resolution. This results in superior, high-resolution images. Unlike area scan cameras, a line scan camera can also expose a new image while the previous image is still transferring its data, because the pixel readout is faster than the camera exposure. When building a composite image, the line scan camera can either move over an object or have moving objects presented to it. Coordination of production/camera motion and image acquisition timing is critical for line scan cameras but, unlike area scan cameras, lighting is relatively simple.
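
As a minimal sketch of the line-by-line assembly described above (assuming a NumPy environment; grab_line() is a hypothetical stand-in for your camera SDK's line-read call, and the dimensions are illustrative):

import numpy as np

LINE_WIDTH = 4096   # pixels per scan line, e.g. a 4k line scan camera
NUM_LINES  = 2048   # lines accumulated into one composite frame

def grab_line():
    # Placeholder for a real SDK call returning one row of pixels,
    # typically triggered per unit of conveyor/web motion
    return np.zeros(LINE_WIDTH, dtype=np.uint8)

# Build the composite image row by row as the object moves past the sensor
frame = np.empty((NUM_LINES, LINE_WIDTH), dtype=np.uint8)
for y in range(NUM_LINES):
    frame[y, :] = grab_line()

In practice the per-line trigger is usually derived from a motion encoder, which is exactly the coordination of motion and acquisition timing that the paragraph above calls critical.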

What if you need to image a medical tube, or round object, such as a steel ball bearing?

In certain applications, line scan cameras have other specific advantages over area scan cameras. Consider this application scenario: you need to inspect several round or cylindrical parts, and your typical system experience is with area scan cameras, so you set about using multiple cameras to cover the entire part surface. It’s doable, but a better solution would be to rotate the part in front of a single line scan camera to capture the entire surface and allow the processor to “unwrap” the image pixel by pixel. Line scan cameras are also typically smaller than area scan cameras. As a result, they can sneak into tight spaces, such as a spot where they might have to peek through rollers on a conveyor to view a key angle of a part for quality assurance.

Not sure which area scan or line scan camera is right for you?

There are a host of options and tradeoffs to consider even after you’ve made your decision on the technology that’s likely best for you. 1st Vision is the US distributor you need. Our industrial imaging consultants are available to help you navigate the various camera models and brands from industry-leading manufacturers Teledyne DALSA, IDS, and Allied Vision.

Contact us to learn more.

1stVision has cameras in stock!

IDS, Allied Vision, and DALSA cameras

Are you having problems with your machine vision camera deliveries?  Due to component shortages in the global marketplace, many camera manufacturers’ lead times are 3 to 6 months, with some pushing past 9 months.

We have good news.  As a stocking distributor, 1stVision has over 300 cameras in stock!

IDS Imaging, Allied Vision, and Teledyne DALSA cameras
Lights and lenses for machine vision

We may be a distributor, but our technical knowledge is second to none with our sales engineers having an average of 25 years of experience in the industry.  We can solve your problems and make recommendations. We’re the stocking distributor that’s big enough to stock the best cameras, and small enough to care about every image.

We’re also committed to customer education – we maintain online resources such as a Knowledge Base and a Machine Vision Blog, regularly updated to keep you informed of new technologies and product releases.  Machine vision and optics are evolving fields, with new technologies constantly emerging – it pays to stay informed.

Contact us at 1stVision to speak with us about cameras in stock now.

New AVT Alvium 1800 VSWIR cameras

Visible-to-SWIR sensors that cover both the visible and short-wave infrared spectrum are now available, affordable, and well-suited for a range of imaging applications. Previously one might have needed two different sensors – and cameras – but Allied Vision’s Alvium 1800 U/C-030 and Alvium 1800 U/C-130 take advantage of Sony’s innovative InGaAs SenSWIR sensor technology, with the Sony IMX991, to provide coverage across the visible-to-SWIR spectrum.

Alvium VSWIR with MIPI CSI-2 and USB3 Vision interfaces

These Alvium 1800 VSWIR cameras can be used from 400 nm to 1700 nm, and are the smallest industrial-grade, uncooled SWIR core modules on the market.  With their compact design, low power consumption, and light weight, they are the ideal solution for compact OEM systems used in embedded and machine vision applications. 

The 030 models use a ¼″ sensor with frame rates to 223 fps, while the 130 models use a ½″ sensor with frame rates to 119 fps. Both are available with USB3 Vision or MIPI CSI-2 interfaces, in housed, open, or board-level configurations.

Contact us at 1stVision with a brief idea of your application, and we will contact you to discuss camera options, support, and/or pricing.

Contact us

1st Vision’s sales engineers have an average of 20 years experience to assist in your camera selection.  Representing the largest portfolio of industry leading brands in imaging components, we can help you design the optimal vision solution for your application.

About Us | 1stVision

1st Vision is the most experienced distributor in the U.S. of machine vision cameras, lenses, frame grabbers, cables, lighting, and software in the industry.

How machine vision filters create contrast in machine vision applications

Before and after applying filters

Imaging outcomes depend crucially on contrast. Only by making a feature “pop” relative to the larger image field in which it lies can the feature be optimally identified by machine vision software.

While sensor choice, lensing, and lighting are important aspects of building machine vision solutions with effective contrast creation, effective selection and application of filters can provide additional leverage for many applications. Filters are often overlooked or misunderstood, so here we provide a first look at machine vision filter concepts and benefits.

Before and after applying filters

In the 4 image pairs above, each left-half image was generated with the same sensor, lighting, and exposure duration as the corresponding right-half image. But the right-half images have had filters applied – to reduce glare or scratch-induced scatter, or to separate or block certain wavelengths, for example. If your brain finds the left-half images difficult to discern, image processing software wouldn’t be “happy” with them either!

While there are also filtering benefits in color and SWIR imaging, it is worth noting that we started above with examples shown in monochrome. Surprising to many, it can often be both more effective and less expensive to create machine vision solutions in the monochrome space – often with filters – than in color. This may seem counter-intuitive, since most humans enjoy color vision, and use it effectively when driving, judging produce quality, choosing clothing that matches our skin tone, etc. But compared to single-sensor color cameras, monochrome single-sensor cameras paired with appropriate filters:

  • can offer higher contrast and better resolution
  • provide better signal-to-noise ratio
  • can be narrowed to sensitivity in the near-ultraviolet, visible, and near-infrared spectra

These features give monochrome cameras a significant advantage when it comes to optical character recognition and verification, barcode reading, scratch or crack detection, wavelength separation and more. Depending on your application, monochrome cameras can be three times more efficient than color cameras.

Identify red vs. blue items

Color cameras may be the first thought when separating items by color, but it can be more efficient and effective to use a monochrome camera with a color bandpass filter. As shown above, to brighten or highlight an item that is predominantly red, a red filter can be used to transmit only the red portion of the spectrum, blocking the rest of the light. The reverse also works: a blue filter passes blue wavelengths while blocking red and other wavelengths.
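
A toy numerical sketch of this idea (illustrative spectra only, not measured data): a monochrome sensor behind a red bandpass filter reports a strong signal for a predominantly red item and a weak signal for a predominantly blue one:

import numpy as np

wl = np.arange(400, 701)                                 # wavelengths, nm
red_item  = np.where(wl > 600, 0.9, 0.1)                 # reflects mostly red
blue_item = np.where(wl < 500, 0.9, 0.1)                 # reflects mostly blue
red_bandpass = ((wl > 600) & (wl < 660)).astype(float)   # red filter transmission band

def mono_signal(reflectance, filt):
    # Relative monochrome sensor signal: reflectance x filter transmission,
    # summed over wavelength (uniform illumination and QE assumed)
    return np.sum(reflectance * filt)

print(mono_signal(red_item, red_bandpass))    # ~53 : red item appears bright
print(mono_signal(blue_item, red_bandpass))   # ~6  : blue item appears dark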

Here we have touched on just a few examples, to whet the appetite. We anticipate developing a Tech Brief with a more in depth treatment of filters and their applications. We partner with Midwest Optical to offer you a wide range of filters for diverse application solutions.

Contact us

1st Vision’s sales engineers have an average of 20 years experience to assist in your camera selection.  Representing the largest portfolio of industry leading brands in imaging components, we can help you design the optimal vision solution for your application.

About Us | 1stVision

1st Vision is the most experienced distributor in the U.S. of machine vision cameras, lenses, frame grabbers, cables, lighting, and software in the industry.

Three new AVT Alvium 1800 USB3 cameras

AVT Alvium housed, board-level, and open options

1stVision is pleased to announce that Allied Vision has added three camera models featuring fourth-generation Sony IMX sensors with Pregius S global shutter technology to its Alvium 1800 U camera series. With the new models Alvium 1800 U-511 (Sony IMX547), Alvium 1800 U-811 (Sony IMX546), and Alvium 1800 U-1242 (Sony IMX545), the Alvium camera series with USB3 interface now comprises 19 models. All cameras are available in different housing variants (closed housing, open housing, bareboard), as monochrome or color cameras, and with different lens mount options. The USB port can be located either on the back of the camera or on the left side (as seen from the sensor).

AVT Alvium 1800 U housing option
AVT Alvium housed, bareboard, and open variants

To highlight just one key point about each new camera:

  • Alvium U-511: First 5.1 Mpix global shutter Sony sensor for S-mount lens 
  • Alvium U-811: Square 8 Mpix sensor ideal for round or square objects, and microscopy
  • Alvium U-1242: Same resolution as the 2nd-gen IMX304, with a smaller sensor
Model       | Alvium 1800 U-511    | Alvium 1800 U-811    | Alvium 1800 U-1242
Sensor      | Sony IMX547          | Sony IMX546          | Sony IMX545
Sensor type | CMOS global shutter  | CMOS global shutter  | CMOS global shutter
Sensor size | Type 1/1.8           | Type 2/3             | Type 1/1.1
Pixel size  | 2.74 µm × 2.74 µm    | 2.74 µm × 2.74 µm    | 2.74 µm × 2.74 µm
Resolution  | 5.1 MP (2464 × 2064) | 8.1 MP (2848 × 2848) | 12.4 MP (4128 × 3008)
Frame rate  | 78 fps (@450 MB/s)   | 51 fps (@450 MB/s)   | 33 fps (@450 MB/s)
Key attributes at a glance

All cameras are available with different housing variants (closed housing, open housing, bareboard) as well as different lens mount options, according to your application’s requirements.

Contact us at 1stVision with a brief idea of your application, and we will contact you to discuss camera options, support, and/or pricing.

Contact us

1st Vision’s sales engineers have an average of 20 years experience to assist in your camera selection.  Representing the largest portfolio of industry leading brands in imaging components, we can help you design the optimal vision solution for your application.

About Us | 1stVision

1st Vision is the most experienced distributor in the U.S. of machine vision cameras, lenses, frame grabbers, cables, lighting, and software in the industry.

New IDS uEye XLE camera family

IDS XLE

The IDS uEye XLE family is now available to 1stVision customers.  These versatile cameras are designed for high-volume, price-sensitive projects needing basic functions without special features.  Suitable applications include but are not limited to manufacturing, metrology, traffic, and agriculture.

IDS Imaging XLE camera
IDS uEye XLE board-level and housed options

Thanks to different housing variants, extremely compact dimensions and modern USB3 Vision interface, uEye XLE cameras can be easily integrated into any image processing system.  Housing variants include housed and board-level, with different lens mount options.

Currently there are 10 family members, each available with monochrome or color CMOS sensors from 2 to 5 MPixel.  The cameras have excellent low-light performance, thanks to BSI (“Back Side Illumination”) pixel technology.

With a USB 3.1 Gen 1 interface, all XLE models communicate via the USB3 Vision protocol, and are 100 percent GenICam-compliant.  So you may easily operate and program the cameras with the IDS peak SDK, as well as other industry-standard software.

Contact us

1st Vision’s sales engineers have an average of 20 years experience to assist in your camera selection.  Representing the largest portfolio of industry leading brands in imaging components, we can help you design the optimal vision solution for your application.

About Us | 1stVision

1st Vision is the most experienced distributor in the U.S. of machine vision cameras, lenses, frame grabbers, cables, lighting, and software in the industry.

11 and 86Mpixel Teledyne DALSA Falcon 4 cameras

Falcon4 cameras

Teledyne DALSA’s Falcon4-CLHS cameras are now available to 1stVision customers.  The state of the art in the Falcon series, they come in both 11 Mpixel and 86 Mpixel models, each using CLHS to achieve stunning frame rates.  This can enable new applications not previously possible, or next-gen solutions with a single camera where previously two or more were needed – greatly simplifying implementation.

The 11 Mpixel camera, available in two monochrome variants, offers a global shutter sensor, a wide field of view of 4480 pixels, and up to 609 fps at full resolution.

Teledyne Dalsa Falcon 4
Teledyne DALSA Falcon4 cameras

Popular applications for the 11Mpixel models include:

  • Machine Vision
  • Robotics
  • Factory Automation Inspection
  • Motion Tracking and Analysis
  • Electronic Inspection
  • High Speed 3D imaging

If your application requires even more resolution, Teledyne DALSA’s Falcon4-CLHS 86M also uses a global shutter 86 Mpixel CMOS sensor, delivering up to 16 fps.  Also monochrome, it shows good responsivity into the NIR spectrum.

Falcon 4- CLHS 86MP
Aerial imaging

Applications for the 86Mpixel camera include:

  • Aerial Imaging
  • Reconnaissance
  • Security and Surveillance
  • 3D Metrology
  • Flat Panel Display Inspection
Contact us

1st Vision’s sales engineers have an average of 20 years experience to assist in your camera selection.  Representing the largest portfolio of industry leading brands in imaging components, we can help you design the optimal vision solution for your application.

About Us | 1stVision

1st Vision is the most experienced distributor in the U.S. of machine vision cameras, lenses, frame grabbers, cables, lighting, and software in the industry.

Spatial resolution is an essential machine vision concept

image sensor

Spatial resolution is determined by the number of pixels in a CMOS or CCD sensor array.  While generally speaking “more is better”, what really matters is slightly more complex than that.  One needs to know enough about the dimensions and characteristics of the real-world scene at which a camera is directed; and one must know about the smallest feature(s) to be detected.

Choosing the right sensor requires understanding spatial resolution

The sensor-coverage fit of a lens is also relevant, as is the optical quality of the lens.  Lighting also impacts the quality of the image.

But independent of lens and lighting, a key guideline is that each minimal real-world feature to be detected should appear in a 3×3 pixel grid in the image.  So if the real-world scene is X by Y meters, and the smallest feature to be detected is A by B centimeters, assuming the lens is matched to the sensor and the scene, it’s just a math problem to determine the number of pixels required on the sensor.

There is a comprehensive treatment of how to calculate resolution in this short article, including a link to a resolution calculator. Understanding these concepts will help you to design an imaging system that has enough capacity to solve your application, while not over-engineering a solution – enough is enough.

Finally, the above guideline is for monochrome imaging, which, to the surprise of newcomers to the field of machine vision, is often better than color for effective and cost-efficient outcomes.  Certainly some applications depend upon color.  The guideline for color imaging is that the minimal feature should occupy a 6×6 pixel grid.
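
As a minimal sketch of the arithmetic (using the 3×3 monochrome and 6×6 color guidelines above; the scene and feature sizes are illustrative):

import math

def required_pixels(scene_m, feature_cm, pixels_per_feature=3):
    # pixels_per_feature: 3 for monochrome (3x3 rule), 6 for color (6x6 rule)
    feature_m = feature_cm / 100.0
    return math.ceil(scene_m / feature_m * pixels_per_feature)

# Example: a 1.2 m x 0.8 m scene, smallest feature 0.5 cm x 0.5 cm, monochrome
x_pixels = required_pixels(1.2, 0.5)   # 720
y_pixels = required_pixels(0.8, 0.5)   # 480
print(x_pixels, y_pixels)              # ~0.35 MP suffices; color (6x6) would need ~1.4 MP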

If you’d like someone to double-check your calculations, or to prepare the calculations for you, and to recommend sensor, camera and optics, and/or software, the sales engineers at 1stVision have the expertise to support you. Give us some brief idea of your application and we will contact you to discuss camera options.

Contact us

1st Vision’s sales engineers have an average of 20 years experience to assist in your camera selection.  Representing the largest portfolio of industry leading brands in imaging components, we can help you design the optimal vision solution for your application.

About Us | 1stVision

1st Vision is the most experienced distributor in the U.S. of machine vision cameras, lenses, frame grabbers, cables, lighting, and software in the industry.

What can multifield linescan imaging do for me?

Multifield imaging is a new imaging technology that enables capturing multiple images simultaneously under various lighting conditions, e.g. brightfield, darkfield, and backlight, in a single scan. It’s a variation on the concept of sequence modes. The Teledyne DALSA Linea HS is the industry’s first TDI camera capable of capturing up to three images using light sources at different wavelengths.

OK, cool.  How does that help me?  How does it differ from other imaging methods?  What applications can it solve that couldn’t be tackled before?

Backlight, Darkfield, and Brightfield images of same target

Perhaps a quick review of area scan imaging and conventional linescan imaging will help set the stage:

Area scan cameras are most intuitive, creating in one exposure a rectangular array of pixels corresponding to an entire scene or field of view. That’s ideal for many types of machine vision imaging, if the target fits wholly in the field of view, and if the lighting, lens, and image processing can best achieve the desired outcome at an optimal price point.

But linescan imaging is sometimes a better choice, especially for continuous-flow applications where there is no discrete start and end point in one dimension.  Linescan systems can capture an image “slice” that is enough pixels wide to make effective imaging computations and, where required, to archive those images, using fewer active pixels and reducing sensor costs compared to area scan.  Other benefits include high sensitivity and the ability to image fast-moving materials without the need for expensive strobe lighting.

Understanding line scan applications: concepts still relevant!

… so much for the review session.  So, what can multifield linescan imaging do for me?  Multifield capable linescan cameras bring all the benefits of conventional linescan imaging, but additionally deliver the perspectives of monochrome, HDR, color/multispectral (NIR), and polarization views.   This can enable machine vision solutions not previously possible, or solutions at more attractive price points, for a diverse range of applications.

Consider OLED display inspection, for example. Traditionally an automated inspection system would have required multiple passes, one each with backlight, darkfield, and brightfield lighting conditions. With a multifield solution, all three image types may be acquired in a single pass, greatly improving throughput and productivity.

Flat panel glass is inspected at every stage of manufacturing

So how is multifield imaging achieved? In this blog we’re more focused on applications, but for those new to Time Delay and Integration (TDI), it is the concept of accumulating multiple exposures of the same (moving) object, effectively increasing the integration time available to collect incident light. The key technology for a multifield linescan camera is a sensor that uses advanced wafer-level coated dichroic filters with minimal spectral crosstalk to spectrally isolate three images captured by separate TDI arrays – i.e., wavelength-division multifield imaging.

Multifield images on one sensor using filters to isolate wavelengths

This new technology significantly boosts system throughput, as it eliminates the need for multiple scans. It also improves detectability, as multiple images under different lighting conditions are captured simultaneously with minimal impact from mechanical vibration.
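
As an illustrative simulation of why TDI helps (a sketch under shot-noise-limited assumptions, not a model of Teledyne DALSA's sensor): accumulating N stages grows signal N-fold while shot noise grows only as the square root of N, so SNR improves by sqrt(N):

import numpy as np

rng = np.random.default_rng(0)
signal_per_stage = 10.0    # mean photoelectrons collected per TDI stage (illustrative)
stages = 64                # number of TDI rows (integration stages)

# Shot-noise-limited signal: single exposure vs. 64 accumulated stages
single = rng.poisson(signal_per_stage, size=100_000)
tdi    = rng.poisson(signal_per_stage * stages, size=100_000)

print(single.mean() / single.std())   # SNR ~ 3.2
print(tdi.mean() / tdi.std())         # SNR ~ 25.3, i.e. ~sqrt(64) = 8x better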

1stVision is pleased to offer our customers a multifield linescan camera from Teledyne Dalsa, the HL-HF-16K13T: https://www.1stvision.com/cameras/models/Teledyne-DALSA/HL-HF-16K13T

Contact 1stVision for support and / or pricing.

Click to contact

Give us some brief idea of your application and we will contact you to discuss camera options.

1st Vision’s sales engineers have an average of 20 years experience to assist in your camera selection.  Representing the largest portfolio of industry leading brands in imaging components, we can help you design the optimal vision solution for your application.

Computar ViSWIR Visible + SWIR lenses

1stVision is pleased to make available two new lens series from Computar: the ViSWIR HYPER / APO Lens Series and the ViSWIR Lite Series. Traditionally, applications landed in either the visible or the SWIR range, so components tended to be optimized for one or the other. The new lens series are designed to perform well for both visible and SWIR, enabling cost-effective and performant imaging systems for a range of applications.

The ViSWIR Hyper / Multi-Spectral Lens Series was created for the latest Vis-SWIR imaging sensors, the IMX990/IMX991 SenSWIR, currently found in the new Allied Vision Goldeye G-130. The series was recognized as a Gold Honoree by Vision Systems Design in 2021:

With fully corrected focus shift in visible and SWIR range (400nm-1,700nm), spectral imaging is achievable with a single sensor camera by simply syncing the lighting. Per Sony, “the IMX990/IMX991 top indium-phosphorus (InP*2) layer inevitably absorbs some visible light, but applying Sony SWIR sensor technology makes this layer thinner, so that more light reaches the underlying InGaAs layer. The sensors have high quantum efficiency even in visible wavelengths. This enables broad imaging of wavelengths from 0.4 μm to 1.7 μm. A single camera equipped with the sensor can now cover both visible light and the SWIR spectrum, which previously required separate cameras. This results in lower system costs. Image processing is also less intensive, which accelerates inspection.”

With ViSWIR HYPER-APO, there is no need to adjust focus for different wavelengths, and high resolution is maintained from short to long working distances. The focus shift is reduced at any wavelength and any working distance, making the series ideal for multiple applications, including machine vision, UAV, and remote sensing.

Computar ViSWIR HYPER-APO lens series

Since diverse substances respond to differing wavelengths, one can use such characteristics as the basis for machine vision applications for materials identification, sorting, packing, quality control, etc. To understand the value of these lenses, see below for an example of conventional lenses that cannot retain focus across different wavelengths:

Conventional lenses only focus in specific wavelengths

Now see images across a wide range of wavelengths, taken with the award-winning Computar lens, that retain focus:

Diverse materials under diverse lighting – in focus at each wavelength.
The same lens may be used effectively in diverse applications.

Also new from Computar is the ViSWIR Lite series, providing:

  • High transmission across the visible-to-SWIR (400-1700 nm) range
  • Reasonable cost performance for narrow-band imaging
  • Compact design
Key features of the Computar ViSWIR Lite series

Computar ViSWIR Lite lens series

Which to select? APO or Lite series?

Contact 1stVision for support and / or pricing.

Contact us to talk to an expert! Give us some brief idea of your application and we will contact you to discuss.

1st Vision’s sales engineers have an average of 20 years experience to assist in your camera selection.  Representing the largest portfolio of industry leading brands in imaging components, we can help you design the optimal vision solution for your application.

Allied Vision G-130 TEC1 SWIR Camera

1stVision is pleased to announce that we can obtain Allied Vision’s new G-130 TEC1 SWIR camera for our customers. Utilizing Sony’s innovative IMX990 sensor, based on their SenSWIR technology, the camera is responsive in both the visible and the short-wave infrared ranges, spanning 400 – 1700 nm.

AVT G-130 TEC1 SWIR camera

While there are a number of cameras that cover short-wave infrared (SWIR) alone, from 900 – 1700 nm, this sensor’s responsivity down to 400 nm in the visible range opens up application possibilities not previously achievable with a single-sensor camera.

Besides the wide spectral range, the sensor uses small 5µm pixels, with high quantum efficiency, offering precise detection of details.

The Goldeye 130 with the IMX990 1.3 MP SXGA sensor can deliver 110 fps with a Camera Link interface, or 94 fps with a GigE Vision interface. The camera is fan-less, using thermoelectric sensor cooling (TEC1), yielding a robust and compact design.

Contact 1stVision for support and / or pricing.

Click to contact
Give us some brief idea of your application and we will contact you to discuss camera options.

1st Vision’s sales engineers have an average of 20 years experience to assist in your camera selection.  Representing the largest portfolio of industry leading brands in imaging components, we can help you design the optimal vision solution for your application.

Allied Vision Alvium with Sony Pregius Gen 4 Sensors

Allied Vision Alvium camera image

1st Vision is pleased to relay that Allied Vision has introduced new Alvium machine vision camera models featuring 4th-generation Sony IMX Pregius S global shutter sensors. The sensors feature an improved back-side-illuminated pixel architecture that can capture light more effectively. This leads to improved quantum efficiency (QE) compared to 2nd- and 3rd-generation IMX sensors. Because of the decreased pixel size of 2.74 µm, higher pixel densities and resolutions for the same optical format are possible.

Allied Vision Alvium

The IMX542 sensor in the 1800 U-1620 models has a 16:9 wide screen format similar to the IMX265 (2nd gen.). It is practically the same size but has almost twice the resolution. So, the FOV is nearly the same but at a much higher resolution. This sensor is especially suited for ITS applications.

The IMX540 sensor in the 1800 C/U-2460 models has an almost square format. Even though it is not much wider than the IMX304 (2nd gen.), it is considerably taller. It is a solid, lower-priced alternative to the OnSemi Python 25k sensor, which has a similar resolution and aspect ratio, but is much larger.

The IMX541 sensor in the 1800 U-2040 models has a square format which was only available in the larger IMX367, but is now available as a C-mount camera in a sugar cube housing. This makes it especially suited for microscopy applications.

A summary of the new Alvium USB3 cameras is as follows:

Camera      | Sensor      | Resolution | Format      | Frames/sec
1800 U-1620 | Sony IMX542 | 16.2 MP    | 5328 × 3040 | 22
1800 U-2040 | Sony IMX541 | 20.4 MP    | 4512 × 4512 | 17
1800 U-2460 | Sony IMX540 | 24.6 MP    | 5328 × 4608 | 14
New Alvium cameras with Sony 4th Gen Pregius sensors

Contact 1stVision for support and / or pricing.

Click to contact
Give us some brief idea of your application and we will contact you to discuss camera options.

1st Vision’s sales engineers have an average of 20 years experience to assist in your camera selection.  Representing the largest portfolio of industry leading brands in imaging components, we can help you design the optimal vision solution for your application.

Opto Imaging Modules Provide a Turn-Key Solution

Demanding imaging applications require particular combinations of image sensor, lens, and lighting in order to achieve an optimal image.  It can be challenging to choose the right components and configure them in a compact space. An attractive solution for many is to use “Imaging Modules” which contain an integrated camera sensor, lens and lighting in an “all in one” housing.  A range of imaging modules are available, each configured to optimize component alignment and operations. The end-user may quickly deploy a new system, benefiting from the standardized systems, economies of scale, and expertise of the module builder.

Simplified example of imaging module key components

Opto Imaging GmbH offers imaging modules based on their more than 40 years experience in imaging.  Early leaders in imaging software, they also led with products and systems for stereo microscopy imaging, fluorescence imaging, metrology, surface imaging, and bioimaging.  They now offer Opto Imaging Modules, a collection of “plug-n-play” imaging systems for rapid deployment in diverse situations.

Here are 5 key benefits derived from using Opto Imaging Modules:

  • One unit: Compact integrated sensor, lens, and lighting, optimally calibrated and tested
  • One wire: USB-C provides power, control signals, and image data on a single cable
  • Plug and play: Rapid turn-key deployment into your environment, with minimal configuration, and confidence in reliable results, thanks to pre-configuration by the manufacturer
  • Free viewer: https://www.opto.de/en/software/opto-viewer/
  • SDK included: Or use any standard SDK you may prefer
Five key benefits of Opto Imaging Modules

Application areas include but are not limited to:

Machine vision microscopy: Hardness testing, bond inspection, scratch analysis, automated measurements and documentation, metrology, and more.

Industry 4.0 production micro imaging: With a measurement resolution of 1.8 micrometers per pixel, it enables the analysis of the smallest details.

Surface inspection: For example, of highly-reflective metal surfaces: https://www.opto.de/media/solino-slider.gif

Macro imaging: Traditional machine vision of scenes or objects larger than 20mmx20mm. Options include megapixel sensors and/or telecentric optics.

Watch this short video, captured with an Opto Imaging module, showing blood cells in a biomedical application: https://www.youtube.com/watch?v=E4Uy00rzejI

Demonstrations are available: virtual demos are available by appointment, and demo loaners are available to try in your own environment.

Click to contact
Give us some brief idea of your application and we will contact you to discuss camera options.
Opto Imaging Modules offer varied sub-components pre-configured and calibrated to work together

1st Vision’s sales engineers have an average of 20 years experience to assist in your camera selection.  Representing the largest portfolio of industry leading brands in imaging components, we can help you design the optimal vision solution for your application.

Teledyne DALSA launches “Linea Lite” line scan cameras

Linea Lite and Linea sizes compared

The “Linea Lite” 2k and 4k line scan cameras provide industry-leading performance in a compact package. Built for a wide range of machine vision applications, the new Linea Lite cameras feature a 45% smaller footprint than the original Linea and are based on a brand-new, proprietary CMOS image sensor designed by Teledyne Imaging. This expands on the success of the Linea series of low-cost, high-value line scan cameras.

Designed to suit many applications, the Linea Lite offers customers a choice between high full-well mode and high-responsivity mode, via easy-to-configure gain settings.

Linea Lite (left) vs. original Linea (right – with lens) (Note: original Linea series also available)
Linea Lite 4k – Linea 4k

The cameras are available in 2k and 4k resolutions, in monochrome and bilinear color. Linea Lite has all the essential line scan features, including multiple regions of interest, programmable coefficient sets, precision time protocol (PTP), and TurboDrive™. With GigE interface and power over Ethernet (PoE), Linea Lite is an excellent fit for applications such as secondary battery inspection, optical sorting, printed materials inspection, packaging inspection, and many more.

Linea Lite Specifications

Download full specifications here.

Key Features:

  • 7 µm or 14 µm pixels
  • 2k and 4k resolutions
  • Configurable full well
  • Precision time protocol
  • Selectable 8- or 12-bit output

Contact us for a quote

1st Vision’s sales engineers have an average of 20 years experience to assist in your camera selection.  Representing the largest portfolio of industry leading brands in imaging components, we can help you design the optimal vision solution for your application.

Types of 3D imaging systems – and benefits of Time of Flight (ToF)

Time Of Flight Gets Precise: Whitepaper

2D imaging is long-proven for diverse applications from bar code reading to surface inspection, presence-absence detection, etc.  If you can solve your application goal in 2D, congratulations!

But some imaging applications are only well-solved in three dimensions.  Examples include robotic pick and place, palletization, drones, security applications, and patient monitoring, to name a few.

For such applications, one must select or construct a system that creates a 3D model of the object(s).  Time of Flight (ToF) cameras from LUCID Vision Labs are one way to achieve cost-effective 3D imaging for many situations.

ToF systems setup
ToF systems have a light source and a sensor.

ToF is not about objects flying around in space! It’s about using the time of flight of light to ascertain differences in object depth, based upon measurable variances between light projected onto an object and the light reflected back to a sensor from that object.  With sufficiently precise orientation to object features, a 3D “point cloud” of x,y,z coordinates can be generated – a digital representation of real-world objects.  The point cloud is the essential data set enabling automated image processing, decisions, and actions.
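
To make the point-cloud idea concrete, here is a minimal sketch (assuming a NumPy environment and hypothetical pinhole intrinsics fx, fy, cx, cy; not any particular camera SDK's API) that back-projects a depth map into x,y,z coordinates:

import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    # Back-project a depth map (meters) into x,y,z points via the pinhole model
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)    # H x W x 3 point cloud

# Example: a flat scene 2 m away, with made-up intrinsics
cloud = depth_to_point_cloud(np.full((480, 640), 2.0), fx=500, fy=500, cx=320, cy=240)
print(cloud.shape)    # (480, 640, 3)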

In this latest whitepaper we go into depth to learn:
1. Types of 3D imaging systems
2. Passive stereo systems
3. Structured light systems
4. Time of Flight systems
Whitepaper table of contents

Let’s briefly put ToF in context with other 3D imaging approaches:

Passive Stereo: Systems with two cameras a fixed distance apart can triangulate by matching features in both images and calculating the disparity from the midpoint.  Or a robot-mounted single camera can take multiple images, as long as positional accuracy is sufficient to calibrate effectively.
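
As a rough illustration of the triangulation arithmetic (a sketch, not any vendor's implementation): depth follows from focal length, baseline, and disparity as Z = f * B / d:

def depth_from_disparity(f_px, baseline_m, disparity_px):
    # Classic stereo triangulation: Z = f * B / d
    return f_px * baseline_m / disparity_px

# Example with illustrative numbers: 800 px focal length, 10 cm baseline, 16 px disparity
print(depth_from_disparity(800, 0.10, 16))   # 5.0 m

Note the sensitivity: as disparity shrinks for distant objects, small matching errors translate into large depth errors, which is one reason feature quality matters so much.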

Challenges limiting passive stereo approaches include:

Occlusion: when part of the object(s) cannot be seen by one of the cameras, features cannot be matched and depth cannot be calculated.

ToF diagram
Occlusion occurs when a part of an object cannot be imaged by one of the cameras.

Few/faint features: If an object has few identifiable features, no matching correspondence pairs may be generated, also limiting essential depth calculations.

Structured Light: A clever response to the few/faint features challenge can be to project structured light patterns onto the surface.  There are both active stereo systems and calibrated projector systems.

Active stereo systems are like two-camera passive stereo systems, enhanced by the (active) projection of optical patterns, such as laser speckles or grids, onto the otherwise feature-poor surfaces.

ToF diagram
Active stereo example using laser speckle pattern to create texture on object.

Calibrated projector systems use a single camera, together with calibrated projection patterns, to triangulate from the vertex at the projector lens.  A laser line scanner is an example of such a system.

Besides custom systems, there are also pre-calibrated structured light systems available, which can provide low cost, highly accurate solutions.

Time of Flight (ToF): While structured light systems can provide surface height resolutions better than 10 µm, they are limited to short working distances. ToF can be ideal for applications such as people monitoring, obstacle avoidance, and materials handling, operating at working distances of 0.5 m – 5 m and beyond, with depth resolution requirements of 1 – 5 mm.

ToF systems measure the time it takes for light emitted from the device to reflect off objects in the scene and return to the sensor for each point of the image.  Some ToF systems use pulse-modulation (Direct ToF).  Others use continuous wave (CW) modulation, exploiting phase shift between emitted and reflected light waves to calculate distance.
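
For the continuous-wave case, the measured phase shift maps to distance as d = c * phase / (4 * pi * f_mod); a minimal sketch with illustrative numbers:

import math

def cw_tof_distance(phase_rad, f_mod_hz, c=299_792_458.0):
    # The round trip covers phase/(2*pi) of one modulation period,
    # so the one-way distance is c * phase / (4 * pi * f_mod)
    return c * phase_rad / (4 * math.pi * f_mod_hz)

# Example: a 90-degree phase shift at 100 MHz modulation
print(cw_tof_distance(math.pi / 2, 100e6))   # ~0.375 m

The same relation implies an unambiguous range of c / (2 * f_mod) – 1.5 m at 100 MHz – which is why CW systems trade modulation frequency against working range.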

The new Helios ToF 3D camera from LUCID Vision Labs uses Sony Semiconductor’s DepthSense 3D technology. Download the whitepaper to learn of 4 key benefits of this camera and example applications, as well as its operating range and accuracy.

Download the Time of Flight whitepaper

Have questions? Tell us more about your application and our sales engineer will contact you.

1st Vision’s sales engineers have an average of 20 years experience to assist in your camera selection.  Representing the largest portfolio of industry leading brands in imaging components, we can help you design the optimal vision solution for your application.

Keys to Choosing the Best Image Sensor

Keys to Choosing the Best Image Sensor

Image sensors are the key component of any camera and vision system.  This blog summarizes the key concepts of a tech brief addressing concepts essential to sensor performance relative to imaging applications. For a comprehensive analysis of the parameters, you may read the full tech brief.

Download Tech Brief - Choosing the Best Image Sensor

While there are many aspects to consider, here we outline 6 key parameters:

  1. Physical parameters


    Resolution: The amount of information per frame (image) is the product of the horizontal pixel count (x) and the vertical pixel count (y).  While consumer cameras boast of resolution like car manufacturers tout horsepower, in machine vision one just needs enough resolution to solve the problem – but not more.  Too much resolution leads to more sensor than you need, more bandwidth than you need, and more cost than you need.  Takeaway: Match sensor resolution to optical resolution relative to the object(s) you must image.

    Aspect ratio: Whether 1:1, 3:2, or some other ratio, the optimal arrangement should correspond to the layout of your target’s field of view, so as not to buy more resolution than is needed for your application.



    Frame rate: If your target is moving quickly, you’ll need enough images per second to “freeze” the motion and to keep up with the physical space you are imaging.  But as with resolution, one needs just enough speed to solve the problem, and no more, or you would over-specify for a faster computer, cabling, etc. (see the data-rate sketch after this list).

    Optical format: One could write a thesis on this topic, but the key takeaway is to match the lens’ projection of focused light onto the sensor’s array of pixels, to cover the sensor (and make use of its resolution).  Sensor sizes and lens sizes often have legacy names left over from TV standards now decades old, so we’ll skip the details in this blog but invite the reader to read the linked tech brief or speak with a sales engineer, to ensure the best fit.

  2. Quantum Efficiency and Dynamic Range:


    Quantum Efficiency (QE): Sensors vary in their efficiency at converting photons to electrons, by sensor quality and at varying wavelengths of light, so some sensors are better for certain applications than others.

    Typical QE response curve

    Dynamic Range (DR): Factors such as Full Well Capacity and Read Noise determine DR, which is the ratio of maximum signal to the minimum.  The greater the DR, the better the sensor can capture the range of bright to dark gradations from the application scene.

  3. Optical parameters

    While some seemingly color-dependent applications can in fact be solved more easily and cost-effectively with monochrome, in either case each silicon-based pixel converts light (photons) into charge (electrons).  Each pixel well has a maximum volume of charge it can handle before saturating.  After each exposure, the degree of charge in a given pixel correlates to the amount of light that impinged on that pixel.

  4. Rolling vs. Global shutter

    Most current sensors support global shutter, where all pixel rows are exposed at once, eliminating motion-induced blur.  But the on-sensor electronics needed to achieve global shutter have certain associated costs, so for some applications it can still make sense to use rolling shutter sensors.

  5. Pixel Size

    Just as a wide-mouth bucket will catch more raindrops than a coffee cup, a larger physical pixel will admit more photons than a small one.  Generally speaking, large pixels are preferred.  But that requires the expense of more silicon to support the resolution for a desired x by y array.  Sensor manufacturers work to optimize this tradeoff with each new generation of sensors.

  6. Output modes

    While each sensor typically has a “standard” intended output at full resolution, many sensors offer additional switchable output modes like Region of Interest (ROI), binning, or decimation.  Such modes typically read out a defined subset of the pixels at a higher frame rate, which can allow the same sensor and camera to serve two or more purposes.  An example of binning would be a microscopy application in which a binned image at high speed is used to locate a target blob in a large field, before switching to full resolution for a high-quality detail image.
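
As flagged in the frame rate discussion above, resolution, frame rate, and bit depth jointly set the raw data rate your interface and computer must sustain; a minimal sketch of the arithmetic (the geometry and bit depth below are illustrative):

def data_rate_mb_per_s(width, height, fps, bits_per_pixel):
    # Raw sensor payload, before protocol overhead or any compression
    return width * height * fps * bits_per_pixel / 8 / 1e6

# Example: 2464 x 2064 pixels at 78 fps, 8-bit monochrome
print(data_rate_mb_per_s(2464, 2064, 78, 8))   # ~397 MB/s, near the practical limit of a USB3 interface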

For a more in-depth review of these concepts, including helpful images and diagrams, please download the tech brief.

Download tech brief - Choosing the Best Image Sensor

1st Vision’s sales engineers have an average of 20 years experience to assist in your camera selection.  Representing the largest portfolio of industry leading brands in imaging components, we can help you design the optimal vision solution for your application.

1stVision Announces New Logo, Refreshed Website, and Continued Investment in Customer Support

1stVision

We are excited to be shining the spotlight on ourselves today as we introduce for the first time our new logo and website user interface (UI) design. Our new logo signifies our continuous high-level commitment to all your machine vision needs and captures the new foundation laid by a capital investment from, and strategic partnership with, Next Imaging.

On February 7, 2020, we announced that 1st Vision had been acquired by Next Imaging but would continue doing business as 1st Vision, Inc. We are keeping our well-known identity and presence in the North American Market and looking to excel even further at becoming your 1st choice for all your imaging requirements.

Check out our new website!

1st Vision’s sales engineers have an average of 20 years experience to assist in your camera selection.  Representing the largest portfolio of industry leading brands in imaging components, we can help you design the optimal vision solution for your application.