Machine vision problems solved with SWIR lighting

Some problems are best solved outside the visible spectrum

Most of us think about vision with a human bias, since most of us are normally sighted with color stereo vision. We perceive distance, hue, shading, and intensity for materials that emit or reflect light at wavelengths of 380 – 750 nm. Many machine vision problems can likewise be solved using monochrome or color light and sensors in the visible spectrum.

Human visible light – marked VIS – is just a small portion of what sensors can detect – Courtesy Edmund Optics

Many applications, however, are best solved, or can only be solved, at wavelengths that we cannot see with our own eyes. There are sensors that respond to these other parts of the spectrum. Particularly interesting are short wave infrared (SWIR) and ultraviolet (UV). In this blog we focus on SWIR, with wavelengths in the range 0.9 – 1.7 µm.

Examples in SWIR space

The same apple with visible vs. SWIR lighting and sensors – Courtesy Effilux

Food processing and agricultural applications are well suited to SWIR. Consider the images above, where the visible image shows what appears to be a ripe apple in good condition. With SWIR imaging, a significant bruise is revealed, since SWIR detects higher densities of water, which render as black or dark grey. Supplier yields determine profits, losses, and reputations. Apple suppliers benefit from automated sorting of apples that will travel to grocery shelves vs. lightly bruised fruit that can be profitably juiced or sauced.

Even clear fluids in opaque bottles render dark in SWIR light –
Courtesy Effilux

Whether controlling the filling apparatus or quality controlling the nominally filled bottles, SWIR light and sensors can see through glass or opaque plastic bottles and render fluids dark while air renders white. The detection side of the application is solved!
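To make the idea concrete, here is a minimal Python sketch of how fill-level detection might work on a single pixel column from a SWIR image. The grey levels, threshold, and geometry are invented for illustration, not taken from any real system:

```python
import numpy as np

# Synthetic SWIR intensity column through a bottle: liquid renders dark, air bright.
HEIGHT = 100
column = np.full(HEIGHT, 220, dtype=np.uint8)   # top of bottle: air (bright)
column[35:] = 30                                 # liquid from row 35 down (dark)

FILL_THRESHOLD = 128                             # grey level separating air from liquid
surface_row = int(np.argmax(column < FILL_THRESHOLD))  # first dark row from the top
fill_fraction = 1.0 - surface_row / HEIGHT       # 0.65: the bottle is 65% full
```

In a real system the threshold would be calibrated against known-good bottles, and the measurement averaged over many columns.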

Hyperspectral imaging

Yet another SWIR application is hyperspectral imaging. By identifying the spectral signature of every pixel in a scene, we can use light to discern the unique profile of substances. This in turn can identify the substance and permit object identification or process detection. Consider also multi-spectral imaging, an efficient sub-mode of hyperspectral imaging that only looks for certain bands sufficient to discern “all that’s needed”.

Multispectral and hyperspectral imaging – Courtesy Allied Vision Technologies
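To illustrate what "the spectral signature of every pixel" means in practice, here is a hedged sketch of spectral angle matching, a common classification technique in multi- and hyperspectral work. The band count, reflectance values, and signatures below are all invented for illustration:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel's spectrum and a reference signature.
    Smaller angle means a closer spectral match, independent of brightness."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical 4-band reflectances (say, two visible bands plus two SWIR bands)
signatures = {
    "water":   np.array([0.10, 0.08, 0.02, 0.01]),  # water absorbs strongly in SWIR
    "plastic": np.array([0.40, 0.45, 0.50, 0.48]),
}

pixel = np.array([0.20, 0.16, 0.04, 0.02])  # a brightly lit water pixel

# Classify by the closest spectral signature
label = min(signatures, key=lambda k: spectral_angle(pixel, signatures[k]))
```

Because the angle ignores overall brightness, the twice-as-bright water pixel still matches the water signature.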

How to do SWIR imaging

The SWIR images shown above are pseudo-images, where pixel values sensed in the SWIR spectrum have been re-mapped to grey levels in the visible spectrum. But that’s just to aid our understanding, as an automated machine vision application doesn’t need to show an image to a human operator.

In machine vision, an algorithm on the host PC interprets the pixel values to identify features and make actionable determinations, such as “move apple to juicer” or “continue filling bottle”.
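A minimal sketch of such a determination, using a synthetic SWIR image in which a water-dense bruise renders dark. The threshold and tolerance values are assumptions for illustration, not production numbers:

```python
import numpy as np

# Synthetic 8-bit SWIR image: a bright apple (grey 200) with a dark bruise patch.
img = np.full((100, 100), 200, dtype=np.uint8)
img[40:55, 40:55] = 40                       # water-dense bruise renders dark in SWIR

BRUISE_THRESHOLD = 80                        # grey level below which a pixel counts as bruised
MAX_BRUISE_FRACTION = 0.01                   # tolerate up to 1% bruised pixels

bruise_fraction = float(np.mean(img < BRUISE_THRESHOLD))
decision = ("move apple to juicer" if bruise_fraction > MAX_BRUISE_FRACTION
            else "send to grocery line")
```

Here the bruise covers 2.25% of the image, so the apple is routed to the juicer.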

Components for SWIR imaging

SWIR imaging calls for three component types: SWIR sensors and cameras, SWIR lighting, and SWIR lenses. For cameras and sensors, consider Allied Vision’s Goldeye series:

Goldeye SWIR cameras – Courtesy Allied Vision

Goldeye SWIR cameras are available as compact, rugged industrial models or as advanced scientific versions. The former offer optional thermoelectric cooling (TEC), while the latter are only available in cooled versions.

SWIR lighting

For SWIR lighting, consider Effilux bar and ring lights. Effilux lights come in various wavelengths for both visible and SWIR applications. Contact us to discuss SWIR lighting options.

EFFI-FLEX bar light and EFFI-RING ring light – Courtesy Effilux

By emitting light in the SWIR range and directing it to reflect off targets known to reveal features in the SWIR spectrum, one assembles the components necessary for a successful application.

Hyperspectral bar lights – Courtesy Effilux

And don’t forget the lens. You may need a SWIR-specific lens, or a hybrid machine vision lens that passes both visible and SWIR wavelengths. Consider Computar VISWIR Lite Series Lenses or their VISWIR Hyper-APO Series Lenses. It’s beyond the scope of this short blog to go into SWIR lensing. Read our recent blog on Wide Band SWIR Lensing and Applications or speak with your lensing professional to be sure you get the right lens.

Takeaway

Whether SWIR or UV (more on that another time), the key point is that some machine vision problems are best solved outside the human-visible portions of the spectrum. While innovative users and manufacturers continue to push the boundaries, these areas are sufficiently mature that solutions can be engineered predictably. Think beyond the visible constraints!

Call us at 978-474-0044. Or follow the contact us link below to provide your information, and we’ll call you.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Helios2 Ray Outdoor Time of Flight camera by Lucid Vision Labs

Helios2 Outdoor ToF camera – Courtesy Lucid Vision Labs

Time of Flight

The Time of Flight (ToF) method for 3D imaging isn’t new. Lucid Vision Labs is a longstanding leader in 3D ToF imaging. To brush up on ToF vs. other 3D methods, see a prior blog on Types of 3D imaging: Passive Stereo, Structured Light, and Time of Flight (ToF).

Helios2 Ray 3D camera

What is new are the Helios2 Ray 3D ToF outdoor* camera models. With working distances (WD) from 0.3 meters up to 8.3 meters, exterior applications like infrastructure inspection, environmental monitoring, and agriculture may be enabled – or enhanced – with these cameras. That WD in imperial units is from 1 foot up to 27 feet, providing tremendous flexibility to cover many applications.

(*) While rated for outdoor use, the Helios2 3D camera may also be used indoors, of course.

The camera uses a Sony DepthSense IMX556 back-illuminated CMOS ToF image sensor. It provides its own laser lighting via 940 nm VCSEL laser diodes, which operate in the infrared (IR), beyond the visible spectrum. So it is independent of ambient lighting conditions, and self-contained with no need for supplemental lighting.

Operating up to 30 fps, the camera and computer host build 3D point clouds your application can act upon. Dust and moisture protection to the IP67 standard is assured, with robust shock, vibration, and temperature performance as well. See specifications for details.

Example – Agriculture

Outdoor plants imaged in visible spectrum with conventional camera – Courtesy Lucid Vision Labs
Colorized pseudo-image from 3D point cloud – Courtesy Lucid Vision Labs

Example – Industrial

Visible spectrum image with sunlight and shadows – Courtesy Lucid Vision Labs
Pseudo-image from point cloud via Helios2 Ray – Courtesy Lucid Vision Labs

Arena SDK

The Arena SDK makes it easy to configure and control the camera and its images. It provides 2D and 3D views. In the 2D view one can see the intensity and depth of the scene. The 3D view shows the point cloud, which the user can rotate in real time. Of course the point cloud data may also be processed algorithmically, to record quality measurements, guide a robot arm or vehicle, etc.
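As a hedged sketch of processing a point cloud algorithmically, the following estimates the height of an object above the ground plane from a synthetic cloud. The array layout and all the numbers are assumptions for illustration, not the Arena SDK’s actual data format:

```python
import numpy as np

# Hypothetical point cloud: an (N, 3) array of x, y, z in millimetres, with z the
# distance from a downward-looking camera (so closer objects have smaller z).
rng = np.random.default_rng(0)
ground = rng.uniform([-500, -500, 1990], [500, 500, 2010], size=(900, 3))  # floor ~2 m away
crate  = rng.uniform([-100, -100, 1690], [100, 100, 1710], size=(100, 3))  # box top ~1.7 m away
cloud = np.vstack([ground, crate])

# Robust ground estimate (most points are floor), then height of what sits on it
ground_z = float(np.median(cloud[:, 2]))
object_pts = cloud[cloud[:, 2] < ground_z - 100]   # points well closer than the floor
object_height_mm = ground_z - float(np.median(object_pts[:, 2]))   # about 300 mm
```

Real pipelines would fit a plane rather than take a median, but the principle – segment by depth, then measure – is the same.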


Teledyne Dalsa Linea2 4k 5GigE camera

The new Linea2 4k color camera with a 5GigE interface delivers RGB images at a maximum line rate of 42 kHz × 3. That’s 5x the bandwidth of the popular 1 GigE Linea cameras.

Linea2 4k color cameras with 5GigE – courtesy Teledyne Dalsa

Perhaps you already use the Linea GigE cameras, at 1 GigE, and seek an upgrade path to higher performance in an existing application. Or you may have a new application for which Linea2 performance is the right fit. Either way, Linea2 builds on the foundation of Teledyne DALSA’s Linea family.

Why line scan?

While area scan is the right fit for certain applications, compare area scan to line scan for the hypothetical application illustrated below:

Area scan vs. Line scan – courtesy Teledyne DALSA

To implement an area scan solution, you’d need multiple cameras to cover the field of view (FOV). Plus you’d have to manage lighting and frame rate to avoid smear and frame overlaps. With line scan, you get high resolution without smear from a single camera – ideal for inspecting a moving surface.
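A quick worked example of matching line rate to surface speed. All numbers (FOV, belt speed) are invented purely for illustration:

```python
# How fast must a line scan camera scan to avoid gaps or overlap on a moving surface?
fov_width_mm = 400.0        # field of view across the moving web
pixels_per_line = 4096      # 4k sensor
belt_speed_mm_s = 2000.0    # surface moving at 2 m/s

pixel_size_mm = fov_width_mm / pixels_per_line            # object-space pixel size
required_line_rate_hz = belt_speed_mm_s / pixel_size_mm   # one line per pixel of travel
# 20480 Hz here: comfortably within Linea2's 42 kHz maximum
```

Running the camera at exactly this rate gives square pixels in object space; faster or slower stretches or compresses the image along the direction of travel.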

Call us at 978-474-0044 to tell us about your application, and we can guide you to a suitable line scan or area scan camera for your solution. Of course we also have the right lenses, lighting, and other components.

Sensor

The Trilinear CMOS line scan sensor is Teledyne’s own 4k color design, with outstanding spectral responsivity as shown below:

Linea2 Color responsivity – courtesy Teledyne DALSA

The integrated IR-cut filters ensure a true-color response on the native RGB data outputs.

Interface

With a 5GigE Vision interface, the Linea2 provides 5x the bandwidth of the conventional GigE interface, but can use the same Cat5e or Cat6 network cables – and does not require a frame grabber.
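A back-of-envelope check that the full line rate fits in the 5GigE link, assuming 8-bit output per color channel:

```python
# Does the maximum line rate fit in a 5GigE link? Assumes 8-bit R, G, B output.
pixels_per_line = 4096
bytes_per_pixel = 3              # 8-bit R, G, B
line_rate_hz = 42_000            # Linea2 maximum line rate

payload_bytes_per_s = pixels_per_line * bytes_per_pixel * line_rate_hz  # 516,096,000 B/s
link_bytes_per_s = 5e9 / 8       # 5 Gbit/s raw capacity: 625,000,000 B/s
# The full-rate payload fits, with roughly 100 MB/s left for protocol overhead
```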

Software

The Sapera LT software development kit is recommended, featuring:

  • Intuitive CamExpert graphical user interface for configuration and setup
  • Trigger-To-Image Reliability tool (T2IR) for system monitoring

Sapera LT has over 500,000 installations worldwide. Thanks to the 5GigE Vision interface, popular third party software is of course also compatible.

Applications

Application examples – courtesy Teledyne DALSA

While not limited to those listed below, known and suggested uses include:

  • Printing inspection
  • Web inspection
  • Food, recycling, and material sorting
  • Printed circuit board inspection
  • etc.


Webcam vs. machine vision camera

Webcams aren’t (yet) found in Cracker Jack boxes, but they are very inexpensive. And they seem to perform ok for Zoom meetings or rendering a decent image of an office interior. So why not just use a webcam as the front end for a machine vision application?

Before we dig in to analysis and rationale, let’s motivate with the following side-by-side images of the same printed circuit board (PCB):

Machine vision camera and lens vs. webcam – Courtesy 1stVision

Side-by-side images

In the image pair above, the left image was generated with a 20MP machine vision camera and a high resolution lens. The right image used a webcam with a consumer sensor and optics.

Both were used under identical lighting, and optimally positioned within their specified operating conditions, etc. In other words we tried to give the webcam a fair chance.

Even in the above image, the left image looks crisp with good contrast, while the right image has poor contrast – that’s clear even at a wide field of view (FOV). But let’s zoom in:

Clearly readable labeling and contact points (left) vs. poor contrast and fuzzy edges (right)

Which image would you prefer to pass to your machine vision software for processing? Exactly.

Machine vision cameras with lens mounts that accept lenses for different applications

Why is there such a big difference in performance?

We’re all so used to smartphones that take (seemingly) good images, and webcams that support our Zoom and Teams meetings, that we may have developed a bias towards thinking cameras have become both inexpensive and really good. It’s true that all cameras continue to trend less expensive over time, per megapixel delivered – just as with Moore’s law in computing power.

As for the seemingly-good perception, if the images above haven’t convinced you, it’s important to note that:

  1. Most webcam and smartphone images are wide-angle, with a large field of view (FOV)
  2. Firmware algorithms may smooth values among adjacent pixels to render “pleasing” images or speed up performance

Most machine vision applications, on the other hand, demand precise details – so firmware-smoothed regions may look nice on a Zoom call but could entirely miss the defect discovery that is the goal of your application!
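A tiny numpy illustration of that point. A 3-tap box filter stands in for whatever smoothing consumer firmware actually applies (an assumption here), and it cuts a one-pixel defect’s contrast from 60 grey levels to 20:

```python
import numpy as np

# A 1-D slice through a surface: uniform grey 100 with a one-pixel defect at 160.
row = np.full(21, 100.0)
row[10] = 160.0

# A simple 3-tap box filter, standing in for firmware smoothing
smoothed = np.convolve(row, np.ones(3) / 3, mode="same")

raw_contrast = row.max() - row.min()                 # 60 grey levels
interior = smoothed[1:-1]                            # exclude filter edge effects
smoothed_contrast = interior.max() - interior.min()  # only ~20 grey levels remain
```

A defect that was trivially thresholdable in the raw data may now sit within the noise floor.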

Software

Finally, software (or the lack thereof) is at least as important as image quality due to lens and sensor considerations. With a webcam, one just gets an image burped out, but nothing more.

Conversely, with a machine vision camera, not only is the camera image better, but one gets a software development kit (SDK). With the SDK, one can:

  • Configure the camera’s parameters relative to bandwidth and choice of image format, to manage performance requirements
  • Choose between streaming vs. triggering exposures (via hardware or software trigger) – trigger allows synchronizing to real world events or mechanisms such as conveyor belt movement, for example
  • Access machine vision library functions such as edge detection, blob analysis, occlusion detection, and other sophisticated image analysis software
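As a flavor of what such library functions do, here is a toy edge detector in Python. It is a sketch of the technique, not any SDK’s actual API:

```python
import numpy as np

def edge_magnitude(img):
    """Gradient magnitude via forward differences -- a toy stand-in for
    library edge detectors such as Sobel or Canny."""
    img = img.astype(float)
    gx = np.diff(img, axis=1, prepend=img[:, :1])   # horizontal gradient
    gy = np.diff(img, axis=0, prepend=img[:1, :])   # vertical gradient
    return np.hypot(gx, gy)

# Dark square on a bright background: edges appear only at the square's border
img = np.full((20, 20), 200, dtype=np.uint8)
img[5:15, 5:15] = 50
edges = edge_magnitude(img) > 100      # boolean edge map
```

Production libraries add smoothing, hysteresis, and sub-pixel refinement, but the core idea – flag large local intensity gradients – is the same.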

Proprietary SDKs vs. 3rd party SDKs

Speaking of SDKs, the camera manufacturers’ own offerings are often very powerful and user friendly. To name a few, Teledyne DALSA offers Sapera, Allied Vision provides Vimba, and IDS Imaging supports both IDS Lighthouse and IDS Peak.

Compare to Apple or Microsoft in the computing sector – they provide bundled software like Safari and Edge, respectively. They work hard on interoperability of their laptops, tablets, and smartphones, to make it attractive for users to see benefits from staying within a specific manufacturer’s product families. Machine vision camera companies do the same thing – and many users like those benefits.

Vision standards – Courtesy Association for Advancing Automation

Some users prefer 3rd party SDKs that help maintain independence to choose the camera best-suited to a given task. Thanks to machine vision industry standards like GigE Vision, USB3 Vision, Camera Link, and GenICam, 3rd party tools like MATLAB, OpenCV, Halcon, LabVIEW, and CVB provide powerful functionality that is vendor-neutral relative to the camera manufacturer.


For a deeper dive into machine vision cameras vs. webcams, including the benefits of lens selection, exposure controls, and design-in availability over time, see our article: “Why shouldn’t I buy a $69 webcam for my machine vision application?” Or just call us at 978-474-0044.

In summary, yes, a webcam is a camera. For a sufficiently “coarse” area scan application, such as low-resolution presence/absence detection, a webcam might be good enough. Otherwise, note that machine vision cameras – like most electronics – are declining in price over time for a given resolution, and the performance benefits, including software controls, are very compelling.
