Kowa FC24M C-mount lens series

The Kowa FC24M lens series has nine members, with focal lengths ranging from 6.5mm through 100mm. Ideal for 1.1″ sensors like the Sony IMX183, IMX530/540, IMX253, and IMX304, these C-mount lenses cover any sensor up to 14.1mm x 10.6mm with no vignetting. Their design is optimized for sensors with pixel sizes as small as 2.5µm – though of course they also work well on larger pixels.

Kowa FC24M C-mount lenses – Courtesy Kowa
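As a quick sanity check, the coverage figure above can be applied programmatically. A minimal sketch in Python, assuming the stated 14.1mm x 10.6mm maximum sensor size; the example dimensions are illustrative, not taken from any particular datasheet:

```python
def sensor_fits(width_mm: float, height_mm: float,
                max_width: float = 14.1, max_height: float = 10.6) -> bool:
    """Check whether a sensor's active area fits inside the lens's
    stated coverage of 14.1 mm x 10.6 mm."""
    return width_mm <= max_width and height_mm <= max_height

# A sensor of roughly 13.1 mm x 8.8 mm fits comfortably:
print(sensor_fits(13.1, 8.8))   # True
# A larger-format sensor would vignette:
print(sensor_fits(17.6, 13.3))  # False
```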

Lens selection

Machine vision veterans know that lens selection ranks right up there with camera/sensor choice and lighting as a determinant of application success. For an introduction or refresher, see our knowledge base Guide to Key Considerations in Machine Vision Lens Selection.

Click to contact
Give us a brief idea of your application and we will contact you with options.

Noteworthy features

Particularly compelling across the Kowa FC24M lens series is the floating mechanism system. Kowa’s longer name for this is the “close distance aberration compensation mechanism.” It delivers stable optical performance at various working distances: internal lens groups move independently of each other, optimizing alignment compared to traditional lens designs.

Kowa FC24M lenses render sharp images with minimal distortion – Courtesy Kowa

Listing all the key features together:

  • Floating mechanism system (described above)
  • Wide working range – focusing as close as 15 cm MOD
  • Durable construction – ideal for industrial applications
  • Wide-band multi-coating – minimizes flare and ghosting from VIS through NIR

High resolution down to pixels as small as 2.5µm – Courtesy Kowa

Video overview shows applications

Applications include manufacturing, medical, food processing, and more. View this short one-minute video:

Kowa FC24M key features and example applications – Courtesy Kowa

What’s in a family name?

Let’s unpack the Kowa FC24M lens series name:

F is for fixed. With nine focal lengths from 6.5mm to 100mm, the lens design is kept simple and pricing is correspondingly competitive.

C is for C-mount. It’s one of the most popular camera/lens mounts in machine vision, with many camera manufacturers offering diverse sensors designed into C-mount housings.

24M is for 24 Megapixels. Not so long ago it was cost-prohibitive to consider sensors larger than 20MP. But as with most things in electronics, the price:performance ratio keeps moving in the user’s favor, and many applications benefit from sensors of this resolution.

And the model names?

Model names include LM6FC24M, LM8FC24M, …, LM100FC24M. The focal length is specified by the digit(s) just before the family name: the LM8FC24M, for example, has a focal length of 8mm. In fact, that particular model is technically 8.5mm, but per industry convention one rounds or truncates to common de facto sizes.

LM8FC24M 8.5mm focal length – Courtesy Kowa
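Since the focal length is encoded in the model name, it can be extracted mechanically. A minimal Python sketch – note that it returns the nominal (rounded) focal length from the name, not the true optical value such as the LM8FC24M’s 8.5mm:

```python
import re

def focal_length_from_model(model: str) -> int:
    """Extract the nominal focal length (mm) from a Kowa FC24M model
    name, e.g. 'LM8FC24M' -> 8, 'LM100FC24M' -> 100."""
    match = re.fullmatch(r"LM(\d+)FC24M", model)
    if not match:
        raise ValueError(f"Unrecognized model name: {model}")
    return int(match.group(1))

print(focal_length_from_model("LM8FC24M"))    # 8
print(focal_length_from_model("LM100FC24M"))  # 100
```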

See the full brochure for the Kowa FC24M lens series, or call us at 978-474-0044.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution! We’re big enough to carry the best cameras, and small enough to care about every image.

About you: We want to hear from you! We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics. What would you like to hear about? Drop a line to info@1stvision.com with the topics you’d like to know more about.

Machine vision problems solved with SWIR lighting

Some problems best solved outside the visible spectrum

Most of us think about vision with a human bias, since most of us are normally sighted with color stereo vision. We perceive distance, hues, shading, and intensity for materials that emit or reflect light at wavelengths of 380 – 750 nm. Many machine vision problems can also be solved using monochrome or color light and sensors in the visible spectrum.

Human visible light – marked VIS – is just a small portion of what sensors can detect – Courtesy Edmund Optics

Many applications are best solved – or can only be solved – in wavelengths that we cannot see with our own eyes, and there are sensors that react to wavelengths in these other parts of the spectrum. Particularly interesting are short wave infrared (SWIR) and ultraviolet (UV). In this blog we focus on SWIR, with wavelengths in the range 0.9 – 1.7µm.

Examples in SWIR space

The same apple with visible vs. SWIR lighting and sensors – Courtesy Effilux

Food processing and agricultural applications are possible with SWIR. Consider the above images, where the visible image shows what appears to be a ripe apple in good condition. With SWIR imaging, a significant bruise is visible – SWIR detects higher densities of water, which render as black or dark grey. Supplier yields determine profits, losses, and reputations. Apple suppliers benefit from automated sorting of apples that will travel to grocery shelves vs. lightly bruised fruit that can be profitably juiced or sauced.

Even clear fluids in opaque bottles render dark in SWIR light – Courtesy Effilux

Whether controlling the filling apparatus or quality controlling the nominally filled bottles, SWIR light and sensors can see through glass or opaque plastic bottles and render fluids dark while air renders white. The detection side of the application is solved!

Hyperspectral imaging

Yet another SWIR application is hyperspectral imaging. By identifying the spectral signature of every pixel in a scene, we can use light to discern the unique profile of substances. This in turn can identify the substance and permit object identification or process detection. Consider also multi-spectral imaging, an efficient sub-mode of hyperspectral imaging that only looks for certain bands sufficient to discern “all that’s needed”.

Multispectral and hyperspectral imaging – Courtesy Allied Vision Technologies
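The idea of matching each pixel’s spectral signature against known substance profiles can be sketched as follows. The reference spectra, band count, and values here are entirely hypothetical, for illustration only – real systems use calibrated reflectance libraries:

```python
import math

# Hypothetical reference signatures: reflectance sampled at four
# SWIR bands for each substance of interest.
REFERENCE_SPECTRA = {
    "water":   [0.05, 0.03, 0.02, 0.01],
    "plastic": [0.60, 0.55, 0.50, 0.45],
    "sugar":   [0.80, 0.75, 0.40, 0.35],
}

def classify_pixel(spectrum):
    """Return the substance whose reference signature is nearest
    (Euclidean distance) to the measured pixel spectrum."""
    return min(REFERENCE_SPECTRA,
               key=lambda name: math.dist(spectrum, REFERENCE_SPECTRA[name]))

# A dark, water-like spectrum classifies as water:
print(classify_pixel([0.06, 0.04, 0.02, 0.02]))  # water
```

Multi-spectral imaging works the same way, simply with fewer, carefully chosen bands.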

How to do SWIR imaging

The SWIR images shown above are pseudo-images, where pixel values in the SWIR spectrum have been re-mapped into the visible spectrum along grey levels. But that’s just to help our understanding, as an automated machine vision application doesn’t need to show an image to a human operator.

In machine vision, an algorithm on the host PC interprets the pixel values to identify features and make actionable determinations, such as “move apple to juicer” or “continue filling bottle”.
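Such a determination can be as simple as thresholding the SWIR image. A sketch of the apple-sorting decision, with illustrative pixel values and thresholds (any real deployment would tune these empirically):

```python
def route_apple(swir_pixels, dark_threshold=40, max_dark_fraction=0.02):
    """Decide where an apple goes based on its SWIR image: bruised,
    water-dense regions render dark, so too high a fraction of dark
    pixels sends the fruit to the juicer."""
    dark = sum(1 for p in swir_pixels if p < dark_threshold)
    fraction = dark / len(swir_pixels)
    return "juicer" if fraction > max_dark_fraction else "grocery shelf"

# Mostly bright pixels -> undamaged fruit:
print(route_apple([200, 190, 210, 180, 195]))  # grocery shelf
# Many dark (water-dense) pixels -> bruised:
print(route_apple([30, 20, 200, 10, 15]))      # juicer
```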

Components for SWIR imaging

Three component types are needed: SWIR cameras and sensors, SWIR lighting, and SWIR lenses. For cameras and sensors, consider Allied Vision’s Goldeye series:

Goldeye SWIR cameras – Courtesy Allied Vision

Goldeye SWIR cameras are available as compact, rugged industrial models or as advanced scientific versions. The former offer optional thermoelectric cooling (TEC), while the latter are only available in cooled versions.

Contact us

For SWIR lighting, consider Effilux bar and ring lights. Effilux lights come in various wavelengths for both the visible and SWIR applications. Contact us to discuss SWIR lighting options.

EFFI-FLEX bar light and EFFI-RING ring light – Courtesy Effilux

By emitting light in the SWIR range, directed to reflect off targets known to reveal features in the SWIR spectrum, one builds the components necessary for a successful application.

Hyperspectral bar lights – Courtesy Effilux

And don’t forget the lens. One may also need a SWIR-specific lens, or a hybrid machine vision lens that passes both visible and SWIR wavelengths. Consider Computar VISWIR Lite Series Lenses or their VISWIR Hyper-APO Series Lenses. It’s beyond the scope of this short blog to go into SWIR lensing. Read our recent blog on Wide Band SWIR Lensing and Applications or speak with your lensing professional to be sure you get the right lens.

Takeaway

Whether SWIR or UV (more on that another time), the key point is that some machine vision problems are best solved outside the human-visible portions of the spectrum. While innovative users and manufacturers continue to push the boundaries, these areas are sufficiently mature that solutions can be engineered predictably. Think beyond the visible constraints!

Call us at 978-474-0044. Or follow the contact us link below to provide your information, and we’ll call you.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Depth of Field – a balancing act

Most who are involved with imaging have at least some understanding of depth of field (DoF). DoF is the distance between the nearest and furthest points that are acceptably in focus. In portrait photography, one sometimes seeks a narrow depth of field to draw attention to the subject, while intentionally blurring the background to a “soft focus”. But in machine vision, it’s often preferred to maximize depth of field – that way if successive targets vary in their Z dimension – or if the camera is on a moving vehicle – the imaging system can keep processing without errors or waste.
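For readers who want a feel for the numbers: one common thin-lens approximation gives total DoF ≈ 2·N·c·s²/f², valid when the subject distance is well below the hyperfocal distance, where N is the f-number, c the circle of confusion, s the subject distance, and f the focal length. A sketch with illustrative values:

```python
def depth_of_field_mm(f_number: float, coc_mm: float,
                      focal_length_mm: float, subject_distance_mm: float) -> float:
    """Approximate total depth of field: DoF ~ 2 * N * c * s^2 / f^2.
    Only valid well below the hyperfocal distance."""
    return (2 * f_number * coc_mm * subject_distance_mm ** 2
            / focal_length_mm ** 2)

# 25 mm lens at f/4, 0.01 mm circle of confusion, subject at 500 mm:
print(round(depth_of_field_mm(4, 0.01, 25, 500), 1))  # 32.0 mm

# Stopping down to f/8 doubles the DoF:
print(round(depth_of_field_mm(8, 0.01, 25, 500), 1))  # 64.0 mm
```

Note how DoF scales linearly with f-number – the crux of the balancing act this blog describes.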

Making it real

Suppose you need to see small features on an item that has various heights (Z dimension). You may estimate you need a 1″ depth of field. You know you’ve got plenty of light. So you set the lens to f/11 because the datasheet shows you’ll reach the desired depth of field. But you can’t resolve the details! What’s up?

So I should maximize DoF, right?

Generally speaking, yes – up to the point where diffraction begins to degrade resolution. Read on for a practical overview of some important concepts, and a rule of thumb to guide you through this complex topic without much math.

Aperture, F/#, and Depth of Field

Aperture size and F/# are inversely correlated: a low f/# corresponds to a large aperture, and a high f/# signifies a small aperture. See our blog on F-Numbers aka F-Stops for how F-numbers are calculated, and some practical guidance.
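The relationship is simply f-number = focal length / aperture diameter, so the aperture diameter follows directly:

```python
def aperture_diameter(focal_length_mm: float, f_number: float) -> float:
    """F-number = focal length / aperture diameter,
    so diameter = focal length / f-number."""
    return focal_length_mm / f_number

# A 50 mm lens at f/2 has a 25 mm aperture; at f/8 only 6.25 mm:
print(aperture_diameter(50, 2))  # 25.0
print(aperture_diameter(50, 8))  # 6.25
```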

Per the illustration below, a large aperture restricts DoF, while a small aperture maximizes the DoF. Please take a moment to compare the upper and lower variations in this diagram:

Correlation between aperture and Depth of Field – Courtesy Edmund Optics

If we maximize depth of field…

So let’s pursue maximizing depth of field for a moment. Narrow the aperture to the smallest setting (the largest F-number), and presto you’ve got maximal DoF! Done! Hmm, not so fast.

First challenge – do you have enough light?

Narrowing the aperture sounds great in theory, but for each stop one narrows the aperture, the amount of light is halved. The camera sensor needs to receive sufficient photons in the pixel wells, according to the sensor’s quantum efficiency, to create an overall image with contrast necessary to process the image. If there is no motion in your application, perhaps you can just take a longer exposure. Or add supplemental lighting. But if you do have motion or can’t add more light, you may not be able to narrow the aperture as far as you hoped.
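The arithmetic of stops is simple enough to sketch: each stop halves the light, so exposure time must double per stop to compensate. Illustrative values:

```python
def relative_light(stops_closed: float) -> float:
    """Each full stop the aperture is narrowed halves the light
    reaching the sensor."""
    return 0.5 ** stops_closed

def exposure_compensation(stops_closed: float, base_exposure_ms: float) -> float:
    """Exposure time needed to gather the same light after stopping down."""
    return base_exposure_ms * 2 ** stops_closed

# Stopping down 3 stops (e.g. f/4 -> f/11) passes 1/8 of the light,
# so a 2 ms exposure must grow to 16 ms to compensate:
print(relative_light(3))              # 0.125
print(exposure_compensation(3, 2.0))  # 16.0
```

In a motion application, that longer exposure may introduce blur – which is exactly the trade-off described above.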

Second challenge – the Airy disk and diffraction

When light passes through an aperture, diffraction occurs – the bending of waves around the edge of the aperture. The pattern from a ray of light that falls upon the sensor takes the form of a bright circular area surrounded by a series of weakening concentric rings. This is called the Airy disk. Without going into the math, the Airy disk is the smallest point to which a beam of light can be focused.

And while stopping down the aperture increases the DoF, our stated goal, it has the negative impact of increasing diffraction.

Diffraction increases as the aperture becomes smaller – Courtesy Edmund Optics
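The first-minimum diameter of the Airy disk is approximately 2.44·λ·(f/#). A quick sketch shows why the f/11 scenario earlier in this blog ran into trouble – at f/11 with green light the Airy disk already spans roughly 15µm, larger than typical machine vision pixels:

```python
def airy_disk_diameter_um(wavelength_um: float, f_number: float) -> float:
    """First-minimum diameter of the Airy disk: d ~ 2.44 * lambda * f/#."""
    return 2.44 * wavelength_um * f_number

# Green light (0.55 um) at f/11:
print(round(airy_disk_diameter_um(0.55, 11), 2))  # 14.76

# Opening up to f/4 shrinks the disk considerably:
print(round(airy_disk_diameter_um(0.55, 4), 2))   # 5.37
```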

Diffraction limits

As the focused patterns from adjacent details you want to discern draw near each other, they start to overlap. This creates interference, which in turn reduces contrast.

Every lens, no matter how well designed and manufactured, has a diffraction limit – the maximum resolving power of the lens, expressed in line pairs per millimeter. And if the Airy disk patterns produced by adjacent real-world features are larger than the sensor’s pixels, the all-important contrast will not be achieved.

Contact us

High magnification example

Suppose you have a candidate camera with 3.45µm pixels, and you want to pair it with a machine vision lens capable of 2x, 3x, or 4x magnification. You’ll find the Airy disk is 9µm across! Something must change – a sensor with larger pixels, or a different lens.
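The effect can be reproduced approximately: at magnification m, the working f-number grows to roughly N·(1+m), which inflates the Airy disk. The f/2.8 setting and 0.55µm wavelength below are illustrative assumptions, not values from any specific lens:

```python
def effective_f_number(set_f_number: float, magnification: float) -> float:
    """At magnification m the working (effective) f-number grows:
    N_eff = N * (1 + m)."""
    return set_f_number * (1 + magnification)

def airy_disk_diameter_um(wavelength_um: float, f_number: float) -> float:
    """First-minimum Airy disk diameter: d ~ 2.44 * lambda * f/#."""
    return 2.44 * wavelength_um * f_number

# A lens set to f/2.8 at 2x magnification works like f/8.4, so with
# green light the Airy disk is ~11 um -- far larger than 3.45 um pixels:
n_eff = effective_f_number(2.8, 2.0)
print(round(airy_disk_diameter_um(0.55, n_eff), 1))  # 11.3
```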

As a rule of thumb, 1um resolution with machine vision lenses is about the best one can achieve. For higher resolution, there are specialized microscope lenses. Consult your lensing professional, who can guide you through sensor and lens selection in the context of your application.

Lens data sheets

Just a comment on lens manufacturers and provided data: while there are many details in the machine vision field, it’s quite transparent in terms of standards and performance data, and manufacturers’ product datasheets contain a wealth of information. For example, take a look at Edmund Optics lenses, then pick any lens family, then any lens model. You’ll find a clickable datasheet link where you can see MTF graphs showing resolution performance in LP/mm, DoF graphs at different F#s, etc.

Takeaway

Per the blog’s title, Depth of Field is a balancing act between sharpness and blur. It’s physics. Pursue the links embedded in the blog, or study optical theory, if you want to dig into the math. Or just call us at 978-474-0044.

Contact us

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Helios2 Ray Outdoor Time of Flight camera by Lucid Vision Labs

Helios2 Outdoor ToF camera – Courtesy Lucid Vision Labs

Time of Flight

The Time of Flight (ToF) method for 3D imaging isn’t new. Lucid Vision Labs is a longstanding leader in 3D ToF imaging. To brush up on ToF vs. other 3D methods, see a prior blog on Types of 3D imaging: Passive Stereo, Structured Light, and Time of Flight (ToF).

Helios2 Ray 3D camera

What is new are the Helios2 Ray 3D ToF outdoor* camera models. With working distances (WD) from 0.3 meters up to 8.3 meters, exterior applications like infrastructure inspection, environmental monitoring, and agriculture may be enabled – or enhanced – by these cameras. That WD in imperial units is from 1 foot up to 27 feet, providing tremendous flexibility to cover many applications.

(*) While rated for outdoor use, the Helios2 3D camera may also be used indoors, of course.

The camera uses a Sony DepthSense IMX556 CMOS back-illuminated ToF image sensor. It provides its own laser lighting via 940nm VCSEL laser diodes, which operate in the infrared (IR) spectrum, beyond the visible spectrum. So it’s independent of the ambient lighting conditions, and self-contained with no need for supplemental lighting.
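The underlying principle is easy to state: the emitted light makes a round trip to the target and back, so distance is half the round-trip time multiplied by the speed of light. (Sensors like the DepthSense family actually measure the phase shift of modulated light rather than timing individual pulses, but the distance arithmetic is equivalent.) A sketch:

```python
C_M_PER_S = 299_792_458  # speed of light in vacuum, m/s

def distance_m(round_trip_s: float) -> float:
    """Direct time-of-flight: distance = c * t / 2, since the light
    travels to the target and back."""
    return C_M_PER_S * round_trip_s / 2

# A 20 ns round trip corresponds to about 3 m:
print(round(distance_m(20e-9), 2))  # 3.0
```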

Operating up to 30 fps, the camera and computer host build 3D point clouds your application can act upon. Dust and moisture protection to the IP67 standard is assured, with robust shock, vibration, and temperature performance as well. See specifications for details.

Example – Agriculture

Outdoor plants imaged in visible spectrum with conventional camera – Courtesy Lucid Vision Labs
Colorized pseudo-image from 3D point cloud – Courtesy Lucid Vision Labs

Example – Industrial

Visible spectrum image with sunlight and shadows – Courtesy Lucid Vision Labs
Pseudo-image from point cloud via Helios2 Ray – Courtesy Lucid Vision Labs

Arena SDK

The Arena SDK makes it easy to configure and control the camera and the images. It provides 2D and 3D views. With the 2D view one can see the intensity and depth of the scene. The 3D view shows the point cloud, and can be rotated by the user in real time. Of course the point cloud data may also be processed algorithmically – to record quality measurements, guide a robot arm or vehicle, etc.
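Once exported, point cloud data can be processed with ordinary tools. A generic sketch in plain Python (deliberately not Arena SDK calls – the data layout here is a hypothetical list of (x, y, z) tuples) filtering points to a depth band of interest:

```python
# Hypothetical point cloud: (x, y, z) tuples in millimetres, as might
# be exported from a 3D camera SDK.
cloud = [(10.0, 5.0, 820.0), (12.0, 6.0, 815.0), (11.0, 4.0, 1450.0)]

def points_within_range(points, z_min_mm, z_max_mm):
    """Keep only points whose depth (z) falls inside a band of
    interest -- e.g. the expected height of an object on a conveyor."""
    return [p for p in points if z_min_mm <= p[2] <= z_max_mm]

# Two of the three points sit in the 800-900 mm band:
near = points_within_range(cloud, 800.0, 900.0)
print(len(near))  # 2
```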

Call us at 978-474-0044. Or follow the contact us link below to provide your information, and we’ll call you.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!