The uEye XLS cameras are the smallest board-level cameras in the IDS portfolio, with very low power consumption and heat generation. They are ideal for embedded applications and device engineering. Sensors are available in monochrome, color, and NIR variants.
The “S” in the name stands for “small”, as the series is a compact version of the uEye XLE series. As small as 29 x 29 x 7 mm! Each USB3 camera in the series is Vision Standard compliant, has a Micro-B connector, and offers a choice of C/CS lens mount, S-mount, or a no-mount (DIY) version.
Positioned in the low-price portfolio, the XLS cameras are most likely to be adopted by customers requiring high volumes for which basic – but still impressive – functions are sufficient. The XLS launch family of sensors includes the ON Semi AR0234, ON Semi AR0521, ON Semi AR0522, Sony IMX415, and Sony IMX412. These span a wide range of resolutions, framerates, and spectral responses. Each sensor is offered in three board-level variants, indicated by the final digit in each part number: 1 = S-mount, 2 = no-mount, 4 = C/CS-mount.
| Sensor | Resolution | Framerate | Monochrome | Color | NIR |
|---|---|---|---|---|---|
| ON Semi AR0234 | 1920 x 1200 | 102 fps | U3-356(1/2/4) XLS-M | U3-356(1/2/4) XLS-C | – |
| ON Semi AR0521 | 2592 x 1944 | 48 fps | U3-368(1/2/4) XLS-M | U3-368(1/2/4) XLS-C | – |
| ON Semi AR0522 | 2592 x 1944 | 48 fps | – | – | U3-368(1/2/4) XLS-NIR |
| Sony IMX415 | 3864 x 2176 | 25 fps | U3-38J(1/2/4) XLS-M | U3-38J(1/2/4) XLS-C | – |
| Sony IMX412 | 4056 x 3040 | 18 fps | – | U3-38L(1/2/4) XLS-C | – |

The XLS family spans five sensors covering a range of requirements.
Uses are wide-ranging, skewing towards high-volume embedded applications.
In a nutshell, these are cost-effective cameras with basic functions. The uEye XLS cameras are small, easy to integrate with IDS or industry-standard software, cost-optimized, and equipped with the fundamental functions for high-quality image evaluation.
Why should I read about how to select an industrial camera, when I could just call 1stVision as a distributor of cameras, lenses, lighting, software, and cables, and let you recommend a solution for me?
Well yes, you could – and ultimately we believe a number of you who read this will in fact call us, as have many before. But when you take your car to the mechanic, do you just tell him “sometimes it makes a funny noise”? Or do you qualify the funny noise observation by noting at what speed it happens? When driving straight or turning in one direction? Whether it correlates to the ambient temperature or whether the vehicle is warmed up – or not?
The best outcomes tend to come from partnerships where both the customer and the provider each bring their knowledge to the table – and work together to characterize the problem, the opportunity, and the solution. In our many years of expertise helping new and returning customers create machine vision solutions, the customers with the best outcomes also make the effort to dig in and understand enough about cameras and other components in order to help us help them.
So how does one in fact choose an industrial or machine vision camera?
An industrial camera is a camera, often embedded in or connected to a system, used for commercial or scientific applications. Machine vision systems are often fully automated, or at least partially automated, with long duty cycles. Applications are many: surveillance, process control, quality control, pick and place, biomedical imaging, manufacturing, and more.
Further, the camera may be moving – or stationary, or the target might be moving – or stationary. And the wavelengths of light best-suited to achieving intended outcomes may be in the visible spectrum – the same spectrum we see – or the application may take advantage of ultraviolet (UV) or infrared (IR) characteristics.
So where to begin? First we need to characterize the application to be developed. Presumably you know or believe there’s an opportunity to add value by using machine vision to automate some process by applying computer controlled imaging to improve quality, reduce cost, innovate a product or service, reduce risk, or otherwise do something useful.
Now let’s dig into each significant consideration, including resolution, sensor selection, frame rate, interface, cabling, lighting, lens selection, and software. Within each section we have links to more technical details to help you focus on your particular application.
Resolution: This is about the level of detail one needs in the image in order to achieve success. If one just needs to detect presence or absence, a low-resolution image may be sufficient. But if one needs to measure precisely, or detect fine tolerances, one needs far more pixels, so that the fine-grained features of the real-world scene are actually resolved in the image.
A key guideline is that each minimal real-world feature to be detected should appear in a 3×3 pixel grid in the image. So if the real-world scene is X by Y meters, and the smallest feature to be detected is A by B centimeters, assuming the lens is matched to the sensor and the scene, it’s just a math problem to determine the number of pixels required on the sensor. Read more about resolution requirements and calculations.
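To make that math concrete, here is a minimal sketch of the calculation. The 3-pixels-per-feature rule follows the guideline above; the scene and feature sizes in the example are illustrative assumptions, not recommendations for any particular application.

```python
# Estimate the sensor resolution needed so that the smallest feature of
# interest spans ~3 pixels along each axis (per the 3x3 guideline above).

def required_pixels(fov_mm: float, feature_mm: float, pixels_per_feature: int = 3) -> int:
    """Pixels needed along one axis so each feature spans the target pixel count."""
    return int(-(-fov_mm * pixels_per_feature // feature_mm))  # ceiling division

# Illustrative example: a 500 mm x 300 mm scene, smallest defect 1 mm across
width_px = required_pixels(500, 1.0)   # 1500
height_px = required_pixels(300, 1.0)  # 900
print(f"Minimum sensor resolution: {width_px} x {height_px}")
```

In this example, a sensor of roughly 1500 x 900 pixels (or the next standard resolution above it) would be the starting point, before accounting for lens and lighting margins.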
Sensor selection: So the required resolution is an important determinant for sensor selection. But so is sensitivity, including concepts like quantum efficiency. Pixel size matters too, as an influencer on sensitivity, as well as determining sensor size overall. Keys to choosing the best image sensor are covered here.
Wavelength: Sensor selection is also influenced by the wavelengths used in the application. Let’s assume you’ve identified the wavelength(s) for the application, which determines whether you’ll need:
a CMOS sensor for visible light in the 400 – 700nm range
a UV sensor for wavelengths below 400nm
a Near Infrared sensor for 750 – 900nm
or an SWIR or XSWIR sensor for even longer wavelengths, up to 2.2 µm
Monochrome or color? If your application is in the visible portion of the spectrum, many first-timers to machine vision assume color is better, since it would seem to have more “information”. Sometimes that intuition is correct – when color is the distinguishing feature. But if measurement is the goal, monochrome can be more efficient and cost-effective. Read more about the monochrome vs. color sensor considerations.
Area scan vs. line scan? Area scan cameras are generally considered to be the all-purpose imaging solution, as they use a straightforward matrix of pixels to capture an image of an object, event, or scene. In comparison to line scan cameras, they offer easier setup and alignment. For stationary or slow-moving objects, suitable lighting together with a moderate shutter speed can produce excellent images.
In contrast to an area scan camera, in a line scan camera a single row of pixels is used to capture data very quickly. As the object moves past the camera, the complete image is pieced together in the software line-by-line and pixel-by-pixel. Line scan camera systems are the recognized standard for high-speed processing of fast-moving “continuous” objects such as in web inspection of paper, plastic film, and related applications. An overview of area scan vs. line scan.
Frame-rate: If your object is stationary, such as a microscope slide, frame rate may be of little importance to you, as long as the entire image can be transferred from the camera to the computer before the next image needs to be acquired. But if the camera is moving (drive-by-mapping, or camera-on-robot-arm) or the target is moving (fast moving conveyor belt or a surveillance application), one must capture each image fast enough to avoid pixel blur – and transfer the images fast enough to keep up. How to calculate exposure time?
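One part of that calculation can be sketched directly: the maximum exposure time that keeps motion blur under about one pixel. The conveyor speed and optical scale below are assumed example values.

```python
# Longest exposure that keeps a moving target's blur under a pixel budget.
# Speeds and scales below are illustrative assumptions for the example.

def max_exposure_s(object_speed_mm_s: float, mm_per_pixel: float, max_blur_px: float = 1.0) -> float:
    """Longest exposure (seconds) keeping motion blur under max_blur_px pixels."""
    return max_blur_px * mm_per_pixel / object_speed_mm_s

# Example: conveyor at 500 mm/s, optics resolving 0.1 mm per pixel
t = max_exposure_s(500, 0.1)
print(f"Keep exposure under {t * 1e6:.0f} microseconds")  # 200 microseconds
```

Shorter exposures in turn demand more light or a more sensitive sensor, which is why frame rate, lighting, and sensor choice are intertwined.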
Interfaces: By what interface should the camera and computer communicate? USB, GigE, Camera Link, or CoaXPress? Each has merits but vary by throughput capacity, cable lengths permitted, and cost. It’s a given that the interface has to be fast enough to keep up with the volume of image data coming from the camera, relative to the software’s capability to process the data. One must also consider whether it’s a single-camera application, or one in which two or more cameras will be integrated, and the corresponding interface considerations.
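A first-pass feasibility check is simple arithmetic: compare the camera's raw data rate against the interface's usable throughput. The throughput figures below are rough ballpark values for illustration only, not vendor specifications.

```python
# Compare a camera's raw data rate against approximate interface throughput.

def data_rate_MBps(width: int, height: int, bits_per_px: int, fps: float) -> float:
    """Raw image data rate in megabytes per second."""
    return width * height * bits_per_px / 8 * fps / 1e6

rate = data_rate_MBps(1920, 1200, 8, 102)  # e.g. a mono8 sensor at 102 fps
print(f"Camera output: {rate:.0f} MB/s")

# Approximate usable throughput per interface, MB/s (ballpark, for illustration)
interfaces = {"GigE": 100, "USB3": 400, "CoaXPress-6": 600, "Camera Link Full": 680}
for name, capacity in interfaces.items():
    verdict = "keeps up" if capacity > rate else "too slow"
    print(f"{name}: {verdict}")
```

In this example, roughly 235 MB/s rules out single-link GigE but fits comfortably within USB3 and the faster interfaces, before considering cable length and multi-camera constraints.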
Cabling: So you’ve identified the interface. The camera and computer budget is set. Can you save a bit of cost by sourcing the cables at Amazon or eBay, compared to the robust ones offered by the camera distributor? Sometimes you can! Sometimes not so much.
Lighting: While not part of the camera per se, for the sensor you are now considering in a particular camera model, can you get enough photons into the pixel well to achieve the necessary contrast to discern target from background? While sensor selection is paramount, lighting and lensing are just a half-step behind in their bearing on application outcomes. Whether steady LED light or strobed, bright field or dark field, visible or IR or UV, lighting matters. It’s worth understanding.
Filters: Twinned closely with the topic of lighting, well-chosen filters can “condition” the light: polarizing it, blocking or passing certain wavelengths, and generally adding significant value. Whether in monochrome, color, or non-visible portions of the spectrum, filters can pay for themselves many times over in improving application outcomes.
Lens selection: Depending on resolution requirements, sensors come in various sizes. While always rectangular in shape, they have differing pixel densities, and differing overall dimensions. One needs to choose a lens that “covers” the light-sensitive sections of the sensor, so be sure to understand lens optical format. Not only does the lens have to be the right size, one also has to pay attention to quality. There’s no need to over-engineer and put a premium lens into a low-resolution application, but you sure don’t want to put a mediocre lens into a demanding application. The Modulation Transfer Function, or MTF, is a good characterization of lens performance, and a great way to compare candidate lenses.
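The "coverage" check is easy to sketch: compute the sensor diagonal from pixel size and resolution, and compare it to the lens's image circle. The pixel size and image-circle figure below are assumed values for illustration.

```python
# Check that a lens image circle covers a sensor, given pixel size and
# resolution. Values below are illustrative assumptions, not specs.
import math

def sensor_diagonal_mm(width_px: int, height_px: int, pixel_um: float) -> float:
    """Sensor diagonal in mm from resolution and pixel pitch."""
    w = width_px * pixel_um / 1000
    h = height_px * pixel_um / 1000
    return math.hypot(w, h)

diag = sensor_diagonal_mm(2592, 1944, 2.2)  # e.g. a 5 MP sensor, 2.2 um pixels
lens_image_circle_mm = 8.0                  # a nominal 1/2" format lens
print(f"Sensor diagonal: {diag:.2f} mm; lens covers it: {lens_image_circle_mm >= diag}")
```

If the image circle is smaller than the sensor diagonal, the corners vignette; if it is much larger, you may be paying for glass you don't use.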
Software: In machine vision systems, it’s the software that interprets the image and takes action, whether that be accept/reject a part, actuate a servo motor, continue filling a bottle or vial, log a quality control image, etc. Most camera providers offer complementary software development kits (SDKs), for those who want to code camera control and image interpretation. Or there are vendor-neutral SDKs and machine vision libraries – these aren’t quite plug-and-play – yet – but they often just require limited parameterization to achieve powerful camera configuration and image processing.
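As a toy illustration of the accept/reject logic such software implements (a synthetic image stands in for a camera frame here; a real system would acquire frames through a vendor SDK or library):

```python
# Toy accept/reject decision on a synthetic frame: count pixels darker
# than a threshold and reject the part if too many are found.
import numpy as np

frame = np.full((480, 640), 200, dtype=np.uint8)  # synthetic bright part
frame[100:110, 200:210] = 30                      # simulated 10x10 defect blob

defect_px = int((frame < 80).sum())               # pixels darker than threshold
decision = "reject" if defect_px > 50 else "accept"
print(f"{defect_px} defect pixels -> {decision}")  # 100 defect pixels -> reject
```

Production systems layer far more on top (calibration, blob filtering, I/O to actuators), but the core pattern of measure-threshold-act is the same.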
Accessories: How about camera mounts? Wash-down enclosures for food-processing or dusty environments? If used outdoors, do you need heating or cooling, or condensation management? Consider all aspects for a full solution.
Recently released Goldeye G/CL-008 XSWIR cameras with QVGA-resolution extended-range InGaAs sensors offer two sensitivity options: up to 1.9 µm or 2.2 µm.
The Extended Range (ER) InGaAs sensor technology integrated into the new Goldeye XSWIR models provides high imaging performance beyond 1.7 µm.
The cut-off wavelength can be shifted to higher values by increasing the proportion of indium relative to gallium in the InGaAs compound. A given sensor can only detect light below its cut-off wavelength. The Goldeye XSWIR cameras use four different sensors, with VGA and QVGA resolution and cut-off wavelengths of 1.9 µm or 2.2 µm, that provide very high peak quantum efficiencies of > 75%.
In the new Goldeye XSWIR models, the sensors are equipped with a dual-stage thermo-electric cooler (TEC2) that cools the sensor 60 K below the housing temperature. Also included are image correction capabilities such as Non-Uniformity Correction (NUC) and 5×5 Defect Pixel Correction (DPC), to capture high-quality SWIR images beyond 1.7 µm.
Goldeye XSWIR cameras are available with two sensor options: the 1.9 µm version detects light from 1,100 nm to 1,900 nm, and the 2.2 µm version from 1,200 nm to 2,200 nm.
Industrial-grade solution at an attractive price: Other sensor technologies available to detect light beyond 1,700 nm, based on materials like HgCdTe (MCT), Type-II Superlattice (T2SL), or Colloidal Quantum Dots (CQD), tend to be very expensive. The Goldeye XSWIR Extended Range (ER) InGaAs sensors have several advantages, including cost-effective sensor cooling via TEC, high quantum efficiencies, and high pixel operability (> 98.5%).
MCT or T2SL sensor-based SWIR cameras typically require very strong sensor cooling using Stirling coolers or TEC3+ elements. By comparison, the Goldeye XSWIR cameras are available at a comparatively low price.
The easy integrability and operation of ER InGaAs sensors makes them attractive for industrial applications, including but not limited to:
Laser beam analysis
Spectral imaging in industries like recycling, mining, food & beverages, or agriculture
Medical imaging: e.g. tissue imaging due to deeper penetration of longer wavelengths
So you want to do an in-line measurement, inspection, identification and/or guidance application in automotive, electronics, semiconductor, or factory automation. Whether for a new application or an upgrade, Teledyne DALSA’s Z-Trak 3D Laser Profiler balances high performance with a low total cost of ownership.
In this 2nd Edition release we update the Z-Trak family overview with the addition of the new LP2C 4k series, bringing even more options along the price/performance spectrum. From low-cost and good-enough, through higher resolution and speed, all the way to the highest resolution, there is a range of Z-Trak profilers to choose from.
The first generation Z-Trak product, the LP1, is the cornerstone of the expanded Z-Trak family, now augmented with the Z-Trak2 group (V-series and the S-series), plus the LP2C 4k series. Each product brings specific value propositions – here we aim to help you navigate among the options.
Respecting the reader’s time, key distinctions among the series are:
LP1 is the most economical 3D profiler on the market – contact us for pricing.
Z-Trak2 is one of the fastest 3D profilers on the market – with speeds up to 45 kHz.
LP2C 4k provides 4,096 profiles per second at resolution down to 3.5 microns.
To guide you effectively to the product best-suited for your application, we’ve prepared the following table, and encourage you to fill in the blanks, either on a printout of the page or via copy-paste into a spreadsheet (for your own planning or to share with us as co-planners).
Compare your application’s key attributes from above with some of the feature capacities of the three Z-Trak product families below, as a first-pass at determining fit:
Unless the fit is obvious – and often it is not – we invite you to send us your application requirements. We love mapping customer requirements, so please send us your application details via the form at this contact link, or email us at info@1stvision.com with the answers to your 3D application’s “Key questions” above.
In addition to the parameter-based approach to choosing the ideal Z-Trak model, we also offer an empirical approach – send in your samples. We have a lab set up to inspect customer samples with two or more candidate configurations. System outputs can then be examined for efficacy relative to your performance requirements, to determine how much is enough – without over-engineering.