The μEye XCP-E event-based camera utilizes Sony’s Prophesee IMX636 sensor. So, by design, it captures only relevant image changes. Event-based imaging can be a game changer for certain applications. Unlike area scan or line scan imaging – which capture every pixel and render a “full image” – event imaging only senses and delivers changes.
IDS μEye XCP-E housed camera (leftmost) and forthcoming XLS-E board level models – Courtesy IDS Imaging
Event-based imaging captures the changes:
Left: uEye XCP-E image vs. Right: Area scan image – Courtesy IDS Imaging
Less is more
Playing on the “less is more” adage reveals key insights into event-based imaging.
The human eye is adept at delivering an entire scene, of course, and that’s how most of us imagine we see the world around us. But our overall visual perception also builds upon the eye’s ability to sense brightness changes within small segments of the overall scene.
Consider a baseball batter awaiting a pitched ball. The overall scene is relatively static: the outfield fence, bases, and foul lines aren’t moving. And the infielders are almost static – relative to the motion of the ball. But the pitched ball approaching at 80 – 90 miles per hour can be identified by a good batter, to gauge “strike or ball” and “swing or take”.
The batter’s visual processing does NOT have time to capture the full scene at each instant of “ball release”, “just released”, “mid-way”, and “arriving soon”. Rather, the ball’s trajectory is discerned as successive changes against a static background. So too with an event-based camera.
Less data -> More speed: In other circumstances, less data might seem like a handicap. For area scan applications it often would be. Finding defects on a static surface requires ingesting a lot of detail – all the pixels – in order to do edge detection, blob analysis, or other algorithmic processing. But by detecting “just the brightness changes”, transmitting less data is exactly what delivers the increased speed!
Applications example: motion detection and analysis
What is delivered are pixel motion coordinates and timestamps – NOT pixel brightness values. So you get usable results directly, rather than having to compute them algorithmically from a traditional area scan image. Tracking moving objects becomes easy.
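To make that data format concrete, here is a minimal Python sketch. It assumes events arrive as (x, y, timestamp, polarity) tuples, which is the general shape of event-camera output; the actual Metavision API differs, so treat this as an illustration only. It buckets events into fixed time windows and reports the centroid of each window, a crude motion track:

```python
def track_centroid(events, window_us=1000):
    """Group (x, y, t_us, polarity) events into fixed time windows
    and return the centroid of each window -- a crude motion track."""
    centroids = []
    bucket = []
    window_start = None
    for x, y, t, p in events:
        if window_start is None:
            window_start = t
        if t - window_start >= window_us:
            if bucket:
                cx = sum(e[0] for e in bucket) / len(bucket)
                cy = sum(e[1] for e in bucket) / len(bucket)
                centroids.append((cx, cy))
            bucket = []
            window_start = t
        bucket.append((x, y))
    if bucket:  # flush the final window
        cx = sum(e[0] for e in bucket) / len(bucket)
        cy = sum(e[1] for e in bucket) / len(bucket)
        centroids.append((cx, cy))
    return centroids

# A toy burst of events: a small blob drifting right over 3 ms
events = [(10 + t // 1000, 20, t, 1) for t in range(0, 3000, 100)]
print(track_centroid(events))  # [(10.0, 20.0), (11.0, 20.0), (12.0, 20.0)]
```

Because only changed pixels generate events, the list being processed is tiny compared to full frames, which is precisely where the speed advantage comes from.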
How much?
Already intrigued? The housed model, UE-39B0XCP-E, is available now, as this blog releases in early March 2025. Board-level models to be released soon.
Temporal resolution better than 100 μsec
Detect rapid changes – a conventional camera would need > 10,000 fps to capture this – Courtesy IDS Imaging
Courtesy IDS Imaging
Efficient data processing
Courtesy IDS Imaging
IDS Imaging uEye XCP-E event-based cameras integrate directly with the sensor maker’s software suite, Metavision, thanks to Sony’s partnership with Prophesee. Since event-based imaging is a paradigm shift away from conventional machine vision approaches, the visualization tools, API, and training videos help you get up to speed quickly.
About you: We want to hear from you! We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics… What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about.
We’ve previously written about Sony STARVIS sensors, and all that still holds true, of course. If you don’t feel like chasing that last link, a two-word summary would be “high sensitivity”. But this is our first piece on the Sony IMX662 STARVIS sensor in particular, and on the 10 camera models into which IDS Imaging has embedded this remarkable new sensor.
Sony IMX662 sensor – what’s so special?
Before reviewing STARVIS in general, let’s cut to the chase on the IMX662. Three specifics jump out.
1. Wider dynamic range:
Dynamic range characterizes the expressive power of the sensor: the ratio between the smallest and largest values the sensor can capture. Per the side-by-side images below, the more performant sensor (here the IMX662, of course) renders the saturated segments bright, the darker segments dark, and a lot more nuance in the middle. That translates into actionable imaging data for your machine vision algorithms.
Courtesy Sony Semiconductor
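As a worked illustration of that ratio, dynamic range is commonly expressed in decibels as 20·log10(largest signal / smallest resolvable signal). The sketch below uses illustrative numbers, not IMX662-specific figures:

```python
import math

def dynamic_range_db(full_well_electrons, read_noise_electrons):
    """Dynamic range as the ratio of the largest to smallest
    resolvable signal, expressed in decibels."""
    return 20 * math.log10(full_well_electrons / read_noise_electrons)

# Illustrative (hypothetical) numbers: 38,000 e- full well, 2 e- read noise
print(round(dynamic_range_db(38000, 2), 1))  # 85.6
```

A sensor with a deeper pixel well or lower read noise scores higher on this metric, which is exactly what the side-by-side comparison images show qualitatively.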
2. No chromatic aberration in HDR mode:
Chromatic aberration is the introduction of color artifacts not present in the original scene, due to the physics of light passing through a lens. Note that the “lens” might be the user-added camera lens, or the micro-lens inherent to every pixel on the sensor. Either way, it’s an undesirable phenomenon, since if your application uses color, it can be a source of “confusion” in your image processing.
So it’s a nice benefit that Sony’s “Clear HDR” feature overcomes chromatic aberration when the IMX662 is used in HDR mode. As shown below:
Courtesy Sony Semiconductor
3. Low cost
The Sony IMX662 sensor is very attractively priced. Since it doesn’t cost camera manufacturers very much to buy the sensors wholesale, they can design them into their value-added cameras, and price the overall package attractively for you, the customer. Whether in small volumes or large.
Think “go where no camera has ever gone before.” Or if you already have an application with another sensor, consider a Gen 2 application with a higher return on investment.
Why 10 different camera models for just this one sensor?
The Sony IMX662 sensor is so compelling that IDS Imaging designed it into 10 different camera packages, providing form factors for diverse customer requirements. Per the snapshot below from 1stVision’s camera selector, with sensor dropdown Sony IMX662 selected, we see all 10 models. In fact it’s 20 models, as each is available in a monochrome or color version.
The top 6 rows are GigE models, with frame rates up to 59 fps. The bottom 4 rows use the USB3 interface, delivering up to 93 fps.
1stVision carries all 10 IDS Imaging camera models using the Sony IMX662 sensor
GigE Vision models:
The GV prefix in the model name denotes the GigE Vision interface. M/C indicates both Monochrome and Color offerings. MB stands for MotherBoard, an especially small form factor for applications with tight and/or angled spaces. For MB variants, an optional daughterboard and flex ribbon cable are available, if desired.
Top row: GigE no mount; GigE C-mount; GigE motherboard; Bottom row: GigE MB C-mount; GigE MB S-mount; GigE S-mount
USB3 Vision models:
The models with the U3 prefix are offered in both board-level and housed versions, similarly ideal for tight spaces and embedded applications, with frame rates up to 93 fps:
Top left: BL no lens mount; Top right: BL S-mount; Bottom left: BL C-mount; Bottom right: Housed C-mount
Since both interface options, GigE Vision and USB3 Vision, are industry standards, you can use “IDS peak” SDK, or any other standards compliant software you like.
Sony STARVIS technology
Underlying the Sony IMX662 – and indeed all the Sony STARVIS sensors – is the innovative back-illuminated structure. This means more photons get into the pixel well, greatly enhancing low light performance.
Courtesy Sony Semiconductor
Choose the sensor – and camera – that’s right for the application
Is the Sony IMX662 right for your application? Or another member of the Sony STARVIS sensor collection? For all cameras with the IMX662, go to our camera selector and choose IMX662 in the sensor pulldown. While lensing and lighting are also important, choosing the right sensor is at the heart of your application solution. We’re always happy to advise.
To learn what kinds of applications are well-suited for a Contact Image Sensor
To see the unique features only found in the Teledyne DALSA AxCIS series
You already know (or can catch up quickly):
Contact Image Sensors don’t actually contact the things they are imaging. But they get to within 15 mm = 0.59 inches! So they are ideal for space-constrained applications.
And they aren’t interchangeable with line scan cameras; rather, they are a variant on line scan concepts. They share the requirements that “something is moving” and that the sensor array is a single row of pixels.
Applications for Contact Image Sensing
Courtesy Teledyne DALSA
Why Teledyne DALSA AxCIS in particular?
You may want to review the whole Teledyne DALSA AxCIS series, and the datasheet details. Go for it! Geek out. Full transparency as always.
Or maybe you’d like a little help on what we think is special about the Teledyne DALSA AxCIS series?
T2IR – Trigger to Image Reliability
This is a Teledyne DALSA proprietary innovation that helps to de-mystify what’s happening inside a complex vision system. It uses hardware and software to improve reliability. In high level terms, T2IR monitors from trigger through image capture, and on to host memory transfer, aiming to protect against data loss. And to provide insights for system tuning if needed. T2IR is compatible with many Teledyne DALSA cameras and frame grabbers – including the AxCIS series.
When designing an application, one likes to read the specifications to determine whether a candidate solution will satisfy the application’s requirements. Say you want to design an application to do laser profiling of your continuously moving target(s). You know Teledyne DALSA is well regarded for their Z-Trak 3D Laser Profiler. In the specifications you may see that up to 3.3K scans per second are achievable, but what factors could influence that rate?
What factors affect the line rate?
When choosing a pickup truck or SUV, engine displacement and horsepower matter. But so does whether you plan to tow a trailer of a certain weight, and whether the terrain is hilly or flat.
With an area scan camera, the maximum frame rate is specified for reading out all pixels at full resolution. Faster rates can be achieved by reading out fewer rows with a reduced area of interest. One must match camera and interface capabilities to application requirements.
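As a back-of-envelope illustration of that trade, assuming readout time scales linearly with the number of rows read (a simplification that ignores fixed per-frame overhead), the frame-rate gain from a reduced area of interest can be estimated like this; the numbers are hypothetical:

```python
def roi_frame_rate(full_rate_fps, full_rows, roi_rows):
    """Rough frame-rate estimate when reading only a vertical ROI,
    assuming readout time scales linearly with rows (overhead ignored)."""
    return full_rate_fps * full_rows / roi_rows

# Hypothetical: a 59 fps camera at 1080 rows, cropped to 270 rows
print(roi_frame_rate(59, 1080, 270))  # 236.0
```

In practice fixed overheads mean the real gain is somewhat less than this linear estimate, but the direction of the effect holds.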
Laser triangulation is an effective 3D technique
Here too one must read the specifications – and think about application requirements.
Figure 1: Key laser profiler terms and concepts in relation to each other – Courtesy Teledyne DALSA
What considerations affect 3D triangulation laser profilers?
Data volume: With reference to Figure 2 below, the number of pixels per row (X) and the frequency of scans in the Y dimension, together with the number of bytes per pixel, determine the data volume. Ultimately you need what you need, and may purchase a line scanner with a wider or narrower field of view, a faster or slower interface, or a more intense laser light, accordingly. Required resolution has a bearing on data volumes too, and that’s the key consideration we’ll go into further below.
Figure 2: Each laser profile scan delivers X pixels’ Z values to build Y essentially continuous slices – Courtesy Teledyne DALSA
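The data-volume arithmetic is simple enough to sketch. The profile width, bytes per Z value, and scan rate below are hypothetical examples, not AxCIS or Z-Trak specifications:

```python
def profile_data_rate_mb_s(x_pixels, bytes_per_pixel, scans_per_second):
    """Sustained data rate in megabytes per second for a profiler
    delivering one Z value per X pixel, scans_per_second times a second."""
    return x_pixels * bytes_per_pixel * scans_per_second / 1e6

# Hypothetical: 2048-point profiles, 2 bytes per Z value, 3300 scans/s
print(profile_data_rate_mb_s(2048, 2, 3300))  # 13.5168 (MB/s)
```

Run the same arithmetic for your own X width, bit depth, and scan rate, and compare the result against your interface’s practical throughput.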
Resolution has a bearing on data volumes and application performance
Presumably it’s clear that application performance will require certain precision in resolution. In the Y dimension, how frequently do you need each successive data slice in order to track feature changes over time? In the Z dimension, how fine grained do you need to know of changes in object height? And in the X dimension, how many points must be captured at what resolution?
While you might be prepared to negotiate resolution tolerances as an engineering tradeoff on performance or cost or risk, generally speaking you’ve got certain resolutions you are aiming for if the technology and budget can achieve it.
We’re warming up to the key point of this article – how line rate varies according to application features. Consider Figure 3 below, noting the trapezoidal shape for 3 respective fields of view, in correlation with working distance.
Figure 3: Working distance in which Z dimension may vary also impacts resolution achievable for each value in the X dimension – Courtesy Teledyne DALSA.
Trapezoid bottom width and required X dimension resolution
To drive this final point home, consider both Figure 2 and Figure 3. Figure 2, among other things, reminds us that we need to capture each successive scan from the Y dimension at precisely timed intervals. Otherwise how would we usefully track the changes in height in the Z dimension as the target moves down the conveyance?
That means that regardless of target height, each scan must always take exactly the same time as each other scan – it cannot vary. But per Figure 3, regardless of whether using a short, medium, or longer working distance, X pixels correlating to target values found high up in the trapezoidal FoV will yield a de facto higher resolution than the same X pixels lower down.
Suppose the top of the trapezoid is 50 cm wide, and the bottom of the trapezoid is 100 cm wide. For any given short span along a line in the X dimension, the real-world space mapped into a sensor pixel will be 2x as long for targets sampled at the bottom of the FoV.
Since the required minimum resolution and precision is an application requirement, the whole system must be configured for sufficient resolution when sampling at the bottom of the trapezoid. So one must purchase a system that covers the required resolution, and deploy it in such a way that the “worst case” sampling at the limits of the system is within the requirements. One must sample as many points as needed at the bottom of the FoV, and that impacts line scan rate.
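Using the 50 cm / 100 cm trapezoid example above, a short sketch makes the resolution difference explicit; the 2000-pixel profile width is a hypothetical value:

```python
def x_resolution_mm(fov_width_mm, x_pixels):
    """Real-world width mapped onto each sensor pixel at a given
    depth in the trapezoidal field of view."""
    return fov_width_mm / x_pixels

pixels = 2000                            # hypothetical profile width
top = x_resolution_mm(500, pixels)       # 50 cm wide near the camera
bottom = x_resolution_mm(1000, pixels)   # 100 cm wide at the far limit
print(top, bottom, bottom / top)  # 0.25 0.5 2.0
```

The ratio of 2.0 confirms the point: the same pixel count covers twice the real-world span at the bottom of the FoV, so that is where the resolution requirement must be verified.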
Height of object matters too
Not only does the position of the object in the FoV matter, but also the maximum height of any object whose Z dimension you need to detect. Let’s illustrate the point:
Figure 4. The maximum height anticipated matters too – Courtesy Teledyne DALSA
Consider the item labeled Object in Figure 4. Your application’s object(s) may of course be shaped differently, but this generic object serves discussion purposes just fine. In this conceptual application, a continuous conveyor belt (the dark grey surface) moves at constant speed in the Y dimension. Whenever no Object is present, i.e. in the gaps between Object_N and Object_N+1, we expect the profiler to deliver a Z value of 0 for each pixel. But when an Object is present, we anticipate positive values corresponding to the height of the object. That’s the whole point of the 3D application.
Important note re. camera sensor in 2D
While the laser emits a flat line as it exits the projector, the reflection sensed inside the camera is two-dimensional. The camera sensor is a rectangular grid or array of pixels, typically in a CMOS chip, similar to that used in an area-scan camera. If one needs all the data from the sensor, the higher data volume takes longer to transfer than if one only needs a subset. If you know your application’s design well, you may be able to achieve optimized performance by avoiding the transfer of “empty” data.
Now let’s do a thought experiment where we re-imagine the Object towards two different extremes:
Extreme 1: Imagine the Object flattened down to a few sheets of paper in a tight stack, or perhaps the flap of a cardboard box.
Extreme 2: Imagine the Object is stretched up to the height of a full box, as high in the Z dimension as in the X dimension shown.
If the Object will never be higher than Extreme 1, only a few pixel rows in the camera sensor will register non-zero values. Those few rows can be read out quickly, skipping the unused rows, yielding a relatively faster line rate.
But if the Object(s) will sometimes be at Extreme 2, many or most of the pixel rows in the camera sensor will register non-zero values, as the reflected laser line ranges up to the full height of the Object. Consequently more rows must be read out from the camera sensor in order to build the laser profile.
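The effect of object height on line rate can be sketched as a simple readout-time model; the per-row readout time and fixed overhead below are hypothetical, not figures from any datasheet:

```python
def max_line_rate_hz(readout_time_per_row_us, rows_needed, overhead_us=5.0):
    """Upper bound on profiles per second when each scan must read
    'rows_needed' sensor rows (taller objects -> more rows -> slower)."""
    scan_time_us = rows_needed * readout_time_per_row_us + overhead_us
    return 1e6 / scan_time_us

# Hypothetical 0.1 us/row readout: a flat object vs. a tall one
print(max_line_rate_hz(0.1, 16))   # few rows needed -> fast
print(max_line_rate_hz(0.1, 800))  # many rows needed -> slower
```

The fixed overhead term is why halving the rows does not quite double the rate, but the dominant effect is clear: sensor rows consumed by object height come straight out of the achievable line rate.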
Summary points regarding object height
1. The application must be designed to perform for the tallest anticipated Object, as well as for the width of the Object in the X dimension and the speed of motion in the Y dimension.
2. All other things being equal, shorter objects, utilizing less camera sensor real estate, will support faster line rates than taller objects.
By planning carefully for your FoV, knowing your timing constraints, and selecting a laser profiler model within its performance range, you can optimize your outcomes.
Give us some brief idea of your application and we will contact you to discuss camera options.
Also consider – interface capacity; exposure time
Just as with area scan cameras, output rates may be limited by any of interface limits, exposure duration, or data volumes.
Interface limits: Whether using GigE Vision, USB3 Vision, Camera Link HS, or another standard, the interface, camera settings, cable, and PC adapter card together determine a maximum throughput, typically expressed in gigabits per second (Gbps). Your intended data volume is a function of resolution, bit depth, and line rate or frame rate. Be sure to understand the maximum practical throughput, choosing components accordingly.
Exposure duration: Even setting aside readout timing considerations (overlapped readout with the start of the next exposure, or completion of readout n before the start of exposure n+1), if there are, say, 100 exposures per second, one cannot receive more than 100 datasets per second, even if the camera is capable of faster rates.
That may seem obvious to experienced machine vision application designers, but it bears mentioning for anyone new to this. Every application needs good contrast between the imaging subject and its background. Once lighting and lensing are optimized, exposure time is the last variable to control. Ideally, lighting and lensing, together with the camera sensor, permit exposures brief enough that exposure time meets application objectives.
But whether manually parameterized or under auto-exposure control, one has to do the math and/or empirical testing to ensure your achievable line rates aren’t exposure-limited.
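That "slowest limit wins" logic of the last few paragraphs can be sketched in a few lines; all numbers are hypothetical:

```python
def achievable_line_rate(exposure_us, camera_max_hz, interface_gbps, bits_per_line):
    """The effective line rate is capped by the slowest of: exposure
    time, the camera's rated maximum, and interface throughput."""
    exposure_limit = 1e6 / exposure_us            # can't expose faster than this
    interface_limit = interface_gbps * 1e9 / bits_per_line
    return min(exposure_limit, camera_max_hz, interface_limit)

# Hypothetical: 100 us exposure, 3300 Hz camera, 1 Gbps link,
# 2048-pixel lines at 16 bits per pixel
print(achievable_line_rate(100, 3300, 1.0, 2048 * 16))  # 3300
```

In this example the camera’s rated maximum is the binding constraint; shorten the exposure or the link and one of the other limits takes over, which is exactly the math worth doing before purchasing components.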
Planning for your laser profiler application
Some months ago we wrote a blog which summarizes Teledyne DALSA’s Z-Trak line scan product families. Besides highlighting the characteristics of three distinct product families, we provided a worksheet to help users identify key applications requirements for line scanning. It’s worth offering that same worksheet again below. Consider printing the page or creating a copy of it in a spreadsheet, and fill in the values for your known or evolving application.
3D application key attributes
The moral of the story…
The takeaway is that the scan rate you’ll achieve for your application is more complex to determine than just reading a spec sheet about a laser profiler’s maximum performance. Your application configuration and constraints factor into overall performance.