Introduction to Gidel High-Performance Products

How to speed up image processing


Since 1994, Gidel has been a leading provider of high-performance FPGA-based imaging and vision solutions. Its product offerings are engineered for data-intensive applications that demand real-time processing and minimal latency.

Key product categories – Courtesy Gidel

For high-end vision and imaging applications, compression, and HDR

… for lightweight applications with fewer demands, you may not need Gidel’s products.

Below is an overview of Gidel’s high-performance products.


For high-performance frame grabbing or FPGA-processing

… for superior throughput, and for pre-processing that reduces data load, you may well need Gidel.

Example application: Traffic monitoring with 12x 10GigE cameras

Conventional approach: Multiple computers are needed to handle the data volume, plus the overhead of coordinating them in a managed solution

With Gidel’s FPGA-based GigE parsing: A single-computer solution is possible

Another example: Aerial imaging with FPGA processing:

Conventional approach: Raw image data provided “as is” to host PC and software, then “good luck” from there

With Gidel HDR processing: FPGA in framegrabber does HDR enhancement before passing to host PC

Courtesy Gidel

FPGA Accelerators and development tools

Courtesy Gidel

Robust off-the-shelf and ready-to-use solutions

Accurate triggering despite velocity instability

Ideal when: precise timing must be maintained even when speed varies

Example application: Rail inspection – the rail car on which the inspection system is mounted moves at the variable speed of the train, but track inspection must be continuous at defined minimum intervals.
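In practice, constant spatial sampling under variable speed is usually achieved with a rotary encoder: the camera is triggered every fixed number of encoder counts, i.e., every fixed distance traveled, rather than at fixed time intervals. A minimal sketch of the idea in Python (illustrative logic only, not Gidel's API):

```python
def encoder_triggers(counts, counts_per_trigger):
    """Yield trigger events every `counts_per_trigger` encoder counts.

    `counts` is a monotonically increasing sequence of cumulative
    encoder counts (distance traveled), sampled over time. Triggers
    therefore fire at fixed *distance* intervals, independent of
    how fast or slow the vehicle moves.
    """
    triggers = []
    next_threshold = counts_per_trigger
    for i, c in enumerate(counts):
        while c >= next_threshold:
            triggers.append(i)  # sample index at which a trigger fires
            next_threshold += counts_per_trigger
    return triggers

# Variable speed: slow start, fast finish (cumulative counts)
counts = [0, 10, 20, 35, 60, 100, 150, 210]
print(encoder_triggers(counts, 50))
```

Because the threshold is expressed in counts rather than seconds, the inspection interval along the track stays constant even as the train accelerates or decelerates.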


Real-time data reduction and optimization

  • Compression — Ideal for recording, streaming, and cloud-based AI workflows
  • Feature extraction — Common in machine vision to minimize readout and host processing by focusing only on relevant image regions
  • HDR processing — Converts 10–16-bit input to 8-bit output in real time
  • Parallel operation — Simultaneous binning and full ROI processing for efficient mixed-resolution acquisition
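As a rough illustration of the HDR bit-depth reduction mentioned above, here is a minimal sketch using a simple gamma curve (not Gidel's actual pipeline, which runs in real time in FPGA hardware and uses more sophisticated operators):

```python
import numpy as np

def tonemap_to_8bit(raw, in_bits=12, gamma=0.5):
    """Compress a high-bit-depth image to 8-bit with a gamma curve.

    Normalizes the input to [0, 1], applies a gamma of 0.5 to lift
    shadow detail, and rescales to the 8-bit range.
    """
    norm = raw.astype(np.float64) / (2**in_bits - 1)  # scale to [0, 1]
    mapped = norm ** gamma                            # boost shadows
    return (mapped * 255).round().astype(np.uint8)

raw = np.array([[0, 1024, 4095]], dtype=np.uint16)  # 12-bit sample values
print(tonemap_to_8bit(raw))
```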

Note: Early or crude attempts at compression reduced data volumes but at the cost of quality, struggling to deliver images that were “good enough”. High-quality compression and feature extraction achieve both goals.

Left: Generated by conventional processing; Right: Generated using Gidel’s FPGA processing library
– Courtesy Gidel

FantoVision Edge Computers

First, a one-paragraph tutorial on edge computing: by putting computing power closer to the data source (here, one or more digital cameras), processing can add value at the point of capture, either “making the decision at the edge” or reducing the data volume that must be transmitted to the central host PC, or both.

For high-bandwidth applications, FantoVision edge computers integrate high-end image acquisition with real-time image processing and/or compression.

Powered by an NVIDIA Jetson™ embedded computer, with optional pre-processing and compression capabilities.

With Gidel’s InfiniVision™ open frame grabber flow, over 100 sensors can be simultaneously synchronized and processed.

Gidel FantoVision Edge Computers – Courtesy Gidel

Modular and customizable variants also available

… for those who want flexibility to extend and tailor beyond the off-the-shelf solutions

For example, instead of building your own frame grabber, Gidel lets you implement and control your acquisition interface directly in the FPGA.


Other applications: Food processing

Low-latency inline execution – since decisions must happen within milliseconds

Also:

Courtesy Gidel

Most-recent award

Gidel’s Quality+ Compression technology has been named one of the Top 10 Innovations of 2025 by inVISION Magazine. It achieves 1:10 compression ratios while preserving original image quality, ensuring critical details remain intact for applications where lossless accuracy is essential. Latency is under a single frame, with processing at over 1 GPixel/s.
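To put those numbers in context, some back-of-envelope arithmetic (assuming 8-bit monochrome pixels, an assumption for illustration): at 1 GPixel/s the raw stream is roughly 1 GB/s, and a 1:10 ratio brings that down to about 100 MB/s, which fits comfortably on a 10 GigE link.

```python
def compressed_bandwidth_mb_s(pixels_per_s, bits_per_pixel, ratio):
    """Back-of-envelope output bandwidth after compression, in MB/s."""
    raw_bytes_per_s = pixels_per_s * bits_per_pixel / 8
    return raw_bytes_per_s / ratio / 1e6

# 1 GPixel/s, 8-bit mono, 1:10 compression
print(compressed_bandwidth_mb_s(1e9, 8, 10))
```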

Courtesy Gidel

Complex products… happy to guide you

Given their performance and flexibility, it’s not easy to illustrate everything these Gidel products can do in a simple overview. If you prefer a bottom-up approach, dig into the spec sheets and study the details. Alternatively, tell us about your application, and we’ll do a top-down analysis to guide you to a solution with optimal cameras, grabbers, computers, and tools. Whether by phone at 978-474-0044 or by the web form below, we’re happy to advise.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

About you: We want to hear from you! We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics. What would you like to hear about? Drop a line to info@1stvision.com with the topics you’d like to know more about.

#framegrabber

#FPGA

#edge computing

#video compression

Z-Trak Express 5k profiles per second

For 3D in-line measurement and inspection applications across various industries such as battery, automotive, lumber inspection, factory automation, logistics, and more. Constant profile rate of 5,000 profiles per second, along with real-time processing. Cost-effective, eye-safe red or blue lasers. Up to 1,700 mm field-of-view. Up to 675 mm Z-range.

Z-Trak Express laser line profiler series
– Courtesy Teledyne DALSA

Newest series in large Z-Trak product line

Teledyne DALSA now offers four distinct 3D laser profiler series in its Z-Trak product line. Each is a compact, high-performance 3D laser profiler sensor delivering high-resolution height measurements using laser triangulation techniques, and each is factory calibrated to ensure accurate, consistent results.
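The triangulation principle behind these profilers can be sketched with a simplified pinhole geometry. The parameters below are illustrative; in practice the factory calibration handles all of this for you:

```python
import math

def height_from_shift(shift_px, px_size_mm, mag, theta_deg):
    """Height change inferred from the laser line's shift on the sensor.

    shift_px   : line displacement on the sensor, in pixels
    px_size_mm : sensor pixel pitch, in mm
    mag        : optical magnification (sensor mm per object mm)
    theta_deg  : triangulation angle between laser and camera axes
    Simplified pinhole model for illustration only; real profilers
    rely on factory calibration rather than this formula.
    """
    shift_obj = shift_px * px_size_mm / mag  # shift in object space, mm
    return shift_obj / math.tan(math.radians(theta_deg))

# 10 px shift, 5 um pixels, 0.1x magnification, 30 degree angle
print(round(height_from_shift(10, 0.005, 0.1, 30), 3))  # height in mm
```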

IP67 protection for harsh environments

Each is built with IP67 protection to withstand harsh industrial environments, making it suitable for applications such as factory automation, metrology, surface and semiconductor inspection, and parts inspection.

Contact us

Z-Trak Express in particular

Key features – Courtesy Teledyne DALSA
Additional features and capabilities – Courtesy Teledyne DALSA

Detailed specs for each member of the series

You can find all the specs in the datasheets. For all Z-Trak laser profilers, see the Z-Trak product family overview. Then drill in on any specific family for more information.

If you know it’s new Express products you want, see Z-Trak Express 3D laser profiler.

Sometimes a single Z-Trak profiler is enough

But for some applications one might want to combine two or more sensors, in various spatial configurations:

Combine multiple units according to application requirements – Courtesy Teledyne DALSA

To simplify deployment and reduce costs, the Z-Trak Express 1K5 synchronizes multiple sensors via the data cables and supports content-based triggering for enhanced flexibility.

Which Z-Trak 3D camera is best for my application?

See our blog on key characteristics of 3D laser profiling, as a basis for choosing among Z-Trak 3D cameras. Or let us guide you through it by phone: 978-474-0044. Or use the form below. Together we can do this!


#3D

#profiler

High-resolution 5GigE SWIR Goldeye Pro cameras

Available in 5.3 and 3.2 MP sensor options, the 5 GigE interface delivers framerates exceeding 100 fps. Based on the TEC-version of Sony’s IMX992/993 SenSWIR sensors, the cameras are sensitive from 400nm to 1,700nm, so they are classified as VSWIR. With a single sensor covering both the visible and SWIR range, new economies are possible for applications needing that spectral coverage.

Even if you don’t need VIS and just want SWIR…

These are compelling for SWIR applications for two key reasons:

  1. They achieve impressive framerates for large sensors (by SWIR standards), at 115 fps for the 5.3 MP, and 159 fps for the 3.2 MP model. With a very affordable 5 GigE interface.
  2. Outstanding image quality ideal for demanding applications.

Goldeye Pro – Courtesy Allied Vision – a TKH Vision brand

Our previous “coming soon” blog summarized key features, suggested applications, and a first look, so below we’ll go deeper now that the products are fully released.

Thermoelectric cooling (TEC) for image quality

The InGaAs (indium gallium arsenide) sensors used for SWIR imaging deliver their best images when temperature-stabilized. Thermoelectric cooling (TEC) provides that stabilization, reducing dark noise and thermally generated dark current.
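A common rule of thumb for image sensors (an approximation for illustration, not an Allied Vision specification) is that dark current roughly doubles for every 7 °C or so of temperature rise, so each 7 °C of cooling roughly halves it:

```python
def dark_current_factor(delta_t_c, doubling_temp_c=7.0):
    """Relative dark current after cooling by `delta_t_c` degrees C.

    Uses the rule of thumb that dark current doubles every
    `doubling_temp_c` degrees; an approximation, not a sensor spec.
    """
    return 0.5 ** (delta_t_c / doubling_temp_c)

# Cooling the sensor 21 C below ambient: roughly 1/8th the dark current
print(dark_current_factor(21))
```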

Must an InGaAs SWIR camera use TEC?

No, it’s not a requirement. Allied Vision is a leading producer of SWIR cameras, and while many include thermoelectric cooling, certain models do not. See all Allied Vision SWIR cameras and note some are “TECless.”

Whether your application requires TEC or not comes down to framerates, duty cycles, and overall performance demands. As with many engineering and design questions, how good is good enough?

Overview

Here are the key specifications at a glance:

Goldeye Pro models at a glance – Courtesy Allied Vision – a TKH Vision brand

For price quote or more information on either:

Goldeye Pro 5GigE G5-320 VSWIR TEC1
Goldeye Pro 5GigE G5-530 VSWIR TEC1
Contact us

Features of note

Both models offer 12- and 10-bit sensor readout modes for achieving the highest possible dynamic range.

Both offer region-of-interest control to speed up frame rates and optimize bandwidth usage.

Both offer look-up tables to increase contrast.

Both provide digital binning and gain control to increase sensitivity.

And multiple user sets are available to simplify camera setup.
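As a generic illustration of the binning idea mentioned above (a simple 2×2 sum in software; the camera performs its binning internally):

```python
import numpy as np

def bin2x2(img):
    """Sum 2x2 pixel blocks: quarter the resolution, roughly 4x the
    signal per output pixel. Assumes even image dimensions."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

img = np.arange(16).reshape(4, 4)  # toy 4x4 "image"
print(bin2x2(img))
```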

Applications

SWIR sees things that visible imaging cannot. (Likewise for UV, but that’s beyond the scope of this piece.) SWIR imaging can be mapped to “pseudo” images for human viewing – if required.

More to the point, machine vision applications get the job done in real-time without human involvement. Sort those materials. Monitor the perimeter for intruders. Optimize crop irrigation. etc.

If SWIR pseudo images help to get the juices flowing, here are a few:

Visible vs. SWIR image pairs – Courtesy Allied Vision – a TKH Vision brand
Contact us for a quote

Vision Systems Design award-winner

While the award was earned in China, the cameras perform the same wherever they are deployed.


Whitepaper: Event-based sensing paradigm

Except for sometimes compelling line-scan imaging, machine vision has been dominated by frame-based approaches. (Compare Area-scan vs. Line-scan). With an area-scan camera, the entire two-dimensional sensor array of x pixels by y pixels is read out and transmitted over the digital interface to the PC host. Whether USB3, GigE, CoaXPress, CameraLink, or any other interface, that’s a lot of image data to transport.
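A quick back-of-envelope calculation shows how fast that adds up (example values assumed, not tied to any specific camera):

```python
def frame_data_rate_gb_s(width, height, bits_per_px, fps):
    """Raw transport load of an area-scan stream, in GB/s."""
    return width * height * bits_per_px / 8 * fps / 1e9

# e.g. a 5 MP mono camera (2448 x 2048), 8-bit, at 100 fps
print(frame_data_rate_gb_s(2448, 2048, 8, 100))  # ~0.5 GB/s
```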

Download whitepaper
Event-based sensing as alternative to frame-based approach

If your application is about motion, why transmit the static pixels?

The question above is intentionally provocative, of course. One might ask, “do I have a choice?” With conventional sensors, one really doesn’t, as their pixels just convert light to electrons according to the physics of CMOS, and readout circuits move the array of charges on down the interface to the host PC, for algorithmic interpretation. There’s nothing wrong with that! Thousands of effective machine vision applications use precisely that frame-based paradigm. Or the line-scan approach, arguably a close cousin of the area-scan model.

Consider the four-frame sequence to the left, imagining a golf-swing analysis application. Per the legend’s post-processing markup, the blue-tinged golfer, club, and ball are undersampled, in the sense that phases of the swing fall between frames and go unshown.

Meanwhile the non-moving tree, grass, and sky are needlessly re-sampled in each frame.

It takes an expensive high-frame-rate sensor and interface to significantly increase the sample rate. Plus storage capacity for each frame. And/or processing capacity – for automated applications – to separate the motion segments from the static segments.

With event-based sensing, introduced below, one can achieve the equivalent of 10k fps by transmitting just the pixels whose values change.

Images courtesy Prophesee Metavision.

Event-based sensing only transmits the pixels that changed

Unlike photography for social media or commercial advertising, where real-looking images are usually the goal, for machine vision it’s all about effective (automated) applications. In motion-oriented applications, we’re just trying to automatically control the robot arm, drive the car, monitor the secure perimeter, track the intruder(s), monitor the vibration, …

We’re NOT worried about color rendering, pretty images, or the static portions in the field of view (FOV). With event-based sensing, “high temporal imaging” is possible, since one need only pay attention to the pixels whose values change.
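A toy frame-differencing model conveys the idea (real event-based sensors such as Prophesee's fire asynchronously per pixel on log-intensity changes; this is only a conceptual sketch):

```python
import numpy as np

def events_from_frames(prev, curr, threshold=10):
    """Emit (y, x, polarity) events only where brightness changed.

    Polarity is +1 for brighter, -1 for darker. Static pixels
    produce nothing at all -- that is the bandwidth saving.
    """
    diff = curr.astype(int) - prev.astype(int)
    ys, xs = np.nonzero(np.abs(diff) >= threshold)
    return [(int(y), int(x), 1 if diff[y, x] > 0 else -1)
            for y, x in zip(ys, xs)]

prev = np.zeros((2, 3), dtype=np.uint8)
curr = prev.copy()
curr[0, 1] = 50  # one pixel brightened; all others are static
print(events_from_frames(prev, curr))
```

Only the single changed pixel generates output, regardless of how large the static background is.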

Consider the short video below. The left side shows a succession of frame-based images of a machine driven by an electric motor and belt. But that frame-based sequence is not a helpful basis for monitoring vibration with an eye to scheduling (or skipping) maintenance, or to anticipating breakdowns.

The right-hand sequence was obtained with an event-based vision sensor (EVS), and absolutely reveals components with both “medium” and “significant” vibration. Here those thresholds have triggered color-mapped pseudo-images, to aid comprehension. But an automated application could map the coordinates to take action, such as gracefully shutting down the machine, scheduling maintenance according to calculated risk, etc.

Courtesy Prophesee Metavision

Another example to help make it real:

Here’s another short video, which brings to mind applications like autonomous vehicles and security. It’s not meant to be pretty – it’s meant to show the sensor detects and transmits just the pixels that correlate to change:

Courtesy Prophesee Metavision

Event-based sensing – it really is a different paradigm

Even (especially?) if you are seasoned at line-scan or area-scan imaging, it’s a paradigm shift to understand event-based sensing. Inspired by human vision, and built on the foundation of neuromorphic engineering, it’s a new technology – and it opens up new kinds of applications. Or alternative ways to address existing ones.

Download whitepaper
Event-based sensing as alternative to frame-based approach

Download the whitepaper and learn more about it! Or fill out our form below – we’ll follow up. Or just call us at 978-474-0044.


#EVS

#event-based

#neuromorphic