Learn about CoaXPress and the frame rates achievable with Allied Vision's "Bonito PRO" cameras with CXP-6

Allied Vision Bonito Pro camera

What is CoaXPress, especially with "CXP-6" capability?

CoaXPress is an established industry standard for high-speed communication over coaxial cable.  The current version supports bit rates up to 6.25 Gbit/s over a single coaxial cable.  When used in parallel, two or more coaxial cables provide proportional speed gains.  The CoaXPress naming convention signifies the per-lane bit rate, as seen in the chart below.  For example, CXP-6 has a bit rate of 6.25 Gbit/s.  The "4 x" indicates the number of lanes; multiply the two and you get the total bit rate.

CXP CoaXPress speed chart

The new Allied Vision Bonito PRO cameras use four DIN 1.0/2.3 connectors on a CXP-6 interface (4 lanes x 6.25 Gbit/s).  This allows resolutions of 26 megapixels and above to exceed 70 frames per second (fps).  The first two Bonito PRO models (Bonito PRO X-2620 and X-1250) offer high resolution with 26.2 MP at 80 fps and 12.5 MP at 142 fps, respectively.
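To see why four CXP-6 lanes can sustain these rates, here is a rough back-of-the-envelope check in Python (not from Allied Vision's documentation; it assumes 8-bit pixels and CoaXPress's 8b/10b line coding, which leaves roughly 80% of the raw bit rate for image payload):

# Rough link-budget check for a 4-lane CXP-6 interface (illustrative numbers only).
lanes = 4
line_rate_gbps = 6.25        # raw bit rate per CXP-6 lane
payload_factor = 0.8         # 8b/10b line coding leaves roughly 80% for image payload

usable_gbps = lanes * line_rate_gbps * payload_factor   # ~20 Gbit/s usable
required_gbps = 26.2e6 * 8 * 80 / 1e9                   # 26.2 MP x 8 bit x 80 fps ~ 16.8 Gbit/s

print(f"usable: {usable_gbps:.1f} Gbit/s, required: {required_gbps:.1f} Gbit/s")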

The Bonito PRO cameras are ideal for a wide range of applications including 2D/3D surface inspection, high-speed printing, and PCB and electronics inspection.

Even faster frame rates can be achieved using the Bonito PRO X-1250 (12.5 MP) in partial scan mode.  With the height set to 768 lines, a rate of 503 fps can be achieved!

Bonito Pro frame rates

The following videos give a good sense of how this translates into real applications.

Full specifications for the Allied Vision Bonito Pro cameras can be found HERE, but main features and benefits include:

  • Sensors available in monochrome (X-1250B), color (X-1250C), and extended near-infrared (X-1250B NIR) models
  • On board defect pixel and 2D fixed pattern noise correction for improved image quality
  • Fan-less design for industrial imaging applications.
  • DIN 1.0 / 2.3 CoaXPress connections for secure operation in industrial environments.
  • Single cable solutions using trigger and power over CoaXPress (PoCXP)

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera selection.  With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

 

UPDATE: New video of the Bonito Pro detailing the multi-ROI function

Dalsa Nano M2450 polarized camera: Resolving defects that are undetectable with traditional imaging!

Dalsa Polarization camera

The first Genie Nano camera model with a quad-polarizer filter, using the Sony Pregius IMX250MZR 5.1 MP monochrome image sensor, is now available.  The Teledyne DALSA Nano M2450 camera incorporates the nanowire polarizer filter, allowing detection of both the angle and amount of polarized light.

What problems can the Nano M2450 polarized camera solve?

Polarized filtering can reduce the effects of reflections and glare from multiple directions and reveal otherwise undetectable features in the target scene.  Polarization enables detection of stress, birefringence, through-reflection, and glare from surfaces like glass, plastic, and metal.  Sony's newest image sensor, with its pixel-level polarizer structure, enables the detection of both the amount and angle of polarized light across a scene.

Polarizers at four different angles (90°, 45°, 135°, and 0°) are positioned one per pixel, and every block of four pixels forms a calculation unit.

How does polarization work?  Theory of operation

Polarization direction is defined by the direction of the electric field.  Light whose electric field oscillates perpendicular to the nanowire grid passes through the filter, while light oscillating parallel to the grid is rejected.

For polarized light, only the component of the light vector perpendicular to the angle of the nanowire grid passes.

polarization filter

For example, with a wire-grid polarizer filter oriented at 90°, maximum transmission occurs for light polarized at an angle of 0°.
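As a simple illustration of this relationship (the function and values below are illustrative, not taken from the camera documentation), Malus's law gives the transmitted intensity through an ideal linear polarizer:

# Malus's law sketch: transmitted intensity through an ideal linear polarizer.
# For a wire grid at 90 deg, the transmission axis is at 0 deg, so light polarized
# at 0 deg passes fully and light at 90 deg is blocked (ideal filter assumed).
import math

def transmitted(i0: float, light_angle_deg: float, grid_angle_deg: float) -> float:
    transmission_axis = grid_angle_deg - 90            # perpendicular to the wires
    theta = math.radians(light_angle_deg - transmission_axis)
    return i0 * math.cos(theta) ** 2

print(transmitted(1.0, 0, 90))    # ~1.0: fully transmitted
print(transmitted(1.0, 90, 90))   # ~0.0: blocked
print(transmitted(1.0, 45, 90))   # 0.5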

The polarizer filter is placed directly on the sensor's pixel array, beneath the micro-lens array.  Compared to designs that place the polarizer on top of the micro-lens array, this reduces the possibility of light at a polarized angle being misdirected into adjacent pixels (crosstalk) and detected at the wrong angle.

Dalsa polarizer filter theory

The Genie Nano's on-sensor polarizer filter is a 2 x 2 pattern, with each pixel in the block having a nanowire polarizer at a different angle (90°, 45°, 135°, and 0°).

The image output pattern of the monochrome camera is arranged in 2 x 2 pixel blocks as follows:

Pixel blocks

That is, the first output line is an alternating sequence of 0° and 135° pixels, and the following line alternates 45° and 90° pixels.

Given the proportion of light available through these four filters, any angle of polarized light can be calculated. Any given state of polarization can be decomposed into two linearly polarized waves in perpendicular directions, and the state of polarization is determined by the relative amplitude and phase difference between the two component waves.

Calculations on the 2×2 filter blocks result in a single pixel for each polarizer filter angle, so the resulting image is one quarter of the original image resolution. For example, with an original image of 2464×2056, the resulting image is 1232×1028 (original buffer width/2 and original buffer height/2) for a single polarizing angle.
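As a minimal sketch of how these quarter-resolution images can be turned into polarization information, the Python/NumPy code below assumes a raw mosaic laid out as described above (first row 0°/135°, second row 45°/90°) and computes the linear Stokes parameters, degree of linear polarization (DoLP), and angle of linear polarization (AoLP). The function names and layout offsets are illustrative and should be adjusted to match the actual sensor readout:

# Minimal sketch: compute per-block polarization from a raw polarizer-mosaic frame.
# Assumes the 2x2 layout described above (row 1: 0/135 deg, row 2: 45/90 deg);
# adjust the slicing offsets to match the actual camera output.
import numpy as np

def demosaic_polarization(raw: np.ndarray):
    """Split an HxW raw mosaic into four quarter-resolution angle images."""
    i0   = raw[0::2, 0::2].astype(np.float32)
    i135 = raw[0::2, 1::2].astype(np.float32)
    i45  = raw[1::2, 0::2].astype(np.float32)
    i90  = raw[1::2, 1::2].astype(np.float32)
    return i0, i45, i90, i135

def stokes(i0, i45, i90, i135):
    """Linear Stokes parameters, degree and angle of linear polarization."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)                      # total intensity
    s1 = i0 - i90                                           # 0 deg vs 90 deg
    s2 = i45 - i135                                         # 45 deg vs 135 deg
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-6)    # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)                         # angle of linear polarization (radians)
    return s0, dolp, aolp

# Usage: for a 2464 x 2056 raw frame, each output image is 1232 x 1028.
raw = np.random.randint(0, 255, (2056, 2464), dtype=np.uint8)  # placeholder frame
s0, dolp, aolp = stokes(*demosaic_polarization(raw))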

resulting image

Teledyne Dalsa offers a polarization demo user interface, making it easy to test polarization techniques for various applications.  This includes the ability to see the results of various processing algorithms on the summed images.

Dalsa Polarization demo
As part of the demo program, images can be displayed with pseudo-color mapping

In summary, the new Dalsa Nano M2450 polarized camera can help resolve defects that traditional imaging cannot detect!  Contact 1st Vision to arrange a camera demo, in which we will also provide the polarization demo software, or to discuss your application.  Or click HERE to request a quote.

Need line scan?  With the addition of the Genie Nano polarized model, Teledyne DALSA is the first company to offer polarization for both area scan and line scan (Piranha™4 polarization) cameras.

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera selection.  With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Related Posts

Dalsa line scan polarization camera makes invisible visible!

Teledyne Dalsa TurboDrive 2.0 breaks past GigE limits now with 6 levels of compression

Imaging Basics – Calculating Exposure time for machine vision cameras

calculate camera exposure

In any industrial camera application, one key setting is the camera's exposure time.  If it is set arbitrarily, the resulting image may be blurry due to movement in the scene we are imaging.  To optimize our settings, we can calculate the longest exposure time that still eliminates blur while maximizing scene brightness.  In this blog post, we will explain the effects of exposure and calculate it for a given application.

First, let's explain camera exposure.  Exposure time, or shutter speed, is the amount of time you let light fall on the image sensor.  The longer the exposure time, the more you 'expose' the sensor, charging up the pixels to make them brighter.  In photography cameras, shutter speeds are usually given as a fraction of a second, like 1/60th, 1/125th, or 1/1000th of a second, a convention that comes from the film days.  In industrial cameras, exposure time is normally given in milliseconds, i.e., that same fraction of a second expressed directly (1/60 sec ≈ 0.0167 seconds, or 16.7 ms).

So how does this relate to blur?  Blur is what you get when your object moves relative to the sensor, traversing two or more pixels during the exposure time.

You see this when you take a picture of something moving faster than the exposure time can freeze.  In the image to the left, we have a crisp picture of the batter, but the ball is moving very fast, causing it to appear blurry.  The exposure in this case was 1/500 sec (2 ms), but the ball moved many pixels during that time.

The faster the shutter speed, the less the object moves relative to where it started.  In machine vision, cameras are fixed so they don't move; what we are worried about is the object moving during the exposure time.

pixel blur diagram
Array of pixels – Movement of an object during exposure across pixels = Pixel blur

Depending on the application, it may or may not be sensitive to blur.  For instance, say you have a camera with a pixel array of 1280 pixels along the x-axis, and your object spans 1000 pixels on the sensor.  If the object moves 1 pixel to the right during the exposure, it has moved 1 pixel out of 1000; this is what we call "pixel blur".  Visually, you cannot notice this.  If we are simply viewing a scene, with no machine vision algorithms making decisions on the image, and the object moves only a small fraction of its size during the exposure, we probably don't care!

Now assume you are measuring this object using machine vision algorithms.  Movement becomes more significant, because you now have uncertainty about the actual size of the object.  If your tolerances are within 1/1000, you are OK.  However, if your object was only 100 pixels and it moved 1 pixel, a viewing application might still be fine, but a measurement application is now off by 1%, and that might not be tolerable!

In most cases, we want crisp images with no pixel blur.  The good news is that this is relatively easy to calculate!  To calculate blur, you need to know the following:

  • Camera resolution in pixels (in the direction of travel)
  • Field of view (FOV)
  • Speed of the object
  • Exposure time

Then you can calculate how many pixels the object will move during the exposure using the following formula:

B = Vp * Te * Np / FOV

Where:
B = Blur in pixels
Vp = part velocity
FOV = Field of view in the direction of motion
Te = Exposure time in seconds
Np = number of pixels spanning the field of view

For example, if Vp is 1 cm/sec, Te is 33 ms, Np is 640 pixels, and the FOV is 10 cm, then:

B = 1 cm/sec * .033 sec * 640 pixels / 10cm = 2.1 pixels

In most cases, blurring becomes an issue past 1 pixel.  For precision measurements, even 1 pixel of blur may be too much, requiring a faster exposure time.
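As a quick sanity check, here is a minimal Python sketch of the formula above (the helper names are illustrative); it also rearranges the formula to give the longest exposure that keeps blur under a chosen limit:

# Minimal sketch of the blur formula above; helper names are illustrative.
def blur_pixels(part_velocity, exposure_s, pixels_across_fov, fov):
    """B = Vp * Te * Np / FOV. Velocity and FOV must use the same length unit."""
    return part_velocity * exposure_s * pixels_across_fov / fov

def max_exposure_s(max_blur_px, part_velocity, pixels_across_fov, fov):
    """Rearranged for the longest exposure that keeps blur under max_blur_px."""
    return max_blur_px * fov / (part_velocity * pixels_across_fov)

print(blur_pixels(1, 0.033, 640, 10))        # ~2.1 pixels, matching the example above
print(max_exposure_s(1, 1, 640, 10) * 1e3)   # ~15.6 ms for at most 1 pixel of blur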

1st Vision has over 100 years of combined experience; contact us for help calculating the correct exposure.

Pixel blur calculator

Contact us

Related Blog posts that you may also find helpful are below: 

Imaging Basics: How to Calculate Resolution for Machine Vision

Imaging Basics – Calculating Lens Focal length

CCD vs CMOS industrial cameras – Learn how CMOS image sensors excel over CCD!

CMOS image sensors are now the image sensor of choice in machine vision industrial cameras!  But why is this?

Allied Vision conducted a nice comparison between CCD and CMOS cameras showing the advantages in the latest Manta cameras.

Until recently, CCD was generally recommended for better image quality with the following properties:

  • High pixel homogeneity, low fixed pattern noise (FPN)
  • Global shutters for machine vision applications requiring very short exposure times

In the past, CMOS image sensors were chosen for their own advantages:

  • High frame rates and lower power consumption
  • No blooming or smear image artifacts, in contrast to CCD image sensors
  • High Dynamic Range (HDR) modes for acquisition of contrast-rich and extremely bright objects

Today, CMOS image sensors offer many more advantages in industrial cameras versus CCD image sensors, as detailed below.

The key overall advantage is better image quality than earlier CMOS sensors, due to higher sensitivity, lower dark noise, lower spatial noise, and higher quantum efficiency (QE), as seen in the specifications comparing a CCD and a CMOS camera.

Sony ICX655 CCD vs. Sony IMX264 CMOS sensor

Comparing the specifications between CCD and CMOS  industrial cameras, the advantages are clear.

  • Higher quantum efficiency (QE) – 64% vs. 49%, where higher is better at converting photons to electrons
  • Pixel well depth (saturation capacity, µe.sat) – 10,613 electrons (e-) vs. 6,600 e-, where a higher well depth is beneficial
  • Dynamic range (DYN) – CMOS provides almost 17 dB more dynamic range, partly a result of the greater pixel well depth along with lower noise (see the rough estimate after this list)
  • Dark noise – CMOS is significantly lower than CCD, at only 2 electrons vs. 12!
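As a rough illustration of where the dynamic range advantage comes from, the sketch below estimates dynamic range as 20·log10(well depth / temporal dark noise); this idealized ratio is for illustration only and will not exactly match measured EMVA 1288 figures:

# Idealized dynamic-range estimate from the table above: DR ~ 20*log10(well depth / dark noise).
# Measured EMVA 1288 figures will differ somewhat; numbers are for illustration only.
import math

def dynamic_range_db(full_well_e: float, dark_noise_e: float) -> float:
    """First-order dynamic range in dB from full-well capacity and temporal dark noise."""
    return 20 * math.log10(full_well_e / dark_noise_e)

print(f"CMOS (IMX264): {dynamic_range_db(10613, 2):.1f} dB")   # ~74.5 dB
print(f"CCD  (ICX655): {dynamic_range_db(6600, 12):.1f} dB")   # ~54.8 dB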

Images are always worth a thousand words!  Below are several comparison images contrasting the latest Allied Vision CMOS industrial cameras vs CCD industrial cameras.

Several of the characteristics above contribute to the dynamic range of today's CMOS image sensors, which can provide higher-fidelity images with better dynamic range and lower dark noise, as seen in this image comparison of a couple of electronic parts.

The comparison above illustrates how higher contrast can be achieved with high dynamic range and low noise in the latest CMOS industrial cameras

  • High noise in the CCD image causes low contrast between characters on the integrated circuit, whereas the CMOS sensor provides higher contrast.
  • The increased dynamic range of the CMOS image allows darker and brighter areas in an image to be seen.  The battery (left part) is not as saturated as in the CCD image, allowing more detail to be observed.

Current CMOS image sensors eliminate several artifacts and provide more useful images for processing.  The images below show an example of a PCB with illuminated LEDs imaged with a CCD vs. a CMOS industrial camera.

CMOS images exhibit less blooming of bright areas (the LEDs in the image, for example), less smearing (the vertical lines seen in the CCD image), and lower noise (as seen in the darker areas), providing higher overall contrast

  • Smearing (the vertical lines seen in the CCD image) is eliminated with CMOS.  Smear has always been a troublesome artifact of CCDs.
  • The dynamic range inherent to CMOS sensors keeps the LEDs from saturating as much as in the CCD image, allowing more detail to be seen.
  • Lower noise in the CMOS image, as seen in the bottom line graph, yields a cleaner image.

More advantages of new CMOS image sensors include:

  • Higher frame rates and shutter speeds than CCD, resulting in less image blur with fast-moving objects
  • The much lower cost of CMOS sensors translates into much lower-cost cameras!
  • Improved global shutter efficiency

CMOS image sensor manufacturers are also designing sensors that easily replace CCD sensors, making for an easy transition with lower cost and better performance.  Allied Vision has several new cameras replacing current CCDs, with more to come!  Below are a few popular cameras and image sensors that have recently been crossed over to CMOS image sensors.

The Sony ICX424 and Sony ICX445 (1/3″ sensors), found in the Manta G-032 and Manta G-125 cameras, are now replaced by the Sony IMX273 in the Manta G-158 camera, keeping the same sensor size.  (Read more here)

The Sony ICX424 (1/3″ sensor) can also be replaced by the Sony IMX287 (1/2.9″ sensor), whose 6.9 µm pixels closely match the older ICX424's 7.4 µm pixels.  The Allied Vision Manta G-040 is a nice solution with all the benefits of the latest CMOS image sensor technology.  View the short videos below for the highlights.

 

Contact us

Related Posts

What are the attributes to consider when selecting a camera and its performance?

Allied Vision Manta G-040 & G-158 provide great replacements to legacy CCD cameras

Upgrade your 5MP CCD (Sony ICX625) camera for higher performance with an Allied Vision Mako G-507 (IMX264)