Imaging Basics – Calculating Exposure Time for Machine Vision Cameras


In any industrial camera application, one key setting is the camera's exposure time.  When this is set arbitrarily, the resulting image may be blurry due to movement in the scene we are imaging.  To optimize our settings, we can calculate the minimum exposure time that eliminates blur while maximizing scene brightness.  In this blog post, we will explain the effects of exposure and show how to calculate it for a given application.

First, let's explain camera exposure.  Exposure time, or shutter speed, is the amount of time you let light fall on the image sensor.  The longer the exposure time, the more you 'expose' the sensor, charging up the pixels to make them brighter.  In photography cameras, shutter speeds are usually given as a fraction of a second, like 1/60, 1/125, or 1/1000 of a second, a convention that comes from the film days.  In industrial cameras, exposure time is normally given in milliseconds, which is simply that same fraction expressed as a time (e.g. 1/60 sec ≈ 0.0167 seconds, or about 16.7 ms).
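
To make the conversion concrete, here is a tiny Python sketch (the helper name is ours, purely for illustration, not from any camera SDK):

```python
# Convert a photography-style shutter speed of 1/denominator seconds to milliseconds.
def shutter_to_exposure_ms(denominator: float) -> float:
    return 1000.0 / denominator

for d in (60, 125, 1000):
    print(f"1/{d} s -> {shutter_to_exposure_ms(d):.2f} ms")
# 1/60 s -> 16.67 ms, 1/125 s -> 8.00 ms, 1/1000 s -> 1.00 ms
```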

So how does this relate to blur?  Blur is what you get when your object moves relative to the sensor, crossing two or more pixels during the exposure time.

You see this when you take a picture of something moving too fast for the exposure time to freeze the motion.  In the image to the left, we have a crisp picture of the batter, but the ball is moving very fast, causing it to appear blurry.  The exposure in this case was 1/500 sec (2 ms), but the ball moved many pixels during that exposure.

The faster the shutter speed, the less the object moves relative to where it started.  In machine vision, cameras are fixed so they don't move; what we are worried about is the object moving during the exposure time.

Depending on the application, it may or may not be sensitive to blur.  For instance, say you have a camera with a pixel array of 1280 pixels along the x-axis, and your object spans 1000 pixels on the sensor.  If the object moves 1 pixel during the exposure, it ends up 1 pixel to the right of where it started.  It has moved 1 pixel out of 1000, and this is what we call "pixel blur".  Visually, however, you cannot notice this.  If we have an application in which we are just viewing a scene, no machine vision algorithms are making decisions on the image, and the object moves only a very small fraction of its total size during the exposure, we probably don't care!

Array of pixels – Movement of an object during exposure across pixels = Pixel Blur

Now assume you are measuring this object using machine vision algorithms.  Movement becomes more significant because you now have uncertainty in the actual size of the object.  If your tolerances can accommodate 1 part in 1000, you are OK.  However, if your object was only 100 pixels across and it moved 1 pixel, a viewing application might still be fine, but a measurement application is now off by 1%, and that might not be tolerable!

In most cases, we want crisp images with no pixel blur.  The good part is that this is relatively easy to calculate!  To calculate blur, you need to know the following:

  • Camera resolution in pixels (in the direction of travel)
  • Field of view (FOV)
  • Speed of the object
  • Exposure time

Then you can calculate how many pixels the object will move during the exposure using the following formula:

B = Vp * Te * Np / FOV

Where:
B = Blur in pixels
Vp = part velocity (in the same length unit as the FOV, per second)
FOV = Field of view in the direction of motion
Te = Exposure time in seconds
Np = number of pixels spanning the field of view

For example, if Vp is 1 cm/sec, Te is 33 ms, Np is 640 pixels, and the FOV is 10 cm, then:

B = 1 cm/sec * 0.033 sec * 640 pixels / 10 cm ≈ 2.1 pixels
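
For readers who prefer code, here is a minimal Python sketch of the same arithmetic (the function names are ours, for illustration only); it also rearranges the formula to find the longest exposure that stays within a given blur budget:

```python
# Pixel blur: B = Vp * Te * Np / FOV, with Vp and FOV in the same length unit.
def pixel_blur(vp, te_s, np_pixels, fov):
    return vp * te_s * np_pixels / fov

# Rearranged: Te = B * FOV / (Vp * Np), the longest exposure (in seconds)
# that keeps blur at or below max_blur_px.
def max_exposure_s(max_blur_px, vp, np_pixels, fov):
    return max_blur_px * fov / (vp * np_pixels)

print(pixel_blur(vp=1.0, te_s=0.033, np_pixels=640, fov=10.0))           # ~2.1 pixels
print(max_exposure_s(max_blur_px=1.0, vp=1.0, np_pixels=640, fov=10.0))  # ~0.0156 s (about 15.6 ms)
```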

In most cases, blurring becomes an issue beyond 1 pixel.  In precision measurements, even 1 pixel of blur may be too much, and a faster exposure time is needed.

1st Vision has over 100 years of combined experience.  Contact us for help calculating the correct exposure.

Pixel blur calculator

Contact us

Related Blog posts that you may also find helpful are below: 

Imaging Basics: How to Calculate Resolution for Machine Vision

Imaging Basics – Calculating Lens Focal length

Which Industrial camera would you use in low light?

Our job as imaging specialists is to help our customers make the best decisions on which industrial camera and image sensor works best for their application.  This is not a trivial task as there are many data points to consider, and in the end, a good image comparison test provides the true answer.  In this blog post, we conduct another image sensor comparison for low light applications, testing a long-time favorite, the e2v EV76C661 Near Infrared (NIR) sensor, against the newer Sony Starvis IMX178 and Sony Pregius IMX174 image sensors using IDS Imaging cameras.

An industrial camera can be easily selected based on resolution and frame rate, but image sensor performance is more challenging to judge.  We can collect data points from the camera's EMVA1288 test results and spectral response charts, but one cannot conclude what is best for the application from a single data point.  In many cases, several data points need to be reviewed to start making an educated decision.

We started this review comparing 3 image sensors to determine which ones would perform best in low light applications.

Below is a chart comparing the e2v EV76C661 NIR, Sony Starvis IMX178, and Sony Pregius IMX174 image sensors found in the IDS Imaging UI-3240NIR, UI-3880CP, and UI-3060CP cameras, using EMVA1288 data as a starting point.  This provides us with accurate image sensor data to evaluate.

Table 1: Sensor comparison data
Camera Spectral Response curves

We also look at the Quantum Efficiency (QE) curves to see how each sensor performs across the light spectrum, as seen to the left.  (As a note, QE is the efficiency of converting photons into an electrical charge, i.e. electrons.)

For this comparison, our objective is to determine which sensor will perform best in low light applications with broadband light.  From Table 1, the IMX178 has a very low absolute sensitivity (abs sensitivity) threshold, needing only ~1 photon to make an adequate charge; however, its pixels are small (2.4um), so they may not gather light as well as larger pixels.  It does, however, have the best dark noise characteristics.  In comparison, the e2v sensor has an absolute sensitivity of 9.9 photons (not as good as 1 photon) but a larger pixel size (bigger is better for collecting light).  The IMX174 proves to be interesting as well, with the largest pixel at 5.86um and the highest QE at 533nm.

Using the data from the spectral response curves, however, gives us more insight across the light spectrum.  Given we are using a NIR-enhanced camera, we will have significantly more conversion of light into charge on the sensor across most of the spectrum.  In turn, we expect to see brighter images from the IDS UI-3240NIR camera with the e2v NIR sensor.

As a note, one more data point to look at is the pixel well depth.  Smaller pixels will saturate faster, making the image brighter, so if other variables are close, this may also be taken into consideration.

As one can see, this is not trivial.  Evaluating many of the data points can give us some clues, but testing is really what it takes!  So, let's now compare the images to see how they look.

The following images were taken with the same exposure, lens, and f-stop in an identical low light environment.  In the 2nd image, the e2v image sensor in the IDS UI-3240CP NIR provides the brightest image, as some of the data points started to indicate.  The IDS UI-3060CP-M (IMX174) is second best.

IDS UI-3880CP (IMX178)
IDS UI-3240CP NIR (e2v )
IDS UI-3060CP-M (Sony Pregius IMX174)

In low light situations, we can always add camera gain, but we pay the price of adding noise to the image.  Depending on the camera's image sensor, some can provide more gain than others, which is another factor to review when considering gain.  We also need to take read noise into account, as it will be amplified along with the signal.  The next part of our test is to turn up the gain and see how the cameras compare.

The following set of images was taken again with the same lens + f-stop, lighting, but with gain at max for each camera.

IDS UI-3880CP with 14.5X gain
IDS UI-3240CP NIR with 4X gain
IDS UI-3060CP-M with 24X gain

The IDS UI-3060CP-M has the highest gain available, yet still keeps the read noise relatively low at 6 electrons.  In low light, with gain, this gives us a nice image in nearly dark environments.

Conclusion
We can review the data points until we are blue in the face, and they can be very confusing.  We can, however, take in all the data and make more educated decisions on which cameras to test.  For example, in the first test, we had a good idea the NIR sensor would perform well by looking at the QE curves along with other data.  In our second test, we saw the UI-3060CP offered 24X gain versus the others while still keeping read noise low, indicating we would get a relatively clean image.

In the end, 1st Vision's sales engineers will help provide the needed information and conduct testing for you!  We spend a lot of time in our lab in order to provide first-hand information to our customers!

Contact us

1st Vision is the leading provider of industrial imaging components with over 100 years of combined imaging experience.  Do not hesitate to contact us to discuss your applications!

Related Blogs

How do I sort through all the new industrial camera image sensors to make a decision? Download the sensor cheat sheet!

 

Just a few footnotes regarding this blog post:

Magnification of the images differs due to sensor size.  Working distance of the cameras was kept identical in all setups, and each camera was focused accordingly at that distance.

This topic can be very complex!  If we were to dig even deeper, we would take into consideration the charge conversion of the pixel, which affects sensitivity beyond just QE.  That's probably another blog post!

As a reference, this image was taken with an iPhone and adjusted to best represent what my eye saw during our lab test.  Note that the left container with markers was indistinguishable to the human eye.

Clipart courtesy of clipartextra.com

There is NO such thing as a “Megapixel” machine vision camera lens!.. Say what??


There has been a lot written about the ratings of machine vision lenses; 1stVision has created white papers that describe this in detail.  However, the lens industry continues to use the marketing term "Megapixel Machine Vision Camera Lenses."

Let’s get this out of the way right now. 

There is NO such thing as a Megapixel Machine vision Camera Lens.

But since it is me against the world, let me explain why sometimes a 12 MP lens is really the same resolution as a 5 MP quality lens.

The first thing to understand is that lenses are evaluated on their resolving power, which is a spatial resolution.  For lenses used in the industrial imaging marketplace, this is normally given in "Line Pairs per mm" (LP/mm).  The reason it is expressed this way is that to resolve a pixel of size X um, you need a resolution of 1 / (2X), where X is the pixel size and the factor of 2 comes from the Nyquist limit.  So to resolve a 5um pixel we need a resolution of 1 / (5um * 2) per line pair, which works out to 100 LP/mm.
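
As a quick sketch of that arithmetic (the helper name is ours, for illustration only):

```python
# Required resolving power for a given pixel size, using the Nyquist reasoning above:
# one line pair spans two pixels, so LP/mm = 1000 / (2 * pixel size in um).
def required_lp_per_mm(pixel_size_um: float) -> float:
    return 1000.0 / (2.0 * pixel_size_um)

print(required_lp_per_mm(5.0))   # 100.0 LP/mm, matching the 5um example
print(required_lp_per_mm(3.45))  # ~144.9 LP/mm, for the 3.45um pixels discussed below
```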

A graph showing a lens's performance is shown in the plot below, plotting intensity (contrast) vs. LP/mm.  This is called the Modulation Transfer Function (MTF).  Note that as the LP/mm increases and the lens can no longer resolve it as well, the intensity falls off.  This measurement varies with f-stop and angle of light, so real MTF charts will indicate these parameters.  This is the only real way to empirically evaluate how a lens will perform.

You can visually compare lenses, but to truly compare Brand A vs. Brand B you would have to test them under identical conditions.  You can't compare Brand A's MTF to Brand B's if you don't know the parameters used to test them (you need the same camera, the same lighting, the same focus, the same f-stop, the same gain, etc.).  Unfortunately, it is very hard to get that information from most lens manufacturers.

1, 3, 5, 9, 12 Megapixel lens?

Tamron 12MP MPY lenses
Compliments of Computar

What does this mean?  As an example, Sony has recently introduced a new line of image sensors with 5MP, 9MP, and 12MP resolutions.  Many clients have called and said, "I want to use the 12MP sensor, so please spec a lens that can do 12MP."  Unfortunately, this isn't the right way to think about it, as each of these sensors uses a 3.45um pixel.  They ALL need the same quality lens!  Why?  Because it is the size of the pixel you have to resolve that dictates the quality of the lens!

In the above situation, the 5MP sensor needs a 2/3" format lens, the 9MP needs a 1" lens, and the 12MP needs a 1.1" format lens.  (Multiply the size of the pixel by the number of horizontal and vertical pixels to get the sensor format – more on format HERE.)  However, each of these sensors needs about 144 LP/mm of resolving power because of its 3.45um pixel size.  As much as I detest the nomenclature of "5MP lens," I do appreciate what Fuji does, as they state: "…this series of high-resolution lenses delivers 3.45um pixel pitch (equivalent to 5MP) on a 2/3" sensor."  Now this makes more sense!
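
Here is a short sketch of that format arithmetic (the function name and the 4096 x 3000 resolution are our own illustrative assumptions, not figures from the sensor datasheets):

```python
# Sensor dimensions from pixel size and pixel counts.
def sensor_size_mm(pixel_size_um: float, h_pixels: int, v_pixels: int):
    width_mm = pixel_size_um * h_pixels / 1000.0
    height_mm = pixel_size_um * v_pixels / 1000.0
    diagonal_mm = (width_mm ** 2 + height_mm ** 2) ** 0.5
    return width_mm, height_mm, diagonal_mm

# Assumed example: a ~12MP sensor with 3.45um pixels at 4096 x 3000
w, h, d = sensor_size_mm(3.45, 4096, 3000)
print(f"{w:.1f} x {h:.1f} mm, diagonal {d:.1f} mm")  # ~14.1 x 10.4 mm, ~17.5 mm diagonal (roughly a 1.1" format)
```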

In turn, if you see a lens described as a "Megapixel Machine Vision" lens, question it!  It really needs to be stated in terms of its capability to resolve the pixel size, in LP/mm!

Contact us

1stVision has a staff of machine vision veterans who are happy to explain this in more detail and help you specify the best lens for your application!   Contact 1st Vision!

Additional References:
For a comprehensive understanding on “How to Choose a Lens”, download our whitepaper HERE.  

Blog post:  Demystifying Lens performance specifications

Blog post:  Learn about FUJI’s HF-XA-5M (5 Megapixel) lens series which resolves 3.45um pixel pitch sensors! Perfect for cameras with Sony Pregius image sensors.

Use the 1st Vision lens selector, which allows you to filter by focal length, format, and manufacturer, to name a few.

How much resolution do I lose using a color industrial camera in a mono mode? Is it really 4X?

Many clients call us about doing measurements on grey scale data but want to use a color machine vision camera because they want the operator or client to see a more 'realistic' picture.  For instance, if you are looking at PCBs and need to read characters with good precision, but also need to see the colors on a ribbon cable, you are forced to use a color camera.

In these applications, you could extract a monochrome image from the color sensor for processing and use the color image for cataloging and visualization.  But the question is, how much data is lost by using a color camera in mono mode?

First, the user must understand how a color camera works and how it gets its picture.  Non-3-CCD cameras use a Bayer filter, which is a matrix of red, green, and blue filters, one over each pixel.  In each group of 4 pixels there are 2 green, 1 red, and 1 blue pixel.  (The eye is most sensitive to green, so green gets more samples to simulate that response.)

Bayer image sensor

To get a color image out, each output pixel is computed as a weighted sum of its nearest neighbor pixels, which is known as Bayer interpolation.  The accuracy of the color on these cameras depends on what the original image was and how the camera's algorithms interpolated the set of red, green, and blue values for each pixel.

To get monochrome out, one technique is to break the image down into Hue, Saturation, and Intensity, and take the intensity as the grey scale value.  Again, this is a mathematical computation.  The quality of the output depends on the original image and the algorithms used to compute it.
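
As a rough illustration of that step (a minimal sketch, not any camera's actual pipeline; the function names are ours), here is how an intensity channel, or alternatively a green-weighted luma, can be pulled out of demosaiced RGB data:

```python
import numpy as np

def intensity_from_rgb(rgb: np.ndarray) -> np.ndarray:
    # HSI-style intensity: the plain average of the R, G, and B channels.
    return rgb.astype(np.float32).mean(axis=-1)

def luma_from_rgb(rgb: np.ndarray) -> np.ndarray:
    # Alternative weighted sum (BT.601 luma), weighting green most heavily.
    weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)
    return rgb.astype(np.float32) @ weights

# Tiny 2x2 example patch: red, green, blue, and white pixels
patch = np.array([[[255, 0, 0], [0, 255, 0]],
                  [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)
print(intensity_from_rgb(patch))  # [[85., 85.], [85., 255.]]
print(luma_from_rgb(patch))       # approx [[76.2, 149.7], [29.1, 255.0]]
```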

Mono image sensor

An image such as the above will give an algorithm a hard time, as you are flipping between grey scale values of 0 and 255 for each pixel (assuming the checkerboard lines up with the pixel grid).  Since the output of each pixel is based on its nearest neighbors, you could be replacing a black pixel with four white ones!

Grey scale image

On the other hand, if we had an image with a ramp of pixel values, in other words each pixel is, say, 1 value less than the one next to it, the average of the nearest neighbors would be very close to the pixel it is replacing.
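
To make that intuition concrete, here is a small, purely hypothetical sketch (not a real demosaicing algorithm) that replaces each pixel with the average of its four immediate neighbors and measures how far off the result is for a checkerboard versus a ramp:

```python
import numpy as np

def neighbor_average(img: np.ndarray) -> np.ndarray:
    # Replace each pixel with the mean of its up/down/left/right neighbors (edge-padded).
    padded = np.pad(img.astype(np.float32), 1, mode="edge")
    return (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0

checker = (np.indices((6, 6)).sum(axis=0) % 2) * 255         # alternating 0/255 pixels
ramp = np.tile(np.arange(0, 60, 10), (6, 1)).astype(float)   # each pixel differs by 10 from its neighbor

print(np.abs(neighbor_average(checker) - checker).max())  # 255.0: a black pixel becomes pure white
print(np.abs(neighbor_average(ramp) - ramp).max())        # 2.5: the ramp is nearly unchanged
```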

What does all this mean in real world applications?  Let's take a look at two images, both from the same brand of camera, where one uses the 5MP Sony Pregius IMX250 monochrome sensor and the other uses the color version of the same sensor.  The images were taken with the same exposure and an identical setup.  So how do they compare when we blow them up to the pixel level, taking the monochrome output from the color camera and comparing it to the monochrome camera?

Grey Scale Analysis
(Left) Color Image  |  (Right) Monochrome Image

In comparing the color image (Left), if you expand the picture, you can see that the middle of the E is wider. The transition is not as close to a step function as you would want it to be. The vertical cross section is about 11 pixels with more black than white. Comparing the monochrome image (Right), the vertical cross section is closer to about 8 pixels.

Conclusion:

If you need pixel level measurement, and there is no need for a color image, USE A MONOCHROME MACHINE VISION CAMERA.

If you need to do OCR (as in this example), either the color or the monochrome image above would work just fine, provided you have enough pixels to start with and your spatial resolution is adequate.

CLICK HERE FOR A COMPLETE LIST OF MACHINE VISION CAMERAS

Do you lose 4x in resolution as some people claim?  Not with the image I used above.  Maybe with the checkerboard pattern, but if you have multiple pixels across the feature you need to measure, you might be OK using a color camera; it is really application dependent!  This post is meant to make you aware of the resolution loss specifically, and 1st Vision can help you make decisions; contact us for a discussion.

Contact us

1stVision is the leading provider of machine vision components and has a staff of experienced sales engineers to discuss your application.  Please do not hesitate to contact us for help with everything from calculating the resolution you need to calculating focal lengths for your application.

Related links and blog posts

How do 3CCD cameras improve color accuracy and spatial resolution over Bayer cameras?

Calculating resolution for machine vision

Use the 1st Vision camera filters to help ID the desired camera