What cables should I use with a machine vision camera?

While not an exact figure, we estimate that about half of our clients’ problems with machine vision camera connections, dropped frames, etc. come back to a cabling issue. This is especially true for USB and GigE cameras.

In most of these cases, the issue is that the user is using a poor, low-quality cable that was not made for the high-speed and/or long-distance demands of the application. Most inexpensive camera cables available via mail order are not made for high-speed, highly reliable data transfer. If your phone isn’t transferring at the full USB3 bandwidth, you normally don’t care; you probably don’t even know. But when you purchase a high-speed USB3 camera and can’t achieve its full frame rate, or achieve it only intermittently, this becomes a big issue.

This is the reason 1stVision offers ‘machine vision/industrial’ USB3 and GigE cables. These cables are tested to specification, come with screw locks so the connectors cannot work loose, use larger-gauge wire, and are over-molded. They are designed to tolerate (some) twisting and bending; in short, they are built for industrial use.

Signal amplitude (the voltage of the signal in the cable) falls off with distance and frequency. Ethernet, for instance, is specified to 100 meters, so your cable should work with devices up to 100 meters apart. Without the proper cable, however, you will not maintain the full 1000 Mbit/s data transfer rate; depending on the distance, you might only get 50% of that speed.
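As a rough sanity check (not a substitute for proper cabling), you can compare the data rate your camera needs against the bandwidth your link is actually delivering. The short Python sketch below uses placeholder numbers for resolution, frame rate, and pixel format, and the ~10% protocol-overhead allowance is an assumption rather than a GigE Vision figure:

```python
# Rough bandwidth sanity check for a GigE camera.
# All numbers are illustrative placeholders, not the specs of any particular camera.
width, height = 1280, 1024       # image size in pixels
bytes_per_pixel = 1              # 8-bit mono
frame_rate = 60                  # frames per second

required_mbps = width * height * bytes_per_pixel * frame_rate * 8 / 1e6

for link_mbps in (1000, 500):            # healthy GigE link vs. a degraded one
    usable_mbps = link_mbps * 0.9        # assume ~10% lost to protocol overhead
    verdict = "OK" if required_mbps <= usable_mbps else "expect dropped frames"
    print(f"Camera needs ~{required_mbps:.0f} Mbit/s; "
          f"{link_mbps} Mbit/s link (~{usable_mbps:.0f} usable): {verdict}")
```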

Finally, consider the cost if your machine vision camera is part of an instrument or product being sold to your clients. We see far too many clients try to save $30 on a cable, only to spend thousands of dollars troubleshooting a problem that could easily have been avoided with the proper part. That is to say nothing of the cost to their own customers when the system isn’t working, and the hit to their reputation for building an unreliable system.

Here is our advice:

  1. If you are in an industrial setting, you are compromising the reliability and robustness of your system if you are not using an ‘industrial’ cable. Even if you are not operating at the camera’s maximum speed, you should have these cables. They are not that much more expensive than mail-order cables: tens of dollars rather than a few dollars.
  2. If you are using USB3 cables, you should really be using ‘industrial’ cables. Typical ‘inexpensive’ USB3 cables are not reliable beyond 2 m, and only 1 m for USB-C connector types. If you are using USB3 specifically to get this protocol’s higher speeds, then you absolutely need ‘industrial’ cables; inexpensive cables are not reliable for high-speed data transmission.
  3. If you are in a lab environment, with the cable never moving and only covering a short distance, then a high-quality ‘inexpensive’ Cat 6 patch cable will work. There is a difference among inexpensive Ethernet cables: the one that came folded up in the box with a security camera is NOT what you should use, but a reputable mail-order vendor selling high-quality patch cables is fine.

CLICK HERE for GigE cable specs and a quote

CLICK HERE for USB3 cable specs and a quote

Don’t be penny wise and pound foolish. At 1stVision, we offer these cables not to enrich ourselves (there is not much profit in a $30 cable), but to make sure our clients’ systems work well.

 

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Contact us for help with your specification and for pricing.

Ph:  978-474-0044  /  info@1stvision.com  / www.1stvision.com

Related Blogs & Technical resources

Quick Reference Imaging poster download

 

Optotune liquid lenses – 5 case examples for machine vision


Liquid lens technology, with its ability to change focus on the order of milliseconds, is opening up a host of new applications in both machine vision and the life sciences. It is gaining interest from a wide cross-section of applications and adapts easily to standard machine vision lenses.

Liquid lens technology alone provides nice solutions, but when combined with advanced controls, many more applications can be solved.

To learn the fundamentals of liquid lens technology and download a comprehensive white paper, read our previous blog HERE.


In this blog, we will highlight several case application areas for liquid lens technology.

Case 1:  Applications requiring various focus points and extended depth of field:  This covers many applications, such as logistics, packaging, and code reading on packages. Optotune liquid lenses provide the ability to use preset focus points, auto-focus, or feedback from distance sensors. In the example below, two presets are programmed and toggled to read 2D codes at various heights, essentially extending the depth of field.
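As a back-of-the-envelope illustration of how two such presets might be chosen, the sketch below estimates the added optical power (in diopters) needed to refocus a lens from its nominal working distance to two different package heights, treating the liquid lens as a thin add-on element. The distances and preset names are made-up example values, not from any specific application:

```python
# Approximate diopter offsets for two focus presets, treating the liquid lens
# as a thin add-on element. All distances are hypothetical example values.
def preset_diopters(d_ref_m, d_target_m):
    """Added optical power (dpt) to shift focus from d_ref_m to d_target_m (meters)."""
    return 1.0 / d_target_m - 1.0 / d_ref_m

d_ref = 0.50   # main lens nominally focused at 0.5 m
for name, d_target in [("tall package", 0.30), ("short package", 0.45)]:
    print(f"{name}: {preset_diopters(d_ref, d_target):+.2f} dpt")
```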


Case 2:  3D imagery of transparent materials / hyperfocal (extended DOF) images:  Using an Optotune liquid lens in conjunction with a Gardasoft TR-CL180 controller, a sequence of images can be taken with the focus point stepped between each image. This technique is known as focus stacking. It can build up a 3D image of transparent environments such as cell tissue or liquid for analysis, and it can also be used to find particles suspended in liquids.


A Z-stack of images can also be used to extract 3D data (depth from focus) and to compute a hyper-focus, or extended depth of field (EDOF), image.

The EDOF technique requires taking a stack of individually well-focused images, preferably synchronized with one flash per image. An example is shown below, with the rendered hyper-focus image at right.
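A minimal focus-stacking sketch is shown below, assuming you already have an aligned Z-stack of grayscale images on disk (the filenames are placeholders). For each pixel it keeps the value from the sharpest slice and records which slice that was, giving a crude all-in-focus image and a depth index map; it is a generic illustration, not the specific algorithm used by the Optotune or Gardasoft products:

```python
import cv2
import numpy as np

# One image per focus step (placeholder filenames), already aligned.
paths = ["step_00.png", "step_01.png", "step_02.png"]
stack = [cv2.imread(p, cv2.IMREAD_GRAYSCALE).astype(np.float32) for p in paths]

# Local sharpness per slice: absolute Laplacian response, smoothed to reduce noise.
sharpness = np.stack([
    cv2.GaussianBlur(np.abs(cv2.Laplacian(img, cv2.CV_32F)), (9, 9), 0)
    for img in stack
])                                                     # shape: (slices, H, W)

images = np.stack(stack)
depth_index = np.argmax(sharpness, axis=0)             # sharpest slice per pixel
all_in_focus = np.take_along_axis(images, depth_index[None, ...], axis=0)[0]

cv2.imwrite("all_in_focus.png", all_in_focus.astype(np.uint8))
cv2.imwrite("depth_index.png", (depth_index * (255 // max(len(stack) - 1, 1))).astype(np.uint8))
```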

Case 3:  Lens inspection:  Liquid lenses can be used to inspect other lenses, such as those in cell phones, for dust and scratches by looking through the lens stack.

For this application, a liquid lens is used in conjunction with a telecentric lens, taking images at different heights through the lens stack.

Case 4:  Bottle / container inspection:  Optotune liquid lenses can be used to image the bottoms of glass bottles or containers of various heights.

In this example, the camera always looks in at the neck of the bottle, but the bottom is at a different height for each container.

Case 5:  Large surface inspections with variation in height:  Items ranging from PCBs to LCDs are not flat, have components of various heights, and need to be inspected at high magnification (typically using lenses with minimal DOF). Optotune liquid lenses, using preset focus points, are a perfect solution.


Machine vision applications using Optotune liquid lenses and controllers are endless!

These applications are just the tip of the iceberg; many more exist, but this should give you a good idea of the capabilities. Gardasoft TR-CL controllers are fully GigE Vision compliant, so any compatible GigE Vision client image processing software, such as Cognex VisionPro, Teledyne DALSA Sherlock, or National Instruments LabVIEW, can be used easily.

Click to contact

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Contact us for help with your specification and for pricing.

Ph:  978-474-0044  /  info@1stvision.com  / www.1stvision.com

Related Video

Related Blog Posts

Learn how liquid lenses keep continuous focus on machine vision cameras when the working distance changes.

5 benefits of using strobed lighting for machine vision applications


Pulsing (aka strobing) a machine vision LED light is a powerful technique that can benefit machine vision systems in various ways.

This blog post outlines 5 benefits you will get from pulsing an LED light head. Gardasoft is an industry leader in strobe controllers capable of driving third-party LED light heads or custom LED banks for machine vision.

1 – Increase the LED light output

It is common to use pulsed light to “freeze” motion for high-speed inspection. But when the light is on only for short bursts, it is possible to increase the light output beyond the LED manufacturer’s specified maximum, using a technique called “overdrive”. In many cases the LED can be driven at up to 10X its rated constant current, producing much brighter pulses of light. When synchronized with the camera acquisition, a brighter scene is captured.
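As a rough illustration of why short pulses leave headroom for overdrive, the sketch below computes the duty cycle and average current for a pulsed LED. The pulse width, trigger rate, and current values are made-up examples, not ratings for any particular light head, and whether a given overdrive level is safe still depends on the light and the controller:

```python
# Duty cycle and average current for a pulsed LED (illustrative numbers only).
pulse_width_us = 100          # LED on-time per trigger, in microseconds
trigger_rate_hz = 100         # triggers per second
peak_current_a = 5.0          # overdriven pulse current (hypothetical)
rated_continuous_a = 1.0      # continuous-current rating (hypothetical)

duty_cycle = (pulse_width_us * 1e-6) * trigger_rate_hz   # fraction of time the LED is on
average_current_a = peak_current_a * duty_cycle

print(f"Duty cycle: {duty_cycle:.1%}")
print(f"Average current: {average_current_a:.2f} A vs. {rated_continuous_a:.1f} A continuous rating")
```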

2 – Extend the life of the LED 

As mentioned in the first benefit, strobing an LED light head only turns the LED on for a short period of time. In many cases the duty cycle is very low, which extends the life of the LED and slows its degradation, keeping the scene at a consistent brightness for years. (For example, at a 10% duty cycle the LED is on only one tenth of the time, so its operational life is extended roughly tenfold.)

3 – Ambient Light control

Ambient light conditions frequently interfere with machine vision measurements, and these issues can be solved by pulsing and overdriving the system’s LEDs. For example, overdriving the LED by 200% doubles the light intensity and enables the camera exposure to be halved, reducing the effects of ambient light by a factor of 4. The end result is that the camera’s exposure captures light mainly from the given LED source and NOT ambient light.

4 – High speed imaging and Increased depth of field

Motion blur in images of fast-moving objects can be eliminated with appropriate pulsing of the light. In some cases a short camera exposure will be enough to freeze motion (read our blog on calculating camera exposure), but it may leave too little light with constant illumination. “Overdriving” a light can boost its output to up to 10x its brightness rating in short pulses. The increased brightness can allow the whole system to run faster because of the reduced exposure times. Higher light output may also allow the aperture to be reduced to give better depth of field.

Extended depth of field (DOF) is achieved because a brighter light allows the aperture to be stopped down.

Gardasoft controllers include patented SafePower and SafeSense technology, which prevents overdriving from damaging the light.

5 – Multi-light schemes & computational imaging

Lighting controllers can be used to reduce the number of camera stations: several lights are set up at a single camera station and pulsed at different intensities and durations in a predefined sequence.

Generate edge and texture images using shape from shading

Each lighting scheme can highlight particular features in the image. Multiple measurements can be made at a single camera station instead of needing multiple stations, reducing mechanical complexity and saving money. For example, sequentially triggering 3 different types of lighting could allow a single camera to acquire specific images for bar code reading, surface defect inspection, and a dimensional check in rapid succession.

Pulsing can also be used for computational imaging, where a component is illuminated sequentially by 4 different lights from different directions. The resulting images are combined to exclude the effect of random reflections from the component surface. Contact us and ask for the white paper on computational imaging to learn more.
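As one simple illustration of combining directional images (not necessarily the method described in the white paper), the sketch below takes the per-pixel minimum across four directionally lit images, which tends to suppress specular glints that appear under only one light direction, and also computes the per-pixel range as a crude direction-dependent texture cue. The filenames are placeholders:

```python
import cv2
import numpy as np

# Four images of the same part, each lit from a different direction (placeholder names).
paths = ["light_north.png", "light_east.png", "light_south.png", "light_west.png"]
imgs = np.stack([cv2.imread(p, cv2.IMREAD_GRAYSCALE).astype(np.float32) for p in paths])

# A specular glint is bright under only one direction, so the per-pixel minimum
# across directions suppresses it while diffuse surface detail remains.
reflection_suppressed = imgs.min(axis=0)

# The per-pixel spread across directions highlights direction-dependent features,
# a crude edge/texture cue in the spirit of shape from shading.
directional_contrast = imgs.max(axis=0) - imgs.min(axis=0)

cv2.imwrite("reflection_suppressed.png", reflection_suppressed.astype(np.uint8))
cv2.imwrite("directional_contrast.png", directional_contrast.astype(np.uint8))
```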

The images on the right (top and bottom) were taken with bright-field and dark-field lighting. The image on the left is the result of computational imaging combining the two lighting techniques, allowing particles and a water bubble to be seen.

Pulsed multiple lighting schemes can also benefit line scan imaging by using different illumination sources to capture alternate lines. Individual images for each illumination source are then easily extracted using image processing software.

In conclusion, strobe controllers provide many benefits and can save more money in the overall setup than the cost of the controller itself!

1st Vision has additional white papers on the following topics. Contact us and ask for any of these informative white papers – simply send an email and request one or all of them.
1 – Practical use of LED controllers
2 – Intelligent Lighting for Machine Vision Systems
3 – LED Strobe lighting for ITS systems
4 – Liquid Lens technology and controllers for machine vision
5 – Learn about computational imaging and how CCS Lighting can help

Contact us

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Related Topics

Learn how liquid lenses keep continuous focus on machine vision cameras when the working distance changes.

White Paper – Key benefits in using LED lighting controllers for machine vision applications

Imaging Basics – Calculating Exposure time for machine vision cameras


In any industrial camera application, one key setting is the camera’s exposure time. If this is set arbitrarily, the resulting image may be blurry due to movement in the scene we are imaging. To get the most out of the camera, we can calculate the longest exposure time that still avoids blur, which also maximizes scene brightness. In this blog post, we will explain the effects of exposure and show how to calculate it for a given application.

First, let’s explain camera exposure. Exposure time, or shutter speed, is the amount of time you let light fall on the image sensor. The longer the exposure time, the more you ‘expose’ the sensor, charging up the pixels to make them brighter. In photography cameras, shutter speeds are usually given as a fraction of a second, such as 1/60th, 1/125th, or 1/1000th of a second; the convention comes from the film days. In industrial cameras, exposure time is normally given in milliseconds, which is simply that same fraction of a second expressed as a duration (i.e. 1/60 sec ≈ 0.0167 seconds, or about 16.7 ms).

So how does this relate to blur? Blur is what you get when your object moves relative to the sensor, crossing two or more pixels during the exposure time.

You see this when you take a picture of something moving faster than the exposure can freeze. In the image to the left, we have a crisp picture of the batter, but the ball is moving very fast, causing it to appear blurred. The exposure in this case was 1/500 sec (2 ms), but the ball moved across many pixels during that exposure.

The faster the shutter speed, the less the object moves relative to where it started. In machine vision, the camera is fixed, so it doesn’t move; what we are worried about is the object moving during the exposure time.

Pixel blur diagram: array of pixels – movement of an object across pixels during the exposure = pixel blur

Depending on the application, it may or may not be sensitive to blur. For instance, say you have a camera with a pixel array of 1280 pixels along the x-axis, and your object spans 1000 pixels on the sensor. If the object moves 1 pixel during the exposure, it ends up shifted 1 pixel to the right; it has moved 1 pixel out of 1000. This is what we call “pixel blur”. Visibly, you cannot notice it. If we are just viewing a scene and no machine vision algorithms are making decisions on the image, then as long as the object moves only a very small fraction of its size during the exposure, we probably don’t care!

Now assume you are measuring this object using machine vision algorithms. Movement becomes more significant, because you now have uncertainty about the actual size of the object. If your tolerances are within 1/1000, you are still OK. But if your object spans only 100 pixels and it moves 1 pixel, a viewing application might still be fine, while a measurement application is now off by 1%, and that might not be tolerable!

In most cases, we want crisp images with no pixel blur. The good news is that this is relatively easy to calculate! To calculate blur, you need to know the following:

  • Camera resolution in pixels (in the direction of travel)
  • Field of view (FOV)
  • Speed of the object
  • Exposure time

Then you can calculate how many pixels the object will move during the exposure using the following formula:

B = Vp * Te * Np / FOV

Where:
B = Blur in pixels
Vp = part velocity
FOV = Field of view in the direction of motion
Te = Exposure time in seconds
Np = number of pixels spanning the field of view

For example, if Vp is 1 cm/sec, Te is 33 ms, Np is 640 pixels, and the FOV is 10 cm, then:

B = 1 cm/sec * 0.033 sec * 640 pixels / 10 cm = 2.1 pixels

In most cases, blurring becomes an issue beyond 1 pixel. In precision measurements, even 1 pixel of blur may be too much, and a faster exposure time is needed.
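The formula is easy to drop into a quick script. The sketch below reproduces the worked example and also rearranges the formula to give the longest exposure that keeps blur within a chosen budget (the numbers are simply the example values from above):

```python
def blur_pixels(vp, te, np_pixels, fov):
    """Blur in pixels: B = Vp * Te * Np / FOV (Vp and FOV in the same length units)."""
    return vp * te * np_pixels / fov

def max_exposure(b_max, vp, np_pixels, fov):
    """Longest exposure (seconds) that keeps blur at or below b_max pixels."""
    return b_max * fov / (vp * np_pixels)

# Worked example: 1 cm/sec, 33 ms exposure, 640 pixels across a 10 cm FOV.
print(blur_pixels(vp=1.0, te=0.033, np_pixels=640, fov=10.0))    # ~2.1 pixels
print(max_exposure(b_max=1.0, vp=1.0, np_pixels=640, fov=10.0))  # 0.015625 s, ~15.6 ms
```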

1st Vision’s sales engineers have over 100 years of combined experience; contact us to help you calculate the correct exposure.

Pixel blur calculator

Contact us

Related Blog posts that you may also find helpful are below: 

Imaging Basics: How to Calculate Resolution for Machine Vision

Imaging Basics – Calculating Lens Focal length