Components needed for machine vision and industrial imaging systems

Machine vision and industrial imaging systems are used in applications ranging from automated quality control inspection, bottle filling, and robot pick-and-place to autonomous drone and vehicle guidance, patient monitoring, agricultural irrigation control, medical testing, metrology, and countless more.

Imaging systems typically include at least a camera and lens, and often also one or more of the following: specialized lighting, adapter cards, cables, software, optical filters, a power supply, a mount, or an enclosure.

At 1stVision we’ve created a resource page intended to make sure that nothing in a planned imaging application has been missed.  There are many aspects on which 1stVision can provide guidance.  The main components to consider are outlined below.

Diverse cameras

Cameras: There are area scan cameras for visible, infrared, and ultraviolet light, used for static or moving scenes.  There are line scan cameras, often used for high-speed continuous web inspection.  Thermal cameras detect or measure heat.  SWIR cameras can identify the presence or even the characteristics of liquids.  The “best” camera depends on the part of the spectrum being sensed, together with considerations around motion, lighting, surface characteristics, etc.

An assortment of lens types and manufacturers

Lens: The lens focuses light onto the sensor, mapping the targeted Field of View (FoV) from the real world onto the array of pixels.  One must consider image format to pair a suitable lens with the camera.  Lenses vary in the quality of their light-passing ability, how close to or far from the target they can be, their weight (which matters on a robot arm), vibration resistance, etc.  See our resources on how to choose a machine vision lens.  Speak with us if you’d like assistance, or use the lens selector to browse for yourself.
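For a rough feel for the optics math, the focal length needed for a given field of view can be estimated with a thin-lens approximation.  Below is a minimal Python sketch; the function name and the sensor width, working distance, and FoV values are illustrative assumptions, not a recommendation for any particular camera or lens.

```python
def required_focal_length_mm(sensor_width_mm, working_distance_mm, fov_width_mm):
    """Thin-lens estimate: with magnification m = sensor / FoV,
    f ~= WD * m / (1 + m), i.e. sensor * WD / (FoV + sensor)."""
    return sensor_width_mm * working_distance_mm / (fov_width_mm + sensor_width_mm)

# Illustrative example: ~7.2 mm wide sensor (1/1.8" format),
# 500 mm working distance, 200 mm wide field of view
print(round(required_focal_length_mm(7.2, 500.0, 200.0), 1))  # ~17.4 mm
```

In practice you would round to the nearest available stock focal length and confirm that the lens’ image circle covers the sensor.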

Lighting: While ambient light is sufficient for some applications, specialized lighting may be needed to achieve sufficient contrast.  And it may not just be “white” light – Ultra-Violet (UV) or Infra-Red (IR) light, or other parts of the spectrum, sometimes works best to create contrast for a given application – or even to induce phosphorescence, scatter, or some other helpful effect.  Additional lighting components may include strobe controllers or constant current drivers to provide adequate and consistent illumination. See also Lighting Techniques for Machine Vision.

Optical filter: There are many types of filters that can enhance application performance, or even be critical for success.  For example, a “pass” filter lets only certain parts of the spectrum through, while a “block” filter excludes certain wavelengths.  Polarizing filters reduce glare.  And there are many more – for a conceptual overview, see our blog on how machine vision filters create or enhance contrast.

Don’t forget about interface adapters like frame grabbers and host adapters; cables; power supplies; tripod mounts; software; and enclosures. See the resource page to review all components one might need for an industrial imaging system, to be sure you haven’t forgotten anything.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and component selection.  With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Types of 3D imaging systems – and benefits of Time of Flight (ToF)

Time Of Flight Gets Precise: Whitepaper

2D imaging is long-proven for diverse applications, from bar code reading to surface inspection to presence-absence detection.  If you can solve your application goal in 2D, congratulations!

But some imaging applications are only well-solved in three dimensions.  Examples include robotic pick and place, palletization, drones, security applications, and patient monitoring, to name a few.

For such applications, one must select or construct a system that creates a 3D model of the object(s).  Time of Flight (ToF) cameras from LUCID Vision Labs are one way to achieve cost-effective 3D imaging in many situations.

ToF system setup: a light source and a sensor.

ToF is not about objects flying around in space! It is about using the time of flight of light: by measuring differences between light projected onto an object and the light reflected back to a sensor, differences in object depth can be ascertained.  With sufficiently precise registration to object features, a 3D “point cloud” of x,y,z coordinates can be generated – a digital representation of the real-world objects.  The point cloud is the essential data set enabling automated image processing, decisions, and actions.
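To make the point cloud idea concrete, here is a minimal sketch of back-projecting a depth image into x,y,z coordinates using the standard pinhole camera model.  The intrinsics (fx, fy, cx, cy) and the flat 1.5 m test scene are illustrative assumptions, not parameters of any particular ToF camera.

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an Nx3 point cloud using
    the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    points = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth return

# Illustrative 640x480 depth image of a flat scene 1.5 m away
depth = np.full((480, 640), 1.5)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(cloud.shape)  # (307200, 3)
```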

In this latest whitepaper we go into depth to learn:
1. Types of 3D imaging systems
2. Passive stereo systems
3. Structured light systems
4. Time of Flight systems

Let’s briefly put ToF in context with other 3D imaging approaches:

Passive Stereo: Systems with two cameras a fixed distance apart can triangulate by matching features in both images and calculating the disparity between them.  Alternatively, a robot-mounted single camera can take multiple images from different positions, as long as positional accuracy is sufficient to calibrate effectively.
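The triangulation itself is compact: with focal length f in pixels, baseline B between the cameras, and measured disparity d in pixels, depth is Z = f·B/d.  A minimal sketch, with all values chosen purely for illustration:

```python
def stereo_depth_m(focal_px, baseline_m, disparity_px):
    """Triangulated stereo depth: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Illustrative: 800 px focal length, 100 mm baseline, 40 px disparity
print(stereo_depth_m(800.0, 0.10, 40.0))  # 2.0 m
```

Because depth varies inversely with disparity, a one-pixel matching error costs more depth accuracy at long range than at short range.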

Challenges limiting passive stereo approaches include:

Occlusion: when part of the object(s) cannot be seen by one of the cameras, features cannot be matched and depth cannot be calculated.

Occlusion occurs when a part of an object cannot be imaged by one of the cameras.

Few/faint features: If an object has few identifiable features, no matching correspondence pairs may be generated, again preventing the essential depth calculations.

Structured Light: A clever response to the few/faint features challenge can be to project structured light patterns onto the surface.  There are both active stereo systems and calibrated projector systems.

Active stereo systems are like two-camera passive stereo systems, enhanced by the (active) projection of optical patterns, such as laser speckles or grids, onto the otherwise feature-poor surfaces.

Active stereo example using a laser speckle pattern to create texture on the object.

Calibrated projector systems use a single camera, together with calibrated projection patterns, to triangulate from the vertex at the projector lens.  A laser line scanner is an example of such a system.

Besides custom systems, there are also pre-calibrated structured light systems available, which can provide low cost, highly accurate solutions.

Time of Flight (ToF): While structured light can provide surface height resolution better than 10 μm, it is limited to short working distances. ToF can be ideal for applications such as people monitoring, obstacle avoidance, and materials handling, operating at working distances of 0.5 m – 5 m and beyond, with depth resolution requirements of 1 – 5 mm.

ToF systems measure the time it takes for light emitted from the device to reflect off objects in the scene and return to the sensor, for each point of the image.  Some ToF systems use pulse modulation (Direct ToF).  Others use continuous-wave (CW) modulation, exploiting the phase shift between emitted and reflected light waves to calculate distance.
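For CW modulation, distance follows from the measured phase shift as d = c·Δφ / (4π·f_mod), and is unambiguous only within c / (2·f_mod).  A minimal sketch; the 100 MHz modulation frequency and the phase value are illustrative assumptions:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def cw_tof_distance_m(phase_shift_rad, mod_freq_hz):
    """Continuous-wave ToF: d = c * phase / (4 * pi * f_mod)."""
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

# Illustrative: 100 MHz modulation, quarter-cycle (pi/2) phase shift
print(round(cw_tof_distance_m(math.pi / 2, 100e6), 3))  # ~0.375 m
print(C / (2 * 100e6))  # unambiguous range: ~1.5 m
```

Higher modulation frequencies improve depth resolution but shorten the unambiguous range, which is why some cameras combine multiple modulation frequencies.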

The new Helios ToF 3D camera from LUCID Vision Labs uses Sony Semiconductor’s DepthSense 3D technology. Download the whitepaper to learn about 4 key benefits of this camera and example applications, as well as its operating range and accuracy.

Download whitepaper – Time Of Flight Gets Precise

Have questions? Tell us more about your application and one of our sales engineers will contact you.

1st Vision’s sales engineers have an average of 20 years of experience to assist in your camera selection.  Representing the largest portfolio of industry leading brands in imaging components, we can help you design the optimal vision solution for your application.

Keys to Choosing the Best Image Sensor

Image sensors are the key component of any camera and vision system.  This blog summarizes the key concepts of a tech brief on parameters essential to sensor performance relative to imaging applications. For a comprehensive analysis of the parameters, you may read the full tech brief.

Download Tech Brief - Choosing the Best Image Sensor

While there are many aspects to consider, here we outline 6 key parameters:

  1. Physical parameters

    Resolution: The amount of information per frame (image) is the product of horizontal pixel count x and vertical pixel count y.  While consumer cameras boast of resolution like car manufacturers tout horsepower, in machine vision one just needs enough resolution to solve the problem – but no more.  Excess resolution means more sensor, more bandwidth, and more cost than you need.  Takeaway: Match sensor resolution to optical resolution relative to the object(s) you must image (see the sketch following this list).

    Aspect ratio: Whether 1:1, 3:2, or some other ratio, the optimal arrangement should correspond to the layout of your target’s field of view, so as not to buy more resolution than your application needs.

    Frame rate: If your target is moving quickly, you’ll need enough images per second to “freeze” the motion and keep up with the physical space you are imaging.  But as with resolution, one needs just enough speed to solve the problem, and no more – otherwise you over-specify, paying for a faster computer, cabling, etc.

    Optical format: One could write a thesis on this topic, but the key takeaway is to match the lens’ projection of focused light onto the sensor’s array of pixels, so it covers the sensor (and makes use of its resolution).  Sensor sizes and lens sizes often have legacy names left over from TV standards now decades old, so we’ll skip the details in this blog but invite the reader to read the linked tech brief or speak with a sales engineer to ensure the best fit.

  2. Quantum Efficiency and Dynamic Range

    Quantum Efficiency (QE): Sensors vary in their efficiency at converting photons to electrons, both by sensor quality and across wavelengths of light, so some sensors are better suited than others to a given application.

    Typical QE response curve

    Dynamic Range (DR): Factors such as Full Well Capacity and Read Noise determine DR, which is the ratio of the maximum signal to the minimum.  The greater the DR, the better the sensor can capture the range of bright-to-dark gradations in the application scene (a worked DR calculation appears in the sketch after this list).

  3. Optical parameters

    While some seemingly-color applications can in fact be solved more easily and cost-effectively with monochrome, in either case each silicon-based pixel converts light (photons) into charge (electrons).  Each pixel well has a maximum volume of charge it can handle before saturating.  After each exposure, the degree of charge in a given pixel correlates to the amount of light that impinged on that pixel.

  4. Rolling vs. Global shutter

    Most current sensors support global shutter, where all pixel rows are exposed at once, eliminating motion-induced distortion.  But the on-sensor electronics needed for global shutter carry certain costs, so for some applications it can still make sense to use rolling shutter sensors.

  5. Pixel Size

    Just as a wide-mouth bucket will catch more raindrops than a coffee cup, a larger physical pixel will admit more photons than a small one.  Generally speaking, large pixels are preferred.  But that requires the expense of more silicon to support the resolution for a desired x by y array.  Sensor manufacturers work to optimize this tradeoff with each new generation of sensors.

  6. Output modes

    While each sensor typically has a “standard” intended output at full resolution, many sensors offer additional switchable output modes such as Region of Interest (ROI), binning, or decimation.  ROI and decimation read out a defined subset of the pixels, while binning combines adjacent pixels; either way, fewer pixels are read out per frame, enabling a higher frame rate and allowing the same sensor and camera to serve two or more purposes (see the sketch below).  An example of binning would be a microscopy application in which a binned image at high speed is used to locate a target blob in a large field, before switching to full resolution for a high-quality detail image.
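To make a few of these parameters concrete, the sketch below computes a rule-of-thumb resolution requirement, dynamic range in dB, and a simple 2×2 binning operation.  The specific numbers (3 pixels per feature, a 10,000 e- full well, 2.5 e- read noise) are illustrative assumptions, not the specifications of any particular sensor.

```python
import math
import numpy as np

def required_pixels(fov_mm, smallest_feature_mm, px_per_feature=3):
    """Pixels needed along one axis so the smallest feature spans
    enough pixels to be detected reliably (~3 is a common rule of thumb)."""
    return math.ceil(fov_mm / smallest_feature_mm * px_per_feature)

def dynamic_range_db(full_well_e, read_noise_e):
    """DR (dB) = 20 * log10(full well capacity / read noise)."""
    return 20 * math.log10(full_well_e / read_noise_e)

def bin_2x2(img):
    """Sum 2x2 pixel blocks: quarter the pixel count, more signal per pixel."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

print(required_pixels(fov_mm=200, smallest_feature_mm=0.5))  # 1200 px
print(round(dynamic_range_db(10_000, 2.5), 1))               # 72.0 dB
print(bin_2x2(np.ones((480, 640))).shape)                    # (240, 320)
```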

For a more in-depth review of these concepts, including helpful images and diagrams, please download the tech brief.

Download tech brief - Choosing the Best Image Sensor

1st Vision’s sales engineers have an average of 20 years of experience to assist in your camera selection.  Representing the largest portfolio of industry leading brands in imaging components, we can help you design the optimal vision solution for your application.

Teledyne Dalsa Imaging – New Technology Showcase 2020

Teledyne Dalsa

A must-attend Virtual Event! Join us for all 6 Industrial Imaging Technology Sessions

While the Covid-19 pandemic has forced the cancellation of trade shows and conferences across the world, we have learned to adapt to keep up to date on new technologies for industrial machine vision and imaging. The engineering team at Teledyne Dalsa has continued to develop innovative imaging solutions during the pandemic, creating dozens of new imaging components that are driving advances in industrial machine vision, including machine learning and AI; extremely high-resolution and high-speed imaging; and 3D sensing and multi-spectral imaging, to name a few. Please join us for this multi-session, virtual event to learn more about the latest innovative imaging solutions.

Agenda Overview: Sign up for one or all six of the sessions

Tuesday, November 17, 2020, 9:00 AM (ET) – Clarity at High Speed – Performance Imaging
Tuesday, November 17, 2020, 10:30 AM (ET) – Connection is Everything – Camera/Data Interfaces
Wednesday, November 18, 2020, 9:00 AM (ET) – AI & Embedded Vision – Driving System Innovation
Wednesday, November 18, 2020, 10:30 AM (ET) – New Advances in 3D Sensing
Thursday, November 19, 2020, 9:00 AM (ET) – Beyond Sight! Non-Visible and Multi-Spectral Imaging
Thursday, November 19, 2020, 10:30 AM (ET) – Evolving CMOS Sensor Technology

UPDATE: Videos are now available on demand!

Clarity at High Speed – Performance Imaging
Tuesday, November 17, 2020 – 9:00AM (ET)

Connection is Everything – Camera/Data Interfaces
Tuesday, November 17, 2020 – 10:30AM (ET)

AI & Embedded Vision – Driving System Innovation
Wednesday, November 18, 2020 – 9:00AM (ET)

New Advances in 3D Sensing
Wednesday, November 18, 2020 – 10:30AM (ET)

Beyond Sight! – Non-Visible and Multi-Spectral Imaging
Thursday, November 19, 2020 – 9:00AM (ET)

Evolving CMOS Sensor Technology
Thursday, November 19, 2020 – 10:30AM (ET)

1st Vision’s sales engineers have an average of 20 years of experience to assist in your camera selection.  Representing the largest portfolio of industry leading brands in imaging components, we can help you design the optimal vision solution for your application.

1stVision is the largest distributor in North America for Teledyne Dalsa Imaging products. Contact us to discuss your application with our experienced technical advisors: CLICK HERE. Full data sheets on Teledyne Dalsa products can be found HERE.