The Manta G-158 camera features the 1.58 megapixel Sony IMX273 image sensor, which has a 3.45 µm pixel size and achieves a frame rate of 75.3 fps. The Manta G-040 camera features the 0.4 megapixel Sony IMX287 image sensor, which has a 6.9 µm pixel size and achieves a frame rate of 286 fps. Higher frame rates can be achieved on both models in burst mode.
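As a rough sanity check on those numbers, both frame rates sit right at the payload limit of the GigE interface for 8-bit pixels. A minimal sketch, assuming the published IMX273/IMX287 resolutions and an approximate usable GigE payload:

```python
# Assumed sensor resolutions and an approximate usable GigE payload;
# this only illustrates why full-resolution frame rates top out where they do.
GIGE_PAYLOAD = 120e6   # ~usable bytes/s on Gigabit Ethernet (assumption)

for name, w, h, fps in [("G-158 / IMX273", 1456, 1088, 75.3),
                        ("G-040 / IMX287", 728, 544, 286)]:
    rate = w * h * fps   # bytes per second at 8 bits per pixel
    print(f"{name}: {rate / 1e6:.0f} MB/s "
          f"({'within' if rate <= GIGE_PAYLOAD else 'above'} GigE payload)")
```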
Allied Vision Manta specifications and comparisons to older Sony CCD sensors are as follows:
Allied Vision Manta features include:
Power over Ethernet (PoE) options with Trigger over Ethernet for single-cable solutions
Angled-head and board-level variants allowing for custom OEM designs
Video-Iris lens control for challenging lighting conditions
Three look-up tables (LUTs)
GigE Vision compliant, with support for popular third-party image processing libraries including Cognex VisionPro, MathWorks MATLAB, and National Instruments
To learn more about the Allied Vision Manta cameras:
View more information on the G-158.
View more information on the G-040.
UPDATE: See this new video from Allied Vision (6/19/18)
1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!
Many clients call us about doing measurements on grey scale data but want to use a color machine vision camera because they want the operator or client to see a more ‘realistic’ picture. For instance, if you are inspecting PCBs and need to read characters with good precision, but also need to see the colors on a ribbon cable, you are forced to use a color camera.
In these applications, you could extract a monochrome image from the color sensor for processing and use the color image for cataloging and visualization. But the question is: how much data is lost by using a color camera in mono mode?
First, the user must understand how a color camera works and how it gets its picture. Single-sensor (non-3-CCD) cameras use a Bayer filter, a matrix of red, green, and blue filters placed over the pixels. In each group of 4 pixels there are 2 green, 1 red, and 1 blue pixel. (The eye is most sensitive to green, so the pattern uses more green pixels to mimic that response.)
To get a color image out, each output pixel is computed as a weighted sum of its nearest-neighbor pixels, a process known as Bayer interpolation. The color accuracy of these cameras depends on what the original image was and on how the camera's algorithms interpolated the set of red, green, and blue values for each pixel.
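As a concrete illustration, demosaicing is typically a single library call. A minimal sketch using OpenCV, where the raw file name, dimensions, and RGGB layout are assumptions:

```python
import cv2
import numpy as np

# Hypothetical 8-bit raw Bayer frame (RGGB layout and dimensions assumed).
raw = np.fromfile("frame.raw", dtype=np.uint8).reshape(1088, 1456)

# Demosaic: each output pixel's missing color values are interpolated
# from its nearest neighbors under the other filter colors.
bgr = cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR)
cv2.imwrite("frame_color.png", bgr)
```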
To get monochrome out, one technique is to break the image down into Hue, Saturation, and Intensity, and take the intensity as the grey scale value. Again, this is a mathematical computation. The quality of the output depends on the original image and the algorithms used to compute it.
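A minimal sketch of pulling a monochrome image back out of the demosaiced color result (file name assumed); note that each variant is a computed value, not a direct sensor reading:

```python
import cv2

# Load the demosaiced color image (file name is a placeholder).
bgr = cv2.imread("frame_color.png")

# OpenCV's grayscale conversion is a weighted luma: 0.299 R + 0.587 G + 0.114 B.
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)

# The V channel of an HSV decomposition (max of R, G, B) is another option;
# a strict HSI intensity would instead be the plain mean (R + G + B) / 3.
value = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 2]

cv2.imwrite("frame_mono.png", gray)
```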
An image such as the checkerboard above will give an algorithm a hard time, as you are flipping between grey scale values of 0 and 255 at each pixel (assuming the checkerboard lines up with the pixel grid). Since the output of each pixel is based on its nearest neighbors, you could be replacing a black pixel with 4 white ones!
On the other hand, if we had an image with a ramp of pixel values, in other words, each pixel one value less than the one next to it, the average of the nearest neighbors would be very close to the pixel it was replacing.
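A toy demonstration of both cases, replacing each pixel with the mean of its 4 nearest neighbors (a simple stand-in for the camera's real interpolation kernel):

```python
import numpy as np

def neighbor_average(img):
    """Replace each interior pixel with the mean of its 4 nearest neighbors."""
    out = img.astype(float).copy()
    out[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:]) / 4.0
    return out

# One-pixel checkerboard: every neighbor of a black pixel is white.
checker = (np.indices((8, 8)).sum(axis=0) % 2) * 255

# Gentle ramp: each pixel differs from its horizontal neighbor by 1 grey level.
ramp = np.tile(np.arange(8), (8, 1))

for name, img in [("checkerboard", checker), ("ramp", ramp)]:
    err = np.abs(neighbor_average(img) - img)[1:-1, 1:-1]
    print(name, "max error:", err.max())
# checkerboard max error: 255.0 (black pixels replaced by pure white)
# ramp max error: 0.0 (the neighbor average reproduces the original)
```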
What does all this mean in real-world applications? Let's take a look at two images, both from the same brand of camera: one using the 5 MP Sony Pregius IMX250 monochrome sensor, the other using the color version of the same sensor. The images were taken with the same exposure and an identical setup. So how do they compare when we blow them up to the pixel level, after taking the monochrome output from the color camera and comparing it to the monochrome camera?
(Left) Color image. (Right) Monochrome image.
Comparing the color image (left), if you expand the picture you can see that the middle bar of the E is wider: the transition is not as close to a step function as you would want it to be. The vertical cross-section is about 11 pixels, with more black than white. In the monochrome image (right), the vertical cross-section is closer to about 8 pixels.
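A hedged sketch of how such a cross-section could be measured; the file names, column index, and threshold below are all placeholders:

```python
import cv2
import numpy as np

# Placeholder file names for the two captures described above.
from_color = cv2.imread("imx250_color_as_mono.png", cv2.IMREAD_GRAYSCALE)
native_mono = cv2.imread("imx250_mono.png", cv2.IMREAD_GRAYSCALE)

def stroke_width(img, col, threshold=128):
    """Count dark pixels along one vertical slice through the middle of the E."""
    return int(np.sum(img[:, col] < threshold))

# Per the comparison above, expect roughly 11 pixels from the color-derived
# image and roughly 8 from the native monochrome image.
print("from color camera:", stroke_width(from_color, 100))
print("native monochrome:", stroke_width(native_mono, 100))
```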
Conclusion:
If you need pixel level measurement, and there is no need for a color image, USE A MONOCHROME MACHINE VISION CAMERA.
If you need to do OCR (as in this example), either the color or the monochrome image above would work just fine, provided you have enough pixels to start with and your spatial resolution is adequate.
Do you lose 4x in resolution, as some people claim? Not with the image I have used above. Maybe with the checkerboard pattern; but if you have multiple pixels across the feature you need to measure, you might be OK using a color camera. It is really application dependent! This post is meant to make you aware of the resolution loss specifically; 1st Vision can help with these decisions, so contact us for a discussion.
High speed machine vision camera applications can solve many problems, from diagnosing high speed packaging production lines to sports analytics to droplet characterization in spraying applications, to name a few.
These solutions require high frame rate cameras, but as in many machine vision applications, there are challenges that must be overcome to be successful.
Four challenges for high speed machine vision camera applications, along with their solutions, are presented below.
High Frame Rates are required to capture the event!
The key to high speed image capture is to stop motion by capturing enough image “frames” within short time periods to play them back slowly and analyze the event. To capture these frames you must first have a fast image sensor, but you must also be able to offload the image data from the camera to the host computer. Cameras using the CoaXPress (CXP) interface with appropriate sensors provide this solution. Below is an example of achievable frame rates using a Mikrotron EoSens 3CXP camera.
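A minimal sketch of the data-rate math behind that pairing; the resolution, frame rate, and link bandwidth below are illustrative assumptions, not published Mikrotron figures:

```python
# Illustrative numbers only -- substitute your camera's actual specs.
width, height = 1696, 1710   # pixels (assumed full-resolution mode)
bit_depth = 8                # bits per pixel
fps = 335                    # frames per second (assumed)

data_rate = width * height * bit_depth / 8 * fps   # bytes per second
print(f"camera output: {data_rate / 1e9:.2f} GB/s")

# Assume four aggregated CXP-6 links carrying ~2.4 GB/s of usable image data.
link_budget = 2.4e9
print("fits the link" if data_rate <= link_budget else "exceeds the link")
```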
Adequate light and a good image sensor are required!
To achieve high frame rates, very short exposure times are required. These short exposures do not allow much time for light to reach the image sensor. To overcome this, you need a strong light source and pixels that are very sensitive. High speed machine vision image sensors such as the Alexima AM41 and ON Semi LUPA3000, found in the Mikrotron EoSens 3CXP and EoSens 4CXP cameras respectively, solve this problem.
Image storage and an adequate computer are required for event capture!
The camera serves its function of capturing frames, but “event capture” applications typically require saving the data for playback at a slower frame rate. In many cases this requires adequate computer processing power, memory, and solid state drives (SSDs). Depending on the application, computing systems with added features such as I/O, encoder inputs, serial communication, and Power over Ethernet (PoE) ports may be required.
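A minimal sketch of sizing that recording system, using the same illustrative camera numbers as above:

```python
# Illustrative numbers only -- substitute your camera's actual specs.
width, height, bit_depth, fps = 1696, 1710, 8, 335
record_seconds = 30   # length of the event to capture (assumed)

bytes_per_frame = width * height * bit_depth // 8
write_rate = bytes_per_frame * fps            # bytes per second, sustained
total_bytes = write_rate * record_seconds     # capacity for one event

print(f"sustained write rate: {write_rate / 1e9:.2f} GB/s")
print(f"storage for one event: {total_bytes / 1e9:.1f} GB")
# The SSD array must sustain that write rate for the full recording window,
# not just in burst, with RAM buffering to absorb any short-term shortfall.
```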
High speed image recording software is needed.
Capturing the high speed video stream is not trivial, let alone the playback. Software packages such as StreamPix by NorPix are a great solution for setups ranging from a single camera to multiple cameras. 1stVision can customize a solution using off-the-shelf industrial components from Mikrotron (high speed cameras), Neousys (industrial computers), and NorPix (software), coupled with the right lenses and accessories, from frame grabbers to cables, for your application.
Mikrotron has high speed machine vision camera solutions for many industries. The following videos demonstrate various solutions.
Automotive Industry – Metal Punch on Oil Filter
Pharmaceutical Industry – Automated filling of syringes
Packaging Industry (Food and Beverage) – Troubleshooting packaging machinery
Packaging Industry (Blister Packs) – Troubleshooting injection molding of blister packs
1st Vision’s engineers have over 100 years of combined experience (yes, we are old, but we can help you find the best solutions!). We love talking about vision applications and can help provide a detailed solution. Give us a call at 978-474-0044 or email us at info@1stvision.com.
The latest CMOS image sensor technology from Sony and ON Semi has continued to expand the industrial camera market. Sony has now reached its 3rd generation of Pregius sensors, in addition to adding the low light performer, the Starvis sensor. ON Semi has also continued with higher resolutions and has the next generation in the works.
Given all these new sensors, we are often asked, “What is the best image sensor and camera for my application?”
Although there are many general considerations in selecting a camera (e.g., interface, size, color vs. mono), it's best to start with the characteristics and performance of the image sensor. Knowing the answers to questions about the amount of available light, dynamic range requirements, wavelengths involved, and the type of application, the right sensor can start to be identified. From there, we can select a camera with the appropriate sensor that fits other requirements such as interface, frame rate, and bit depth.
To help pick a sensor, it's extremely important to have the image sensor data found on EMVA 1288 data sheets. We have compiled this data into a “cheat sheet”, along with lens recommendations and comments on how some sensors relate to each other and to older CCD sensors, available for download.
The data shows us that not all industrial camera image sensors are created equal! Within the Sony Pregius line there are 1st and 2nd generation sensors, each with unique characteristics. The 1st generation provided great pixel well depth and dynamic range with 5.86 µm pixels. The 2nd generation came along with smaller 3.45 µm pixels, improved sensitivity, and lower noise, but less well depth. The next generation will have the best of both worlds... more to come on that front.
Using this data as an example: if we had an application with a “fixed” amount of light and wanted a relatively bright image (given a fixed aperture and considering only sensor characteristics), what sensor is best? Answer: we'd probably look at Model A with a smaller well depth, as its pixels will start to saturate faster than Model C's. Or suppose we have a very small amount of light? We'd start by looking at absolute (abs) sensitivity, which tells us the smallest number of photons, 1.1 in this case, that starts to provide a useful signal.
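A minimal sketch of that reasoning using the EMVA 1288 noise model; the quantum efficiency, read noise, well depths, and photon flux below are made-up example values, not figures from the cheat sheet:

```python
import math

def saturation_exposure(well_depth_e, photon_flux, qe):
    """Seconds until a pixel fills, given photons/pixel/s and quantum efficiency."""
    return well_depth_e / (photon_flux * qe)

def abs_sensitivity_threshold(qe, read_noise_e):
    """Photons/pixel where SNR = 1 (EMVA 1288 style, quantization noise ignored)."""
    signal_e = 0.5 * (1 + math.sqrt(1 + 4 * read_noise_e ** 2))
    return signal_e / qe

# "Model A": shallow well -- reaches a bright (saturated) image sooner.
print(f"{saturation_exposure(10_000, photon_flux=1e6, qe=0.65) * 1e3:.1f} ms")
# "Model C": deep well -- needs ~3x the exposure for the same brightness.
print(f"{saturation_exposure(30_000, photon_flux=1e6, qe=0.65) * 1e3:.1f} ms")
# Low-light case: the fewer photons needed for SNR = 1, the more sensitive.
print(f"{abs_sensitivity_threshold(qe=0.65, read_noise_e=2.2):.1f} photons")
```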
Example comparisons aside, don't let yourself get frustrated trying to figure this out on your own! 1st Vision's engineers have over 100 years of combined experience in the machine vision and imaging market. Our team can help explain the various technical terms mentioned in this post and help select the best image sensor and camera for your application.