Simple FA Lens Selection


Pete Kepf, CVP-A

EXECUTIVE SUMMARY

Factory Automation (FA) lens applications differ from other applications that employ digital cameras, such as security or intelligent transportation systems (ITS), in that they are designed for high accuracy and usually have controlled illumination and part location. Given this accuracy requirement, it is little wonder that megapixel cameras have gained rapid acceptance in this market. And while the multitude of security and photography websites provide useful general information about the behavior of light through a lens to a medium, FA-specific information is not as common. This article describes a step-by-step process for selecting a lens for a camera in an FA application.

CONTENTS

EXECUTIVE SUMMARY
PRIMARY LENS CONSIDERATIONS
OTHER LENS CONSIDERATIONS
GLOSSARY
BIBLIOGRAPHY

An FA (factory automation/machine vision) application is like a stool. Just as each leg must be correctly sized and shaped for the stool to function properly, so the lighting, the optical system, and the computer and software must be carefully selected to maximize performance. Proper lighting is critical; the computer and software must be able to perform the required calculations and make a decision within the required time frame. The camera and lens make up the optical system; their capabilities must match one another. Overall system performance is limited by the weakest leg. In this article, the assumption is that lighting is controlled and that the computer and software have been optimized for the application. The discussion considers the optical system.

FA Application Considerations

There are many variables to consider for a machine vision application, the purpose of which is to transform the machine's world "out there" into 1s and 0s "in here" so that it can make a good decision. The machine is accurate; it is repeatable; it is fast; but the quality of its decisions (output) depends on accurate information (input). When beginning the application evaluation, the following questions are major considerations:

- Is the controlled illumination source visible light or another wavelength?
- Does the part move? If so, how fast?
- What is the size of the defect, in units of measure ("no discernible scratches" is not a specification)?
- What is the size of the object (more specifically, what is the area of the object that will be viewed by the camera)?
- What is the distance between the object and the lens?
- What are the constraints due to machine access or real estate?

With answers to these questions we can begin to evolve a specification for the optical system, with values for:

- Lens Composition
- F-number
- Focal Length
- Sensor Format

Some of these are simple to address; others require some calculation and knowledge of how a camera and lens work together to make an optical system.

Lens Composition

The composition of the lens is driven by the light source. The visible light spectrum spans roughly 390 to 700 nm, and most FA applications use light in this portion of the spectrum. The proliferation of LED light sources in the UV and IR spectra, however, has enabled cost-effective object illumination for some FA applications where material properties respond differently to different wavelengths, such as plastics and fluorescent materials. Just as objects for inspection may respond differently depending on wavelength, different lens materials refract and transmit light differently. For example, the silica glass used in most FA lenses does not transmit IR well. Manufacturers provide SWIR (800-1700 nm) and LWIR (8-12 µm) lenses for applications such as these. For other applications, it may be useful to filter visible light for a certain color. In that case, a filter thread on the front of the lens is necessary to accommodate the appropriate filter (see Figure 1).

Figure 1: SWIR lens specification highlighting front filter thread (Source: Computar LENS Product Guide)

An excellent discussion of glass transmissivity can be found here (Galbraith, 2017).

F-number

F-number refers to the speed of the lens. It is the ratio of the lens focal length to the diameter of its aperture; for example, a 25 mm lens with a 12.5 mm aperture diameter is an f/2 lens. Lenses with smaller minimum f-numbers have larger apertures and are called fast lenses because they admit more light, allowing images to be acquired at faster shutter speeds than with slower lenses. As a rule, lower f-number lenses yield better results, especially for moving parts. A corollary rule is that more light with a somewhat closed aperture yields better results than low light with a wide-open aperture. An excellent, non-technical discussion may be found at this website (Mansurov, 2015).

Focal Length

The focal length of the lens determines the magnification, that is, the relationship between Working Distance and Field of View (Figure 2). It is defined as "the distance over which initially collimated rays of light are brought to a focus" (Wikipedia, 2016). Nominally, it is the distance between the optical center of the lens and the surface of the vision sensor chip. As this distance decreases, the light is bent more, giving a wider angle of view and increased distortion (wide angle); as this distance increases, the light is bent less, giving a narrower angle of view and decreased distortion (telephoto). Thus, for machine vision applications a greater Working Distance is preferable when accuracy is a consideration. For machine vision applications, the required focal length may be approximated* using the following formula:

f = h x WD / horizontal FOV

where:

f = focal length in mm
h = horizontal dimension of the sensor in mm
WD = working distance in mm
horizontal FOV = horizontal Field of View in mm

Figure 2: Focal Length, Working Distance and Field Of View (Source: Computar)

*This is an approximation because exact distances between the optical center and the sensor surface are often not readily available.
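As a quick sanity check, the approximation above can be expressed in a few lines of Python. This is only a sketch of the formula; the example values (an 8.4 mm wide sensor, 300 mm working distance, 100 mm horizontal field of view) are illustrative placeholders, not a recommendation.

```python
def approx_focal_length_mm(sensor_h_mm, wd_mm, fov_h_mm):
    """Approximate lens focal length: f = h x WD / horizontal FOV (all in mm)."""
    return sensor_h_mm * wd_mm / fov_h_mm

# Illustrative values: 8.4 mm wide sensor, 300 mm working distance,
# 100 mm horizontal field of view -> a lens of roughly 25 mm focal length.
print(round(approx_focal_length_mm(8.4, 300, 100), 1))  # 25.2
```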

Sensor Dimension and Pixel Size

Before the advent of current camera sensor chips with their smaller pixels, the number of pixels in the matrix determined optical system resolution; today, it is the size of the pixel. Then, only analog cameras with 525 lines at 30 frames per second were available for machine vision applications. These soon gave way to digital cameras with their improved 640 x 480-pixel resolution (VGA) and the intuitive virtual superimposition of a grid over the object to be inspected. Lenses developed for CCTV applications were more than adequate to provide the image clarity required at this pixel resolution, from which the Field of View was calculated, for example, by:

Field of View (mm) = Defect Size (mm/pixel) x Number of Pixels

or

6.4 mm FOV = 0.01 mm/pixel x 640 pixels

Now, higher resolution cameras with smaller pixels have shifted the emphasis from pixel resolution to spatial resolution (see my article "Is Your Lens the Weakest Link in Your Megapixel Camera Inspection Application?") (Kepf, 2016). Spatial resolution refers to the ability of the system to differentiate between two objects. In machine vision, it pertains to the smallest detectable defect. Smaller pixels imply the ability to resolve smaller defects. The required pixel size for a given defect size is calculated as:

Spatial Resolution = Defect Size / Number of pixels across the defect

The number of pixels across the defect must be at least 2, and for some cameras may be 3 or 4. This is because defect detection depends on contrast, or an edge. An edge is defined as a transition (light to dark; dark to light), which by definition spans at least two pixels. Figure 2 illustrates a diagonal edge across a pixel matrix. Note the contrast, dark to light, indicating the edge and requiring at least two pixels. (Note: Edge interpolation to less than a pixel can be performed with most image processing software and often considers several pixels; in fact, the FOV calculation above requires sub-pixel resolution, a topic beyond the scope of this article.)

Figure 2: Edge

As an example, a 0.01 mm defect requires a 0.005 mm (5 µm) or smaller pixel:

0.005 mm/pixel = 0.01 mm / 2 pixels

Today's digital cameras have arrays greater than 24 megapixels and pixels smaller than 2 µm.
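The defect-driven calculations above are easy to script. The sketch below simply restates the two formulas from this section in Python; the 0.01 mm defect and the 640-pixel VGA row are the same example numbers used in the text.

```python
def required_pixel_size_mm(defect_size_mm, pixels_across_defect=2):
    """Spatial resolution = defect size / number of pixels across the defect."""
    return defect_size_mm / pixels_across_defect

def field_of_view_mm(resolution_mm_per_pixel, pixel_count):
    """Field of view = per-pixel resolution x number of pixels."""
    return resolution_mm_per_pixel * pixel_count

print(required_pixel_size_mm(0.01))   # 0.005 mm (5 um) pixel needed for a 0.01 mm defect
print(field_of_view_mm(0.01, 640))    # 6.4 mm FOV for the VGA example
```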

A search of one camera supplier for a pixel size <= 5 µm yields a variety of matrix sizes and formats: 640 x 480 (VGA), 800 x 600 (SVGA), 1024 x 768 (XGA), 1280 x 960, 1280 x 1024 (SXGA), 1920 x 1200 (WUXGA), 2448 x 2048, 2592 x 2048, and 2560 x 2048. Also at that website, megapixel cameras are found with pixel sizes of 4.8 µm, 5 µm, and 6 µm, none of which is acceptable for the example application. Note that a small pixel size does not necessarily imply a megapixel camera, and a megapixel camera does not necessarily imply the required resolution. This has important implications for lens selection. For our example, a 5-megapixel camera with a 2448 x 2048-pixel matrix (3.45 µm pixels) is selected.

From the above, it is clear that "megapixel lens" is somewhat of a misnomer. The term refers to a lens's ability to resolve smaller pixels, whether the matrix is megapixel or not. In fact, a VGA camera with a 4.8 µm pixel size may actually require a megapixel lens for the reasons described below. There is no industry standard that defines a pixel size for a megapixel lens, but most manufacturers provide some data to match their lenses to the appropriate pixel size.

Just as cameras possess spatial resolution, so too do lenses. Recall that spatial resolution refers to the ability of a system to differentiate between two objects. For lenses, it is the contrast level at a certain spatial frequency, termed the Optical Transfer Function (OTF) or Modulation Transfer Function (MTF). An industry-standard spatial frequency unit is line pairs per millimeter (lp/mm). Essentially, OTF/MTF describes the quality of the lens. It is not a calculation, but rather a measurement made against a known target. The targets are used to determine the smallest transition from dark to light (and vice versa) that a system can detect. Figure 3 illustrates a line-pair target with images through a non-megapixel lens (orange, left) and a megapixel lens (blue, right). Note that there are both horizontal and vertical lines, and targets at the center and four corners of the image.

Figure 3: Non-megapixel vs. Megapixel lens image (Source: Computar)

This allows measurement of the size and spacing of the lines, from large to small, at a variety of locations and in both axes. Note, too, the significant clarity improvement with the megapixel lens. Spatial resolution calculations are complex, and their results are often presented graphically in the form of a chart. The MTF chart plots percent contrast vs. lp/mm. Because the perfect lens is an abstraction, and because the manufacturing process introduces aberrations into the projection, several curves are displayed corresponding to the distance from the center of the lens (where the fewest aberrations occur). The actual contrast requirement varies by application, but a range from 0.5 (conservative) to 0.2 (minimum) is acceptable.

Figure 4: MTF Chart (Source: Computar)

Just as the sound from a great stereo may be compromised by poor speakers, a mismatch between the pixel and optical resolutions results in sub-optimal optical performance. To calculate the lp/mm for a given pixel size (with two pixels per line pair):

lp/mm = (1 / (pixel size in µm x 2)) x 1000

From the above example:

(1 / (3.45 x 2)) x 1000 = 144 lp/mm

The required 144 lp/mm at 0.2 to 0.5 MTF is not achievable with the lens described in Figure 4 (in fact, the sample chart is not for a megapixel lens). Many manufacturers consider MTF information to be proprietary intellectual property, so it may not be readily available. To assist the specifier in lens selection, therefore, many manufacturers classify their lenses by camera type. Thus, a 5-megapixel camera will use a matching 5-megapixel lens (Figure 5). However, there is no industry standard for such specifications. In practice, for all but the simplest applications it is better to obtain the MTF chart or to purchase from a reliable vendor who has performed actual product comparisons.
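The line-pair requirement can likewise be computed directly. This is a minimal sketch of the lp/mm formula above, assuming two pixels per line pair.

```python
def required_lp_per_mm(pixel_size_um, pixels_per_line_pair=2):
    """lp/mm = 1000 / (pixels per line pair x pixel size in micrometres)."""
    return 1000.0 / (pixels_per_line_pair * pixel_size_um)

print(round(required_lp_per_mm(3.45)))  # ~145 lp/mm for 3.45 um pixels (the text rounds to 144)
```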

Figure 5: 5 Megapixel Lens (Source: Computar)

Working Distance (WD)

Working distance, together with focal length, determines the Field of View; magnification decreases as working distance increases. Working distance is often a design constraint driven by machine space or plant floor real estate considerations. Generally, a greater Working Distance is preferred for accuracy, since closer distances imply greater refraction angles and increased aberration. This example assumes a WD of 300 mm.

Field Of View

It is preferable, though not always possible, to capture the entire object within the Field of View of the camera. For example, a dimensional measurement is straightforward if the entire object is within the image to be processed. One may experiment with varying Fields of View for different lens focal lengths as follows:

f (mm) = h (mm) x WD (mm) / horizontal FOV (mm)

Solving for the horizontal sensor dimension given the pixel size and matrix from above:

3.45 µm/pixel x 2448 horizontal pixels / 1000 = 8.4 mm

Our example did not specify an object size; however, the FOVs for alternative focal lengths, using horizontal FOV (mm) = 8.4 mm x 300 / f, are:

315 mm = 8.4 mm x 300 / 8
210 mm = 8.4 mm x 300 / 12
157.5 mm = 8.4 mm x 300 / 16
100.8 mm = 8.4 mm x 300 / 25
50.4 mm = 8.4 mm x 300 / 50
25.2 mm = 8.4 mm x 300 / 100
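The focal-length sweep above (and the pixel-resolution comparison that follows) can be reproduced with a short loop. This sketch uses the rounded 8.4 mm sensor width and 300 mm working distance from the example; the candidate focal lengths are the same ones listed above.

```python
sensor_h_mm = 8.4          # 3.45 um pixels x 2448 columns, rounded as in the text
wd_mm = 300                # working distance from the example
pixels_h = 2448            # horizontal pixel count

for f_mm in (8, 12, 16, 25, 50, 100):
    fov_mm = sensor_h_mm * wd_mm / f_mm      # horizontal FOV at this focal length
    mm_per_pixel = fov_mm / pixels_h         # object-space pixel resolution
    print(f"{f_mm:>4} mm lens: FOV {fov_mm:6.1f} mm, {mm_per_pixel:.3f} mm/pixel")
```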

It is useful to compare pixel resolution with optical resolution:

Pixel Resolution (mm/pixel) = FOV (mm) / number of pixels

0.128 mm/pixel = 315 mm / 2448 pixels (8 mm focal length)
0.020 mm/pixel = 50.4 mm / 2448 pixels (50 mm focal length)

Optical resolution = 0.0035 mm/pixel

Therefore, for this example application, the lens is not the limiting factor.

Sensor Format

Digital camera sensors are available in a variety of formats and sizes. Format refers to the ratio between the horizontal and vertical axes, and size refers to the dimension of the diagonal of the shape. The diagonal is referenced because it determines the smallest diameter of the image circle that must be projected by the lens (Figure 6). It is important that the lens be compatible with the format of the sensor in order to maximize the available pixels and to avoid vignetting. In general, a lens designed for a larger sensor may be used with a matching or smaller sensor (e.g., a 2/3" lens with a 2/3" or 1/2" sensor).

Figure 6: Sensor Format and Image Circle (Source: Computar)

Summary

The greater accuracy requirements of FA (machine vision) applications demand more attention to lens attributes than was previously necessary. Smaller pixels in modern cameras prompt consideration of the camera and lens as an optical system with its own spatial resolution, determined by that of both the camera and the lens. This spatial resolution is often a function of pixel size rather than of the number of pixels.

Pete Kepf is a Lens Business Development Executive for CBC AMERICAS, Corp., Industrial Optics Division. Over his 30-year automation career he has performed hundreds of machine vision installations and evaluations. Pete is a Certified Vision Professional-Advanced and a member of SPIE and MVA.