pixel size & sensitivity

The relationship between the pixel size of an image sensor and its sensitivity is discussed in detail, to illuminate the reality behind the myth that larger pixel image sensors are always more sensitive than small pixel sensors.

1 pixel size - fill factor

table 1: correspondences & relations

fill factor        | sensitive area        | quantum efficiency | signal
large              | large                 | high               | large
small              | small                 | small              | small
small + micro lens | larger effective area | high               | large

Figure 1: Pixels with different fill factors (the blue area corresponds to the sensitive area): [1] pixel with 75% fill factor, [2] pixel with 50% fill factor and [3] pixel with 50% fill factor plus micro lens on top.

The fill factor of a pixel describes the ratio of the sensitive area to the total area of the pixel, since a part of the area of an image sensor pixel is always used for transistors, electrodes or registers, which belong to the structure of the pixel of the corresponding image sensor (CCD, CMOS, sCMOS). Only the sensitive part contributes to the signal which the pixel detects.

In case the fill factor is too small (i), it is usually improved by the addition of micro lenses: the micro lens collects the light impinging onto the pixel and focuses it onto the sensitive area of the pixel (see figure 2).

Figure 2: Cross-sectional view of pixels with different fill factors (the blue area corresponds to the sensitive area) and the rays which impinge on them (orange arrows). Dashed rays indicate light that does not contribute to the signal: [1] pixel with 75% fill factor, [2] pixel with 50% fill factor and [3] pixel with 50% fill factor plus micro lens on top.

Although the application of micro lenses is always beneficial for pixels with fill factors below 100%, there are some physical and technological limitations to consider:

table 2: some limitations

limitation                             | explanation
size limitation for micro lenses       | For pixels with a very large pixel pitch, the stack height of the semiconductor process cannot be made high enough to generate a good micro lens; the required stack height is proportional to the area which has to be covered. (ii)
CMOS pixels have a limited fill factor | Due to semiconductor processing, 25% of the pixel area always has to be covered with metal, which limits the maximum fill factor to 75%.
very large sensitive areas             | These are difficult to realize, because the diffusion length and therefore the probability of recombination increase.

2 pixel size - optical imaging

Figure 3: Optical imaging with a simple optical system based on a thin lens, characterized by some geometrical parameters: f_0 focal length of the lens, F_o focal point of the lens on the object side, F_i focal point of the lens on the image side, x_0 distance between F_o and object (= object distance), x_i distance between F_i and image (= image distance), Y_0 object size, Y_i image size.

(i) The fill factor can be small, for example, in the pixel of an interline transfer CCD image sensor, where 50% of the pixel area is used for the shift register, or in the global shutter 5 or 6T pixel of a CMOS sensor, where the transistors and electrical leads cause a 50% fill factor.
(ii) For some large pixels it might still be useful to use a less efficient micro lens, if for example the lens should prevent too much light from falling onto the periphery of the pixel.

Figure 3 shows the situation of optical imaging with a simple optical system based on a thin lens. For this situation the Newtonian imaging equation is valid:

x_0 · x_i = f_0²

or, equivalently, the Gaussian lens equation:

1/f_0 = 1/(x_0 + f_0) + 1/(x_i + f_0)

and the magnification M is given by the ratio of the image size Y_i to the object size Y_0:

M = Y_i / Y_0 = x_i / f_0 = f_0 / x_0

3 pixel size - depth of focus / depth of field (iii)

The imaging equation (chapter 2) determines the relation between object and image distances. If the image plane is slightly shifted, or if the object is closer to the lens system, the image is not rendered useless; it only gets more and more blurred, the larger the deviation from the distances given by the imaging equation becomes. The concepts of depth of focus and depth of field are based on the fact that for a given application only a certain degree of sharpness is required. For digital image processing the required sharpness is naturally given by the size of the pixels of the image sensor: it makes no sense to resolve smaller structures, so a certain blurring can be allowed.

Figure 4: Illustration of the concept of depth of focus, which is the range of image distances in which the blurring of the image remains below a certain threshold.

The blurring can be described using the image of a point object, as illustrated in figure 4. At the (focused) image plane the point object is imaged to a point; with increasing distance from the image plane it smears out to a disk with radius ε. Introducing the f-number f/# of an optical system as the ratio of the focal length f_0 to the diameter of the lens aperture D_0,

f/# = f_0 / D_0

the radius of the blur disk can be expressed as:

ε = f_0 / (2 · f/# · (f_0 + d')) · Δx_3

where Δx_3 is the distance from the (focused) image plane at position d'. The range of positions of the image plane, [d' - Δx_3, d' + Δx_3], for which the radius of the blur disk stays below ε, is known as the depth of focus. The above equation can be solved for Δx_3 and yields:

Δx_3 = 2 · f/# · (1 + d'/f_0) · ε = 2 · f/# · (1 + M) · ε

where M is the magnification from chapter 2. This equation shows the critical role of the f-number for the depth of focus.

Figure 5: Illustration of the concept of depth of field, which gives the range of object distances in which the blurring of the image at a fixed image distance remains below a given threshold.

Of even more importance for practical usage than the depth of focus is the depth of field. The depth of field is the range of object positions for which the radius of the blur disk remains below a threshold ε at a fixed image plane. In the limit Δx_3 ≪ d this yields:

Δx_3 ≈ 2 · f/# · ε · (1 + M) / M²

If the depth of field is to include infinite distance, the minimum object distance is given by:

d_min ≈ f_0² / (2 · f/# · ε)

(iii) This chapter follows the book "Practical Handbook on Image Processing for Scientific and Technical Applications" by Prof. B. Jähne, CRC Press, chapter 4, Image Formation Concepts.
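As a quick numerical illustration of the relations above, the following Python sketch evaluates the depth of focus and the depth of field for one assumed set of parameters (the focal length, f-number, object distance and blur threshold are example values, not taken from this text; the blur threshold ε is set to one pixel pitch):

# Depth of focus / depth of field sketch (all input values are assumed examples).
import math

f0   = 50e-3    # focal length f_0 [m] (assumed)
fnum = 8.0      # f-number f/# (assumed)
d    = 2.0      # object distance [m] (assumed)
eps  = 7.4e-6   # blur disk radius threshold = one pixel pitch [m] (assumed)

# thin lens imaging: image distance and magnification M
d_img = 1.0 / (1.0 / f0 - 1.0 / d)
M = d_img / d

# depth of focus (image side): delta_x3 = 2 * f/# * (1 + M) * eps
depth_of_focus = 2 * fnum * (1 + M) * eps

# depth of field (object side), valid for delta_x3 << d:
# delta_x3 ~ 2 * f/# * eps * (1 + M) / M^2
depth_of_field = 2 * fnum * eps * (1 + M) / M**2

# minimum object distance if the depth of field is to include infinity
d_min = f0**2 / (2 * fnum * eps)

print(f"magnification M         : {M:.4f}")
print(f"depth of focus [mm]     : {depth_of_focus * 1e3:.3f}")
print(f"depth of field [mm]     : {depth_of_field * 1e3:.1f}")
print(f"d_min (DOF to infinity) : {d_min:.1f} m")

The sketch reproduces the qualitative behaviour discussed above: stopping down (a larger f-number) or accepting a larger blur disk ε increases both the depth of focus and the depth of field.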

Generally the whole concept of depth of field and depth of focus is only valid for a perfect optical system. If the optical system shows any aberrations, the depth of field can only be used for blurring significantly larger than the blurring caused by the aberrations of the system.

4 pixel size - comparison total area / resolution

If the influence of the pixel size on the sensitivity, dynamic range and image quality of a camera is to be investigated, there are various parameters that could be changed or kept constant. In the following, different pathways are followed to try to answer this question.

4.1 constant area

constant -> image circle, aperture, focal length, object distance & irradiance; variable -> resolution & pixel size

Figure 6: Illustration of two square shaped image sensors within the image circle of the same lens, which generates the image of a tree on the image sensors. Image sensor [1] has pixels with a quarter of the area of the pixels of sensor [2].

For ease of comparison, square shaped image sensors are assumed which fit into the image circle or image area of a lens. Furthermore, a homogeneous illumination is assumed, therefore the radiant flux per unit area - also called irradiance E [W/m²] - is constant over the area of the image circle. Let us assume that the left image sensor (fig. 6, sensor [1]) has smaller pixels but a higher resolution, e.g. 2000 x 2000 pixels at 10 µm pixel pitch, while the right image sensor (fig. 6, sensor [2]) has 1000 x 1000 pixels at 20 µm pixel pitch. The question would be: which sensor has the brighter signal and which sensor has the better signal-to-noise ratio (SNR)?

To answer this question it is either possible to look at a single pixel - but this neglects the different resolution - or to compare at the same resolution with the same lens - but this corresponds to comparing a single large pixel with 4 small pixels. Generally it is also assumed that both image sensors have the same fill factor of their pixels. The small pixel measures the signal m and has its own readout noise r_0; therefore a signal-to-noise ratio can be determined for two important situations: low light, where the readout noise is dominant (SNR = s_0), and bright light, where the photon noise is dominant (SNR = s).

table 3: considerations on signal and SNR for different pixel sizes - same total area

pixel type     | signal | readout noise | SNR low light | SNR bright light
small pixel    | m      | r_0           | s_0           | s
large pixel    | 4 x m  | > r_0         | > 2 x s_0     | 2 x s
4 small pixels | 4 x m  | 2 x r_0 (iv)  | 2 x s_0       | 2 x s

Still, the proportionality of the SNR to the pixel area at a constant irradiance is valid, meaning the larger the pixel size and therefore the pixel area, the better the SNR will be. Taken to the extreme, this means that a single pixel whose total area fills the image circle has the best SNR:

Figure 7: Illustration of three resulting images which were recorded at different resolutions: a) high resolution, b) low resolution and c) single pixel resolution.

Assume three image sensors with the same total area, which fits into the image circle of a lens, but with different resolutions. As can be seen in figure 7, a) shows the nicest image but has the worst SNR per pixel; in b) the SNR per pixel is better, but due to the lower resolution the image quality is worse compared to a); finally, c) with one super large single pixel shows the maximum SNR per pixel, but unfortunately the image content is lost.

(iv) Since the noise powers add, the resulting readout noise of 4 binned pixels equals the square root of (4 x r_0²) = 2 x r_0.
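The relations of table 3 can be checked with a few lines of Python; the signal level and readout noise below are assumed example values:

# SNR comparison for the "same total area" case of table 3 (assumed example values).
import math

signal_small = 100.0   # photoelectrons collected by one small pixel (assumed)
read_noise   = 2.0     # readout noise r_0 per pixel [e-] (assumed)

def snr(signal_e, read_noise_e):
    # photon (shot) noise and readout noise added in quadrature
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

snr_small  = snr(signal_small, read_noise)           # small pixel
snr_large  = snr(4 * signal_small, read_noise)       # large pixel: 4x area -> 4x signal,
                                                     # readout noise kept ~ r_0 for simplicity
snr_binned = snr(4 * signal_small, 2 * read_noise)   # 4 binned small pixels: noise 2 x r_0

print(f"small pixel      : SNR = {snr_small:5.1f}")
print(f"large pixel (4A) : SNR = {snr_large:5.1f}")
print(f"4 binned pixels  : SNR = {snr_binned:5.1f}")

At high signal levels the two 4x-area variants converge to the same photon-noise limit (2 x s), while at low signal levels the readout noise term makes the single large pixel slightly better than the 4 binned pixels, just as table 3 states.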

4.2 constant resolution

constant -> aperture & object distance; variable -> pixel size, focal length, area & irradiance

Another possibility for a comparison is to require that the number of pixels is the same at the same object distance. To achieve that condition it is necessary to change the focal length of the lens in front of the image sensor with the small pixels, since the information (energy), which before was spread over a larger image area, now has to be concentrated onto a smaller area (see figures 8 and 9).

Figure 8: Illustration of two square shaped image sensors within the image circle of the same lens type (e.g. same F-mount). The tree is imaged onto the same number of pixels, which requires lenses with different focal lengths: image sensor [1] has pixels with 4 times the area of the pixels of sensor [2].

If, for example, the resolution of both image sensors is 1000 x 1000 pixels and the sensor with the smaller pixels has a pixel pitch of half the dimension of the larger pixel sensor, the image circle diameter d_1 of the larger pixel sensor amounts to

d_1 = √2 · c_1

with c_1 = width or height of the image sensor. Since the smaller pixel sensor has half the pitch, it also has half the width, half the height and half the diagonal of the image circle, which gives:

d_2 = √2 · c_1 / 2 = d_1 / 2

To get an idea of the required focal length, it is possible to determine the magnification and subsequently the required focal length of the lens (see chapter 2 and figure 9), since the object size Y_0 remains the same:

M_2 / M_1 = Y_i2 / Y_i1 = c_2 / c_1 = 1/2, and with M = f_0 / x_0 at the same object distance this requires f_2 = f_1 / 2

Figure 9: Illustration of the two situations which are required for the comparison: [1] the original lens, which images the scene completely onto the large pixel sensor; [2] the lens with the changed focal length, which images the scene onto the smaller pixel size sensor.

Since the photon flux Φ_0 coming from the object or a source is constant due to the same aperture of the lens (NOT the same f-number!), the lens with the changed focal length spreads the same energy over a smaller area, which results in a higher irradiance. To get an idea of the irradiance, we need to know the area A_2:

A_2 = π · (d_2 / 2)² = π · (d_1 / 4)² = A_1 / 4

With this area it is possible to calculate how much higher the irradiance I_2 will be:

I_1 = Φ_0 / A_1 and I_2 = Φ_0 / A_2 = Φ_0 / (A_1 / 4) = 4 · Φ_0 / A_1 = 4 · I_1

The reason for the discrepancy between the common argumentation and this result lies in the definition of the f-number (iii):

f/# = f_0 / D
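The geometry of this comparison can be verified with a short Python sketch; the sensor width, focal length and f-number are assumed example values (only the factor of two between the pixel pitches is taken from the text):

# Constant-resolution comparison: halving the pixel pitch halves the required
# focal length, quarters the image circle area and quadruples the irradiance.
import math

c1    = 20e-3    # width/height of the large pixel sensor [m] (assumed)
f1    = 100e-3   # focal length of the original lens [m] (assumed)
fnum1 = 16.0     # f-number of the original lens (assumed)

c2 = c1 / 2      # smaller pixel sensor: half the pitch -> half the width
f2 = f1 / 2      # same object on half the sensor width -> half the focal length

d1, d2 = math.sqrt(2) * c1, math.sqrt(2) * c2        # image circle diameters
A1, A2 = math.pi * (d1 / 2) ** 2, math.pi * (d2 / 2) ** 2

print(f"image circle areas : A1 = {A1 * 1e6:.0f} mm^2, A2 = {A2 * 1e6:.0f} mm^2")
print(f"irradiance ratio   : I2 / I1 = {A1 / A2:.1f}  (same flux onto 1/4 of the area)")

# keep the aperture diameter D = f / (f/#) constant, not the f-number
D = f1 / fnum1
fnum2 = f2 / D
print(f"aperture diameter  : D = {D * 1e3:.2f} mm")
print(f"required f-number  : f/#2 = {fnum2:.0f}  (instead of f/#1 = {fnum1:.0f})")

With the aperture diameter held constant, the f-number of the shorter focal length lens comes out smaller, which is exactly the point of the 100 mm / 50 mm example discussed next.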

Figure 10: Images of the same test chart with the same resolution, same object distance, same fill factor, but different pixel size and different lens settings (all images have the same scaling for display): a) pixel size 14.8 µm x 14.8 µm, f = 100 mm, f-number 16; b) pixel size 7.4 µm x 7.4 µm, f = 50 mm, f-number 16; c) pixel size 7.4 µm x 7.4 µm, f = 50 mm, f-number 8.

The lenses are constructed such that the same f-number always generates the same irradiance in the same image circle area. Therefore the attempt to focus the energy onto a smaller area for this comparison is not accomplished by keeping the f-number constant; it is the real diameter of the aperture D which has to be kept constant. If the f-number of the first lens was f/#_1 = 16 (see the example in figure 10), the f-number of the second lens therefore has to be

f/#_2 = f/#_1 · (f_2 / f_1) = 16 · (50 / 100) = 8, since D_0 = f_0 / (f/#).

Again the question would be which sensor has the brighter signal and which sensor has the better signal-to-noise ratio (SNR). This time we look at the same number of pixels but with different pixel size, and due to the different lens at the same aperture the irradiance for the smaller pixels is higher. Again it is assumed that both image sensors have the same fill factor of their pixels.

table 4: considerations on signal and SNR for different pixel sizes - same resolution

pixel type  | signal | readout noise | SNR low light | SNR bright light
small pixel | m      | r_0           | s_0           | s
large pixel | m      | > r_0         | < s_0         | s

If now the argument were that the larger f-number causes a different depth of field, which in turn changes the sharpness and therefore the quality of the image, the equations from chapter 3 can be used to look at the consequences. From the above example we have the focal lengths 100 mm and 50 mm, and we are looking for the f-number which gives the same depth of field, i.e. the f-number for which the depth of field expression from chapter 3 (with the blur radius ε given by the respective pixel size) is equal for both configurations. Since we have calculated an f-number of 8 for the correct comparison, the depth of field for the image sensor with the smaller pixels is even better.

5 pixel size - fullwell capacity, readout noise, dark current

The fullwell capacity of a pixel of an image sensor is one important parameter which determines the general dynamic range of the image sensor and therefore also of the camera system. Although this fullwell capacity is influenced by various parameters like pixel architecture, layer structure and well depth, there is a general correspondence to the sensitive area. This is also true for the electrical capacitance of the pixel and for the dark current, which is thermally generated. Both dark current and capacitance (v) add to the noise behaviour, and therefore larger pixels also show a larger readout noise.

table 5: considerations on fullwell capacity, readout noise, dark current

pixel type  | sensitive area | fullwell capacity | dark current | capacitance | readout noise
small pixel | small          | small             | small        | small       | small
large pixel | large          | large             | large        | large       | large

6 pixel size - dynamic (intra-scene dynamic, pixel dynamic)

The intra-scene dynamic dyn_max of an image sensor is the expression which is generally used to describe the dynamic range of an image sensor or a camera system. It describes the maximum contrast or range of levels which an image sensor can measure or detect within one image.

(v) The correspondence between pixel area, capacitance and therefore kTC noise is a little bit simplified, because there are newer techniques from CMOS manufacturers like Cypress which overcome that problem; nevertheless the increase of the dark current is correct, and in sum the noise still follows the area size. Furthermore, in CCDs and newer CMOS image sensors most of this noise comes from the transfer nodes and is cancelled out by correlated double sampling (CDS).
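A minimal sketch of the result in table 4, using the pixel sizes of the figure 10 example and assumed values for irradiance, quantum efficiency and readout noise:

# "Same resolution" comparison of table 4 (irradiance, QE and noise values are assumed).
import math

pitch_large, pitch_small = 14.8, 7.4   # pixel pitches [um], as in the figure 10 example
irr_large = 10.0                       # photons per um^2 per exposure on the large pixel sensor (assumed)
irr_small = 4 * irr_large              # aperture-matched lens: 4x irradiance on the small pixels
qe = 0.5                               # quantum efficiency, assumed equal for both sensors
rn_large, rn_small = 3.0, 2.5          # readout noise [e-], assumed slightly larger for the large pixel

sig_large = irr_large * pitch_large ** 2 * qe
sig_small = irr_small * pitch_small ** 2 * qe
print(f"signal per pixel   : large = {sig_large:.0f} e-, small = {sig_small:.0f} e-")

# limiting cases of table 4:
print(f"SNR (low light)    : large ~ {sig_large / rn_large:.0f}, small ~ {sig_small / rn_small:.0f}")
print(f"SNR (bright light) : large ~ {math.sqrt(sig_large):.1f}, small ~ {math.sqrt(sig_small):.1f}")

The per-pixel signal is identical, the photon-noise-limited SNR is identical, and only the slightly lower readout noise of the smaller pixel gives it an advantage in low light - the essence of table 4.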

It is defined as the ratio of the maximum number of electrons that a pixel can take - the fullwell capacity - divided by the smallest (signal) level that can be used - the readout noise:

dyn_max = pixel dynamic = fullwell capacity [e-] / readout noise [e-]

If pixels with different sizes are compared, the fullwell capacity and the readout noise roughly follow the pixel size. For some very large pixels the increase in fullwell capacity is more pronounced, therefore they have a larger dynamic dyn_max. On the other hand, very small pixels still have a decreasing fullwell capacity but a persistent readout noise, which decreases the dynamic. The maximum pixel dynamic corresponds to the signal-to-noise ratio of the light, because the signal is then solely photon noise limited:

SNR_max = fullwell capacity [e-] / √(fullwell capacity [e-]) = √(fullwell capacity [e-])

Therefore, for normal image sensors (normal = no image sensors with incredibly high readout noise), the larger pixels have the best maximum pixel dynamic.

7 pixel size - resolution, contrast & modulation transfer function (MTF)

Figure 11: a) macro image of an image sensor showing the pixels at the edge of the sensitive area; b) illustration of an image sensor with its characteristic geometrical parameters: x, y horizontal and vertical dimensions; p_x, p_y horizontal and vertical pixel pitch; b_x, b_y horizontal and vertical pixel dimensions.

The modulation transfer function, MTF, is used to describe the ability of a camera system to resolve fine structures (resolution) and to render contrast, which finally defines the quality of the image. The resolution ability depends on the geometrical parameters of the image sensor and on the lenses used for imaging (including micro lenses). The maximum spatial resolution follows from these considerations:

- a line pair [lp] needs two pixels to be resolved
- assumption: square pixels with b_x = b_y = b and p_x = p_y = p
- the Nyquist frequencies correspond to spatial frequencies, which are usually expressed in line pairs per mm [lp/mm]

This results in the maximum possible axial (R_axial) and diagonal (R_diagonal) resolution ability, given by the dimensions of the pixel:

R_axial = 1 / (2 · p) and R_diagonal = 1 / (2 · √2 · p)

In the following table the maximum resolution values for image sensors with different pixel sizes are given, together with some corresponding lens values for comparison.

table 6: some MTF data of image sensors & lenses

item                | image sensor / lens type | pitch [µm] | focal length [mm] | R_axial [lp/mm] | R_diagonal [lp/mm]
ICX285AL            | CCD                      | 6.45       | -                 | 77.5            | 54.8
MT9M413             | CMOS                     | 12         | -                 | 41.7            | 29.5
ICX205AK            | CCD                      | 4.65       | -                 | 107.5           | 76
APO Xenoplan 1.4/23 | C-mount lens             | -          | 23                | 75              | -
Cinegon 1.2/16.0    | C-mount lens             | -          | 16                | 30              | -

The next important parameter is the contrast or modulation M, which is defined with the intensities I [count] or [DN (vi)] in an image:

M = (I_max - I_min) / (I_max + I_min)

The modulation depends on the spatial frequency, which means that the modulation is a function of the resolution R: M = M(R). The quality of the imaging process is described by the modulation transfer function, MTF.

So both parameters, the resolution and the contrast, define the quality of an image, as is illustrated in figure 12: increasing resolution improves the sharpness of an image, while increasing contrast adds to the brilliance.

Figure 12: Illustration of the influence of resolution and contrast on image quality.

8 compare cameras with respect to pixel size & sensitivity

Generally it is a good idea to image the same scene with each camera, which means:

- Keep the object distance and the illumination constant!
- Keep or adjust the same resolution for all cameras! If the cameras have different resolutions, use analogue binning, mathematical summation or averaging, or a region/area of interest, such that all cameras are compared at the same resolution.
- Select a corresponding lens with the appropriate focal length for each camera, such that each camera sees the same image on its active image sensor area!
- Adjust the lens with the largest focal length to the largest useful f-number! For a proper comparison use the equation for the f-number from chapter 4.2: keep the aperture D constant, calculate the corresponding f-number for each of the other lenses and adjust them as closely as possible, such that the apertures of all lenses are equal (or at least similar)!
- Then compare the images and compare for sensitivity!

(vi) DN = digital number, like count.
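To close, a minimal Python sketch of the quantities from chapters 6 and 7; the fullwell capacity and readout noise are assumed example values, while the pixel pitches are those listed in table 6:

# Pixel dynamic (chapter 6) and Nyquist-limited resolution (chapter 7).
import math

# chapter 6: intra-scene / pixel dynamic (assumed example values)
fullwell   = 30000.0   # fullwell capacity [e-] (assumed)
read_noise = 7.0       # readout noise [e-] (assumed)

dyn_max = fullwell / read_noise        # maximum pixel dynamic
snr_max = math.sqrt(fullwell)          # photon-noise-limited maximum SNR
print(f"dyn_max = {dyn_max:.0f} : 1  ({20 * math.log10(dyn_max):.1f} dB)")
print(f"SNR_max = {snr_max:.0f} : 1")

# chapter 7: maximum resolution from the pixel pitch
def nyquist(pitch_um):
    # axial and diagonal resolution limits [lp/mm] for square pixels
    r_axial = 1000.0 / (2 * pitch_um)
    r_diag  = 1000.0 / (2 * math.sqrt(2) * pitch_um)
    return r_axial, r_diag

for pitch in (6.45, 12.0, 4.65):       # pixel pitches [um] from table 6
    ra, rd = nyquist(pitch)
    print(f"pitch {pitch:5.2f} um -> R_axial = {ra:6.1f} lp/mm, R_diagonal = {rd:5.1f} lp/mm")

The printed resolution limits reproduce the sensor rows of table 6 and can be compared directly with the resolution of the chosen lens when a camera is selected for a given application.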

knowledge base So both parameters, the resolution and the contrast, deine the quality o an image, as is illustrated in igure. Increasing resolution improves the sharpness o an image while increasing contrast adds to the brilliance. Keep or adjust the same resolution or all cameras! Then select a proper ocal length or each camera, whereas each camera should see the same image on the active image sensor area. Select corresponding lens with the appropriate ocal length or each camera! Adjust the lens with the largest ocal length with the largest useul number! Figure Illustration o the inluence o resolution and contrast on image quality 8 Compare cameras with respect to pixel size & sensitivity Generally it is a good idea to image the same scene to each camera, which means: For a proper comparison use the equation or the number on page 8, keep the aperture D constant, and calculate the corresponding number or the other lenses and adjust them as good as possible then compare the images! Adjust the numbers o the other lenses in a way, such that the aperture o all lenses is equal (similar)! Compare or sensitivity! Keep the object distance and the illumination constant! I the cameras should be compared, they should use the same resolution, which either means analogue binning or mathematical summation or averaging, or usage o a region/area o interest. europe PCO AG Donaupark 93309 Kelheim, Germany on +49 (0)944 005 50 ax +49 (0)944 005 0 inode www.de america The Cooke Corporation 6930 etroplex Drive Romulus, ichigan 4874, USA tel (48) 76 880 ax (48) 76 885 inocookecorp.com www.cookecorp.com subject to changes without prior notice PCO AG, Kelheim v. 070 7