Choosing a digital camera for your microscope
John C. Russ, Materials Science and Engineering Dept., North Carolina State Univ., Raleigh, NC

One vital step is to choose a transfer lens matched to your sensor or chip size. Digital cameras come with chip sizes that range from the size of a 35 mm film negative (36 mm x 24 mm) down to consumer cameras with stated sizes such as "1/3" or "1/1.8" inch. Note that these descriptions are not actual dimensions! The "1/3" chips are actually 4.8 x 3.6 mm (and there are smaller sizes out there). Fitting a high-quality single-lens-reflex body such as the Nikon D80 or Canon EOS 400D to the microscope may be an option, given its other potential uses in the lab and reasonable cost, but it is important to understand a few things about all digital cameras before making a choice. The D80, as an example, has 10 million photosites (called individual sensors below, since the word "pixel" has too many different meanings in various contexts), each of which is 6 µm square. By comparison, in a "1/1.8" chip with 4 million photosites, each would be 3 µm square. (These values allow for a space of about one-half µm between the individual sensors to isolate them electrically; this applies to CCD detectors, while CMOS chips also require two or three transistors for each photosite, which take up additional space.) Figure 1 illustrates the relative sizes of different detectors.

Figure 1. Relative sizes of a 35 mm negative, the APS chip (used in digital SLRs such as the Nikon D80), and the "1/1.8" and "1/3" inch chips used in typical consumer cameras.

The role of the transfer lens is to project the image onto the chip. Normally the rectangular area that is captured is set well inside the circular field of the microscope, to avoid focus and illumination variations near the edge. In my rather typical setup, the captured image field is about 1600 µm wide with a 10x objective and about 160 µm wide with a 100x objective. We'll use those numbers again shortly.

The maximum resolution of a digitized image is defined by the Nyquist limit as the spacing corresponding to two pixels (not one, since there must be a dark pixel separating two light ones, or vice versa, for them to be distinguished). For a 3600 x 2400 sensor chip, typical of a high-end 10 Megapixel single-lens-reflex camera, this corresponds to 13 µm on the chip, and for a 10x objective represents 0.9 µm on the specimen. This is, of course, better than the optical resolution of that objective lens. With a 100x objective, assuming an optical resolution of 0.5 µm on the specimen, the same chip would represent that 0.5 µm distance with about 11 digitized pixels, which is a vast oversampling.
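Since these sampling calculations recur for every camera and objective combination, it may help to see them spelled out. Here is a minimal sketch in Python using the field widths and chip dimensions quoted above; the numbers come from the article, but the function names are mine:

```python
# Nyquist sampling check for a microscope camera (illustrative values from the text).

def specimen_scale_um_per_pixel(field_width_um, pixels_across):
    """Width of one pixel, projected back onto the specimen."""
    return field_width_um / pixels_across

def nyquist_resolution_um(field_width_um, pixels_across):
    """Smallest resolvable spacing: two pixels (dark and light must alternate)."""
    return 2 * specimen_scale_um_per_pixel(field_width_um, pixels_across)

PIXELS_ACROSS = 3600          # 3600 x 2400 chip, typical ~10 MP SLR

# 10x objective: ~1600 um wide captured field
print(nyquist_resolution_um(1600, PIXELS_ACROSS))   # ~0.9 um on the specimen

# 100x objective: ~160 um wide captured field
optical_limit_um = 0.5        # assumed optical resolution of the 100x objective
pixels_per_limit = optical_limit_um / specimen_scale_um_per_pixel(160, PIXELS_ACROSS)
print(pixels_per_limit)       # ~11 pixels spanning 0.5 um: heavy oversampling
```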

There is no reason to use a camera with that many sensors. The Polaroid DMC camera, no longer supported but one of the good early digital microscope cameras that I used for years, had 1600 x 1200 photosites (each 7 µm wide) on a 12.15 mm wide chip. With the same 100x objective and an appropriate transfer lens, this still covers 0.5 µm on the specimen with a more than adequate 5 pixels, and with the 10x objective the camera's resolution limit would be 2 µm, better than the optics.

So, given that there is no need for huge numbers of photosites ("pixels" in camera advertising) for a camera to mount on the microscope, is there a disadvantage to using something like a Nikon D80? Not necessarily. The digitized images are large, but storage, computer memory and processing speed have advanced so quickly that these are not important factors. But there IS a major reason not to use one of the cameras with a small chip size. As the size of the individual sensors is reduced, each photodiode has a smaller capacity to capture photons and hold electrons. The larger detectors in an SLR chip or specialized scientific camera hold more electrons and produce a greater dynamic range for the image. Digitization of the image produces a minimum amount of noise (from statistical, thermal and electronic sources) that corresponds to about 3-5 electrons (Peltier-cooled cameras represent the lower end of this range). As the light intensity increases, it is the statistics of producing electrons as photons are absorbed that controls the amount of random noise. The signal-to-noise ratio in the final image depends on the maximum number of electrons that the diode can hold without saturation, which is directly proportional to its size. Photodiodes in silicon have maximum capacities (the "well size") of about 1000-1500 electrons per square micrometer, so doubling the width of a photosite produces a 4x increase in area, which collects four times as many photons and produces four times as many electrons, for a corresponding improvement in maximum signal. Since the minimum noise remains the same, the dynamic range is increased, and with it the signal-to-noise ratio.
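To make the arithmetic concrete, here is a small sketch using the figures quoted above (1000-1500 electrons per square micrometer of well capacity, a noise floor of 3-5 electrons). Treat it as an idealized upper bound; real sensors lose capacity to wiring, isolation gaps and readout electronics, so practical bit depths come out lower than these values:

```python
import math

WELL_CAPACITY_PER_UM2 = 1200   # electrons per square micrometer (text: ~1000-1500)
NOISE_FLOOR_ELECTRONS = 4      # statistical/thermal/electronic floor (text: ~3-5)

def full_well(pixel_pitch_um):
    """Maximum electrons a square photosite of the given width can hold."""
    return WELL_CAPACITY_PER_UM2 * pixel_pitch_um ** 2

def dynamic_range_bits(pixel_pitch_um):
    """Usable intensity range, expressed as bits, limited by the noise floor."""
    return math.log2(full_well(pixel_pitch_um) / NOISE_FLOOR_ELECTRONS)

for pitch in (3.0, 6.0):       # consumer-chip vs. SLR-sized photosites
    fw = full_well(pitch)
    snr = math.sqrt(fw)        # photon (shot) noise at saturation ~ sqrt(N)
    print(f"{pitch} um photosite: well ~{fw:.0f} e-, "
          f"~{dynamic_range_bits(pitch):.1f} bits, peak SNR ~{snr:.0f}:1")

# Doubling the pitch quadruples the well capacity: two extra bits of
# dynamic range and twice the peak signal-to-noise ratio.
```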

Figure 2. Comparison of the noise level in otherwise identical images acquired with chips having large and small photosites.

The small-chip cameras struggle to produce 8 bits (representing 256 linear intensity values; this figure is somewhat optimistic, since it is based on the organization of memory into 8-bit bytes rather than on detector performance), whereas a large-chip camera can easily capture 12 bits (representing more than 4000 values). This large dynamic range is important for images with bright and dark areas, particularly in fluorescence microscopy, astronomy, and everyday photography (a typical scene can easily include bright and shadow areas that vary in absolute brightness by 1000:1). The dynamic range provided by 12 bits is roughly comparable to the performance of film cameras, which is why digital SLR cameras are now used by many professional photographers as good replacements for film cameras (the spatial resolution of a 10 MP camera is also similar to that of 35 mm film). The examples shown in Figure 2 are two photos of exactly the same scene, with the same illumination and the same camera settings, taken with cameras having large and small sensor sizes. The difference in noise level is apparent.

The quality of the chips used in digital cameras involves many characteristics, one of which is variation in the sensitivity of the individual sites (called fixed-pattern noise). There are also likely to be a few dead (zero output) or locked (maximum output) diodes, whose signals are usually removed by in-camera processing (a median filter, for example, replaces extreme values with one from a neighboring sensor). CMOS chips have more nonuniformity, more noise, more locked pixels, lower sensitivity (because of the area used for the transistors) and other problems as compared to CCD chips. At least for the foreseeable future CCDs are the better choice for scientific applications, but CMOS development continues because the devices are less expensive to produce and require less power, which is important in some applications such as cell phone cameras.

Color detection in digital cameras may be accomplished in several different ways. Early microscope cameras used a single chip with a filter wheel that captured three images, one each for red, green and blue. This method allowed, among other things, balancing the exposure time for each channel to compensate for the variation in sensitivity of silicon photodiodes with wavelength. But such cameras were slow to operate and did not provide a live color image. Another method uses prisms and filters that direct the red, green and blue light to three separate detector chips. These cameras are expensive and delicate (the alignment of the chips is critical), are less sensitive (because of the light lost in the optics), and produce color shading with wide-angle lenses. Still another technique, developed by Foveon, fabricates a single chip in which there are three diodes at each photosite, one above the other. Red light penetrates deeper into silicon than blue, so the diode at the surface measures the amount of blue light while filtering it out. The next diode below it does the same for green, while the red light penetrates to the deepest layer. This method is still somewhat experimental, has thus far been implemented only with CMOS technology, and has some problems with the processing needed to obtain accurate colors, but it has the advantage that every photosite receives the full color information.

Figure 3. The Bayer color filter array, in which one-half of the photosites have a green filter, one quarter have a red filter, and one quarter a blue filter.

That is not the case with the most common method of color detection, a color filter array (CFA). The most commonly used arrangement is the Bayer filter, which places red, green and blue filters on individual photodiodes in the pattern shown in Figure 3.
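A quick way to see what the Bayer CFA actually records is to simulate it: sample a full-color image through the mosaic so that each photosite keeps only one channel. This is a minimal illustrative sketch (the RGGB phase and the function name are my choices; real cameras differ in pattern phase and in the filters' spectral overlap):

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an H x W x 3 image through an RGGB Bayer filter array.

    Returns an H x W single-channel raw image in which each photosite
    has recorded only the one color its filter passes:
        R G R G ...
        G B G B ...
    Half the sites are green, one quarter red, one quarter blue.
    """
    h, w, _ = rgb.shape
    raw = np.zeros((h, w), dtype=rgb.dtype)
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]   # red sites
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]   # green sites on red rows
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]   # green sites on blue rows
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]   # blue sites
    return raw

# Two-thirds of the color information is discarded at capture; demosaicing
# must estimate it back by interpolating from neighboring sites.
rgb = np.random.rand(4, 6, 3)
print(bayer_mosaic(rgb))
```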

Since with this scheme each sensor measures only one color of light, the camera electronics must perform some rather clever interpolation (using proprietary "demosaicing" algorithms) to estimate the color values at each location. This process is made somewhat more complicated by the fact that the filters each pass broad and somewhat overlapping ranges of wavelengths. The interpolation, along with the use of an "antialiasing" filter placed in front of the detector chip to scatter the light and spread the photons over several sensors, reduces the resolution of the digitized image to about 50% of the value that might be expected from the number of "pixels" advertised in the camera specification! (Note: some camera manufacturers carry this interpolation much farther, and advertise a number of pixels in the resulting stored image that is many times the number of sensors on the chip.)

Silicon photodiodes are inherently linear, so the output signal from the chip is a direct linear response to the intensity of the incident light. Professional cameras that save the raw information from the chip (which, because of the demosaicing and removal of shot noise, is usually not quite the direct photodiode output) do indeed record this linear response. But programs that interpret the raw file data to produce viewable images, usually saved in TIFF or JPEG formats, apply nonlinear conversions. One reason for this practice is to make the images more like those produced by traditional film cameras, since film responds to light intensity logarithmically rather than linearly. It is also done because of the nature of human vision, which responds to changes in intensity more-or-less logarithmically. In a typical scene, a brightness change of a few percent is noticeable. A difference between 20 and 40 (on the 0 = black to 255 = white scale used by many programs) represents a 100 percent change, while the same absolute difference between 220 and 240 is less than a 10% change.

In film photography, images may be recorded on either hard or soft films and papers. As shown in Figure 4, these are characterized by plots of density against the log of the incident light intensity that are either very steep, producing a high-contrast image, or more gradual, covering a greater range of intensity. The slope of the central linear portion of the curve is called the gamma of the film or paper.

Figure 4. H & D curves for hard and soft photographic film.

The same name is now applied to the same characteristic of digitized images in computer software. Computer displays are nonlinear, with a typical value of gamma in the range from 1.8 (the Macintosh standard) to 2.2 (the Windows standard). The mathematical relationship is Output = Input ^ Gamma (where input and output are normalized to the 0..1 range). To compensate for this nonlinearity, the images may be processed from the camera with a gamma value of about 0.5 (= 1/2.0), as shown in Figure 5.
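The relationship and its compensation are easy to state in code. A small sketch of the display transform just quoted and the approximately inverse encoding applied to camera data (values normalized to 0..1 as in the text; the constants come from the article, the function names are mine):

```python
# Display response: output = input ** gamma, on 0..1 normalized values.
DISPLAY_GAMMA = 2.2            # Windows standard; the Macintosh standard is 1.8

def display_response(value, gamma=DISPLAY_GAMMA):
    """Brightness actually produced by a nonlinear monitor."""
    return value ** gamma

def encode_for_display(value, gamma=DISPLAY_GAMMA):
    """Compensating curve (~0.5 in the text) applied to linear camera data."""
    return value ** (1.0 / gamma)

linear = 0.5                            # linear sensor value, half of full scale
encoded = encode_for_display(linear)    # ~0.73 after gamma encoding
print(display_response(encoded))        # ~0.5: the round trip is linear again
```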

But most image processing and display software allows the gamma value to be adjusted for optimum viewing (or printing, which may require different settings).

Figure 5. Example of the gamma curve for a computer display, with the compensating gamma value applied to the raw data.

A common problem experienced by users of digital images is that they look different on different monitors, or in different programs (particularly when going from gamma-aware programs like Photoshop to presentation tools such as PowerPoint or Word, which may not make the same adjustments). Hardcopy prints often present a different appearance than the on-screen image, and in any case cannot reproduce as great a range of contrast. Because adjusting overall contrast and gamma was commonly done in traditional film photography, it is widely assumed that making arbitrary adjustments to similar parameters for digital images is a permissible operation, and tools for accomplishing this are part of most imaging software. But it is important to keep in mind that such adjustments can alter the appearance of images so that some details become either more or less visible, and that all such operations (including the selection of the area to be recorded by the camera) represent opportunities to bias the results, and so are necessarily subject to the usual caveats about scientific responsibility.

Some cameras do not give the user access to the raw sensor data, but only save, or transfer to the computer, an image that has already been converted. Even more serious is the use of lossy compression to reduce the file size, either to save storage space or to reduce the time needed to transmit or save the result. Images are large: a 10 Megapixel image using 2 bytes (necessary if the dynamic range exceeds 256 values) for each of red, green and blue is 60 MBytes. JPEG compression can easily reduce that to 10 MBytes, and in many cases to much less. The problem with all lossy compression schemes (and even some that call themselves lossless can truncate or round off values) is that there is no way to predict just what details may be eliminated from the original image. Fine lines may be erased, feature boundaries moved, texture eliminated, and colors altered, and this may vary from one image to another and from one location to another within an image. There is no way to recover the lost information, and images that have been compressed using JPEG, fractal compression, or other lossy schemes must never be used for any scientific or forensic application. Given the availability of large amounts of inexpensive storage, and increasingly fast networks for image transmission, there is simply no valid reason to ever use these procedures.
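The irreversibility is easy to demonstrate for yourself. A small sketch (assuming the Pillow and NumPy packages, and a hypothetical input file name) that runs an image through a JPEG save/load cycle and measures what changed:

```python
from io import BytesIO

import numpy as np
from PIL import Image

# Hypothetical input file; any RGB image will do. Note that JPEG forces
# 8 bits per channel, versus 2 bytes per channel (10e6 pixels x 3 channels
# x 2 bytes = 60 MBytes uncompressed) for high-dynamic-range data.
original = np.asarray(Image.open("specimen.tif").convert("RGB"), dtype=np.int16)

# Round-trip through an in-memory JPEG at a typical quality setting.
buffer = BytesIO()
Image.fromarray(original.astype(np.uint8)).save(buffer, format="JPEG", quality=75)
decoded = np.asarray(Image.open(buffer).convert("RGB"), dtype=np.int16)

diff = np.abs(original - decoded)
print("pixels altered:", np.count_nonzero(diff.max(axis=2)))
print("largest per-channel error:", diff.max())  # lost detail cannot be recovered
```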