Radiometric Self Calibration *
Tomoo Mitsunaga, Media Processing Laboratories, Sony Corporation
Shree K. Nayar, Department of Computer Science, Columbia University, nayar@cs.columbia.edu

Abstract

A simple algorithm is described that computes the radiometric response function of an imaging system from images of an arbitrary scene taken using different exposures. The exposure is varied by changing either the aperture setting or the shutter speed. The algorithm does not require precise estimates of the exposures used. Rough estimates of the ratios of the exposures (e.g. F-number settings on an inexpensive lens) are sufficient for accurate recovery of the response function as well as the actual exposure ratios. The computed response function is used to fuse the multiple images into a single high dynamic range radiance image. Robustness is tested using a variety of scenes and cameras as well as noisy synthetic images generated using 100 randomly selected response curves. Automatic rejection of image areas that have large vignetting effects or temporal scene variations makes the algorithm applicable not just to photographic but also to video cameras. Code for the algorithm and several results are publicly available at http://www.cs.columbia.edu/CAVE/.

1. Radiometric Response Function

Brightness images are inputs to virtually all computer vision systems. Most vision algorithms either explicitly or indirectly assume that the brightness measured by the imaging system is linearly related to scene radiance. In practice, this is seldom the case. Almost always, there exists a non-linear mapping between scene radiance and measured brightness. We will refer to this mapping as the radiometric response function of the imaging system. The goal of this paper is to present a convenient technique for estimating the response function. Let us begin by exploring the factors that influence the response function.
In a typical image formation system, image irradiance E is related to scene radiance L as [8]:

E = L \frac{\pi}{4} \left( \frac{d}{h} \right)^2 \cos^4 \phi,   (1)

where h is the focal length of the imaging lens, d is the diameter of its aperture, and \phi is the angle subtended by the principal ray from the optical axis. If our imaging system were ideal, the brightness it records would be I = E t, where t is the time the image detector is exposed to the scene. This ideal system would have a linear radiometric response:

I = L k e,   (2)

where k = \cos^4 \phi / h^2 and e = (\pi d^2 / 4) t. We will refer to e as the exposure of the image, which can be altered by varying either the aperture size d or the duration of exposure t.

An image detector such as a CCD is designed to produce electrical signals that are linearly related to image irradiance [14]. Unfortunately, there are many other stages of image acquisition that introduce non-linearities. Video cameras often include some form of gamma mapping (see [13]). Further, the image digitizer, inclusive of A/D conversion and possible remappings, can introduce its own non-linearities. In the case of film photography, the film itself is designed to have a non-linear response [9]. Such film-like responses are often built into off-the-shelf digital cameras. Finally, the development of film into a slide or a print and its scanning into a digital format typically introduces further non-linearities (see [5]).

The individual sources of radiometric non-linearity are not of particular relevance here. What we do know is that the final brightness measurement M produced by an imaging system is related to the (scaled) scene radiance I via a response function g, i.e. M = g(I). To map all measurements M to scaled radiance values, we need to find the inverse function f = g^{-1}, where I = f(M).

* This work was supported in parts by an ONR/DARPA MURI grant under an ONR contract and a David and Lucile Packard Fellowship. Tomoo Mitsunaga is supported by the Sony Corporation.
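As a toy illustration of the roles of g and f = g^{-1}: the gamma curve below is purely an illustrative stand-in for a response function (estimating the real f without charts is what the rest of the paper is about).

```python
import numpy as np

# Hypothetical gamma-type response, chosen only for illustration.
gamma = 2.2
g = lambda I: I ** (1.0 / gamma)   # forward response: scaled radiance -> brightness
f = lambda M: M ** gamma           # inverse response: brightness -> scaled radiance

I = np.linspace(0.0, 1.0, 5)       # scaled radiance values
M = g(I)                           # what the camera would report (non-linear in I)
I_rec = f(M)                       # applying f linearizes the measurements
print(np.allclose(I_rec, I))       # True: f undoes g
```

Once such an f is known, every measurement can be mapped back to a quantity proportional to scene radiance.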
Recovery of f is therefore the radiometric calibration problem. It is common practice to estimate f by showing the imaging system a uniformly lit calibration chart, such as the Macbeth chart [4], which includes patches of known relative reflectances. The known relative radiances of the patches and the corresponding measurements M are samples that are interpolated to obtain an approximation to f. It is our objective to avoid such a calibration procedure, which is only suited to controlled environments.

2. Calibration Without Charts

Recent work by Mann and Picard [11] and Debevec and Malik [5] has demonstrated how the radiometric response function can be estimated using images of arbitrary scenes taken under different known exposures. While the measured brightness values change with exposure, the scene radiance values L remain constant. This observation permits the estimation of the inverse function f without prior knowledge of scene radiance. Once f has been determined, the images taken under different exposures can be fused into a single high dynamic range radiance image.¹

Mann and Picard [11] use the function M = \alpha + \beta I^{\gamma} to model the response function g. The bias parameter \alpha is estimated by taking an image with the lens covered, and the scale factor \beta is set to an arbitrary value. Consider two images taken under different exposures with known ratio R = e_1 / e_2. The measurement M(I) in the first image (I is unknown) produces the measurement M(RI) in the second. A pixel with brightness M(RI) is sought in the first image that would then produce the brightness M(R^2 I) in the second image. This search process is repeated to obtain the measurement series M(I), M(RI), ..., M(R^n I). Regression is applied to these samples to estimate the parameter \gamma. Since the model used by Mann and Picard is highly restrictive, the best one can hope for in the case of a general imaging system is a qualitative calibration result. Nonetheless, the work of Mann and Picard is noteworthy in that it brings to light the main challenges of this general approach to radiometric calibration.

Debevec and Malik [5] have developed an algorithm that can be expected to yield more accurate results. They use a sequence of high quality digital photographs (about 11) taken using precisely known exposures. This algorithm is novel in that it does not assume a restrictive model for the response function; it only requires that the function be smooth. If p and q denote pixel and exposure indices, we have M_{p,q} = g(E_p t_q), or \ln f(M_{p,q}) = \ln E_p + \ln t_q. With M_{p,q} and t_q known, the algorithm seeks to find discrete samples of \ln f(M_{p,q}) and the scaled irradiances E_p.

© 1999 IEEE
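The relation \ln f(M_{p,q}) = \ln E_p + \ln t_q is linear in the unknown samples of \ln f and the log irradiances \ln E_p, which is what Debevec and Malik exploit. Below is a much-simplified sketch of that idea: their smoothness penalty and weighting are omitted, and the synthetic data is contrived so that observations land exactly on a tiny gray-level grid. All sizes and values are assumptions for illustration.

```python
import numpy as np

# Unknowns: ln f(0..K-1) and ln E_p for P pixels; exposure times t_q are known.
# Ground truth is chosen as ln f(m) = 0.5 m and ln E_p = 0.5 p, so the observed
# gray level at pixel p under exposure q is simply m = p + q.
K, P, Q = 8, 4, 5
log_t = 0.5 * np.arange(Q)             # known log exposure times (illustrative)
rows, rhs = [], []
for p in range(P):
    for q in range(Q):
        m = p + q                      # observed gray level at pixel p, exposure q
        row = np.zeros(K + P)
        row[m] = 1.0                   # + ln f(m)
        row[K + p] = -1.0              # - ln E_p
        rows.append(row)
        rhs.append(log_t[q])           # = ln t_q
gauge = np.zeros(K + P)
gauge[0] = 1.0
rows.append(gauge)
rhs.append(0.0)                        # fix the free scale: ln f(0) = 0
x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print(np.allclose(x[:K], 0.5 * np.arange(K)))   # True: ln f samples recovered
```

With real data the observations are noisy and do not fall on the grid, which is why the full method solves an overdetermined, regularized least-squares system.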
These results are used to compute a high dynamic range radiance map. The algorithm of Debevec and Malik is well-suited when the images themselves are not noisy and precise exposure times are available.

Though our goals are similar to those of the above investigators, our approach is rather different. We use a flexible parametric model that can accurately represent a wide class of response functions encountered in practice. On the other hand, the finite parameters of the model allow us to recover the response function without precise exposure inputs. As we show, rough estimates (such as F-number readings on a low-quality lens) are sufficient to accurately calibrate the system as well as recover the actual exposure ratios. To ensure that our algorithm can be applied to video systems, where it is easier to vary the aperture setting than the exposure time, we have implemented a pre-processing algorithm that rejects measurements with large vignetting effects and temporal changes during image acquisition. The recovered response function is used to fuse the multiple images into a high quality radiance map. We have conducted extensive testing of our algorithm using noisy synthetic data. In addition, we have compared our calibration results for real images with those produced using calibration charts.

¹ For imaging systems with known response functions, the problem of fusing multiple images taken under different exposures has been addressed by several others (see [3], [10], [7] for examples). In [2], a CMOS imaging chip is described in which each pixel measures the exposure time required to attain a fixed charge accumulation level; these exposure times can be mapped to a high dynamic range image.

Figure 1: Response functions of a few popular films and video cameras, as provided by their manufacturers. These examples illustrate that a high-order polynomial may be used to model the response function.

3. A Flexible Radiometric Model

It is worth noting that though the response curve can vary appreciably from one imaging system to the next, it is not expected to have exotic forms. The primary reason is that the detector output (be it film or solid state) is monotonic or at least semi-monotonic. Unless unusual mappings are intentionally built into the remaining components of the imaging system, measured brightness either increases or stays constant with increasing scene radiance (or exposure). This is illustrated by the response functions of a few popular systems shown in Figure 1. Hence, we claim that virtually any response function can be modeled using a high-order polynomial:

I = f(M) = \sum_{n=0}^{N} c_n M^n.   (3)

The minimum order of the polynomial required clearly depends on the response function itself. Hence, calibration may be viewed as determining the order N in addition to the coefficients c_n.

4. The Self-Calibration Algorithm

Consider two images of a scene taken using two different exposures e_q and e_{q+1}, where R_{q,q+1} = e_q / e_{q+1}. The ratio of the scaled radiances at any given pixel p can be written using expression (2) as:

I_{p,q} / I_{p,q+1} = e_q / e_{q+1} = R_{q,q+1}.   (4)

Hence, the response function of the imaging system is related to the exposure ratio as:

f(M_{p,q}) / f(M_{p,q+1}) = R_{q,q+1}.   (5)

We order our images such that e_q < e_{q+1}, and hence 0 < R_{q,q+1} < 1. Substituting our polynomial model for the response function, we have:

\sum_{n=0}^{N} c_n M_{p,q}^n = R_{q,q+1} \sum_{n=0}^{N} c_n M_{p,q+1}^n.   (6)

The above relation may be viewed as the basis for the joint recovery of the response function and the exposure ratio. However, an interesting ambiguity surfaces at this point. Note that from (5) we also have (f(M_{p,q}) / f(M_{p,q+1}))^u = R_{q,q+1}^u. This implies that, in general, f and R can only be recovered up to an unknown exponent u; in other words, an infinite number of f-R pairs would satisfy equation (5). Interestingly, this u-ambiguity is greatly alleviated by the use of the polynomial model. Note that, if f is a polynomial, f^u can be a polynomial only if u = v or u = 1/v, where v is a natural number (i.e. u ∈ {..., 1/3, 1/2, 1, 2, 3, ...}). This by no means implies that, for any given polynomial, all of these multiple solutions must exist. For instance, if f(M) = M^3, u = 1/2 does not yield a polynomial. On the other hand, u = 1/3 does result in the polynomial f(M) = M, which, in turn, can have its own u-ambiguities. In any case, the multiple solutions that arise are well-spaced with respect to each other. Shortly, the benefits of this restriction will become clear.

For now, let us assume that the exposure ratios R_{q,q+1} are known to us. Then the response function can be recovered by formulating an error function, \varepsilon, that is the sum of the squares of the errors in expression (6).

In most inexpensive imaging systems, photographic or video, it is difficult to obtain accurate estimates of the exposure ratios R_{q,q+1}. The user only has access to the F-number of the imaging lens or the speed of the shutter, and in consumer products these readings can only be taken as approximations to the actual values. In such cases, the restricted u-ambiguity provided by the polynomial model proves valuable. Again, consider the case of two images.
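Relation (6), the constraint that drives the recovery, can be verified numerically for a synthetic system in which both the response and the ratio are known. The monotone cubic response and the ratio R = 0.5 below are arbitrary choices for illustration.

```python
import numpy as np

# Numerical check of relation (6): with the true coefficients c_n and the true
# ratio R = e_q / e_{q+1}, the residual
#   sum_n c_n M_{p,q}^n  -  R * sum_n c_n M_{p,q+1}^n
# vanishes for corresponding pixels of the two exposures.
c = np.array([0.0, 0.3, 0.2, 0.5])             # f(M) = 0.3M + 0.2M^2 + 0.5M^3
f = lambda M: np.polyval(c[::-1], M)           # np.polyval wants highest power first
grid = np.linspace(0.0, 1.0, 200001)
g = lambda I: np.interp(I, f(grid), grid)      # numeric inverse of the response

R = 0.5                                        # true exposure ratio e_q / e_{q+1}
rng = np.random.default_rng(0)
M2 = rng.uniform(0.1, 0.9, size=1000)          # brightness in the longer exposure
M1 = g(R * f(M2))                              # same scene points, shorter exposure
residual = f(M1) - R * f(M2)                   # left minus right side of (6)
print(np.abs(residual).max() < 1e-6)           # True, up to inversion error
```

The u-ambiguity shows up in exactly this residual: raising f to an admissible power u and replacing R by R^u leaves it at zero as well.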
If the initial estimate for the ratio provided by the user is a reasonable guess, the actual ratio is easily determined by searching in the vicinity of the initial estimate: we search for the R that produces the coefficients c_n that minimize the error

\varepsilon = \sum_{q=1}^{Q-1} \sum_{p=1}^{P} \left[ \sum_{n=0}^{N} c_n M_{p,q}^n - R_{q,q+1} \sum_{n=0}^{N} c_n M_{p,q+1}^n \right]^2,   (7)

where Q is the total number of images and P the number of pixels used. If we normalize all measurements such that 0 ≤ M ≤ 1 and fix the indeterminable scale using f(1) = I_{max}, we get the additional constraint:

c_N = I_{max} - \sum_{n=0}^{N-1} c_n.   (8)

The response function coefficients are determined by solving the system of linear equations that results from setting

\partial \varepsilon / \partial c_n = 0.   (9)

Since the solution for the c_n is linear, this search is very efficient. However, when more than two images are used, the dimensionality of the search for the R_{q,q+1} is Q - 1, and when Q is large the search can be time consuming. For such cases, we use an efficient iterative scheme in which the current ratio estimates R_{q,q+1}^{(k-1)} are used to compute the next set of coefficients c_n^{(k)}. These coefficients are then used to update the ratio estimates using (6):

R_{q,q+1}^{(k)} = \frac{1}{P} \sum_{p=1}^{P} \frac{f^{(k)}(M_{p,q})}{f^{(k)}(M_{p,q+1})},   (10)

where the initial ratio estimates R_{q,q+1}^{(0)} are provided by the user. The algorithm is deemed to have converged when

| f^{(k)}(M) - f^{(k-1)}(M) | < \epsilon_c \quad \text{for all } M,   (11)

where \epsilon_c is a small number. It is hard to embed the recovery of the order N into the above algorithm in an elegant manner. Our approach is to place an upper bound on N and run the algorithm repeatedly to find the N that gives the lowest error \varepsilon. In our experiments, we have used a fixed upper bound on N.

5. Evaluation: Noisy Synthetic Data

Shortly, we will present experiments with real scenes that demonstrate the performance of our calibration algorithm. However, a detailed analysis of the behavior of the algorithm requires the use of noisy synthetic data. For this, 100 monotonic response functions were generated using random numbers for the coefficients c_n of a fifth-order polynomial. Using each response function, four synthetic images were generated with random radiance values and random exposure ratios in a range around 0.5. This range of ratio values is of particular interest to us because in almost all commercial lenses and cameras a single step in the F-number setting or shutter speed setting results in an exposure ratio of approximately 0.5. The pixel values were normalized such that 0 ≤ M ≤ 1. Next, normally distributed noise was added to each pixel value, at a magnitude representative of a typical imaging system when measurements span 0 ≤ M ≤ 255. The images were then quantized to have 256 discrete gray levels.

The algorithm was applied to each set of four images using initial exposure ratios of R_{q,q+1}^{(0)} = 0.5. All 100 response functions and exposure ratios were accurately estimated. Figure 2 shows a few of the actual (dots) and computed (solid) response functions. As can be seen, the algorithm is able to recover a large variety of response functions. In Figure 3, the percentage average error (over all M values) between the actual and estimated curves is shown for the 100 cases. Despite the presence of noise, the worst-case error was found to be less than 2.7%.

Figure 4 shows the error \varepsilon plotted as a function of the exposure ratio R. In this example, only two images are used and the actual and initial exposure ratios are 0.7 and 0.25, respectively. The multiple local minima correspond to solutions that arise from the u-ambiguity described before. The figure illustrates how the ratio value converges from the initial estimate to the final one. In all our experiments, the algorithm converged to the correct solution in fewer than 10 iterations.

Figure 2: Self-calibration was tested using 100 randomly generated response functions. A few of the recovered (solid) and actual (dots) response functions are shown. In each case, four noisy test images were generated using random exposure ratios between image pairs, and the initial ratio estimates were set to 0.5.

Figure 3: The percentage average error between actual and estimated response curves for the 100 synthetic trials, plotted against trial number. The maximum error was found to be 2.7%.

Figure 4: An example showing convergence of the algorithm to the actual exposure ratio (0.7) from a rough initial ratio estimate (0.25), in the presence of the u-ambiguity.
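The known-ratio core of the algorithm, minimizing (7) under constraint (8), reduces to ordinary linear least squares in c_0..c_{N-1}. The sketch below mirrors the synthetic evaluation on a single image pair; the cubic response, the ratio R = 0.5, I_max = 1, and the noise-free measurements are illustrative assumptions, not the paper's data or code.

```python
import numpy as np

# Known-ratio solve: eliminate c_N via (8), i.e. c_N = 1 - sum_{n<N} c_n,
# and fit the remaining coefficients by least squares on the residual of (6).
rng = np.random.default_rng(1)
c_true = np.array([0.0, 0.3, 0.2, 0.5])        # f(M) = 0.3M + 0.2M^2 + 0.5M^3, f(1) = 1
N = len(c_true) - 1
f = lambda M: np.polyval(c_true[::-1], M)      # np.polyval wants highest power first
grid = np.linspace(0.0, 1.0, 200001)
g = lambda I: np.interp(I, f(grid), grid)      # numeric inverse of the monotone response

R = 0.5                                        # known exposure ratio e_q / e_{q+1}
M2 = rng.uniform(0.05, 0.95, size=2000)        # brightness in the longer exposure
M1 = g(R * f(M2))                              # same pixels in the shorter exposure

D = np.stack([M1**n - R * M2**n for n in range(N + 1)], axis=1)
A = D[:, :N] - D[:, [N]]                       # coefficient of c_n for n = 0..N-1
b = -D[:, N]
c_head, *_ = np.linalg.lstsq(A, b, rcond=None)
c_rec = np.append(c_head, 1.0 - c_head.sum())  # restore c_N from the constraint
print(np.allclose(c_rec, c_true, atol=1e-4))   # True: coefficients recovered
```

In practice the ratios are only roughly known, so this linear solve alternates with the ratio update (10) until the convergence test (11) is satisfied.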
6. Implementation

We now describe a few issues that need to be addressed when implementing radiometric self-calibration.

6.1. Reducing Video Noise

Particularly in the context of video, it is important to ensure that the data provided to the self-calibration algorithm has minimal noise. To this end, we have implemented a pre-processing step that uses temporal and spatial averaging to obtain robust pixel measurements. Noise arises from three sources, namely, electrical readout from the camera, quantization by the digitizer hardware [6], and motion of scene objects during data acquisition. The random component of the first two noise sources can be reduced by temporal averaging of t images (typically t = 100). The third source can be avoided by selecting pixels in spatially flat areas. To check spatial flatness, we assume normally distributed noise N(0, \sigma^2). Under this assumption, if an area is spatially flat, s \sigma_s^2 / \sigma^2 can be modeled by the \chi^2 distribution [12], where \sigma_s^2 is the spatial variance of s pixel values (typically s = 5 × 5). We approximate \sigma_s^2 with the spatial variance of the temporally averaged pixel values, and \sigma^2 with the temporal variance \sigma_t^2. Therefore, only those pixels are selected that pass the test

\sigma_s^2 / \sigma_t^2 \le \psi,   (12)

where \psi is the rejection ratio (a fixed threshold). When temporal averaging is not applied, t = 1 and \sigma_t is set to an assumed sensor noise level.

6.2. Vignetting

In most compound lenses, vignetting increases with the aperture size [1]. This introduces errors in measurements that are assumed to be affected only by exposure changes. Vignetting effects are minimal at the center of the image and increase towards the periphery. We have implemented an algorithm that robustly detects pixels that are corrupted by vignetting. Consider two consecutive images q and q+1. Corresponding brightness measurements M_{p,q} and M_{p,q+1} are plotted against each other. In the absence of vignetting, all pixels with the same measurement value in one image should produce equal measurement values in the second image, irrespective of their locations in the image. The vignetting-free area for each image pair is determined by finding the smallest image circle within which the M_{p,q}-M_{p,q+1} plot is a compact curve with negligible scatter.

6.3. High Dynamic Range Radiance Images

Once the response function of the imaging system has been computed, the Q images can be fused into a single high dynamic range radiance image, as suggested in [11] and [5]. This procedure is simple and is illustrated in Figure 5. Three steps are involved. First, each measurement M_{p,q} is mapped to its scaled radiance value I_{p,q} using the computed response function f. Next, the scaled radiance is normalized by the scaled exposure \tilde{e}_q so that all radiance values end up with the same effective exposure. Since the absolute value of each \tilde{e}_q is not important for normalization, we simply compute the scaled exposures so that their arithmetic mean is equal to 1. The final radiance value at a pixel is then computed as a weighted average of its individual normalized radiance values. In previous work, a hat function [5] and the gradient of the response function [11] were used as the weighting function. Both of these choices are somewhat ad hoc, the latter less so than the former. Note that a measurement can be trusted most when its signal-to-noise ratio (SNR) as well as its sensitivity to radiance changes are maximum. The SNR for the scaled radiance value is:

SNR = \frac{f(M)}{f'(M) \, \sigma_N(M)},   (13)

where \sigma_N(M) is the standard deviation of the measurement noise. Using the assumption that the noise \sigma_N in (13) is independent of the measurement value M, we can define the weighting function as:

w(M) = \frac{f(M)}{f'(M)}.   (14)

Figure 5: Fusing multiple images taken under different exposures into a single scaled radiance image.

6.4. Handling Color

In the case of color images, a separate response function is computed for each color channel (red, green, and blue). This is because, in principle, each color channel could have its own response to irradiance. Since each of the response functions can only be determined up to a scale factor, the relative scalings between the three computed radiance images remain unknown. We resolve this problem by assuming that the three response functions preserve the chromaticity of scene points. Let the measurements be denoted as M = [M_r, M_g, M_b]^T and the computed scaled radiances as I = [I_r, I_g, I_b]^T. Then, the color-corrected radiance values are I_c = [k_r I_r, k_g I_g, k_b I_b]^T, where k_r / k_b and k_g / k_b are determined by applying least-squares minimization to the chromaticity constraint I_c / ||I_c|| = M / ||M||.

7. Experimental Results

The source code for our self-calibration algorithm and several experimental results are made available at http://www.cs.columbia.edu/CAVE/. Here, due to limited space, we will present just a couple of examples.

Figure 6 shows results obtained using a Canon Optura video camera. In this example, the gray scale output of the camera was used. Figure 6(a) shows 4 of the 5 images of an outdoor scene obtained using five different F-number settings. As can be seen, the low exposure images are too dark in areas of low scene radiance and the high exposure ones are saturated in bright scene regions. To reduce noise, temporal averaging using t = 100 and spatial averaging using s = 3 × 3 were applied. Next, automatic vignetting detection was applied to the 5 averaged images. Figure 6(b) shows the final selected pixels (in black) for one of the image pairs. The solid line in Figure 6(d) is the response function computed from the 5 scene images. The initial exposure ratio estimates were set to 0.5, 0.25, 0.5 and 0.5; the first two of the final computed ratios were 0.419 and 0.292. These results were verified using a Macbeth calibration chart with patches of known relative reflectances (see Figure 6(c)). The chart calibration results are shown as dots, which are in strong agreement with the computed response function. Figure 6(e) shows the radiance image computed using the 5 input images and the response function.
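The fusion step of Section 6.3 — linearize each measurement with f, normalize by the scaled exposure, and average with weight w(M) = f(M)/f'(M) — can be sketched as follows. The quadratic response and the exposure values are illustrative assumptions, not those of the experiments.

```python
import numpy as np

# HDR fusion sketch: weighted average of exposure-normalized scaled radiances.
c = np.array([0.0, 0.3, 0.7])                      # f(M) = 0.3M + 0.7M^2, f(1) = 1
f  = lambda M: np.polyval(c[::-1], M)
fp = lambda M: np.polyval(np.polyder(c[::-1]), M)  # f'(M) = 0.3 + 1.4M

def fuse(M_stack, exposures):
    """M_stack: Q x P measurements of the same P pixels under Q exposures."""
    e = np.asarray(exposures, dtype=float)
    e = e / e.mean()                    # fix the free scale: arithmetic mean 1
    I = f(M_stack) / e[:, None]         # normalized scaled radiance, per image
    w = f(M_stack) / fp(M_stack)        # trust ~ SNR times sensitivity
    return (w * I).sum(axis=0) / w.sum(axis=0)

# Two-exposure example; the second image receives twice the exposure of the first.
g = lambda I: (-0.3 + np.sqrt(0.09 + 2.8 * I)) / 1.4   # exact inverse of this f
L = np.array([0.05, 0.2, 0.4])          # true scaled radiances of three pixels
e = np.array([1.0, 2.0])
M = g(np.outer(e, L))                   # simulated measurements, shape Q x P
fused = fuse(M, e)
print(np.allclose(fused, 1.5 * L))      # True: radiances recovered up to the mean-1 rescale
```

In a full implementation, saturated and near-black measurements would additionally be excluded before averaging.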
Since conventional printers and displays have dynamic ranges of 8 or fewer bits, we have applied histogram equalization to the radiance image to bring out the details within it. The small windows on the sides of the radiance image show further details; histogram equalization was applied locally within each window.

Figure 7 shows two results obtained using a Nikon 2020 film camera. In each case 5 images were captured, of which only 4 are shown in Figures 7(a) and 7(d). In the first experiment, Kodachrome slide film with ISO 200 was used. These slides were taken using F-number 8 and approximate (manually selected) exposure times of 1/30, 1/15, 1/8, 1/2 and 1 seconds. In the second experiment, the pictures were taken with the same camera but using F-number 11 and ISO 100 slide film. The exposure settings were 1/500, 1/250, 1/125, 1/60 and 1/30. The developed slides were scanned using a Nikon LS-3510AF scanner that produces a 24-bit image (8 bits per color channel). The self-calibration algorithm was applied separately to each color channel (R, G, B), using initial ratio estimates of 0.5, 0.25, 0.5 and 0.5 in the first experiment and all ratios set to 0.5 in the second. The three (R, G, B) computed response functions are shown in Figures 7(b) and 7(e). A radiance image was computed for each color channel and then the three images were scaled to preserve chromaticity, as described in Section 6.4. The final radiance images shown in Figures 7(c) and 7(f) are histogram equalized. As before, the small windows within the radiance images show further details brought out by local histogram equalization. Noteworthy are the details of the lamp and the chain on the wooden window in Figure 7(c), and the clouds in the sky and the logs of wood inside the clay oven in Figure 7(f).

For the exploration of high dynamic range images, we have developed a simple interactive visualization tool, referred to as the Detail Brush. It is a window of adjustable size and magnification that can be slid around the image while histogram equalization within the window is performed in real time. This tool is also available at http://www.cs.columbia.edu/CAVE/.

Figure 6: (a) Self-calibration results for gray scale video images taken using a Canon Optura camera. (b) Temporal averaging, spatial averaging, and vignetting detection are used to locate pixels (shown in black) that produce robust measurements. (c) The self-calibration results are verified using a uniformly lit Macbeth color chart with patches of known reflectances. (d) The computed response function (solid line) is in strong agreement with the chart calibration results (dots). (e) The computed radiance image is histogram equalized to convey some of the details it includes. The image windows on the two sides of the radiance image are locally histogram equalized to bring forth further details.

Figure 7: Self-calibration results for color slides taken using a Nikon 2020 photographic camera. The slides are taken using film with an ISO rating of 200 in (a) and 100 in (d). Response functions are computed for each color channel and are shown in (b) and (e). The radiance images in (c) and (f) are histogram equalized, and the three small windows shown in each radiance image are locally histogram equalized. The details around the lamp and within the wooden window in (c), and the clouds in the sky and the objects within the clay oven in (f), illustrate the richness of visual information embedded in the computed radiance images.

References

[1] N. Asada, A. Amano, and M. Baba. Photometric Calibration of Zoom Lens Systems. Proc. of IEEE International Conference on Pattern Recognition (ICPR).
[2] V. Brajovic and T. Kanade. A Sorting Image Sensor: An Example of Massively Parallel Intensity-to-Time Processing for Low-Latency Computational Sensors. Proc. of IEEE Conference on Robotics and Automation, April.
[3] P. Burt and R. J. Kolczynski. Enhanced Image Capture Through Fusion. Proc. of International Conference on Computer Vision (ICCV).
[4] Y.-C. Chang and J. F. Reid. RGB Calibration for Color Image Analysis in Machine Vision. IEEE Trans. on Image Processing, 5(10), October.
[5] P. Debevec and J. Malik. Recovering High Dynamic Range Radiance Maps from Photographs. Proc. of ACM SIGGRAPH 1997.
[6] G. Healey and R. Kondepudy. Radiometric CCD Camera Calibration and Noise Estimation. IEEE Trans. on Pattern Analysis and Machine Intelligence, 16(3), March.
[7] K. Hirano, O. Sano, and J. Miyamichi. A Detail Enhancement Method by Merging Multi Exposure Shift Images. IEICE, J81-D-II(9).
[8] B. K. P. Horn. Robot Vision. MIT Press, Cambridge, MA.
[9] T. James, editor. The Theory of the Photographic Process. Macmillan, New York.
[10] B. Madden. Extended Intensity Range Imaging. Technical Report MS-CIS-93-96, GRASP Laboratory, University of Pennsylvania.
[11] S. Mann and R. Picard. Being Undigital with Digital Cameras: Extending Dynamic Range by Combining Differently Exposed Pictures. Proc. of IS&T's 48th Annual Conference, May.
[12] S. L. Meyer. Data Analysis for Scientists and Engineers. John Wiley and Sons.
[13] C. A. Poynton. A Technical Introduction to Digital Video. John Wiley and Sons.
[14] A. J. P. Theuwissen. Solid State Imaging with Charge-Coupled Devices. Kluwer Academic Press, Boston.
Using visible SNR (vsnr) to compare image quality of pixel binning and digital resizing
Using visible SNR (vsnr) to compare image quality of pixel binning and digital resizing Joyce Farrell a, Mike Okincha b, Manu Parmar ac, and Brian Wandell ac a Dept. of Electrical Engineering, Stanford
PDF Created with deskpdf PDF Writer - Trial :: http://www.docudesk.com
CCTV Lens Calculator For a quick 1/3" CCD Camera you can work out the lens required using this simple method: Distance from object multiplied by 4.8, divided by horizontal or vertical area equals the lens
REAL TIME TRAFFIC LIGHT CONTROL USING IMAGE PROCESSING
REAL TIME TRAFFIC LIGHT CONTROL USING IMAGE PROCESSING Ms.PALLAVI CHOUDEKAR Ajay Kumar Garg Engineering College, Department of electrical and electronics Ms.SAYANTI BANERJEE Ajay Kumar Garg Engineering
Lecture 16: A Camera s Image Processing Pipeline Part 1. Kayvon Fatahalian CMU 15-869: Graphics and Imaging Architectures (Fall 2011)
Lecture 16: A Camera s Image Processing Pipeline Part 1 Kayvon Fatahalian CMU 15-869: Graphics and Imaging Architectures (Fall 2011) Today (actually all week) Operations that take photons to an image Processing
Mean-Shift Tracking with Random Sampling
1 Mean-Shift Tracking with Random Sampling Alex Po Leung, Shaogang Gong Department of Computer Science Queen Mary, University of London, London, E1 4NS Abstract In this work, boosting the efficiency of
Wii Remote Calibration Using the Sensor Bar
Wii Remote Calibration Using the Sensor Bar Alparslan Yildiz Abdullah Akay Yusuf Sinan Akgul GIT Vision Lab - http://vision.gyte.edu.tr Gebze Institute of Technology Kocaeli, Turkey {yildiz, akay, akgul}@bilmuh.gyte.edu.tr
High Dynamic Range Imaging
High Dynamic Range Imaging Cecilia Aguerrebere Advisors: Julie Delon, Yann Gousseau and Pablo Musé Télécom ParisTech, France Universidad de la República, Uruguay High Dynamic Range Imaging (HDR) Capture
product overview pco.edge family the most versatile scmos camera portfolio on the market pioneer in scmos image sensor technology
product overview family the most versatile scmos camera portfolio on the market pioneer in scmos image sensor technology scmos knowledge base scmos General Information PCO scmos cameras are a breakthrough
RESOLUTION MERGE OF 1:35.000 SCALE AERIAL PHOTOGRAPHS WITH LANDSAT 7 ETM IMAGERY
RESOLUTION MERGE OF 1:35.000 SCALE AERIAL PHOTOGRAPHS WITH LANDSAT 7 ETM IMAGERY M. Erdogan, H.H. Maras, A. Yilmaz, Ö.T. Özerbil General Command of Mapping 06100 Dikimevi, Ankara, TURKEY - (mustafa.erdogan;
How an electronic shutter works in a CMOS camera. First, let s review how shutters work in film cameras.
How an electronic shutter works in a CMOS camera I have been asked many times how an electronic shutter works in a CMOS camera and how it affects the camera s performance. Here s a description of the way
Introduction to Computer Graphics
Introduction to Computer Graphics Torsten Möller TASC 8021 778-782-2215 [email protected] www.cs.sfu.ca/~torsten Today What is computer graphics? Contents of this course Syllabus Overview of course topics
SoMA. Automated testing system of camera algorithms. Sofica Ltd
SoMA Automated testing system of camera algorithms Sofica Ltd February 2012 2 Table of Contents Automated Testing for Camera Algorithms 3 Camera Algorithms 3 Automated Test 4 Testing 6 API Testing 6 Functional
Introduction to image coding
Introduction to image coding Image coding aims at reducing amount of data required for image representation, storage or transmission. This is achieved by removing redundant data from an image, i.e. by
Applications to Data Smoothing and Image Processing I
Applications to Data Smoothing and Image Processing I MA 348 Kurt Bryan Signals and Images Let t denote time and consider a signal a(t) on some time interval, say t. We ll assume that the signal a(t) is
Untangling the megapixel lens myth! Which is the best lens to buy? And how to make that decision!
Untangling the megapixel lens myth! Which is the best lens to buy? And how to make that decision! 1 In this presentation We are going to go over lens basics Explain figures of merit of lenses Show how
High Quality Image Magnification using Cross-Scale Self-Similarity
High Quality Image Magnification using Cross-Scale Self-Similarity André Gooßen 1, Arne Ehlers 1, Thomas Pralow 2, Rolf-Rainer Grigat 1 1 Vision Systems, Hamburg University of Technology, D-21079 Hamburg
Self-Calibrated Structured Light 3D Scanner Using Color Edge Pattern
Self-Calibrated Structured Light 3D Scanner Using Color Edge Pattern Samuel Kosolapov Department of Electrical Engineering Braude Academic College of Engineering Karmiel 21982, Israel e-mail: [email protected]
Vision based Vehicle Tracking using a high angle camera
Vision based Vehicle Tracking using a high angle camera Raúl Ignacio Ramos García Dule Shu [email protected] [email protected] Abstract A vehicle tracking and grouping algorithm is presented in this work
Colour Image Segmentation Technique for Screen Printing
60 R.U. Hewage and D.U.J. Sonnadara Department of Physics, University of Colombo, Sri Lanka ABSTRACT Screen-printing is an industry with a large number of applications ranging from printing mobile phone
TCOM 370 NOTES 99-4 BANDWIDTH, FREQUENCY RESPONSE, AND CAPACITY OF COMMUNICATION LINKS
TCOM 370 NOTES 99-4 BANDWIDTH, FREQUENCY RESPONSE, AND CAPACITY OF COMMUNICATION LINKS 1. Bandwidth: The bandwidth of a communication link, or in general any system, was loosely defined as the width of
Calibration Best Practices
Calibration Best Practices for Manufacturers SpectraCal, Inc. 17544 Midvale Avenue N., Suite 100 Shoreline, WA 98133 (206) 420-7514 [email protected] http://studio.spectracal.com Calibration Best Practices
Tracking of Small Unmanned Aerial Vehicles
Tracking of Small Unmanned Aerial Vehicles Steven Krukowski Adrien Perkins Aeronautics and Astronautics Stanford University Stanford, CA 94305 Email: [email protected] Aeronautics and Astronautics Stanford
Understanding Megapixel Camera Technology for Network Video Surveillance Systems. Glenn Adair
Understanding Megapixel Camera Technology for Network Video Surveillance Systems Glenn Adair Introduction (1) 3 MP Camera Covers an Area 9X as Large as (1) VGA Camera Megapixel = Reduce Cameras 3 Mega
Characterizing Digital Cameras with the Photon Transfer Curve
Characterizing Digital Cameras with the Photon Transfer Curve By: David Gardner Summit Imaging (All rights reserved) Introduction Purchasing a camera for high performance imaging applications is frequently
Automatic Labeling of Lane Markings for Autonomous Vehicles
Automatic Labeling of Lane Markings for Autonomous Vehicles Jeffrey Kiske Stanford University 450 Serra Mall, Stanford, CA 94305 [email protected] 1. Introduction As autonomous vehicles become more popular,
Vignette and Exposure Calibration and Compensation
Vignette and Exposure Calibration and Compensation Dan B Goldman and Jiun-Hung Chen University of Washington Figure 1 On the left, a panoramic sequence of images showing vignetting artifacts. Note the
Correcting the Lateral Response Artifact in Radiochromic Film Images from Flatbed Scanners
Correcting the Lateral Response Artifact in Radiochromic Film Images from Flatbed Scanners Background The lateral response artifact (LRA) in radiochromic film images from flatbed scanners was first pointed
The Image Deblurring Problem
page 1 Chapter 1 The Image Deblurring Problem You cannot depend on your eyes when your imagination is out of focus. Mark Twain When we use a camera, we want the recorded image to be a faithful representation
Accurate and robust image superresolution by neural processing of local image representations
Accurate and robust image superresolution by neural processing of local image representations Carlos Miravet 1,2 and Francisco B. Rodríguez 1 1 Grupo de Neurocomputación Biológica (GNB), Escuela Politécnica
Rodenstock Photo Optics
Rogonar Rogonar-S Rodagon Apo-Rodagon N Rodagon-WA Apo-Rodagon-D Accessories: Modular-Focus Lenses for Enlarging, CCD Photos and Video To reproduce analog photographs as pictures on paper requires two
Revision problem. Chapter 18 problem 37 page 612. Suppose you point a pinhole camera at a 15m tall tree that is 75m away.
Revision problem Chapter 18 problem 37 page 612 Suppose you point a pinhole camera at a 15m tall tree that is 75m away. 1 Optical Instruments Thin lens equation Refractive power Cameras The human eye Combining
Monte Carlo Path Tracing
CS294-13: Advanced Computer Graphics Lecture #5 University of California, Berkeley Wednesday, 23 September 29 Monte Carlo Path Tracing Lecture #5: Wednesday, 16 September 29 Lecturer: Ravi Ramamoorthi
A System for Capturing High Resolution Images
A System for Capturing High Resolution Images G.Voyatzis, G.Angelopoulos, A.Bors and I.Pitas Department of Informatics University of Thessaloniki BOX 451, 54006 Thessaloniki GREECE e-mail: [email protected]
HSI BASED COLOUR IMAGE EQUALIZATION USING ITERATIVE n th ROOT AND n th POWER
HSI BASED COLOUR IMAGE EQUALIZATION USING ITERATIVE n th ROOT AND n th POWER Gholamreza Anbarjafari icv Group, IMS Lab, Institute of Technology, University of Tartu, Tartu 50411, Estonia [email protected]
Least Squares Estimation
Least Squares Estimation SARA A VAN DE GEER Volume 2, pp 1041 1045 in Encyclopedia of Statistics in Behavioral Science ISBN-13: 978-0-470-86080-9 ISBN-10: 0-470-86080-4 Editors Brian S Everitt & David
Advanced Computer Graphics. Rendering Equation. Matthias Teschner. Computer Science Department University of Freiburg
Advanced Computer Graphics Rendering Equation Matthias Teschner Computer Science Department University of Freiburg Outline rendering equation Monte Carlo integration sampling of random variables University
Face Model Fitting on Low Resolution Images
Face Model Fitting on Low Resolution Images Xiaoming Liu Peter H. Tu Frederick W. Wheeler Visualization and Computer Vision Lab General Electric Global Research Center Niskayuna, NY, 1239, USA {liux,tu,wheeler}@research.ge.com
http://dx.doi.org/10.1117/12.906346
Stephanie Fullerton ; Keith Bennett ; Eiji Toda and Teruo Takahashi "Camera simulation engine enables efficient system optimization for super-resolution imaging", Proc. SPIE 8228, Single Molecule Spectroscopy
Nonlinear Iterative Partial Least Squares Method
Numerical Methods for Determining Principal Component Analysis Abstract Factors Béchu, S., Richard-Plouet, M., Fernandez, V., Walton, J., and Fairley, N. (2016) Developments in numerical treatments for
Blind Deconvolution of Barcodes via Dictionary Analysis and Wiener Filter of Barcode Subsections
Blind Deconvolution of Barcodes via Dictionary Analysis and Wiener Filter of Barcode Subsections Maximilian Hung, Bohyun B. Kim, Xiling Zhang August 17, 2013 Abstract While current systems already provide
Feature Tracking and Optical Flow
02/09/12 Feature Tracking and Optical Flow Computer Vision CS 543 / ECE 549 University of Illinois Derek Hoiem Many slides adapted from Lana Lazebnik, Silvio Saverse, who in turn adapted slides from Steve
CMOS Image Sensor Noise Reduction Method for Image Signal Processor in Digital Cameras and Camera Phones
CMOS Image Sensor Noise Reduction Method for Image Signal Processor in Digital Cameras and Camera Phones Youngjin Yoo, SeongDeok Lee, Wonhee Choe and Chang-Yong Kim Display and Image Processing Laboratory,
International Year of Light 2015 Tech-Talks BREGENZ: Mehmet Arik Well-Being in Office Applications Light Measurement & Quality Parameters
www.led-professional.com ISSN 1993-890X Trends & Technologies for Future Lighting Solutions ReviewJan/Feb 2015 Issue LpR 47 International Year of Light 2015 Tech-Talks BREGENZ: Mehmet Arik Well-Being in
Tracking Moving Objects In Video Sequences Yiwei Wang, Robert E. Van Dyck, and John F. Doherty Department of Electrical Engineering The Pennsylvania State University University Park, PA16802 Abstract{Object
Measuring Line Edge Roughness: Fluctuations in Uncertainty
Tutor6.doc: Version 5/6/08 T h e L i t h o g r a p h y E x p e r t (August 008) Measuring Line Edge Roughness: Fluctuations in Uncertainty Line edge roughness () is the deviation of a feature edge (as
Enhancing the SNR of the Fiber Optic Rotation Sensor using the LMS Algorithm
1 Enhancing the SNR of the Fiber Optic Rotation Sensor using the LMS Algorithm Hani Mehrpouyan, Student Member, IEEE, Department of Electrical and Computer Engineering Queen s University, Kingston, Ontario,
White Paper. "See" what is important
Bear this in mind when selecting a book scanner "See" what is important Books, magazines and historical documents come in hugely different colors, shapes and sizes; for libraries, archives and museums,
Understanding Line Scan Camera Applications
Understanding Line Scan Camera Applications Discover the benefits of line scan cameras, including perfect, high resolution images, and the ability to image large objects. A line scan camera has a single
S-Log: A new LUT for digital production mastering and interchange applications
S-Log: A new LUT for digital production mastering and interchange applications Hugo Gaggioni, Patel Dhanendra, Jin Yamashita (Sony Broadcast and Production Systems) N. Kawada, K. Endo ( Sony B2B Company)
PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 4: LINEAR MODELS FOR CLASSIFICATION
PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 4: LINEAR MODELS FOR CLASSIFICATION Introduction In the previous chapter, we explored a class of regression models having particularly simple analytical
Colorado School of Mines Computer Vision Professor William Hoff
Professor William Hoff Dept of Electrical Engineering &Computer Science http://inside.mines.edu/~whoff/ 1 Introduction to 2 What is? A process that produces from images of the external world a description
1051-232 Imaging Systems Laboratory II. Laboratory 4: Basic Lens Design in OSLO April 2 & 4, 2002
05-232 Imaging Systems Laboratory II Laboratory 4: Basic Lens Design in OSLO April 2 & 4, 2002 Abstract: For designing the optics of an imaging system, one of the main types of tools used today is optical
The Contribution of Street Lighting to Light Pollution
The Contribution of Street Lighting to Light Pollution Peter D Hiscocks, Royal Astronomical Society, Toronto Centre, Canada Sverrir Guðmundsson, Amateur Astronomical Society of Seltjarnarnes, Iceland April
Exposing Digital Forgeries Through Chromatic Aberration
Exposing Digital Forgeries Through Chromatic Aberration Micah K. Johnson Department of Computer Science Dartmouth College Hanover, NH 03755 [email protected] Hany Farid Department of Computer Science
PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY
PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY V. Knyaz a, *, Yu. Visilter, S. Zheltov a State Research Institute for Aviation System (GosNIIAS), 7, Victorenko str., Moscow, Russia
Why use ColorGauge Micro Analyzer with the Micro and Nano Targets?
Image Science Associates introduces a new system to analyze images captured with our 30 patch Micro and Nano targets. Designed for customers who require consistent image quality, the ColorGauge Micro Analyzer
How To Analyze Ball Blur On A Ball Image
Single Image 3D Reconstruction of Ball Motion and Spin From Motion Blur An Experiment in Motion from Blur Giacomo Boracchi, Vincenzo Caglioti, Alessandro Giusti Objective From a single image, reconstruct:
EFFICIENT VEHICLE TRACKING AND CLASSIFICATION FOR AN AUTOMATED TRAFFIC SURVEILLANCE SYSTEM
EFFICIENT VEHICLE TRACKING AND CLASSIFICATION FOR AN AUTOMATED TRAFFIC SURVEILLANCE SYSTEM Amol Ambardekar, Mircea Nicolescu, and George Bebis Department of Computer Science and Engineering University
A Comprehensive Set of Image Quality Metrics
The Gold Standard of image quality specification and verification A Comprehensive Set of Image Quality Metrics GoldenThread is the product of years of research and development conducted for the Federal
Analecta Vol. 8, No. 2 ISSN 2064-7964
EXPERIMENTAL APPLICATIONS OF ARTIFICIAL NEURAL NETWORKS IN ENGINEERING PROCESSING SYSTEM S. Dadvandipour Institute of Information Engineering, University of Miskolc, Egyetemváros, 3515, Miskolc, Hungary,
Graphic Design. Background: The part of an artwork that appears to be farthest from the viewer, or in the distance of the scene.
Graphic Design Active Layer- When you create multi layers for your images the active layer, or the only one that will be affected by your actions, is the one with a blue background in your layers palette.
Lectures 6&7: Image Enhancement
Lectures 6&7: Image Enhancement Leena Ikonen Pattern Recognition (MVPR) Lappeenranta University of Technology (LUT) [email protected] http://www.it.lut.fi/ip/research/mvpr/ 1 Content Background Spatial
Digital Image Requirements for New Online US Visa Application
Digital Image Requirements for New Online US Visa Application As part of the electronic submission of your DS-160 application, you will be asked to provide an electronic copy of your photo. The photo must
Aperture, Shutter speed and iso
Aperture, Shutter speed and iso These are the building blocks of good photography and making good choices on the combination of these 3 controls will give superior results than you will get by using the
Static Environment Recognition Using Omni-camera from a Moving Vehicle
Static Environment Recognition Using Omni-camera from a Moving Vehicle Teruko Yata, Chuck Thorpe Frank Dellaert The Robotics Institute Carnegie Mellon University Pittsburgh, PA 15213 USA College of Computing
The Digital Dog. Exposing for raw (original published in Digital Photo Pro) Exposing for Raw
Exposing for raw (original published in Digital Photo Pro) The Digital Dog Exposing for Raw You wouldn t think changing image capture from film to digital photography would require a new way to think about
The best lab standard. 1,4 Megapixels 2/3 inch sensor Giant pixel size 6 times optical zoom Massive 16-bit imaging for enhanced dynamic
1 AGENDA Product roadmap...3 Computer based system: PLATINUM..5 XPLORER.6 ESSENTIAL..7 Specifications Computer based system..8 Configuration Computer based system... 9 Software Computer based system...10
High Resolution Spatial Electroluminescence Imaging of Photovoltaic Modules
High Resolution Spatial Electroluminescence Imaging of Photovoltaic Modules Abstract J.L. Crozier, E.E. van Dyk, F.J. Vorster Nelson Mandela Metropolitan University Electroluminescence (EL) is a useful
