Estimating Spectral Reflectances of Digital Artwork

Joyce E. Farrell (Hewlett-Packard Laboratories), John Cupitt and David Saunders (National Gallery, London), and Brian A. Wandell (Stanford University)

Introduction

This paper analyzes the spectral information acquired by a system that is used for digital archiving of famous paintings in several European galleries. Our interest in understanding the acquisition of spectral reflectance information flows from a desire to render the scanned material as if it were viewed under any of many different illuminants. This objective cannot be achieved accurately if one knows only the pigment tristimulus coordinates under a single illuminant.

Digital capture systems have a very efficient workflow compared to traditional methods based on scanning photographic negatives and transparencies. In addition, the linearity of the initial image capture in digital systems makes it possible to specify the spectral information captured much more accurately than is possible with film systems. Here, we specify the spectral information acquired by the MARC scanner, and we make a particular effort to state the properties of the spectral acquisition in terms of a database of surface reflectance functions of real paintings. Finally, we describe a method for estimating surface reflectance by combining databases of paint spectral reflectance measurements, camera calibration data, and scanner measurements.

Background

In 1989, the European Union funded a project to develop the first digital archiving system based on a high-resolution CCD camera and 6 independent color channels [1]. The system, called the VASARI scanner, moved the digital camera to multiple positions across the painting area. At each position, VASARI captured multispectral (6-band) image data by taking successive images, each with a different filtered illumination. The images taken at each position and in each spectral band were later stitched together to create the full high-resolution 6-band image data for each painting. The calibrated color data were then transformed into CIE XYZ and stored in the CIELAB color space for the D65 illuminant. The color accuracy of the VASARI digital archiving system is on the order of 1 ΔE in CIELAB under D65.

The scientists at the National Gallery who developed the VASARI scanner later developed a more portable digital archiving system called the MARC camera [2]. MARC, also funded by the European Union, is based on a high-resolution scan-back RGB camera. The MARC camera has color errors on the order of 5 ΔE in CIELAB under D65. Although it is less accurate than the VASARI scanner, it is still more accurate than scanning photographic negatives and transparencies, which produce color errors on the order of 8 ΔE.

System modeling

Image capture

We describe the spectral aspects of the digital archiving system in terms of system measurements of the painting's surface reflectance function. In some cases, knowledge of the reflectance function is preferable to a specification of tristimulus coordinates (such as XYZ) under a fixed illumination, because from the spectral reflectance function it is possible to render the painting accurately under many different ambient illuminants. Digital archiving systems such as VASARI and MARC use capture devices that can be characterized using linear systems methods, and this is the emphasis we use in describing the quality of the spectral information.

The questions we will answer in the next sections, then, are these: What spectral information do we know securely from the system measurements? What spectral information is missing, and which aspects of the missing information matter for stimulus reproduction? Finally, we will ask about methods for making inferences about the missing information from the known information.

We approach these issues by building the notation for the linear analysis, and particularly the spectral transfer function that characterizes the device. We show how knowledge of the spectral transfer function informs us about the surface reflectance information that is securely captured by the device and the spectral reflectance information that is not measured. Then, we show that when the goal is to produce accurate renderings under one of a few different illuminants, only part of the missed information is useful for image reproduction, and we show how to specify the missing information that is relevant to reproduction. Finally, we describe an approach for using the securely captured information and the statistics of the surface reflectance functions to estimate the information missed by the capture device.

The Spectral Transfer Function

Digital archiving measurements take place in a closed system with a fixed illuminant. Because the illuminant is constant, it can be considered a property of the system. Hence, we define the concept of a system channel to include the combined spectral responsivity of the sensor and illuminant. Using this definition, the system channel maps the spectral reflectance function into the system response. The channel spectral responsivity is determined by a number of system elements, including the sensor, other optical elements (e.g., lens, filters, and mirrors), and the illuminant. For the calculations presented below, we assume there are 61 wavelength samples spanning 400 to 700 nm in 5 nm steps.

The transformation from the spectral reflectance function to the system response can be specified by placing the channel spectral responsivity vectors into the columns of a matrix, the spectral transfer function (STF), S. The MARC and VASARI systems have different numbers of color channels (3 and 6, respectively); hence the MARC system matrix is 61 x 3 and the VASARI system matrix is 61 x 6. Suppose the reflectance function at a point is r. The MARC system color response at that point is the three-vector m = S^T r. The columns of the MARC spectral transfer function are plotted in Figure 1.

Figure 1. The spectral sensitivities of the MARC channels (above) and of the human channels under a D65 illuminant (below).
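
As a concrete illustration of the spectral transfer function, the short numpy sketch below computes the system response m = S^T r at a single image location. The Gaussian channel responsivities and the test reflectance are hypothetical stand-ins, not the calibrated MARC data.

```python
import numpy as np

# Wavelength sampling used in the paper: 400-700 nm in 5 nm steps (61 samples).
wavelengths = np.arange(400, 701, 5)

def gaussian(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Hypothetical channel responsivities (sensor x optics x illuminant); the real
# spectral transfer function S comes from the MARC calibration data.
S = np.stack([gaussian(450, 30),     # "blue" system channel
              gaussian(550, 35),     # "green" system channel
              gaussian(610, 40)],    # "red" system channel
             axis=1)                 # shape (61, 3)

# Hypothetical surface reflectance at one image location, values in [0, 1].
r = np.clip(0.3 + 0.4 * gaussian(580, 60), 0.0, 1.0)

# The scanner measures the projection of r onto the system channels:
# m = S^T r, a three-vector for MARC (it would be a six-vector for VASARI).
m = S.T @ r
print(m)
```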

Paint surface reflectance

The measurements obtained by these systems, and in particular by the 3-channel MARC system, are meager compared to the full spectral reflectance function. The main opportunity to improve spectral estimation performance is to use statistical information about the set of spectral reflectance functions one is likely to find in the input material. Summaries of the statistical properties of the collection of likely surface reflectance functions or illuminant spectral power distributions are often embodied in the form of linear models that summarize the data [3-6]. In thinking about the system properties, it is useful to begin with a summary of the surface reflectance functions as a linear combination of orthogonal basis functions B_i:

s = Σ_{i=1}^{M} w_i B_i,

where the w_i are the weights chosen to minimize the error between s and its linear model approximation, and M is the dimensionality of the linear model. These linear models of surface reflectances establish a bound on how much uncertainty there may be about the spectral characteristics of the input data. As we describe later, the number of parameters needed to specify the linear model bounds may be higher than the number of parameters needed to describe the data set in practice.

Figure 2 shows the basis functions for a collection of 189 pigments measured at the National Gallery. Representing these reflectance functions at high precision (12 bits of resolution) requires 10 basis functions. The analysis of the reflectance functions shows that ten independent channels are needed to recover the spectral reflectance functions of the paintings at this high accuracy (ignoring important factors such as sensor noise). From knowledge of the reflectance functions, the image could be rendered under any illuminant.

Figure 2. Ten orthogonal basis functions for the paint reflectance functions.
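
The sketch below illustrates one common way to build such a linear model from a reflectance database, using principal components analysis. The 189 measured pigment spectra are not reproduced here, so smoothed random spectra stand in for them; with the real data, the loop would show the maximum reconstruction error falling below the 12-bit threshold (about 1/4096) at roughly M = 10.

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.arange(400, 701, 5)            # 61 samples, 400-700 nm

# Stand-in reflectance database: smoothed random spectra in place of the
# 189 pigment reflectances measured at the National Gallery.
n_surfaces = 189
raw = rng.random((n_surfaces, wavelengths.size))
kernel = np.exp(-0.5 * (np.arange(-10, 11) / 4.0) ** 2)
kernel /= kernel.sum()
smooth = np.apply_along_axis(lambda s: np.convolve(s, kernel, mode="same"), 1, raw)

# Principal components of the database give orthogonal basis functions B_i.
mean = smooth.mean(axis=0)
_, _, Vt = np.linalg.svd(smooth - mean, full_matrices=False)
B = Vt.T                                        # columns are the basis functions

# Maximum reconstruction error versus model dimensionality M; 12 bits of
# resolution corresponds to an error of roughly 1/4096 of the reflectance range.
for M in range(1, 13):
    W = (smooth - mean) @ B[:, :M]              # weights w_i for each surface
    recon = mean + W @ B[:, :M].T
    print(M, float(np.abs(recon - smooth).max()))
```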

What the system measures and misses

In practice, most systems do not have enough data to guarantee a spectrally accurate representation. For example, it is impossible to use the 3-channel MARC system, or even the 6-channel VASARI system, to estimate all of the weights of the surface reflectance basis functions. Knowledge of the spectral transfer function, however, tells us precisely which parts of the reflectance function are measured. The system measures the projection of the surface reflectance function onto the columns of the spectral transfer function. The scanner measurements completely specify the component of the surface reflectance function that falls within this linear subspace.

Using this information about the scanner, it is convenient to rewrite the orthogonal basis functions representing the reflectance functions in a new form. We separate the basis for the reflectance functions into two groups. One group comprises the basis functions of the surface reflectance that the MARC scanner measures, shown in the top panel of Figure 3. The second group comprises the basis functions of the surface reflectance that are invisible to the MARC scanner. The seven dimensions of the surface reflectance functions that are missed by the scanner are shown in the second and third panels of Figure 3. Adding any mixture of terms drawn from these components will not change the MARC signal. However, a complete estimate of the surface reflectance function requires an estimate of these components.

Figure 3. The three paint basis functions measured by the MARC are shown at the top. The basis functions the MARC cannot see are shown in the two panels below.
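
The sketch below shows one way to carry out this split, again with hypothetical stand-ins for the spectral transfer function S and the 10-dimensional reflectance basis B. The 3 x 10 matrix S^T B describes how each basis function appears in the scanner response, and its null space gives the seven reflectance components the scanner cannot see; adding such components to a reflectance leaves the MARC signal unchanged, as the final check confirms.

```python
import numpy as np

rng = np.random.default_rng(1)
n_wave = 61                                     # 400-700 nm in 5 nm steps

# Hypothetical stand-ins: a 61 x 3 spectral transfer function S (MARC) and a
# 61 x 10 orthonormal basis B for the paint reflectance model.
S = np.abs(rng.normal(size=(n_wave, 3)))
B = np.linalg.qr(rng.normal(size=(n_wave, 10)))[0]

# How each basis function appears in the scanner response.
SB = S.T @ B                                    # 3 x 10

# Split the 10-dimensional model into the 3 directions the scanner measures
# and the 7 directions in its null space (invisible to the scanner).
_, _, Vt = np.linalg.svd(SB)
B_meas = B @ Vt[:3].T                           # components the scanner sees
B_null = B @ Vt[3:].T                           # components the scanner misses

# Adding any mixture of the missed components does not change the MARC signal.
r = B @ rng.normal(size=10)
r_perturbed = r + B_null @ rng.normal(size=7)
print(np.allclose(S.T @ r, S.T @ r_perturbed))  # True
```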

Of what is missed, what matters?

The basis functions used to represent the paints were determined from physical measurements that ignored both the scanner and the human eye. If we intend to reproduce the surface for human viewing under a particular set of illuminants, we do not need to measure the full surface reflectance function. Instead, we need only measure the components of the surface reflectance function that can be seen by the human channels under those illuminants. As an extreme case intended to clarify the idea, though of no practical significance, suppose we intend to view the paintings only under a monochromatic illuminant. Then we need to measure the reflectance functions only at that one wavelength. The same principle generalizes to measurements in terms of the orthogonal basis functions.

With this in mind, consider once more the surface reflectance basis functions that are missed by the scanner. These functions can be divided into two groups. The first group comprises terms that would influence the human observer during the rendering process, and the second group comprises terms whose presence would not influence the human observer under any of the rendering illuminants. We can arrange the second group to be orthogonal to the first. When the goal is to measure enough about the reflectance function so that we can render the pigments under a particular collection of lights, there is no penalty for failing to measure the contribution of the terms that are not detected by the human observer under any of the rendering illuminants.

Figure 4 shows the two basis functions that the MARC scanner fails to measure and that matter to the human eye when rendering the paints under a D65 source. The calculations needed to determine these basis functions can be applied to the case of multiple illuminants as easily as to the case of a single illuminant.

Figure 4. The two components of the surface reflectance functions that are invisible to the MARC but influence human perception under a D65 rendering illuminant.

At this point, it may be useful to summarize how we have organized the analysis. We began by defining a complete linear model for the surface reflectance functions. This linear model defines the statistics of the input signals. We then reorganized these physically defined basis functions into two groups: one that can be measured from the scanner response and one that is in the null space of the scanner response. The scanner produces accurate estimates of the first group, but it produces no information about the second group. Next, we considered the collection of basis functions that represent the null space of the scanner response. This portion can also be divided into two parts: one group that is visible to humans under one of the rendering illuminants, and a second that is invisible under any of the rendering illuminants. We then noted that one need not be concerned with estimating the weights of the functions that are invisible to the human eye under any of the rendering illuminants.
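
One way to compute this division is sketched below under the same kind of stand-in assumptions: B_null holds the components missed by the scanner (see the previous sketch), and H holds hypothetical human channels, i.e. color-matching functions weighted by the rendering illuminant. The singular value decomposition of H^T B_null separates the missed components the observer can see from those that are invisible under the rendering illuminant. With the real MARC null space, CIE color-matching functions, and D65, the paper reports two components that matter (Figure 4); the random stand-ins here will generally give three.

```python
import numpy as np

rng = np.random.default_rng(2)
n_wave = 61                                     # 400-700 nm in 5 nm steps

# Stand-ins: B_null holds the 7 reflectance components missed by the scanner
# (see the previous sketch); H holds hypothetical human channels under the
# rendering illuminant, i.e. color-matching functions weighted by D65.
B_null = np.linalg.qr(rng.normal(size=(n_wave, 7)))[0]
H = np.abs(rng.normal(size=(n_wave, 3)))

# How the missed reflectance components appear to the observer under this illuminant.
T = H.T @ B_null                                # 3 x 7

# Directions with nonzero singular values are visible to the observer and must be
# estimated; the remaining directions are invisible and can safely be ignored.
_, sv, Vt = np.linalg.svd(T)
rank = int(np.sum(sv > 1e-10 * sv.max()))
B_matters = B_null @ Vt[:rank].T                # cf. Figure 4
B_ignorable = B_null @ Vt[rank:].T
print(rank, B_matters.shape, B_ignorable.shape)
```

For several rendering illuminants, the same calculation applies with the human channels for each illuminant stacked into a single matrix.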

From the known to the unknown

We now have two orthogonal groups of basis functions. One group comprises the basis functions that are accurately measured by the scanner. The second group comprises the basis functions that are unseen by the scanner but that would be needed to render the surfaces under one of the chosen illuminants. The last piece of the puzzle, then, is how to estimate the unknown weights of the relevant surface basis functions.

The only information we have to estimate the unknown weights comes from the statistical analysis of the database of surface reflectance functions. The problem, then, is to understand the functional relationship between the weights that can be measured by the MARC scanner and those that cannot. Figure 5 shows a series of plots in which the first unknown weight (weight 4) is plotted as a function of two of the measured weights. There are plainly significant relationships between the known (measured) and unknown weights. Improved estimates of the surface reflectance functions can only come from a better understanding of this relationship.

Figure 5. The unknown weights are systematically, though nonlinearly, related to the known weights. These graphs show how the first unknown weight (weight 4) depends on each of the first two measured weights.

The simplest method for predicting the unknown weights from the known weights is via a linear transformation. This method is common practice, though it is usually expressed as finding a linear transformation from the measured RGB values to the tristimulus values. Widespread experience with this method tells us that it is not very accurate: errors are typically on the order of 5 ΔE in CIELAB under D65, and they can sometimes be significantly larger. From the analysis of the estimation problem we conclude that the proper way to improve the performance of a fixed system is to improve the methods used for estimating the unknown weights from the known weights. The mathematics of such procedures is fairly straightforward. Performance depends on the properties of the surfaces (can the unknown weights be predicted from the measured weights?) and on how complete and representative the database is.
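
A minimal sketch of this linear estimator is given below, with random weights standing in for the database: the map from known (scanner-visible) to unknown (scanner-invisible) weights is fit by least squares over the database and then applied to a new measurement. A nonlinear regressor fit the same way would be the natural refinement suggested by the relationships in Figure 5.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in database of model weights: 189 surfaces x 10 basis-function weights.
# In practice these come from projecting the measured pigment spectra onto the basis.
W = rng.normal(size=(189, 10))
W[:, 3:] += 0.5 * W[:, :3] @ rng.normal(size=(3, 7))   # induce a known/unknown relationship

known = W[:, :3]                    # weights the scanner can measure
unknown = W[:, 3:]                  # weights the scanner cannot measure

# Simplest estimator: a linear map from known to unknown weights, fit by least
# squares over the database (analogous to the usual RGB-to-tristimulus transform).
A, *_ = np.linalg.lstsq(known, unknown, rcond=None)

# Predict the unknown weights of a new surface from its measured weights.
w_known_new = rng.normal(size=3)
w_unknown_est = w_known_new @ A
print(w_unknown_est)
```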

Conclusion

We have analyzed the problem of estimating the surface reflectance functions of paintings in a digital archiving system. The spectral information measured by a digital archiving system can be specified from knowledge of the spectral transfer function and a database of surface reflectance functions. The unmeasured parts of the surface reflectance functions can also be specified. Of these unmeasured parts, we show how to determine the portion that is important to the human visual system given the scope of rendering illuminants. The ability to use the information measured by the digital archiving system to infer the unmeasured portions of the surface reflectance functions depends on the relationship between the terms the system can measure and the terms the system cannot measure but the human can see. The accuracy of these inferences will depend on the properties of the surfaces and the completeness of the database available for characterizing the surface reflectance functions.

The main reason for estimating the surface reflectance function, and not just the surface tristimulus coordinates, is that from knowledge of the full function it is possible to render images under a multiplicity of illuminants. The main objective of this analysis is to help us understand what must be characterized to improve the quality of the measurements. We have concluded that the functional relationship between the measured and invisible components of the surface reflectance basis should be the main target of future analyses.

References

[1] K. Martinez, J. Cupitt, and D. Saunders, High resolution colorimetric imaging of paintings, Proc. SPIE, Vol. 1901, pp. 25-36, Jan. 1993.

[2] J. Cupitt, K. Martinez, and D. Saunders, A methodology for art reproduction in colour: The MARC project, Computers and the History of Art, Vol. 6(2), pp. 1-19, 1996.

[3] J. Cohen, Dependency of the spectral reflectance curves of the Munsell color chips, Psychonomic Science, Vol. 1, pp. 369-370, 1964.

[4] D. B. Judd, D. L. MacAdam, and G. W. Wyszecki, Spectral distribution of typical daylight as a function of correlated color temperature, Journal of the Optical Society of America, Vol. 54, p. 1031, 1964.

[5] L. T. Maloney and B. A. Wandell, Color constancy: a method for recovering surface spectral reflectance, Journal of the Optical Society of America A, Vol. 3, pp. 29-33, 1986.

[6] D. H. Brainard and W. T. Freeman, Bayesian color constancy, Journal of the Optical Society of America A, Vol. 14, pp. 1393-1411, 1997.