Color Balance in LASER Scanner Point Clouds


Institut für Parallele und Verteilte Systeme
Abteilung Bildverstehen
Universität Stuttgart
Universitätsstraße 38, D-Stuttgart

Master Thesis Nr.

Color Balance in LASER Scanner Point Clouds

Umair Nasir

Course of study: INFOTECH
Examiner: Prof. Dr. rer. nat. habil. Paul Levi
Supervisor: PD Dr. rer. nat. Michael Schanz
Commenced on: 1st February 2013
Completed on: 3rd August 2013
CR classification: D.2.2, D.2.3, D.3.3, D.3.4, E.1, F.2.1, F.3.3, G.1.1, G.3, I.4.3, I.4.8, I.6.2

Abstract

Color balancing is an important domain in the field of photography and imaging. Its use is necessitated by the color inconsistencies that arise due to a number of factors before and after capturing an image. Any deviation from the original color of a scene is an irregularity which is dealt with by color balancing techniques. Images may deviate from their accurate representation because of different ambient illumination conditions, the non-linear behavior of the camera sensors, the conversion from the wide color gamut of the raw camera format to a file format with a narrower color gamut, and so on.

Many approaches exist to correct color inconsistencies. One of the basic techniques is to perform a histogram equalization to increase the contrast in an image by utilizing the whole dynamic range of the brightness values. To remove color casts introduced due to false illuminant selection at the time of image capture, many white balancing techniques exist. White balancing can be employed before image capture right in the camera, using hardware filters with dials to set the illuminant conditions in the scene. A lot of research has also been done regarding the effectiveness of white balancing after image capture; the choice of color space and file format is quite important to consider before white balancing. Another side to color balancing is color transfer, whereby the image statistics of one image are transferred to another image. Histogram matching is quite widely used to match the histogram of a source image to that of a target. Another statistical approach to color transfer is to match the mean and standard deviation of a source image to those of a target image. These two approaches for color transfer are analyzed and tested in this thesis on images displaying the same scene but with different color casts. Color transfer matching the means and standard deviations is selected because of its superior color balancing and ease of implementation.

While a lot of color balancing work has been done on 2D images, no significant work has been done in the 3D domain. There exist 3D scanners which scan a scene to build its 3D model. The 3D equivalent of the 2D pixel is a scan point, which is obtained by reflecting a laser beam from a point in a scene. Hundreds of thousands of such points make up a single scan, which displays the scene that was in the view of the 3D scanner. Because a single scan cannot capture scenes behind obstructions or scenes out of the scanner's range, multiple scans are undertaken from different positions and at different times. Multiple scans grouped together make up a data structure called a point cloud. Due to these changes in position and time, luminance conditions alter. As a result, the scan points from different scans representing the same scene show considerably different color casts. Color balancing by matching the means and standard deviations is applied on the point cloud. The color inconsistencies, such as sharp color gradients between points of different scans and the presence of stray color streaks from one scan into another, are greatly reduced. The results are quite appealing, as the resulting point clouds show a smooth gradient between different scans.

Acknowledgements

First of all, I am very thankful to Dipl. Inf. Joachim E. Vollrath for his valuable time and suggestions during the supervision of my thesis. I would also like to thank PD Dr. Michael Schanz for giving me the opportunity to work in his department. Finally, I would like to recognize the continuous support of my family, which has always been there throughout my studies.

Table of Contents

Abstract
Acknowledgements
Table of Contents
List of Figures
Acronyms
1. Introduction
  1.1 LASER Scanner and Point Clouds
  1.2 Luminance and Chromaticity Artifacts in Scans
  1.3 Color Balancing
2. Background and Related Work
  2.1 Color Balancing
    2.1.1 White Balancing for Color Cast Correction
    2.1.2 Histogram Matching and Histogram Warping
    2.1.3 Mean and Standard Deviation Matching
  2.2 Color Spaces for Color Balancing
    2.2.1 Decoupling of Chromaticity and Luminance
    2.2.2 Color Space Conversions
3. Experimental Evaluation in MATLAB
  3.1 Data Acquisition
  3.2 Experiments in MATLAB
    3.2.1 Histogram Matching
    3.2.2 Mean and Standard Deviation Matching
      3.2.2.1 Color Transfer with Multiple Targets
    3.2.3 Interpolation of Means and Standard Deviation
4. Implementation on 3D Point Clouds
  4.1 LASER Scanner Point Clouds
    4.1.1 Scan point
    4.1.2 World Space and Construction Space
    4.1.3 Point Group
  4.2 Comparison between 2D and 3D
  4.3 Implementation of Color Transfer Algorithm
    4.3.1 Populating required data for Color Transfer
    4.3.2 Implementation Flow Graph
    4.3.3 Interpolation of Neighbor's Means and Standard Deviations
    4.3.4 Color Balancing Artifacts - Stains
5. Evaluation
  5.1 Histogram Matching in 2D
  5.2 Color Transfer Algorithm in 2D
  5.3 Color Transfer Algorithm on 3D Point Clouds
6. Summary and Future Work
Bibliography

List of Figures

Figure 1.1 Screenshot of a 3D Point Cloud with color inconsistencies
Figure 2.1 Images without and with White Balancing [17]
Figure 2.2 Histogram Matching followed by Gradient Preservation [14]
Figure 2.3 Result of Color Histogram Warping [23]
Figure 2.4 Result of Color Transfer from Image b to Image a [15]
Figure 2.5 Showing Correlation between pairs of Channels in RGB and Lαβ for the input image [13]
Figure 2.6 Performance of color transfer on different image ensembles in different color spaces [13]
Figure 3.1 Scans of the same scene taken in the morning (above) and the evening (below)
Figure 3.2 Red channel of the morning (source) and evening (target) images of the room
Figure 3.3 Displaying the CDFs for the red channel of the source and target images
Figure 3.4 Histogram matching. Pixel with value s transformed to t
Figure 3.5 Color Transfer by matching means and standard deviations
Figure 3.6 Showing a mask overlapping a quadrant each from cells 1, 2, 5 and 6
Figure 3.7 A Pixel in the top left corner of Cell 6
Figure 4.1 FARO LASER Scanner [3]
Figure 4.2 Calculation of Source's means and standard deviations
Figure 4.3 Flow Graph of Color Transfer Algorithm in 3D
Figure 4.4 Rubik Cube: The point group lies in the center-most cell
Figure 4.5 Showing 8 octants of a 3D cell
Figure 4.6 Red Mask overlapping one Octant (gray) of the Blue Cell
Figure 4.7 Mask with eight mean values at its corners
Figure 4.8 Cross section of a wall manifesting the Stain Artifact
Figure 4.9 Red and Yellow cast on the wall after color balance
Figure 4.10 A wall showing a brownish stain
Figure 4.11 Showing 8 neighbor cells of the Cell N
Figure 5.1 Morning and Evening images of the same scene
Figure 5.2 Morning image matched to Evening image
Figure 5.3 RGB channels of morning image matched to target image
Figure 5.4 Histograms for the L, a, and b channels of Source, Target and Matched Image
Figure 5.5 Morning Image matched to Evening in CIELab space
Figure 5.6 Image 1-2: original images. Image 3-4: matched images
Figure 5.7 Cell to Cell Color Transfer
Figure 5.8 Result after Interpolation
Figure 5.9 Showing Red, Green and Blue shades
Figure 5.10 Red, Green and Blue Images matched to a combined Target
Figure 5.11 Point Cloud showing two Scans
Figure 5.12 Color Balanced without Interpolation
Figure 5.13 Final Point Cloud after Color Transfer with Interpolation
Figure 5.14 A Point Cloud with several scans
Figure 5.15 Color Balanced Image with Stains
Figure 5.16 Color Balanced Point Cloud with no Stains
Figure 5.17 Color Balancing Artifact

Acronyms

LASER: Light Amplification by Stimulated Emission of Radiation
CDF: Cumulative Distribution Function
CIE: Commission Internationale de l'Eclairage
HSV: Hue Saturation Value
PCA: Principal Component Analysis
LMS: Long, Medium, Short
STL: Standard Template Library
JPEG: Joint Photographic Experts Group

1. Introduction

1.1 LASER Scanner and Point Clouds

A LASER scanner is a device which uses laser beams to probe any object lying in its field of view. Each laser beam that is reflected back from the object is used to calculate the distance of the point which reflected it. A collection of millions of such points, taken by changing the angle of the laser emitter, helps in the construction of 3D models of the subjects. These points are called scan points. Besides the distance information, the color information of the points can also be recorded.

A laser scanner has many uses. One use is in video games, where the infrared laser scanner is known as Kinect and is a part of the Xbox 360 [1]. The scanner identifies the player and his movements in real time. It is also used in the field of robotics for the identification of obstructions and other objects in the path of a robot [2]. Another very common use for a laser scanner is in medicine for designing prosthetics. In industry, laser scanners facilitate high precision 3D measurement and 3D image documentation. One example of such a laser scanner is the FARO Focus3D Laser Scanner [3].

A laser scanner such as the FARO Focus3D can generate millions of scan points in a few minutes. A collection of all the scan points constitutes a data structure called a point cloud [4]. Only those surfaces that lie in direct view of the scanner can be reconstructed. To get the other surfaces or obstructions into view, the position of the laser scanner is changed, new complete scans are undertaken, and the points of the multiple scans are combined. Thus one point cloud can represent one or more individual scans.

1.2 Luminance and Chromaticity Artifacts in Scans

A problem arises when more than one scan is combined into a point cloud. While taking a scan at a different time and from a different position, the lighting conditions may have changed, with the effect that the scan points representing the same object now vary in luminance and chromaticity. Besides external changes in the environment, there are internal effects as well, such as variation in the laser scanner's internal camera signal response and the time-varying non-linear automatic gain control of the camera [5].

In a color model, luminance and chromaticity together make up a color. A color space comprises at least three components, as in RGB, HSV, YUV, CMYK and so on. Luminance measures the brightness of light. The chromaticity component itself is made up of two further components, which are hue and saturation [6].

Hue is the attribute with which colors can be classified as red, blue, green, yellow and the colors that are formed by mixing them [7]. These are also called the tones of color or chromatic colors. Saturation is the component of chromaticity which describes the purity of a hue or of a chromatic color. In the presence of any achromatic color such as gray, the saturation is decreased, and it is eliminated when the color becomes absolutely gray [8].

After the scans have been taken by the laser scanner and a point cloud is generated, objects representing the same scene consist of points that originate from different scans. Although this does not have any effect on the basic shape of any object, it does cause visual imperfections. Sometimes very sharp gradients appear in point clouds at the junctions where multiple scans meet. Sometimes scan points of different luminance and chromaticity appear in small chunks or streaks between the scan points of another scan.

Figure 1.1 Screenshot of a 3D Point Cloud with color inconsistencies

Figure 1.1 shows a 3D point cloud which consists of a number of individual scans. We see sharp gradients between two scans on the floor. On the left side of the wall, there are some streaks from a scan with a brownish color cast. As a whole the complete wall is visually unappealing due to color inconsistencies between different scans.

1.3 Color Balancing

Color balancing or color correction is a method which makes adjustments to the chromatic and achromatic colors in order to reduce color inconsistencies that might appear during image capture. In other words, color correction is the recovery of the actual scene characteristics in an image. The

characteristics that need correction are color, contrast and sharpness [9]. Color inconsistencies can arise both in neutrals such as gray and in colors. The latter is said to have a color cast.

Adjustment of the neutrals is called white balancing or gray balancing. The goal of white balancing is that a neutral color in the original scene should also be neutral in the captured scene. For this purpose all the colors in an image are scaled by a value so that an affected neutral point or patch appears neutral again. White balancing can be employed before capturing an image by estimating the illuminant, followed by some filter, so that a neutral object actually appears neutral. The Von Kries Transform is one method that applies gains to the spectral sensitivity responses in the cones of the eyes. The gains are adaptable depending upon the illuminant [10] and are applied in the LMS color space. LMS stands for long, medium and short wavelengths, because the three types of cones in the eye are sensitive to these three wavelength ranges. Von Kries Adaptation is one feature of camera image processing. In addition to the digital filtering within a camera, another method to apply white balancing is to physically put color filters over the lens of the camera. For a wide variety of illuminants, filter wheels are used which can be rotated to adjust for different illuminants. White balancing can also be performed at a later stage on the monitor, but this causes additional color inconsistencies. In [11] Stephen Viggiano studied six methods of white balancing at different stages and concluded that balancing in the camera's native RGB produces the best results. This is because a raw image is minimally processed and exhibits a higher dynamic range compared to other formats, which are obtained after processing the raw image. The higher dynamic range means that the color gamut is broad, because each color channel is represented with 12, 14 or 16 bits compared to 8 bits in the JPEG file format.

Color balancing also removes color casts which may appear in images for different reasons, such as the filters used in the camera during capture, printing and scanning, or the way the original scene was illuminated [12]. Such color casts can be removed by first identifying the particular color cast and then using physical color correction (CC) filters of varying density, depending upon the intensity of the color cast [12]. The color casts can also be removed later in software using color correction algorithms [13] [9] [14].

Contrast is the difference in the brightness level for any channel of a color space. The greater the difference in brightness, the higher the contrast, and the more visually appealing the result. Color correction by contrast adjustment is most widely done by histogram equalization, which spreads the histogram of a color channel over the whole dynamic range of the brightness levels.

Another possibility of color balancing is color transfer [15]. With color transfer the characteristics of one image are transferred to another image through some color transfer function. The characteristics can be the histogram of an image or some other image statistics such as mean, standard deviation or kurtosis. The color transfer function for modifying the histogram of the source image to match that of a target image is called the cumulative distribution function (CDF).

The luminance and chromaticity inconsistencies discussed in Section 1.2 have been successfully eliminated in 2D images [13] [9] [16] [12]. In this thesis report the removal of the color inconsistencies that arise in point clouds is discussed. Chapter 2 deals with the previous work that has been done in the field of color balancing. It lists some of the color balancing algorithms and also explains the underlying concepts of color correction. The importance of a color space for color correction is also highlighted in this chapter, along with some explanation of the formulae for color space conversions. In Chapter 3, the experimental evaluation of two of the color balancing algorithms is described, based on which one of these algorithms is picked. Chapter 4 details the implementation of the selected algorithm, which is followed by the evaluation of the results of Chapters 3 and 4 in Chapter 5.

2. Background and Related Work

2.1 Color Balancing

2.1.1 White Balancing for Color Cast Correction

Under different light sources, a region in a scene that is actually white appears to have a color cast, especially after an image has been captured. Humans are capable of interpreting white objects as white under different illuminants such as tungsten, fluorescent light, direct sunlight, shade, cloudy weather etc. But for a camera this is not possible. White balancing is necessitated when objects that are actually white do not appear white in an image.

White balancing can be applied prior to or after image capture. If it is done prior to taking an image, it can be done as auto white balancing or manual white balancing within the camera. In auto white balancing the camera makes a guess at the most accurate description of the illuminant conditions. This auto option is commonly offered in digital cameras. Prior to image capture, one can also manually set the light source, where a user is presented with possible illuminants such as tungsten, shade, cloudy etc.

Figure 2.1 Images without and with White Balancing [17]

Figure 2.1 shows two images. The left image was taken under standard tungsten bulbs in a room without white balancing. It shows a quite considerable yellow cast which is characteristic of tungsten bulbs. The right image was taken with white balancing after the camera was shown a white paper to adjust its white balancing settings [17]. The color cast reflects the type of illuminant [18].

When a reference white point is known, white balancing is a very simple scaling operation. A simple scaling in RGB color space is shown below.

    [R']   [255/R_w     0        0    ] [R]
    [G'] = [   0     255/G_w     0    ] [G]
    [B']   [   0        0     255/B_w ] [B]

R', G' and B' are the coordinates of a pixel which is the result of white balancing. R, G and B belong to the original pixel which has the color cast. In the 3x3 scaling matrix, R_w, G_w and B_w are the average RGB values of the reference white point(s) affected by the color cast of the illuminant. The value of 255 in each of the diagonal entries of the scaling matrix represents the actual RGB value of a white point.

Generally white balancing occurs in two basic steps. The first step is to gather information about the illuminant. The illuminant can be known, or it can be deduced by analysis of the image as well. The second step is to scale the values of the pixels with a matrix whose values differ for different illuminants.

A very simple and very common method of white balancing is the gray world assumption. It states that any image contains equal amounts of red, green and blue. If an image doesn't live up to this assumption, then correction factors are applied for scaling the RGB values so that the final image holds on to the gray world assumption. In [19] Nguyen et al used the concept of the gray world assumption for illuminant estimation. Images in a database were captured under different single light sources, and the scale factors needed to make the resulting images adhere to the gray world assumption were recorded. These scale factors correspond to individual illuminants. The results were reasonable, but the gray world assumption is not always true. In images where a particular color dominates, such as sky, oceans or green grass, the gray world assumption fails [18].

In [18] Manuel Innocent describes another method for illuminant estimation. For an image, a weighted average of the color values is calculated. With this value the illuminant is determined. Next, based on the illuminant as well as the color values of some reference white points in the image, the scaling factors for white balancing are calculated.

White balancing can be applied in various stages. As mentioned in Section 1.3, white balancing manifests the best results when done on raw images in the camera itself rather than later on the monitor. White balancing can also be applied in different color spaces. Xiao et al in [20] experimented with white balancing in various color spaces such as XYZ, Bradford, camera sensor RGB and sharpened RGB. They presented a general formula for white balancing:

    RGB_out = F_out * D * F_in * RGB_in

RGB_in is a 3x1 matrix corresponding to the original pixel which needs to be white balanced.

F_in is the 3x3 matrix that converts RGB to any other space where white balancing is to be performed. D is the 3x3 matrix with scale factors dependent upon the illuminant under which the image was taken. F_out is a 3x3 color space conversion matrix which converts the result back to the RGB space. Xiao et al concluded that XYZ and sharpened RGB performed almost equally well, while Bradford and camera sensor RGB performed poorly.

2.1.2 Histogram Matching and Histogram Warping

Histogram matching or histogram specification is a method that maps the histogram of one image to the histogram of another image taken as a target. Histograms have many applications, such as fast image retrieval, where images with a similar histogram are searched for among a pool of images [21]. They are also used in gamut mapping techniques [22]. Histogram matching can be applied in 1D grayscale or 3D color spaces, but the main idea is the same: the source histogram has to match the target histogram. In theory, the source histogram can match the target histogram exactly in the continuous domain, but in the discrete domain with a finite number of pixels and quantization levels this is not always the case.

In order to do color mapping through histogram matching, first of all the dynamic range of the intensity values of both the source (S) and target (T) images is divided into intensity ranges called bins. These bins act as the random variables, denoted here with x for the source and y for the target image. The histograms for these random variables are calculated for both images. Let the histograms or probability distributions be denoted as p_S(x) for the source and p_T(y) for the target. A cumulative distribution function (CDF) is determined for the two probability distributions, denoted as F_S(x) and F_T(y) respectively. The CDF is a monotonically increasing function starting at 0 and ending at 1. To find the value of y for each source pixel value x, the following transformation T is used:

    y = T(x) = F_T^(-1)(F_S(x))

where F_T^(-1) is the inverse function of F_T.

According to Neumann in [21], histogram matching generates results which lack spatial coherence such as gradients, neighborhoods and topological characteristics. The edges and overall structure are conserved nonetheless. The gradient can be preserved by a variation of the histogram specification implementation [14], whereby Xiao et al apply an additional optimization step to preserve the source gradient map after the histogram specification has performed the color transfer.

Figure 2.2 Histogram Matching followed by Gradient Preservation [14]

Figure 2.2 shows the result in image 3 after Xiao's color transfer and gradient preservation is applied on image 1 to match image 2.

The problem with histogram matching is that it can lead to contouring effects with spikes and dips in the resulting histogram. The spikes result when many pixels in the neighborhood of a particular bin accumulate into the same bin, at the same time causing immediate dips in the neighboring bins. Histogram warping is a modified version of histogram matching which divides the source and target histograms into quantiles, with an adaptive matching function for each of the quantiles. If the number of quantiles is equal to the number of bins in both the source image and the target image, what we have is the original histogram specification [23]. By selecting different numbers of quantiles in the source and target histograms, one can match the target to any desired level of accuracy, easily controlling histogram stretching and expansion. Grundland et al in [23] broke down the 3D histogram matching into three 1D histogram matchings in a space which is a derivation of CIELab, referred to in the paper as CIELa'b'. It is derived by applying principal component analysis on CIELab, followed by independent component analysis, so that the resulting space is maximally decorrelated. They also applied a monotonic interpolating spline to increase the contrast after the histogram warping has been performed.

Figure 2.3 Result of Color Histogram Warping [23]

Figure 2.3 shows the result of histogram warping where the top original images swap each other's colors, as shown in the bottom row.

2.1.3 Mean and Standard Deviation Matching

Another category of image enhancement is matching the image statistics of a source image to a target image. The source image and the target image may not be depicting the same scene. This has the effect of transferring the color cast of one image completely to another without changing the texture of the image being modified. The algorithm basically computes the means and standard deviations for each of the three color channels of the complete source and target images. Then each pixel of the source image is given a translational and scale effect based on the statistics computed. The formula is very simple and computationally inexpensive. Let a pixel be P, whose old value is P_old and whose new value to be calculated is P_new. Let the means and standard deviations for the source and target be µ_s, σ_s and µ_t, σ_t respectively. Then the color transfer algorithm is

    P_new = (P_old - µ_s) * (σ_t / σ_s) + µ_t

For each pixel, first the mean of the whole source image is subtracted and then a scale operation by a factor of σ_t/σ_s is applied, which matches the standard deviation to the target. Lastly the addition of the term µ_t matches the mean to the target as well.
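A minimal sketch of this per-channel statistics transfer in C++ might look as follows; treating a channel as a flat array of doubles and the function names are assumptions made for illustration, not part of any implementation described in this report.

    #include <cmath>
    #include <vector>

    // Mean and standard deviation of one color channel.
    struct Stats { double mean, stdDev; };

    Stats channelStats(const std::vector<double>& channel) {
        double sum = 0.0, sqSum = 0.0;
        for (double v : channel) { sum += v; sqSum += v * v; }
        const double n = static_cast<double>(channel.size());
        const double mean = sum / n;
        return { mean, std::sqrt(sqSum / n - mean * mean) };
    }

    // Apply P_new = (P_old - mu_s) * (sigma_t / sigma_s) + mu_t to each value.
    void transferChannel(std::vector<double>& source, const Stats& s, const Stats& t) {
        const double scale = t.stdDev / s.stdDev;
        for (double& v : source)
            v = (v - s.mean) * scale + t.mean;
    }

The same two functions are applied to each of the three channels independently, each with its own statistics.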

The use of this color transfer algorithm based on image statistics in different color spaces is found in [13], [15] and [24]. Figure 2.4 shows the result of color transfer in [15] in Lαβ space. The color cast of the sunset in image b is transferred to image a.

Figure 2.4 Result of Color Transfer from Image b to Image a [15]

This color transfer algorithm matching the image statistics performs particularly well when the composition of both images is more or less the same. In Figure 2.4, both the source image and the target image have a similar composition showing sun, clouds, sea and sky. The results might not be appealing when the compositions differ. In such cases, separate swatches can be taken from different objects in the images, the means and standard deviations calculated for these swatches, and color transfer applied separately for each of the swatches [15].

This report also deals with cases where the scene is the same but with completely different color casts. Such is the case with point clouds, where two or more scans represent the same scene but have scan points of different colors. Point clouds are different from images in that images consist of pixels in 2D, but point clouds contain scan points that may fall anywhere in 3D space. There might be regions without any scan points at all. Furthermore, in 3D the same scene might comprise even more than two scans, in which case the calculation of target means and standard deviations wouldn't be as trivial as seen in this section.

2.2 Color Spaces for Color Balancing

A color space is a way to express, specify and visualize colors [25]. A number of color spaces exist, such as RGB, YUV, CMYK, CIELab, HSV and so on. All vary in certain characteristics, such as the color gamut, device independence, independence of the chromaticity and luminance components, correspondence to human visual perception etc. The color gamut represents the range of colors that can be represented by a given color space.

Each color space is suited for different domains. For example, RGB is suitable for displays because of the ease of system design. CMYK (Cyan, Magenta, Yellow, Black) is used in color printing. CMYK is useful for color printing because it is a subtractive color space, which means that each of the four pigments is applied to a white surface to create the final color [26]. The YUV color space is derived from RGB, where Y stands for luminance and U and V are color difference channels. The YUV color space is useful for analog television broadcasting because of its low bandwidth. CIEXYZ is an international standard developed by the CIE (Commission Internationale de l'Eclairage), and the major advantage of this color space is that it encompasses the gamut of most other color spaces such as RGB, CIELab etc. Furthermore it is device independent, which means that this color space is independent of the image capturing device as well as the display device. All the color spaces that are derived from CIEXYZ are also device independent.

Then there are color spaces that have the luminance and chromaticity decoupled. Examples of such spaces are HSV, CIEXYZ, CIELab etc. The decoupling of luminance and chromaticity is particularly useful in image enhancement methods such as those discussed in Sections 2.1.2 and 2.1.3, and for rendering, noise removal, segmentation and object recognition [27]. In the subsections we discuss what exactly decoupling is, its advantages, and how to find the extent of decoupling through independent component analysis. Lastly, the color space conversions that are used in the course of this report are discussed.

2.2.1 Decoupling of Chromaticity and Luminance

With image enhancement algorithms, where it is important to do computations on all the color channels, a problem arises: if one of the channels is adjusted, the overall luminance and chromaticity of the image are disturbed. Thus an adjustment in one channel disturbs the other two channels. For example, if one needs to reduce a red cast in an RGB image, this would also reduce the overall brightness in the image. This would necessitate that all three channels are adjusted at the same time. Such color spaces are said to have correlated color channels [13]. Color spaces which have orthogonal color channels have zero correlation between them. For image enhancements it is a lot easier to work in color spaces that are decorrelated to a considerable degree, as it turns a 3D problem into three 1D problems.

An important work in the field of color transfer in a decoupled color space is by Erik Reinhard et al, who used the recently derived color space Lαβ [15]. Ruderman et al developed this perception-based color space derived from the LMS space, where L, M and S stand for the Long, Medium and Short wavelengths to

which the human eyes are sensitive. This color space shows very little correlation between the channels of any pixel [28].

Covariance is a method to compute the correlation between any pair of the three color channels. Figure 2.5 shows the extent of decorrelation for the input image shown, in the RGB and Lαβ color spaces, after applying PCA. It is easily observable that the decorrelation for any pair in the RGB color space is absolutely nonexistent. For Lαβ, it can easily be seen that the correlation is quite reduced.

Figure 2.5 Showing Correlation between pairs of Channels in RGB and Lαβ for the input image [13]

Erik Reinhard et al in [13] studied a set of color spaces and used the covariance between pairs of channels to find out the decorrelation between the three color channels for a variety of image ensembles: natural day [ND], manmade day [MD], indoors [IN] and night [N]. They also analyzed the success of these color spaces on the color transfer algorithm discussed in Section 2.1.3. The color spaces tested include Lαβ, CIELab with illuminants D65 and E, Yuv, HSV, XYZ, RGB etc. as well as ensemble-based, maximally decorrelated spaces computed by Principal Component Analysis (PCA) [29]. The results of the covariance analysis showed that CIELab with illuminant E produced the most decorrelated channels for each category of input images. RGB and XYZ produced very poor results.

The color transfer algorithm of Section 2.1.3 was applied to all the image ensembles in all the color spaces. Figure 2.6 shows the results as the percentage success of each color space in each image category. The percentage success is a subjective result, whereby the two authors of [13] independently analyzed the color transferred images, selecting the results that were believable and ignoring the ones that weren't visually appealing. The black bar shows the average result for all the image types.
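The covariance test used in these comparisons is straightforward to compute. A small C++ sketch follows; the flat-array channel layout and function name are assumptions made for the example.

    #include <vector>

    // Covariance between two color channels of equal length.
    // A value near zero indicates largely decorrelated channels.
    double channelCovariance(const std::vector<double>& a,
                             const std::vector<double>& b) {
        const double n = static_cast<double>(a.size());
        double meanA = 0.0, meanB = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i) { meanA += a[i]; meanB += b[i]; }
        meanA /= n;
        meanB /= n;
        double cov = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i)
            cov += (a[i] - meanA) * (b[i] - meanB);
        return cov / n;
    }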

Figure 2.6 Performance of color transfer on different image ensembles in different color spaces [13]

It is easily observed that CIELab with both illuminants E and D65 does the best color transfer. The results of [13] also took into account the color spaces generated by PCA. PCA was applied on the source image and the target image separately. It was concluded that a color space derived through PCA on the target image produced quite good results for color transfer. However, CIELab with illuminant E still performed best. The fact that a color space has to be derived through PCA for each image in every run makes it an unfeasible approach for color transfer, especially when CIELab with illuminant E already performs better.

2.2.2 Color Space Conversions

Based on the results of [13], the best color transfer with respect to color spaces occurs in the CIELab color space. The images and the scans available contain pixels and scan points respectively with colors in the RGB color space. To apply the color transfer algorithm of [15], it is required that the color space be converted from RGB to CIELab. The following details the method to convert the colors to CIELab and then back to RGB after color transfer has been performed.

To get to CIELab, the color values must first be converted to an intermediate space known as CIEXYZ. The human eye has three types of cones, which are sensitive to long, medium and short wavelengths. These three components make up three tristimulus values. The tristimulus values are strongly related to color perception by the eyes. In 1931, CIEXYZ was introduced as the first mathematically defined color space by the International Commission on Illumination (CIE). The CIEXYZ color space covers all the possible color perceptions of humans. Therefore this color space often acts as a building block for conversions to various color spaces such as CIELab. In CIEXYZ, Y corresponds to the luminance, Z stimulates the blue hue, and X is a mixture of the three response curves of the cones. Thus X and Z together define the chromaticity [26].

The color conversion formulae are taken from [30]. The conversion from RGB to CIELab and back is shown as pseudocode below.

RGB to CIEXYZ

First of all the values of RGB are normalized between 0 and 1.

    var_r = (R / 255)
    var_g = (G / 255)
    var_b = (B / 255)

    if (var_r > 0.04045) var_r = ((var_r + 0.055) / 1.055)^2.4
    else                 var_r = var_r / 12.92
    if (var_g > 0.04045) var_g = ((var_g + 0.055) / 1.055)^2.4
    else                 var_g = var_g / 12.92
    if (var_b > 0.04045) var_b = ((var_b + 0.055) / 1.055)^2.4
    else                 var_b = var_b / 12.92

    var_r = var_r * 100
    var_g = var_g * 100
    var_b = var_b * 100

    X = var_r * 0.4124 + var_g * 0.3576 + var_b * 0.1805
    Y = var_r * 0.2126 + var_g * 0.7152 + var_b * 0.0722
    Z = var_r * 0.0193 + var_g * 0.1192 + var_b * 0.9505

Once in CIEXYZ, CIELab is derived by nonlinearly compressing (cube root) the CIEXYZ values. The L stands for luminance, and the a and b channels are color opponent channels. CIELab encompasses the gamut of RGB.

CIEXYZ to CIELab

    var_x = (X / ref_x)
    var_y = (Y / ref_y)
    var_z = (Z / ref_z)

The values of ref_x, ref_y and ref_z depend upon the illuminant. For illuminant E, the values are 100 each. For D65, the values are 95.047, 100 and 108.883 respectively.

    if (var_x > 0.008856) var_x = (var_x)^(1/3)
    else                  var_x = (7.787 * var_x) + 16/116
    if (var_y > 0.008856) var_y = (var_y)^(1/3)

    else                  var_y = (7.787 * var_y) + 16/116
    if (var_z > 0.008856) var_z = (var_z)^(1/3)
    else                  var_z = (7.787 * var_z) + 16/116

    CIE_L = (116 * var_y) - 16
    CIE_a = 500 * (var_x - var_y)
    CIE_b = 200 * (var_y - var_z)

Now the algorithm in [15] can be applied on the CIELab values, followed by conversion of the CIELab values back to RGB via CIEXYZ, which is basically the inverse of the RGB to CIELab conversion.

CIELab to CIEXYZ

    var_y = (CIE_L + 16) / 116
    var_x = (CIE_a / 500) + var_y
    var_z = var_y - (CIE_b / 200)

    if (var_x^3 > 0.008856) var_x = (var_x)^3
    else                    var_x = (var_x - 16/116) / 7.787
    if (var_y^3 > 0.008856) var_y = (var_y)^3
    else                    var_y = (var_y - 16/116) / 7.787
    if (var_z^3 > 0.008856) var_z = (var_z)^3
    else                    var_z = (var_z - 16/116) / 7.787

    X = var_x * ref_x
    Y = var_y * ref_y
    Z = var_z * ref_z

CIEXYZ to RGB

    var_x = (X / 100)
    var_y = (Y / 100)
    var_z = (Z / 100)

    var_r = var_x *  3.2406 + var_y * -1.5372 + var_z * -0.4986
    var_g = var_x * -0.9689 + var_y *  1.8758 + var_z *  0.0415
    var_b = var_x *  0.0557 + var_y * -0.2040 + var_z *  1.0570

    if (var_r > 0.0031308) var_r = 1.055 * (var_r^(1/2.4)) - 0.055
    else

                           var_r = var_r * 12.92
    if (var_g > 0.0031308) var_g = 1.055 * (var_g^(1/2.4)) - 0.055
    else                   var_g = var_g * 12.92
    if (var_b > 0.0031308) var_b = 1.055 * (var_b^(1/2.4)) - 0.055
    else                   var_b = var_b * 12.92

    var_r = var_r * 255
    var_g = var_g * 255
    var_b = var_b * 255

These color space formulae work on one pixel at a time.
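As an illustration, the forward conversion can be transcribed into a compact C++ function; the helper names are chosen for the example, D65 is assumed as the reference white, and the inverse direction follows the pseudocode above symmetrically.

    #include <cmath>

    struct Lab { double L, a, b; };

    // Inverse sRGB gamma companding (the first if/else block above).
    static double pivotRgb(double c) {
        return (c > 0.04045) ? std::pow((c + 0.055) / 1.055, 2.4) : c / 12.92;
    }

    // Cube-root compression of the XYZ ratios (the second if/else block above).
    static double pivotXyz(double t) {
        return (t > 0.008856) ? std::cbrt(t) : 7.787 * t + 16.0 / 116.0;
    }

    // RGB in [0,255] to CIELab with reference white D65 (95.047, 100, 108.883).
    Lab rgbToLab(double R, double G, double B) {
        const double r = pivotRgb(R / 255.0) * 100.0;
        const double g = pivotRgb(G / 255.0) * 100.0;
        const double b = pivotRgb(B / 255.0) * 100.0;
        const double X = r * 0.4124 + g * 0.3576 + b * 0.1805;
        const double Y = r * 0.2126 + g * 0.7152 + b * 0.0722;
        const double Z = r * 0.0193 + g * 0.1192 + b * 0.9505;
        const double x = pivotXyz(X / 95.047);
        const double y = pivotXyz(Y / 100.0);
        const double z = pivotXyz(Z / 108.883);
        return { 116.0 * y - 16.0, 500.0 * (x - y), 200.0 * (y - z) };
    }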

3. Experimental Evaluation in MATLAB

This chapter deals with selecting two approaches to color balancing, applying these approaches to 2D panoramic images generated from LASER scanners, and selecting the suitable algorithm for final integration in 3D. The two approaches that are simulated in MATLAB are:

- Histogram Matching
- Mean and Standard Deviation Matching

3.1 Data Acquisition

Using FARO's Focus3D LASER scanner, some scans were generated. The scanner was placed at the same position throughout the experiment so as to capture the same scene in all the scans. However, because the scans were taken at different times, i.e. in the morning, in the afternoon and before sunset, the contrast in chromaticity and luminance was significant to the extent that the auto white balancing of the camera couldn't compensate for this effect. Figure 3.1 depicts the variation in color and luminance in scans taken in a room. The algorithms were also applied on other data sets which were taken from scans at different locations, such as museums. In this chapter, only the scans of the room shown in Figure 3.1 are considered.

Figure 3.1 Scans of the same scene taken in the morning (above) and the evening (below)

3.2 Experiments in MATLAB

Both algorithms aim to match a source image to a given target, but the matching occurs in two different ways. The histogram matching algorithm tries to match the cumulative distribution function of the source to that of the target. On the other hand, the mean and standard deviation matching, as the name implies, matches the source mean and standard deviation to those of the target.

3.2.1 Histogram Matching

For histogram matching, either of the two images is taken as the source and the other as the target. In the subsequent discussion, the morning image is taken as the source and the evening image as the target. Histogram matching matches the source to the target in such a way that the cumulative distribution function (CDF) of the matched image is as close to the CDF of the target as possible. This also means that the resulting histogram of the matched image is as close to the histogram of the target as possible.

In the RGB color space, the intensity values range from 0 to 255, and these intensity values act as the 256 values of the random variable. The histograms are generated by counting the number of pixels for each of the

intensity values and then displaying the results in the form of a graph. A normalized histogram follows the identity

    Σ_i (X_i / N) = 1

where X_i represents the number of pixels with intensity i and N is the total number of pixels in the image.

Each intensity value is called a bin. In RGB, a bin represents discrete integral values of the intensities. In other color spaces, the values of the color channels might take non-integer values, in which case a bin is a range of values. In MATLAB, an algorithm for histogram matching was implemented which took the number of bins as a user-specified input. The greater the number of bins, the smaller the range of each bin and the higher the resolution. For RGB, using more than 256 bins makes no sense because all the intensity values are integers.

Histogram matching was also performed on the images after a color space conversion was carried out from RGB to CIELab. Once in CIELab, the value ranges for the three color channels are different and can possibly take any real value. The L channel, for luminance, ranges from 0 to 100. The a and b channels are basically unbounded but are encoded within a range of -127 to 128. In this color space, because the possible channel values are infinite but the number of pixels is finite, it is normal that a lot of bins do not have any pixel falling in their range.

Figure 3.2 shows the histograms of the red channel of the two panoramic images of the room in RGB. As one would expect, the morning image has more pixels with higher intensity values compared to the darker evening image.
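For illustration, the binning step can be sketched in C++ as follows; the flat array layout and function name are assumptions of the example, not the MATLAB code itself.

    #include <vector>

    // Build a normalized histogram with a user-specified number of bins.
    // Channel values are assumed to lie in [minVal, maxVal].
    std::vector<double> normalizedHistogram(const std::vector<double>& channel,
                                            int numBins,
                                            double minVal, double maxVal) {
        std::vector<double> hist(numBins, 0.0);
        const double binWidth = (maxVal - minVal) / numBins;
        for (double v : channel) {
            int bin = static_cast<int>((v - minVal) / binWidth);
            if (bin < 0) bin = 0;                   // clamp out-of-range values
            if (bin >= numBins) bin = numBins - 1;  // the maximum falls in the last bin
            hist[bin] += 1.0;
        }
        for (double& h : hist) h /= channel.size(); // bins now sum to 1
        return hist;
    }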

Figure 3.2 Red channel of the morning (source) and evening (target) images of the room

For the source image, let the cumulative distribution function be called F_S, and for the target let it be called F_T. The functions F_S and F_T are calculated using the histograms of the source and the target respectively. In general, the formula for the CDF is

    F(x) = P(X <= x) = Σ_{k <= x} P(k)

In this formula P is the distribution function (histogram) as in Figure 3.2 above, and the expression P(X <= x) represents the probability that the random variable X (iterated over by k) takes on values less than or equal to x. The CDF is a monotonically increasing function starting from a value of 0 up to a maximum of 1. For the source and target images above, the CDFs are shown in Figure 3.3.

Figure 3.3 Displaying the CDFs for the red channel of the source and target images

Once the CDFs F_S and F_T are found, a lookup table is generated between the intensity values and the CDF for both the source and the target. The process of histogram matching repeats itself for each and every pixel and works the same way for any pixel. Let's say a pixel from the source image has an intensity level of s. The matching occurs in 3 steps:

1. For intensity s, the value of F_S(s) is read from the table.
2. The values F_S(s) and F_T(t) are on the same level (see Figure 3.4). The value F_T(t) is searched in the lookup table of the target.
3. Then an inverse lookup occurs by finding the intensity value t from the table against this F_T(t).

The three steps are shown in Figure 3.4.
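A compact C++ sketch of this lookup-based matching for one channel might look as follows; the per-bin CDF vectors are assumed to have been built from normalized histograms as described above, and the linear inverse search is kept simple for illustration.

    #include <vector>

    // Cumulative distribution function from a normalized histogram.
    std::vector<double> cdfFromHistogram(const std::vector<double>& hist) {
        std::vector<double> cdf(hist.size());
        double running = 0.0;
        for (std::size_t i = 0; i < hist.size(); ++i) {
            running += hist[i];
            cdf[i] = running;
        }
        return cdf;
    }

    // Steps 1-3: map source intensity s to the target intensity t whose
    // CDF value first reaches F_S(s) (the inverse lookup).
    int matchIntensity(int s, const std::vector<double>& srcCdf,
                       const std::vector<double>& tgtCdf) {
        const double fs = srcCdf[s];                    // step 1: F_S(s)
        for (std::size_t t = 0; t < tgtCdf.size(); ++t) // steps 2 and 3
            if (tgtCdf[t] >= fs) return static_cast<int>(t);
        return static_cast<int>(tgtCdf.size()) - 1;
    }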

Figure 3.4 Histogram matching. Pixel with value s transformed to t

3D histogram matching in the RGB color space can also be undertaken, whereby the input 3D histogram is matched to a joint cumulative distribution function [31]. However, it is not pursued for implementation here.

3.2.2 Mean and Standard Deviation Matching

This color transfer algorithm is based on matching the mean and standard deviation of the source to those of the target image. Let the mean and standard deviation of the source be represented as µ_s and σ_s, and the mean and standard deviation of the target be represented as µ_t and σ_t. As discussed in Section 2.1.3, the mean and standard deviation matching works better when the chromaticity and the luminance are decoupled. Thus, this color transfer algorithm is applied in the CIELab space with illuminant E, which gives the best results according to [13]. Figure 3.5 below depicts the flow of the color transfer algorithm.

Figure 3.5 Color Transfer by matching means and standard deviations

Figure 3.5 shows color transfer by matching means and standard deviations. The values of µ and σ are calculated for each channel independently. Once the means and standard deviations are known, the color transfer algorithm is applied on every pixel of each channel of the source image. The formula for color transfer is:

    L_new = (L_old - µ_s) * (σ_t / σ_s) + µ_t

This shows the color transfer only for the L channel of the CIELab color space; the means and standard deviations of the source and target are likewise shown only for the L channel. However, this formula holds for each channel independently, with its own set of means and standard deviations. The first term, which is a multiplication of the offset (L_old - µ_s) and the ratio of standard deviations σ_t/σ_s, in effect matches the standard deviation of the source to the target, while the addition of µ_t matches the source to the target mean.

After the color transfer has been done, the image is converted back from the CIELab space to RGB. In the process, due to the color transfer algorithm, some values might overshoot or undershoot the RGB gamut, especially if the target mean lies near the upper or lower limits of the RGB gamut. The out-of-gamut values can either be compressed towards the original source value as in [23] or clipped, which

is a more favorable option [32]. In our implementation, the intensity values are clamped at 255 and 0 respectively.

3.2.2.1 Color Transfer with Multiple Targets

It is to be noted that in the above explanation either of the images could be taken as the target. As seen in Figure 1.1, a scene can be represented by more than two scans or images. In such cases it is not wise to take only one of the images as the target. Without prior knowledge of the actual scene, it is very difficult to know which of the images represents the true colors of the scene. Therefore, in the MATLAB implementation, all the images of a scene were taken and an average of the pixel values was calculated for all the images combined. Then, using these average pixel values, a single mean and standard deviation was calculated, and the color transfer formula of Section 3.2.2 was applied. This way it is expected that the source would be matched to a target which is more likely to be the real depiction of the scene.

3.2.3 Interpolation of Means and Standard Deviation

In order to simulate the 3D environment as closely as possible, the images were divided into smaller portions called cells. Then the same color transfer process was followed as shown above in Figure 3.5. Only in this case, we worked on each cell separately, with each cell having its own mean and standard deviation in both the source and the target images. The local cell-by-cell color transfers and the conversion of the balanced image back to RGB resulted in seams along the cell boundaries. This was expected, because the pixel values in different cells were matched to different target means and standard deviations. This is in contrast to a complete image color transfer, where the source image was matched to a single target value for the whole image as in Section 3.2.2.

In order to remove the seams we made use of the bilinear interpolation algorithm [33] to interpolate the source and target means of the immediate neighbor cells before the color transfer algorithm is applied. This was done by introducing a mask with a size equal to that of a cell but centered on the junction of four cells. By centering it on the cells' junction we made sure that the mask covers one quadrant each of four different cells at a time. Then the means and standard deviations of the four different cells were picked up for both the source and the target image. For all the pixels of a cell that fall in the same quadrant of the mask, the set of four neighbors remains the same. The only factor that differs for different pixels is the distance of the pixel from the edges of the mask. The farther the position of the pixel from a neighbor, the smaller the effect of that neighbor's mean and standard deviation on the calculation of the interpolated value.

Let Figure 3.6 represent an image with 12 cells: 3 rows and 4 columns of cells. The figure shows that for all the pixels of the top left quadrant of cell number 6, there is a constant mask which also overlaps cells 1, 2 and 5. Thus for all the pixels in the top left quadrant of cell 6, we would use the means and standard deviations of cells 1, 2, 5 and 6, pass them through an interpolation algorithm (discussed below), and finally get a single interpolated value of the mean and standard deviation. This interpolation

occurs for both the source and target images separately. Similarly, if we move to the top right quadrant of cell number 6, then the neighbor set would be cells 2, 3, 6 and 7.

Figure 3.6 Showing a mask overlapping a quadrant each from cells 1, 2, 5 and 6

Let the means of cells 1, 2, 5 and 6 be µ_1, µ_2, µ_3, µ_4 and the standard deviations be σ_1, σ_2, σ_3, σ_4 respectively. We need to find a single interpolated mean µ_i and a single interpolated standard deviation σ_i at the location of the pixel shown. Figure 3.7 shows the mask of Figure 3.6 more closely and clearly. The means µ_1, µ_2, µ_3, µ_4 for the cells 1, 2, 5 and 6 are shown at the corners of the mask. µ_12 and µ_34 are the interpolated values along the x-axis, while µ_i is obtained by interpolating µ_12 and µ_34 along the y-axis. The normalized distances to the pixel are shown as dx and dy. The interpolated value µ_i is found as follows, first interpolating in the x direction:

    µ_12 = µ_1 * (1 - dx) + µ_2 * dx
    µ_34 = µ_3 * (1 - dx) + µ_4 * dx

Now interpolating in the y direction:

    µ_i = µ_12 * (1 - dy) + µ_34 * dy

Similarly, the interpolated value for σ_i is also calculated.

Figure 3.7 A Pixel in the top left corner of Cell 6

The interpolated mean and standard deviation are found for both the source and the target. Let these be represented as µ_is and σ_is for the source and as µ_it and σ_it for the target. Then the color transfer formula for a pixel value, say L, will be

    L_new = (L_old - µ_is) * (σ_it / σ_is) + µ_it

Now the color transferred pixel value is converted back to RGB. If, in Figure 3.6, a pixel in consideration lies at the boundary of the image, then there would be neighbor cell(s) that are not a part of the image. In such cases, the empty cells are given the mean and standard deviation of the cell in which the pixel lies.
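A minimal C++ sketch of this bilinear interpolation of the four neighbor statistics follows; the orientation of dx and dy is assumed as in Figure 3.7, and the same routine is reused for the standard deviations.

    // Bilinear interpolation of the four neighbor-cell means at a pixel.
    // dx, dy are the pixel's normalized distances inside the mask (0..1).
    double interpolateBilinear(double mu1, double mu2, double mu3, double mu4,
                               double dx, double dy) {
        const double mu12 = mu1 * (1.0 - dx) + mu2 * dx;  // top edge, x direction
        const double mu34 = mu3 * (1.0 - dx) + mu4 * dx;  // bottom edge, x direction
        return mu12 * (1.0 - dy) + mu34 * dy;             // y direction
    }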

4. Implementation on 3D Point Clouds

Chapter 3 discussed the color transfer algorithms applied on 2D images. The mean and standard deviation matching algorithm was selected to be implemented in 3D. Chapter 4 begins by discussing some basic concepts of the 3D point clouds generated through scans taken by LASER scanners. The underlying concepts of a point cloud, such as its existence in world coordinates and its transformation to a construction space, are discussed. The 3D world is compared to the 2D environment to see in which aspects the implementation of the selected color transfer algorithm is similar to the 2D case, and where the differences lie. Finally, the implementation details are explained.

4.1 LASER Scanner Point Clouds

A LASER scanner such as the one shown in Figure 4.1 is capable of generating a point cloud containing millions of points [3]. The point density can be adjusted and can provide extremely detailed 3D images.

Figure 4.1 FARO LASER Scanner [3]

A mirror in the scanner rotates so as to reflect the laser beams in all directions to get a single scan that covers a 360° view around the scanner (except a small region directly below the scanner). Not all scenes can be captured by a single scan, due to obstructions in the path of the laser beams or because the target scene may not be in range. So multiple scans are taken at different positions, and all the scans are combined to generate a very large set of points making up a point cloud.

4.1.1 Scan point

Each and every point in the point cloud is called a scan point. A scan point consists of two important pieces of information: it carries its own RGB value as well as its coordinates in 3D. The RGB values are modifiable in software after the scans have been taken, and this is what is done during the color transfer algorithm.

4.1.2 World Space and Construction Space

A point cloud consists of a hierarchical structure in the form of a workspace tree. Each of the scans in a point cloud has its own space, with the origin being the position of the scanner, represented as a scan node. The world space is the common space built by concatenating the scan nodes, using their individual transformation matrices, towards a root node in the workspace tree. In the software this world space with all the scans and points is converted and reconstructed into another representation called the construction space. This construction is called octree construction: it recursively divides a cubic volume into eight cubic sub-volumes to a specified depth of the octree. The smallest cube obtained by this decomposition is called a construction cell. All the cells are allotted a unique ID and unique integral coordinates. These coordinates increment by one over each cell in each of the three rectilinear axes. This completes the construction space.

4.1.3 Point Group

A point group is a data structure that holds the scan points in a volume equal to that of a construction cell. There is one point group per scan per cell, as long as at least one point from that scan exists in that cell. A point group also holds other information, such as the number of scan points it contains, the ID of the scan to which the scan points belong, the total space in memory the points occupy, the coordinates of the point group etc. The coordinates of the point groups are Cartesian coordinates and are used to derive a single unique number which acts as the unique cell-id of the cell the point group belongs to. The use of the cell-id will be seen in Sections 4.3.1 and 4.3.2. At this point it is important to know that a cell might contain more than one point group if that cell contains point(s) from more than one scan. Similarly, there can be cells that contain only one point group, because only one scan contributed towards this cell. Some cells in the construction space might not contain a point group at all. This means that this particular cell wasn't in the range of any of the scans in the point cloud.
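The exact derivation of the cell-id from the integral cell coordinates is not spelled out here; one plausible sketch, assuming a known maximum number of cells per axis, is a simple linear index (the function and parameter names are hypothetical):

    #include <cstdint>

    // Hypothetical cell-id derivation from the integral construction-space
    // coordinates; dim is an assumed upper bound on the cells per axis.
    uint64_t cellId(uint32_t x, uint32_t y, uint32_t z, uint64_t dim) {
        return (static_cast<uint64_t>(z) * dim + y) * dim + x;
    }

Any such scheme works as long as it maps distinct coordinates to distinct numbers and is used consistently when looking up cells.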

4.2 Comparison between 2D and 3D

In order to implement the color transfer algorithm in 3D as seen in Section 3.2.2 and Section 3.2.3, it is important to know the possible differences that need to be considered.

- Each scan point is equivalent to a pixel with a certain RGB value. Each scan is equivalent to an image.
- In a 2D image the pixel density is constant throughout the image, which is not true in 3D.
- In the 2D implementation, there was always a corresponding pixel in the two images (source and target) with the same coordinates. In 3D, there may be cells that do not contain a single scan point from a particular scan. Some cells might contain more scan points from one scan than from the others.
- In Section 3.2.3, we made use of the neighbors to calculate the interpolated mean and standard deviation. The neighbors always existed, except for those cells that lay on the boundaries of the 2D image. In 3D, there will be empty cells irrespective of the position of the scan points.
- In 2D the mask overlapped quadrants of 4 different cells, while in 3D a mask in the form of a 3D cell covers octants of 8 different cells. This means that in 3D, tri-linear interpolation is used to calculate interpolated means and standard deviations, which is discussed in Section 4.3.3. Furthermore, because the cases where the neighbor cells are empty appear so often in 3D, and because the empty cells may exist anywhere in the construction space (not only on the borders as in Section 3.2.3), we assign these empty cells means and standard deviations which are a weighted average of all the non-empty cells around the empty cell. This is also discussed in more detail in Section 4.3.3.
- In 2D, while calculating an average value of the mean and standard deviation for individual cells of the target images, we always had the same number of pixels in all the target images. In 3D, one cell might contain two or more point groups, where the number of scan points in the point groups might vary significantly.

4.3 Implementation of Color Transfer Algorithm

Having discussed concepts such as the construction space, point groups and scan points, we are ready to understand how the color transfer algorithm is performed on individual scan points.

4.3.1 Populating required data for Color Transfer

The formula for color transfer is the same as discussed in Section 3.2.2, which is:

    L_new = (L_old - µ_s) * (σ_t / σ_s) + µ_t

Here we first convert a scan point's color from RGB to CIELab. This formula shows the color transfer applied on only the L channel of CIELab, whereby µ_s, σ_s, µ_t and σ_t are the mean and standard deviation values for the L channel of the source and of the target respectively. This means that before

we can apply the color transfer formula, we need to obtain the values µs and σs for the source and µt and σt for the target.

In 3D, the source is one point group occupying a particular 3D cell. A point group can contain a variable number of scan points, ranging from one to several hundreds or even thousands. The source mean µs and source standard deviation σs are calculated by visiting each scan point in the point group one by one, converting the RGB value of the scan point to CIELab space, and then computing the mean and standard deviation of each of the L, a, and b channels independently. As discussed in Section 4.1.3, a point group holds information such as the number of scan points it contains, the scan ID, etc.; once the mean and standard deviation of its scan points have been computed, the point group stores this information as well. This procedure is repeated for each point group of each scan in the point cloud, as shown in Figure 4.2.

Figure 4.2 Calculation of the source's means and standard deviations

Now we have the source means and standard deviations for all the point groups belonging to a particular scan. We also need the target means and standard deviations. The target for the color transfer algorithm is the cell as a whole, in which one or more point groups may fall. In 3D, the target

values are calculated as the weighted average of the individual means and standard deviations of the point groups falling in that cell. Suppose two point groups, PG1 and PG2, fall in the same cell. Let the means of PG1 and PG2 be µ1 and µ2 respectively, and let the total numbers of points in PG1 and PG2 be n1 and n2 respectively. Then the combined mean of the cell (which is also the target mean) µt is calculated as follows:

    µt = (n1 * µ1 + n2 * µ2) / (n1 + n2)

The weighted standard deviation is calculated in the same manner. The same procedure is carried out for all cells in which at least one point group lies.

It is important to recall here that each cell has a unique ID. The target values of mean and standard deviation as calculated above are stored in an ordered associative STL container, a map. The map holds one pair per cell, where a single pair consists of <cell-ID, (µt, σt)>. The map is ordered with respect to the cell-IDs. We also keep one map per scan, whose keys correspond to the cell-IDs of the point groups and whose values are the means and standard deviations of the point group in that cell; there is only one point group per cell per scan, so each entry of this map is a pair <cell-ID, (µs, σs)>. This map provides the source values for the color transfer algorithm of means and standard deviation matching, as will be discussed in Section 4.3.3. There is one such map for each scan in the point cloud.

4.3.2 Implementation Flow Graph

Now the means and standard deviations for the source and the target are ready. The source means and standard deviations exist as members of the point groups, and the target means and standard deviations live in the map container discussed in Section 4.3.1. The next step is to apply the color transfer algorithm to every scan point in the point cloud. All the point groups that exist in the point cloud are collected into a vector of point groups. Each point group in the vector is visited, and then each scan point of this point group is visited one by one. The RGB values of the scan points are first converted to the CIELab color space. The color transfer algorithm is applied, whereby the values of µs and σs are obtained from the point group itself, whereas the values of µt and σt are obtained from the map as described above. Finally, the new color-transferred Lab values are converted back to RGB and the scan point's color is updated. A graphical representation of the flow is shown in Figure 4.3.
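As a minimal sketch of these containers, assuming a 64-bit cell-ID and per-channel statistics (the type and member names are illustrative, not taken from the thesis implementation):

#include <cstddef>
#include <cstdint>
#include <map>

// Per-cell statistics: one mean and standard deviation per CIELab channel,
// plus the number of scan points that produced them.
struct CellStats {
    double mean[3];        // µ for L, a, b
    double stdDev[3];      // σ for L, a, b
    std::size_t numPoints; // n, used for weighted averaging
};

// Target map: one entry per non-empty cell, keyed by cell-ID, holding the
// weighted average over all point groups falling in that cell.
std::map<uint64_t, CellStats> targetStats;

// Source maps: one per scan (keyed by scan-ID); each maps a cell-ID to the
// statistics of the single point group that scan contributed to the cell.
std::map<int, std::map<uint64_t, CellStats>> sourceStatsPerScan;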

Figure 4.3 Flow Graph of the Color Transfer Algorithm in 3D

4.3.3 Interpolation of Neighbors' Means and Standard Deviations

In Section 4.3.2 we applied the color transfer algorithm with means and standard deviations that all belonged to the same cell. Similar to the MATLAB simulation in Section 3.2.2, we would therefore expect seams between the 3D cells. Thus we need to interpolate the means and standard deviations of both the source and the target. For the source, the neighbor cells also belong to the same scan as the current point group. The 3D implementation in C++ is shown as pseudo code in the listing below. The vector holding the point groups is called pgvector. Using an iterator over all the point groups in the vector, each point group (pg) is dereferenced one by one in a while loop. From the coordinates of the point group, the IDs of all the cells surrounding this point group's cell, as well as of the cell itself, are determined. In 3D there are 26 cells surrounding the central cell in which the point group falls; together they form a Rubik's cube arrangement, as shown in Figure 4.4.

Figure 4.4 Rubik's cube: the point group lies in the centermost cell

Once the IDs of all 27 cells are known, the maps populated in Section 4.3.1 for both the source and the target are used to look up the means and standard deviations of all these cells. One of the members of the point group is the scan-ID, a number that is unique among all the scans in the point cloud. This scan-ID identifies the correct source map. Using the cell IDs and this map, the means and standard deviations of the 27 cells are extracted and stored in arrays called meansofneighbors and stddevofneighbors respectively. The same procedure is applied to extract the 27 means and standard deviations of the target using the target map. Once the arrays of means and standard deviations for both the source and the target are known, they remain constant for all the scan points (totscanpoints) in the point group.

Color Transfer Algorithm
 1: Function colorbalance()
 2:   pgvector::iterator iter = pgvector.begin()
 3:   while(iter != pgvector.end())
 4:     pg = *iter
 5:     neighborids = pg.findneighbors(pg.coordinates)
 6:     meansofsourceneighbors  = pg.calcmeansofneighbors(neighborids)   // source map
 7:     stddevofsourceneighbors = pg.calcstddevofneighbors(neighborids)  // source map
 8:     meansoftargetneighbors  = pg.calcmeansofneighbors(neighborids)   // target map
 9:     stddevoftargetneighbors = pg.calcstddevofneighbors(neighborids)  // target map
10:     totscanpoints = pg.gettotalscanpoints()
11:     i = 0
12:     while(i != totscanpoints)

13:       scanpoint = pg.getscanpoint(i)
14:       colortransfer(scanpoint)
15:       i++
16:     end
17:     iter++
18:   end
19: end
20:
21: Function colortransfer(scanpoint)
22:   rgb = scanpoint.getcolor()
23:   lab = rgb2lab(rgb)
24:   µs = interpolate(meansofsourceneighbors, scanpoint)
25:   σs = interpolate(stddevofsourceneighbors, scanpoint)
26:   µt = interpolate(meansoftargetneighbors, scanpoint)
27:   σt = interpolate(stddevoftargetneighbors, scanpoint)
28:   lab = (lab - µs) * σt / σs + µt
29:   scanpoint.rgb = lab2rgb(lab)
30: end

Next, all the scan points of the current point group are visited one by one. For each scan point the goal is to transfer its RGB color to a new value using the color transfer algorithm, shown as colortransfer(scanpoint) in the pseudo code. In this function, the RGB color value is first extracted from the scan point and converted to the Lab color space. Then the tri-linear interpolation algorithm is applied to find µs, σs, µt and σt, which are all interpolated values. The interpolation works in the same manner as in the 2D case of Chapter 3; the difference is that we now have three axes instead of two, so eight values are needed instead of four, on which the tri-linear interpolation is applied. Just as a cell was divided into four quadrants in 2D, the cell is here divided into eight octants. Figure 4.5 shows one 3D cell divided into eight octants, one of which is colored gray.
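For concreteness, the neighbor lookup on line 5 of the pseudo code above might be realized as in the following sketch, which enumerates the 27 cells of the Rubik's-cube neighborhood; the CellCoord type and the cellId() helper are hypothetical, carried over from the sketch in Section 4.1.3, and clamping at the construction-space boundary is omitted.

#include <cstdint>
#include <vector>

uint64_t cellId(uint32_t x, uint32_t y, uint32_t z); // sketched in Section 4.1.3

struct CellCoord { uint32_t x, y, z; };

// Illustrative sketch: collect the IDs of the 3x3x3 block of cells centered
// on the given cell (27 IDs, including the central cell itself).
std::vector<uint64_t> findNeighbors(const CellCoord& c) {
    std::vector<uint64_t> ids;
    ids.reserve(27);
    for (int dz = -1; dz <= 1; ++dz)
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx)
                ids.push_back(cellId(c.x + dx, c.y + dy, c.z + dz));
    return ids;
}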

Figure 4.5 The eight octants of a 3D cell

Depending on the coordinates of the scan point, it is determined which of the eight octants the scan point falls into. Once the correct octant is known, a mask of the same size as a 3D cell is applied to this octant in such a way that exactly one octant of the mask overlaps the scan point's octant. The remaining seven octants of the mask each overlap one octant of seven different neighboring cells. This can be visualized with the Rubik's cube as well, where the mask size equals any one of the twenty-seven smaller cubes: for each of the eight octants of the central cell, the mask overlaps one octant of eight smaller cubes, and for each octant of the central cell the identities of these eight cells are always the same. Figure 4.6 shows a mask, drawn in red, overlapping one octant of the blue cell. The scan point lies in the overlapping octant, shown in gray. The other seven octants of the mask overlap one octant each of seven different cells.

Figure 4.6 Red mask overlapping one octant (gray) of the blue cell

When visiting a scan point, its coordinates are used to find the octant in which it lies, which in turn identifies the eight cells that the mask overlaps. The means and standard deviations corresponding to these eight cells are picked from the arrays populated in lines 6-9 of the pseudo code. The interpolation algorithm is then applied to the means and standard deviations of these eight cells, henceforth referred to as µ1, µ2, µ3, µ4, µ5, µ6, µ7, µ8 and σ1, σ2, σ3, σ4, σ5, σ6, σ7, σ8 respectively.
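The following paragraphs derive this interpolation step by step. As a compact summary, a sketch of the full computation might look like this; the function names and the ordering of v[0..7] as µ1..µ8 follow the pairing used in the derivation below and are illustrative, not the thesis code.

// Illustrative sketch of the tri-linear interpolation used to obtain an
// interpolated value (e.g. a source mean) from the eight neighbor values
// v[0..7] (µ1..µ8). dx, dy, dz are the scan point's normalized distances
// along the x, y and z axes, each in [0, 1].
double lerp(double a, double b, double t) { return a + t * (b - a); }

double trilinear(const double v[8], double dx, double dy, double dz) {
    // Interpolate along x: µ12, µ34, µ56, µ78.
    double v12 = lerp(v[0], v[1], dx);
    double v34 = lerp(v[2], v[3], dx);
    double v56 = lerp(v[4], v[5], dx);
    double v78 = lerp(v[6], v[7], dx);
    // Interpolate along y: µ1234, µ5678.
    double v1234 = lerp(v12, v34, dy);
    double v5678 = lerp(v56, v78, dy);
    // Final linear interpolation along z yields the interpolated value.
    return lerp(v1234, v5678, dz);
}

Each of lines 24-27 of the pseudo code would then amount to one call of trilinear() with the appropriate array.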

The normalized distance of the scan point from the center of its cell is found using the scan point's coordinates and is referred to as dx, dy and dz. The tri-linear interpolation is essentially a combination of bilinear interpolation in the x and y axes followed by a linear interpolation in the z axis [34]. As seen in the pseudo code, the interpolate() method receives the scan point itself and one of the arrays of means or standard deviations of the source or target. The coordinates of the scan point are used to find and extract the values µ1-8 and σ1-8 as described above. The interpolation works in the same way in each of the lines 24-27 of the pseudo code; below, only the calculation of µs is described.

Figure 4.7 Mask with the eight mean values at its corners

Figure 4.7 shows the mask with the scan point drawn as a black dot. The point is at distances dx, dy and dz from the front left-most corner along the x, y and z axes respectively. The means of the eight neighbors are shown at the corners as µ1-8. First we interpolate in the x direction (shown in blue) to find the values µ12, µ34, µ56 and µ78:

    µ12 = µ1 + dx * (µ2 - µ1)
    µ34 = µ3 + dx * (µ4 - µ3)
    µ56 = µ5 + dx * (µ6 - µ5)
    µ78 = µ7 + dx * (µ8 - µ7)

Now we interpolate the values µ12, µ34, µ56 and µ78 in the y direction to find the values µ1234 and µ5678; the distance dy is shown on the green line in the y direction:

    µ1234 = µ12 + dy * (µ34 - µ12)
    µ5678 = µ56 + dy * (µ78 - µ56)

One more linear interpolation in the z direction (shown as the red line) yields µs:

    µs = µ1234 + dz * (µ5678 - µ1234)

Now we have the interpolated value of the source mean. In the same manner we obtain the interpolated source standard deviation as well as the target mean and standard deviation. Once µs, σs, µt and σt are known, the color transfer formula is applied to each of the three Lab channels:

    c' = (c - µs) * σt / σs + µt    for each channel c in {L, a, b}

using that channel's means and standard deviations. Finally the Lab values are converted back to RGB and the scan point's color is modified.

4.4 Color Balancing Artifacts - Stains

While color balancing, a scenario can arise in which structures such as a roof, floor or wall are intersected by an edge of a construction cell. If, for instance, two scans contribute scan points covering a wall, part of the scan points of each scan may lie in two different cells on either side of the cell boundary intersecting the wall. Figure 4.8 shows the cross section of a wall whose surface consists of scan points from two scans, shown in red and green. The boundary between cell 1 and cell 2 intersects the wall, so the scan points are distributed over two different cells. As seen before, color balancing occurs cell by cell. In cell 1 almost all points are red, and therefore the target mean and standard deviation lean towards a red cast. In cell 2 there is a uniform mixture of red and green scan points, and the target leans towards a yellow cast.

Figure 4.8 Cross section of a wall manifesting the stain artifact

If, after color balancing, the same wall is viewed in the direction of the arrow shown, the result is what Figure 4.9 depicts. In general, with many more scans contributing points to the wall, this artifact causes stains to appear in the view, as shown in Figure 4.10.

Figure 4.9 Red and yellow cast on the wall after color balancing

Figure 4.10 A wall showing a brownish stain

It is important to note that this artifact is not caused by the color balancing algorithm itself but by the spatial arrangement in the construction space. The interpolation discussed in Section 4.3.3 does try to remove the stains, but it has little effect here, because most of the cells around cells 1 and 2 are empty, as one would expect in front of and behind a wall. Interpolation therefore only diminishes the effect to a small extent, by spreading the stain out.

A workaround is to introduce a new procedure for determining the values µ1-8 and σ1-8 of the neighbors before the interpolation algorithm of Section 4.3.3 is applied. For any cell in which a scan point lies, the values of µ and σ of this cell's neighbors are now calculated differently. Let N in Figure 4.11 be one of these neighbors. The calculation of µN for a neighbor cell is depicted in 2D for clarity, where cell N has 8 neighbors; in 3D it has 26.

Figure 4.11 The 8 neighbor cells of cell N

The value µN for the cell N is calculated as the weighted mean over all the cells in the neighborhood of N:

    µN = ( Σi ni * µi ) / ( Σi ni )

where µi and ni denote the mean and the number of scan points of the i-th cell, and the sums run over the 9 cells shown in Figure 4.11. Some of the cells 1-9 may be empty, in which case their terms in the sums are simply zero. In 3D the formula remains the same, except that i iterates over 27 cells.
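A sketch of this fallback, reusing the illustrative CellStats type and cell-ID map from Section 4.3.1 (names hypothetical; the 9 or 27 neighborhood IDs are assumed to be supplied by the caller):

#include <cstddef>
#include <cstdint>
#include <map>
#include <vector>

struct CellStats { double mean[3]; double stdDev[3]; std::size_t numPoints; };

// Illustrative sketch: estimate the mean of channel ch for an empty neighbor
// cell as the point-count-weighted average over the cells of its own
// neighborhood (9 cells in 2D, 27 in 3D). Cells absent from the map are
// empty and contribute a zero term.
double weightedNeighborMean(const std::map<uint64_t, CellStats>& stats,
                            const std::vector<uint64_t>& neighborhoodIds,
                            int ch) {
    double weightedSum = 0.0;
    std::size_t totalPoints = 0;
    for (uint64_t id : neighborhoodIds) {
        auto it = stats.find(id);
        if (it == stats.end()) continue; // empty cell: term is zero
        weightedSum += it->second.numPoints * it->second.mean[ch];
        totalPoints += it->second.numPoints;
    }
    return totalPoints ? weightedSum / totalPoints : 0.0;
}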

5. Evaluation

In this chapter the results of the implementations of Chapters 3 and 4 are discussed. Chapter 3 covered the implementation of two color balancing algorithms in 2D: histogram matching, and color transfer by matching means and standard deviations. After discussing the results of Chapter 3, the results of color transfer by matching means and standard deviations in 3D are discussed.

5.1 Histogram Matching in 2D

Figure 3.1, reproduced here as Figure 5.1, shows two images of the same scene taken in the morning and in the evening with the FARO Focus 3D LASER scanner.

Figure 5.1 Morning and evening images of the same scene

As described in Section 3.2.1, histogram matching aims to match the histogram of one image (the source) to the histogram of another image (the target). The histogram matching was performed in both

the RGB color space and the CIELab color space. The morning image was histogram matched to the evening image and vice versa.

Figure 5.2 Morning image matched to the evening image

Figure 5.2 shows the effect of matching the morning image to the evening image by histogram matching. On the whole, the source image took on the colors of the target image quite well, but some artifacts appeared near the brighter white pixels. This is known as the contouring effect [35] and happens when nearby pixel values in a histogram all saturate at a certain brightness value. Figure 5.3 shows the histograms of the three RGB color channels in three rows; each row shows the effect of matching one color channel of the source to the target, with the result of the histogram matching drawn in blue. The histograms, too, show a fairly good match of the source to the target: the blue lines follow the general course of the green lines. However, the contouring effect is also evident. It can be observed easily in the red and green channels in the gray range of 105 to 125, where the blue lines do not follow the green lines exactly; instead, spikes followed by dips are visible. This is because the pixels at the dips are accumulated at the gray levels of the spikes, exhibiting the contouring effect. The same happens at the bright end of the pixel range, and its effect is seen in Figure 5.2 near the windows.
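For reference, the classic CDF-based histogram matching on a single 8-bit channel can be sketched as follows; this is a generic illustration of the technique, not the MATLAB code used in Chapter 3.

#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative sketch: match the histogram of one 8-bit channel of the
// source image to that of the target via their cumulative distributions.
std::vector<uint8_t> matchHistogram(const std::vector<uint8_t>& src,
                                    const std::vector<uint8_t>& tgt) {
    auto cdf = [](const std::vector<uint8_t>& img) {
        std::array<double, 256> h{}, c{};
        for (uint8_t v : img) h[v] += 1.0;
        double sum = 0.0;
        for (int i = 0; i < 256; ++i) { sum += h[i]; c[i] = sum / img.size(); }
        return c;
    };
    std::array<double, 256> cs = cdf(src), ct = cdf(tgt);

    // For each source level, find the target level with the closest CDF.
    // Many source levels mapping to one target level is exactly what
    // produces the contouring effect described above.
    std::array<uint8_t, 256> lut{};
    for (int i = 0, j = 0; i < 256; ++i) {
        while (j < 255 && ct[j] < cs[i]) ++j;
        lut[i] = static_cast<uint8_t>(j);
    }
    std::vector<uint8_t> out(src.size());
    for (std::size_t k = 0; k < src.size(); ++k) out[k] = lut[src[k]];
    return out;
}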

Figure 5.3 RGB channels of the morning image matched to the target image

As discussed in Section 3.2.1, histogram matching was also performed in the CIELab color space, in order to see whether the matching improves when brightness and chromaticity are decoupled. One problem in CIELab, however, is that there are infinitely many possible pixel values within a closed range, whereas in RGB each of the R, G and B channels has a fixed number of levels, namely 256. Moreover, in CIELab the chromaticity channels a and b stay close to zero, and pixel values near the ends of the dynamic range rarely occur. Figure 5.4 shows the luminance (L) and chromaticity (a and b) channels of the source, target and matched images. A much larger number of bins than the 256 levels of RGB was used, to accommodate the smaller differences between pixel values; but as can be observed, most of the bins across the dynamic range are empty. The overall matching looks satisfactory, but the artifacts increase further.

Figure 5.4 Histograms of the L, a and b channels of the source, target and matched image

Figure 5.5 Morning image matched to the evening image in CIELab space

As Figure 5.5 shows, the contouring effect has increased further. This is because the pixel values, especially in the a and b chromaticity channels, lie so close to each other that they are more likely to saturate to a particular value.
