Automatic Facial Occlusion Detection and Removal


Automatic Facial Occlusion Detection and Removal
Naeem Ashfaq Chaudhry
October 18, 2012
Master's Thesis in Computing Science, 30 credits
Supervisor at CS-UmU: Niclas Börlin
Examiner: Frank Drewes
Umeå University, Department of Computing Science, SE UMEÅ, SWEDEN


Abstract

In our daily life, we encounter many occluded faces. The occlusion may be caused by different objects such as sunglasses, mufflers, masks, scarves, etc. Sometimes, occlusion is used by criminals to hide their identity from the surroundings. In this thesis, a technique is presented to detect facial occlusion automatically. After the occluded areas are detected, an image reconstruction method called apca (asymmetrical Principal Component Analysis) is used to reconstruct the faces: the entire face is reconstructed from the non-occluded area of the face. A database of images of different persons is organized and used in the reconstruction of the occluded images. Experiments were performed to examine the effect of the granularity of the occlusion on the apca reconstruction process. The input mask image is divided into different parts, the occlusion for each part is marked, and apca is applied to reconstruct the faces. Since this reconstruction process takes a lot of processing time, pre-defined eigenspaces are introduced; they require far less processing time with very little quality loss in the reconstructed faces.


Contents

1 Introduction
  1.1 Background
  1.2 Goals of the thesis
  1.3 Related work
    1.3.1 Occluded face reconstruction
    1.3.2 Facial occlusion detection
2 Theory
  2.1 Principal Component Analysis (PCA)
    2.1.1 PCA method/model
    2.1.2 PCA for images
    2.1.3 Eigenfaces
  2.2 Asymmetrical PCA (apca)
    2.2.1 Description of apca
    2.2.2 apca calculation
    2.2.3 apca for reconstruction of occluded facial region
  2.3 Skin color detection
  2.4 Image registration
    2.4.1 Translation
    2.4.2 Rotation
    2.4.3 Scaling
    2.4.4 Affine transformation
  2.5 Peak signal-to-noise ratio (PSNR)
3 Method
  3.1 The AR face database
  3.2 Automatic occlusion detection
    3.2.1 Replace white color with black color
    3.2.2 Image cropping
    3.2.3 Image division
    3.2.4 Occlusion detection for each block

  3.3 Occluded face reconstruction
    3.3.1 PSNR calculation
4 Experiment
  4.1 Granularity effect
    4.1.1 Metric
    4.1.2 Sunglasses scenario
    4.1.3 Scarf scenario
    4.1.4 Cap and sunglasses occlusion
  4.2 Pre-defined eigenspaces
    4.2.1 Metric
    4.2.2 Experiment description
5 Results
  5.1 Occlusion detection results
  5.2 Reconstruction quality results
  5.3 Reconstruction results using pre-defined eigenspaces
6 Conclusions
  6.1 Discussion about granularity effect and reconstruction quality
  6.2 Discussion about pre-defined eigenspaces
  6.3 Limitations
  6.4 Future work
Acknowledgements
References

List of Figures

1.1 Different types of occlusion. (a) Sunglasses occlusion. (b) Mask occlusion.
2.1 The first vector Z_1 is in the direction of maximum variance and the second vector Z_2 is in the direction of residual maximum variance.
2.2 Eigenfaces. (a) First eigenface. (b) Second eigenface. (c) Third eigenface.
2.3 The blue part represents the eigenspace of non-occluded regions whereas the green part represents the pseudo eigenspace of the complete image.
2.4 (a) and (b) represent the original images while (c) and (d) represent the registered images.
3.1 (a) An occluded facial image. (b) Image division into 6 parts. (c) Image division into 54 smaller parts. (d) Image division into 486 parts.
3.2 (a) An occluded facial image. (b) Image division into blocks. (c) Each black block represents an occluded block.
4.1 (a) Non-occluded facial image. (b) An occluded image. (c) Eigenspaces.
4.2 (a) An occluded image. (b) Level 1 image division. (c) Detected occlusions.
4.3 An example of the reconstructed face by level 1 image division. (a) An occluded image. (b) The occluded image masked by the mask from Figure 4.2 (c). (c) Reconstructed image. (d) Non-occluded image.
4.4 (a) An occluded image. (b) Level 2 image division. (c) Detected occlusions.
4.5 An example of the reconstructed face by level 2 image division. (a) An occluded image. (b) The occluded image masked by the mask from Figure 4.4 (c). (c) Reconstructed image. (d) Non-occluded image.
4.6 (a) An occluded image. (b) Level 3a image division. (c) Detected occlusions.
4.7 An example of the reconstructed face by level 3a image division. (a) An occluded image. (b) The occluded image masked by the mask from Figure 4.6 (c). (c) Reconstructed image. (d) Non-occluded image.
4.8 (a) An occluded image. (b) Occlusion detection by level 2 image division. (c) Level 3b image division. (d) Occlusion detection by level 3b image division.

4.9 An example of the reconstructed face by level 3b image division. (a) An occluded image. (b) The occluded image masked by the mask from Figure 4.8 (d). (c) Reconstructed image. (d) Non-occluded image.
4.10 (a) An occluded image. (b) Level 1 image division. (c) Detected occlusions.
4.11 An example of the reconstructed face by level 1 image division. (a) An occluded image. (b) The occluded image masked by the mask from Figure 4.10 (c). (c) Reconstructed image. (d) Non-occluded image.
4.12 (a) An occluded image. (b) Level 2 image division. (c) Detected occlusions.
4.13 An example of the reconstructed face by level 2 image division. (a) An occluded image. (b) The occluded image masked by the mask from Figure 4.12 (c). (c) Reconstructed image. (d) Non-occluded image.
4.14 (a) An occluded image. (b) Level 3a image division. (c) Detected occlusions.
4.15 An example of the reconstructed face by level 3a image division. (a) An occluded image. (b) The occluded image masked by the mask from Figure 4.14 (c). (c) Reconstructed image. (d) Non-occluded image.
4.16 (a) An occluded image. (b) Occlusion detection by level 2 image division. (c) Level 3b image division. (d) Occlusion detection by level 3b image division.
4.17 An example of the reconstructed face by level 3b image division. (a) An occluded image. (b) The occluded image masked by the mask from Figure 4.16 (d). (c) Reconstructed image. (d) Non-occluded image.
4.18 (a) An occluded image. (b) Level 1 image division. (c) Detected occlusions.
4.19 An example of the reconstructed face by level 1 image division. (a) An occluded image. (b) The occluded image masked by the mask from Figure 4.18 (c). (c) Reconstructed image. (d) Non-occluded image.
4.20 (a) An occluded image. (b) Level 2 image division. (c) Detected occlusions.
4.21 An example of the reconstructed face by level 2 image division. (a) An occluded image. (b) The occluded image masked by the mask from Figure 4.20 (c). (c) Reconstructed image. (d) Non-occluded image.
4.22 (a) An occluded image. (b) Level 3a image division. (c) Detected occlusions.
4.23 An example of the reconstructed face by level 3a image division. (a) An occluded image. (b) The occluded image masked by the mask from Figure 4.22 (c). (c) Reconstructed image. (d) Non-occluded image.
4.24 (a) An occluded image. (b) Occlusion detection by level 2 image division. (c) Level 3b image division. (d) Occlusion detection by level 3b image division.
4.25 An example of the reconstructed face by level 3b image division. (a) An occluded image. (b) The occluded image masked by the mask from Figure 4.24 (d). (c) Reconstructed image. (d) Non-occluded image.
4.26 Occluded facial images used for construction of 6 eigenspaces.
4.27 (a) An occluded image. (b) Detected occlusion by level 3b image division. (c) Pre-defined eigenspace most similar to the detected occlusion in (b). (d) Reconstructed image using the eigenspace in (c).

5.1 Occlusion detection by different image division methods. (a) Occluded image. (b) Occlusion detection by level 1 image division. (c) Occlusion detection by level 2 image division. (d) Occlusion detection by level 3a image division. (e) Occlusion detection by level 3b image division.
5.2 Reconstructed image by different image division methods. (a) An occluded image. (b) Reconstructed image by level 1 image division. (c) Reconstructed image by level 2 image division. (d) Reconstructed image by level 3a image division. (e) Reconstructed image by level 3b image division. (f) Non-occluded image.
5.3 Reconstructed image by different image division methods. (a) An occluded image. (b) Reconstructed image by level 1 image division. (c) Reconstructed image by level 2 image division. (d) Reconstructed image by level 3a image division. (e) Reconstructed image by level 3b image division. (f) Non-occluded image.


List of Tables

5.1 Reconstruction quality of the complete image (PSNR) [dB] for granularity effect.
5.2 Reconstruction quality of the occluded reconstructed parts (PSNR) [dB] for granularity effect.
5.3 Number of pixels used in reconstruction.
5.4 Processing time (sec) for granularity effect.


Chapter 1

Introduction

1.1 Background

Face recognition has been one of the most challenging and active research topics in computer vision for the last several years (Zhao et al., 2003). The goal of face recognition is to recognize a person even if the face is occluded by some object. A face recognition system should recognize a face as independently and robustly as possible with respect to image variations such as illumination, pose, occlusion, expression, etc. (Kim et al., 2007). A face is occluded if some area of the face is hidden behind an object, such as sunglasses, a hand or a mask, as seen in Figure 1.1. Face occlusions degrade the performance of face recognition systems, including humans. Recent research projects, e.g. Al-Naser and Söderström (2011), have used pre-determined occluded areas in standardized positions. After occlusion detection, apca (asymmetrical Principal Component Analysis) (Söderström and Li, 2011) was used for entire face reconstruction. apca estimates an entire image based on a subset of the image, e.g. it reconstructs a partially occluded facial image using the non-occluded facial parts of the image. The experiments used a small database (n = 116) of facial images with no classification (Martinez and Benavente, 1998). A property of the reconstructed images in (Al-Naser and Söderström, 2011) is that they have sharp edges between the original and reconstructed regions. This application can be used by law enforcement agencies, access control systems, and surveillance at public places such as ATM machines, airports, etc.

1.2 Goals of the thesis

The overall goal of this thesis is to improve the performance of apca for reconstruction of occluded regions of facial images. The primary goal is to develop an algorithm for automatic detection and reconstruction of facial occlusions. The algorithm should be automatic and detect smaller occlusions compared to previous work. Furthermore, arbitrary occlusions should be handled, i.e. occlusions of any part of the face. A secondary goal is to develop an algorithm for smoothing the reconstructed images to reduce the edges between the original and reconstructed regions.

Figure 1.1: Different types of occlusion. (a) Sunglasses occlusion. (b) Mask occlusion.

A tertiary goal is to extend the AR database with more images and to classify the images individually according to gender, ethnicity, etc.

1.3 Related work

1.3.1 Occluded face reconstruction

Al-Naser and Söderström (2011) reconstructed the occluded regions using asymmetrical principal component analysis (apca). The occluded facial regions were estimated based on non-occluded facial regions. They did not detect the occlusion automatically; instead, the occlusion on the facial images was marked manually. Jabbar and Hadi (2010) detected the face area using a combination of skin color segmentation and eye template matching. They used the fuzzy c-means clustering algorithm for detection of occluded facial regions. When the occluded region was one of the symmetric facial features, such as an eye, that feature was used to recover the occluded area. When the occluded area was not one of the symmetric facial features, they used the most similar mean face from the database.

1.3.2 Facial occlusion detection

Min et al. (2011) performed facial occlusion detection caused by sunglasses and scarves using the Gabor wavelet. The face image was divided into an upper and a lower half. The upper part was used to detect sunglasses occlusions while the lower part was used for scarf occlusion detection. Kim et al. (2010) proposed a method to determine if a face is occluded by measuring the skin color area ratio (SCAR). Oh et al. (2006) found the occlusion by first dividing the facial images into a finite number of disjoint local patches and then examining each patch separately.

Chapter 2

Theory

2.1 Principal Component Analysis (PCA)

PCA (Jolliffe, 2002) is a mathematical procedure that is used to transform potentially correlated variables into uncorrelated variables. Suppose we have a data matrix of observations of N correlated variables X_1, X_2, ..., X_N; PCA will transform the X_i variables into N new uncorrelated variables Y_i. The variables Y_i are called principal components. The first principal component is in the direction of the largest variance in the data. The other principal components are orthogonal to each other and represent the largest residual variance, see Figure 2.1. PCA can be used as a dimension reduction method to represent multidimensional, highly correlated data with fewer variables. PCA is used for, e.g., information extraction, image compression, image reconstruction and image recognition.

2.1.1 PCA method/model

Image-to-vector conversion

A 2-dimensional image p is transformed to a 1-dimensional vector by placing the rows side by side,

    x = [p_1, p_2, ..., p_r]^T,    (2.1)

where p_i is the ith row of p and r is the total number of rows.

Subtract the mean

The mean is subtracted from each vector to produce a vector with zero mean. Let I_0 represent the mean; it is calculated as

    I_0 = (1/N) Σ_{j=1}^{N} I_j,    (2.2)

where N is the number of images I_j.

Figure 2.1: The first vector Z_1 is in the direction of maximum variance and the second vector Z_2 is in the direction of residual maximum variance.

Calculate the covariance matrix

The covariance of the mean-centred matrix is calculated as

    Cov = W^T W,    (2.3)

where W is an r-by-c matrix composed of the column vectors (I_i - I_0). Cov is a square matrix of size c-by-c.

Calculate the eigenvectors and eigenvalues of the covariance matrix

The Singular Value Decomposition (SVD) (Strang, 2003) of an r-by-c matrix A decomposes it as

    A = U_{r×r} Σ_{r×c} V^T_{c×c} = [u_1, u_2, ..., u_r] diag(σ_1, σ_2, ..., σ_r) [v_1, v_2, ..., v_c]^T,    (2.4)

where U is an r-by-r unitary matrix, Σ is an r-by-c rectangular diagonal matrix and V is a c-by-c unitary matrix. The columns of U and V are the left and right singular vectors, respectively, and the singular values σ_i ≥ 0 are sorted in descending order. If A is symmetric positive definite, U = V, the columns contain the eigenvectors, and the σ_i are the eigenvalues.

Choosing components and forming a feature vector

The eigenvector associated with the highest eigenvalue represents the greatest variance in the data, whereas the eigenvector associated with the lowest eigenvalue represents the least variance. The eigenvalues decrease in an exponential pattern (Kim, 1996). It is estimated that 90% of the total variance is contained in the first 5% to 10% of the dimensions. The eigenvectors associated with low eigenvalues are less significant and can be ignored. A

feature vector b is constructed by selecting the M eigenvectors associated with the highest eigenvalues, from a total of N eigenvectors, i.e.

    b = (e_1, e_2, ..., e_M).    (2.5)

Deriving the new dataset

Take the transpose of the feature vector b and multiply it with W to get the final dataset Φ,

    Φ = b^T W.    (2.6)

2.1.2 PCA for images

The PCA is computed as the SVD of the covariance matrix Cov of the facial images. An eigenspace φ is created using

    φ_j = Σ_i b_ij (I_i - I_0),    (2.7)

where b_ij is an eigenvector of the covariance matrix {(I_i - I_0)^T (I_j - I_0)}. Eqs. 2.6 and 2.7 are the same. The projection coefficients {α_j} = {α_1, α_2, α_3, ..., α_N} for each facial image are calculated as

    α_j = φ_j (I - I_0)^T.    (2.8)

Each facial image is represented by the sum of the mean image and the weighted principal components. The representation becomes error free if all N principal components are used:

    I = I_0 + Σ_{j=1}^{N} α_j φ_j.    (2.9)

The final facial image is constructed by

    I = I_0 + Σ_{j=1}^{M} α_j φ_j,    (2.10)

where M is the number of selected principal components that are used for reconstruction of the facial image. An image with negligible quality loss can be represented by a few principal components because the first 5-10% of the eigenvectors can represent more than 90% of the variance in the data (Kim, 1996). PCA achieves compression since fewer (M) than the original dimensions (N) are used to represent the images. A PCA model also allows images to be represented with only a few values (the α's), and this is how PCA works for image representation.

2.1.3 Eigenfaces

The eigenvectors, or principal components, of the distribution of faces are the eigenfaces. Eigenfaces look like ghostly faces. The first 3 eigenfaces obtained from the AR database described in Section 3.1 can be seen in Figure 2.2. Each individual face can be represented by a linear combination of eigenfaces. Each face is approximated using the best eigenfaces, those that capture the most variance within the set of face images. The best M eigenfaces span an M-dimensional subspace, the "face space", of all possible images (Turk and Pentland, 1991).
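The PCA pipeline above (Eqs. 2.1-2.10) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the implementation used in the thesis: the function names and the row-per-image layout are assumptions, and the SVD is applied directly to the mean-centred data matrix rather than forming the covariance matrix explicitly.

```python
import numpy as np

def eigenfaces(images, M):
    """images: (N, h, w) array; returns the mean face and first M eigenfaces."""
    N = images.shape[0]
    X = images.reshape(N, -1).astype(float)      # image-to-vector conversion (Eq. 2.1)
    mean = X.mean(axis=0)                        # mean face I_0 (Eq. 2.2)
    W = X - mean                                 # zero-mean data
    # SVD of W: rows of Vt are the principal directions (eigenfaces),
    # already sorted by descending singular value (Eq. 2.4).
    _, _, Vt = np.linalg.svd(W, full_matrices=False)
    return mean, Vt[:M]                          # feature vectors (Eq. 2.5)

def project_and_reconstruct(image, mean, faces):
    """Represent an image by M coefficients and rebuild it (Eqs. 2.8-2.10)."""
    x = image.reshape(-1).astype(float) - mean
    alphas = faces @ x                           # projection coefficients alpha_j
    return mean + faces.T @ alphas               # I_0 + sum(alpha_j * phi_j)
```

With all N-1 non-trivial components retained, reconstruction of a training image is exact up to floating point, which mirrors the error-free case of Eq. 2.9.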

Figure 2.2: Eigenfaces. (a) First eigenface. (b) Second eigenface. (c) Third eigenface.

Figure 2.3: The blue part represents the eigenspace of non-occluded regions whereas the green part represents the pseudo eigenspace of the complete image.

2.2 Asymmetrical PCA (apca)

apca is a method for estimating an entire space based on a subspace of this space. The method finds the correspondence between pixels in non-occluded regions and pixels behind occluded regions.

2.2.1 Description of apca

apca is an extension of PCA (Principal Component Analysis). Using apca, entire faces are reconstructed by estimating the occluded regions based on the non-occluded regions of the images. The intensity (appearance) of non-occluded pixels is used to estimate the intensity of occluded pixels. In apca, two eigenspaces are constructed: one from the non-occluded areas of the occluded images, where the eigenvectors are orthogonal to each other, and a pseudo eigenspace for the complete images, constructed using the eigenvectors of the non-occluded image regions. In the pseudo eigenspace, the eigenvectors are not orthogonal, as seen in Figure 2.3.

2.2.2 apca calculation

In apca, a pseudo eigenspace is created. It models the correspondence between the pixels in the images, but only the non-occluded parts are orthogonal. Let I^no represent the non-occluded parts of image I. I^no is modelled in an eigenspace Φ^no = {φ^no_1, φ^no_2, φ^no_3, ..., φ^no_N} using

    φ^no_j = Σ_i b^no_ij (I^no_i - I^no_0),    (2.11)

where the b^no_ij are the eigenvector values of the covariance matrix {(I^no_i - I^no_0)^T (I^no_j - I^no_0)} and I^no_0 is the mean of the non-occluded regions,

    I^no_0 = (1/N) Σ_{j=1}^{N} I^no_j.    (2.12)

Eigenvectors of the non-occluded parts are used to make them orthogonal, while the occluded parts are modelled according to their correspondence with the non-occluded parts. The pseudo eigenspace Φ^p is calculated as

    Φ^p_j = Σ_i b^no_ij (I_i - I_0),    (2.13)

where I_i is the original image and I_0 is the mean of the original images. Projection is used to extract the coefficients {α^no_j} from the eigenspace Φ^no,

    α^no_j = Φ^no_j (I^no - I^no_0)^T.    (2.14)

The complete facial image Î is reconstructed as

    Î = I_0 + Σ_{j=1}^{M} α^no_j Φ^p_j,    (2.15)

where M is the selected number of pseudo components that are used for the reconstruction. Using the projection coefficients calculated above, a complete image can be reconstructed from only the non-occluded parts of the image.

2.2.3 apca for reconstruction of occluded facial region

With the eigenspace modelling the non-occluded facial regions and the pseudo eigenspace modelling the entire face, apca can be used to estimate what a face looks like behind the occlusions. When the spaces are created, the entire face needs to be visible so that the correspondence between the spaces can be modelled with apca. The eigenspace is created according to Eq. 2.11 and the pseudo eigenspace is constructed according to Eq. 2.13. The correspondence between the facial regions is captured in these two spaces. The non-occluded regions can then be used to extract projection coefficients α (Eq. 2.14), meaning that only non-occluded pixels affect the representation. When the pseudo eigenspace is used with these coefficients to recreate an image of the entire face (Eq. 2.15), the content of the previously occluded pixels is calculated based on their relationship with the non-occluded pixels.

2.3 Skin color detection

This section follows Cheddad et al. (2009), who use two approximations, l and l̂, for skin color detection. l is calculated as

    l(x) = [r(x), g(x), b(x)] * α,    (2.16)

where * represents matrix multiplication and the transformation vector

    α = [0.298, 0.587, 0.140]^T.    (2.17)

Figure 2.4: (a) and (b) represent the original images while (c) and (d) represent the registered images.

The matrix l̂ is calculated as

    l̂(x) = max(G(x), R(x)), x ∈ {1, 2, ..., N}.    (2.18)

An error signal for each pixel is calculated as

    e(x) = l(x) - l̂(x),    (2.19)

and each pixel is classified as skin or not skin by

    f_skin(x) = { 1, if e(x) lies between the lower and upper skin thresholds,
                { 0, otherwise.    (2.20)

2.4 Image registration

Image registration is the process of transforming a set of images into one coordinate system without changing the shape of the images. One image is selected as the base image and spatial transformations are applied to the other images so that they align with the base image. Image registration is performed as a preliminary step so that different image processing operations can be applied to a dataset sharing the same coordinate system. If facial images are aligned, then after alignment all the images will have their facial features, like mouth, eyes, nose, etc., in the same position.

2.4.1 Translation

Translation is a geometric transformation in which an image element located at position (x_1, y_1) is shifted to a new position (x_2, y_2) in the transformed image. The translation operation is defined as

    [x_2]   [x_1]   [t_x]
    [y_2] = [y_1] + [t_y],    (2.21)

where t_x and t_y are the horizontal and vertical pixel displacements, respectively.
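The translation above, together with the rotation, scaling and affine maps defined in the following subsections, can be sketched on point coordinates as follows. This is a minimal illustration with assumed helper names; the rotation matrix follows the convention of the rotation equation in the next subsection.

```python
import numpy as np

def translate(p, tx, ty):
    """Translation (Eq. 2.21): (x1, y1) -> (x1 + tx, y1 + ty)."""
    return np.asarray(p, float) + np.array([tx, ty], float)

def rotate(p, theta):
    """Rotation by angle theta (cf. Eq. 2.22)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]]) @ np.asarray(p, float)

def affine(p, A, t):
    """General affine map (cf. Eq. 2.24): p -> A p + t."""
    return np.asarray(A, float) @ np.asarray(p, float) + np.asarray(t, float)
```

In practice the affine form subsumes the others: translation is A = identity, rotation and scaling are pure A with t = 0.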

2.4.2 Rotation

Rotation is a geometric transformation in which the image elements are rotated by a specified rotation angle θ. The rotation operation is defined as

    [x_2]   [ cos θ   sin θ] [x_1]
    [y_2] = [-sin θ   cos θ] [y_1].    (2.22)

2.4.3 Scaling

Scaling is a geometric transformation that can be used to reduce or increase the size of the image coordinates. The scaling operation is defined as

    [x_2]   [c_x   0 ] [x_1]
    [y_2] = [ 0   c_y] [y_1].    (2.23)

2.4.4 Affine transformation

An affine transformation is a linear 2-D geometric transformation that combines rotation, scaling and translation. It maps variables located at position (x_1, y_1) in an input image to variables located at (x_2, y_2) in an output image by applying a linear combination of translation, rotation, scaling and/or shearing (non-uniform scaling in some direction) operations. The affine transformation takes the form

    [x_2]   [a_11   a_12] [x_1]   [t_x]
    [y_2] = [a_21   a_22] [y_1] + [t_y].    (2.24)

The facial images used in this thesis are aligned using affine transformations.

2.5 Peak signal-to-noise ratio (PSNR)

PSNR is the ratio between the maximum possible value of a signal and the power of the distorting noise that affects the quality of its representation. It is often used as a benchmark of the similarity between a reconstructed image and the original image (Santoso et al., 2011). PSNR compares the original image with the coded/decoded image to quantify the quality of the decompressed output. A higher PSNR value means that the reconstructed data is of better quality. The mathematical representation of PSNR is

    PSNR = 10 log_10 (max^2 / MSE),    (2.25)

where max is the maximum possible value of the image pixels and MSE is the mean squared difference between the reconstructed and the original data,

    MSE = (1/(XY)) Σ_{m=1}^{X} Σ_{n=1}^{Y} [I_1(m, n) - I_2(m, n)]^2,    (2.26)

where I_1 is the original image, I_2 is the reconstructed image, and X and Y are the number of rows and columns, respectively.
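Eqs. 2.25-2.26 translate directly into a short sketch; the function name is an assumption, and 8-bit images (max = 255) are assumed as the default.

```python
import numpy as np

def psnr(original, reconstructed, max_val=255.0):
    """PSNR in dB between two equal-sized images (Eqs. 2.25-2.26)."""
    i1 = np.asarray(original, float)
    i2 = np.asarray(reconstructed, float)
    mse = np.mean((i1 - i2) ** 2)                # Eq. 2.26
    if mse == 0:
        return float("inf")                      # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)   # Eq. 2.25
```

An all-black versus all-white 8-bit image pair gives MSE = 255^2 and therefore 0 dB, the worst case for this max value.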


Chapter 3

Method

3.1 The AR face database

The AR face database (Martinez and Benavente, 1998) was used to perform the experiments. The database contains more than 4000 facial images of 126 persons (70 men and 56 women). It contains images with scarf and sunglasses occlusions as well as non-occluded images with different facial expressions. The original size of the images is 768x576 pixels. The images were taken under controlled conditions, with no restrictions on wear and style.

3.2 Automatic occlusion detection

3.2.1 Replace white color with black color

The skin color detection method of Section 2.3 classifies white pixels as skin pixels. However, white is not a skin color; here it indicates occlusion. Therefore, white pixels are always replaced by black pixels before skin color detection. A pixel is classified as white if its R, G and B values are all greater than 190, where 255 is the maximum value.

3.2.2 Image cropping

The original size of the images is 768x576 pixels. These images contain a lot of background area that affects the quality of the reconstructed images. Therefore the images are cropped to a size of 171x144 pixels.

3.2.3 Image division

The image (171x144) is divided into 6 parts: 2 head parts, 2 eyes parts and 2 mouth parts, see Figure 3.1 (b). In the second step, each part is further divided into 9 sub-parts, see Figure 3.1 (c). By doing this, smaller facial occlusions can also be detected. In the third step, each part of the second step is further divided into 9 sub-parts, see Figure 3.1 (d).

Figure 3.1: (a) An occluded facial image. (b) Image division into 6 parts. (c) Image division into 54 smaller parts. (d) Image division into 486 parts.

Figure 3.2: (a) An occluded facial image. (b) Image division into blocks. (c) Each black block represents an occluded block.

3.2.4 Occlusion detection for each block

To detect the occlusion for each block, the skin color information is used. If a pixel is not a skin pixel, it is marked as an occluded pixel. If 25% of the pixels in a block are non-skin pixels, the block is marked as an occluded block.

3.3 Occluded face reconstruction

After facial occlusion detection, a column vector is created that contains only the non-occluded parts of each image. The column vectors are stored in a matrix that contains the corresponding non-occluded parts of the facial images in the database. Each image of the database is also converted into a vector and stored in a matrix. If there are 100 images in the database, this matrix will contain 100 vectors. The mean of each vector of the non-occluded matrix is calculated and subtracted from each value of the vector. Similarly, the mean of each vector of the original facial matrix is calculated and subtracted from each value of the vector. This produces a dataset whose mean is zero. The covariance Cov of the non-occluded facial matrix is calculated as described in Section 2.1.1. The eigenvectors and eigenvalues of the covariance matrix are calculated using the SVD. An eigenspace is constructed from the non-occluded parts of the images. Similarly, a pseudo eigenspace is constructed from all parts of the images in the database. Projection is used to extract the coefficients from the eigenspace. These extracted coefficients are used for reconstruction of the facial images. A specific number M = 50 of eigenvectors is used for the reconstruction of the images. The choice of M = 50 was found by initial experiments.
The final facial image data is constructed using Eq. 2.15. In the last step, each vector of the matrix is reshaped to get the R, G and B values for each image and to reconstruct the facial images.

3.3.1 PSNR calculation

The PSNR between the input image and the reconstructed image is calculated to check the quality of the reconstructed image. If the PSNR value is more than 30 dB, the reconstructed image is normally considered to be of good quality (Wikipedia, 2012).
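The reconstruction pipeline of Section 3.3 can be sketched as follows. This is a simplified illustration of the apca equations (Eqs. 2.11-2.15), not the thesis code: images are assumed to be stored as rows of a matrix, the function name is invented, and the toy M in the usage below is far smaller than the M = 50 used in the thesis.

```python
import numpy as np

def apca_reconstruct(database, probe_visible, visible, M):
    """Estimate a full image from its visible pixels only (apca sketch).

    database: (N, pixels) matrix, one flattened non-occluded image per row.
    probe_visible: the visible pixel values of the occluded probe image.
    visible: boolean mask over the flattened image (True = non-occluded).
    """
    X = np.asarray(database, float)
    mean_full = X.mean(axis=0)
    W_full = X - mean_full                       # mean-centred full images
    W_vis = W_full[:, visible]                   # non-occluded parts only
    # Orthonormal eigenspace of the non-occluded parts (Eq. 2.11 via SVD).
    U, s, Vt = np.linalg.svd(W_vis, full_matrices=False)
    phi_no = Vt[:M]
    # Pseudo eigenspace: the same per-image mixing coefficients applied to
    # the complete images (Eq. 2.13); its vectors are not orthogonal.
    phi_p = (U[:, :M] / s[:M]).T @ W_full
    # Projection coefficients from the visible pixels alone (Eq. 2.14).
    alphas = phi_no @ (np.asarray(probe_visible, float) - mean_full[visible])
    # Full-face estimate (Eq. 2.15).
    return mean_full + alphas @ phi_p
```

For a probe drawn from the database itself and M equal to the rank of the mean-centred visible data, the visible pixels determine the full image exactly, which is a useful sanity check for an implementation.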


Chapter 4

Experiment

4.1 Granularity effect

This experiment examines the effect of the granularity of the occlusion on the apca reconstruction process. At the first step, the image is divided into 6 parts and the occlusion for each facial part is determined. The non-occluded parts of the image are used to construct the eigenspace, whereas the entire image is used to construct the pseudo eigenspace. At the second step, the image is first divided into 6 parts and the occlusion is determined for each block. If a part is occluded, it is further divided into 9 sub-parts and the occlusion process is repeated. At the third step, the image is first divided into 6 parts and then each part into 9 sub-parts based on occlusion detection. The occlusion for each of these sub-parts is determined. If any block is occluded, it is further divided into 9 sub-parts and the occlusion is determined for these parts. These small parts are used to construct the eigenspace and the entire image is used to construct the pseudo eigenspace.

4.1.1 Metric

PSNR is used as the metric for the granularity experiments. PSNR is calculated for the entire image and for only the reconstructed part of the image. The number of non-occluded pixels used for encoding in each experiment is also recorded.

4.1.2 Sunglasses scenario

In this scenario, the mask input image is occluded by sunglasses. The image is divided into sub-parts, the occlusion is detected for each of these parts individually, and the full faces are reconstructed using the apca image reconstruction method. The average PSNR of all the reconstructed faces is calculated to determine the quality of the reconstructed facial images, and the average PSNR of all the reconstructed occluded parts is also calculated. Furthermore, the number of pixels used in the reconstruction process and the time taken by each division method are recorded.
In Figure 4.1, image (a) is the original image, (b) is the input mask image occluded with sunglasses, and (c) represents the two eigenspaces. The occluded input mask image (b) is used in the test cases below. The green ellipse represents the pseudo eigenspace that is constructed from the non-occluded images as in image (a)

and non-occluded parts of the occluded images. The blue ellipse represents the eigenspace constructed from the non-occluded parts of the occluded images.

Figure 4.1: (a) Non-occluded facial image. (b) An occluded image. (c) Eigenspaces.

Level 1 image division

In the level 1 image division method, the mask input image is divided into 2 head parts, 2 eyes parts and 2 mouth parts, see Figure 4.2 (b). The occlusion for each part is detected separately, and the full faces are reconstructed as described in Section 3.3. In Figure 4.2, image (a) is the mask input image occluded with sunglasses, image (b) shows the division of the image into 6 parts, and in image (c) the areas marked with black represent the detected occlusion in the eye parts. Note that dividing the image into 6 parts does not detect all of the occlusion, and some non-occluded regions are considered occluded. The background regions in the 2 mouth parts are not detected by the level 1 image division method. The reconstruction results of level 1 image division can be seen in Figure 4.3. The reconstructed image has some circles around the eyes. This is because some images in the database contain eyeglasses, so the corresponding eigenvectors leave imprints on the reconstructed images. After reconstruction, the average PSNR is calculated both for the complete reconstructed faces and for the occluded reconstructed regions only. Furthermore, the number of pixels used in the reconstruction process is recorded. If more pixels are used in the reconstruction process, the reconstructed images should be better, with a higher average PSNR value.

Level 2 image division

In the level 2 image division method, the 6 parts of level 1 are further divided into 9 sub-parts each, see Figure 4.4 (b). Each of these parts undergoes the occlusion detection process, and apca is applied to reconstruct the facial images.
In Figure 4.4, image (a) is the mask input image occluded with sunglasses, image (b) shows the division into 54 sub-parts, and in image (c) the black blocks represent the detected occlusions. The white background area that is not part of the mouth is considered an occlusion; this background occlusion is detected once the image is divided into smaller parts. The level 2 image division method also marks some occluded area as non-occluded; see Figure 4.4(c), where some parts of the sunglasses are marked as non-occluded. Figure 4.5 shows an example of image reconstruction using level 2 image division. Note that there are prominent circles around the eyes, and the black background areas near the cheeks are not reconstructed well.

Level 3a image division

In the level 3a image division method, each of the 54 parts of level 2 is further divided into 9 sub-parts, so the complete image is divided into 486 very small parts; see Figure 4.6(b). Occlusion is detected for each part separately, and apca is then applied to reconstruct the faces. Because each part is very small, very small occlusions can also be detected. In Figure 4.6, image (a) is the mask input image occluded with sunglasses, image (b) shows the division into 486 sub-parts, and in image (c) the black blocks represent the detected occlusions. Figure 4.6(c) shows that almost all of the facial occlusion is detected, but some non-occluded areas are marked as occluded: hair and eyebrows are marked as occluded. Figure 4.7 shows a face reconstructed by level 3a image division. The quality of the reconstructed image is better than for levels 1 and 2, with fainter imprints of eyeglasses around the eyes.

Level 3b image division

In the level 3b image division method, the 6 parts of level 1 are first divided into 9 sub-parts each, and occlusion is detected for each of these parts separately. If a part is occluded, it is further divided into 9 sub-parts; see Figure 4.8(c). Occlusion is then detected for these very small parts, and apca is applied to reconstruct the faces.
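The coarse-to-fine step of level 3b can be sketched as follows. This is a simplified illustration assuming a per-pixel occlusion map, square grids (a hypothetical 3×3 coarse grid refined 3× per axis, rather than the exact part layout above), and a 25% occupancy threshold like the one quoted for level 1 in chapter 5:

```python
import numpy as np

def block_occlusion(occ_map, rows, cols, frac=0.25):
    """Mark a grid cell occluded when at least `frac` of its pixels are
    occluded in the per-pixel map; returns a per-pixel boolean result."""
    H, W = occ_map.shape
    out = np.zeros_like(occ_map)
    for i in range(rows):
        for j in range(cols):
            r0, r1 = i * H // rows, (i + 1) * H // rows
            c0, c1 = j * W // cols, (j + 1) * W // cols
            if occ_map[r0:r1, c0:c1].mean() >= frac:
                out[r0:r1, c0:c1] = True
    return out

def level3b(occ_map, frac=0.25):
    """Two-step detection: a coarse pass, then a 3x finer pass whose
    detections are kept only inside coarse-occluded blocks."""
    coarse = block_occlusion(occ_map, 3, 3, frac)
    fine = block_occlusion(occ_map, 9, 9, frac)
    return coarse & fine
```

Blocks that the coarse pass leaves non-occluded are never re-marked, while inside coarse-occluded blocks the finer grid releases falsely marked pixels back to the reconstruction — the gain described for Figure 4.8(d).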
In Figure 4.8, image (a) is the mask input image occluded with sunglasses; image (b) shows the occlusions detected by level 2 image division, marked in black; image (c) shows the occluded area detected by level 2 image division further divided into sub-parts, for which occlusion is detected again; and image (d) shows the occlusion detected by the level 3b image division method. Note that the background and sunglasses occlusion is detected, and very little non-occluded area is marked as occluded. From Figure 4.8(d), we can see that the nose and cheek areas near the sunglasses, which were marked as occluded in Figure 4.8(b), are now marked as non-occluded. Figure 4.9 shows an example of image reconstruction using this method.

Scarf scenario

In this scenario, the input image is occluded by a scarf so that the entire mouth area is occluded. The image is divided into sub-parts, occlusion is detected for each part individually, and the full faces are reconstructed using the apca method. The average PSNR of all reconstructed faces is calculated to determine the quality of the reconstructed facial images, as is the average PSNR of the reconstructed occluded parts only. Furthermore, the number of pixels used in the reconstruction process and the time taken by each division method are recorded. Figures 4.10 to 4.17 show the 4 image division methods applied to the mask input image occluded with a scarf, the occlusion detected by each method, and the faces reconstructed using the 4 image division methods with apca.

Figure 4.2: (a) An occluded image. (b) Level 1 image division. (c) Detected occlusions.

Figure 4.3: An example of the reconstructed face by level 1 image division. (a) An occluded image. (b) The occluded image masked by the mask from Figure 4.2(c). (c) Reconstructed image. (d) Non-occluded image.

Figure 4.4: (a) An occluded image. (b) Level 2 image division. (c) Detected occlusions.

Figure 4.5: An example of the reconstructed face by level 2 image division. (a) An occluded image. (b) The occluded image masked by the mask from Figure 4.4(c). (c) Reconstructed image. (d) Non-occluded image.

Figure 4.6: (a) An occluded image. (b) Level 3a image division. (c) Detected occlusions.

Figure 4.7: An example of the reconstructed face by level 3a image division. (a) An occluded image. (b) The occluded image masked by the mask from Figure 4.6(c). (c) Reconstructed image. (d) Non-occluded image.

Figure 4.8: (a) An occluded image. (b) Occlusion detection by level 2 image division. (c) Level 3b image division. (d) Occlusion detection by level 3b image division.

Figure 4.9: An example of the reconstructed face by level 3b image division. (a) An occluded image. (b) The occluded image masked by the mask from Figure 4.8(d). (c) Reconstructed image. (d) Non-occluded image.

Figure 4.10: (a) An occluded image. (b) Level 1 image division. (c) Detected occlusions.

Figure 4.11: An example of the reconstructed face by level 1 image division. (a) An occluded image. (b) The occluded image masked by the mask from Figure 4.10(c). (c) Reconstructed image. (d) Non-occluded image.

Figure 4.12: (a) An occluded image. (b) Level 2 image division. (c) Detected occlusions.

Figure 4.13: An example of the reconstructed face by level 2 image division. (a) An occluded image. (b) The occluded image masked by the mask from Figure 4.12(c). (c) Reconstructed image. (d) Non-occluded image.

Figure 4.14: (a) An occluded image. (b) Level 3a image division. (c) Detected occlusions.

Figure 4.15: An example of the reconstructed face by level 3a image division. (a) An occluded image. (b) The occluded image masked by the mask from Figure 4.14(c). (c) Reconstructed image. (d) Non-occluded image.

Figure 4.16: (a) An occluded image. (b) Occlusion detection by level 2 image division. (c) Level 3b image division. (d) Occlusion detection by level 3b image division.

Figure 4.17: An example of the reconstructed face by level 3b image division. (a) An occluded image. (b) The occluded image masked by the mask from Figure 4.16(d). (c) Reconstructed image. (d) Non-occluded image.

Figure 4.18: (a) An occluded image. (b) Level 1 image division. (c) Detected occlusions.

Cap and sunglasses occlusion

In this scenario, the head is covered by a cap and the eyes are covered by sunglasses. The mouth parts contain some background occlusion, so some or all areas of all 6 parts of the mask input image are occluded. The input image is divided into parts, occlusion is detected for each part, and apca is applied to reconstruct the faces. The average PSNR of the complete reconstructed images and of the occluded reconstructed parts only is calculated to determine the quality of the reconstructed images. The number of pixels used in the reconstruction process is recorded to determine the effect of the non-occluded pixels on the quality of the reconstructed faces. The processing time of the apca process is also recorded. Figures 4.18 to 4.25 show the 4 image division methods applied to the mask input image occluded with cap and sunglasses, the occlusion detected by each method, and the faces reconstructed using these image division methods with apca.

Figure 4.19: An example of the reconstructed face by level 1 image division. (a) An occluded image. (b) The occluded image masked by the mask from Figure 4.18(c). (c) Reconstructed image. (d) Non-occluded image.

Figure 4.20: (a) An occluded image. (b) Level 2 image division. (c) Detected occlusions.

Figure 4.21: An example of the reconstructed face by level 2 image division. (a) An occluded image. (b) The occluded image masked by the mask from Figure 4.20(c). (c) Reconstructed image. (d) Non-occluded image.

Figure 4.22: (a) An occluded image. (b) Level 3a image division. (c) Detected occlusions.

Figure 4.23: An example of the reconstructed face by level 3a image division. (a) An occluded image. (b) The occluded image masked by the mask from Figure 4.22(c). (c) Reconstructed image. (d) Non-occluded image.

Figure 4.24: (a) An occluded image. (b) Occlusion detection by level 2 image division. (c) Level 3b image division. (d) Occlusion detection by level 3b image division.

Figure 4.25: An example of the reconstructed face by level 3b image division. (a) An occluded image. (b) The occluded image masked by the mask from Figure 4.24(d). (c) Reconstructed image. (d) Non-occluded image.

4.2 Pre-defined eigenspaces

In this experiment, 6 different pre-defined eigenspaces are created, and the pseudo eigenspace is constructed for each of them on all 116 images. The pre-defined eigenspaces have different kinds of sunglasses occlusions. When occlusion is detected, the pre-defined eigenspace with the smallest difference between its occlusion and the detected occlusion is selected, and this eigenspace is used to reconstruct the image with apca. The closest eigenspace is selected based on the positions of the occlusion in the eigenspace and in the detected occlusion: if the occlusion of a pixel is the same in both, the score is 0; if they differ, the score is 1. The eigenspace with the lowest total score is selected.

Metric

PSNR is used as a metric in two ways: it is calculated for the entire image and for the reconstructed parts only. This needs to be done only for the 6 different pre-defined occlusions. The number of non-occluded pixels used for encoding in each experiment is also recorded.

Experiment description

The pre-defined eigenspaces are constructed and saved to storage. These eigenspaces are created by dividing the occluded images of Figure 4.26 as described in the section. A pseudo eigenspace and 6 eigenspaces, one for each of the images in Figure 4.26, are constructed and saved to storage. A vector containing the occlusion information about each part is also created and saved for later use: if a part is occluded, 1 is stored in the respective vector element, otherwise 0. The occlusion of the mask input image is detected by following the section, and a vector that contains the occlusion information about each part is created.
This vector is compared to the vector of each pre-defined eigenspace to count the occluded parts that have the same position in both the input mask image and the image used in the construction of the pre-defined eigenspace. The eigenspace with the maximum number of matching occlusion positions is selected for the reconstruction of the facial images. The average PSNR of the complete reconstructed facial images and of the occluded reconstructed areas only is calculated to determine the quality of the reconstructed facial images. The time taken to perform the apca operation is recorded to determine the efficiency of the pre-defined eigenspaces. The 6 faces with sunglasses occlusion used in the construction of the 6 pre-defined eigenspaces can be seen in Figure 4.26. In Figure 4.27, image (a) is the mask input image, image (b) shows the occlusion detected by level 3b image division, image (c) shows the pre-defined eigenspace selected based on the occlusion detected in image (b), and image (d) shows the image reconstructed using that pre-defined eigenspace.
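The vector comparison amounts to counting matching positions, i.e. picking the minimum Hamming distance. A small sketch, with hypothetical eigenspace names and helper function:

```python
def pick_eigenspace(detected, predefined):
    """Return the name of the pre-defined eigenspace whose stored 0/1
    occlusion vector agrees with the detected vector in most positions.

    detected   : list of 0/1 flags, one per image part
    predefined : dict mapping an eigenspace name to its 0/1 vector
    """
    def matches(vec):
        # Count positions where occlusion status is identical.
        return sum(a == b for a, b in zip(detected, vec))
    return max(predefined, key=lambda name: matches(predefined[name]))
```

With, say, three stored sunglasses masks, the call returns the name whose vector shares the most part positions with the detected occlusion.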

Figure 4.26: Occluded facial images used for construction of the 6 eigenspaces.

Figure 4.27: (a) An occluded image. (b) Detected occlusion by level 3b image division. (c) Pre-defined eigenspace most similar to the detected occlusion in (b). (d) Reconstructed image using the eigenspace in (c).

Chapter 5

Results

In this chapter, the results of the performed experiments are described. The chapter is divided into three parts. In the first part, the results of the 4 image division methods for automatic occlusion detection are discussed, and images showing the output of these methods are displayed. In the second part, the reconstruction results based on the 4 occlusion detection methods are discussed, with tables containing the average PSNR values for the reconstructed faces and for the reconstructed areas only, and a table containing the processing time for each image division method. In the third part, the pre-defined eigenspaces are discussed to determine their efficiency and reconstruction quality; tables containing the processing time to reconstruct the faces with and without pre-defined eigenspaces and the average PSNR values of the reconstructed faces are displayed and discussed.

5.1 Occlusion detection results

Figure 5.1 shows the occlusion detected by the different image division methods: image (a) is the mask input image occluded with sunglasses, image (b) shows the occlusion detected by level 1 image division, (c) by level 2, (d) by level 3a, and (e) by level 3b. The grey blocks represent the marked occluded areas. In the level 1 image division method, the complete image is divided into 6 large parts. At least 25% of a part's area must be occluded for the part to be marked as occluded, so with such large parts less occlusion is detected. Image (b) shows that the occlusion in both eye parts is detected. The white background in the mouth parts is also an occlusion, but it is not detected because it covers less than 25% of the corresponding parts.
Image (b) also shows that some non-occluded area in both eye parts is marked as occluded. In the level 2 image division method, each part is smaller, so smaller occlusions can be detected. Image (c) shows that the eye occlusions and the background occlusions in the mouth parts are detected, and less occluded area is marked as non-occluded. But since each part is still fairly large, some non-occluded area is still marked as occluded, and fewer pixels are available for the reconstruction process. Across the many experiments performed, the level 3a image division showed the best occlusion detection results of all the methods.

Figure 5.1: Occlusion detection by different image division methods. (a) Occluded image. (b) Occlusion detection by level 1 image division. (c) Occlusion detection by level 2 image division. (d) Occlusion detection by level 3a image division. (e) Occlusion detection by level 3b image division.

In the level 3a image division method, each part is very small, so very small occlusions can be detected. Image (d) shows that almost all of the occlusion is detected, while some non-occluded area is marked as occluded: the eye and background occlusions are marked correctly, but eyebrows and hair are also marked as occluded. In occlusion detection by level 3b image division, the process has two steps. In the first step, the image is divided as described in the section and occlusion is detected for each part; this detects the small occlusions, but some non-occluded area is marked as occluded. In the second step, the occluded area marked in the first step is further divided into sub-parts, and occlusion is detected for each sub-part. This reclassifies non-occluded areas that were marked as occluded in the first step, so more pixels become available for the reconstruction of faces; see Figure 5.1(e). Level 3b is thus also a good occlusion detection method.

5.2 Reconstruction quality results

The quality of the reconstructed faces is measured by PSNR. The average PSNR is calculated for the complete reconstructed faces and for the reconstructed occluded parts only. Table 5.1 shows the average PSNR of the complete reconstructed faces, and Table 5.2 shows the PSNR for the reconstructed occluded parts.
In Tables 5.1 to 5.4, the columns Level 1, Level 2, Level 3a and Level 3b refer to reconstruction of the faces by the corresponding image division method. The number of pixels used in the reconstruction of the faces is recorded to determine the impact of the number of non-occluded pixels on the quality of the reconstructed faces. Furthermore, the processing time taken by each image division method is recorded. Table 5.1 contains the average PSNR values of all 116 reconstructed faces for the 3 types of occlusion. Level 1 image division has the maximum average PSNR value for the sunglasses occlusion, whereas level 3a has the maximum average PSNR value for the scarf and cap & sunglasses occlusions. Table 5.2 contains the average PSNR values of the reconstructed occluded parts only for the 3 types of occlusion. Level 1 has the maximum average PSNR value for the sunglasses and cap & sunglasses occlusions, while level 3a has the maximum for the scarf occlusion. Table 5.3 contains the number of non-occluded pixels used in the reconstruction of the facial images. The quality of the reconstructed faces generally increases with the number of non-occluded pixels.
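The PSNR values in these tables follow the standard definition, 10·log10(peak²/MSE). A small sketch for 8-bit images; the helper name is illustrative:

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB; `peak` is the maximum possible
    pixel value (255 for 8-bit images)."""
    mse = np.mean((np.asarray(ref, dtype=float) - np.asarray(rec, dtype=float)) ** 2)
    # Identical images have zero error and, by convention, infinite PSNR.
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

The per-region averages of Table 5.2 presumably correspond to the same formula restricted to the reconstructed occluded pixels, e.g. psnr(ref[mask], rec[mask]).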

Table 5.1: Reconstruction quality of the complete image (PSNR) [dB] for the granularity effect

Occlusion type        Level 1   Level 2   Level 3a   Level 3b
Sunglasses
Scarf
Cap and sunglasses

Table 5.2: Reconstruction quality of the occluded reconstructed parts (PSNR) [dB] for the granularity effect

Occlusion type        Level 1   Level 2   Level 3a   Level 3b
Sunglasses
Scarf
Cap and sunglasses

Table 5.3: Number of pixels used in reconstruction

Occlusion type        Level 1   Level 2   Level 3a   Level 3b
Sunglasses
Scarf
Cap and sunglasses

Table 5.4: Processing time (sec) for the granularity effect

Occlusion type        Level 1   Level 2   Level 3a   Level 3b
Sunglasses
Scarf
Cap and sunglasses

Figure 5.2: Reconstructed image by different image division methods. (a) An occluded image. (b) Reconstructed image by level 1 image division. (c) Reconstructed image by level 2 image division. (d) Reconstructed image by level 3a image division. (e) Reconstructed image by level 3b image division. (f) Non-occluded image.

Figure 5.3: Reconstructed image by different image division methods. (a) An occluded image. (b) Reconstructed image by level 1 image division. (c) Reconstructed image by level 2 image division. (d) Reconstructed image by level 3a image division. (e) Reconstructed image by level 3b image division. (f) Non-occluded image.

Table 5.4 contains the processing times of the 4 image division methods applied to the 3 types of occlusion. The results show that level 1 image division takes the least processing time, whereas level 3b takes the most. The processing time thus depends on the image division: large image parts take less processing time, and small image parts take more. Figure 5.2 shows a single image reconstructed using the different image division methods. Image (a) is the occluded image. Image (b) shows that the quality of the reconstructed image is good apart from some faint circles around the eyes. Image (c) shows that the quality of the reconstructed image is not good: there are prominent circles around the eyes, and the white background area is not reconstructed well either. Images (d) and (e) are reconstructed with good quality, with only faint circles around the eyes, and image (f) is the non-occluded image.
The visual evaluation and the average PSNR values of the reconstructed images show that level 3a image division generates the highest-quality images of all the image division methods.

5.3 Reconstruction results using pre-defined eigenspaces

Six pre-defined eigenspaces were constructed using six sunglasses occlusion masks, where the occlusion vector was created by level 3a image division. The occlusion of the mask input image is detected and, based on the detected occlusion, the closest eigenspace is selected for the reconstruction process. The average PSNR of the reconstructed faces is calculated to determine their quality, and the processing time is recorded to determine the efficiency of the pre-defined eigenspaces. Many experiments were performed, and the results showed a remarkable decrease in processing time with negligible quality loss of the reconstructed faces.
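The reported speedup comes from computing each eigenspace once, offline, and only loading it at reconstruction time. A minimal NumPy sketch of persisting and reloading an eigenspace; the helper names and the .npz file layout are assumptions, not the thesis code:

```python
import numpy as np

def build_and_save(train, path, k):
    """train: (n, d) matrix with one flattened face per row. Computes the
    top-k eigenfaces via SVD of the centred data and saves them, so later
    reconstruction runs can load them instead of recomputing."""
    mean = train.mean(axis=0)
    # Economy SVD: the rows of Vt are the principal directions (eigenfaces).
    _, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
    np.savez(path, mean=mean, eigvecs=Vt[:k].T)  # columns are eigenfaces

def load_eigenspace(path):
    """Reload a previously saved (mean, eigvecs) pair."""
    data = np.load(path)
    return data["mean"], data["eigvecs"]
```

Loading a saved .npz is a cheap disk read, which matches the observed drop in apca processing time when pre-defined eigenspaces are used.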


More information

Medical Information Management & Mining. You Chen Jan,15, 2013 You.chen@vanderbilt.edu

Medical Information Management & Mining. You Chen Jan,15, 2013 You.chen@vanderbilt.edu Medical Information Management & Mining You Chen Jan,15, 2013 You.chen@vanderbilt.edu 1 Trees Building Materials Trees cannot be used to build a house directly. How can we transform trees to building materials?

More information

AUTOMATIC THEFT SECURITY SYSTEM (SMART SURVEILLANCE CAMERA)

AUTOMATIC THEFT SECURITY SYSTEM (SMART SURVEILLANCE CAMERA) AUTOMATIC THEFT SECURITY SYSTEM (SMART SURVEILLANCE CAMERA) Veena G.S 1, Chandrika Prasad 2 and Khaleel K 3 Department of Computer Science and Engineering, M.S.R.I.T,Bangalore, Karnataka veenags@msrit.edu

More information

CHAPTER 8 FACTOR EXTRACTION BY MATRIX FACTORING TECHNIQUES. From Exploratory Factor Analysis Ledyard R Tucker and Robert C.

CHAPTER 8 FACTOR EXTRACTION BY MATRIX FACTORING TECHNIQUES. From Exploratory Factor Analysis Ledyard R Tucker and Robert C. CHAPTER 8 FACTOR EXTRACTION BY MATRIX FACTORING TECHNIQUES From Exploratory Factor Analysis Ledyard R Tucker and Robert C MacCallum 1997 180 CHAPTER 8 FACTOR EXTRACTION BY MATRIX FACTORING TECHNIQUES In

More information

OBJECT TRACKING USING LOG-POLAR TRANSFORMATION

OBJECT TRACKING USING LOG-POLAR TRANSFORMATION OBJECT TRACKING USING LOG-POLAR TRANSFORMATION A Thesis Submitted to the Gradual Faculty of the Louisiana State University and Agricultural and Mechanical College in partial fulfillment of the requirements

More information

Think of the beards as a layer on top of the face rather than part of the face itself. Using

Think of the beards as a layer on top of the face rather than part of the face itself. Using Tyler Ambroziak Ryan Fox CS 638-1 (Dyer) Spring 2010 Virtual Barber Abstract What would you look like without a beard? Or how about with a different type of beard? Think of the beards as a layer on top

More information

NCSS Statistical Software Principal Components Regression. In ordinary least squares, the regression coefficients are estimated using the formula ( )

NCSS Statistical Software Principal Components Regression. In ordinary least squares, the regression coefficients are estimated using the formula ( ) Chapter 340 Principal Components Regression Introduction is a technique for analyzing multiple regression data that suffer from multicollinearity. When multicollinearity occurs, least squares estimates

More information

Factor analysis. Angela Montanari

Factor analysis. Angela Montanari Factor analysis Angela Montanari 1 Introduction Factor analysis is a statistical model that allows to explain the correlations between a large number of observed correlated variables through a small number

More information

A tutorial on Principal Components Analysis

A tutorial on Principal Components Analysis A tutorial on Principal Components Analysis Lindsay I Smith February 26, 2002 Chapter 1 Introduction This tutorial is designed to give the reader an understanding of Principal Components Analysis (PCA).

More information

DATA ANALYSIS II. Matrix Algorithms

DATA ANALYSIS II. Matrix Algorithms DATA ANALYSIS II Matrix Algorithms Similarity Matrix Given a dataset D = {x i }, i=1,..,n consisting of n points in R d, let A denote the n n symmetric similarity matrix between the points, given as where

More information

Non-negative Matrix Factorization (NMF) in Semi-supervised Learning Reducing Dimension and Maintaining Meaning

Non-negative Matrix Factorization (NMF) in Semi-supervised Learning Reducing Dimension and Maintaining Meaning Non-negative Matrix Factorization (NMF) in Semi-supervised Learning Reducing Dimension and Maintaining Meaning SAMSI 10 May 2013 Outline Introduction to NMF Applications Motivations NMF as a middle step

More information

Math 215 HW #6 Solutions

Math 215 HW #6 Solutions Math 5 HW #6 Solutions Problem 34 Show that x y is orthogonal to x + y if and only if x = y Proof First, suppose x y is orthogonal to x + y Then since x, y = y, x In other words, = x y, x + y = (x y) T

More information

Applied Linear Algebra I Review page 1

Applied Linear Algebra I Review page 1 Applied Linear Algebra Review 1 I. Determinants A. Definition of a determinant 1. Using sum a. Permutations i. Sign of a permutation ii. Cycle 2. Uniqueness of the determinant function in terms of properties

More information

Principal Component Analysis

Principal Component Analysis Principal Component Analysis Principle Component Analysis: A statistical technique used to examine the interrelations among a set of variables in order to identify the underlying structure of those variables.

More information

Mathematical Model Based Total Security System with Qualitative and Quantitative Data of Human

Mathematical Model Based Total Security System with Qualitative and Quantitative Data of Human Int Jr of Mathematics Sciences & Applications Vol3, No1, January-June 2013 Copyright Mind Reader Publications ISSN No: 2230-9888 wwwjournalshubcom Mathematical Model Based Total Security System with Qualitative

More information

Orthogonal Diagonalization of Symmetric Matrices

Orthogonal Diagonalization of Symmetric Matrices MATH10212 Linear Algebra Brief lecture notes 57 Gram Schmidt Process enables us to find an orthogonal basis of a subspace. Let u 1,..., u k be a basis of a subspace V of R n. We begin the process of finding

More information

1 Introduction to Matrices

1 Introduction to Matrices 1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns

More information

Assessment. Presenter: Yupu Zhang, Guoliang Jin, Tuo Wang Computer Vision 2008 Fall

Assessment. Presenter: Yupu Zhang, Guoliang Jin, Tuo Wang Computer Vision 2008 Fall Automatic Photo Quality Assessment Presenter: Yupu Zhang, Guoliang Jin, Tuo Wang Computer Vision 2008 Fall Estimating i the photorealism of images: Distinguishing i i paintings from photographs h Florin

More information

Tutorial on Exploratory Data Analysis

Tutorial on Exploratory Data Analysis Tutorial on Exploratory Data Analysis Julie Josse, François Husson, Sébastien Lê julie.josse at agrocampus-ouest.fr francois.husson at agrocampus-ouest.fr Applied Mathematics Department, Agrocampus Ouest

More information

The Image Deblurring Problem

The Image Deblurring Problem page 1 Chapter 1 The Image Deblurring Problem You cannot depend on your eyes when your imagination is out of focus. Mark Twain When we use a camera, we want the recorded image to be a faithful representation

More information

Manifold Learning Examples PCA, LLE and ISOMAP

Manifold Learning Examples PCA, LLE and ISOMAP Manifold Learning Examples PCA, LLE and ISOMAP Dan Ventura October 14, 28 Abstract We try to give a helpful concrete example that demonstrates how to use PCA, LLE and Isomap, attempts to provide some intuition

More information

Similar matrices and Jordan form

Similar matrices and Jordan form Similar matrices and Jordan form We ve nearly covered the entire heart of linear algebra once we ve finished singular value decompositions we ll have seen all the most central topics. A T A is positive

More information

From Few to Many: Illumination Cone Models for Face Recognition Under Variable Lighting and Pose. Abstract

From Few to Many: Illumination Cone Models for Face Recognition Under Variable Lighting and Pose. Abstract To Appear in the IEEE Trans. on Pattern Analysis and Machine Intelligence From Few to Many: Illumination Cone Models for Face Recognition Under Variable Lighting and Pose Athinodoros S. Georghiades Peter

More information

Digital Imaging and Multimedia. Filters. Ahmed Elgammal Dept. of Computer Science Rutgers University

Digital Imaging and Multimedia. Filters. Ahmed Elgammal Dept. of Computer Science Rutgers University Digital Imaging and Multimedia Filters Ahmed Elgammal Dept. of Computer Science Rutgers University Outlines What are Filters Linear Filters Convolution operation Properties of Linear Filters Application

More information

Introduction to Principal Component Analysis: Stock Market Values

Introduction to Principal Component Analysis: Stock Market Values Chapter 10 Introduction to Principal Component Analysis: Stock Market Values The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from

More information

Graphical Representation of Multivariate Data

Graphical Representation of Multivariate Data Graphical Representation of Multivariate Data One difficulty with multivariate data is their visualization, in particular when p > 3. At the very least, we can construct pairwise scatter plots of variables.

More information

Understanding and Applying Kalman Filtering

Understanding and Applying Kalman Filtering Understanding and Applying Kalman Filtering Lindsay Kleeman Department of Electrical and Computer Systems Engineering Monash University, Clayton 1 Introduction Objectives: 1. Provide a basic understanding

More information

4.3 Least Squares Approximations

4.3 Least Squares Approximations 18 Chapter. Orthogonality.3 Least Squares Approximations It often happens that Ax D b has no solution. The usual reason is: too many equations. The matrix has more rows than columns. There are more equations

More information

Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches

Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches PhD Thesis by Payam Birjandi Director: Prof. Mihai Datcu Problematic

More information

Illumination, Expression and Occlusion Invariant Pose-Adaptive Face Recognition System for Real- Time Applications

Illumination, Expression and Occlusion Invariant Pose-Adaptive Face Recognition System for Real- Time Applications Illumination, Expression and Occlusion Invariant Pose-Adaptive Face Recognition System for Real- Time Applications Shireesha Chintalapati #1, M. V. Raghunadh *2 Department of E and CE NIT Warangal, Andhra

More information

Chapter 7. Lyapunov Exponents. 7.1 Maps

Chapter 7. Lyapunov Exponents. 7.1 Maps Chapter 7 Lyapunov Exponents Lyapunov exponents tell us the rate of divergence of nearby trajectories a key component of chaotic dynamics. For one dimensional maps the exponent is simply the average

More information

Accurate and robust image superresolution by neural processing of local image representations

Accurate and robust image superresolution by neural processing of local image representations Accurate and robust image superresolution by neural processing of local image representations Carlos Miravet 1,2 and Francisco B. Rodríguez 1 1 Grupo de Neurocomputación Biológica (GNB), Escuela Politécnica

More information

Steven M. Ho!and. Department of Geology, University of Georgia, Athens, GA 30602-2501

Steven M. Ho!and. Department of Geology, University of Georgia, Athens, GA 30602-2501 PRINCIPAL COMPONENTS ANALYSIS (PCA) Steven M. Ho!and Department of Geology, University of Georgia, Athens, GA 30602-2501 May 2008 Introduction Suppose we had measured two variables, length and width, and

More information

Image Compression through DCT and Huffman Coding Technique

Image Compression through DCT and Huffman Coding Technique International Journal of Current Engineering and Technology E-ISSN 2277 4106, P-ISSN 2347 5161 2015 INPRESSCO, All Rights Reserved Available at http://inpressco.com/category/ijcet Research Article Rahul

More information

Section 6.1 - Inner Products and Norms

Section 6.1 - Inner Products and Norms Section 6.1 - Inner Products and Norms Definition. Let V be a vector space over F {R, C}. An inner product on V is a function that assigns, to every ordered pair of vectors x and y in V, a scalar in F,

More information

Color Histogram Normalization using Matlab and Applications in CBIR. László Csink, Szabolcs Sergyán Budapest Tech SSIP 05, Szeged

Color Histogram Normalization using Matlab and Applications in CBIR. László Csink, Szabolcs Sergyán Budapest Tech SSIP 05, Szeged Color Histogram Normalization using Matlab and Applications in CBIR László Csink, Szabolcs Sergyán Budapest Tech SSIP 05, Szeged Outline Introduction Demonstration of the algorithm Mathematical background

More information

13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions.

13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions. 3 MATH FACTS 0 3 MATH FACTS 3. Vectors 3.. Definition We use the overhead arrow to denote a column vector, i.e., a linear segment with a direction. For example, in three-space, we write a vector in terms

More information

Principle Component Analysis and Partial Least Squares: Two Dimension Reduction Techniques for Regression

Principle Component Analysis and Partial Least Squares: Two Dimension Reduction Techniques for Regression Principle Component Analysis and Partial Least Squares: Two Dimension Reduction Techniques for Regression Saikat Maitra and Jun Yan Abstract: Dimension reduction is one of the major tasks for multivariate

More information

LOCAL SURFACE PATCH BASED TIME ATTENDANCE SYSTEM USING FACE. indhubatchvsa@gmail.com

LOCAL SURFACE PATCH BASED TIME ATTENDANCE SYSTEM USING FACE. indhubatchvsa@gmail.com LOCAL SURFACE PATCH BASED TIME ATTENDANCE SYSTEM USING FACE 1 S.Manikandan, 2 S.Abirami, 2 R.Indumathi, 2 R.Nandhini, 2 T.Nanthini 1 Assistant Professor, VSA group of institution, Salem. 2 BE(ECE), VSA

More information

Introduction: Overview of Kernel Methods

Introduction: Overview of Kernel Methods Introduction: Overview of Kernel Methods Statistical Data Analysis with Positive Definite Kernels Kenji Fukumizu Institute of Statistical Mathematics, ROIS Department of Statistical Science, Graduate University

More information

The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression

The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression The SVD is the most generally applicable of the orthogonal-diagonal-orthogonal type matrix decompositions Every

More information

α = u v. In other words, Orthogonal Projection

α = u v. In other words, Orthogonal Projection Orthogonal Projection Given any nonzero vector v, it is possible to decompose an arbitrary vector u into a component that points in the direction of v and one that points in a direction orthogonal to v

More information

1 2 3 1 1 2 x = + x 2 + x 4 1 0 1

1 2 3 1 1 2 x = + x 2 + x 4 1 0 1 (d) If the vector b is the sum of the four columns of A, write down the complete solution to Ax = b. 1 2 3 1 1 2 x = + x 2 + x 4 1 0 0 1 0 1 2. (11 points) This problem finds the curve y = C + D 2 t which

More information

Math 550 Notes. Chapter 7. Jesse Crawford. Department of Mathematics Tarleton State University. Fall 2010

Math 550 Notes. Chapter 7. Jesse Crawford. Department of Mathematics Tarleton State University. Fall 2010 Math 550 Notes Chapter 7 Jesse Crawford Department of Mathematics Tarleton State University Fall 2010 (Tarleton State University) Math 550 Chapter 7 Fall 2010 1 / 34 Outline 1 Self-Adjoint and Normal Operators

More information

LINEAR ALGEBRA. September 23, 2010

LINEAR ALGEBRA. September 23, 2010 LINEAR ALGEBRA September 3, 00 Contents 0. LU-decomposition.................................... 0. Inverses and Transposes................................. 0.3 Column Spaces and NullSpaces.............................

More information

Exploratory Factor Analysis

Exploratory Factor Analysis Introduction Principal components: explain many variables using few new variables. Not many assumptions attached. Exploratory Factor Analysis Exploratory factor analysis: similar idea, but based on model.

More information

2.2 Creaseness operator

2.2 Creaseness operator 2.2. Creaseness operator 31 2.2 Creaseness operator Antonio López, a member of our group, has studied for his PhD dissertation the differential operators described in this section [72]. He has compared

More information

SYSTEMS OF EQUATIONS AND MATRICES WITH THE TI-89. by Joseph Collison

SYSTEMS OF EQUATIONS AND MATRICES WITH THE TI-89. by Joseph Collison SYSTEMS OF EQUATIONS AND MATRICES WITH THE TI-89 by Joseph Collison Copyright 2000 by Joseph Collison All rights reserved Reproduction or translation of any part of this work beyond that permitted by Sections

More information

Figure 1.1 Vector A and Vector F

Figure 1.1 Vector A and Vector F CHAPTER I VECTOR QUANTITIES Quantities are anything which can be measured, and stated with number. Quantities in physics are divided into two types; scalar and vector quantities. Scalar quantities have

More information

8 Square matrices continued: Determinants

8 Square matrices continued: Determinants 8 Square matrices continued: Determinants 8. Introduction Determinants give us important information about square matrices, and, as we ll soon see, are essential for the computation of eigenvalues. You

More information

Joint models for classification and comparison of mortality in different countries.

Joint models for classification and comparison of mortality in different countries. Joint models for classification and comparison of mortality in different countries. Viani D. Biatat 1 and Iain D. Currie 1 1 Department of Actuarial Mathematics and Statistics, and the Maxwell Institute

More information

STATISTICS AND DATA ANALYSIS IN GEOLOGY, 3rd ed. Clarificationof zonationprocedure described onpp. 238-239

STATISTICS AND DATA ANALYSIS IN GEOLOGY, 3rd ed. Clarificationof zonationprocedure described onpp. 238-239 STATISTICS AND DATA ANALYSIS IN GEOLOGY, 3rd ed. by John C. Davis Clarificationof zonationprocedure described onpp. 38-39 Because the notation used in this section (Eqs. 4.8 through 4.84) is inconsistent

More information

Linear Threshold Units

Linear Threshold Units Linear Threshold Units w x hx (... w n x n w We assume that each feature x j and each weight w j is a real number (we will relax this later) We will study three different algorithms for learning linear

More information

Applied Linear Algebra

Applied Linear Algebra Applied Linear Algebra OTTO BRETSCHER http://www.prenhall.com/bretscher Chapter 7 Eigenvalues and Eigenvectors Chia-Hui Chang Email: chia@csie.ncu.edu.tw National Central University, Taiwan 7.1 DYNAMICAL

More information

Multiple Linear Regression in Data Mining

Multiple Linear Regression in Data Mining Multiple Linear Regression in Data Mining Contents 2.1. A Review of Multiple Linear Regression 2.2. Illustration of the Regression Process 2.3. Subset Selection in Linear Regression 1 2 Chap. 2 Multiple

More information

P164 Tomographic Velocity Model Building Using Iterative Eigendecomposition

P164 Tomographic Velocity Model Building Using Iterative Eigendecomposition P164 Tomographic Velocity Model Building Using Iterative Eigendecomposition K. Osypov* (WesternGeco), D. Nichols (WesternGeco), M. Woodward (WesternGeco) & C.E. Yarman (WesternGeco) SUMMARY Tomographic

More information

Keywords: Image complexity, PSNR, Levenberg-Marquardt, Multi-layer neural network.

Keywords: Image complexity, PSNR, Levenberg-Marquardt, Multi-layer neural network. Global Journal of Computer Science and Technology Volume 11 Issue 3 Version 1.0 Type: Double Blind Peer Reviewed International Research Journal Publisher: Global Journals Inc. (USA) Online ISSN: 0975-4172

More information

Multivariate Analysis (Slides 13)

Multivariate Analysis (Slides 13) Multivariate Analysis (Slides 13) The final topic we consider is Factor Analysis. A Factor Analysis is a mathematical approach for attempting to explain the correlation between a large set of variables

More information

Mathematics Course 111: Algebra I Part IV: Vector Spaces

Mathematics Course 111: Algebra I Part IV: Vector Spaces Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 1996-7 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are

More information

Recall the basic property of the transpose (for any A): v A t Aw = v w, v, w R n.

Recall the basic property of the transpose (for any A): v A t Aw = v w, v, w R n. ORTHOGONAL MATRICES Informally, an orthogonal n n matrix is the n-dimensional analogue of the rotation matrices R θ in R 2. When does a linear transformation of R 3 (or R n ) deserve to be called a rotation?

More information

Unsupervised and supervised dimension reduction: Algorithms and connections

Unsupervised and supervised dimension reduction: Algorithms and connections Unsupervised and supervised dimension reduction: Algorithms and connections Jieping Ye Department of Computer Science and Engineering Evolutionary Functional Genomics Center The Biodesign Institute Arizona

More information

DERIVATIVES AS MATRICES; CHAIN RULE

DERIVATIVES AS MATRICES; CHAIN RULE DERIVATIVES AS MATRICES; CHAIN RULE 1. Derivatives of Real-valued Functions Let s first consider functions f : R 2 R. Recall that if the partial derivatives of f exist at the point (x 0, y 0 ), then we

More information

5. Orthogonal matrices

5. Orthogonal matrices L Vandenberghe EE133A (Spring 2016) 5 Orthogonal matrices matrices with orthonormal columns orthogonal matrices tall matrices with orthonormal columns complex matrices with orthonormal columns 5-1 Orthonormal

More information

5: Magnitude 6: Convert to Polar 7: Convert to Rectangular

5: Magnitude 6: Convert to Polar 7: Convert to Rectangular TI-NSPIRE CALCULATOR MENUS 1: Tools > 1: Define 2: Recall Definition --------------- 3: Delete Variable 4: Clear a-z 5: Clear History --------------- 6: Insert Comment 2: Number > 1: Convert to Decimal

More information