IMAGE RECOGNITION FOR CATS AND DOGS
HYO JIN CHUNG AND MINH N. TRAN

Abstract. In this project, we are given a training set of 8 images of cats and 8 images of dogs, and we are asked to classify a testing set of 38 images of cats and dogs. Two different methods are used to detect the edges of the given images: the first applies a 2-D Discrete Haar Wavelet Transform (DWT), and the second applies a Laplacian filter. Fisher's Linear Discriminant Analysis (FDA) is then used to classify the testing images into their appropriate categories. The two methods result in classification rates of 95% and 97%, respectively.

1. Introduction and Overview

It is intriguing how we can recognize people, animals, and places. It is even more intriguing to develop a mathematical algorithm that mimics the mechanism of image recognition. To begin, let us look at how we do this biologically. In our daily life, we take in a huge amount of information from many sources around us, and from it we form preconceptions. To visually recognize images, we compare the image at hand with the ones we have accumulated through experience. Consider Figure 1 as an example. We can easily tell that these pictures are of a dog and a bird, since we already have a mental picture of these two animals.

(a) A dog (b) A bird
Figure 1

Although these images are composed of only a few very simple lines, we are still able to recognize them, since the lines minimally represent the two animals. They are sufficient for us to see the main features of a dog and a bird. In general, our brains find
types of images that have the same features as the unknown image. Thus, the edges of the images play an important part in our own biological image recognition process.

In this project, we develop a mathematical algorithm to recognize images of cats and dogs, just as our minds do. We are given a training set of 8 images of cats and 8 images of dogs, with which we train the computer on what a cat and a dog should look like. Our goal is then to have the computer classify each of 38 probe images as either a cat or a dog by comparing those unknown images with the training set.

2. Theoretical Background

Before we introduce the image recognition algorithm, we review the mathematical methods that the algorithm uses.

2.1. Discrete Wavelet Transform. The DWT is based on a wavelet basis, whose elements have varying frequency and limited duration. The key advantage of the wavelet transform over the Fourier transform is that it captures both frequency and location information. This means that features that go undetected at one resolution can be captured at another.

LL HL
LH HH

Figure 2. 2-D wavelet transform

Generally, a 2-D wavelet transform takes the original image and creates four subimages. The image in the top left is the result of reducing the resolution of the original image; this is achieved by applying the low-pass filter (a filter that passes low frequencies) to both rows and columns (LL). The image in the top right is the result of applying the high-pass filter (a filter that passes high frequencies) to the columns and the low-pass filter to the rows (HL). The image in the bottom left is obtained via L columns and H rows (LH), and the one in the bottom right via H columns and H rows (HH). See Figure 2.
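As a concrete illustration, the single-level decomposition of Figure 2 can be reproduced with MATLAB's dwt2, the same routine used in Appendix B. The following is a minimal sketch, assuming the Wavelet Toolbox is available and that A already holds a grayscale image matrix such as one of the 64-by-64 training images:

% Single-level 2-D Haar decomposition of a grayscale image matrix A.
% dwt2 returns the approximation and the horizontal, vertical, and
% diagonal detail subbands, tiled below in the layout of Figure 2.
[cA,cH,cV,cD] = dwt2(A,'haar');
subbands = [cA cH; cV cD];        % [LL HL; LH HH]
imagesc(subbands), colormap(gray)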
2.2. Average Filter. The average filter is a smoothing spatial filter. It reduces sharp transitions in intensities, which makes the subsequent detection of edges easier. The 2-D 3-by-3 smoothing spatial filter used here is the weighted average filter

(1)
(1/16) *
1 2 1
2 4 2
1 2 1

2.3. Laplacian Filter. The Laplacian filter is a sharpening spatial filter. Its objective is to highlight transitions in intensities, which is accomplished by taking derivatives of the intensities. The simplest isotropic derivative operator is the Laplacian

∇^2 f = ∂^2 f/∂x^2 + ∂^2 f/∂y^2.

In discrete form, in the x-direction we have

∂^2 f/∂x^2 = f(x+1, y) + f(x-1, y) - 2f(x, y),

and in the y-direction

∂^2 f/∂y^2 = f(x, y+1) + f(x, y-1) - 2f(x, y).

Thus, the discrete Laplacian of two variables is

∇^2 f(x, y) = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y).

This gives the 2-D 3-by-3 sharpening spatial filter mask

0  1  0
1 -4  1
0  1  0

whose entries multiply the pixels f(x-1, y-1), ..., f(x+1, y+1) surrounding the center pixel f(x, y). The diagonal directions can be incorporated by rotating the above filter mask:

1  0  1
0 -4  0
1  0  1

Another Laplacian mask is then formed by adding the diagonal directions to the 2-D 3-by-3 sharpening spatial filter mask, using the definition of the digital Laplacian:

1  1  1
1 -8  1
1  1  1

Multiplying this mask by -1, we obtain our Laplacian filter mask

(2)
-1 -1 -1
-1  8 -1
-1 -1 -1
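To make the two filters concrete, the following sketch builds them in MATLAB and applies them to a toy image with a single vertical step edge; the step image is our own choice, used only for illustration:

% Build the smoothing filter (1) and the Laplacian mask (2).
filter = (1/16)*[1 2 1; 2 4 2; 1 2 1];   % weighted average filter (1)
L4 = [0 1 0; 1 -4 1; 0 1 0];             % 4-neighbor discrete Laplacian
L8 = L4 + [1 0 1; 0 -4 0; 1 0 1];        % add the diagonal directions
mask = -L8;                              % = [-1 -1 -1; -1 8 -1; -1 -1 -1], i.e. (2)
% On a step-edge image the response vanishes in flat regions (the mask
% entries sum to zero) and is large in magnitude along the edge.
A = [zeros(8,4) 255*ones(8,4)];          % toy image with a vertical step edge
E = conv2(conv2(A,filter,'same'),mask,'same');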
2.4. Principal Component Analysis. The main step of PCA is the extraction of eigenvalues and eigenvectors, usually via the Singular Value Decomposition (SVD), to form the KL (Karhunen-Loève) basis, a well-known optimal basis that achieves dimensionality reduction of the data matrix. See Figure 3.

Figure 3. Principal Component Analysis

2.5. Fisher's Linear Discriminant Analysis. A simple way to present the idea of FDA is in the context of a two-class classification problem. Namely, we consider two classes of data and project them onto a new basis. The goal is to find a projection that maximizes the distance between the two classes while minimizing the scatter within each class. See Figure 4.

Figure 4. Two-class FDA

We want to construct a projection w such that

w = arg max_w (w^T S_B w)/(w^T S_W w),
where S_B is the between-class scatter matrix and S_W is the within-class scatter matrix, given by

S_B = (m_2 - m_1)(m_2 - m_1)^T,

S_W = sum_{i=1}^{2} sum_{x in D_i} (x - m_i)(x - m_i)^T,

where m_1 is the mean of the first class, m_2 is the mean of the second class, and D_1, D_2 are the two classes of the data set.

3. Algorithm Implementation and Development

The main idea of the image recognition process is that we first separate the training set of images into cats and dogs, and then determine which group each of the probe images belongs to. We develop an algorithm that classifies the images in the training set into the two classes of cats and dogs, and the same algorithm is then used to find the membership of the probe images.

As we have seen, the detection of edges is an integral part of the image recognition process. Hence, the first step of the two-class classification algorithm is to extract the edges from the original images, so that the main features of the images are easier to recognize. In this project, we introduce two different methods for edge detection: the DWT and the Laplacian filter. Then, so that only the main features of the images are considered in the recognition process, the second step is dimensionality reduction using PCA. Finally, FDA is used for the two-class classification:

Edge Detection → Dimension Reduction → Classification

Once we find a threshold value using the training set, which best separates the two classes of data, we move on to the next part of the image recognition process. We take the probe images through the same classification algorithm that we used for the training set. Then, using the threshold, any probe image that falls on the cat side of the threshold is recognized as an image of a cat.

To illustrate the algorithm implementation, let us consider the first picture of a cat and the first picture of a dog from the cat and dog data set. See Figure 5.
(a) Cat (b) Dog
Figure 5. Original images

3.1. Method 1: Using the 2-D Discrete Haar Wavelet Transform.

2-D Discrete Haar Wavelet Transform → Principal Component Analysis → Fisher's Linear Discriminant Analysis

3.1.1. 2-D Discrete Haar Wavelet Transform. To extract the edges, we take the Haar wavelet transform of the cat and the dog. See Figures 6 and 7. The panel on the top left is the reduced-resolution version of the original image; the one on the top right contains the fine scales in the horizontal direction, the bottom left the fine scales in the vertical direction, and the bottom right the fine scales in the diagonal direction.

Figure 6. Haar wavelet transform of Cat
Figure 7. Haar wavelet transform of Dog

After decomposing the image with the Haar wavelet transform, we extract the edges by adding the fine scales in the horizontal and vertical directions. Adding these two details is sufficient, since the horizontal fine scales are obtained by applying HL and the vertical fine scales by applying LH: the frequencies that go undetected in one direction are captured in the other. See Figure 8. The original images are 64-by-64; we note that the images after the DWT are now 32-by-32.

(a) Cat (b) Dog
Figure 8. After adding vertical and horizontal details

The details in the images do not stand out very clearly at this stage, since the color intensities have not yet been rescaled. After rescaling the colors of the horizontal and vertical fine scales of the cat and dog to the range from 0 to 255, the edges are more apparent. See Figure 9. We can now better see the cat features, with pointed ears, in the image on the left, and the dog features, with a rounded head, in the image on the right.
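This edge-extraction step is essentially the wavelet routine of Appendix B.1.1 in miniature; a minimal sketch, assuming A holds one 64-by-64 grayscale training image (as in the appendix, each detail is rescaled with wcodemat before the two are summed):

% Extract edges by summing the horizontal and vertical Haar details.
nbcol = size(colormap(gray),1);                  % pseudocolor range
[cA,cH,cV,cD] = dwt2(A,'haar');                  % single-level decomposition
edges = wcodemat(cH,nbcol) + wcodemat(cV,nbcol); % 32-by-32 edge image
imagesc(edges), colormap(gray)                   % compare Figure 9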
(a) Cat (b) Dog
Figure 9. Vertical and horizontal details after color mapping

3.1.2. Principal Component Analysis. Before we move on to the classification step of the algorithm, we perform the SVD on the set of images to find the principal components of the two animals. See Figure 10. The first four principal components of the cat and dog data give the four main features of the animals. We see a strong cat resemblance, with pointed ears, in these components; hence there is a greater chance that cats will be classified correctly.

Figure 10. First four principal components of cat and dog data

A plot of the eigenvalues after the SVD shows a heavy-tailed distribution. See Figure 11. To capture the essential components of the cats and dogs, we only need to retain a few features; in this case, we choose the number of features to be 2. From the matrices S and V produced by the SVD, we can lower the dimension of the cat and dog data to classify them efficiently, and we can use the eigenvectors U to project the probes onto this lower dimension.
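A minimal sketch of this reduction step follows; the 1024-by-16 size of X is our assumption (sixteen reshaped 32-by-32 edge images stored as columns), and the projection mirrors the FDA routine in Appendix B.1.2:

% Reduce the edge-image data to 'feature' SVD coordinates.
feature = 2;                  % number of features retained in Method 1
[U,S,V] = svd(X,0);           % economy-size SVD of the data matrix
coeff = S*V';                 % coordinates of the training images
low = coeff(1:feature,:);     % feature-by-16 reduced representation
% a probe column vector p is mapped into the same space by
% plow = U(:,1:feature)'*p;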
Figure 11. Eigenvalues of the cat and dog data

3.1.3. Fisher's Linear Discriminant Analysis. We now turn our attention to the statistical aspect of the project. Using the method of FDA, we form the within-class scatter matrix and the between-class scatter matrix. From these we find the optimal projection w with which to project the data onto the real line, where the actual classification is done. To do that, we find a threshold value that best separates the two classes of data. Figure 12 shows the projection of the cat and dog data down onto the real line along with the threshold value. With the training data, we have about 3-4 misclassifications for each of the cats and dogs.

(a) Dog data (b) Cat data
Figure 12. Projection of cat and dog data onto the real line

To gain a better understanding of the distribution of the data, we generate the histogram of the cats and dogs with the separation indicator line in red. See Figure 13.
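The FDA computation itself reduces to a generalized eigenvalue problem; the following sketch condenses the FDA routine of Appendix B.1.2, assuming cats and dogs are feature-by-8 coordinate matrices coming out of the PCA step:

% Two-class FDA: find w maximizing (w'*SB*w)/(w'*SW*w).
mc = mean(cats,2);  md = mean(dogs,2);
Cc = cats - repmat(mc,[1,size(cats,2)]);     % mean-centered cats
Cd = dogs - repmat(md,[1,size(dogs,2)]);     % mean-centered dogs
SW = Cc*Cc' + Cd*Cd';                        % within-class scatter
SB = (md-mc)*(md-mc)';                       % between-class scatter
[V2,D2] = eig(SB,SW);                        % generalized eigenproblem
[lambda,ind] = max(abs(diag(D2)));
w = V2(:,ind)/norm(V2(:,ind),2);             % optimal projection direction
vcat = w'*cats;  vdog = w'*dogs;             % data on the real line

The threshold is then placed between the largest projected dog and the smallest projected cat, as in the while loop of Appendix B.1.2.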
Figure 13. Histogram of cat and dog data after being projected onto the real line

3.2. Method 2: Using the Laplacian Filter.

Laplacian Filter → Principal Component Analysis → Fisher's Linear Discriminant Analysis

3.2.1. Laplacian Filter. The second method uses the Laplacian filter for edge detection. Before we apply the Laplacian filter to an image, we first smooth the image with the average filter of Equation (1). This process blends any small objects into the background and makes the larger objects easier to detect. See Figure 14.

(a) Cat (b) Dog
Figure 14. After applying average filter
We then extract the edges by applying the Laplacian filter of Equation (2). See Figure 15. The edges are more apparent now: the whole image becomes mostly gray, so the edges stand out better. The ears of the cat stand out more clearly from the background, and the whiskers of the cat are more defined; the ears and the mouth of the dog are also better shown. Since the Laplacian filter does not change the resolution, the images after edge detection look as detailed as the original ones.

(a) Cat (b) Dog
Figure 15. After applying Laplacian mask

The resulting figure on the right is darker than the one on the left. For consistency in color, we again map the color intensities of this figure to the scale from 0 to 255. See Figure 16.

(a) Cat (b) Dog
Figure 16. After color rescaling
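This smooth-then-sharpen pipeline is what the mask routine of Appendix B.2.1 implements; a minimal inline sketch, assuming A holds one 64-by-64 grayscale image:

% Smooth with the average filter (1), sharpen with the Laplacian mask (2),
% then rescale the intensities for display.
filter = (1/16)*[1 2 1; 2 4 2; 1 2 1];
mask = [-1 -1 -1; -1 8 -1; -1 -1 -1];
Af  = conv2(A,filter,'same');                % blend small objects away
Afm = conv2(Af,mask,'same');                 % highlight intensity transitions
AA  = wcodemat(Afm,size(colormap(gray),1));  % rescale as in Figure 16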
3.2.2. Principal Component Analysis. Before classifying the images into cats and dogs, we use the idea of PCA to find the best basis with which to project the data onto a lower dimension for less computation. We perform the SVD to find the best D value, which becomes the new dimension of the data. The first four principal components after applying the SVD are shown in Figure 17. This best D value provides an even lower dimension than the one onto which the original data are first projected. Classifying in this dimension is advantageous, since most of the important details are retained while the computational cost is further reduced. In this case, we set the energy level to about 95% to find the best D value.

Figure 17. First four principal components of cat and dog data

The best D value found here is 12. The plot of the eigenvalues after applying the SVD, in Figure 18, shows that we need to use most of the eigenvalues. However, compared to the full dimension of 16, the dimension of 12 is more efficient for further computation and still retains the important components of the data.

Figure 18. Eigenvalues of the cat and dog data

As in the PCA step of the previous method, the first 12 eigenvectors U are used to project the data onto this lower dimension.
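The search for the best D value amounts to accumulating the normalized energy of the singular values until the target level is reached. A compact, vectorized version of the loop in Appendix B.2.2, with S taken from the SVD of the mean-subtracted data matrix:

% Smallest dimension d capturing at least 95% of the singular-value energy.
Svec = diag(S);                              % singular values of the data
energyfrac = cumsum(Svec.^2)/sum(Svec.^2);   % cumulative energy fraction
d = find(energyfrac >= 0.95, 1);             % best D value

The appendix code computes the same quantity with an explicit for loop and a break statement.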
3.2.3. Fisher's Linear Discriminant Analysis. As in the first method, we use FDA for the classification step. Using the method of FDA, we find the optimal projection direction w with which to project the data onto the real line. After projecting the cat and dog data down onto the real line and obtaining the threshold value, we generate a plot. See Figure 19. All of the cats and dogs in the training data are correctly classified. We also generate the histogram of the cats and dogs with the indicator line in red. See Figure 20.

(a) Dog data (b) Cat data
Figure 19. Projection of cat and dog data onto the real line

Figure 20. Histogram of cat and dog data after being projected onto the real line

4. Computational Results

4.1. Result of Method 1, using the 2-D Discrete Haar Wavelet Transform. After running the cat and dog probes through the algorithm and projecting them onto the real line, we get the results below. See Figure 21.
(a) Dog probes (b) Cat probes
Figure 21. Projection of cat and dog probes onto the real line

A histogram of the probes is displayed below. See Figure 22.

Figure 22. Histogram of cat and dog probes after being projected onto the real line

The confusion matrix is shown below. The classification rate is 94.74%.

                 Actual
Classified    cat    dog
cat            19      2
dog             0     17

The misclassified dogs are shown in Figure 23. These two dogs are classified as cats because they have the pointed ears that are a feature of cats.
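Given the classification vector clsfy returned by the codes of Appendix B and the true labels hiddenlabels (assuming, as the appendix comments suggest, cats are labeled 1 and dogs 0), the confusion matrix and classification rate can be tabulated as in this minimal sketch:

% Tabulate the 2-by-2 confusion matrix (rows: classified; columns: actual).
C = [sum(clsfy==1 & hiddenlabels==1), sum(clsfy==1 & hiddenlabels==0);
     sum(clsfy==0 & hiddenlabels==1), sum(clsfy==0 & hiddenlabels==0)];
rate = trace(C)/sum(C(:));   % fraction of probes classified correctly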
(a) (b)
Figure 23. Misclassified Dogs

4.2. Result of Method 2, using the Laplacian Filter. After running the cat and dog probes through the algorithm and projecting them onto the real line, we get the results below. See Figure 24. We see that only one dog is misclassified out of the 38 cats and dogs.

(a) Dog probes (b) Cat probes
Figure 24. Projection of cat and dog probes onto the real line

A histogram of the probes is displayed below. See Figure 25.

The confusion matrix is shown below. The classification rate is 97.37%.

                 Actual
Classified    cat    dog
cat            19      1
dog             0     18

The misclassified dog is shown in Figure 26. This dog is classified as a cat because it has the pointed-ear feature of cats. It is one of the misclassified dogs from Method 1.
Figure 25. Histogram of cat and dog probes after being projected onto the real line

Figure 26. Misclassified Dog

5. Summary and Conclusions

In summary, to classify the images of cats and dogs, we used two different methods for edge detection: the 2-D Discrete Wavelet Transform and the Laplacian filter. FDA was then used to recognize the images as either cats or dogs. The first method, using the 2-D Discrete Wavelet Transform, resulted in a classification rate of 95%; the second method, using the Laplacian filter, produced a classification rate of 97%. The higher classification rate of the second method is probably because its edge detection maintains the original resolution (64-by-64), whereas in the first method the resolution is reduced (32-by-32). Comparing Figures 8 and 15, we can see that the Laplacian mask does a better job of detecting the image edges. Also, the second method retains a higher number of dimensions (12) than the first method (2). However, because the first method takes in a lower-dimensional input and retains a smaller number of dimensions, it is faster than the second method. Thus, if a high classification rate is desired, the second method should be used; if time is an issue, the first method should be used. Moreover, depending on the situation, one method might work better than the other, so deciding which method to use is important.

We should note that this algorithm only works for two classes of images; for multiclass recognition, a more sophisticated algorithm must be devised. Also, we should keep in mind that one hundred percent correct classification is very rare, especially when we encounter dogs that share features of cats and vice versa.
Appendix A. MATLAB functions used and brief implementation explanation

A.1. colormap. colormap(gray) sets the current figure's colormap to gray. colormap(gray) is used to set the colors to black and white.

A.2. conv2. conv2 performs the 2-D convolution of the input matrices. conv2(A,B,'same') is used to apply the average filter and the Laplacian filter, where A is the matrix of an image and B is a filter.

A.3. dwt2. dwt2 performs a single-level 2-D wavelet decomposition with respect to a particular wavelet. dwt2 is used to decompose the images of cats and dogs into approximation and details.

A.4. repmat. B = repmat(A,M,N) creates a large matrix B consisting of an M-by-N tiling of copies of A. The size of B is [size(A,1)*M, size(A,2)*N]. repmat is used to subtract the mean of the data from each column of the data matrix.

A.5. reshape. reshape(X,M,N) returns the M-by-N matrix whose elements are taken columnwise from X. reshape is used to store the matrix form of an image as a column, or vice versa.

A.6. svd. [U,S,V] = svd(X) produces a diagonal matrix S, of the same dimension as X and with nonnegative diagonal elements in decreasing order, and unitary matrices U and V so that X = U*S*V'. [U,S,V] = svd(X,0) produces the economy-size decomposition. svd is used for dimensionality reduction.

A.7. wcodemat. wcodemat is an extended pseudocolor matrix scaling. Y = wcodemat(X,NBCODES), where X is a matrix and NBCODES is the color range, rescales the values in matrix X to the value range given by NBCODES. wcodemat is used to rescale colors to an appropriate pseudocolor scaling after edge detection.
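To show how these functions fit together in the codes of Appendix B, here is a small self-contained sketch; the random image and the toy three-column data matrix are stand-ins of our own, not the project data:

% Build a data matrix from images, center it, and take its SVD.
img  = rand(64,64);                          % stand-in for a 64x64 image
x    = reshape(img,64*64,1);                 % image as a column vector
Dmat = [x 2*x 3*x];                          % toy 3-image data matrix
Dmat = Dmat - repmat(mean(Dmat,2),[1,3]);    % subtract the mean of each row
[U,S,V] = svd(Dmat,0);                       % economy-size decomposition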
Appendix B. MATLAB codes

B.1. MATLAB codes for the method of using the Discrete Haar Wavelet Transform and Fisher's Linear Discriminant Analysis.

B.1.1. wavelet.

function [AA] = wavelet(A,n)
%
% Purpose:
% To decompose an image into an approximation and 3 detail components in an
% arrangement
%     LL HL
%     LH HH
% where L=low, H=high
%
% Inputs:
% A - matrix form of an image
% n - level of decomposition
%
% Output:
% AA - matrix form of new image
%
nbcol = size(colormap(gray),1);
ca = A;
for i=1:n
    [ca,ch,cv,cd] = dwt2(ca,'haar');
end

% rescale to appropriate pseudocolor scaling
cod_ch = wcodemat(ch,nbcol);
cod_cv = wcodemat(cv,nbcol);

AA = cod_ch + cod_cv;

B.1.2. FDA.

function [U,vcat,vdog,threshold,w] = FDA(cat,dog,feature)
%
% Purpose:
% To classify using 2-class Fisher Discriminant Analysis (FDA)
%
% Inputs:
% cat - data matrix of cats
% dog - data matrix of dogs
% feature - number of components to be considered
%
% Outputs:
% U - eigenvectors to reduce dimension of probe
% vcat - projection of cats onto the real line
% vdog - projection of dogs onto the real line
% threshold - separation indicator between cats and dogs
% w - projection vector
%
nc = length(cat(1,:));
nd = length(dog(1,:));

[U,S,V] = svd([cat dog],0);
coeff = S*V';
U = U(:,1:feature);
cats = coeff(1:feature,1:nc);
dogs = coeff(1:feature,nc+1:nc+nd);
mc = mean(cats,2);
md = mean(dogs,2);

SW = 0; % within class variances
for i=1:nc
    SW = SW + (cats(:,i)-mc)*(cats(:,i)-mc)';
end
for i=1:nd
    SW = SW + (dogs(:,i)-md)*(dogs(:,i)-md)';
end
SB = (md-mc)*(md-mc)'; % between class variances

[V2,D] = eig(SB,SW);
[lambda,ind] = max(abs(diag(D)));
w = V2(:,ind);
w = w/norm(w,2);

vdog = w'*dogs;
vcat = w'*cats;

if mean(vdog)>mean(vcat)
    w = -w;
    vdog = -vdog;
    vcat = -vcat;
end

sortdog = sort(vdog);
sortcat = sort(vcat);

t = length(sortdog);
t2 = 1;
while sortdog(t)>sortcat(t2)
    t = t-1;
    t2 = t2+1;
end
threshold = (sortdog(t)+sortcat(t2))/2;

B.1.3. wavefda.

function [clsfy] = wavefda(cats,dogs,probes,feature)
%
% Purpose:
% To classify cats and dogs
%
% Inputs:
% cats - matrix of images of cats
% dogs - matrix of images of dogs
% probes - matrix of images of cats and dogs for testing
% feature - number of components to retain
%
% Outputs:
% clsfy - classification vector
%
% Functions called:
% wavelet
% FDA
%
nc = length(cats(1,:));
nd = length(dogs(1,:));
np = length(probes(1,:));

for i=1:nc
    cat(:,:,i) = reshape(cats(:,i),64,64); % cats are of size 64x64
end
for i=1:nd
    dog(:,:,i) = reshape(dogs(:,i),64,64); % dogs are of size 64x64
end
for i=1:np
    probe(:,:,i) = reshape(probes(:,i),64,64); % probes are of size 64x64
end

% decomposition using wavelet
n = 1; % level of decomposition
for i=1:nc
    data = wavelet(cat(:,:,i),n);
    cat_res(:,i) = reshape(data,size(data,1)*size(data,2),1);
end
for i=1:nd
    data = wavelet(dog(:,:,i),n);
    dog_res(:,i) = reshape(data,size(data,1)*size(data,2),1);
end
for i=1:np
    data = wavelet(probe(:,:,i),n);
    probe_res(:,i) = reshape(data,size(data,1)*size(data,2),1);
end

% 2-class classification
[U,vcat,vdog,threshold,w] = FDA(cat_res,dog_res,feature);
test = U'*probe_res;
pval = w'*test;
clsfy = zeros(1,np);
for i=1:np
    if pval(i)>threshold % dogs<threshold<cats
        clsfy(i) = 1;    % classify as cats
    end
end

B.1.4. testcode.

clear

% Read in data
cd TIFFtraining
fname = ls;
N = length(fname);
for i=3:N
    data = imread(fname(i,:));
    data = double(data);
    data = data(:,:,1);
    Dmat(:,i-2) = reshape(data,size(data,1)*size(data,2),1);
end
cd ..

% Training Set
cats = Dmat(:,1:8);
dogs = Dmat(:,9:16);

% Testing Set
load PatternRecAns
probes = TestSet;

% Number of features
feature = 2;

% Classify the probes as cats or dogs
clsfy = wavefda(cats,dogs,probes,feature);
counter = abs(clsfy - hiddenlabels);

B.2. MATLAB codes for the method of using the Laplacian Filter and Fisher's Linear Discriminant Analysis.

B.2.1. mask.

function [AA] = mask(A)
%
% Purpose:
% To extract the edges (outline) of the image
%
% Input:
% A - matrix of image
%
% Output:
% AA - matrix of image after filter and mask
%
% The weighted average filter
filter = (1/16)*[1 2 1; 2 4 2; 1 2 1];
% The Laplacian mask
mask = [-1 -1 -1; -1 8 -1; -1 -1 -1];

Af = conv2(A,filter,'same');
Afm = conv2(Af,mask,'same');

nbcol = size(colormap(gray),1);
AA = wcodemat(Afm,nbcol);

B.2.2. maskfda.

function [clsfy] = maskfda(cats,dogs,probes,energy)
%
% Purpose:
% To classify cats and dogs
%
% Inputs:
% cats - matrix of images of cats
% dogs - matrix of images of dogs
% probes - matrix of images of cats and dogs for testing
% energy - energy level for PCA
%
% Outputs:
% clsfy - classification vector
%
% Functions called:
% mask
%
nc = size(cats,2);
nd = size(dogs,2);
np = size(probes,2);

% Reshape the images back to 64*64
for i=1:nc
    cat(:,:,i) = reshape(cats(:,i),64,64);
end
for i=1:nd
    dog(:,:,i) = reshape(dogs(:,i),64,64);
end
for i=1:np
    probe(:,:,i) = reshape(probes(:,i),64,64);
end

% Apply the filter and the mask to extract the edges
for i=1:nc
    data = mask(cat(:,:,i));
    cats(:,i) = reshape(data,size(data,1)*size(data,2),1);
end
for i=1:nd
    data = mask(dog(:,:,i));
    dogs(:,i) = reshape(data,size(data,1)*size(data,2),1);
end
for i=1:np
    data = mask(probe(:,:,i));
    probes(:,i) = reshape(data,size(data,1)*size(data,2),1);
end

D = [cats dogs];
D = D - repmat(mean(D,2),[1,nc+nd]); % mean subtracted
[U,S,V] = svd(D,0);

% Find the best d value to use
Svec = diag(S);
totalenergy = dot(Svec,Svec);
currentenergy = 0;
for d = 1:length(Svec)
    currentenergy = currentenergy + ((Svec(d))^2)/totalenergy;
    if currentenergy >= energy
        break
    end
end
d % display the dimension retained

% PCA
Ud = U(:,1:d);
qc = Ud'*cats;
qd = Ud'*dogs;
qp = Ud'*probes;

% Compute the optimal projection direction, w
A = S(1:d,1:d)*V(:,1:d)';
cat = A(:,1:nc);
dog = A(:,nc+1:nc+nd);
mc = mean(cat,2);
md = mean(dog,2);

SB = (md-mc)*(md-mc)';
M1 = repmat(mc,[1,nc]);
M2 = repmat(md,[1,nd]);
SW1 = (A(:,1:nc) - M1)*(A(:,1:nc) - M1)';
SW2 = (A(:,nc+1:nc+nd) - M2)*(A(:,nc+1:nc+nd) - M2)';
SW = SW1 + SW2;

[V2,D2] = eig(SB,SW);
[l,ind] = max(abs(diag(D2)));
w = V2(:,ind);
w = w/norm(w,2);

% Project the data and the probe onto a line
Cat = w'*qc;
Dog = w'*qd;
Probe = w'*qp;

if mean(Dog) > mean(Cat)
    w = -w;
    Dog = -Dog;
    Cat = -Cat;
    Probe = -Probe;
end

sortdog = sort(Dog);
sortcat = sort(Cat);

t = length(sortdog);
t2 = 1;
while sortdog(t) > sortcat(t2)
    t = t - 1;
    t2 = t2 + 1;
end
threshold = (sortdog(t)+sortcat(t2))/2;

clsfy = zeros(1,np);
for i = 1:np
    if Probe(i) > threshold
        clsfy(i) = 1; % classify as cats
    end
end

B.2.3. testcode.

clear

% Read in data
cd TIFFtraining
fname = ls;
N = length(fname);
for i = 3:N
    data = imread(fname(i,:));
    data = double(data);
    data = data(:,:,1);
    Dmat(:,i-2) = reshape(data,size(data,1)*size(data,2),1);
end
cd ..

% Training Set
cats = Dmat(:,1:8);
dogs = Dmat(:,9:16);

% Testing Set
load PatternRecAns
probes = TestSet;

% Energy level for PCA
energy = 0.95;

% Classify the probes as cats or dogs
clsfy = maskfda(cats,dogs,probes,energy);
counter = abs(clsfy - hiddenlabels);