The von Kries Hypothesis and a Basis for Color Constancy
Appears in Proc. IEEE International Conference on Computer Vision (ICCV)

Hamilton Y. Chong, Harvard University, [email protected]
Steven J. Gortler, Harvard University, [email protected]
Todd Zickler, Harvard University, [email protected]

Abstract

Color constancy is almost exclusively modeled with diagonal transforms. However, the choice of basis under which diagonal transforms are taken is traditionally ad hoc. Attempts to remedy the situation have been hindered by the fact that no joint characterization of the conditions for {sensors, illuminants, reflectances} to support diagonal color constancy has previously been achieved. In this work, we observe that the von Kries compatibility conditions are impositions only on the sensor measurements, not the physical spectra. This allows us to formulate the von Kries compatibility conditions succinctly as rank constraints on an order 3 measurement tensor. Given this, we propose an algorithm that computes a (locally) optimal choice of color basis for diagonal color constancy and compare the results against other proposed choices.

1. Introduction

Color constancy refers to the visual system's treatment of color as an intrinsic material property. For a given scene, the human visual system, post adaptation, will settle on the same perceived color for an object despite changes in illumination. Such an ability to discern illumination-invariant material descriptors has clear evolutionary advantages and also largely simplifies (and hence is widely assumed in) a variety of computer vision algorithms. To achieve color constancy, one must discount the effect of changing illumination through transformations of an observer's trichromatic sensor response values. While many illumination-induced transformations are possible, it is commonly assumed that each of the three sensors reacts (uniformly across the visual field) with a form of independent gain control (i.e., each sensor response value is simply scaled by a multiplicative factor) [4, 10]. This is termed von Kries adaptation. Represented in linear algebra, it is equivalent to multiplying each column vector of sensor response values by a shared diagonal matrix, and it is therefore also referred to as the diagonal model for color constancy. Note that while the initial von Kries hypothesis applied only to direct multiplicative adjustments of retinal cone sensors, we follow [4] and use the term more loosely to allow for general trichromatic sensors. We also allow for a change of color basis to occur before the per-channel multiplicative adjustment. (Finlayson et al. [4] refer to this as a generalized diagonal model for color constancy, and they term the change of color basis a sharpening transform.) The (generalized) diagonal model is at the core of the majority of color constancy algorithms. Even a number of algorithms not obviously reliant on the diagonal assumption in fact rely on diagonal models following a change of color basis [5, 4]; their choice of color basis is simply not explicit. Yet, despite the widespread use of the diagonal model, good choices of color bases under which diagonal transforms can be taken are only partially understood.
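To make the generalized diagonal model concrete, the following sketch (an illustration added here, not code from the paper; the sharpening matrix B and gains g are made-up placeholders) applies a von Kries adaptation in a chosen color basis: colors are mapped into the basis, scaled per channel, and mapped back.

```python
import numpy as np

# Generalized diagonal (von Kries) adaptation: D = B @ diag(g) @ inv(B),
# where B is a "sharpening" change of color basis and g holds per-channel
# gains. B and g below are arbitrary placeholders for illustration.
B = np.array([[1.0, 0.2, 0.0],
              [0.1, 1.0, 0.1],
              [0.0, 0.3, 1.0]])   # hypothetical sharpening basis (columns = new basis)
g = np.array([1.4, 0.9, 0.6])     # hypothetical per-channel gains for an illuminant change

def von_kries_map(p, B, g):
    """Map a sensor response vector p from illuminant E1 to E2 via a
    diagonal transform taken in the color basis given by the columns of B."""
    return B @ (g * np.linalg.solve(B, p))

p = np.array([0.5, 0.7, 0.2])     # a color under illuminant E1
print(von_kries_map(p, B, g))     # its predicted color under illuminant E2
```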
The most theoretically justified approach to choosing a color space is predicated on the assumption that the spaces of illuminant and reflectance spectra are each low-dimensional. As shown by Finlayson et al. [5, 4], a two-dimensional linear space of illuminants and a three-dimensional linear space of reflectances (or vice versa) is sufficient¹ to guarantee diagonal color constancy. This is an important observation because it provides a principled method for choosing a von Kries compatible color space for a given set of sensors, illuminants, and reflectances. While the work of Finlayson et al. [5, 4] is a significant first step, both empirical and analytical observations reveal this characterization to be incomplete. Empirical studies suggest that diagonal color constancy is adequate under more general conditions than the 2-3 model implies [7]. (This is good news for vision systems that seek color constancy in the real world. While the space of natural illuminants may be sufficiently small, the space of reflectances is probably not [12].) From an analytical standpoint, the 2-3 model is known to be only sufficient¹, not necessary, for von Kries adaptation. Indeed, once a specific color basis is chosen, one can expand the set of compatible illuminants well beyond a two-dimensional space by adding those illuminants that integrate to zero against (i.e., are black with respect to) all reflectances in the compatible set [7].

¹ For sufficiency, we must allow complex color bases. See section 4.
More surprisingly, one can algorithmically grow the compatible illuminant set to include additional non-black illuminants as well [16, 7]. This analysis, however, is still incomplete: given color data, it does not reveal a method for computing the color space to begin with. While these limitations have been well documented, a more complete characterization of the conditions for von Kries compatibility has yet to be established. As a result, the development of more powerful systems for choosing optimized color bases has been slow. This paper addresses these issues by answering the following questions: 1. What are the necessary and sufficient conditions that sensors, illuminants, and materials must satisfy to be exactly von Kries compatible, and what is the structure of the solution space? 2. Given measured spectra or labeled color observations, how do we determine the color space that best supports diagonal color constancy? We observe that the joint conditions are impositions only on the sensor measurements, not the physical spectra. This allows the von Kries compatibility conditions to be succinctly formulated as rank constraints on an order 3 measurement tensor. Our analysis leads directly to an algorithm that, given labeled color data, computes a locally optimal choice of color basis in which to carry out diagonal color constancy computations. The proposed framework also unifies most existing analyses of von Kries compatibility.

2. Theory

We define two notions of color constancy. The first definition captures the idea that a single adjustment to the sensors will map all material colors seen under an illuminant $E_1$ to reference colors under (a possibly chosen standard) illuminant $E_2$. The second definition (also known as relational color constancy) captures the idea that surface colors have a fixed relationship between each other no matter what overall illumination lights the scene. As stated, these two definitions are not interchangeable: one being true does not imply the other. To define the issues formally, we need a bit of notation. Let $\mathcal{R}$ be the smallest linear subspace of $L^2$ functions enclosing the spectral space of materials of interest. Let $\mathcal{E}$ be the smallest linear subspace of $L^2$ functions enclosing the spectral space of illuminants of interest. Let $\vec{p}_{R,E}$ be the color (in the sensor basis) of material reflectance $R(\lambda) \in \mathcal{R}$ under illumination $E(\lambda) \in \mathcal{E}$. In the following, $D$ and $D'$ are operators that take color vectors and map them to color vectors. $D$ is required to be independent of the material $R$; likewise, $D'$ is required to be independent of the illuminant $E$. The $\diamond$ denotes the action of these operators on color vectors.

Figure 1. The 3xIxJ measurement tensor. The tensor can be sliced in three ways to produce the matrices $\Omega^{(j)}$, $\Lambda^{(i)}$, and $\Gamma^{(k)}$.

1. $\forall E_1, E_2 \in \mathcal{E}$, $\exists D(E_1,E_2)$ s.t. $\forall R \in \mathcal{R}$: $\vec{p}_{R,E_2} = D(E_1,E_2) \diamond \vec{p}_{R,E_1}$
2. $\forall R_1, R_2 \in \mathcal{R}$, $\exists D'(R_1,R_2)$ s.t. $\forall E \in \mathcal{E}$: $\vec{p}_{R_2,E} = D'(R_1,R_2) \diamond \vec{p}_{R_1,E}$

In the case that $D$ and $D'$ are linear (and hence identified with matrices), $\diamond$ is just matrix-vector multiplication. If $D$ is linear, we say that the world supports linear adaptive color constancy. If $D'$ is linear, we say the world supports linear relational color constancy. $D$ being linear does not imply $D'$ is linear, and vice versa. If both $D$ and $D'$ are linear, we say the world supports doubly linear color constancy. In particular, we shall be interested in the case when $D$ and $D'$ are both furthermore diagonal (under some choice of color basis).
It is proven in [4] that, for a fixed color space, $D$ is diagonal if and only if $D'$ is diagonal. So the two notions of color constancy are equivalent if either $D$ or $D'$ is diagonal, and we say the world supports diagonal color constancy (the "doubly" modifier is unnecessary). The equivalence is nice because we, as biological organisms, can likely learn to achieve definition 1, but seek to achieve definition 2 for inference. Given a set of illuminants $\{E_i\}_{i=1,\dots,I}$, reflectances $\{R_j\}_{j=1,\dots,J}$, and sensor color matching functions $\{\rho_k\}_{k=1,2,3}$, we define a measurement data tensor (see Figure 1):

$M_{kij} := \int \rho_k(\lambda)\, E_i(\lambda)\, R_j(\lambda)\, d\lambda \quad (1)$

For fixed values of $j$, we get 3xI matrices $\Omega^{(j)} := [M_{kij}]_{ki}$ that map illuminants expressed in the $\{E_i\}_{i=1,\dots,I}$ basis to color vectors expressed in the sensor basis. Likewise, for fixed values of $i$, we get 3xJ matrices $\Lambda^{(i)} := [M_{kij}]_{kj}$ that map surface reflectance spectra expressed in the $\{R_j\}_{j=1,\dots,J}$ basis to color vectors. We can also slice the tensor by constant $k$ to get IxJ matrices $\Gamma^{(k)} := [M_{kij}]_{ij}$. Since color perception can depend only on the eye's trichromatic color measurements, worlds (i.e., sets of illuminant and material spectra) giving rise to the same measurement tensor are perceptually equivalent. To understand diagonal color constancy, therefore, it is sufficient to analyze the space of measurement tensors and the constraints that these tensors must satisfy. This analysis of von Kries compatible measurement tensors is covered in section 2.1. Given a von Kries compatible measurement tensor (e.g., an output from the algorithm in section 3), one may also be interested in the constraints such a tensor places on the possible spectral worlds. This analysis is covered in section 4.
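As a concrete illustration of equation (1) and the three slicings (our own sketch; the spectra below are random stand-ins for measured data), the tensor can be built by discretizing the integral over a wavelength grid:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 101                       # wavelength samples (e.g., 400-700nm)
I, J = 8, 20                  # number of illuminants and reflectances
rho = rng.random((3, L))      # sensor color matching functions (stand-ins)
E   = rng.random((I, L))      # illuminant spectra (stand-ins)
R   = rng.random((J, L))      # reflectance spectra (stand-ins)
dlam = 3.0                    # wavelength step

# M[k,i,j] = integral of rho_k * E_i * R_j, approximated by a Riemann sum.
M = np.einsum('kl,il,jl->kij', rho, E, R) * dlam

Omega_j  = lambda j: M[:, :, j]   # 3xI: illuminant coords -> sensor color (material j fixed)
Lambda_i = lambda i: M[:, i, :]   # 3xJ: reflectance coords -> sensor color (illuminant i fixed)
Gamma_k  = lambda k: M[k, :, :]   # IxJ: slice for sensor channel k
```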
Figure 2. Core tensor form: a 3x3x3 core tensor is padded with zeros. The core tensor is not unique.

2.1. Measurement Constraints

The discussion in this section will always assume generic configurations (e.g., color measurements span three dimensions, color bases are invertible). Proofs not essential to the main exposition are relegated to Appendix A.

Proposition 1. A measurement tensor supports doubly linear color constancy iff there exists a change of basis for illuminants and materials that reduces it to the core tensor form of Figure 2.

More specifically (as is apparent from the proof of Proposition 1 in Appendix A.1), if a single change of illuminant basis makes all the $\Omega^{(j)}$ slices null past the third column, the measurement tensor supports linear relational color constancy. Likewise, a change of material basis making all the $\Lambda^{(i)}$ slices null past the third column implies the measurement tensor supports linear adaptive color constancy. Support for one form of linear constancy does not imply support for the other. The following lemma provides a stepping stone to our main theoretical result and is related to some existing von Kries compatibility results (see section 4).

Lemma 1. A measurement tensor supports generalized diagonal color constancy iff there exists a change of color basis such that, for all $k$, $\Gamma^{(k)}$ is a rank-1 matrix.

This leads to our main theorem characterizing the space of measurement tensors supporting generalized diagonal color constancy.

Theorem 1. A measurement tensor supports generalized diagonal color constancy iff it is a rank 3 tensor.²

An order 3 tensor (3D data block) $T$ is rank $N$ if $N$ is the smallest integer such that there exist vectors $\{\vec{a}_n, \vec{b}_n, \vec{c}_n\}_{n=1,\dots,N}$ allowing decomposition as the sum of outer products (denoted by $\otimes$):

$T = \sum_{n=1}^{N} \vec{c}_n \otimes \vec{a}_n \otimes \vec{b}_n \quad (2)$

² There exist measurement tensors supporting generalized diagonal color constancy with rank less than 3, but such examples are not generic.

Without loss of generality, let $\{\vec{a}_n\}$ be vectors of length $I$, corresponding to the illuminant axis of the measurement tensor; let $\{\vec{b}_n\}$ be vectors of length $J$, corresponding to the material axis of the tensor; and let $\{\vec{c}_n\}$ be vectors of length 3, corresponding to the color sensor axis of the tensor. Let the vectors $\{\vec{a}_n\}$ make up the columns of the matrix $A$, the vectors $\{\vec{b}_n\}$ the columns of the matrix $B$, and the vectors $\{\vec{c}_n\}$ the columns of the matrix $C$. Then the decomposition above may be restated as a decomposition into the matrices $(A, B, C)$, each with $N$ columns.

Proof. (Theorem 1) First suppose the measurement tensor supports generalized diagonal color constancy. Then by Lemma 1, there exists a color basis under which each $\Gamma^{(k)}$ is rank-1 (as a matrix). This means each $\Gamma^{(k)}$ can be written as an outer product, $\Gamma^{(k)} = \vec{a}_k \otimes \vec{b}_k$. In this color basis then, the measurement tensor is a rank 3 tensor in which the matrix $C$ (following the notation above) is just the identity. We also point out that an invertible change of basis (on any of $A$, $B$, $C$) does not affect the rank of a tensor, so the original tensor (before the color basis change) was also rank 3. For the converse case, we now suppose the measurement tensor is rank 3. Since $C$ is (in the generic setting) invertible, multi-linearity gives us:

$C^{-1} \diamond \left( \sum_{n=1}^{3} \vec{c}_n \otimes \vec{a}_n \otimes \vec{b}_n \right) = \sum_{n=1}^{3} \left( C^{-1}\vec{c}_n \right) \otimes \vec{a}_n \otimes \vec{b}_n \quad (3)$

The right hand side of Equation (3) is a rank 3 tensor with each $\Gamma^{(k)}$ slice a rank-1 matrix. By Lemma 1, the tensor must then support diagonal color constancy.
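The following numeric sketch (ours, with random factor matrices standing in for real data) illustrates the mechanics of Theorem 1: a rank 3 tensor built from factors $(A, B, C)$, once mapped through $C^{-1}$ on the color axis, has rank-1 $\Gamma^{(k)}$ slices.

```python
import numpy as np

rng = np.random.default_rng(1)
I, J = 8, 20
A = rng.random((I, 3))            # illuminant-axis factors
B = rng.random((J, 3))            # material-axis factors
C = rng.random((3, 3))            # color-axis factors (columns = color basis)

# Rank 3 tensor: M[k,i,j] = sum_n C[k,n] A[i,n] B[j,n]   (equation (2))
M = np.einsum('kn,in,jn->kij', C, A, B)

# Change color basis by C^{-1} (equation (3)); each Gamma slice becomes rank-1.
M_sharp = np.einsum('pk,kij->pij', np.linalg.inv(C), M)
for k in range(3):
    s = np.linalg.svd(M_sharp[k], compute_uv=False)
    print(k, s[1] / s[0])         # ~0: second singular value vanishes => rank-1
```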
In the proof above, note that the columns of $C$ exactly represent the desired color basis under which we get perfect diagonal color constancy. This theorem is of algorithmic importance because it ties the von Kries compatibility criteria to quantities (best rank 3 tensor approximations) that are computable via existing multilinear methods.

3. Color Basis for Color Constancy

Given a measurement tensor $M$ generated from real-world data, we would like to find the optimal basis in which to perform diagonal color constancy computations. To do this, we first find the closest von Kries compatible measurement tensor (with respect to the Frobenius norm). We then return the color basis that yields perfect color constancy under this approximate tensor. By Theorem 1, finding the closest von Kries compatible measurement tensor is equivalent to finding the best rank 3 approximation. Any rank 3 tensor may be written in the form of equation (2) with $N = 3$. We solve for $M$'s best rank 3 approximation (decomposition into $A$, $B$, $C$) via Trilinear Alternating Least Squares (TALS) [8]. For a rank 3 tensor, TALS forces $A$, $B$, and $C$ to each have 3 columns. It then iteratively fixes two of the matrices and solves for the third in a least squares sense. Repeating these computations in lockstep guarantees convergence to a local minimum. $A$, $B$, $C$ can be used to reconstruct the closest von Kries compatible tensor, and the columns of $C$ exactly represent the desired color basis. As a side note, the output of this procedure differs from the best rank-(3,3,3) approximation given by HOSVD [11]. HOSVD only gives orthogonal bases as output, and the rank-(3,3,3) truncation does not in general yield a closest rank 3 tensor. HOSVD may, however, provide a good initial guess. The following details on TALS mimic the discussion in [14]. For further information, see [14, 8] and the references therein. The Khatri-Rao product of two matrices $A$ and $B$ with $N$ columns each is given by

$A \odot B := [\vec{a}_1 \otimes \vec{b}_1,\ \vec{a}_2 \otimes \vec{b}_2,\ \dots,\ \vec{a}_N \otimes \vec{b}_N] \quad (4)$

where $\otimes$ is the Kronecker product. Denote the flattening of the measurement tensor $M$ by $M_{IJ \times 3}$ if the elements of $M$ are unrolled such that the rows of matrix $M_{IJ \times 3}$ loop over the $(i,j)$-indices with $i = 1,\dots,I$ as the outer loop and $j = 1,\dots,J$ as the inner loop. The column index of $M_{IJ \times 3}$ corresponds with the dimension of the measurement tensor that is not unrolled (in this case $k = 1,2,3$). The notation for other flattenings is defined symmetrically. We can then write:

$M_{JI \times 3} = (B \odot A)\, C^T \quad (5)$

By symmetry of equation (5), we can write out the least squares solutions for each of the matrices (with the other two fixed):

$A = [(B \odot C)^{\dagger} M_{J3 \times I}]^T \quad (6)$
$B = [(C \odot A)^{\dagger} M_{3I \times J}]^T \quad (7)$
$C = [(B \odot A)^{\dagger} M_{JI \times 3}]^T \quad (8)$
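A compact sketch of this alternating scheme (our own, using numpy's least-squares solver in place of explicit pseudo-inverses, and a random initialization rather than an HOSVD one):

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Khatri-Rao product: column n is kron(X[:, n], Y[:, n])."""
    return np.stack([np.kron(X[:, n], Y[:, n]) for n in range(X.shape[1])], axis=1)

def tals_rank3(M, iters=200, seed=0):
    """Best rank 3 approximation of a 3xIxJ measurement tensor M via
    trilinear alternating least squares. Returns (A, B, C); the columns
    of C give the color basis for diagonal color constancy."""
    K, I, J = M.shape
    rng = np.random.default_rng(seed)
    A, B, C = rng.random((I, 3)), rng.random((J, 3)), rng.random((K, 3))
    # Flattenings per the text: rows loop over (outer, inner) unrolled axes.
    M_JI3 = M.transpose(2, 1, 0).reshape(J * I, K)   # rows over (j,i), cols k
    M_J3I = M.transpose(2, 0, 1).reshape(J * K, I)   # rows over (j,k), cols i
    M_3IJ = M.reshape(K * I, J)                      # rows over (k,i), cols j
    for _ in range(iters):
        A = np.linalg.lstsq(khatri_rao(B, C), M_J3I, rcond=None)[0].T  # eq. (6)
        B = np.linalg.lstsq(khatri_rao(C, A), M_3IJ, rcond=None)[0].T  # eq. (7)
        C = np.linalg.lstsq(khatri_rao(B, A), M_JI3, rcond=None)[0].T  # eq. (8)
    return A, B, C
```

The reconstructed approximation is then $\hat{M}_{kij} = \sum_n C_{kn} A_{in} B_{jn}$, and a change of basis by $C^{-1}$ yields rank-1 $\Gamma^{(k)}$ slices as in the Theorem 1 sketch above.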
4. Relationship to Previous Characterizations

As mentioned in the introduction, there are two main sets of theoretical results. There are the works of [7, 16] that give necessary and sufficient conditions for von Kries compatibility under a predetermined choice of color space, and are able to build infinite dimensional von Kries compatible worlds for this choice. Then there are the works of [5, 4] that prescribe a method for choosing the color space, but only for worlds with low dimensional linear spaces of illuminants and materials. We omit direct comparison to the various spectral sharpening techniques [6, 3, 1] in this section, as these methods propose more intuitive guidelines rather than formal relationships.

Figure 3. The rows of a single $\Lambda^{(1)}$ slice are placed into a new measurement tensor (rows are laid horizontally above) with all other entries set to zero. The * marks the nonzero entries.

Previous analyses treat the von Kries compatibility conditions as constraints on spectra, whereas the analysis here treats them as constraints on color measurements. In this section, we translate between the two perspectives. To go from spectra to measurement tensors is straightforward. To go the other way is a bit more tricky. In particular, given a measurement tensor with rank-1 $\Gamma^{(k)}$, there is not a unique world generating this data. Any set of illuminants $\{E_i\}_{i=1,\dots,I}$ and reflectances $\{R_j\}_{j=1,\dots,J}$ satisfying Equation (1) (with $M$ and $\rho_k$ fixed) will be consistent with the data. Many constructions of worlds are thus possible. But if one first selects particular illuminant or material spectra as mandatory inclusions in the world, then one can state more specific conditions on the remaining spectral choices. In [5, 4], it is shown that if the illuminant space is 3 dimensional and the material space is 2 dimensional (or vice versa), then the resulting world is (generalized) von Kries compatible.
As a measurement tensor, this translates into stating that any 3x3x2 measurement tensor is (complex) rank 3. However, this 3-2 condition is clearly not necessary, as almost every rank 3 tensor is not reducible via change of bases to size 3x3x2. In fact, one can always extend a 3x3x2 tensor to a 3x3x3 core tensor such that the $\Gamma^{(k)}$ are still rank-1. The illuminant added by this extension is neither black with respect to the materials, nor in the linear span of the first two illuminants. The necessary and sufficient conditions provided in [16] can be seen as special cases of Lemma 1. The focus on spectra leads to a case-by-case analysis with arbitrary spectral preferences. However, the essential property these conditions point to is that the 2x2 minors of $\Gamma^{(k)}$ must be zero (i.e., $\Gamma^{(k)}$ must be rank-1). One case from [16] is explained in detail in [7]. They fix a color space, a space of material spectra, and a single reference illumination spectrum. They can then solve for the unique space of illumination spectra that includes the reference illuminant and is von Kries compatible (in the fixed color basis) with the given material space. In our framework, this can be interpreted as follows. The given input gives rise to a single $\Lambda^{(1)}$ measurement slice. The three rows of this slice can be pulled out and placed
in a new measurement tensor of the form shown in Figure 3. This measurement tensor is then padded with an infinite number of zero $\Lambda^{(i)}$ matrices. The $\Gamma^{(k)}$ slices of this new tensor are clearly rank-1 matrices, and thus this tensor is von Kries compatible in the given color space. Moreover, any measurement tensor with rank-1 $\Gamma^{(k)}$ that includes the original $\Lambda^{(1)}$ slice in its span must have $\Lambda^{(i)}$ slices that are spanned by the $\Lambda^{(i)}$ slices in Figure 3. With this fixed tensor and the fixed material spectra, one can then solve Equation (1) to obtain the space of compatible illumination spectra. This space can be described by three non-black illuminants and an infinite number of black illuminants (giving zero measurements for the input material space). Since the original $\Lambda^{(1)}$ measurement slice is in the span of the $\Lambda^{(i)}$ slices, the original reference illuminant must be in the solution space.

5. Results

We run our color basis algorithm on the SFU dataset [2] and compare our resulting color basis against previous choices (the cone sensor basis, 4 bases derived from different low dimensional approximations of spectral data, that of Barnard et al. [1], and the sensor sharpened basis [6]). This comparison is done in an experiment described below. The SFU database provides 8 illuminants simulating daylight, and 1,995 materials including measured spectra of natural objects. Fluorescent spectra were removed from the dataset in hopes of better modeling natural lighting conditions since, in color matching experiments, fluorescent lamps cause unacceptable mismatches of colored materials that are supposed to match under daylight [17]. The low dimensional worlds to which we compare are taken to have either 3 dimensional illuminant spaces and 2 dimensional material spaces (a 3-2 world) or vice versa (a 2-3 world); this allows computing color bases via the procedure in [4]. We take two different approaches to approximating spectra with low dimensional vector spaces. In the first approach (described in [4, 6]), we run SVD on the illuminant and material spectra separately. We then save the best rank-3 and rank-2 approximations. This is Finlayson's "perfect sharpening" method for databases with multiple lights [6]. As pointed out in [13], if error is to be measured in sensor space, there are alternatives to running PCA on spectra. Given a measurement tensor, the alternative ("tensor-based") approach instead applies SVD on the tensor flattenings $M_{J3 \times I}$ and $M_{3I \times J}$ to get the principal combination coefficients of the spectral bases (to be solved for) that approximate the sample spectra. Refer to [13] for details.
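For concreteness, a sketch of the first (PCA-on-spectra) approach under stand-in data (ours; real experiments would use the SFU spectra):

```python
import numpy as np

rng = np.random.default_rng(2)
E = rng.random((8, 101))      # illuminant spectra, one per row (stand-ins)
R = rng.random((1995, 101))   # reflectance spectra, one per row (stand-ins)

def best_rank_basis(S, r):
    """Top-r right singular vectors: an r-dimensional linear space that
    best approximates the rows of S in the least-squares sense."""
    _, _, Vt = np.linalg.svd(S, full_matrices=False)
    return Vt[:r]                       # r x wavelengths

E_basis3 = best_rank_basis(E, 3)        # a 3-2 world: 3D illuminant space...
R_basis2 = best_rank_basis(R, 2)        # ...paired with a 2D material space
```

The 2-3 world simply swaps the two dimensions.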
For the experiments, color matching functions are taken to be the CIE 2-deg XYZ functions with Judd 1951 and Vos 1978 modifications [17]. To resolve mismatches in spectral sampling, we interpolate the data using linear reconstruction. Cone fundamentals are taken to be the Vos and Walraven (1971) fundamentals [17]. Experiments are run with illuminant spectra normalized with respect to the $L^2$ norm. We run the same white-patch normalization experiment as in [4]. As input, we are given a chosen white material $W$ and an illuminant $E$. For every other material $R$, we compute a descriptor by dividing each of its 3 observed color coordinates by the 3 color coordinates of $W$ (the resulting 3 ratios are then transformed as a color vector to XYZ coordinates so that consistent comparisons can be made with different choices of color space). In a von Kries world, the descriptor for $R$ would not depend on the illuminant $E$. To measure the non-von-Kries-ness of a world, we can look at how much these descriptors vary with the choice of $E$. More formally, we define the descriptor as:

$\vec{d}^{\,W,R}_{E} = C\, [\mathrm{diag}(C^{-1} \vec{p}_{W,E})]^{-1} C^{-1} \vec{p}_{R,E} \quad (9)$

The function diag creates a matrix whose diagonal elements are the given vector's components. $C$ is a color basis. $C$, $\vec{p}_{R,E}$, and $\vec{p}_{W,E}$ are given in the CIE XYZ coordinate system. To compute a non-von-Kries-ness error, we fix a canonical illuminant $\bar{E}$ and compute descriptors $\vec{d}^{\,W,R}_{\bar{E}}$ for every test material $R$. We then choose some different illuminant $E$ and again compute descriptors $\vec{d}^{\,W,R}_{E}$ for every test material $R$. Errors for every choice of $E$ and $R$ are computed as:

$\mathrm{Error} = 100 \cdot \frac{\| \vec{d}^{\,W,R}_{E} - \vec{d}^{\,W,R}_{\bar{E}} \|}{\| \vec{d}^{\,W,R}_{\bar{E}} \|} \quad (10)$
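A sketch of equations (9) and (10) (ours; the basis $C$ and the color vectors are placeholders to be supplied from data):

```python
import numpy as np

def descriptor(C, p_W, p_R):
    """Equation (9): white-patch-normalized descriptor in the basis C,
    reported in XYZ coordinates. All inputs are in CIE XYZ."""
    Cinv = np.linalg.inv(C)
    # [diag(Cinv @ p_W)]^{-1} (Cinv @ p_R) is an elementwise division.
    return C @ ((Cinv @ p_R) / (Cinv @ p_W))

def von_kries_error(C, p_W_E, p_R_E, p_W_Ebar, p_R_Ebar):
    """Equation (10): percent deviation of a material's descriptor under a
    test illuminant E from its descriptor under the canonical illuminant."""
    d_E    = descriptor(C, p_W_E, p_R_E)
    d_Ebar = descriptor(C, p_W_Ebar, p_R_Ebar)
    return 100.0 * np.linalg.norm(d_E - d_Ebar) / np.linalg.norm(d_Ebar)
```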
For each instance of the experiment, we choose one SFU test illuminant $E$ and compute the errors over all test materials (the canonical illuminant $\bar{E}$ is kept the same for each experimental instance). Each time, the color basis derived from running our method (labeled "Optimized") performed the best. Figure 4 shows the cumulative histograms for instances in which the stated basis performs the best and worst relative to the next best basis. Curves represent the percentage of material color vectors satisfying the von Kries hypothesis versus the allowable error. Relative performance between two bases is measured as a ratio of the areas under their respective histogram curves. The entire process is then repeated for another canonical $\bar{E}$ to give a total of 4 graphs. The basis labeled "Barnard" requires a choice of canonical illuminant for its specification. Figure 4 shows two sets of graphs: one in which the canonical illuminant for testing matches the Barnard canonical, and one in which the canonical illuminants differ. The second canonical illuminant is chosen to illustrate the best worst-case relative performance of our algorithm. Barnard optimizes with respect to a particular canonical illuminant, while our method effectively optimizes over all light pairs. We also tested against Finlayson's database sharpening method [6] (using PCA on lights to handle multiple lights); the results were nearly identical to both of the 2-3 methods.

Figure 4. Percent vectors satisfying the von Kries mapping versus percent allowable error. Each curve represents a different choice of color space. For low dimensional worlds, the dimension of the illuminant space precedes the dimension of the material space in the abbreviated notation. Low dimensional approximations were obtained either by running PCA on spectra or by the tensor methods described in the text. We show the experimental instances in which our derived basis performs the best and worst relative to the next best basis. The left and right halves differ in the choice of canonical illuminant for testing. Unlike Barnard, our method effectively optimizes over all pairs of lights.

The SFU optimized color basis, given in normalized XYZ coordinates, is:

$\vec{c}_1 = \_\,X + \_\,Y + \_\,Z \quad (11)$
$\vec{c}_2 = \_\,X + \_\,Y + \_\,Z \quad (12)$
$\vec{c}_3 = \_\,X + \_\,Y + \_\,Z \quad (13)$

For baseline comparison, the cone basis (Vos and Walraven) [17], given in normalized XYZ coordinates, is:

$\vec{c}_1 = \_\,X + \_\,Y + \_\,Z \quad (14)$
$\vec{c}_2 = \_\,X + \_\,Y + \_\,Z \quad (15)$
$\vec{c}_3 = \_\,Z \quad (16)$

6. Discussion

We have argued for a new data-driven choice of color basis for diagonal color constancy computations. We show that with respect to some existing metrics, the new choice leads to a better diagonal model. While a linear change of color basis poses no problem to those concerned simply with algorithmic modeling, those who seek relevance to human biological mechanisms might object (on theoretical grounds) that sensor measurement acquisition may involve nonlinearities that disrupt the brain's ability to linearly transform the color basis downstream. Fortunately, experimental results based on single-cell responses and psychophysical sensitivity suggest that any existing nonlinearities at this level are negligible [15, 12]. Future work includes investigating positivity constraints on the transformed sensors [3] and running an updated set of color constancy evaluations as in [9]. Human perceptual experiments also need to be run to test the perceptual efficacy of the proposed bases.

Acknowledgements: T. Zickler was supported by NSF CAREER award IIS.

References

[1] K. Barnard, F. Ciurea, and B. Funt. Sensor sharpening for computational color constancy. JOSA A, 18(9), 2001.
[2] K. Barnard, L. Martin, B. Funt, and A. Coath. A data set for colour research. Color Research and Application, 27(3), 2002.
[3] M. S. Drew and G. D. Finlayson. Spectral sharpening with positivity. JOSA A, 17(8), 2000.
[4] G. Finlayson, M. Drew, and B. Funt. Diagonal transforms suffice for color constancy. In ICCV93.
[5] G. Finlayson, M. Drew, and B. Funt. Enhancing von Kries adaptation via sensor transformations. In SPIE, 1993.
[6] G. Finlayson, M. Drew, and B. Funt. Spectral sharpening: sensor transformations for improved color constancy. JOSA A, 11(5), 1994.
[7] G. D. Finlayson, S. D. Hordley, and A. Alsam. Investigating von Kries-like adaptation using local linear models. Color Research and Application, 31(2):90-101, 2006.
[8] R. A. Harshman. Foundations of the PARAFAC procedure: Models and conditions for an explanatory multimodal factor analysis. UCLA Working Papers in Phonetics, 16:1-84, 1970.
[9] S. D. Hordley and G. D. Finlayson. Reevaluation of color constancy algorithm performance. JOSA A, 23(5), 2006.
[10] R. Kimmel, M. Elad, D. Shaked, R. Keshet, and K. Sobel. A variational framework for retinex. IJCV, 52(1):7-23, 2003.
[11] L. De Lathauwer, B. De Moor, and J. Vandewalle. A multilinear singular value decomposition. SIAM J. Matrix Analysis, 21(4), 2000.
[12] L. Maloney.
Color Vision: From Genes to Perception, chapter "Physics-based approaches to modeling surface color perception". Cambridge University Press, 1999.
[13] D. Marimont and B. Wandell. Linear models of surface and illuminant spectra. JOSA A, 9, 1992.
[14] N. Sidiropoulos. Low-rank decomposition of multi-way arrays: A signal processing perspective. In IEEE SAM, July 2004.
[15] M. A. Webster. Human colour perception and its adaptation. Network: Computation in Neural Systems, 7, 1996.
[16] G. West and M. H. Brill. Necessary and sufficient conditions for von Kries chromatic adaptation to give color constancy. Journal of Mathematical Biology, 15(2), 1982.
[17] G. Wyszecki and W. Stiles. Color Science: Concepts and Methods, Quantitative Data and Formulae. Wiley, 1982.

A. Proofs

For simplicity, we only consider generic conditions.

A.1. Proof of Proposition 1

We first show that if a measurement tensor supports doubly linear color constancy, then it can be transformed into the core tensor form of Figure 2 via changes in illuminant and material bases.

Lemma 2. If $\forall j, j'$, $\exists D'(R_j, R_{j'})$ satisfying the definition of linear relational color constancy, then there exists a single change of basis for the space of illuminants, $\mathcal{E}$, such that $\forall j \in \{1,\dots,J\}$ the 3xI matrix $\Omega^{(j)}$ is zero past the third column.

Proof. (Lemma 2) By definition of linear relational color constancy, $D'(R_j, R_{j'})$ is linear, and $\forall E \in \mathcal{E}$, its corresponding coordinate vector (in the basis of illuminant spectra used to create the measurement tensor) $\vec{\varepsilon}$ satisfies $\Omega^{(j')} \vec{\varepsilon} = D'(R_j, R_{j'})\, \Omega^{(j)} \vec{\varepsilon}$. Let $N_j \subset \mathcal{E}$ be the subset of $\mathcal{E}$ whose coordinate vectors (also in the basis of illuminant spectra used to create the measurement tensor), $\{\vec{n}\}$, constitute the null space of $\Omega^{(j)}$. Since $\Omega^{(j)}$ is rank-3, $N_j$ is $(I-3)$ dimensional. Given the above, we have that $\forall \vec{n}$:

$\Omega^{(j')}(\vec{\varepsilon} + \vec{n}) = D'(R_j, R_{j'})\, \Omega^{(j)}(\vec{\varepsilon} + \vec{n}) \quad (17)$
$\Omega^{(j')}\vec{\varepsilon} + \Omega^{(j')}\vec{n} = D'(R_j, R_{j'})\, \Omega^{(j)}\vec{\varepsilon} + 0 \quad (18)$
$\Omega^{(j')}\vec{n} = 0 \quad (19)$

So $N_j \subseteq N_{j'}$. By symmetry under interchange of $j$ and $j'$, $N_{j'} \subseteq N_j$, so $N_j = N_{j'}$ $\forall j, j'$. Fix a $j$, and choose a basis with the first three vectors spanning $N_j^{\perp}$ and orthogonal to $N_j$, and the rest of the basis vectors spanning $N_j$ and orthogonal to $N_j^{\perp}$. Under this choice, every $\Omega^{(j)}$ will have zeros past the third column, since the decompositions $\mathrm{Ker}(\Omega^{(j)}) \oplus \mathrm{Ker}(\Omega^{(j)})^{\perp}$ are the same for all $j$.

By symmetry of Lemma 2, we have a similar statement for $D(E_i, E_{i'})$ and a change of basis for $\mathcal{R}$. Hence we can get the core tensor form via change of illuminant and material bases.

Figure 5. Following a change of illuminant basis, the 3xI measurement tensor slice $\Omega^{(j)}$ is partitioned into a nonzero 3x3 sub-block $\tilde{\Omega}^{(j)}$ and a zero 3x(I-3) sub-block as shown.

We now prove the converse direction. The ability to achieve the core tensor form of Figure 2 implies there is a single change of illuminant basis transforming the measurement tensor's $\Omega^{(j)}$ slices into the block form in Figure 5. Denote the nonzero 3x3 sub-block of $\Omega^{(j)}$ as $\tilde{\Omega}^{(j)}$. Now define for every pair $j, j'$ the linear operator $\tilde{\Omega}^{(j')} \tilde{\Omega}^{(j)-1}$. Observe that this operator maps the columns of matrix $\Omega^{(j)}$ (color vectors of material $R_j$) to the corresponding columns of $\Omega^{(j')}$ (color vectors of material $R_{j'}$). The nonzero columns are mapped appropriately by construction, and the zero columns are mapped to zero by linearity. This means that for all basis illuminant spectra of $\mathcal{E}$, the operator $\tilde{\Omega}^{(j')} \tilde{\Omega}^{(j)-1}$ correctly maps the associated material $R_j$ colors to the appropriate material $R_{j'}$ colors. Since color measurements are linear in the illuminant, and the colors associated with each basis illuminant spectrum are mapped correctly, we can conclude that $\tilde{\Omega}^{(j')} \tilde{\Omega}^{(j)-1}$ acts as $D'(R_j, R_{j'})$ does on all color vectors.
This implies:

$D'(R_j, R_{j'}) = \tilde{\Omega}^{(j')} \tilde{\Omega}^{(j)-1} \quad (20)$

Hence, $D'$ is linear. Since $D'$ does not depend on illuminants, it is invariant to any illuminant basis change, and so the initial change of illuminant basis did not alter whether the measurement tensor supports linear relational color constancy. Likewise, a change of material basis making all the $\Lambda^{(i)}$ slices null past the third column implies $D$ is linear, and the tensor supports doubly linear color constancy.

A.2. Proof of Lemma 1

Our argument will proceed in two steps. We first prove a version of the lemma (referred to as Lemma 3) in which "measurement tensor" is replaced by "core tensor of the form in Figure 2". We then prove Lemma 1 using this core tensor version (Lemma 3).

Lemma 3. A core tensor of the form in Figure 2 supports generalized diagonal color constancy iff there exists a change of color basis such that core tensor slice $\Gamma^{(k)}$ is a rank-1 matrix $\forall k$.

Proof. (Lemma 3) We first prove that if the core tensor has rank-1 $\Gamma^{(k)}$ slices under some change of color basis, then $D'(R_j, R_{j'})$ is diagonal $\forall j, j'$ (under that same color basis). This implies the core tensor supports generalized diagonal color constancy. By equation (20), $D'(R_j, R_{j'}) =$
$\tilde{\Omega}^{(j')} \tilde{\Omega}^{(j)-1}$. Denote the rows of slice $\tilde{\Omega}^{(j)}$ as $\vec{u}_1^T, \vec{u}_2^T, \vec{u}_3^T$. Since the $\Gamma^{(k)}$ slices are rank-1 matrices, the rows of another slice $\tilde{\Omega}^{(j')}$ are $\gamma_1 \vec{u}_1^T, \gamma_2 \vec{u}_2^T, \gamma_3 \vec{u}_3^T$ for some scales $\gamma_1, \gamma_2, \gamma_3$. Let $\vec{u}_k^*$ denote the column vector that is dual to the row $\vec{u}_k^T$ (i.e., $\vec{u}_k^T \vec{u}_{k'}^* = \delta_{kk'}$). We then have:

$D'(R_j, R_{j'}) = \tilde{\Omega}^{(j')} \tilde{\Omega}^{(j)-1} \quad (21)$

$= \begin{bmatrix} \gamma_1 \vec{u}_1^T \\ \gamma_2 \vec{u}_2^T \\ \gamma_3 \vec{u}_3^T \end{bmatrix} \begin{bmatrix} \vec{u}_1^* & \vec{u}_2^* & \vec{u}_3^* \end{bmatrix} \quad (22)$

$= \begin{bmatrix} \gamma_1 & 0 & 0 \\ 0 & \gamma_2 & 0 \\ 0 & 0 & \gamma_3 \end{bmatrix} \quad (23)$

Thus $D'(R_j, R_{j'})$ is diagonal. We now prove that support for generalized diagonal color constancy implies the core tensor has rank-1 $\Gamma^{(k)}$ slices under an appropriate change of color basis. If the core tensor supports generalized diagonal color constancy, then there exists a change of color basis such that the linear map $D'(R_j, R_{j'}) := \tilde{\Omega}^{(j')} \tilde{\Omega}^{(j)-1}$ is in fact diagonal $\forall j, j'$. It then suffices to show that $D'(R_j, R_{j'})$ being diagonal for all $j, j'$ implies $\Gamma^{(k)}$ is a rank-1 matrix $\forall k$. Denote the rows of slice $\tilde{\Omega}^{(j)}$ as $\vec{u}_1^T, \vec{u}_2^T, \vec{u}_3^T$. As before, let $\vec{u}_1^*, \vec{u}_2^*, \vec{u}_3^*$ be their duals. Denote the rows of slice $\tilde{\Omega}^{(j')}$ as $\vec{v}_1^T, \vec{v}_2^T, \vec{v}_3^T$. Let $\gamma_1, \gamma_2, \gamma_3$ be the diagonal elements of $D'(R_j, R_{j'})$. From equation (20), $D'(R_j, R_{j'})$ being diagonal implies:

$\vec{v}_k^T \vec{u}_{k'}^* = \gamma_k \delta_{kk'} \quad (24)$

The uniqueness of the dual relationship implies the row vectors $\vec{u}_k^T$ and $\vec{v}_k^T$ must be the same up to scale. So the $\Gamma^{(k)}$ slices are rank-1 matrices.

Proof. (Lemma 1) We will make use of the following 3 observations, which are not hard to show: (1) changing material and illuminant bases does not affect whether the $\Gamma^{(k)}$ slices are rank-1 matrices; (2) changing illuminant, material, and color bases does not affect whether a tensor supports generalized diagonal color constancy; (3) the order in which one changes illuminant, material, and color bases does not matter; the resulting tensor is the same.

Forward direction: support for generalized diagonal color constancy implies rank-1 $\Gamma^{(k)}$ in some color basis. A measurement tensor supporting generalized diagonal color constancy must also support doubly linear color constancy (which is a looser restriction). The tensor can then be reduced to the core tensor form of Figure 2 via changes in illuminant and material bases (and the core tensor must also support generalized diagonal color constancy). By the forward direction of Lemma 3, the core tensor has rank-1 $\Gamma^{(k)}$ slices under some color transform. This implies the original measurement tensor has rank-1 $\Gamma^{(k)}$ slices under the same color transform.

Backward direction: rank-1 $\Gamma^{(k)}$ in some color basis implies support for generalized diagonal color constancy. First transform to the color basis in which the $\Gamma^{(k)}$ slices are rank-1 matrices. We now show that a measurement tensor with rank-1 $\Gamma^{(k)}$ $\forall k$ can be transformed into the core tensor form of Figure 2 via changes in illuminant and material bases; the resulting core tensor also has rank-1 $\Gamma^{(k)}$ slices by observation (1). Without loss of generality we consider the $\Omega^{(j)}$ slices; the proof for the $\Lambda^{(i)}$ slices is symmetric. The following computation shows that any column $i$ of the 3xI matrix $\Omega^{(j)}$ can be (in the generic case) written as a linear combination of the first 3 columns. Let $\vec{u}_k^T$ be a row vector consisting of the first 3 elements of the $k$-th row of $\Omega^{(j)}$. Let $v_k$ be the $k$-th element of the $i$-th column of $\Omega^{(j)}$. We seek weights $w_1, w_2, w_3$ such that:

$\begin{bmatrix} \vec{u}_1^T \\ \vec{u}_2^T \\ \vec{u}_3^T \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \\ w_3 \end{bmatrix} = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} \quad (25)$

Since the leftmost matrix is a 3x3 invertible matrix, there exists a unique solution. Suppose we chose a different slice $\Omega^{(j')}$.
The rows of this matrix are scales of the original $\Omega^{(j)}$ rows because $\Gamma^{(k)}$ is a rank-1 matrix $\forall k$. Let the scales be $\gamma_1, \gamma_2, \gamma_3$. The new system of equations to solve (with unknown weights $w'_1, w'_2, w'_3$) is:

$\begin{bmatrix} \gamma_1 \vec{u}_1^T \\ \gamma_2 \vec{u}_2^T \\ \gamma_3 \vec{u}_3^T \end{bmatrix} \begin{bmatrix} w'_1 \\ w'_2 \\ w'_3 \end{bmatrix} = \begin{bmatrix} \gamma_1 v_1 \\ \gamma_2 v_2 \\ \gamma_3 v_3 \end{bmatrix} \quad (26)$

This is the same linear system as (25) and therefore possesses the same unique solution; we get that $w'_k = w_k$. If we replace the $i$-th illuminant spectrum $E_i(\lambda)$ with the spectrum $E_i(\lambda) - w_1 E_1(\lambda) - w_2 E_2(\lambda) - w_3 E_3(\lambda)$, then the transformation is an invertible change of basis (since the initial illuminant samples were linearly independent), the resulting set of illuminant spectra spans the same linear space, and the $i$-th column of $\Omega^{(j)}$ is now zero for all $j$. Repeating this for every $i > 3$ makes $\Omega^{(j)}$ zero past the third column for all $j$. By symmetry of the argument for the $\Lambda^{(i)}$ slices, we reduce the measurement tensor to the core tensor form of Figure 2 via changes in illuminant and material bases. Note furthermore that the core tensor's $\Gamma^{(k)}$ slices must also be rank-1 matrices, since the $\Gamma^{(k)}$ slices before the changes in illuminant and material bases were. By the backward direction of Lemma 3, the core tensor supports generalized diagonal color constancy. This implies that the original measurement tensor also supports generalized diagonal color constancy.
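The backward-direction argument is constructive; the following sketch (ours, operating directly on the measurement tensor rather than on spectra, and assuming a generic tensor with rank-1 $\Gamma^{(k)}$ slices) performs the column-elimination step on the illuminant axis:

```python
import numpy as np

def reduce_illuminant_axis(M):
    """Change the illuminant basis of a 3xIxJ tensor with rank-1 Gamma
    slices so that every Omega slice is zero past its third column
    (the illuminant half of the reduction to core tensor form)."""
    K, I, J = M.shape
    M = M.copy()
    Omega0 = M[:, :, 0]          # one material slice suffices: by the proof,
    for i in range(3, I):        # the same weights work for every slice j
        w = np.linalg.solve(Omega0[:, :3], Omega0[:, i])   # equation (25)
        # E_i <- E_i - w1 E_1 - w2 E_2 - w3 E_3, applied to measurements:
        M[:, i, :] -= np.einsum('n,knj->kj', w, M[:, :3, :])
    return M
```

Applying the symmetric step along the material axis then yields the 3x3x3 core tensor.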
4.5 Linear Dependence and Linear Independence
4.5 Linear Dependence and Linear Independence 267 32. {v 1, v 2 }, where v 1, v 2 are collinear vectors in R 3. 33. Prove that if S and S are subsets of a vector space V such that S is a subset of S, then
GENERATING SETS KEITH CONRAD
GENERATING SETS KEITH CONRAD 1 Introduction In R n, every vector can be written as a unique linear combination of the standard basis e 1,, e n A notion weaker than a basis is a spanning set: a set of vectors
Math 312 Homework 1 Solutions
Math 31 Homework 1 Solutions Last modified: July 15, 01 This homework is due on Thursday, July 1th, 01 at 1:10pm Please turn it in during class, or in my mailbox in the main math office (next to 4W1) Please
Inner products on R n, and more
Inner products on R n, and more Peyam Ryan Tabrizian Friday, April 12th, 2013 1 Introduction You might be wondering: Are there inner products on R n that are not the usual dot product x y = x 1 y 1 + +
Classification of Cartan matrices
Chapter 7 Classification of Cartan matrices In this chapter we describe a classification of generalised Cartan matrices This classification can be compared as the rough classification of varieties in terms
Au = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively.
Chapter 7 Eigenvalues and Eigenvectors In this last chapter of our exploration of Linear Algebra we will revisit eigenvalues and eigenvectors of matrices, concepts that were already introduced in Geometry
Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University
Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009, 2011, 2014 Preface The title of the book sounds a bit mysterious. Why should anyone
Bindel, Spring 2012 Intro to Scientific Computing (CS 3220) Week 3: Wednesday, Feb 8
Spaces and bases Week 3: Wednesday, Feb 8 I have two favorite vector spaces 1 : R n and the space P d of polynomials of degree at most d. For R n, we have a canonical basis: R n = span{e 1, e 2,..., e
Multivariate Analysis of Ecological Data
Multivariate Analysis of Ecological Data MICHAEL GREENACRE Professor of Statistics at the Pompeu Fabra University in Barcelona, Spain RAUL PRIMICERIO Associate Professor of Ecology, Evolutionary Biology
SYMMETRIC EIGENFACES MILI I. SHAH
SYMMETRIC EIGENFACES MILI I. SHAH Abstract. Over the years, mathematicians and computer scientists have produced an extensive body of work in the area of facial analysis. Several facial analysis algorithms
Lecture Notes 2: Matrices as Systems of Linear Equations
2: Matrices as Systems of Linear Equations 33A Linear Algebra, Puck Rombach Last updated: April 13, 2016 Systems of Linear Equations Systems of linear equations can represent many things You have probably
6 Scalar, Stochastic, Discrete Dynamic Systems
47 6 Scalar, Stochastic, Discrete Dynamic Systems Consider modeling a population of sand-hill cranes in year n by the first-order, deterministic recurrence equation y(n + 1) = Ry(n) where R = 1 + r = 1
Linear Algebra Notes for Marsden and Tromba Vector Calculus
Linear Algebra Notes for Marsden and Tromba Vector Calculus n-dimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of
ISOMETRIES OF R n KEITH CONRAD
ISOMETRIES OF R n KEITH CONRAD 1. Introduction An isometry of R n is a function h: R n R n that preserves the distance between vectors: h(v) h(w) = v w for all v and w in R n, where (x 1,..., x n ) = x
