Estimation of resolution and covariance for large matrix inversions


Geophys. J. Int. (1995) 121

Jie Zhang and George A. McMechan
Center for Lithospheric Studies, The University of Texas at Dallas, PO Box, Richardson, TX, USA

Accepted 1994 September 30. Received 1994 September 30; in original form 1994 April 12.

SUMMARY

Key advantages of conjugate gradient (CG) methods are that they require far less computer memory than full singular value decomposition (SVD), and that iteration may be stopped at any time to give an approximate solution; this means that they may be used to obtain solutions of problems that are too large for SVD. The disadvantage is that CG does not conveniently provide auxiliary information on the quality of the solution (resolution and covariance matrices). This may be overcome by extensions of Paige and Saunders' LSQR algorithm, which is one of the family of CG algorithms. The extensions are produced by analogy with SVD; bidiagonalization in LSQR produces orthonormal basis vectors that can be used to construct solutions and estimates of resolution and covariance. For large problems, for which SVD cannot be performed, the new method provides approximate resolution and covariance estimates that asymptotically approach those of the SVD solutions as the number of iterations increases.

Key words: conjugate gradients, covariance, inversion, LSQR, resolution.

INTRODUCTION

In inverse problems such as tomography, we are interested not only in obtaining a solution but also in estimating the reliability of the solution. Several methods exist that solve an inverse problem and estimate the uniqueness of, and uncertainty in, the solution. A classical method is the generalized inverse, based on singular value decomposition (SVD) of the sensitivity matrix (Jackson 1972; Wiggins 1972; Lee & Stewart 1981). Although commonly used, the generalized inverse method can only be applied to inverse problems with a modest number of unknowns and observations. The practical limitation lies in the CPU memory requirements of the SVD algorithm (Dongarra et al. 1978). In fact, any inversion method that involves explicit multiplication of large matrices is impractical at this scale. For large problems, such as tomographic velocity estimation, which commonly involves hundreds of thousands to a few million unknowns and millions of observations, row-action methods such as the simultaneous iterative reconstruction technique (SIRT) or conjugate gradient methods such as the least-squares QR decomposition (LSQR) may be used. Both operate on one equation (one row of the matrix) at a time and so require very little computer memory. Although there is no known way to get the data information density matrix, point spread functions may be used to estimate model resolution at a few selected locations (e.g. Clayton & Comer 1984; Humphreys & Clayton 1988; Spakman & Nolet 1988; Brzostowski & McMechan 1992). To obtain the resolution matrix for an entire model requires a number of inversion solutions equal to the number of unknowns; this is computationally prohibitive for large inverse problems (Nolet 1985; Nakanishi & Suetsugu 1986; Trampert & Leveque 1990). Another approach, the jack-knife (e.g. Lees & Crosson 1990), can be used to estimate solution variance, but also involves a number of inversion solutions equal to the number of data partitions. The LSQR algorithm of Paige & Saunders (1982a,b) is a conjugate gradient type of algorithm (Hestenes & Stiefel 1952).
Faster in convergence than SIRT, LSQR has become popular for obtaining solutions to linear inverse problems (Nolet 1985, 1987; van der Sluis & van der Vorst 1987; Spakman & Nolet 1988; Meyerholtz, Pavlis & Szpakowski 1989; Lees & Crosson 1990; Leiss & Pan 1991). The CPU memory required for LSQR (like SIRT) involves only a few vectors with dimensions equal to those of the observations and unknowns, instead of full matrices, so large inverse problems can be solved. The LSQR method bears some very useful similarities to the classical generalized inverse method based on SVD (Spakman & Nolet 1988). It bidiagonalizes the sensitivity matrix with orthonormal vectors in the parameter and data domains (Golub & Kahan 1965; Paige 1974; Bjorck & Eldén 1979); in fact, bidiagonalization serves as the front end for certain SVD algorithms (Golub & Van Loan 1989). Nolet & Snieder (1990) and Berryman (1994a,b) have analytically explored the similarities between LSQR and SVD, but these have not been fully investigated numerically.

Below, we exploit these properties to construct resolution and covariance matrices for very large inversion problems, based on LSQR. The method is demonstrated numerically with synthetic examples. The results are encouragingly similar to those from SVD, but without the large memory requirements of SVD. Our extension of the LSQR algorithm allows direct computation of approximate resolution, information density, and covariance matrices for large inverse problems, for which these were not previously obtainable with SVD.

THEORETICAL BACKGROUND

Solution of a linear inverse problem may be defined as finding a vector x that best (e.g. in the L1 or L2 sense) satisfies

Ax = b, (1)

where x is an n-dimensional vector containing the unknown model parameters, b is an m-dimensional vector containing the observations, and A is an m × n sensitivity matrix connecting model parameters and observations.

SVD solution

A solution (Lanczos 1961; Lee & Stewart 1981) to eq. (1) may be obtained via singular value decomposition of matrix A to give its generalized inverse

A* = V D^{-1} U^T, (2)

where U and V are orthonormal matrices of dimensions m × r and n × r, respectively, D is an r × r diagonal matrix with singular values arranged in descending order down the diagonal, D^{-1} is the inverse of D, and r is the rank of matrix A. Superscript T represents matrix transposition. To evaluate the uniqueness of, and uncertainty in, the solution, resolution (R), information density (S), and covariance of the model parameters (C) may be constructed (e.g. Wiggins 1972; Aki & Richards 1980; Lee & Stewart 1981; Menke 1984):

R = A*A = V V^T, (3)

S = A A* = U U^T (4)

and

C = A*(A*)^T σ² = V [D^{-1}(D^{-1})^T] V^T σ², (5)

where σ is the standard deviation of the data. For example, in a velocity tomography problem, σ would represent the error in time picking.

LSQRA solution

The LSQR algorithm of Paige & Saunders (1982b) avoids storage of the full matrix A when seeking a solution to eq. (1); all matrix operations involve only one row or column at a time. This makes the LSQR algorithm suitable for solving large linear inverse problems that are otherwise intractable. Also, as shown below, it can be extended to compute approximate resolution and covariance matrices for large inverse problems. Consider now an alternative linear inverse, which we denote LSQRA, constructed through an analogy to the SVD-based generalized inverse. Following Paige (1974) and Paige & Saunders (1982a), the iterative LSQR algorithm is applied to A and b in eq. (1); A is reduced, after k iterations, to a bidiagonal matrix L and two orthonormal matrices U and V. Then B* (an approximate inverse of A) is constructed by analogy to SVD (see the Appendix), as

B* = V L^{-1} U^T (6a)

and an approximation to A is constructed as

B = U L V^T, (6b)

where U and V are orthonormal matrices of dimensions m × k and n × k, respectively; L is a k × k non-singular lower bidiagonal matrix; L^{-1} is the inverse of L; and k is the iteration number. We may now express the resolution (R) and information density (S) matrices (B*B and BB*, respectively), based directly on the bidiagonalization decomposition, as

R = B*B = (V L^{-1} U^T)(U L V^T) = V V^T (7)

and

S = B B* = (U L V^T)(V L^{-1} U^T) = U U^T. (8)

Assuming that the observations are statistically independent and have the same variance σ², the parameter covariance matrix

C = B*(B*)^T σ² = V [L^{-1}(L^{-1})^T] V^T σ² (9)

can be constructed.
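For concreteness, the quantities in eqs (2)-(5) can be written in a few lines of NumPy. This is a minimal sketch of our own, not code from the paper; the function name, the rank cut-off rcond and the use of a single data standard deviation sigma are assumptions.

```python
import numpy as np

def svd_inverse_and_quality(A, sigma, rcond=1e-10):
    # A = U diag(d) V^T; keep only singular values above the cut-off (the rank r).
    U, d, VT = np.linalg.svd(A, full_matrices=False)
    keep = d > rcond * d[0]
    U, d, V = U[:, keep], d[keep], VT[keep].T
    A_star = V @ np.diag(1.0 / d) @ U.T               # generalized inverse, eq. (2)
    R = V @ V.T                                       # resolution,          eq. (3)
    S = U @ U.T                                       # information density, eq. (4)
    C = V @ np.diag(1.0 / d ** 2) @ V.T * sigma ** 2  # covariance,          eq. (5)
    return A_star, R, S, C
```

The LSQRA quantities of eqs (6a)-(9) have exactly the same form, with the Lanczos matrices U, V and the bidiagonal L from k LSQR iterations taking the place of the full SVD factors.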
The solution given numerically by LSQR can be expressed analytically in the form x̂ = B*b = V L^{-1} U^T b (see the Appendix); this explicit form of the solution (LSQRA) has apparently not been previously considered (Nolet 1987, p. 20; Meyerholtz et al. 1989), nor have its consequences been exploited. B and B* also satisfy the Moore-Penrose conditions (see the Appendix). If only a solution is desired, LSQR is sufficient. LSQRA will always take more computer time than LSQR because of the multiplications in eq. (A8), but this is a small fraction of the total computation. LSQRA is a viable approach to also obtaining approximate resolution and covariance matrices at any iteration. Nolet & Snieder (1990) provide a similar formulation, which uses a continuous rather than discrete parameterization. Berryman (1994a,b) uses a form equivalent to eq. (7), but does not address the question of loss of orthogonality discussed below.

Numerical considerations

The bidiagonalization procedure used in LSQR has a distinct numerical property: a loss of orthogonality among the 'Lanczos vectors' (the columns of U and V) accompanies the convergence of the solution.

This loss of orthogonality is an intrinsic property of LSQR, and cannot be avoided by higher precision in the numerical calculations (Golub & Van Loan 1989). One approach to this problem is to reorthogonalize the Lanczos vectors, which involves a substantial decrease in overall efficiency. With additional iterations beyond the loss of orthogonality, duplicate (as well as new) singular values of the lower bidiagonal matrix L will occur. It is necessary for stability of the LSQRA solution, and for the resolution and covariance estimates, that the duplicate singular values be removed, or that iteration be stopped when the loss of orthogonality occurs. We chose the former, as better (i.e. closer to SVD) results could be obtained by further iteration. It is more efficient to identify and remove the duplicate singular values and their associated eigenvectors than to reorthogonalize. Scales (1989) shows how to obtain a singular value spectrum and identify duplicates using a conjugate gradient method with virtually no extra effort. We applied LSQR (the Paige & Saunders (1982a,b) version) to A, and saved the Lanczos vectors and L on disk; after this point, we diverge from the LSQR algorithm. As L has dimension k equal to the number of iterations (usually 10-100), its inverse L^{-1} can be readily computed. We used SVD to diagonalize the matrix L and reorganize the basis vectors U and V, as follows (a small numerical sketch of this step is given below, after the opening paragraph of Example 1). With

L = P D Q^T and L^{-1} = Q D^{-1} P^T,

where P and Q are k × k orthonormal matrices and D is a k × k diagonal matrix, eqs (6a), (6b), (7), (8) and (9) become, respectively,

B* = (VQ) D^{-1} (UP)^T, (10a)

B = (UP) D (VQ)^T, (10b)

R = (VQ)(VQ)^T, (11)

S = (UP)(UP)^T (12)

and

C = (VQ) D^{-2} (VQ)^T σ². (13)

VQ and UP are the reorganized orthonormal matrices in the parameter and data domains, respectively. Note this is not reorthogonalization as in Parlett & Scott (1979), but is similar to the procedure of Bjorck (1988). Eqs (10a), (11), (12) and (13) are identical in form to the generalized inverse equations (2), (3), (4) and (5). The diagonal matrix D contains the singular values of L, which are approximate singular values of the original matrix A, arranged along the diagonal in descending order. Scales (1989) provides another approach to the computation of singular values in conjugate gradient solutions for tridiagonal matrices. To the extent that only k independent components are represented, B may be viewed as a filtered version of A. We may exclude small singular values to control the trade-off between resolution and solution error. With eqs (11), (12) and (13), we can construct approximate resolution, information density, and covariance matrices for large inverse problems that are intractable using the full SVD-based generalized inverse. This is the main contribution of this paper.

NUMERICAL EXAMPLES

In this section, the LSQRA method is illustrated with three synthetic traveltime tomography examples, and the results are compared with those of SVD, SIRT, and LSQR.

Example 1: four-sided geometry

The model for the first example (Fig. 1) is parameterized as a 4 × 4 discrete pixel array and so has 16 unknowns. Four sources and receivers are located along each of the four edges, one station in each pixel, giving 96 traveltime observations. We used the two-point ray-tracing method of Um & Thurber (1987) to generate synthetic traveltime data, and saved the ray segment lengths in each pixel to form A in eq. (1); i.e. for convenience, initially we investigate only a purely linear problem. We assume a reference model of constant velocity (2.0 km/s), and so reduce the traveltime observations to time residual data.
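Before continuing with the example, the following is the small numerical sketch, referred to above, of the reorganization in eqs (10a)-(13). It is our own illustration rather than the authors' code; the inputs U, V and L are the Lanczos vectors and lower bidiagonal matrix saved from k LSQR iterations, and the duplicate-detection tolerance tol and all names are assumptions.

```python
import numpy as np

def reorganize_lsqr(U, V, L, sigma, tol=1e-6):
    # Diagonalize the small k x k bidiagonal matrix: L = P diag(d) Q^T.
    P, d, QT = np.linalg.svd(L)
    Q = QT.T
    # Keep one copy of each singular value; drop near-duplicates (which appear
    # after loss of orthogonality) and very small values.
    keep = [0]
    for i in range(1, len(d)):
        if abs(d[i] - d[keep[-1]]) > tol * d[0] and d[i] > tol * d[0]:
            keep.append(i)
    keep = np.array(keep)
    UP, VQ, d = U @ P[:, keep], V @ Q[:, keep], d[keep]
    B_star = VQ @ np.diag(1.0 / d) @ UP.T               # eq. (10a)
    R = VQ @ VQ.T                                       # resolution, eq. (11)
    C = VQ @ np.diag(1.0 / d ** 2) @ VQ.T * sigma ** 2  # covariance, eq. (13)
    return B_star, R, C, d
```

The LSQRA solution at this iteration is then simply B_star @ b.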
Slowness perturbations are used to describe the model parameters, so the problem becomes solving the linear system (1), where x is the slowness perturbation vector (of length 16) to be solved for, b is the traveltime residual data vector (of length 96), and A is a 96 × 16 rectangular matrix, each row of which consists of the distance segments traversed by a ray. After eq. (1) is solved, the obtained slowness perturbations are combined with the reference model to construct the updated velocity model. Figure 1 also shows solutions by the SIRT, SVD, LSQR, and proposed LSQRA methods from a starting model with a constant velocity of 2.0 km/s. Table 1 shows the singular values of matrix A included in the construction of the SVD and LSQRA solutions. LSQR-9 corresponds to the loss of orthogonality (which we take to occur when the projection of the current Lanczos vector onto any previous one exceeds 1 per cent). At iteration 9, the smaller singular values in LSQRA are only approximate (compare with SVD-16), and a few others are still missing. By iteration 24, all 16 singular values are recovered by LSQRA. Using the LSQR or LSQRA eigenstructures that best fit all the data at any iteration provides a better solution than the partial SVD eigenstructures for the same number of similar-sized singular values. This is demonstrated by comparison of the SVD-16(9), LSQR-9 and LSQRA-9(9) solutions in Fig. 1. Figure 2 shows the resolution matrices computed using SVD and LSQRA. For comparison, we also show the normalized ray-density distribution because it helps in understanding resolution and may, when the ray paths are independent, also be considered a measure of resolution. The complete SVD and LSQRA solutions (with 16 singular values) give an identity resolution matrix, confirming that all model parameters (slowness perturbations) are uniquely determined. The resolution matrices of SVD-16(9) and LSQRA-9(9) both show a clear diagonal trend with smaller positive and negative undulations in the off-diagonal positions. Still, the diagonal values for the two high and low anomalies in LSQRA-9(9) are equal to, or larger than, their respective values in SVD-16(9), which is consistent with the solution results in Fig. 1. Figure 3 shows the sixth rows of the resolution matrices in Fig. 2, and the SIRT point spread function, for the high-velocity pixel in Fig. 1. Each value in this row is plotted at its physical location in the model so that the spatial relations may be examined; this format is also used in displaying matrix rows in the other two examples below. From the definition (x̂ = Rx), each row of the resolution matrix relates one parameter to all the other parameters in the model.
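The row-plotting convention used in Fig. 3 (and again in Figs 5 and 11) amounts to reshaping one row of a 16 × 16 matrix onto the 4 × 4 pixel grid. A trivial helper, with the row ordering of the pixels assumed, might look like this:

```python
def row_as_map(M, row_index, nx, ny):
    # Reshape one row of a resolution or error-correlation matrix onto the
    # ny-by-nx pixel grid, so each value sits at its pixel's physical location.
    # Assumes the model parameters are ordered row by row along the matrix axes.
    return M[row_index].reshape(ny, nx)
```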

Figure 1. Example 1. The correct solution (upper left) has two velocity anomalies (2.5 and 1.5 km/s) superimposed on a constant-velocity background (2.0 km/s). Also shown are solutions by SIRT, SVD, LSQR, and the proposed LSQRA method (panels SIRT-16, LSQR-9, LSQRA-9(9) and LSQRA-24(16); horizontal axis: distance in km). In each solution the number after the dash is the number of iterations performed for SIRT and LSQR (or singular values determined for SVD) and the number in parentheses is the number of singular values used to reconstruct the solution. See Table 1.

Table 1. Singular values for the four-sided survey geometry. Dots in the first column indicate values not found by LSQRA in nine iterations (the second column). Dots in the third column indicate duplicate (and one very small) singular values not used in the construction of the LSQRA solution. Columns: SVD-16(9), LSQRA-9(9), LSQRA-24(16).

The less-than-ideal distribution for SVD-16(9) and LSQRA-9(9) suggests that the model parameters, instead of being uniquely determined, are mutually dependent because of the incompleteness of the eigenstructures used (9 out of 16). The SIRT point spread response also does not reach unity at this point, but is a reasonable approximation. As the number of eigenvalues determined becomes more complete, the parameters become progressively better resolved. To examine the model error caused by data errors, we assumed a standard deviation in the time residual data (a time-picking error) and computed the covariance matrices. For ease of comparison with the velocity model, each covariance matrix of slowness perturbations was converted into a velocity error matrix. The diagonal values are the standard deviation in velocity (in km/s) in each pixel of the model; for any row, the off-diagonal values show the correlation between the error in the pixel corresponding to the diagonal element and the error in all the other pixels. When each value in a row is plotted at its physical location (as for the resolution matrices described above) we refer to it as an error correlation plot. Figures 4 and 5 show the full velocity error matrix and its sixth row, respectively. As in the solution and resolution, the model errors in the complete (16 singular values) solutions are virtually identical. Figures 2 and 4 (or 3 and 5), taken together, demonstrate the trade-off between resolution and model error; better resolution inevitably corresponds to larger model error (Jackson 1972).

Example 2: cross-hole geometry

The second example has the same velocity model as the first (Fig. 1), but a cross-hole survey recording geometry is used. Four sources and four receivers are equally spaced along the two opposite vertical edges of the model, so the number of traveltime observations is 16, making the matrix A in eq. (1) of dimensions 16 × 16. This geometry gives a smaller angular aperture, which reduces the ability to recover the model (McMechan 1983). This effect is manifested quantitatively by the larger condition number of A in this example (1488.1) than in the previous example (3.5) (Tables 1 and 2). A larger condition number corresponds to greater sensitivity to data error, or greater ill-posedness of the inverse problem (Lee & Stewart 1981). Figure 6 shows complete and partial SVD, LSQR and LSQRA solutions for the cross-hole experiment; Fig. 7 shows the corresponding resolution matrices and velocity error matrices. Loss of orthogonality occurs at iteration 8 in the LSQR and LSQRA solutions. Again, the full solutions are indistinguishable (compare SVD-16(16) and LSQRA-50(16) in Fig. 6); the partial solutions, and the resolution and model error estimates, asymptotically approach those for the full solutions (Figs 6 and 7). The smallest singular values were removed from the SVD and LSQRA spectra (Table 2) to obtain SVD-16(13) and LSQRA-20(13) because their inclusion would generate unacceptably large velocity errors. Solutions, resolutions, and error matrices of SVD-16(13) and LSQRA-20(13) appear identical (Figs 6 and 7).
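A sketch of the two small calculations mentioned above, under our own naming: truncating the spectrum (as done to obtain SVD-16(13) and LSQRA-20(13); the paper chooses the cut-off by inspecting the resulting velocity errors, so n_keep is simply an argument here), and the condition number quoted for the two geometries. UP, VQ and d are the reorganized factors returned by the earlier sketch.

```python
def truncate_spectrum(UP, VQ, d, n_keep):
    # Keep only the n_keep largest singular values and their basis vectors
    # before forming B*, R and C, trading resolution against solution error.
    return UP[:, :n_keep], VQ[:, :n_keep], d[:n_keep]

def condition_number(d):
    # Ratio of the largest to the smallest retained singular value of A.
    return d[0] / d[-1]
```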
Compared with the four-sided survey (Figs 4 and 5), the cross-hole survey tends to give a larger maximum velocity error for a comparable number of singular values or iterations, reflecting the effect of the angular ray coverage and consequently the greater ill-posedness of eq. (1).

Example 3: large cross-hole model

The model for the third example (Fig. 8) consists of a circular feature and a double wedge. Sources and receivers are equally spaced in boreholes along the two vertical edges. The matrix A in eq. (1) now has dimensions of (observations) × 7500 (unknowns), which cannot be stored in CPU memory to apply the standard SVD algorithm. The LSQRA method provides a solution and the associated resolution and error estimates. As the LSQR and LSQRA iterations proceed (Fig. 8), the details of the model (e.g. the boundaries between the high- and low-velocity areas) improve, and the interior velocities become closer to the correct values, because more eigenstructures are included. LSQR and LSQRA converge faster than SIRT, probably because of the more optimal use of conjugate updating directions (U and V) in LSQR (van der Sluis & van der Vorst 1987, 1990; Claerbout 1992). Figure 9 contains the resolution and velocity error matrices for the LSQRA-41(32) solution. The diagonal trends in both the resolution and velocity error matrices are clearly visible. Periodicity in both plots reflects the way in which the parameters of the 2-D model are arranged along the 1-D matrix rows (or columns). The presence of off-diagonal values in the resolution plot reveals mutual dependence among parameters.
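Because LSQR touches A only through the products A v and A^T u, the matrix of this example never has to be stored densely; holding the non-zero ray-segment lengths in sparse form is enough. A hedged sketch using SciPy's sparse LSQR (an assumption on our part; the paper's own implementation is not shown) illustrates this. The resolution and covariance estimates additionally require saving the Lanczos vectors and L during the iterations, as described above.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

# Hypothetical ray data: ray index, pixel index and segment length for each
# non-zero entry of A; only these entries are ever stored.
rows = np.array([0, 0, 1, 1, 2])
cols = np.array([0, 1, 1, 2, 0])
lengths = np.array([0.4, 0.6, 0.5, 0.5, 1.0])          # km
A = csr_matrix((lengths, (rows, cols)), shape=(3, 3))
b = np.array([0.010, -0.020, 0.005])                   # traveltime residuals (s)

x, istop, itn = lsqr(A, b, iter_lim=50)[:3]            # slowness perturbations
```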



Figure 4. The velocity error matrix (axes: parameter index 1-16; scale in km/s), for panels SVD-16(16), SVD-16(9), LSQRA-9(9) and LSQRA-24(16). This is the signed square root of the absolute value of the covariance matrix after converting slowness to velocity. LSQRA-9(9) is a better approximation to the full solutions (SVD-16(16) and LSQRA-24(16)) than SVD-16(9) is. The sixth rows of these matrices are plotted in Fig. 5.
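The conversion behind Fig. 4 can be sketched as follows. The paper does not give the formula explicitly, so the first-order linearization dv = -v^2 ds about the reference velocities, and the function below, are our assumptions, chosen to match the caption's 'signed square root of the absolute value of the covariance matrix after converting slowness to velocity'.

```python
import numpy as np

def velocity_error_matrix(C_slowness, v_ref):
    # v = 1/s, so a slowness perturbation ds maps to dv ~ -v**2 ds to first order
    # (assumption); v_ref holds the reference velocity of each pixel.
    scale = np.asarray(v_ref, dtype=float) ** 2
    C_vel = np.outer(scale, scale) * C_slowness
    # Signed square root of the absolute value, as in the caption of Fig. 4.
    return np.sign(C_vel) * np.sqrt(np.abs(C_vel))
```

For Example 1 this would be called with v_ref = np.full(16, 2.0), the constant reference velocity.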

Figure 5. The sixth rows of the matrices in Fig. 4 (horizontal axis: distance in km; scale in km/s), for panels SVD-16(16), SVD-16(9), LSQRA-9(9) and LSQRA-24(16). These show the spatial relations (correlations) between the model errors in all other pixels and that of the highest velocity pixel (the darkest one in each plot).

Table 2. Singular values for the cross-hole geometry. Dots indicate small (first column) or duplicate (third column) values that are omitted in the model reconstructions in Fig. 6. Columns: SVD-16(13), LSQRA-8(8), LSQRA-20(13).

The relatively large off-diagonal values in the velocity error plot suggest that parameter errors correlate over long distances. Physically, this is attributable to the data projection along ray paths that is inherent in the inversion. Figure 10 shows the diagonal elements of the resolution and velocity error matrices. The trade-off between resolution and error is manifested by higher resolution and larger errors as iterations proceed. Larger resolution values and velocity errors are concentrated near major velocity boundaries, where ray densities are higher. Figure 11 shows the resolution and error correlation for two points in the model. In each resolution distribution, spreads of energy pass through the point being examined. Secondary patterns, related to the survey geometry and velocity distribution, are also present. For both points, the velocity error also correlates with other parts of the model through ray crossings. It is interesting that the resolution has high amplitude in regions away from the point being considered, in contrast, for example, to the expected point spread function. The reason is that a point spread considers only first-order effects, and so, by definition, peaks at the point of interest; the corresponding resolution matrix takes into account all the higher-order interactions. In general, each point influences all others. To complete the analysis of this example, Fig. 12 contains the singular value spectrum, and Fig. 13 contains every fourth eigenvector in the LSQRA-41(32) solution. The singular value spectrum decreases smoothly, so there is no obvious discontinuity at which to truncate small values. The eigenvectors in Fig. 13 (columns of VQ in eq. (10a)) show a good correlation with structural features and boundaries in the solution (Fig. 8). Artefacts visible in each eigenvector tend to cancel upon superposition of the corresponding partial solutions. The level of detail increases as the magnitude of the singular value decreases; thus the low-wavenumber features of the solution are provided by the large singular values, and the high-wavenumber features by the small singular values.

DISCUSSION

When a matrix is large and sparse, bidiagonalization via Householder transformations, on which standard SVD algorithms are based, ceases to be computationally feasible, and row-action algorithms like LSQR are necessary. Nonetheless, the concepts of linear inverse theory can still be implemented numerically using the LSQR method in the same way as with SVD. This is the essential interpretation from which the LSQRA method is extended to compute resolution and covariance matrices. If only a solution is required, LSQR by itself is sufficient. We used SVD to invert L so that the analogy between LSQRA and SVD emerges; singular values and orthonormal bases all result naturally. By using SVD on L instead of on A, the matrix dimension problem is transferred from CPU memory to hard disk. Obviously, the problem of limited CPU memory will arise again when the iteration number becomes too large. However, the number of iterations performed in a linear inversion is usually small; for this size, inversion of L is tractable by SVD. Other methods, such as Gaussian reduction, can be used to invert L in eqs (7), (8) and (9) for L of larger dimensions.
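As a sketch of that 'Gaussian reduction' alternative (our own illustration, not the authors' code): because L is lower bidiagonal, a system L y = r is solved by forward substitution in O(k) operations, and applying this column by column to the identity gives L^{-1} explicitly if it is needed.

```python
import numpy as np

def solve_lower_bidiagonal(L, rhs):
    # Forward substitution for L y = rhs, where L has non-zeros only on the
    # main diagonal and the first subdiagonal.
    k = L.shape[0]
    y = np.zeros(k)
    y[0] = rhs[0] / L[0, 0]
    for i in range(1, k):
        y[i] = (rhs[i] - L[i, i - 1] * y[i - 1]) / L[i, i]
    return y
```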
We found it numerically unstable to include duplicate singular values, or to use the Lanczos vectors in eq. (7) directly instead of the reorganized basis vectors VQ in eq. (11), to construct the solution, resolution, and covariance. By removing duplicate singular values and using the reorganized basis vectors, stability is achieved. There are other methods that keep the Lanczos vectors orthogonal through the iterations, at the cost of increased algorithm complexity (e.g. Parlett & Scott 1979; Golub & Van Loan 1989). Strictly speaking, the parameter resolution matrix R is defined as B*A instead of B*B. However, because A^T U = V L^T (eq. 2.4 of Paige 1974), R = B*A = V L^{-1} U^T A = V L^{-1} L V^T = V V^T = B*B, which is eq. (7). We have numerically confirmed this relation. We did not study the information density matrix (Aki & Richards 1980; Lee & Stewart 1981), although it can also be computed by the LSQRA method (eqs 8 and 12), or by direct multiplication AB*.

CONCLUSION

The Paige & Saunders (1982a,b) LSQR algorithm can be used to obtain singular values and orthonormal basis vectors and to produce a new (LSQRA) solution directly. Approximate model resolution and covariance matrices can also be constructed using the same eigenstructures. The number of LSQRA iterations defines the number of singular values and basis vectors used in the solution. Approximate resolution and covariance matrices are available at every iteration; with increasing iterations these asymptotically approach those for the full SVD solution.

Figure 6. Example 2. The correct solution (upper left) is the same as in Fig. 1. The sources and receivers are equally spaced on the two vertical edges. Panels: SVD-16(16), SVD-16(13), LSQRA-8(8), LSQRA-20(13), LSQR-50 and LSQRA-50(16); horizontal axis: distance in km. Notation is the same as in Fig. 1.




Figure 11. Resolution (above) and error correlation (below) for two representative locations (indicated by the arrow tips); horizontal axis: distance in km, scale in km/s. Each of these corresponds to one row in the LSQRA-41(32) matrices in Fig. 9.

Figure 12. Normalized singular value spectrum (horizontal axis: singular value number) for the LSQRA-41(32) solution in Fig. 8. Numbers placed next to the larger singular values are the number of duplicates at those points.


ACKNOWLEDGMENTS

The research leading to this paper was funded by the NSF under grant EAR and by the Sponsors of the UT Dallas Geophysical Consortium. The authors acknowledge helpful comments from J. VanDecar, R. Snieder and an anonymous reviewer. Computations were performed on a Convex C-3 at the University of Texas at Dallas. The manuscript was expertly typed by Charlotte Stromer. Contribution No. 791 from the Program in Geosciences at the University of Texas at Dallas.

REFERENCES

Aki, K. & Richards, P., 1980. Quantitative Seismology, Vol. 2, Freeman, San Francisco.
Berryman, J.G., 1994a. Resolution of iterative inverses in seismic tomography, in Proc. Cornelius Lanczos International Centenary Conference, eds Brown, J.D., Chu, M.T., Ellison, D.C. & Plemmons, R.J., SIAM, Philadelphia.
Berryman, J.G., 1994b. Tomographic resolution without singular value decomposition, in Mathematical Methods in Geophysical Imaging II, Proc. SPIE, pp. 2-13, ed. Hassanzadeh, S., SPIE, Bellingham.
Bjorck, A., 1988. A bidiagonalization algorithm for solving large and sparse ill-posed systems of linear equations, BIT, 28.
Bjorck, A. & Eldén, L., 1979. Methods in Numerical Algebra for Ill-posed Problems, Technical Report LiTH-MATH-R, Linkoping University.
Brzostowski, M.A. & McMechan, G.A., 1992. 3-D tomographic imaging of near-surface seismic velocity and attenuation, Geophysics, 57.
Claerbout, J.F., 1992. Earth Sounding Analysis: Processing Versus Inversion, Blackwell Scientific Publications, Boston.
Clayton, R.W. & Comer, R.P., 1984. A tomographic analysis of mantle heterogeneities, Terra Cognita, 4.
Dongarra, J., Bunch, J.R., Moler, C.B. & Stewart, G.W., 1978. LINPACK Users' Guide, SIAM Publications, Philadelphia.
Golub, G.H. & Kahan, W., 1965. Calculating the singular values and pseudo-inverse of a matrix, SIAM J. Numer. Anal., 2.
Golub, G.H. & Van Loan, C.F., 1989. Matrix Computations, 2nd edn, Johns Hopkins University Press, Baltimore.
Hestenes, M.R. & Stiefel, E., 1952. Methods of conjugate gradients for solving linear systems, J. Res. Natl. Bur. Stand., 49.
Humphreys, E. & Clayton, R.W., 1988. Adaptation of back projection tomography to seismic traveltime problems, J. geophys. Res., 93.
Jackson, D.D., 1972. Interpretation of inaccurate, insufficient, and inconsistent data, Geophys. J. R. astr. Soc., 28.
Lanczos, C., 1961. Linear Differential Operators, Chap. 3, Van Nostrand, London.
Lee, W.H.K. & Stewart, S.W., 1981. Principles and Applications of Microearthquake Networks, Academic Press, New York.
Lees, J.M. & Crosson, R.S., 1990. Tomographic imaging of local earthquake delay times for three-dimensional velocity variation in western Washington, J. geophys. Res., 95.
Leiss, E.L. & Pan, J.-M., 1991. Inverse techniques in geophysical tomography: a comparison of noisy data, in Expanded Abstracts, 61st Annual Int. Meeting, Society of Exploration Geophysicists, Houston.
McMechan, G.A., 1983. Seismic tomography in boreholes, Geophys. J. R. astr. Soc., 74.
Menke, W., 1984. Geophysical Data Analysis: Discrete Inverse Theory, Academic Press, New York.
Meyerholtz, K.A., Pavlis, G.L. & Szpakowski, S.A., 1989. Convolutional quelling in seismic tomography, Geophysics, 54.
Nakanishi, I. & Suetsugu, D., 1986. Resolution matrix calculated by a tomographic inversion method, J. Phys. Earth, 34.
Nolet, G., 1985. Solving and resolving inadequate and noisy tomographic systems, J. Comput. Phys., 61.
Nolet, G., 1987. Seismic wave propagation and seismic tomography, in Seismic Tomography, pp. 1-23, ed. Nolet, G., Reidel, Dordrecht.
Nolet, G. & Snieder, R., 1990. Solving large linear inverse problems by projection, Geophys. J. Int., 103.
Paige, C.C., 1974. Bidiagonalization of matrices and solution of linear equations, SIAM J. Numer. Anal., 11.
Paige, C.C. & Saunders, M.A., 1982a. LSQR: an algorithm for sparse linear equations and sparse least squares, ACM Trans. Math. Software, 8.
Paige, C.C. & Saunders, M.A., 1982b. Algorithm 583, LSQR: sparse linear equations and least squares problems, ACM Trans. Math. Software, 8.
Parlett, B.N. & Scott, D.S., 1979. The Lanczos algorithm with selective orthogonalization, Math. Comput., 33.
Scales, J.A., 1989. On the use of conjugate gradient to calculate the eigenvalues and singular values of large, sparse matrices, Geophys. J., 97.
Spakman, W. & Nolet, G., 1988. Imaging algorithms, accuracy and resolution in delay time tomography, in Mathematical Geophysics, eds Vlaar, N.J., Nolet, G., Wortel, M.J.R. & Cloetingh, S.A.P.L., Reidel, Dordrecht.
Trampert, J. & Leveque, J., 1990. Simultaneous iterative reconstruction technique: physical interpretation based on the generalized least-squares solution, J. geophys. Res., 95.
Um, J. & Thurber, C., 1987. A fast algorithm for two-point seismic ray tracing, Bull. seism. Soc. Am., 77.
van der Sluis, A. & van der Vorst, H.A., 1987. Numerical solution of large, sparse linear algebraic systems arising from tomographic problems, in Seismic Tomography, ed. Nolet, G., Reidel, Dordrecht.
van der Sluis, A. & van der Vorst, H.A., 1990. SIRT- and CG-type methods for the iterative solution of sparse linear least-squares problems, Linear Algebra and its Applications, 130.
Wiggins, R.A., 1972. The general linear inverse problem: implication of surface waves and free oscillations for earth structure, Rev. Geophys. Space Phys., 10.

APPENDIX A

The derivation of the Paige & Saunders (1982a) LSQR algorithm consists of two parts: first, bidiagonalization of matrix A (Golub & Kahan 1965; Paige 1974) and, second, QR decomposition of the lower bidiagonal matrix L (Paige & Saunders 1982a). Numerical implementation of the LSQR algorithm is given in detail by Paige & Saunders (1982a,b). Here we sketch their procedure to show that a new numerical solution (LSQRA), implied by their procedure, is identical to that obtained by multiplying the inverse B* by the observation vector b. Following Paige & Saunders (1982a), suppose the minimum least-squares solution has the form

x = V y, (A1)

where V is the n × k orthonormal matrix in the model domain, y is a vector of length k, and x is the solution vector of length n. Then the problem of minimizing ||b - Ax|| becomes

min ||β1 e1 - L y||, (A2)

where L is the k × k lower bidiagonal matrix obtained using LSQR, β1 is a scalar equal to the norm of b, e1 is a unit vector equal to the first column of a k × k identity matrix, and

β1 e1 = U^T b, (A3)

where U is the m × k orthonormal matrix in the data domain, and U^T is the transpose of U. Applying the QR decomposition to the lower bidiagonal matrix L and the vector β1 e1 (Golub & Van Loan 1989), we obtain

Q L = R (A4)

and

Q (β1 e1) = c, (A5)

where R is an upper bidiagonal matrix, Q is the transforming orthogonal matrix, and c is the transformed data vector. Matrices Q, L and R are full-rank square matrices. Now, the minimization problem (A2) translates into finding y from

R y = c. (A6)

Because R is full rank, we obtain y from eq. (A6) and hence a solution to the minimization problem (A2). The ingenuity of Paige & Saunders' LSQR algorithm lies in their numerical procedure, which combines the matrix bidiagonalization and the finding of the solution in a single iterative process requiring minimal CPU memory. In analytical form, the solution can be expressed by applying the inverse B* to b. By combining eqs (A1)-(A6), the minimum least-squares solution given by LSQR becomes

x̂ = V y = V R^{-1} c = V L^{-1} Q^{-1} Q (β1 e1), (A7)

where R^{-1}, L^{-1} and Q^{-1} are the inverses of R, L and Q, respectively, and Q satisfies Q Q^{-1} = Q^{-1} Q = I. With eq. (A3) and Q^{-1} Q = I, eq. (A7) becomes

x̂ = V L^{-1} (β1 e1) = V L^{-1} U^T b. (A8)

Eq. (A8) gives the numerical solution of the LSQR algorithm by an analytical formula, which provides the mathematical basis for forming eqs (6a) and (6b). This formulation has apparently not been used before, probably because it still involves matrix multiplications; these matrices are, however, much smaller than those of the original problems being solved. Because of eqs (6a) and (6b), the following relations hold:

B B* B = (U U^T) U L V^T = U (U^T U) L V^T = U L V^T = B,

B* B B* = (V V^T) V L^{-1} U^T = V (V^T V) L^{-1} U^T = V L^{-1} U^T = B*,

(B B*)^T = (U U^T)^T = (U^T)^T U^T = U U^T = B B*

and

(B* B)^T = (V V^T)^T = (V^T)^T V^T = V V^T = B* B.

Therefore, B and B* also satisfy the Moore-Penrose conditions.
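To make the Appendix concrete, the following is our own sketch (not the authors' code) of the Golub-Kahan bidiagonalization that forms the first stage of LSQR, producing the Lanczos bases U and V and the square lower bidiagonal L used in eqs (6a)-(9) and (A8). No reorthogonalization is performed, so for larger k the columns of U and V lose orthogonality exactly as discussed in the main text; breakdown (a zero alpha or beta) is not handled.

```python
import numpy as np

def golub_kahan_bidiag(A, b, k):
    # Lower bidiagonalization of A started from b (Golub & Kahan 1965; Paige 1974):
    #   beta_1 u_1 = b,  alpha_1 v_1 = A^T u_1,
    #   beta_{i+1} u_{i+1} = A v_i - alpha_i u_i,
    #   alpha_{i+1} v_{i+1} = A^T u_{i+1} - beta_{i+1} v_i.
    m, n = A.shape
    U, V = np.zeros((m, k)), np.zeros((n, k))
    alphas, betas = np.zeros(k), np.zeros(k)
    beta = np.linalg.norm(b)
    u = b / beta
    v = A.T @ u
    alpha = np.linalg.norm(v)
    v = v / alpha
    U[:, 0], V[:, 0], alphas[0] = u, v, alpha
    for i in range(1, k):
        u = A @ v - alpha * u
        beta = np.linalg.norm(u)
        u = u / beta
        v = A.T @ u - beta * v
        alpha = np.linalg.norm(v)
        v = v / alpha
        U[:, i], V[:, i], alphas[i], betas[i] = u, v, alpha, beta
    # k x k lower bidiagonal matrix: alphas on the diagonal, betas below it.
    L = np.diag(alphas) + np.diag(betas[1:], k=-1)
    return U, V, L

# The LSQRA solution of eq. (A8), x = V L^{-1} U^T b, then follows from a small solve:
#   U, V, L = golub_kahan_bidiag(A, b, k=20)
#   x = V @ np.linalg.solve(L, U.T @ b)
```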


More information

Lecture L3 - Vectors, Matrices and Coordinate Transformations

Lecture L3 - Vectors, Matrices and Coordinate Transformations S. Widnall 16.07 Dynamics Fall 2009 Lecture notes based on J. Peraire Version 2.0 Lecture L3 - Vectors, Matrices and Coordinate Transformations By using vectors and defining appropriate operations between

More information

Constrained Least Squares

Constrained Least Squares Constrained Least Squares Authors: G.H. Golub and C.F. Van Loan Chapter 12 in Matrix Computations, 3rd Edition, 1996, pp.580-587 CICN may05/1 Background The least squares problem: min Ax b 2 x Sometimes,

More information

SALEM COMMUNITY COLLEGE Carneys Point, New Jersey 08069 COURSE SYLLABUS COVER SHEET. Action Taken (Please Check One) New Course Initiated

SALEM COMMUNITY COLLEGE Carneys Point, New Jersey 08069 COURSE SYLLABUS COVER SHEET. Action Taken (Please Check One) New Course Initiated SALEM COMMUNITY COLLEGE Carneys Point, New Jersey 08069 COURSE SYLLABUS COVER SHEET Course Title Course Number Department Linear Algebra Mathematics MAT-240 Action Taken (Please Check One) New Course Initiated

More information

Chapter 17. Orthogonal Matrices and Symmetries of Space

Chapter 17. Orthogonal Matrices and Symmetries of Space Chapter 17. Orthogonal Matrices and Symmetries of Space Take a random matrix, say 1 3 A = 4 5 6, 7 8 9 and compare the lengths of e 1 and Ae 1. The vector e 1 has length 1, while Ae 1 = (1, 4, 7) has length

More information

How do we understand and visualize uncertainty?

How do we understand and visualize uncertainty? How do we understand and visualize uncertainty? MALCOLM SAMBRIDGE, Australian National University, Canberra CAROLINE BEGHEIN, Arizona State University, Tempe, USA FREDERIK J. SIMONS, University College,

More information

Lecture Topic: Low-Rank Approximations

Lecture Topic: Low-Rank Approximations Lecture Topic: Low-Rank Approximations Low-Rank Approximations We have seen principal component analysis. The extraction of the first principle eigenvalue could be seen as an approximation of the original

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 6. Eigenvalues and Singular Values In this section, we collect together the basic facts about eigenvalues and eigenvectors. From a geometrical viewpoint,

More information

Inner Product Spaces

Inner Product Spaces Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

More information

1 Solving LPs: The Simplex Algorithm of George Dantzig

1 Solving LPs: The Simplex Algorithm of George Dantzig Solving LPs: The Simplex Algorithm of George Dantzig. Simplex Pivoting: Dictionary Format We illustrate a general solution procedure, called the simplex algorithm, by implementing it on a very simple example.

More information

A Direct Numerical Method for Observability Analysis

A Direct Numerical Method for Observability Analysis IEEE TRANSACTIONS ON POWER SYSTEMS, VOL 15, NO 2, MAY 2000 625 A Direct Numerical Method for Observability Analysis Bei Gou and Ali Abur, Senior Member, IEEE Abstract This paper presents an algebraic method

More information

Elasticity Theory Basics

Elasticity Theory Basics G22.3033-002: Topics in Computer Graphics: Lecture #7 Geometric Modeling New York University Elasticity Theory Basics Lecture #7: 20 October 2003 Lecturer: Denis Zorin Scribe: Adrian Secord, Yotam Gingold

More information

1. Introduction. Consider the computation of an approximate solution of the minimization problem

1. Introduction. Consider the computation of an approximate solution of the minimization problem A NEW TIKHONOV REGULARIZATION METHOD MARTIN FUHRY AND LOTHAR REICHEL Abstract. The numerical solution of linear discrete ill-posed problems typically requires regularization, i.e., replacement of the available

More information

x1 x 2 x 3 y 1 y 2 y 3 x 1 y 2 x 2 y 1 0.

x1 x 2 x 3 y 1 y 2 y 3 x 1 y 2 x 2 y 1 0. Cross product 1 Chapter 7 Cross product We are getting ready to study integration in several variables. Until now we have been doing only differential calculus. One outcome of this study will be our ability

More information

1 Finite difference example: 1D implicit heat equation

1 Finite difference example: 1D implicit heat equation 1 Finite difference example: 1D implicit heat equation 1.1 Boundary conditions Neumann and Dirichlet We solve the transient heat equation ρc p t = ( k ) (1) on the domain L/2 x L/2 subject to the following

More information

Eigenvalues, Eigenvectors, Matrix Factoring, and Principal Components

Eigenvalues, Eigenvectors, Matrix Factoring, and Principal Components Eigenvalues, Eigenvectors, Matrix Factoring, and Principal Components The eigenvalues and eigenvectors of a square matrix play a key role in some important operations in statistics. In particular, they

More information

5: Magnitude 6: Convert to Polar 7: Convert to Rectangular

5: Magnitude 6: Convert to Polar 7: Convert to Rectangular TI-NSPIRE CALCULATOR MENUS 1: Tools > 1: Define 2: Recall Definition --------------- 3: Delete Variable 4: Clear a-z 5: Clear History --------------- 6: Insert Comment 2: Number > 1: Convert to Decimal

More information

Least Squares Approach for Initial Data Recovery in Dynamic

Least Squares Approach for Initial Data Recovery in Dynamic Computing and Visualiation in Science manuscript No. (will be inserted by the editor) Least Squares Approach for Initial Data Recovery in Dynamic Data-Driven Applications Simulations C. Douglas,2, Y. Efendiev

More information

LS.6 Solution Matrices

LS.6 Solution Matrices LS.6 Solution Matrices In the literature, solutions to linear systems often are expressed using square matrices rather than vectors. You need to get used to the terminology. As before, we state the definitions

More information

1 2 3 1 1 2 x = + x 2 + x 4 1 0 1

1 2 3 1 1 2 x = + x 2 + x 4 1 0 1 (d) If the vector b is the sum of the four columns of A, write down the complete solution to Ax = b. 1 2 3 1 1 2 x = + x 2 + x 4 1 0 0 1 0 1 2. (11 points) This problem finds the curve y = C + D 2 t which

More information

THREE DIMENSIONAL REPRESENTATION OF AMINO ACID CHARAC- TERISTICS

THREE DIMENSIONAL REPRESENTATION OF AMINO ACID CHARAC- TERISTICS THREE DIMENSIONAL REPRESENTATION OF AMINO ACID CHARAC- TERISTICS O.U. Sezerman 1, R. Islamaj 2, E. Alpaydin 2 1 Laborotory of Computational Biology, Sabancı University, Istanbul, Turkey. 2 Computer Engineering

More information

Applied Linear Algebra I Review page 1

Applied Linear Algebra I Review page 1 Applied Linear Algebra Review 1 I. Determinants A. Definition of a determinant 1. Using sum a. Permutations i. Sign of a permutation ii. Cycle 2. Uniqueness of the determinant function in terms of properties

More information

A Simultaneous Solution for General Linear Equations on a Ring or Hierarchical Cluster

A Simultaneous Solution for General Linear Equations on a Ring or Hierarchical Cluster Acta Technica Jaurinensis Vol. 3. No. 1. 010 A Simultaneous Solution for General Linear Equations on a Ring or Hierarchical Cluster G. Molnárka, N. Varjasi Széchenyi István University Győr, Hungary, H-906

More information

Section 5.3. Section 5.3. u m ] l jj. = l jj u j + + l mj u m. v j = [ u 1 u j. l mj

Section 5.3. Section 5.3. u m ] l jj. = l jj u j + + l mj u m. v j = [ u 1 u j. l mj Section 5. l j v j = [ u u j u m ] l jj = l jj u j + + l mj u m. l mj Section 5. 5.. Not orthogonal, the column vectors fail to be perpendicular to each other. 5..2 his matrix is orthogonal. Check that

More information

Matrices 2. Solving Square Systems of Linear Equations; Inverse Matrices

Matrices 2. Solving Square Systems of Linear Equations; Inverse Matrices Matrices 2. Solving Square Systems of Linear Equations; Inverse Matrices Solving square systems of linear equations; inverse matrices. Linear algebra is essentially about solving systems of linear equations,

More information

Orthogonal Projections

Orthogonal Projections Orthogonal Projections and Reflections (with exercises) by D. Klain Version.. Corrections and comments are welcome! Orthogonal Projections Let X,..., X k be a family of linearly independent (column) vectors

More information

by the matrix A results in a vector which is a reflection of the given

by the matrix A results in a vector which is a reflection of the given Eigenvalues & Eigenvectors Example Suppose Then So, geometrically, multiplying a vector in by the matrix A results in a vector which is a reflection of the given vector about the y-axis We observe that

More information

Component Ordering in Independent Component Analysis Based on Data Power

Component Ordering in Independent Component Analysis Based on Data Power Component Ordering in Independent Component Analysis Based on Data Power Anne Hendrikse Raymond Veldhuis University of Twente University of Twente Fac. EEMCS, Signals and Systems Group Fac. EEMCS, Signals

More information