P164 Tomographic Velocity Model Building Using Iterative Eigendecomposition



K. Osypov* (WesternGeco), D. Nichols (WesternGeco), M. Woodward (WesternGeco) & C.E. Yarman (WesternGeco)

SUMMARY

Tomographic velocity model building has become an industry standard for depth migration. However, regularizing tomography remains a subjective art, if not black magic. Singular value decomposition (SVD) of a tomographic operator, or equivalently eigendecomposition of the corresponding normal equations, is a well-known framework for analyzing the most significant dependencies between model and data. Its application in velocity model building has nevertheless been limited, primarily because of the perception that it is computationally prohibitive for tomographic problems of contemporary, realistic size. In this paper, we demonstrate that iterative decomposition for such problems is practical on the computer clusters available today, which allows us to efficiently optimize regularization and to conduct uncertainty and resolution analysis.

Introduction

Velocity model building is one of the most challenging problems in modern depth imaging, and the need to account for anisotropy in seismic velocities complicates it further. 3D tomographic analysis has become the key technology for tackling this problem (Woodward et al., 1998). However, the tomographic inverse problem is ill-posed, which leads to large uncertainties and ambiguities in the reconstruction. To mitigate this, various forms of regularization are used; in particular, truncated singular value decomposition (SVD) can serve as a form of regularization (Lines and Treitel, 1984; Scales et al., 1990). More generally, SVD provides a convenient framework for analyzing the most significant dependencies between model and data, and the resolution and uncertainty of the estimates. For large datasets, where a direct SVD is not computationally feasible, various iterative schemes for partial SVD have been proposed, pioneered by Lanczos. Because the properties of iterative decompositions are better understood for symmetric positive-definite matrices than for non-square matrices, instead of computing the SVD of the tomographic operator one can compute the eigendecomposition of the corresponding symmetric positive-definite matrix of the normal equations. We refer the reader to Saad (2003) and Meurant (2006) for an extensive review of modern iterative methods for sparse matrices and the Lanczos method. LSQR solvers based on Lanczos iterations have become a popular tool for solving systems of linear tomographic equations, and various extensions of LSQR for eigendecomposition, applied to covariance and resolution analysis in seismic tomography (primarily in global seismology), have been proposed (Zhang and McMechan, 1995; Berryman, 2000; Minkoff, 1996; Yao et al., 1999; Vasco et al., 2003). While it is argued in Nolet et al.
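The equivalence the introduction leans on — the eigenvectors of the normal-equations matrix A^T A are the right singular vectors of A, and its eigenvalues are the squared singular values — can be verified in a few lines. A minimal NumPy sketch with a random stand-in operator (not part of the paper's workflow):

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((30, 8))   # non-square "tomographic" operator stand-in

# SVD of the operator itself (singular values come back in descending order)
_, s, Vt = np.linalg.svd(A, full_matrices=False)

# eigendecomposition of the symmetric positive-definite normal matrix A^T A
w, V = np.linalg.eigh(A.T @ A)     # eigh returns ascending eigenvalues
w, V = w[::-1], V[:, ::-1]         # reorder descending to match the SVD

# eigenvalues of A^T A equal the squared singular values of A
assert np.allclose(w, s**2)
```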
(1999) that such approaches do not span the full eigenspace within a limited number of LSQR iterations, and that the corresponding covariance and resolution analysis is therefore inadequate, Zhang and Thurber (2007) showed that Lanczos iterations with a random starting vector can overcome this limitation. In this paper, we take an approach similar to that of Zhang and Thurber (2007) and apply it to tomographic velocity model building. Our method is a modification of preconditioned regularized least squares, based on the eigendecomposition of the Fisher information matrix. We investigate tomographic regularization and resolution quantification using the apparatus of eigendecomposition.

Preconditioned regularized least squares

We consider the following linearized forward problem of seismic tomography:

y = A x + n,   (1)

where y and x are the data and model perturbations, respectively, and n is additive noise. A is the linear operator obtained from the Frechet derivatives of the non-linear tomography operator; it models the sensitivity of the picks to the velocity (and, optionally, to the anisotropic parameters). We refer the reader to Woodward et al. (1998) and the references therein for the details and the linearization of the forward problem of common image point (CIP) tomography. Regularized least squares is one of the most common ways to solve equation 1. Its preconditioned version, following Woodward et al. (1998), is formulated by first preconditioning the model, x = P x_P, and then minimizing the cost functional

Φ(x_P) = (y − A P x_P)^T D^{-1} (y − A P x_P) + λ^2 x_P^T x_P.   (2)

Here, D is the noise covariance matrix, and λ is the damping parameter that controls the tradeoff between minimizing the data misfit and minimizing the preconditioned model norm.
Taking the variational derivative of Φ(x_P) with respect to x_P and equating the result to zero, the preconditioned regularized solution can be written as

x_P = I_{λ,P}^{-1} P^T A^T D^{-1} y,   with   I_{λ,P} := I_P + λ^2 I,

where I_P := P^T A^T D^{-1} A P is the Fisher information matrix corresponding to the preconditioned model parameterization x_P, and I denotes the identity matrix.
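On a toy dense problem, the closed-form solution above can be cross-checked against the equivalent augmented least-squares formulation. A sketch assuming a random stand-in for the operator A, a trivial preconditioner P, and white noise covariance D (none of these come from the paper's field data):

```python
import numpy as np

rng = np.random.default_rng(0)
m, k = 50, 20                          # number of data picks, model parameters
A = rng.standard_normal((m, k))        # stand-in tomographic operator
P = np.eye(k)                          # trivial preconditioner for the sketch
D = np.eye(m)                          # white noise covariance
lam = 0.5                              # damping parameter lambda
y = rng.standard_normal(m)             # data perturbation

Dinv = np.linalg.inv(D)
I_P = P.T @ A.T @ Dinv @ A @ P         # Fisher information matrix
I_lam = I_P + lam**2 * np.eye(k)       # damped information matrix I_{lam,P}
x_P = np.linalg.solve(I_lam, P.T @ A.T @ Dinv @ y)   # closed-form solution

# equivalent augmented least-squares system: [D^{-1/2} A P; lam I] x = [D^{-1/2} y; 0]
L = np.linalg.cholesky(Dinv)           # D^{-1} = L L^T, so L^T acts as D^{-1/2}
Aug = np.vstack([L.T @ A @ P, lam * np.eye(k)])
rhs = np.concatenate([L.T @ y, np.zeros(k)])
x_ls, *_ = np.linalg.lstsq(Aug, rhs, rcond=None)

assert np.allclose(x_P, x_ls)          # both routes give the same estimate
```

In production the augmented system is never formed densely; it is handed to a matrix-free LSQR solver, which is exactly the practice the next section discusses.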

We recognize x_P as the solution of the following augmented system of linear equations:

[D^{-1/2} A P; λ I] x_P = [D^{-1/2} y; 0].   (3)

In practice, LSQR methods are used to solve equation 3. The choice of λ and P is subjective: it is either an educated guess or is determined by the user after testing a range of possibilities. Moreover, as mentioned in the introduction, the resolution and uncertainty analysis available from LSQR is questionable.

Truncated iterative eigendecomposition

Now consider the objective function in equation 2 for λ = 0, i.e., the preconditioned likelihood function. Φ(x_P) then attains its minimum at x_P = I_P^{-1} P^T A^T D^{-1} y. As an alternative to the preconditioned regularized least-squares method of the previous section, one can perform a truncated eigenvalue decomposition of I_P and compute a regularized inverse I_{P,n}^{-1} from a finite number of eigenvalues and eigenvectors of I_P:

I_{P,n}^{-1} = U_{P,n} Λ_{P,n}^{-1} U_{P,n}^T,

where Λ_{P,n} is the diagonal matrix of the n largest eigenvalues of I_P and U_{P,n} is the matrix of the corresponding n eigenvectors. The truncation level n and the damping λ can be chosen to give approximately the same results (Scales et al., 1990). Damping optimization requires computing I_{λ,P}^{-1}, or equivalently solving equation 3, for a whole set of λ values. Iterative eigendecomposition, on the other hand, makes it possible to obtain the solution for I_{P,n+1} by incorporating one new eigenvalue and eigenvector into the eigendecomposition of I_{P,n}, without recomputing the decomposition from scratch. This property makes truncated eigenvalue decomposition attractive for regularization optimization, even though it normally requires more iterations and some extra computation than a single run of LSQR. Furthermore, automated cutoff criteria for the truncation can be developed from the statistics of the residual updates as more eigenvalues and eigenvectors are computed.
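A minimal sketch of the truncated scheme, using SciPy's Lanczos-based eigsh on a small random symmetric positive-definite stand-in for I_P (in a real tomography code I_P would be applied matrix-free as a LinearOperator rather than formed explicitly):

```python
import numpy as np
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(1)
k, n = 40, 10                          # model size and truncation level
A = rng.standard_normal((80, k))       # stand-in for D^{-1/2} A P
I_P = A.T @ A                          # Fisher information matrix (SPD)
b = A.T @ rng.standard_normal(80)      # right-hand side P^T A^T D^{-1} y

# n largest eigenpairs via Lanczos iterations (ARPACK under the hood)
vals, U_n = eigsh(I_P, k=n, which='LA')

# truncated regularized solution x_n = U_n diag(1/vals) U_n^T b
x_n = U_n @ ((U_n.T @ b) / vals)

# diagonal of the resolution operator U_n U_n^T: 1 means a model parameter
# is perfectly resolved by the retained eigenvectors, 0 means unresolved
res_diag = np.sum(U_n**2, axis=1)      # trace of U_n U_n^T equals n
```

The diagonal of U_n U_n^T is the resolution diagnostic mapped in the case study; since its trace equals the truncation level n, it also shows how the resolved fraction of the model grows as eigenpairs are added one by one.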
The resolution matrix for x_P is defined by R_P := I_{P,n}^{-1} I_P. The eigendecomposition I_P = U_P Λ_P U_P^T provides a straightforward way to calculate it:

R_P = [U_{P,n} Λ_{P,n}^{-1} U_{P,n}^T] U_P Λ_P U_P^T = U_{P,n} U_{P,n}^T.   (4)

Case Study

The following example illustrates the application of truncated iterative eigendecomposition for regularization and resolution analysis on a field dataset. Figure 1 shows slices of the 3D tomographic velocity update together with a diagonal and a row of the resolution matrix, for 100 eigenvectors (left column) and 180 eigenvectors (right column). The row of the resolution matrix is taken for the node at the intersection of the slices, and Figures 1(g) and (h) show the 100th and 180th eigenvectors, respectively. The discretization of the volume of interest is 98x170x88 nodes, with the free surface at z = 0. A preconditioner with smoothing lengths of 10x10x4 nodes was applied. Lanczos iterations with full Gram-Schmidt orthogonalization were used; they were stopped automatically at the 1288th iteration, when the loss of orthogonality in single precision indicated a good approximation to the full eigenspace. The solutions for 100 and 180 eigenvectors are presented in Figures 1(a) and (b), respectively. Since the resolution is a function of illumination, data and modeling errors, preconditioning, and truncation, the behavior of the resolution matrix as a function of the number of iterations can itself serve as a truncation criterion. In our case, the results in Figures 1(c)-(f) allow us to compare the quality of the tomographic reconstruction between the solutions for 100 and 180 eigenvectors. Although the solution for 180 eigenvectors (Figure 1(b)) has more visual

details and higher amplitudes than the solution for 100 eigenvectors (Figure 1(a)), the 100-eigenvector solution is the more reliable of the two because of the smaller spread of its resolution. Furthermore, the eigenvector in Figure 1(h) exhibits some unphysical edge anomalies.

Conclusions

Tomography is an important tool in modern industry practice for velocity model building, but great care should be taken in regularizing the solution. The proper choice of regularization parameters is a major challenge in general, and control over the solution uncertainty is of great importance in the inverse problem. In this paper we revisited the long-known eigendecomposition approach and demonstrated that its iterative implementation is feasible on modern parallel computer architectures for actual exploration seismic problems. The approach provides extra capabilities for regularization optimization and resolution analysis.

Acknowledgments

We thank Wayne Wright and Shruti Sheorey for helping with software development.

References

Berryman, J.G. [2000] Analysis of approximate inverses in tomography I-II. Optimization and Engineering 1, 87-115 and 437-473.
Lines, L.R. and Treitel, S. [1984] Tutorial: A review of least-squares inversion and its application to geophysical problems. Geophysical Prospecting 32, 159-186.
Meurant, G. [2006] The Lanczos and Conjugate Gradient Algorithms: From Theory to Finite Precision Computations. SIAM.
Minkoff, S.E. [1996] A computationally feasible approximate resolution matrix for seismic inverse problems. Geophysical Journal International 126, 345-359.
Nolet, G., Montelli, R. and Virieux, J. [1999] Explicit, approximate expressions for the resolution and a posteriori covariance of massive tomographic systems. Geophysical Journal International 138, 36-44.
Saad, Y. [2003] Iterative Methods for Sparse Linear Systems, 2nd edition. SIAM.
Scales, J.A., Docherty, P. and Gersztenkorn, A. [1990] Regularization of nonlinear inverse problems: Imaging the near-surface weathering layer. Inverse Problems 6, 115-131.
Vasco, D.W., Johnson, L.R. and Marques, O. [2003] Resolution, uncertainty, and whole Earth tomography. Journal of Geophysical Research 108(B1), 2022.
Woodward, M., Farmer, P., Nichols, D. and Charles, S. [1998] Automated 3D tomographic velocity analysis of residual moveout in prestack depth migrated common image point gathers. 68th Annual International Meeting, SEG, Expanded Abstracts 17, 1218-1221.
Yao, Z.S., Roberts, R.G. and Tryggvason, A. [1999] Calculating resolution and covariance matrices for seismic tomography with the LSQR method. Geophysical Journal International 138, 886-894.
Zhang, H. and Thurber, C. [2007] Estimating the model resolution matrix for large seismic tomography problems based on Lanczos bidiagonalization with partial reorthogonalization. Geophysical Journal International 170, 337-345.
Zhang, J. and McMechan, G.A. [1995] Estimation of resolution and covariance for large matrix inversions. Geophysical Journal International 121, 409-426.

Figure 1: Tomographic velocity update using (a) 100 and (b) 180 eigenvectors. Diagonal of the resolution matrix (see equation 4) for (c) 100 and (d) 180 eigenvectors. Row of the resolution matrix for the velocity node at x = 85, y = 151, z = 7 using (e) 100 and (f) 180 eigenvectors. The (g) 100th and (h) 180th eigenvectors.