A New Lower Bound for the Minimal Singular Value for Real Non-Singular Matrices by a Matrix Norm and Determinant


Applied Mathematical Sciences, Vol. 4, 2010, no. 64, 3189-3193

Kateřina Hlaváčková-Schindler
Commission for Scientific Visualization, Austrian Academy of Sciences
Donau-City Str. 1, A-1220 Vienna, Austria
katerina.schindler@assoc.oeaw.ac.at

Abstract

A new lower bound on the minimal singular value of real matrices, based on the Frobenius norm and the determinant, is presented. We show that under certain assumptions on the matrix $A$ this estimate is sharper than a recent bound of Hong and Pan based on a matrix norm and the determinant.

Keywords: real non-singular matrix, singular values, lower bound

1 Introduction

The singular values and eigenvalues of a real matrix are fundamental quantities describing its properties. They are, however, difficult to evaluate in general, so it is useful to know at least an interval in which they lie. The first bounds for eigenvalues were obtained more than a hundred years ago: the first paper using traces in eigenvalue inequalities was by Schur in 1909 [5], and possibly the best-known eigenvalue inequality is Gerschgorin's from 1931 [3]. More recently, several other lower bounds have been proposed for the smallest singular value of a square matrix, such as Johnson's bound, a Brauer-type bound, Li's bound, and an Ostrowski-type bound [6, 7, 1, 9, 12]. In this paper we derive an interval bound on the minimal singular value by means of a matrix norm and the determinant, applying a stronger version of the so-called Kantorovich inequality due to Diaz and Metcalf [2].

2 Preliminary Notes

Let $A$ be an $n \times n$, $n \ge 2$, matrix with real elements. Let $\|A\|_E = \left(\sum_{i,j=1}^n |a_{ij}|^2\right)^{1/2}$ denote the Frobenius norm of $A$, and let $\mathrm{tr}(A) = \sum_{i=1}^n a_{ii}$ denote its trace. The spectral norm of $A$ is $\|A\|_2 = \max_{1 \le i \le n} \sqrt{\lambda_i}$, where the $\lambda_i$ are the eigenvalues of $A^T A$. If $\lambda_1, \dots, \lambda_n$ are the eigenvalues of $A$, then $\det A = \lambda_1 \lambda_2 \cdots \lambda_n$. Denote the smallest singular value of $A$ by $\sigma_n$ and its largest singular value by $\sigma_1$. It holds that
$$\|A\|_E^2 = \sum_{i=1}^n \sigma_i^2 = \mathrm{tr}(A^T A).$$

Hong and Pan gave in [4] a lower bound for $\sigma_n$ of a nonsingular matrix:
$$\sigma_{\min} > \left(\frac{n-1}{n}\right)^{(n-1)/2} \frac{|\det A|}{\|A\|_E^{\,n-1}}. \qquad (1)$$

In 2007, Türkmen and Civciv [10] also used a matrix norm and the determinant to find upper bounds for the maximal and minimal singular values of positive definite matrices. For a symmetric positive definite matrix $A$ one can suppose $\|A\|_2 = 1$, i.e. that the matrix is normalized; consequently its condition number is $\kappa(A) = \|A\|_2 \|A^{-1}\|_2 = 1/\sigma_n$. The normalization can always be achieved by multiplying the system $Ax = b$ by a suitable constant, or for example by the divisive normalization of Weiss [11] and Ng et al. [8], which uses the Laplacian of the symmetric positive definite matrix $A = \{a_{ij}\}_{i,j=1}^n$: the transformation is $D^{-1/2} A D^{-1/2}$, where $D = \{d_{ij}\}_{i,j=1}^n$ with $d_{ij} = 0$ for $i \ne j$ and $d_{ii} = \sum_{j=1}^n a_{ij}$.
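As a quick numerical illustration (not part of the original paper), the identities above and the Hong–Pan bound can be checked with NumPy on an arbitrary sample matrix; the bound (1) is evaluated here in the form $\sigma_{\min} \ge \left(\frac{n-1}{n}\right)^{(n-1)/2} |\det A| / \|A\|_E^{\,n-1}$:

```python
import numpy as np

# Sanity checks for the identities of Section 2 on an arbitrary
# sample matrix (the matrix itself is not taken from the paper).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
n = A.shape[0]

sigma = np.linalg.svd(A, compute_uv=False)  # singular values, descending
fro = np.linalg.norm(A, "fro")

# ||A||_E^2 = sum_i sigma_i^2 = tr(A^T A)
assert np.isclose(fro**2, np.sum(sigma**2))
assert np.isclose(fro**2, np.trace(A.T @ A))

# |det A| = sigma_1 * ... * sigma_n
assert np.isclose(abs(np.linalg.det(A)), np.prod(sigma))

# Hong-Pan lower bound (1):
#   sigma_min >= ((n-1)/n)^((n-1)/2) * |det A| / ||A||_E^(n-1)
hong_pan = ((n - 1) / n) ** ((n - 1) / 2) * abs(np.linalg.det(A)) / fro ** (n - 1)
assert hong_pan <= sigma[-1]
```

For this sample matrix the bound evaluates to about 1.13 against $\sigma_{\min} \approx 1.84$, so the assertion holds with room to spare.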

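The main result of Section 3 below is proved via the Diaz–Metcalf inequality: for real numbers $a_k \ne 0$ and $b_k$ with $m \le b_k/a_k \le M$, one has $\sum_k b_k^2 + mM \sum_k a_k^2 \le (m + M) \sum_k a_k b_k$. A minimal numerical sketch (not part of the original paper) checking this inequality for the choice $b_k = \sigma_k$, $a_k = 1/\sigma_k$ used in the proof, with arbitrary sample singular values:

```python
import numpy as np

# Diaz-Metcalf: if m <= b_k / a_k <= M for all k, then
#   sum(b_k^2) + m*M*sum(a_k^2) <= (m + M) * sum(a_k * b_k).
# As in the proof, take b_k = sigma_k and a_k = 1/sigma_k, so that
# b_k / a_k = sigma_k^2 lies in [sigma_min^2, sigma_max^2].
sigma = np.array([3.0, 2.0, 1.0])  # arbitrary sample singular values
b = sigma
a = 1.0 / sigma
m, M = np.min(sigma) ** 2, np.max(sigma) ** 2

lhs = np.sum(b**2) + m * M * np.sum(a**2)
rhs = (m + M) * np.sum(a * b)  # a_k * b_k = 1, so rhs = (m + M) * n
assert lhs <= rhs
```

With these values the left-hand side is 26.25 and the right-hand side is 30, so the inequality holds; for $n = 2$ it holds with equality.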
3 Main Results

Theorem 3.1 Let $A$ be a nonsingular matrix with singular values ordered $\sigma_{\max} = \sigma_1 \ge \dots \ge \sigma_n = \sigma_{\min}$, with $\sigma_{\max} \ne \sigma_{\min}$, and let $\|A\|_E = \left(\sum_{i,j=1}^n |a_{ij}|^2\right)^{1/2}$ be the Frobenius norm of $A$.

(i) Then for its minimal singular value it holds that
$$0 < \left( \frac{\|A\|_E^2 - n\sigma_{\max}^2}{n\left(1 - \frac{\sigma_{\max}^2}{|\det A|^{2/n}}\right)} \right)^{1/2} < \sigma_{\min}. \qquad (2)$$

(ii) For $\sigma_{\max} = 1$ (supposing that $|\det A| \ne 1$) it holds that
$$0 < \left( \frac{|\det A|^{2/n}\left(\|A\|_E^2 - n\right)}{n\left(|\det A|^{2/n} - 1\right)} \right)^{1/2} < \sigma_{\min}.$$

Proof: (i) We apply the following result of Diaz and Metcalf [2], which is a stronger form of the Pólya–Szegő and Kantorovich inequalities. Let the real numbers $a_k \ne 0$ and $b_k$ ($k = 1, \dots, n$) satisfy $m \le b_k/a_k \le M$. Then
$$\sum_{k=1}^n b_k^2 + mM \sum_{k=1}^n a_k^2 \le (m + M) \sum_{k=1}^n a_k b_k.$$
Let $b_k = \sigma_k$, $a_k = 1/\sigma_k$, $m = \sigma_{\min}^2$, $M = \sigma_{\max}^2$, with $m \ne M$. Then the Diaz–Metcalf inequality gives
$$\sum_{k=1}^n \sigma_k^2 + mM \sum_{k=1}^n \frac{1}{\sigma_k^2} \le (M + m)\,n.$$
From this inequality and from the relationship between the arithmetic and geometric means it follows that
$$\|A\|_E^2 < Mn + mn - \frac{mMn}{|\det A|^{2/n}},$$
hence $\|A\|_E^2 - Mn < mn\left(1 - \frac{M}{|\det A|^{2/n}}\right)$, and from this
$$\frac{\|A\|_E^2 - Mn}{n\left(1 - \frac{M}{|\det A|^{2/n}}\right)} < m = \sigma_{\min}^2,$$
and the statement of the theorem follows. The bound is positive, since for this either ($\|A\|_E^2 - Mn > 0$ and $1 - \frac{M}{|\det A|^{2/n}} > 0$) or ($\|A\|_E^2 - Mn < 0$ and $1 - \frac{M}{|\det A|^{2/n}} < 0$) must hold. The second case does hold, since $\|A\|_E^2 = \sum_{i=1}^n \sigma_i^2 < n\sigma_{\max}^2$ and $\prod_{i=1}^n \sigma_i^2 = |\det A|^2 < M^n = (\sigma_{\max}^2)^n$.

(ii) follows from (i).

4 Comparison with the estimate of Hong and Pan

We show that under certain conditions our estimate of the minimal singular value is sharper than the estimate of Hong and Pan [4].

Theorem 4.1 Let $A$ be an $n \times n$ nonsingular matrix, $n > 1$, whose singular values satisfy $\sigma_{\max} = \sigma_1 = 1 \ge \dots \ge \sigma_n = \sigma_{\min}$ with $\sigma_{\max} \ne \sigma_{\min}$. If also $0 < |\det A|$, $\|A\|_E < 1$, then
$$\left(\frac{n-1}{n}\right)^{(n-1)/2} |\det A| < \left( \frac{|\det A|^{2/n}\left(\|A\|_E^2 - n\right)}{n\left(|\det A|^{2/n} - 1\right)} \right)^{1/2}, \qquad (3)$$
i.e. our estimate is sharper than the estimate (1) of Hong and Pan.

Proof: Since $0 < |\det A|, \|A\|_E < 1$, (3) is equivalent to
$$\left(\frac{n-1}{n}\right)^{n-1} |\det A|^2 < \frac{|\det A|^{2/n}\left(\|A\|_E^2 - n\right)}{n\left(|\det A|^{2/n} - 1\right)}. \qquad (4)$$
Since the arithmetic mean $A(\sigma_i^2)$ and the geometric mean $G(\sigma_i^2)$ of $\sigma_1^2, \dots, \sigma_n^2$ satisfy $A(\sigma_i^2) \ge G(\sigma_i^2)$, it holds that
$$\frac{\frac{1}{n}\|A\|_E^2 - 1}{|\det A|^{2/n} - 1} \ge 1.$$
Assume by contradiction that the opposite of (4) holds, i.e.
$$\frac{\frac{1}{n}\|A\|_E^2 - 1}{|\det A|^{2/n} - 1} \le \left(\frac{n-1}{n}\right)^{n-1} |\det A|^{2 - 2/n}. \qquad (5)$$
Since the right-hand side of (5) is smaller than 1 while the left-hand side is at least 1, (5) cannot hold, which brings (5) to a contradiction.

ACKNOWLEDGEMENTS. The author is grateful for the support of the projects of the Czech Academy of Sciences GACR 102/08/0567 and MSMT CR 2C06001.

References

[1] A. Brauer, I.C. Gentry: Bounds for the greatest characteristic root of an irreducible nonnegative matrix. Linear Algebra Appl., 8:105-107, 1974.

[2] J.B. Diaz, F.T. Metcalf: Stronger Forms of a Class of Inequalities of G. Pólya-G. Szegő, and L.V. Kantorovich. University of Maryland and U.S. Naval Ordnance Laboratory, 1963.

[3] S. Gerschgorin: Über die Abgrenzung der Eigenwerte einer Matrix. Izv. Akad. Nauk. USSR Otd. Fiz.-Mat. Nauk, 7:749-754, 1931.

[4] Y.P. Hong, C.T. Pan: A lower bound for the smallest singular value. Linear Algebra Appl., 172:27-32, 1992.

[5] L. Mirsky: An Introduction to Linear Algebra. Oxford U.P., 1955.

[6] C.R. Johnson: A Gersgorin-type lower bound for the smallest singular value. Linear Algebra Appl., 112:1-7, 1989.

[7] C.R. Johnson, T. Szulc: Further lower bounds for the smallest singular value. Linear Algebra Appl., 272:169-179, 1998.

[8] A.Y. Ng, M.I. Jordan, Y. Weiss: On spectral clustering: Analysis and an algorithm. In S. Becker, T.G. Dietterich, and Z. Ghahramani (Eds.), Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press, 2002.

[9] L. Li: Lower bounds for the smallest singular value. Computers and Mathematics with Applications, 41:483-487, 2001.

[10] R. Türkmen, H. Civciv: Some bounds for the singular values of matrices. Applied Mathematical Sciences, Vol. 1, 2007, no. 49, 2443-2449.

[11] Y. Weiss: Segmentation using eigenvectors: A unifying view. In Proceedings IEEE International Conference on Computer Vision (Vol. 9, pp. 975-982). Piscataway, NJ: IEEE Press, 1999.

[12] Y. Yamamoto: Equality conditions for lower bounds on the smallest singular value of a bidiagonal matrix. Applied Mathematics and Computation, Vol. 200, Issue 1, June 2008, pp. 254-260.

Received: May, 2010