Virtual Eigenvalues of the High Order Schrödinger operator II


ESI: The Erwin Schrödinger International Institute for Mathematical Physics, Boltzmanngasse 9, A-1090 Wien, Austria

Virtual Eigenvalues of the High Order Schrödinger operator II

Jonathan Arazy
Leonid Zelenko

Vienna, Preprint ESI 706 (2005), October 3, 2005

Supported by the Austrian Federal Ministry of Education, Science and Culture. Available via

Virtual eigenvalues of the high order Schrödinger operator II

Jonathan Arazy and Leonid Zelenko

Abstract. We consider the Schrödinger operator H_γ = (−Δ)^l + γV(x) acting in the space L_2(ℝ^d), where 2l ≥ d, V(x) ≥ 0, V(x) is continuous and not identically zero, and lim_{|x|→∞} V(x) = 0. We study the asymptotic behavior, as γ ↑ 0, of the non-bottom negative eigenvalues of H_γ which are born at the moment γ = 0 from the lower bound λ = 0 of the spectrum σ(H_0) of the unperturbed operator H_0 = (−Δ)^l (virtual eigenvalues). To this end we use the Puiseux-Newton diagram for a power expansion of eigenvalues of a certain class of polynomial matrix functions. For groups of virtual eigenvalues having the same rate of decay, we obtain asymptotic estimates of Lieb-Thirring type.

Mathematics Subject Classification 2000. Primary: 47F05; Secondary: 47E05, 35Pxx.

Keywords. Schrödinger operator, virtual eigenvalues, coupling constant, asymptotic behavior of virtual eigenvalues, Puiseux-Newton diagram, Lieb-Thirring estimates.

1 Introduction

In the present paper, which is a continuation of [Ar-Z], we consider the elliptic differential operator of order 2l (l ∈ ℕ)

  H_γ = (−Δ)^l + γV(x)   (1.1)

acting in the space L_2(ℝ^d). Here V(x) is the multiplication operator in L_2(ℝ^d) by a real-valued, continuous, not identically zero, non-negative function V(x), defined on ℝ^d and tending to zero sufficiently fast as |x| → ∞. We denote this operator briefly by V. We assume that the coupling constant γ is real. As is conventional in the literature, we call the operator H_γ the Schrödinger operator of order 2l and we call the function V(x) the potential. We consider the so-called virtual eigenvalues of the operator H_γ. Recall that these are the negative eigenvalues which are born at the moment γ = 0 from the endpoint λ = 0 of the gap (−∞, 0) of the spectrum σ(H_0) of the unperturbed operator H_0 = (−Δ)^l, while γ varies from 0 to a small negative value γ_0 (see Definitions 3.4 and 3.3 of [Ar-Z]).
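The birth of a virtual eigenvalue can be watched numerically in the simplest case l = 1, d = 1. The following sketch (an illustration under stated assumptions, not part of the paper; the grid, box size and Gaussian potential are arbitrary choices) discretizes H_γ = −d²/dx² + γV on a large interval and checks that the lowest eigenvalue is negative for γ < 0 and is absorbed by the spectral edge λ = 0 as γ ↑ 0:

```python
import numpy as np

# Toy illustration (not from the paper): discretize H_gamma = -d^2/dx^2 + gamma*V
# (the case l = 1, d = 1) on a large interval with Dirichlet ends, and watch a
# negative eigenvalue being born from the spectral edge lambda = 0 as gamma < 0
# shrinks to 0.  Grid, box size and the Gaussian potential are arbitrary choices.
n, half = 801, 50.0
x = np.linspace(-half, half, n)
h = x[1] - x[0]
lap = (np.diag(np.full(n, 2.0))
       - np.diag(np.ones(n - 1), 1)
       - np.diag(np.ones(n - 1), -1)) / h**2      # -d^2/dx^2 with Dirichlet ends
V = np.exp(-x**2)                                 # V >= 0, continuous, decaying

def bottom_eigenvalue(gamma):
    return np.linalg.eigvalsh(lap + gamma * np.diag(V))[0]

e1, e2 = bottom_eigenvalue(-0.1), bottom_eigenvalue(-0.05)
assert e1 < 0 and e2 < 0          # a bound state exists for every gamma < 0
assert abs(e2) < abs(e1)          # and it shrinks toward 0 as gamma -> 0-
```

For weak coupling this eigenvalue follows the classical one-dimensional rate, roughly −γ²(∫V dx)²/4; the higher-order operators (l > 1) studied in this paper produce the different power rates described below.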
Both authors were partially supported by the Israel Science Foundation (ISF), grant number 585/00, and by the German-Israeli Foundation (GIF), grant number I /200. The second author was also partially supported by the KAMEA Project for Scientific Absorption in Israel.

In [Ar-Z] we studied the asymptotic behavior of the bottom virtual eigenvalue of the operator H_γ as γ ↑ 0. To this end we developed in Section 3 of [Ar-Z] a supplement to the Birman-Schwinger theory ([Bi3], [Sc], [Re-Si], [S]), in which we studied the process of the birth of eigenvalues in a gap of the spectrum of the unperturbed operator for a small coupling constant. This is a generalization (to the case of relatively compact perturbations) of the theory developed in our earlier paper [Ar-Z] for the case of finite-rank perturbations. In Section 4 of [Ar-Z] we extracted a finite-rank portion Φ(λ) from the Birman-Schwinger operator

  X_V(λ) = V^{1/2} ((−Δ)^l − λI)^{−1} V^{1/2}   (λ < 0),

such that the norm of the remainder X_V(λ) − Φ(λ) is uniformly bounded with respect to λ in (−δ, 0) for some δ > 0. In Section 5 of [Ar-Z], in order to obtain the asymptotic expansion of the bottom virtual eigenvalue of H_γ in the case of ℝ^d with d odd, we used a simple version of the Schrödinger method ([Bau], Ch. 3, no. 3.1.2) for a power expansion of the maximal eigenvalue μ_0(λ) of the operator Φ(λ). The use of this method was possible thanks to the fact that the quantity t^{2l−d} μ_0(−t^{2l}) is born at the moment t = 0 from a simple eigenvalue of the operator

  Φ_0 = lim_{t↓0} t^{2l−d} Φ(−t^{2l}).   (1.2)

The leading terms of the desired asymptotic expansion of the bottom virtual eigenvalue were obtained via an inversion of the asymptotic expansion of the maximal eigenvalue of the operator Φ(λ). In the case of an even dimension d we used a modification of the method mentioned above. The goal of the present paper is to obtain asymptotic estimates for the non-bottom virtual eigenvalues of the operator H_γ as γ ↑ 0. It turns out that in most cases the leading coefficients of these estimates are algebraically computable, in the sense that they are algebraic functions of power moments of the potential V(x).

In some particular cases these coefficients can be calculated explicitly (see Theorem 3.4 and Examples 3.1, 4.1). Notice that in [N-W] (Lemma 5.1) the only result concerning the asymptotics of the non-bottom virtual eigenvalues is that their rate of decay as γ ↑ 0 is bigger than the rate of decay of the bottom virtual eigenvalue. On the basis of the results mentioned above we obtain some new asymptotic formulas of Lieb-Thirring type. The paper is divided into five sections and an appendix. After this introduction (Section 1), we give in Section 2 the list of notation used in the paper. In Section 3 we obtain asymptotic estimates for the non-bottom virtual eigenvalues of the operator H_γ as γ ↑ 0 in the case of an odd dimension d and 2l > d (Theorem 3.3). The leading terms of these asymptotic estimates are of power type, because the finite-rank portion Φ(−t^{2l}) of the Birman-Schwinger operator X_V(−t^{2l}) is a meromorphic operator function in the case of an odd dimension d (see Lemma 4.3 and Proposition 4.4 of [Ar-Z]). We propose an algorithm for the evaluation of the leading coefficients of these asymptotic estimates. Notice that in this case we cannot use the simple version of the Schrödinger method for a power expansion of the unbounded branches μ_j(−t^{2l}) of non-maximal eigenvalues of the operator Φ(−t^{2l}), because the corresponding quantities t^{2l−d} μ_j(−t^{2l}) are born at the moment t = 0 from a multiple eigenvalue μ = 0 of the operator Φ_0 defined by (1.2). Therefore we use the Puiseux-Newton diagram for a power expansion of eigenvalues of a certain class of polynomial matrix functions

explained in the Appendix of the paper (see [Bau], A.7 and [Va-Tr], Ch. I, §2). In the one-dimensional case (d = 1) we derive from Theorem 3.3 explicit formulas for the leading coefficients in the asymptotic representation of the non-bottom virtual eigenvalues (Theorem 3.4). In Section 4 we obtain asymptotic estimates for the non-bottom virtual eigenvalues of the operator H_γ as γ ↑ 0 in the case of an even dimension d (Theorem 4.2). In this case the finite-rank portion Φ(−t^{2l}) of the Birman-Schwinger operator X_V(−t^{2l}) is no longer a meromorphic operator function, because it contains summands with ln t in its expansion near the point t = 0 (see Lemma 4.6 and Proposition 4.8 of [Ar-Z]). We try to overcome this difficulty with the help of some tricks. Notice that B. Simon has shown that for d = 2 and l = 1 the unique virtual eigenvalue of the operator H_γ has an exponential rate of decay as γ ↑ 0 (see [S], [S1] and Remark 5.7 of [Ar-Z]). It turns out that also in the general case of an even d with 2l ≥ d the operator H_γ always has virtual eigenvalues with an exponential rate of decay as γ ↑ 0, while all the remaining virtual eigenvalues (if they exist) have a power rate of decay. We succeeded in obtaining asymptotic formulas with algebraically computable leading coefficients only for the virtual eigenvalues with a power rate. For the virtual eigenvalues with an exponential rate we have obtained, in general, only two-sided exponential estimates (see assertion (iv) of Theorem 4.2). But in some particular cases we get an asymptotic estimate of the logarithm of such eigenvalues with algebraically computable leading coefficients (see assertion (i) of Theorem 5.6 of [Ar-Z] and assertion (ii) of Theorem 4.3). In Section 5 we obtain the asymptotic formulas of Lieb-Thirring type mentioned above, on the basis of the results of Sections 3 and 4 (Theorem 5.3).

Notice that in [N-W] estimates (ordinary or asymptotic with respect to γ) of sums of the form

  Σ_ν |λ_ν(γ)|^κ   (γ < 0, κ > 0)

have been carried out, where λ_ν(γ) are the negative eigenvalues of the operator H_γ. In contrast to this, we consider groups of virtual eigenvalues {λ_ν(γ)}_{ν∈N_j} with the same power rate of decay as γ ↑ 0 and obtain asymptotic estimates for sums of the form

  Σ_{ν∈N_j} |λ_ν(γ)|^{κ_j}   (1.3)

with suitable powers κ_j > 0. These asymptotic estimates enable us to obtain explicit information about the asymptotic behavior of generalized means of these groups of virtual eigenvalues (see Remark 5.4). Notice that in general we cannot write an asymptotic formula with an explicitly calculated leading coefficient for each individual virtual eigenvalue, because to this end we would have to solve an algebraic equation of high order (see assertion (iv) of Theorem 3.3). In some cases, for the group of exponentially decaying virtual eigenvalues, we obtain an asymptotic estimate of a sum which is a logarithmic analog of the sum (1.3) (see assertion (iii) of Theorem 5.3). The Appendix is devoted to the above-mentioned Puiseux-Newton diagram for a class of polynomial matrix functions. We add the label A to the numbers of propositions and formulas in the Appendix.

2 Notation

In this section we give a list of notation used in the present paper.

ℤ is the ring of all integers; ℕ is the set of all natural numbers 1, 2, ...; ℤ₊ = ℕ ∪ {0}; ℝ is the field of all real numbers; ℝ₊ = [0, ∞); ℂ is the field of all complex numbers; ℜ(z), ℑ(z) are the real and imaginary parts of a number z ∈ ℂ.

#S is the number of elements of a finite set S.

If X ⊆ Y, we shall occasionally write ∁X for Y \ X if Y is understood.

O(x) is generic notation for a neighborhood of a point x.

If M is a metric space, then dist(x, y) and dist(x, Y) are the distance between points x, y ∈ M and the distance between a point x ∈ M and a set Y ⊆ M.

span(M) is the closure of the linear span of a subset M of a Hilbert space H.

ℂ^d = ∏_{j=1}^d ℂ; ℝ^d = ∏_{j=1}^d ℝ; ℤ₊^d = ∏_{j=1}^d ℤ₊.

x·y = Σ_{j=1}^d x_j y_j is the canonical inner product of vectors x = (x_1, x_2, ..., x_d) and y = (y_1, y_2, ..., y_d) belonging to ℝ^d; |x| = (x·x)^{1/2} is the Euclidean norm in ℝ^d; |k| = Σ_{j=1}^d k_j is the l_1-norm of a multi-index k = (k_1, k_2, ..., k_d) ∈ ℤ₊^d.

x^k = ∏_{j=1}^d x_j^{k_j}, where x = (x_1, ..., x_d) ∈ ℝ^d and k = (k_1, ..., k_d) ∈ ℤ₊^d; k! = ∏_{j=1}^d k_j!, where k = (k_1, ..., k_d) ∈ ℤ₊^d.

f|_G is the restriction of a mapping f: A_1 → A_2 to a subset G ⊆ A_1.

If A is a closed linear operator acting in a Hilbert space H, then: R(A) is the resolvent set of A, that is, the set of all λ ∈ ℂ such that A − λI is continuously invertible; R_λ(A) (λ ∈ R(A)) is the resolvent of A, that is, R_λ(A) = (A − λI)^{−1}; σ(A) = ℂ \ R(A) is the spectrum of A.

P_G is the orthogonal projection onto a closed subspace G of a Hilbert space H.

S_2 is the Hilbert-Schmidt class of operators acting in a Hilbert space H; ‖T‖_2 is the Hilbert-Schmidt norm of T ∈ S_2.

On the set ℤ₊^d we define an ordering relation ≼ in the following manner: for k, n ∈ ℤ₊^d we say that k ≼ n if either |k| < |n|, or |k| = |n| and in the latter case the sequence k = (k_1, k_2, ..., k_d) is lexically less than the sequence n = (n_1, n_2, ..., n_d). According to this definition we can write

  ℤ₊^d = {k_ν}_{ν=0}^∞, where 0 = k_0 ≼ k_1 ≼ ... ≼ k_n ≼ ....   (2.1)

We also denote:

  K_j = {k ∈ ℤ₊^d : |k| = j}   (j ∈ ℤ₊);
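The ordering (2.1) and the layers K_j can be generated by a short routine (an illustrative sketch with hypothetical helper names, not part of the paper); for d = 3 it reproduces the enumeration k_1 = (0,0,1), k_2 = (0,1,0), k_3 = (1,0,0), ... used later in Example 3.1:

```python
from itertools import product
from math import comb

def graded_lex(d, jmax):
    """Multi-indices k in Z_+^d with |k| <= jmax, listed in the order (2.1):
    first by the layer |k| = j, then lexicographically inside each K_j."""
    seq = []
    for j in range(jmax + 1):
        seq.extend(sorted(k for k in product(range(j + 1), repeat=d)
                          if sum(k) == j))
    return seq

seq = graded_lex(3, 2)                                       # d = 3, layers K_0, K_1, K_2
S = [sum(1 for k in seq if sum(k) == j) for j in range(3)]   # S_j = #K_j
assert S == [1, 3, 6]                                        # S_j = C(j+d-1, d-1)
assert len(seq) == comb(2 + 3, 3)                            # #{k : |k| <= 2} = 10
assert seq[1:4] == [(0, 0, 1), (0, 1, 0), (1, 0, 0)]         # = k_1, k_2, k_3
```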

S_j = #K_j; it is evident that K_0 = {0}, hence S_0 = 1;

  Ḡ_j = {k ∈ ℤ₊^d : |k| ≤ j}   (j ∈ ℤ₊);  G_j = #Ḡ_j.

Assume that j ∈ ℤ₊ and C ⊆ K_j. We denote

  C' = (⋃_{ν=0}^{j−1} K_ν) ∪ C, if j > 0;

if j = 0 and C = {0}, we put C' = {0}; if j = 0 and C = ∅, we put C' = ∅.

The entries of the matrices which will be considered below are indexed by the elements of a linearly ordered set A with an ordering relation ≼. A^t is the transpose of a matrix A; Ā is the matrix whose entries are the complex conjugates of the corresponding entries of A; A* = Ā^t is the adjoint of A; colon(c_{n_1}, c_{n_2}, ..., c_{n_l}) (n_1 ≼ n_2 ≼ ... ≼ n_l) is the matrix with the columns c_{n_1}, c_{n_2}, ..., c_{n_l}; A_{k,n} (k, n ∈ A) is the entry of a matrix A lying in the row with index k and the column with index n; A_{H,G} (H, G ⊆ A) is the submatrix of A whose rows have indices from H and whose columns have indices from G (if H = G, we write A_H); the corresponding minor is det(A_{H,G}).

Let H and G be finite subsets of A, H ⊆ G, and let ∁H = G \ H. We denote Δ_{H,∁H} = (−1)^ν, where ν (= ν_{H,∁H}) is the number of pairs (k, n) such that n ∈ H, k ∈ ∁H and k ≼ n.

3 Asymptotic estimates for non-bottom virtual eigenvalues in case of an odd dimension

In this and in the next section we shall obtain asymptotic estimates for the non-bottom virtual eigenvalues of the operator H_γ as γ ↑ 0. We propose an algorithm for the evaluation of the leading coefficients of these asymptotic estimates. We use the Puiseux-Newton diagram for a power expansion of eigenvalues of a certain class of polynomial matrix functions, explained in the Appendix. In this section we consider the case of ℝ^d with d odd.

3.1° Assume that 2l > d.
Let us recall that in the case of an odd d the finite-rank portion Φ(λ) was extracted from the Birman-Schwinger operator X_V(λ) in Lemma 4.3 and Proposition 4.4 of [Ar-Z]; that is, it has the form

  Φ(λ) = Σ_{k,n: |k+n| ≤ 2m} |λ|^{(|k+n|−2m−1)/(2l)} ξ_{k+n} (·, h_n) h_k,   (3.1)

where

  m = l − (d+1)/2,   (3.2)

the quantities ξ_k are defined by

  ξ_k = (2π)^{−d} ∫_{ℝ^d} s^k ds/(|s|^{2l} + 1)   (|k| < 2l − d),   (3.3)

and, for k = 2p with p = (p_1, ..., p_d) ∈ ℤ₊^d, they have the following explicit form (ξ_k = 0 whenever some component of k is odd):

  ξ_{2p} = (2π)^{−d} · π/(l sin(π(d/2 + |p|)/l)) · (∏_{j=1}^d Γ(p_j + 1/2))/Γ(d/2 + |p|).   (3.4)

Furthermore, recall that

  h_k(x) = ((ix)^k/k!) (V(x))^{1/2}   (|k| ≤ 2m).   (3.5)

We shall assume that the potential V(x) satisfies the conditions

  V(·) ∈ C(ℝ^d),   (3.6)

  V(x) ≥ 0  for all x ∈ ℝ^d,   (3.7)

  lim_{|x|→∞} V(x) = 0,   (3.8)

and the condition

  ∫_{ℝ^d} |x|^{2(2l−d)} V(x) dx < ∞,   (3.9)

which ensures that the functions h_k(x) (|k| ≤ 2m) belong to the class L_2(ℝ^d). Our immediate goal is to obtain a convenient matrix representation of the operator Φ(λ). The entries of all the matrices considered below are indexed by elements of the linearly ordered set ℤ₊^d with the ordering relation ≼ defined in Notation (see (2.1)). Let us carry out the Schmidt orthogonalization of the sequence h_{k_j}(x) (|k_j| ≤ 2m), that is, consider in L_2(ℝ^d) the orthonormal sequence

  {f_j}_{j=0}^{G_{2m}−1}   (G_{2m} = #{k ∈ ℤ₊^d : |k| ≤ 2m})

constructed in the following manner:

  f_0 = h_0/‖h_0‖,   (3.10)

  f_j = (h_{k_j} − P_{L_j} h_{k_j})/‖h_{k_j} − P_{L_j} h_{k_j}‖   (j = 1, 2, ..., G_{2m}−1),   (3.11)

where

  L_j = span({h_0, h_{k_1}, ..., h_{k_{j−1}}}).   (3.12)

As is known,

  P_{L_j} h_{k_j} = Σ_{ν=0}^{j−1} λ_{j,ν} h_{k_ν},   (3.13)

where

  λ_{j,ν} = det(Γ(j,ν))/det(Γ(j)),   (3.14)

Γ(j) is the Gram matrix of the sequence h_0, h_{k_1}, ..., h_{k_{j−1}}, that is,

  Γ(j)_{r,s} = (h_{k_r}, h_{k_s})   (r, s ∈ {0, 1, ..., j−1}),   (3.15)

and Γ(j,ν) is the matrix of the form

  Γ(j,ν)_{r,s} = (h_{k_r}, h_{k_s}) if s ≠ ν, and (h_{k_r}, h_{k_j}) if s = ν   (r, s, ν ∈ {0, 1, ..., j−1}).   (3.16)

Recall that k_0 = 0 (see Notation, (2.1)). Thus we have the following orthogonalization formulas:

  f_0 = ω_{0,0} h_0,
  f_1 = ω_{1,0} h_0 + ω_{1,1} h_{k_1},
  ...........................................   (3.17)
  f_G = ω_{G,0} h_0 + ω_{G,1} h_{k_1} + ... + ω_{G,G} h_{k_G}   (G = G_{2m}−1),

where

  ω_{0,0} = 1/‖h_0‖,   (3.18)

  ω_{j,j} = 1/‖h_{k_j} − Σ_{r=0}^{j−1} λ_{j,r} h_{k_r}‖   (j ∈ {1, 2, ..., G_{2m}−1})   (3.19)

and

  ω_{j,ν} = −λ_{j,ν}/‖h_{k_j} − Σ_{r=0}^{j−1} λ_{j,r} h_{k_r}‖   (j ∈ {1, 2, ..., G_{2m}−1}, ν ∈ {0, 1, ..., j−1}).   (3.20)

Recall that the numbers λ_{j,ν} are defined by (3.14), (3.15) and (3.16). We shall need the following lemma.

Lemma 3.1. Let F be the subspace of L_2(ℝ^d) of the form

  F = span({h_{k_j}}_{j=0}^{G_{2m}−1})

and let {f_j}_{j=0}^{G_{2m}−1} be the orthonormal sequence obtained from {h_{k_j}}_{j=0}^{G_{2m}−1} via the orthogonalization process (3.17). Let W be the linear operator acting in F defined by the conditions

  f_j = W h_{k_j}   (j = 0, 1, ..., G_{2m}−1).   (3.21)

Then the matrix representation of the operator W in the basis {f_j}_{j=0}^{G_{2m}−1} has the form

  W_{ν,ρ} = ω_{ρ,ν} for ρ ≥ ν, and 0 for ρ < ν   (ν, ρ ∈ {0, 1, ..., G_{2m}−1}),   (3.22)

where ω_{ν,ρ} are the coefficients used in the orthogonalization formulas (3.17).
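Before the proof, the Cramer-rule expressions (3.14)-(3.16) admit a quick finite-dimensional sanity check (synthetic vectors stand in for the functions h_k; an illustration, not part of the paper): replacing the ν-th column of the Gram matrix Γ(j) by the vector of products (h_{k_r}, h_{k_j}) does yield the coefficients of the orthogonal projection P_{L_j} h_{k_j}.

```python
import numpy as np

# Sanity check of (3.14)-(3.16) with synthetic data standing in for the h_k.
rng = np.random.default_rng(1)
H = rng.standard_normal((8, 5))      # columns play the role of h_{k_0},...,h_{k_4}
j = 4
Hj = H[:, :j]                        # spans L_j
Gram = Hj.T @ Hj                     # Gamma(j)_{r,s} = (h_{k_r}, h_{k_s})
rhs = Hj.T @ H[:, j]                 # (h_{k_r}, h_{k_j})

lam = np.empty(j)
for nu in range(j):                  # lambda_{j,nu} = det Gamma(j,nu) / det Gamma(j)
    Gnu = Gram.copy()
    Gnu[:, nu] = rhs                 # Gamma(j,nu): column nu replaced, as in (3.16)
    lam[nu] = np.linalg.det(Gnu) / np.linalg.det(Gram)

resid = H[:, j] - Hj @ lam           # h_{k_j} - P_{L_j} h_{k_j}
assert np.allclose(Hj.T @ resid, 0)  # the residual is orthogonal to L_j
# ||resid|| = dist(h_{k_j}, L_j), whose reciprocal is omega_{j,j} in (3.19)
```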

Proof. Equalities (3.17) imply that

  h_0 = α_{0,0} f_0,
  h_{k_1} = α_{1,0} f_0 + α_{1,1} f_1,
  ...........................................   (3.23)
  h_{k_G} = α_{G,0} f_0 + α_{G,1} f_1 + ... + α_{G,G} f_G   (G = G_{2m}−1),

where the matrix

  A_{ν,ρ} = α_{ν,ρ} for ν ≥ ρ, and 0 for ν < ρ   (ν, ρ ∈ {0, 1, ..., G_{2m}−1})

is inverse to the matrix

  B_{ν,ρ} = ω_{ν,ρ} for ν ≥ ρ, and 0 for ν < ρ   (ν, ρ ∈ {0, 1, ..., G_{2m}−1}),

which is the transpose of the matrix defined by (3.22). On the other hand, equalities (3.23) mean that the matrix A^t is the representation of the operator W^{−1} in the basis {f_j}_{j=0}^{G_{2m}−1}; hence the representation of W in this basis is (A^t)^{−1} = (A^{−1})^t = B^t. These circumstances mean that the matrix defined by (3.22) is the representation of the operator W in this basis. The lemma is proven.

We now turn to the main statement of this subsection.

Proposition 3.2. Assume that 2l > d. Then for any fixed λ < 0 the set of all non-zero eigenvalues μ = μ(λ) of the operator Φ(λ), defined by (3.1), coincides with the set of all non-zero roots of the equation

  det(L(λ) − μT) = 0,

where L(λ) is the matrix of the form

  L(λ)_{k,n} = |λ|^{(|k+n|−2m−1)/(2l)} ξ_{k+n} for |k+n| ≤ 2m, and 0 for |k+n| > 2m   (|k| ≤ 2m, |n| ≤ 2m),   (3.24)

the integer m is defined by (3.2), the numbers ξ_k are defined by (3.4), the matrix T has the form

  T = WW*   (3.25)

and W is the matrix of the form (3.22), in which ω_{ν,ρ} are the coefficients used in the orthogonalization formulas (3.17) and calculated by (3.18), (3.19), (3.20), (3.14), (3.15) and (3.16).

Proof. We see from (3.1) that the subspace

  F = span({h_{k_j}}_{j=0}^{G_{2m}−1})

of the space L_2(ℝ^d) is an invariant subspace of the operator Φ(λ) and, furthermore,

  σ(Φ(λ)) = σ(Φ_F(λ)) ∪ {0},

where Φ_F(λ) = Φ(λ)|_F. It is therefore enough to prove the assertion of the proposition for the operator Φ_F(λ). Let f_j ∈ F (j = 0, 1, ..., G_{2m}−1) be the functions obtained from h_{k_j} by the orthogonalization procedure (3.17). Consider the linear operator W acting in the space F defined by conditions (3.21). Since the sequence {h_{k_j}}_{j=0}^{G_{2m}−1} is linearly independent, the operator W realizes a linear topological automorphism of the finite-dimensional space F. We see from (3.1) that

  Φ_F(λ) = W^{−1} L(λ) (W*)^{−1},   (3.26)

where L(λ) is the linear operator acting in F whose matrix representation in the orthonormal basis {f_j}_{j=0}^{G_{2m}−1} has the form (3.24). Representation (3.26) together with Lemma 3.1 implies the assertion of the proposition.

3.2° We now turn to the main result of this section. We assume that the virtual eigenvalues of the operator H_γ at λ = 0 are indexed by the elements of the linearly ordered set ℤ₊^d (see Notation, (2.1)) in such a way that for γ < 0

  λ_0(γ) ≤ λ_{k_1}(γ) ≤ ... ≤ λ_{k_j}(γ) ≤ ....

Theorem 3.3. Assume that d is odd, 2l > d, and the potential V(x) is not identically zero and satisfies conditions (3.6)-(3.8) and condition (3.9). Denote m = l − (d+1)/2. Then:

(i) The operator H_γ, defined by (1.1), has

  r = #{k ∈ ℤ₊^d : |k| ≤ m} = (m+d)!/(m! d!)   (3.27)

virtual eigenvalues {λ_k(γ)}_{|k| ≤ m} at the endpoint λ = 0 of the gap (−∞, 0) of σ(H_0) (this fact was established in [W] (Corollary 6.), but we prove it in the course of the proof of this theorem);

(ii) For the bottom virtual eigenvalue λ_0(γ) the asymptotic expansion described in Theorem 5.3 of [Ar-Z] is valid;

(iii) For the rest of the virtual eigenvalues λ_k(γ) (k ≠ 0) the following asymptotic representation is valid as γ ↑ 0:

  λ_k(γ) = −c_k |γ|^{2l/(2m+1−2|k|)} (1 + O(|γ|^{1/(2m+1−2|k|)}))   (0 < |k| ≤ m);   (3.28)

(iv) The constants c_k in (3.28) are calculated by the following formula for |k| = j ∈ {1, 2, ..., m}:

  c_k = e_k^{2l/(2m+1−2|k|)},   (3.29)

in which the numbers {e_k}_{|k|=j} are positive and form the set of roots of the following algebraic equation:

  (−e)^{S_j} det(Ξ_{Ḡ_{j−1}}) det(T_{∁Ḡ_{j−1}}) + Σ_{ν=1}^{S_j} (−e)^{S_j−ν} Σ_{C,D ⊆ K_j: #C=#D=ν} det(Ξ_{C',D'}) det(T_{∁C',∁D'}) Δ_{C',∁C'} Δ_{D',∁D'} = 0.   (3.30)

Here C' = (⋃_{ν=0}^{j−1} K_ν) ∪ C for C ⊆ K_j, the complement ∁ is taken in the index set Ḡ_{2m} = {k ∈ ℤ₊^d : |k| ≤ 2m}, and det(Ξ_{C',D'}) is the minor of Ξ with rows indexed by C' and columns indexed by D' (see Section 2).

The matrix Ξ has the form

  Ξ_{k,n} = ξ_{k+n} if |k+n| ≤ 2m, and 0 if |k+n| > 2m   (|k| ≤ 2m, |n| ≤ 2m),   (3.31)

where ξ_k is expressed by (3.4). Recall that the matrix T is defined by (3.25), (3.22), (3.18), (3.19), (3.20), (3.14), (3.15), (3.16) and (3.5).

Proof. For λ < 0 consider the Birman-Schwinger operator X_V(λ). Recall that it is defined in the following manner:

  X_V(λ) = V^{1/2} R_λ(H_0) V^{1/2}.   (3.32)

By Proposition 4.1 of [Ar-Z] this operator is compact. By Proposition 4.4 of [Ar-Z] the representation

  X_V(λ) = Φ(λ) + T(λ)   (3.33)

is valid, where, in view of formulas (4.7) and (4.7) from [Ar-Z] for the kernel Φ(x, y, λ) of the integral operator Φ(λ), the latter is a self-adjoint bounded operator of rank G_{2m} having the form (3.1); furthermore, T(λ) is an integral operator belonging to the Hilbert-Schmidt class. Taking into account that ‖T(λ)‖ ≤ ‖T(λ)‖_2, we obtain from estimate (4.8) of [Ar-Z]:

  there exists T∗ > 0 such that for all λ < 0:  ‖T(λ)‖ ≤ T∗.   (3.34)

Denote

  t = |λ|^{1/(2l)}   (3.35)

and (with a slight abuse of notation)

  Φ(t) = Φ(−t^{2l}).   (3.36)

By Proposition 3.2, for any fixed t the set of non-zero eigenvalues of the operator Φ(t) coincides with the set of all non-zero roots of the equation det(L(−t^{2l}) − μT) = 0, where the matrix L(λ) is defined by (3.24); that is, it has the form

  L(−t^{2l}) = t^{−(2m+1)} L(t),

in which

  L(t)_{k,n} = ξ_{k+n} t^{|k+n|} if |k+n| ≤ 2m, and 0 if |k+n| > 2m   (|k| ≤ 2m, |n| ≤ 2m).

Observe that, in view of (3.25), the matrix T is positive definite. Furthermore, the matrix Ξ_{Ḡ_m} is positive definite too, because, in view of (3.31) and formula (3.3) for ξ_k, it is the Gram matrix of a linearly independent system. The above circumstances and Proposition A.2 with N = 2m, p = m imply that the identically non-zero branches μ(t) of eigenvalues of the operator Φ(t) can be indexed by the elements of the linearly ordered set ℤ₊^d in such a way that

  for all t ∈ O(0) ∩ (0, ∞):  μ_0(t) ≥ μ_{k_1}(t) ≥ μ_{k_2}(t) ≥ ... ≥ μ_{k_{n_+}}(t) > 0 > μ_{k_{n_++1}}(t) ≥ ...   (3.37)

for some neighborhood O(0), and these branches have the form

  μ_k(t) = e_k t^{−(2m+1−2|k|)} φ_k(t) if |k| ≤ m,  and  μ_k(t) = t ψ_k(t) if m < |k| ≤ 2m,   (3.38)

where the functions φ_k(t) and ψ_k(t) are analytic in O(0) and φ_k(0) = 1. Furthermore, the numbers e_k (|k| ≤ m) are positive and they are calculated according to the rule indicated in assertion (iv) of the theorem. We see from (3.38) that

  lim_{t↓0} μ_k(t) = +∞ for |k| ≤ m, and 0 for |k| > m.   (3.39)

Let {μ^+_{k_j}(λ)}_{j=0}^{n_+} be the positive characteristic branches of the operator H_0 with respect to V on the gap (−∞, 0) of σ(H_0) (see Definition 3.2 of [Ar-Z]), indexed by elements of the linearly ordered set ℤ₊^d and arranged in non-increasing order:

  μ^+_0(λ) ≥ μ^+_{k_1}(λ) ≥ ... ≥ μ^+_{k_{n_+}}(λ) > 0.   (3.40)

Recall that for any fixed λ < 0 the sequence {μ^+_{k_j}(λ)}_{j=0}^{n_+} coincides with the set of all positive eigenvalues of the Birman-Schwinger operator X_V(λ). Observe that, in view of (3.35), (3.36) and (3.37), {μ_{k_j}(|λ|^{1/(2l)})}_{j=0}^{n_+} is the sequence of all positive branches of eigenvalues of the operator Φ(λ); here λ varies in (−σ, 0) for some σ > 0. In view of representation (3.33), estimate (3.34) and the orderings (3.37), (3.40), we get by Lemma 3.4 of [Ar-Z] for λ ∈ (−σ, 0):

  μ^+_{k_j}(λ) ≥ μ_{k_j}(|λ|^{1/(2l)}) − T∗, if μ_{k_j}(|λ|^{1/(2l)}) − T∗ > 0,   (3.41)

and

  μ_{k_j}(|λ|^{1/(2l)}) ≥ μ^+_{k_j}(λ) − T∗, if μ^+_{k_j}(λ) − T∗ > 0.   (3.42)

Let {μ^+_{k_j}(λ)}_{j=0}^{l(0)−1} be the main characteristic branches of H_0 with respect to V near the endpoint λ = 0 of the gap (−∞, 0) of σ(H_0) (see Definition 3.9 of [Ar-Z]). Here l(0) is the corresponding asymptotic multiplicity M(0, H_0, V) of the endpoint λ = 0. This means that

  lim_{λ↑0} μ^+_{k_j}(λ) = +∞ for 0 ≤ j ≤ l(0)−1, and lim_{λ↑0} μ^+_{k_j}(λ) < +∞ for j ≥ l(0).

Then properties (3.39), (3.41) and (3.42) imply that l(0) = r, where r is defined by (3.27). So assertion (i) of the theorem is proven. Moreover, in view of (3.38), the asymptotic representation

  μ^+_k(λ) = e_k |λ|^{−(2m+1−2|k|)/(2l)} (1 + O(|λ|^{1/(2l)}))   (|k| ≤ m)

is valid as λ ↑ 0. By Proposition 3.6 of [Ar-Z], the latter circumstances mean that there exist exactly r branches λ_k(γ) (|k| ≤ m) of virtual eigenvalues of the operator H_γ at λ = 0, and they have the form

  λ_k(γ) = (μ^+_k)^{−1}(−1/γ)   (γ < 0),

where (μ^+_k)^{−1} denotes the function inverse to μ^+_k;

hence the asymptotic representation (3.28) is valid, in which the coefficients c_k are calculated according to the rule indicated in assertion (iv) of the theorem. The theorem is proven.

3.3° In the one-dimensional case (d = 1) Theorem 3.3 yields explicit formulas for the leading coefficients in the asymptotic representation of the non-bottom virtual eigenvalues. Before formulating the corresponding theorem, let us specialize the notation introduced above to the one-dimensional case. We have for d = 1:

  m = l − (d+1)/2 = l − 1,  r = #{k ∈ ℤ₊ : k ≤ m} = l;

for j ∈ ℤ₊

  K_j = {k ∈ ℤ₊ : k = j} = {j},   (3.43)

hence for j ∈ ℕ and C ⊆ K_j:

  S_j = #K_j = 1,   (3.44)

  C' = {0, 1, ..., j} if C = {j}, and {0, 1, ..., j−1} if C = ∅.   (3.45)

For d = 1 the sequence of functions (3.5) has the form

  h_j(x) = ((ix)^j/j!) (V(x))^{1/2}   (j ∈ {0, 1, ..., 2l−2}).   (3.46)

In the case d = 1 the matrix Ξ, defined by (3.31), has the form

  Ξ_{ν,ρ} = ξ_{ν+ρ} if ν+ρ ≤ 2l−2, and 0 if ν+ρ > 2l−2   (ν, ρ ∈ {0, 1, ..., 2l−2}),   (3.47)

where

  ξ_j = (2π)^{−1} ∫_{−∞}^{∞} s^j ds/(s^{2l} + 1).   (3.48)

The orthogonalization formulas (3.17) acquire the following form in the case d = 1:

  f_0 = ω_{0,0} h_0,
  f_1 = ω_{1,0} h_0 + ω_{1,1} h_1,
  ...........................................   (3.49)
  f_G = ω_{G,0} h_0 + ω_{G,1} h_1 + ... + ω_{G,G} h_G   (G = 2l−2).

We now turn to the theorem promised at the beginning of this subsection.

Theorem 3.4. Let H_γ be the operator defined in L_2(ℝ) by the differential expression

  (−d²/dx²)^l + γV(x),

where the potential V(x) is not identically zero and satisfies conditions (3.6)-(3.8) (with d = 1) and the condition

  ∫_{−∞}^{∞} |x|^{2(2l−1)} V(x) dx < ∞.

Then:

(i) The operator H_γ has l virtual eigenvalues {λ_j(γ)}_{j=0}^{l−1} (γ < 0) at the endpoint λ = 0 of the gap (−∞, 0) of σ(H_0);

(ii) For the bottom virtual eigenvalue λ_0(γ) the asymptotic expansion described in Theorem 5.3 of [Ar-Z] is valid with d = 1 and m = l − 1;

(iii) For the rest of the virtual eigenvalues λ_j(γ) (j ≠ 0) the following asymptotic representation is valid as γ ↑ 0:

  λ_j(γ) = −c_j |γ|^{2l/(2l−2j−1)} (1 + O(|γ|^{1/(2l−2j−1)}))   (j = 1, 2, ..., l−1);   (3.50)

(iv) The constants c_j in (3.50) are positive and they are calculated by the following formula for j ∈ {1, 2, ..., l−1}:

  c_j = e_j^{2l/(2l−2j−1)},   (3.51)

where

  e_j = (det(Ξ_{{0,1,...,j}})/det(Ξ_{{0,1,...,j−1}})) (dist(h_j, L_j))²,   (3.52)

  L_j = span({h_0, h_1, ..., h_{j−1}})   (3.53)

and the matrix Ξ is defined by (3.47) and (3.48). Recall that the integral (3.48) is calculated by formula (3.4) with d = 1.

Proof. By virtue of Theorem 3.3, it remains only to prove formula (3.52). In view of (3.43), (3.44) and (3.45), equation (3.30) (see assertion (iv) of Theorem 3.3) takes the following form in the case d = 1:

  (−e) det(Ξ_{{0,1,...,j−1}}) det(T_{{j,j+1,...,2l−2}}) + det(Ξ_{{0,1,...,j}}) det(T_{{j+1,j+2,...,2l−2}}) = 0.   (3.54)

Recall that

  T = WW*,   (3.55)

where W is the triangular matrix of the form

  W_{ν,ρ} = ω_{ρ,ν} for ρ ≥ ν, and 0 for ρ < ν   (ν, ρ ∈ {0, 1, ..., 2l−2});

here ω_{ν,ρ} are the coefficients used in the orthogonalization formulas (3.49). The root of equation (3.54) is

  e_j = (det(Ξ_{{0,1,...,j}})/det(Ξ_{{0,1,...,j−1}})) (det(T_{{j+1,j+2,...,2l−2}})/det(T_{{j,j+1,...,2l−2}})).   (3.56)

Since W is an upper triangular matrix, (3.55) implies that

  T_{{j,j+1,...,2l−2}} = W_{{j,j+1,...,2l−2}} (W_{{j,j+1,...,2l−2}})*,

hence

  det(T_{{j,j+1,...,2l−2}}) = |det(W_{{j,j+1,...,2l−2}})|².   (3.57)

On the other hand, since W_{{j,j+1,...,2l−2}} is a triangular matrix,

  |det(W_{{j,j+1,...,2l−2}})| = ω_{j,j} ω_{j+1,j+1} ··· ω_{2l−2,2l−2}.

The latter equality, (3.56) and (3.57) imply that

  e_j = (det(Ξ_{{0,1,...,j}})/det(Ξ_{{0,1,...,j−1}})) ω_{j,j}^{−2}.   (3.58)

On the other hand, in view of (3.19) and (3.13),

  ω_{j,j} = 1/‖h_j − P_{L_j} h_j‖ = 1/dist(h_j, L_j),   (3.59)

where the subspace L_j is defined by (3.53). From (3.58) and (3.59) we get the desired formula (3.52). The theorem is proven.

Remark 3.5. In formula (3.52) the quantity dist(h_j, L_j) can be expressed explicitly in the following manner:

  dist(h_j, L_j) = ‖h_j − Σ_{ν=0}^{j−1} λ_{j,ν} h_ν‖,

where

  λ_{j,ν} = det(Γ(j,ν))/det(Γ(j)),

  Γ(j)_{r,s} = (h_r, h_s)   (r, s ∈ {0, 1, ..., j−1})

and

  Γ(j,ν)_{r,s} = (h_r, h_s) if s ≠ ν, and (h_r, h_j) if s = ν   (r, s, ν ∈ {0, 1, ..., j−1}).

3.4° In this subsection we shall consider an example of the operator H_γ on the basis of Theorem 5.3 from [Ar-Z] and of Theorem 3.3. In this example the coefficients of the asymptotic representation can be found in an explicit form. First we shall prove a lemma which will also be used in the next sections. Observe that the set K_1 = {k ∈ ℤ₊^d : |k| = 1} consists of d vectors,

  K_1 = {k_1, k_2, ..., k_d},

where k_j has the form

  (k_j)_ν = 1 if ν = j, and 0 if ν ≠ j.
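As a side check of the one-dimensional moments (3.48) entering Theorem 3.4: for even j < 2l − 1 they have the closed form ξ_j = 1/(2l sin(π(j+1)/(2l))), by the classical formula ∫_0^∞ s^j/(1+s^{2l}) ds = π/(2l sin(π(j+1)/(2l))), while ξ_j = 0 for odd j by symmetry. A numerical sketch (an illustration, not part of the paper):

```python
import numpy as np

# Check (3.48) against the closed form 1/(2l sin(pi (j+1)/(2l))) for even j.
def xi_numeric(j, l, n=400_000):
    t = (np.arange(n) + 0.5) / n            # midpoint rule on [0, 1)
    s = t / (1.0 - t)                       # substitution s = t/(1-t), [0,1) -> [0,oo)
    g = s**j / (s**(2 * l) + 1.0) / (1.0 - t)**2
    return g.mean() / np.pi                 # (1/2pi) * 2 * int_0^oo (even integrand)

def xi_closed(j, l):
    return 1.0 / (2 * l * np.sin(np.pi * (j + 1) / (2 * l)))

for l in (2, 3):
    for j in range(0, 2 * l - 1, 2):        # even j < 2l - 1
        assert abs(xi_numeric(j, l) - xi_closed(j, l)) < 1e-5
```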

Lemma 3.6. For any k ∈ K_1 the following equality is valid:

  det(T_{∁{0,k}}) = det(T_{∁{0}}) (‖h_k‖² − |(h_k, h_0)|²/‖h_0‖²),   (3.60)

where the complement ∁ is taken in the index set {k ∈ ℤ₊^d : |k| ≤ 2m}.

Proof. Assume that k = k_j (j ∈ {1, 2, ..., d}). Making use of the formula for the entries of an inverse matrix, we have

  det(T_{∁{0,k_j}}) = det(T_{∁{0}}) ((T_{∁{0}})^{−1})_{k_j,k_j}.   (3.61)

On the other hand, since T = WW* and the matrix W is upper triangular,

  T_{∁{0}} = W_{∁{0}} (W_{∁{0}})*.   (3.62)

Let us recall that the entries of the matrix

  (W^t)_{ν,ρ} = ω_{ν,ρ} for ρ ≤ ν, and 0 for ρ > ν   (ν, ρ ∈ {0, 1, ..., G_{2m}−1})

are used in the orthogonalization formulas (3.17). Then the equalities

  h_0 = α_{0,0} f_0,
  h_{k_1} = α_{1,0} f_0 + α_{1,1} f_1,
  ...........................................   (3.63)
  h_{k_G} = α_{G,0} f_0 + α_{G,1} f_1 + ... + α_{G,G} f_G   (G = G_{2m}−1)

hold, where the matrix

  A_{ν,ρ} = α_{ν,ρ} for ν ≥ ρ, and 0 for ν < ρ   (ν, ρ ∈ {0, 1, ..., G_{2m}−1})

is inverse to the matrix W^t. It is evident that

  A_{∁{0}} = ((W_{∁{0}})^t)^{−1}.

Hence we get from (3.62) that

  (T_{∁{0}})^{−1} = Ā_{∁{0}} (A_{∁{0}})^t.   (3.64)

Then, taking into account that {f_j}_{j=0}^{G_{2m}−1} is an orthonormal system and α_{0,0} = ‖h_0‖, we get from (3.63):

  (Ā_{∁{0}} (A_{∁{0}})^t)_{k_j,k_j} = Σ_{ν=1}^{j} |α_{j,ν}|² = ‖h_{k_j}‖² − |α_{j,0}|² = ‖h_{k_j}‖² − |(h_{k_j}, f_0)|² = ‖h_{k_j}‖² − |(h_{k_j}, h_0)|²/‖h_0‖².

The latter equality and equalities (3.61), (3.64) imply the desired equality (3.60). The lemma is proven.

We now turn to the example of the operator H_γ promised at the beginning of this subsection.

Example 3.1. Consider the case l = d = 3 and assume that the potential V(x) satisfies the conditions of Theorem 5.3 from [Ar-Z] and of Theorem 3.3 with l = d = 3. In this case m = l − (d+1)/2 = 1 and r = (m+d)!/(m! d!) = 4. The latter means that the operator H_γ has four virtual eigenvalues at the endpoint λ = 0 of the gap (−∞, 0) of σ(H_0). We shall write their asymptotic representation making use of the theorems mentioned above. First of all, observe that in our case

  G_2 = #{k ∈ ℤ₊³ : |k| ≤ 2} = 10

and the sets K_j = {k ∈ ℤ₊³ : |k| = j} (j ∈ {0, 1, 2}) have the form

  K_0 = {k_0} = {0},  K_1 = {k_1, k_2, k_3},  K_2 = {k_4, k_5, k_6, k_7, k_8, k_9},

where

  k_1 = (0,0,1), k_2 = (0,1,0), k_3 = (1,0,0), k_4 = (0,0,2), k_5 = (0,1,1), k_6 = (0,2,0), k_7 = (1,0,1), k_8 = (1,1,0), k_9 = (2,0,0).

Making use of Theorem 5.3 from [Ar-Z], we can write the following asymptotic expansion for the bottom virtual eigenvalue of the operator H_γ as γ ↑ 0:

  λ_0(γ) = −γ² (δ_0 + δ_1 γ + O(γ²))³,   (3.65)

where δ_0 and δ_1 are the coefficients of the polynomial p(ε) = ε(δ_0 + δ_1 ε), which is calculated by the following procedure:

  t_0(ε) = 0,  t_1(ε) = ε θ_0(t_0(ε)),  p(ε) = ε θ_1(t_1(ε)).

Here

  θ_0(t) = ν_0^{2/3},  θ_1(t) = (ν_0 + ν_1 t)^{2/3},

hence

  t_1(ε) = ε ν_0^{2/3}  and  p(ε) = ε ν_0^{2/3} (1 + (2/3) ν_1 ν_0^{−1/3} ε + O(ε²)).

These circumstances mean that the asymptotic expansion (3.65) acquires the following explicit form:

  λ_0(γ) = −γ² (ν_0^{2/3} + (2/3) ν_1 ν_0^{1/3} γ + O(γ²))³.

Recall that

  ν_0 = ξ_0 ‖h_0‖² = ξ_0 ∫_{ℝ³} V(s) ds,

  ν_1 = (Φ_1 X_0, X_0),  where  X_0 = h_0/‖h_0‖ = (V(x))^{1/2}/(∫_{ℝ³} V(s) ds)^{1/2},

and

  Φ_1 = Σ_{k,n: |k+n| = 2} ξ_{k+n} (·, h_n) h_k.

Also recall that the numbers ξ_k are calculated by formula (3.4) and the functions h_k are defined by (3.5).

18 Recall that ν 0 = ξ 0 h 0 2 = ξ 0 IR 3 V (s) ds, ν = (Φ X 0, X 0 ), X 0 = h 0 V (x) h 0 = V (s) ds IR 3 and Φ = ξ k+n (, h n )h k. k,n: k+n =2 Also recall that the numbers ξ k are calculated by formula (3.4) and the functions h k are defined by (3.5). We now turn to an asymptotic representation of the rest of virtual eigenvalues of the operator H γ. Let Ξ be the matrix with the size 0 0, defined by (3.3), (3.4) and T be the matrix with the same size, defined by (3.25), (3.22), (3.8), (3.9), (3.20), (3.4), (3.5) and (3.6). Observe that, in view of properties (i) and (ii) of the numbers ξ k given in Proposition 4.5 of [Ar-Z], { 0, if j ν, Ξ kj,k ν = (j, ν {, 2, 3}). ξ (2,0,0), if j = ν Hence we get: Ξ C, D = { 0, if C D, ξ 0 ( ξ(2,0,0) ) #C, if C = D (C, D K ). (3.66) By Theorem 3.3, for the non-bottom virtual eigenvalues λ k (γ) ( k = ) of the operator H γ the following asymptotic representation holds as γ 0: where λ k (γ) = c k γ 6 ( + O(γ)), c k = (e k ) 6 and, in view of (3.30) and (3.66), the quantities e k, e k2, e k3 are the roots of the following cubic equation: ( ) e 3 T {k,...,k 9} + e 2 ξ (2,0,0) T{k2,k 3,...,k 9} + T {k,k 3,...,k 9} + T {k,k 2,k 4,...,k 9} e ( ) 2 ( ) ξ (2,0,0) T{k3,...,k 9} + T {k2,k 4,...,k 9} + T {k,k 4,...,k 9} + ( ξ (2,0,0) ) 3 T{k4,...,k 9} = 0. Observe that, by Lemma 3.6, T {k2,k 3,...,k 9} = T {k,k 2,...,k 9} ( h k 2 (h k, h 0 ) 2 ) h 0 2, T {k,k 3,...,k 9} = T {k,k 2,...,k 9} ( h k2 2 (h k 2, h 0 ) 2 ) h 0 2 and T {k,k 2,k 4,...,k 9} = T {k,k 2,...,k 9} ( h k3 2 (h k 3, h 0 ) 2 h 0 2 ). 7

4  Asymptotic estimates for non-bottom virtual eigenvalues in the case of an even dimension

We see from Lemma 4.6 and Proposition 4.8 of [Ar-Z] that in the case of ℝ^d with d even the finite-rank portion Φ(−t^{2l}) of the Birman-Schwinger operator X_V(−t^{2l}) is no longer a meromorphic operator function, because its expansion near the point t = 0 contains summands with ln(1/t). In this section we overcome this difficulty with the help of some tricks.

4.1°. Before formulating the main theorem of this section, we prove the following lemma.

Lemma 4.1. Let L(t) be an operator function defined on an interval (a, b] and taking values in the set of linear self-adjoint operators acting in a Hilbert space H of a finite dimension N. Assume that there exists a subspace N ⊆ H, whose codimension is equal to n, such that

    lim sup_{t ↓ a}  sup_{v ∈ N, ‖v‖=1} (L(t)v, v) < ∞.   (4.1)

For each t ∈ (a, b] consider the eigenvalues of the operator L(t) arranged in non-increasing order:

    μ_1(t) ≥ μ_2(t) ≥ ... ≥ μ_N(t).

Then at most n of these eigenvalue branches have the property

    lim_{t ↓ a} μ_k(t) = +∞.   (4.2)

Proof. Assume, on the contrary, that at least n + 1 branches, that is

    μ_1(t) ≥ μ_2(t) ≥ ... ≥ μ_{n+1}(t),

have property (4.2), and let e_1(t), e_2(t), ..., e_{n+1}(t) be the eigenvectors corresponding to these eigenvalues. Consider the family of subspaces

    E_{n+1}(t) = span({e_j(t)}_{j=1}^{n+1}).

In view of (4.2) with k = 1, 2, ..., n+1,

    lim_{t ↓ a}  inf_{v ∈ E_{n+1}(t), ‖v‖=1} (L(t)v, v) = +∞.   (4.3)

Since codim(N) = n and dim E_{n+1}(t) = n + 1, we have

    F(t) = E_{n+1}(t) ∩ N ≠ {0}.

By condition (4.1),

    lim sup_{t ↓ a}  sup_{v ∈ F(t), ‖v‖=1} (L(t)v, v) < ∞.   (4.4)

On the other hand, in view of (4.3), we obtain the relation

    lim_{t ↓ a}  inf_{v ∈ F(t), ‖v‖=1} (L(t)v, v) = +∞,

which contradicts (4.4). ∎
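A minimal finite-dimensional illustration of Lemma 4.1 (our own toy example, not from the paper): for L(t) = [[1/t, 1], [1, 2]] on ℝ^2, the subspace N = span(e_2) has codimension 1 and (L(t)v, v) stays bounded on unit vectors of N, so at most one eigenvalue branch may tend to +∞ as t ↓ 0; the closed-form eigenvalues of a symmetric 2×2 matrix confirm this.

```python
import math

# Toy illustration of Lemma 4.1 (our own example): for
#     L(t) = [[1/t, 1],
#             [1,   2]],
# the subspace N = span(e_2) has codimension n = 1 and
# (L(t) e_2, e_2) = 2 is bounded as t -> 0, so by the lemma at most one
# eigenvalue branch may tend to +infinity.  The closed-form eigenvalues
# confirm it: one branch grows like 1/t, the other stays near 2.
def eigs(t):
    a, b, c = 1.0 / t, 1.0, 2.0
    disc = math.sqrt((a - c) ** 2 + 4.0 * b ** 2)
    return (a + c + disc) / 2.0, (a + c - disc) / 2.0   # mu_1 >= mu_2

for t in (1e-2, 1e-4, 1e-6):
    print(t, eigs(t))
```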

We now turn to the main result of this section. As above, we assume that the virtual eigenvalues of the operator H_γ at λ = 0 are indexed by the elements of the linearly ordered set ℤ^d_+ (see Notation, (2.1)), so that for γ < 0

    λ_0(γ) ≤ λ_{k_1}(γ) ≤ ... ≤ λ_{k_j}(γ) ≤ ....

Theorem 4.2. Assume that d is even, 2l > d, the potential V(x) is not identically zero, and it satisfies conditions (3.6)-(3.8) and the condition

    ∫_{ℝ^d} |x|^{2(2l−d)} (ln(1 + |x|))^2 V(x) dx < ∞.

Denote m = l − d/2. Then:

(i) The operator H_γ, defined by (1.1), has

    r = #{k ∈ ℤ^d_+ : |k| ≤ m} = \binom{m+d}{d}   (4.5)

virtual eigenvalues {λ_k(γ)}_{|k| ≤ m} at the endpoint λ = 0 of the gap (−∞, 0) of σ(H_0) (as in the case of an odd d, we prove this fact in the course of the proof of this theorem, although it has been established in [W], Corollary 6.1);

(ii) For the bottom virtual eigenvalue λ_0(γ) the asymptotic expansion described in Theorem 5.6 of [Ar-Z] is valid. For the rest of the virtual eigenvalues λ_k(γ) (k ≠ 0) the following asymptotic properties are valid:

(iii) If |k| = j ∈ {1, 2, ..., m−1}, the asymptotic estimate

    λ_k(γ) = c_k |γ|^{l/(m−|k|)} ( 1 + O(|γ|^{1/(2(m−|k|))} ln(1/|γ|)) )   (4.6)

holds as γ ↑ 0, where the constants c_k are calculated by the following formula:

    c_k = −(e_k)^{l/(m−|k|)}.   (4.7)

Here the numbers {e_k}_{|k|=j} are positive and form the set of roots of the algebraic equation (3.30), in which the matrix T is defined by (3.25), (3.22), (3.8), (3.9), (3.20), (3.4), (3.5) and (3.6), and the matrix Ξ is defined by (3.3) and (3.4);

(iv) If |k| = m, then for a small enough γ < 0

    exp(−f_−/|γ|) ≤ |λ_k(γ)| ≤ exp(−f_+/|γ|),   (4.8)

where f_+ and f_− are positive numbers which do not depend on γ.

Proof. For λ < 0 consider the finite-rank portion Φ(λ) extracted from the Birman-Schwinger operator X_V(λ) in Section 4 of [Ar-Z]. This is an integral operator with the kernel Φ(x, y, λ) defined by (4.48) of [Ar-Z], where F(x, y, λ) is defined by (4.22) of [Ar-Z] (see Lemma 4.6 and Proposition 4.8 of [Ar-Z]). For |k| ≤ 2m consider

also the functions h_k(x) of the form (3.5). Thus, for m > 0 the operator Φ(λ) can be written in the following manner:

    Φ(λ) = Σ_{k,n: |k+n| < 2m} ξ_{k+n} |λ|^{−(2m−|k+n|)/(2l)} (·, h_n) h_k
           + ln(1/|λ|) Σ_{k,n: |k+n| = 2m} η_{k+n} (·, h_n) h_k.   (4.9)

Recall that the quantity ξ_k is defined by formula (3.3), and η_k is defined by the formula

    η_k = (2π)^{−d} ∫_{ℝ^d} s^k ( |s|^{2l} + 1 )^{−2} ds   (|k| < 4l − d),   (4.10)

or, in an explicit form (with m_j = k_j/2 and m̄ = |k|/2):

    η_k = (2π)^{−d} ×
          { ( 1 − (d/2 + m̄)/l ) π ∏_{j=1}^d Γ(m_j + 1/2) / ( l sin((π/l)(d/2 + m̄)) Γ(d/2 + m̄) ),  if |k| < 2l − d,
            ∏_{j=1}^d Γ(m_j + 1/2) / ( l Γ(d/2 + m̄) ),  if |k| = 2l − d.   (4.11)

Consider the Birman-Schwinger operator X_V(λ) defined by (3.32). Let μ⁺_{k_ν}(λ) be the positive characteristic branches of the operator H_0 with respect to the operator V on the gap (−∞, 0) of σ(H_0), that is, the positive eigenvalues of the operator X_V(λ), arranged in non-increasing order for any fixed λ < 0 (see Definition 3.2 of [Ar-Z]). Let μ⁺_{k_ν}(Φ(λ)) be the branches of positive eigenvalues of the operator Φ(λ), arranged in the same manner. We shall write briefly

    μ⁺_ν(λ) = μ⁺_{k_ν}(λ),   μ⁺_ν(Φ(λ)) = μ⁺_{k_ν}(Φ(λ)).

By Proposition 4.8 of [Ar-Z], for some λ_0 < 0 and for any λ ∈ (λ_0, 0)

    ‖X_V(λ) − Φ(λ)‖ ≤ T̃,   (4.12)

where T̃ > 0 does not depend on λ. Thus, for a small enough λ < 0 the following estimates are valid:

    μ⁺_ν(λ) ≥ μ⁺_ν(Φ(λ)) − T̃,  if μ⁺_ν(Φ(λ)) − T̃ > 0,   (4.13)

and

    μ⁺_ν(Φ(λ)) ≥ μ⁺_ν(λ) − T̃,  if μ⁺_ν(λ) − T̃ > 0.   (4.14)

Consider the subspace

    F = span({h_k}_{|k| ≤ 2m})

of the space L_2(ℝ^d). In view of (4.9), F is an invariant subspace of the operator Φ(λ). Consider the operator Φ_F(λ) = Φ(λ)|_F. It is evident that the set of all non-zero eigenvalues of the operator Φ(λ) coincides with the set of non-zero eigenvalues of the operator Φ_F(λ). Let W be the linear operator acting in F defined by the conditions

    f_j = W h_{k_j}   (j = 0, 1, ..., G_{2m}),

where the sequence {f_j}_{j=0}^{G_{2m}} is obtained from the sequence {h_{k_j}}_{j=0}^{G_{2m}} by the orthogonalization process (3.10), (3.11), (3.12). By Lemma 3.1, the operator W has the matrix representation (3.22) in the basis {f_j}_{j=0}^{G_{2m}} of F, where the ω_{ν,ρ} are the coefficients used in the orthogonalization formulas (3.7). As in the proof of Proposition 3.2, we get from (4.9) that

    Φ_F(λ) = W K(λ) W^*,   (4.15)

where K(λ) is a linear operator in F whose matrix representation in the orthonormal basis {f_j}_{j=0}^{G_{2m}} has the following form for m > 0:

    K(λ)_{k,n} = { ξ_{k+n} |λ|^{−(2m−|k+n|)/(2l)},  if |k+n| < 2m,
                   ln(1/|λ|) η_{k+n},  if |k+n| = 2m,
                   0,  if |k+n| > 2m }   (|k|, |n| ≤ 2m).   (4.16)

Consider the following subspace of F:

    E = (W^*)^{−1}(Ẽ),  where  Ẽ = span({f_j}_{m < |k_j| ≤ 2m}).

It is evident that E has codimension r in F, where r is defined by (4.5). Furthermore, from the matrix representation (4.16) of the operator K(λ) and formula (4.15) we see that

    (Φ_F(λ)y, y) = ( K(λ) W^* y, W^* y ) = 0  for any y ∈ E.

Then, by Lemma 4.1, the self-adjoint operator Φ_F(λ) has at most r branches of eigenvalues having the property

    lim_{λ ↑ 0} μ⁺_ν(Φ(λ)) = +∞.

The latter circumstance, estimate (4.14) and the fact that the μ⁺_ν(λ) are increasing functions force the asymptotic multiplicity (Definition 3.9 of [Ar-Z]) to satisfy M(0, H_0, V) ≤ r.

Our immediate goal is to prove that M(0, H_0, V) ≥ r and to get a lower estimate for the main characteristic branches {μ⁺_ν(λ)}_{ν=0}^{r−1} of H_0 with respect to V near the endpoint λ = 0 of the gap (−∞, 0) of σ(H_0) (see Definition 3.9 of [Ar-Z]). To this end consider another form of the matrix K(λ), connected with formula (4.22) from [Ar-Z] for the function F(x, y, λ) (which takes part in the representation (4.48) from [Ar-Z] for the kernel Φ(x, y, λ) of the integral operator Φ(λ)):

    K(λ)_{k,n} = { 2l η_{k+n} |λ|^{−(2m−|k+n|)/(2l)} / (2m − |k+n|),  if |k+n| < 2m,
                   ln(1/|λ|) η_{k+n},  if |k+n| = 2m,
                   0,  if |k+n| > 2m }   (|k|, |n| ≤ 2m).   (4.17)

Also consider the r-dimensional subspace

    F̃ = span({f_j}_{|k_j| ≤ m})

of the space F. Let us put λ = −t^{2l} and consider for t > 0 the operator function

    M(t) = P_F̃ K(−t^{2l})|_F̃,

having the following matrix representation in the basis {f_j}_{|k_j| ≤ m}:

    M(t)_{k,n} = { 2l η_{k+n} t^{−(2m−|k+n|)} / (2m − |k+n|),  if |k+n| < 2m,
                   2l η_{k+n} ln(1/t),  if |k+n| = 2m }   (|k|, |n| ≤ m).

Let us calculate the derivative of the latter matrix:

    ( (d/dt) M(t) )_{k,n} = −2l η_{k+n} / t^{2m−|k+n|+1}   (|k|, |n| ≤ m).

We see from the latter formula and formula (4.10) for η_k that the matrix function

    N(t) = −t^{2m+1} (d/dt) M(t)   (4.18)

satisfies the same conditions as the matrix function L(t) in Proposition A.2 with N = p = m. Hence by this proposition the minimal eigenvalue of the matrix N(t) has the form, for a small enough t > 0,

    μ_min(N(t)) = ẽ t^{2m} (1 + O(t)),

where ẽ > 0. Thus, in view of (4.18),

    ∃ δ > 0, C > 0  ∀ t ∈ (0, δ):  −(d/dt) M(t) ≥ (C/t) I.

Then for t ∈ (0, δ)

    M(t) − M(δ) = ∫_t^δ ( −(d/ds) M(s) ) ds ≥ C ln(δ/t) I.

Hence, since λ = −t^{2l},

    ∃ λ_0 < 0, C > 0  ∀ λ ∈ (λ_0, 0):  P_F̃ K(λ)|_F̃ ≥ C ln(1/|λ|) I.   (4.19)

Consider the following r-dimensional subspace of the space F:

    G = (W^*)^{−1}(F̃).

Then, in view of (4.15) and (4.19), for any f ∈ G and λ ∈ (λ_0, 0):

    (Φ(λ)f, f) = (Φ_F(λ)f, f) = ( K(λ) W^* f, W^* f )
               ≥ C ln(1/|λ|) ‖W^* f‖^2 ≥ ( C / ‖W^{−1}‖^2 ) ln(1/|λ|) ‖f‖^2.

Hence, in view of (4.12),

    ∃ λ_0 < 0, C_2 > 0  ∀ λ ∈ (λ_0, 0):  inf_{f ∈ G, f ≠ 0} (X_V(λ)f, f) / ‖f‖^2 ≥ C_2 ln(1/|λ|).
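The differentiation step above can be spot-checked numerically in a diagonal model (made-up values of l, η_0, η_2; the case m = 1 with η_1 = 0, so that M(t) is diagonal): numerically differentiating M(t) and forming N(t) = −t^{2m+1} M′(t) reproduces the entries 2lη_0 and 2lη_2 t^2, the latter being μ_min(N(t)), of order t^{2m}.

```python
import math

# Diagonal spot check (made-up values) of the step
#     N(t) = -t^{2m+1} dM(t)/dt,   mu_min(N(t)) ~ t^{2m}.
# Take m = 1, l = 2 and eta_1 = 0, so that M(t) is diagonal:
#     M(t) = diag( l*eta0 * t^{-2},  2*l*eta2 * ln(1/t) ).
# Then N(t) = diag( 2*l*eta0,  2*l*eta2 * t^2 ) exactly, and the smallest
# entry is of order t^{2m} = t^2.
l, m, eta0, eta2 = 2, 1, 0.7, 0.3

def M(t):
    return (l * eta0 * t ** (-2), 2.0 * l * eta2 * math.log(1.0 / t))

def N_numeric(t, h=1e-6):
    # central finite differences for dM/dt, then multiply by -t^{2m+1}
    d = [(M(t + h)[i] - M(t - h)[i]) / (2.0 * h) for i in (0, 1)]
    return tuple(-t ** (2 * m + 1) * di for di in d)

t = 0.05
n0, n1 = N_numeric(t)
print(n0, 2 * l * eta0)            # constant entry 2*l*eta0
print(n1, 2 * l * eta2 * t ** 2)   # minimal entry, of order t^{2m}
```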

Thus, by Proposition 3.7 of [Ar-Z], M(0, H_0, V) ≥ r. Since the inverse estimate has been proved above, we have

    M(0, H_0, V) = r.

So, we have proved assertion (i) of the theorem. Moreover, Proposition 3.7 of [Ar-Z] yields the following estimate for the main characteristic branches of H_0 near λ = 0:

    ∃ λ_0 < 0, C_2 > 0  ∀ λ ∈ (λ_0, 0)  ∀ ν ∈ {0, 1, ..., r−1}:  μ⁺_ν(λ) ≥ C_2 ln(1/|λ|).   (4.20)

We now turn to the asymptotic estimation of the first r_1 main characteristic branches μ⁺_k(λ) of the operator H_0 near λ = 0, where

    r_1 = #{k ∈ ℤ^d_+ : |k| ≤ m − 1},   (4.21)

and to the upper estimation of the rest of the main characteristic branches. Observe that, in view of property (ii) of ξ_k given in Proposition 4.5 of [Ar-Z], ξ_k = 0 for |k| = 2m. Then we can write the matrix K(λ), defined by (4.16), in the form

    K(λ) = K_1(λ) + ln(1/|λ|) R,   (4.22)

where

    K_1(λ)_{k,n} = { ξ_{k+n} |λ|^{−(2m−|k+n|)/(2l)},  if |k+n| ≤ 2(m−1),
                     0,  if |k+n| > 2(m−1) }   (|k|, |n| ≤ 2m)   (4.23)

and

    R_{k,n} = { 0,  if |k+n| < 2m,
                η_{k+n},  if |k+n| = 2m,
                0,  if |k+n| > 2m }   (|k|, |n| ≤ 2m).

Making use of (4.15), consider also the corresponding representation of the operator Φ_F(λ):

    Φ_F(λ) = Φ_1(λ) + ln(1/|λ|) R_1,   (4.24)

where

    Φ_1(λ) = W K_1(λ) W^*   and   R_1 = W R W^*.

Hence for any fixed λ < 0 the set of eigenvalues of the operator Φ_1(λ) coincides with the set of roots of the equation

    det( K_1(λ) − μ T ) = 0.

Recall that the matrix T is defined by (3.25), hence it is positive-definite. Observe that, in view of (4.23) and formula (3.3) for ξ_k, the matrix t^{2m} K_1(−t^{2l}) satisfies the same conditions as the matrix L(t) in Proposition A.2 with N = 2m and p = m. Thus, by Proposition A.2, the branches of eigenvalues μ_k(Φ_1(λ)) of the operator Φ_1(λ) can be chosen such that

    μ_k(Φ_1(λ)) = { e_k φ_k(|λ|^{1/(2l)}) / |λ|^{(m−|k|)/l},  if |k| ≤ m − 1,
                    ψ_k(|λ|^{1/(2l)}),  if m ≤ |k| ≤ 2m },   (4.25)

where the functions φ_k(t), ψ_k(t) are analytic in a neighborhood O(0) of zero, φ_k(0) = 1 and e_k > 0 (|k| ≤ m − 1). Furthermore, for any fixed j ∈ {1, 2, ..., m−1} the numbers {e_k}_{|k|=j} form the set of roots of the algebraic equation (3.30). Applying the same arguments as in the proof of Theorem 3.3 and making use of (4.13), (4.14), (4.24) and of Lemma 3.4 from [Ar-Z], we get the following estimates for the main characteristic branches μ⁺_k(λ) of H_0 at λ = 0:

    ∃ λ_0 < 0, M > 0  ∀ λ ∈ (λ_0, 0):
    μ⁺_k(Φ_1(λ)) − M ln(1/|λ|) ≤ μ⁺_k(λ) ≤ μ⁺_k(Φ_1(λ)) + M ln(1/|λ|)   (|k| ≤ m).

Thus, in view of (4.25), we have the asymptotic estimates, as λ ↑ 0,

    μ⁺_k(λ) = ( e_k / |λ|^{(m−|k|)/l} ) ( 1 + O(|λ|^{1/(2l)}) ) ( 1 + O(|λ|^{(m−|k|)/l} ln(1/|λ|)) )   (k : |k| ≤ m − 1)

and the estimates, for a small enough λ < 0,

    μ⁺_k(λ) ≤ M ln(1/|λ|)   (k : |k| = m),

where M > 0 does not depend on λ. The latter estimates, estimate (4.20) and Proposition 3.6 of [Ar-Z] imply assertions (iii) and (iv) of the theorem. The theorem is proven.

4.2°. In the particular case m = l − d/2 = 1 it is possible to strengthen assertion (iv) of Theorem 4.2, replacing the two-sided exponential estimate (4.8) of the non-bottom virtual eigenvalues by an asymptotic representation of the logarithm of these eigenvalues with algebraically computable leading coefficients. Furthermore, in this case it is possible to get a simple asymptotic expansion for the bottom virtual eigenvalue on the basis of Theorem 5.6 of [Ar-Z]. The following theorem addresses this case:

Theorem 4.3. Assume that, in addition to the conditions of Theorem 4.2, m = 1, that is, d = 2l − 2. Then for the r = d + 1 virtual eigenvalues {λ_k(γ)}_{|k| ≤ 1} of the operator H_γ at λ = 0 the following asymptotic properties are valid:

(i) For the bottom virtual eigenvalue λ_0(γ) the following asymptotic expansion is valid as γ ↑ 0:

    λ_0(γ) = −|γ|^l ( ν_0 + l ν_1 ν_0 |γ| ln(1/|γ|) + O(γ) )^l,   (4.26)

where

    ν_0 = ξ_0 ‖h_0‖^2 = ξ_0 ∫_{ℝ^d} V(s) ds,   (4.27)

    ν_1 = (Ψ X_0, X_0),   (4.28)

    X_0 = h_0/‖h_0‖ = √(V(x)) / ( ∫_{ℝ^d} V(s) ds )^{1/2},   (4.29)

    Ψ = Σ_{k,n: |k+n|=2} η_{k+n} (·, h_n) h_k.   (4.30)

Recall that the quantities ξ_k and η_k are defined by (3.4) and (4.11), respectively, and the functions h_k(x) are defined by (3.5);

(ii) If |k| = 1, then as γ ↑ 0 the following asymptotic representation is valid:

    ln( 1/|λ_k(γ)| ) = ( 1/(e_k |γ|) ) (1 + O(γ)),   (4.31)

where the numbers e_k (|k| = 1) are positive and form the set of roots of the following algebraic equation:

    (−e)^d T_{K_1 ∪ K_2} + Σ_{ν=1}^{d} (−e)^{d−ν} (η_{(2,0,...,0)})^ν Σ_{C ⊆ K_1: #C=ν} T_{(K_1 ∪ K_2) \ C} = 0.   (4.32)

Recall that the matrix T is defined by (3.25), (3.22), (3.8), (3.9), (3.20), (3.4), (3.5) and (3.6).

Proof. Let us prove assertion (i). By Theorem 5.6 of [Ar-Z], the bottom virtual eigenvalue λ_0(γ) of the operator H_γ has the following asymptotic expansion as γ ↑ 0:

    λ_0(γ) = −|γ|^l ( η(|γ|) + O(γ) )^l,

where the function η(ε) has the form

    η(ε) = δ_0 + δ_1 ε ln(1/ε)

and it is calculated by the following procedure:

    t_0(ε) = 0,   t_1(ε) = ε θ_0(t_0(ε)),   η(ε) = ν_0 + l ν_1 t_1(ε) ln(1/ε).

Observe that in our case θ_0(ε) = ν_0, hence t_1(ε) = ε ν_0. These circumstances imply the desired asymptotic expansion (4.26).

Let us prove assertion (ii). As in the proof of Theorem 4.2, consider the operator Φ(λ), defined by (4.9), the operator Φ_F(λ) = Φ(λ)|_F, where

    F = span({h_k}_{|k| ≤ 2}),

and the operator

    K(λ) = W^{−1} Φ_F(λ) (W^*)^{−1}.

Recall that W is the linear operator acting in F defined by the conditions

    f_j = W h_{k_j}   (j = 0, 1, ..., G_{2m}),

where the sequence {f_j}_{j=0}^{G_{2m}} is obtained from the sequence {h_{k_j}}_{j=0}^{G_{2m}} by the orthogonalization process (3.10), (3.11), (3.12). Taking into account Remark 4.7 of [Ar-Z], we have:

    η_k = 0  for |k| = 1.   (4.33)

Then in our case m = 1 the matrix representation (4.17) of the operator K(λ) in the basis {f_j}_{j=0}^{G_{2m}} acquires the form:

    K(λ)_{k,n} = { l η_0 |λ|^{−1/l},  if k = n = 0,
                   0,  if |k+n| = 1,
                   ln(1/|λ|) η_{k+n},  if |k+n| = 2,
                   0,  if |k+n| > 2 }   (|k|, |n| ≤ 2).

We put

    t = t(λ) = ( |λ|^{1/l} ln(1/|λ|) / l )^{1/2}.   (4.34)

We see that the function t(λ) is continuous and decreasing for a small enough λ < 0, and t(0) = lim_{λ ↑ 0} t(λ) = 0. Then there exists the continuous inverse function λ = λ(t), defined for a small enough t ≥ 0, with λ(0) = 0. Let us put

    K̃(t) = ( |λ(t)|^{1/l} / l ) K(λ(t)).

Then, taking into account (4.33), we can write K̃(t) in the form

    K̃(t)_{k,n} = { η_{k+n} t^{|k+n|},  if |k+n| ≤ 2,
                    0,  if |k+n| > 2 }   (|k|, |n| ≤ 2).   (4.35)

Thus, taking into account (4.15) and the fact that T = W^{−1}(W^{−1})^*, we get the following form of the eigenvalues of the operator Φ_F(λ):

    μ_k(Φ_F(λ)) = l μ̃_k(t(λ)) / |λ|^{1/l},   (4.36)

where the μ̃_k(t) are the roots of the equation

    det( K̃(t) − μ̃ T ) = 0.   (4.37)

In view of formula (4.10) for η_k, the matrix K̃(t), defined by (4.35), satisfies the same conditions as the matrix L(t) in Proposition A.2 with p = 1. Furthermore, observe that the matrix T is positive-definite. Thus, by Proposition A.2, equation (4.37) has r = d + 1 branches of roots with the asymptotic representation

    μ̃_k(t) = e_k t^{2|k|} (1 + O(t))   (|k| ≤ 1),   (4.38)

where e_k > 0 and the quantities e_k with |k| = 1 form the set of roots of the equation

    (−e)^d η_0 T_{K_1 ∪ K_2} + Σ_{ν=1}^{d} (−e)^{d−ν} Σ_{C,D ⊆ K_1: #C=#D=ν} Π_{C,D} T_{(K_1 ∪ K_2) \ C, (K_1 ∪ K_2) \ D} = 0,   (4.39)

and Π is a matrix of the form

    Π_{k,n} = { η_{k+n},  if |k+n| ≤ 2,
                0,  if |k+n| > 2 }   (|k|, |n| ≤ 2).
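As a numerical spot check of (4.10) against the explicit form (4.11) (a sketch outside the proof): for l = d = 2 and k = (2, 0) we have |k| = 2 = 2l − d, so the second case of (4.11) applies and gives η_{(2,0)} = 1/(16π); direct quadrature of (4.10) in polar coordinates, where the angular factor of s_1^2 integrates to π, agrees.

```python
import math

# Spot check of (4.10) against the closed form (4.11), for l = d = 2 and
# k = (2, 0), so |k| = 2 = 2l - d.  In polar coordinates the angular part
# of s1^2 integrates to pi, leaving
#     eta_(2,0) = (2*pi)^(-2) * pi * int_0^inf r^3 / (r^4 + 1)^2 dr,
# while the second case of (4.11) gives
#     (2*pi)^(-2) * Gamma(3/2)*Gamma(1/2) / (l * Gamma(2)) = 1/(16*pi).

def simpson(f, a, b, n):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

radial = simpson(lambda r: r ** 3 / (r ** 4 + 1) ** 2, 0.0, 50.0, 100000)
eta_numeric = math.pi * radial / (2 * math.pi) ** 2
eta_closed = math.gamma(1.5) * math.gamma(0.5) / (2 * math.gamma(2.0)) / (2 * math.pi) ** 2

print(eta_numeric, eta_closed, 1.0 / (16.0 * math.pi))
```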

Observe that, by Remark 4.7 of [Ar-Z],

    Π_{k,n} = { 0,  if k ≠ n;  η_{(2,0,...,0)},  if k = n }   (k, n ∈ K_1).

Hence we get:

    Π_{C,D} = { 0,  if C ≠ D;  η_0 (η_{(2,0,...,0)})^{#C},  if C = D }   (C, D ⊆ K_1).   (4.40)

Thus, equation (4.39) acquires the form (4.32). In view of (4.36), (4.34) and (4.38), we have the following asymptotic estimate as λ ↑ 0:

    μ_k(Φ_F(λ)) = e_k ln(1/|λ|) ( 1 + O(|λ|^{1/(2l)} ln(1/|λ|)) )   (|k| = 1).

In the same manner as in the proof of Theorem 4.2, we obtain the desired estimate (4.31), making use of the latter estimate, the estimates (4.13), (4.14) for the main characteristic branches μ⁺_k(λ) and Proposition 3.6 of [Ar-Z]. The theorem is proven.

Example 4.1. We now turn to a particular case of the situation considered in Theorem 4.3. Namely, put l = d = 2. In this case m = 1 and

    K_0 = {0},   K_1 = {k_1, k_2},   K_2 = {k_3, k_4, k_5},

where

    k_1 = (0, 1),  k_2 = (1, 0),  k_3 = (0, 2),  k_4 = (1, 1),  k_5 = (2, 0).

By Theorem 4.3, for the three virtual eigenvalues of the operator H_γ we have the following asymptotic formulas as γ ↑ 0:

(i) For the bottom virtual eigenvalue we have

    λ_0(γ) = −γ^2 ( ν_0 + 2 ν_1 ν_0 |γ| ln(1/|γ|) + O(γ) )^2,

where the numbers ν_0 and ν_1 are defined by (4.27)-(4.30);

(ii) For j ∈ {1, 2} we have

    ln( 1/|λ_{k_j}(γ)| ) = ( 1/(e_{k_j} |γ|) ) (1 + O(γ)),

where e_{k_1}, e_{k_2} are the roots of the quadratic equation

    e^2 T_{{k_1,...,k_5}} − e η_{(2,0)} ( T_{{k_2,k_3,k_4,k_5}} + T_{{k_1,k_3,k_4,k_5}} ) + (η_{(2,0)})^2 T_{{k_3,k_4,k_5}} = 0.

Recall that

    η_{(2,0)} = (2π)^{−2} ∫_{ℝ^2} s_1^2 ( (s_1^2 + s_2^2)^2 + 1 )^{−2} ds_1 ds_2

and η_{(2,0)} is calculated by formula (4.11) with k = (2, 0). Also observe that, by Lemma 3.6,

    T_{{k_2,k_3,k_4,k_5}} = T_{{k_1,k_2,...,k_5}} ( ‖h_{k_1}‖^2 − (h_{k_1}, h_0)^2 / ‖h_0‖^2 )
    = T_{{k_1,k_2,...,k_5}} ( ∫_{ℝ^2} x_2^2 V(x_1, x_2) dx_1 dx_2 − ( ∫_{ℝ^2} x_2 V(x_1, x_2) dx_1 dx_2 )^2 / ∫_{ℝ^2} V(x_1, x_2) dx_1 dx_2 )

and

    T_{{k_1,k_3,k_4,k_5}} = T_{{k_1,k_2,...,k_5}} ( ‖h_{k_2}‖^2 − (h_{k_2}, h_0)^2 / ‖h_0‖^2 )
    = T_{{k_1,k_2,...,k_5}} ( ∫_{ℝ^2} x_1^2 V(x_1, x_2) dx_1 dx_2 − ( ∫_{ℝ^2} x_1 V(x_1, x_2) dx_1 dx_2 )^2 / ∫_{ℝ^2} V(x_1, x_2) dx_1 dx_2 ).

Recall that the matrix T is defined by (3.25), (3.22), (3.8), (3.9), (3.20), (3.4), (3.5) and (3.6).

In connection with Theorems 4.2 and 4.3 the following problem appears:

Problem 4.1. Does the asymptotic estimate of the form

    ln( 1/|λ_k(γ)| ) = ( f_k / |γ| ) (1 + O(γ))   (f_k > 0)

as γ ↑ 0 hold for |k| = m = l − d/2 also in the case where 2l − d > 2 and d is even? If the answer is positive, find an algorithm for the computation of the constants f_k.

5  Asymptotic formulas of Lieb-Thirring type

There are papers devoted to estimates (ordinary, and asymptotic with respect to the coupling constant γ) of sums of the form

    Σ_j |λ_j(γ)|^κ   (γ < 0, κ > 0),

where the λ_j(γ) are the negative eigenvalues of the operator H_γ arranged in non-decreasing order ([L-Thr], [N-W]). As above, we assume that these eigenvalues are indexed by the elements of the linearly ordered set ℤ^d_+ (see Notation, (2.1)), that is, for γ < 0

    λ_0(γ) ≤ λ_{k_1}(γ) ≤ ... ≤ λ_{k_j}(γ) ≤ ....

It turns out that the asymptotic formulas for λ_k(γ), obtained in Sections 3 and 4, enable us to get asymptotic formulas, with respect to a small γ < 0, for the sums of the form

    Σ_{k: |k| = j} |λ_k(γ)|^{κ_j}

with suitable powers κ_j, where

    j ∈ {1, ..., m}  if d is odd,   j ∈ {1, ..., m − 1}  if d is even.
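Each of these sums runs over the group {k ∈ ℤ^d_+ : |k| = j}, whose size is \binom{j+d−1}{d−1}. A brute-force check of this count for small d and j (bookkeeping only, unrelated to the spectral estimates themselves):

```python
from itertools import product
from math import comb

# Size of the index group {k in Z^d_+ : |k| = j} over which each of the
# sums above runs: it equals C(j + d - 1, d - 1).  Brute-force check for
# small d and j.
def group_size(d, j):
    return sum(1 for k in product(range(j + 1), repeat=d) if sum(k) == j)

checks = {(d, j): (group_size(d, j), comb(j + d - 1, d - 1))
          for d in (2, 3, 4) for j in range(4)}
print(checks)
```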

Virtual eigenvalues of the high order Schrödinger operator II. Integral Equations and Operator Theory, Birkhäuser Verlag, Basel/Switzerland.


More information

A note on companion matrices

A note on companion matrices Linear Algebra and its Applications 372 (2003) 325 33 www.elsevier.com/locate/laa A note on companion matrices Miroslav Fiedler Academy of Sciences of the Czech Republic Institute of Computer Science Pod

More information

Math 115A HW4 Solutions University of California, Los Angeles. 5 2i 6 + 4i. (5 2i)7i (6 + 4i)( 3 + i) = 35i + 14 ( 22 6i) = 36 + 41i.

Math 115A HW4 Solutions University of California, Los Angeles. 5 2i 6 + 4i. (5 2i)7i (6 + 4i)( 3 + i) = 35i + 14 ( 22 6i) = 36 + 41i. Math 5A HW4 Solutions September 5, 202 University of California, Los Angeles Problem 4..3b Calculate the determinant, 5 2i 6 + 4i 3 + i 7i Solution: The textbook s instructions give us, (5 2i)7i (6 + 4i)(

More information

FUNCTIONAL ANALYSIS LECTURE NOTES: QUOTIENT SPACES

FUNCTIONAL ANALYSIS LECTURE NOTES: QUOTIENT SPACES FUNCTIONAL ANALYSIS LECTURE NOTES: QUOTIENT SPACES CHRISTOPHER HEIL 1. Cosets and the Quotient Space Any vector space is an abelian group under the operation of vector addition. So, if you are have studied

More information

Linear Algebra Notes for Marsden and Tromba Vector Calculus

Linear Algebra Notes for Marsden and Tromba Vector Calculus Linear Algebra Notes for Marsden and Tromba Vector Calculus n-dimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of

More information

LS.6 Solution Matrices

LS.6 Solution Matrices LS.6 Solution Matrices In the literature, solutions to linear systems often are expressed using square matrices rather than vectors. You need to get used to the terminology. As before, we state the definitions

More information

I. GROUPS: BASIC DEFINITIONS AND EXAMPLES

I. GROUPS: BASIC DEFINITIONS AND EXAMPLES I GROUPS: BASIC DEFINITIONS AND EXAMPLES Definition 1: An operation on a set G is a function : G G G Definition 2: A group is a set G which is equipped with an operation and a special element e G, called

More information

4.5 Linear Dependence and Linear Independence

4.5 Linear Dependence and Linear Independence 4.5 Linear Dependence and Linear Independence 267 32. {v 1, v 2 }, where v 1, v 2 are collinear vectors in R 3. 33. Prove that if S and S are subsets of a vector space V such that S is a subset of S, then

More information

University of Lille I PC first year list of exercises n 7. Review

University of Lille I PC first year list of exercises n 7. Review University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients

More information

1 if 1 x 0 1 if 0 x 1

1 if 1 x 0 1 if 0 x 1 Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or

More information

Notes on metric spaces

Notes on metric spaces Notes on metric spaces 1 Introduction The purpose of these notes is to quickly review some of the basic concepts from Real Analysis, Metric Spaces and some related results that will be used in this course.

More information

MATH 551 - APPLIED MATRIX THEORY

MATH 551 - APPLIED MATRIX THEORY MATH 55 - APPLIED MATRIX THEORY FINAL TEST: SAMPLE with SOLUTIONS (25 points NAME: PROBLEM (3 points A web of 5 pages is described by a directed graph whose matrix is given by A Do the following ( points

More information

Math 120 Final Exam Practice Problems, Form: A

Math 120 Final Exam Practice Problems, Form: A Math 120 Final Exam Practice Problems, Form: A Name: While every attempt was made to be complete in the types of problems given below, we make no guarantees about the completeness of the problems. Specifically,

More information

The Ideal Class Group

The Ideal Class Group Chapter 5 The Ideal Class Group We will use Minkowski theory, which belongs to the general area of geometry of numbers, to gain insight into the ideal class group of a number field. We have already mentioned

More information

Continuity of the Perron Root

Continuity of the Perron Root Linear and Multilinear Algebra http://dx.doi.org/10.1080/03081087.2014.934233 ArXiv: 1407.7564 (http://arxiv.org/abs/1407.7564) Continuity of the Perron Root Carl D. Meyer Department of Mathematics, North

More information

Methods for Finding Bases

Methods for Finding Bases Methods for Finding Bases Bases for the subspaces of a matrix Row-reduction methods can be used to find bases. Let us now look at an example illustrating how to obtain bases for the row space, null space,

More information

U.C. Berkeley CS276: Cryptography Handout 0.1 Luca Trevisan January, 2009. Notes on Algebra

U.C. Berkeley CS276: Cryptography Handout 0.1 Luca Trevisan January, 2009. Notes on Algebra U.C. Berkeley CS276: Cryptography Handout 0.1 Luca Trevisan January, 2009 Notes on Algebra These notes contain as little theory as possible, and most results are stated without proof. Any introductory

More information

Chapter 20. Vector Spaces and Bases

Chapter 20. Vector Spaces and Bases Chapter 20. Vector Spaces and Bases In this course, we have proceeded step-by-step through low-dimensional Linear Algebra. We have looked at lines, planes, hyperplanes, and have seen that there is no limit

More information

3. INNER PRODUCT SPACES

3. INNER PRODUCT SPACES . INNER PRODUCT SPACES.. Definition So far we have studied abstract vector spaces. These are a generalisation of the geometric spaces R and R. But these have more structure than just that of a vector space.

More information

Inner products on R n, and more

Inner products on R n, and more Inner products on R n, and more Peyam Ryan Tabrizian Friday, April 12th, 2013 1 Introduction You might be wondering: Are there inner products on R n that are not the usual dot product x y = x 1 y 1 + +

More information

160 CHAPTER 4. VECTOR SPACES

160 CHAPTER 4. VECTOR SPACES 160 CHAPTER 4. VECTOR SPACES 4. Rank and Nullity In this section, we look at relationships between the row space, column space, null space of a matrix and its transpose. We will derive fundamental results

More information

Chapter 6. Orthogonality

Chapter 6. Orthogonality 6.3 Orthogonal Matrices 1 Chapter 6. Orthogonality 6.3 Orthogonal Matrices Definition 6.4. An n n matrix A is orthogonal if A T A = I. Note. We will see that the columns of an orthogonal matrix must be

More information

SPECTRAL POLYNOMIAL ALGORITHMS FOR COMPUTING BI-DIAGONAL REPRESENTATIONS FOR PHASE TYPE DISTRIBUTIONS AND MATRIX-EXPONENTIAL DISTRIBUTIONS

SPECTRAL POLYNOMIAL ALGORITHMS FOR COMPUTING BI-DIAGONAL REPRESENTATIONS FOR PHASE TYPE DISTRIBUTIONS AND MATRIX-EXPONENTIAL DISTRIBUTIONS Stochastic Models, 22:289 317, 2006 Copyright Taylor & Francis Group, LLC ISSN: 1532-6349 print/1532-4214 online DOI: 10.1080/15326340600649045 SPECTRAL POLYNOMIAL ALGORITHMS FOR COMPUTING BI-DIAGONAL

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka kosecka@cs.gmu.edu http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa Cogsci 8F Linear Algebra review UCSD Vectors The length

More information

CHAPTER II THE LIMIT OF A SEQUENCE OF NUMBERS DEFINITION OF THE NUMBER e.

CHAPTER II THE LIMIT OF A SEQUENCE OF NUMBERS DEFINITION OF THE NUMBER e. CHAPTER II THE LIMIT OF A SEQUENCE OF NUMBERS DEFINITION OF THE NUMBER e. This chapter contains the beginnings of the most important, and probably the most subtle, notion in mathematical analysis, i.e.,

More information

MATH 4330/5330, Fourier Analysis Section 11, The Discrete Fourier Transform

MATH 4330/5330, Fourier Analysis Section 11, The Discrete Fourier Transform MATH 433/533, Fourier Analysis Section 11, The Discrete Fourier Transform Now, instead of considering functions defined on a continuous domain, like the interval [, 1) or the whole real line R, we wish

More information

Some Polynomial Theorems. John Kennedy Mathematics Department Santa Monica College 1900 Pico Blvd. Santa Monica, CA 90405 rkennedy@ix.netcom.

Some Polynomial Theorems. John Kennedy Mathematics Department Santa Monica College 1900 Pico Blvd. Santa Monica, CA 90405 rkennedy@ix.netcom. Some Polynomial Theorems by John Kennedy Mathematics Department Santa Monica College 1900 Pico Blvd. Santa Monica, CA 90405 rkennedy@ix.netcom.com This paper contains a collection of 31 theorems, lemmas,

More information

1 VECTOR SPACES AND SUBSPACES

1 VECTOR SPACES AND SUBSPACES 1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such

More information

On the representability of the bi-uniform matroid

On the representability of the bi-uniform matroid On the representability of the bi-uniform matroid Simeon Ball, Carles Padró, Zsuzsa Weiner and Chaoping Xing August 3, 2012 Abstract Every bi-uniform matroid is representable over all sufficiently large

More information

Lecture 7: Finding Lyapunov Functions 1

Lecture 7: Finding Lyapunov Functions 1 Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.243j (Fall 2003): DYNAMICS OF NONLINEAR SYSTEMS by A. Megretski Lecture 7: Finding Lyapunov Functions 1

More information

These axioms must hold for all vectors ū, v, and w in V and all scalars c and d.

These axioms must hold for all vectors ū, v, and w in V and all scalars c and d. DEFINITION: A vector space is a nonempty set V of objects, called vectors, on which are defined two operations, called addition and multiplication by scalars (real numbers), subject to the following axioms

More information

x1 x 2 x 3 y 1 y 2 y 3 x 1 y 2 x 2 y 1 0.

x1 x 2 x 3 y 1 y 2 y 3 x 1 y 2 x 2 y 1 0. Cross product 1 Chapter 7 Cross product We are getting ready to study integration in several variables. Until now we have been doing only differential calculus. One outcome of this study will be our ability

More information

The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression

The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression The SVD is the most generally applicable of the orthogonal-diagonal-orthogonal type matrix decompositions Every

More information

Ideal Class Group and Units

Ideal Class Group and Units Chapter 4 Ideal Class Group and Units We are now interested in understanding two aspects of ring of integers of number fields: how principal they are (that is, what is the proportion of principal ideals

More information

CITY UNIVERSITY LONDON. BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION

CITY UNIVERSITY LONDON. BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION No: CITY UNIVERSITY LONDON BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION ENGINEERING MATHEMATICS 2 (resit) EX2005 Date: August

More information

Lecture 3: Finding integer solutions to systems of linear equations

Lecture 3: Finding integer solutions to systems of linear equations Lecture 3: Finding integer solutions to systems of linear equations Algorithmic Number Theory (Fall 2014) Rutgers University Swastik Kopparty Scribe: Abhishek Bhrushundi 1 Overview The goal of this lecture

More information

Chapter 5. Banach Spaces

Chapter 5. Banach Spaces 9 Chapter 5 Banach Spaces Many linear equations may be formulated in terms of a suitable linear operator acting on a Banach space. In this chapter, we study Banach spaces and linear operators acting on

More information

Lecture Notes on Polynomials

Lecture Notes on Polynomials Lecture Notes on Polynomials Arne Jensen Department of Mathematical Sciences Aalborg University c 008 Introduction These lecture notes give a very short introduction to polynomials with real and complex

More information

MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix.

MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix. MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix. Nullspace Let A = (a ij ) be an m n matrix. Definition. The nullspace of the matrix A, denoted N(A), is the set of all n-dimensional column

More information

2.3 Convex Constrained Optimization Problems

2.3 Convex Constrained Optimization Problems 42 CHAPTER 2. FUNDAMENTAL CONCEPTS IN CONVEX OPTIMIZATION Theorem 15 Let f : R n R and h : R R. Consider g(x) = h(f(x)) for all x R n. The function g is convex if either of the following two conditions

More information

Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain

Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain 1. Orthogonal matrices and orthonormal sets An n n real-valued matrix A is said to be an orthogonal

More information

This unit will lay the groundwork for later units where the students will extend this knowledge to quadratic and exponential functions.

This unit will lay the groundwork for later units where the students will extend this knowledge to quadratic and exponential functions. Algebra I Overview View unit yearlong overview here Many of the concepts presented in Algebra I are progressions of concepts that were introduced in grades 6 through 8. The content presented in this course

More information

LEARNING OBJECTIVES FOR THIS CHAPTER

LEARNING OBJECTIVES FOR THIS CHAPTER CHAPTER 2 American mathematician Paul Halmos (1916 2006), who in 1942 published the first modern linear algebra book. The title of Halmos s book was the same as the title of this chapter. Finite-Dimensional

More information

Factoring of Prime Ideals in Extensions

Factoring of Prime Ideals in Extensions Chapter 4 Factoring of Prime Ideals in Extensions 4. Lifting of Prime Ideals Recall the basic AKLB setup: A is a Dedekind domain with fraction field K, L is a finite, separable extension of K of degree

More information

1 Norms and Vector Spaces

1 Norms and Vector Spaces 008.10.07.01 1 Norms and Vector Spaces Suppose we have a complex vector space V. A norm is a function f : V R which satisfies (i) f(x) 0 for all x V (ii) f(x + y) f(x) + f(y) for all x,y V (iii) f(λx)

More information

Rotation Rate of a Trajectory of an Algebraic Vector Field Around an Algebraic Curve

Rotation Rate of a Trajectory of an Algebraic Vector Field Around an Algebraic Curve QUALITATIVE THEORY OF DYAMICAL SYSTEMS 2, 61 66 (2001) ARTICLE O. 11 Rotation Rate of a Trajectory of an Algebraic Vector Field Around an Algebraic Curve Alexei Grigoriev Department of Mathematics, The

More information

Math 4310 Handout - Quotient Vector Spaces

Math 4310 Handout - Quotient Vector Spaces Math 4310 Handout - Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable

More information

No: 10 04. Bilkent University. Monotonic Extension. Farhad Husseinov. Discussion Papers. Department of Economics

No: 10 04. Bilkent University. Monotonic Extension. Farhad Husseinov. Discussion Papers. Department of Economics No: 10 04 Bilkent University Monotonic Extension Farhad Husseinov Discussion Papers Department of Economics The Discussion Papers of the Department of Economics are intended to make the initial results

More information

MATH 590: Meshfree Methods

MATH 590: Meshfree Methods MATH 590: Meshfree Methods Chapter 7: Conditionally Positive Definite Functions Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu MATH 590 Chapter

More information

Properties of BMO functions whose reciprocals are also BMO

Properties of BMO functions whose reciprocals are also BMO Properties of BMO functions whose reciprocals are also BMO R. L. Johnson and C. J. Neugebauer The main result says that a non-negative BMO-function w, whose reciprocal is also in BMO, belongs to p> A p,and

More information

Adaptive Online Gradient Descent

Adaptive Online Gradient Descent Adaptive Online Gradient Descent Peter L Bartlett Division of Computer Science Department of Statistics UC Berkeley Berkeley, CA 94709 bartlett@csberkeleyedu Elad Hazan IBM Almaden Research Center 650

More information

Recall that two vectors in are perpendicular or orthogonal provided that their dot

Recall that two vectors in are perpendicular or orthogonal provided that their dot Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal

More information

DIFFERENTIABILITY OF COMPLEX FUNCTIONS. Contents

DIFFERENTIABILITY OF COMPLEX FUNCTIONS. Contents DIFFERENTIABILITY OF COMPLEX FUNCTIONS Contents 1. Limit definition of a derivative 1 2. Holomorphic functions, the Cauchy-Riemann equations 3 3. Differentiability of real functions 5 4. A sufficient condition

More information

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009, 2011, 2014 Preface The title of the book sounds a bit mysterious. Why should anyone

More information