Subspace intersection tracking using the Signed URV algorithm


1 Subspace intersection tracking using the Signed URV algorithm. Mu Zhou and Alle-Jan van der Veen, TU Delft, The Netherlands.

2 Outline. Part I: Application: 1. AIS ship transponder signal separation; 2. Algorithm based on the Generalized SVD (GSVD). Part II: Subspace tracking: 1. Signed (hyperbolic) URV to approximate the GSVD; 2. Updating the SURV.

3 AIS signal separation. Automatic Identification of Ships (AIS). A default AIS message is a binary sequence of 256 bits, GMSK modulated at 9.6 kbps in the maritime VHF band (around 162 MHz). Short data packets in a TDMA system (2250 time slots = 1 minute). Data includes ID, GPS location, course and speed. Used for ship-ship (anti-collision) and ship-shore (tracking) communication.

4 AIS signal separation. Idea: use LEO satellites for ship tracking. On the surface the reception range is limited to a few tens of km; from a satellite the field of view spans thousands of km, so there are many packet collisions (and only partial synchronization). Significant Doppler shifts (only partial frequency overlap). Many partially overlapping signals and no user codes: blind source separation is needed.

5 AIS signal separation. ISIS AIS satellite prototype (Triton-1 mission), launched December 2013.

6 AIS signal separation. [Figure: global ship distribution and a satellite field of view; the red dots denote ships within the FoV.]

7 AIS signal separation. TU Delft experimental AIS 4-channel receiver.

8 AIS signal separation. AIS overlapping signals: example of a measurement. [Figure: the antenna measurements (amplitude vs. samples) and one of the signals after separation (amplitude vs. samples).]

9 AIS signal separation. Proposed multi-user receiver. Blind beamforming: stage 1, asynchronous interference suppression; stage 2, synchronous interference cancellation (block constant modulus algorithm). Demodulator: a bank of standard single-channel GMSK receivers.

10 Data model: received signal. Assume an array of antennas and stack the received sample vectors into columns: X = A S + N, where A is tall with full column rank and unit-norm columns, and the rows of S contain the target and interference signals. [Figure: target and interference packets within the analysis window.]

11 Data model. The signals can be considered "zero / constant modulus": each signal is zero outside its packet and has constant modulus inside it. Constant modulus algorithms cannot be applied directly because part of the signal is zero. We will derive a blind separation algorithm for this zero / non-zero structure; it will suppress the asynchronous interference. The targets can then be further separated using constant modulus algorithms (e.g. ACMA). The noise is considered white with power sigma^2.
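As an illustration of this zero / non-zero structure, a minimal synthetic-data sketch (not the authors' simulator; the array response, packet positions and modulation are simplified assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, N = 4, 3, 200          # antennas, sources, samples in the analysis window

# Array response: tall (m x d), full column rank, unit-norm columns (random here).
A = rng.standard_normal((m, d)) + 1j * rng.standard_normal((m, d))
A /= np.linalg.norm(A, axis=0)

# Each source: a constant-modulus burst that is zero outside its packet.
S = np.zeros((d, N), dtype=complex)
packets = [(0, 120), (40, 170), (130, 200)]   # (start, end) per source: partial overlap
for k, (t0, t1) in enumerate(packets):
    S[k, t0:t1] = np.exp(1j * 2 * np.pi * rng.random(t1 - t0))

sigma = 0.1                   # white noise with power sigma^2
Noise = sigma / np.sqrt(2) * (rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N)))
X = A @ S + Noise             # received data: zero / constant-modulus structure per source
X1, X2 = X[:, :N // 2], X[:, N // 2:]   # the two halves of the analysis window
```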

12 Data model: covariance model. Assume that the two halves X_1 and X_2 of the analysis window contain stationary data (in reality they do not). The signal covariance matrices are D_i = E[s^(i) (s^(i))^H], i = 1, 2; we assume these are diagonal matrices, with the diagonal entries containing the signal powers in each block (the signals are considered independent). The data covariances are then R_i = A D_i A^H + sigma^2 I.

13 Data model: covariance model (cont'd). The distinction between target signals and interfering signals is defined by their powers in the two blocks: target signals are stronger (more samples present) in the first data block than in the second data block. (This can be generalized.) Objective: compute a separating beamforming matrix W such that W^H X = T S_t, where S_t contains the target signals and T is any full-rank matrix (residual mixing of the target signals).

14 Tools from linear algebra: Generalized SVD. For two matrices A and B (both with m rows, wide), the GSVD is A = F Sigma_A U_A^H, B = F Sigma_B U_B^H, where F is an invertible m x m matrix, Sigma_A and Sigma_B are square positive diagonal matrices, and U_A, U_B are semi-unitary matrices with m columns. The columns of F are scaled to norm 1. This definition is transposed compared to the Matlab definition; also the scaling is different (Matlab normalizes Sigma_A^H Sigma_A + Sigma_B^H Sigma_B = I).
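For reference, a small numerical sketch of the GSVD in this convention (a hypothetical helper that goes through the symmetric-definite GEV of (A A^H, B B^H) rather than the numerically preferred QR/CS-decomposition route; it assumes full row rank and distinct generalized eigenvalues):

```python
import numpy as np
from scipy.linalg import eigh

def gsvd_wide(A, B):
    """GSVD in the slides' convention: A = F diag(sa) Ua^H, B = F diag(sb) Ub^H,
    with F invertible (unit-norm columns), sa, sb > 0, Ua, Ub with orthonormal columns.
    Sketch only: requires A A^H and B B^H to be positive definite."""
    Ra, Rb = A @ A.conj().T, B @ B.conj().T
    w, V = eigh(Ra, Rb)                 # V^H Ra V = diag(w), V^H Rb V = I
    F = np.linalg.inv(V.conj().T)       # simultaneous congruence diagonalization
    d = np.linalg.norm(F, axis=0)       # rescale so the columns of F have unit norm
    F = F / d
    Ga = (V.conj().T * d[:, None]) @ A  # = diag(d) V^H A = diag(sa) Ua^H
    Gb = (V.conj().T * d[:, None]) @ B
    sa, sb = np.linalg.norm(Ga, axis=1), np.linalg.norm(Gb, axis=1)
    Ua, Ub = (Ga / sa[:, None]).conj().T, (Gb / sb[:, None]).conj().T
    return F, sa, Ua, sb, Ub

# Quick check of the reconstruction:
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 10)); B = rng.standard_normal((3, 10))
F, sa, Ua, sb, Ub = gsvd_wide(A, B)
print(np.allclose(F @ np.diag(sa) @ Ua.conj().T, A),
      np.allclose(F @ np.diag(sb) @ Ub.conj().T, B))
```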

15 Tools from linear algebra: Generalized SVD (cont'd). Given some tolerance, compare the entries of Sigma_A and Sigma_B and partition F correspondingly as F = [F_1 F_2 F_3 F_4]: F_1 contains the common column span, i.e. the intersection of ran(A) and ran(B); F_2 is the subspace of columns that are in ran(A) but not in ran(B); F_3 is the subspace of columns that are in ran(B) but not in ran(A); F_4 is a common left null space. Thus, the GSVD provides subspace intersection.

16 Tools from linear algebra: Generalized Eigenvalue Decomposition (GEV). Squaring the GSVD, we obtain A A^H = F Sigma_A^2 F^H and B B^H = F Sigma_B^2 F^H; for positive definite matrices this is the GEV (A A^H) E = (B B^H) E Lambda, with E = F^{-H} invertible and Lambda = Sigma_A^2 Sigma_B^{-2} diagonal and positive. It is unclear whether the decomposition exists if the matrices are indefinite (Lambda and E may become complex). E (and F) can be partitioned in the same way as for the GSVD.
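A quick illustration of the definiteness caveat with random toy matrices: scipy.linalg.eigh handles the symmetric positive-definite pencil and returns real eigenvalues, while an indefinite pencil requires the general solver and may produce complex eigenvalues.

```python
import numpy as np
from scipy.linalg import eigh, eig

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 8))
B = rng.standard_normal((4, 8))
Ra, Rb = A @ A.T, B @ B.T                 # both positive definite (full row rank)

w, E = eigh(Ra, Rb)                       # definite pencil: real eigenvalues, real E
print(np.allclose(Ra @ E, Rb @ E @ np.diag(w)))   # True

# After a shift (e.g. noise subtraction) the matrices may become indefinite;
# then the general QZ-based solver must be used and results can turn complex:
w2, E2 = eig(Ra - 3.0 * np.eye(4), Rb - 3.0 * np.eye(4))
```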

17 Source separation: noise-free case. Recall the data model: R_1 = A D_1 A^H, R_2 = A D_2 A^H. The GEV of (R_1, R_2) is R_1 E = R_2 E Lambda. For a small threshold, partition Lambda = diag(Lambda_t, Lambda_i) and E = [E_t E_i], and moreover sort such that the target eigenvalues (block-1 to block-2 power ratios above the threshold) come first.

18 Source separation. Comparing the sorted GEV with the data model, we immediately obtain the target eigenvectors E_t and eigenvalues Lambda_t = D_{1,t} D_{2,t}^{-1}. Using these, we can construct a separating beamformer as W = E_t (so that W^H X recovers the targets up to a full-rank mixing), or, alternatively, from the corresponding columns of E^{-H} (estimates of the target array responses). Case with white noise with known covariance sigma^2 I: the GEV changes (unlike the EVD of a single matrix in white noise, which shifts the eigenvalues but does not change the eigenvectors). We could compute the GEV of (R_1 - sigma^2 I, R_2 - sigma^2 I), but there is a risk that these matrices become indefinite; first we need to remove the noise subspace. Single matrix: if the noise-free decomposition is R = U Lambda U^H, then with noise R + sigma^2 I = U (Lambda + sigma^2 I) U^H.

19 Source separation: algorithm using SVD and GEV.
1. Preprocessing to remove the noise subspace: compute the SVD X = U Sigma V^H of the data, then apply a rank and dimension reduction: keep the d dominant left singular vectors U_d and form Y_i = U_d^H X_i.
2. Compute the rank-reduced covariance matrices R_i = Y_i Y_i^H / N_i, i = 1, 2.
3. Compute the GEV of the noise-shifted rank-reduced covariance matrices, GEV(R_1 - sigma^2 I, R_2 - sigma^2 I).
4. Sort the entries of Lambda and partition E correspondingly; the common (noise-only) cluster should be absent, as the noise subspace has been removed.
5. The separating beamformer is W = U_d E_t.
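A compact sketch of these five steps (an illustrative implementation, not the authors' code; the noise-power estimate from the discarded singular values and the eigenvalue threshold of 1 for declaring targets are assumptions):

```python
import numpy as np
from scipy.linalg import eig

def gsvd_type_beamformer(X1, X2, d, sigma2=None):
    """Steps 1-5 of slide 19: SVD-based noise-subspace removal followed by a GEV."""
    X = np.hstack([X1, X2])
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    Ud = U[:, :d]                                   # step 1: keep the d-dim. signal subspace
    if sigma2 is None:                              # noise power from discarded singular values
        sigma2 = np.mean(s[d:] ** 2) / X.shape[1]
    Y1, Y2 = Ud.conj().T @ X1, Ud.conj().T @ X2     # dimension-reduced data
    R1 = Y1 @ Y1.conj().T / X1.shape[1] - sigma2 * np.eye(d)   # step 2 + noise shift (step 3)
    R2 = Y2 @ Y2.conj().T / X2.shape[1] - sigma2 * np.eye(d)
    lam, E = eig(R1, R2)                            # step 3: generalized eigenvalue decomposition
    order = np.argsort(-lam.real)                   # step 4: sort, targets have the largest ratios
    lam, E = lam[order], E[:, order]
    dt = int(np.sum(lam.real > 1.0))                # targets: stronger in block 1 than in block 2
    W = Ud @ E[:, :dt]                              # step 5: beamformer back in the antenna domain
    return W, lam

# Usage with the synthetic data of the earlier sketch: W, lam = gsvd_type_beamformer(X1, X2, d=3);
# W.conj().T @ X then contains the target signals up to a residual mixing matrix.
```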

20 Source separation. [Figure: separation performance, SINR as a function of SIR, for SNR = 15 dB; packets with random time offsets (2 targets, 3 interferers).]

21 Source separation: extensive simulation. Simulation parameters:
Carrier frequency: MHz
Channel bandwidth: 25 kHz (modulation: 9.6 kbps GMSK)
Satellite altitude: 6 km
Satellite speed: m/s
Orbit period: s
Radius of FoV: nautical miles
Ship visible time: 74 s per sat. pass
Ship emission power: 12.5 W (Class A) / 2 W (Class B)
Ship transmit antenna: half-wave dipole
Sat. receive antenna: array of directional elements
Sat. antenna spacing: half wavelength
Array spinning speed: 1 round / 3 s
Max. SNR at the receiver: 25 dB
Cell size: (square)
Num. of cells in FoV: 5184
Ship report interval: 6 s

22 Source separation. [Figure: ship detection probability as a function of the number of antennas, comparing GSVD-T+ACMA, GSVD-SI+ACMA, ACMA and ESPRIT+Capon. Settings: uniform ship distribution; system time period = 74 s; sat. altitude = 6 km; number of ships in FoV = 5, ; number of ship IDs = 12,747; ship report interval = 6 s; number of sent messages = 296,32 ; avg. number of messages per slot = .]

23-25 Source separation: tracking. The analysis window slides over the data; this allows new messages to be received as targets. Updating and downdating are needed. [Figure: the analysis window sliding over the data.]


26 Towards part II. The source separation algorithm works nicely, but: it uses both an SVD and a GEV, and is thus not suitable for tracking (sliding-window operation); the noise shifting is awkward. We propose to use a new tool, the Schur subspace estimator (SSE), which can replace the SVD and GSVD and is easily updated, allowing sliding-window tracking of subspaces. Recall that the Schur algorithm establishes the stability of a polynomial (roots inside the unit circle) without explicitly computing the roots. Likewise, the SSE partitions the space into a dominant and a minor subspace w.r.t. a threshold, without computing the SVD.

27 Intermezzo: elementary rotations. Consider a rotation [a b] Q = [a' b'], with Q = [cos(theta) -sin(theta); sin(theta) cos(theta)]. Conservation of energy: |a'|^2 + |b'|^2 = |a|^2 + |b|^2. [Figure: ladder and lattice realizations of the elementary rotation.]
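A two-line numeric check of this energy conservation (hypothetical values):

```python
import numpy as np

theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
a, b = 2.0, -1.0
ap, bp = np.array([a, b]) @ Q
print(np.isclose(ap**2 + bp**2, a**2 + b**2))   # True: |a'|^2 + |b'|^2 = |a|^2 + |b|^2
```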

28 Intermezzo: Schur recursion. Such elementary rotations are used in the familiar Schur recursion: the analysis filter consists of hyperbolic rotations which create zeros in the input vectors, the synthesis filter of Givens rotations. The rotation parameters are the reflection coefficients. [Figure: analysis and synthesis filter structures built around a stable allpass filter.]

29 Intermezzo: properties of elementary hyperbolic rotations. With [a b] Theta = [a' b'] we have conservation of energy in the J-inner product: |a'|^2 - |b'|^2 = |a|^2 - |b|^2. Define J = diag(1, -1). With Theta = (1/sqrt(1 - |rho|^2)) [1 -rho; -rho^* 1], |rho| < 1, it follows that Theta is J-unitary: Theta^H J Theta = J. Note also that Theta J Theta^H = J and Theta^{-1} = J Theta^H J. This generalizes to larger J-unitary matrices. The case |rho| = 1 is problematic and should be avoided.
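The corresponding check for a hyperbolic rotation, in the real-valued case (rho is an arbitrary value with |rho| < 1): the ordinary energy is not preserved, but the J-inner product is, and Theta^{-1} = J Theta^H J.

```python
import numpy as np

rho = 0.6
J = np.diag([1.0, -1.0])
Theta = np.array([[1.0, -rho],
                  [-rho, 1.0]]) / np.sqrt(1 - rho**2)

print(np.allclose(Theta.T @ J @ Theta, J))      # J-unitary: Theta^H J Theta = J
a, b = 2.0, -1.0
ap, bp = np.array([a, b]) @ Theta
print(np.isclose(ap**2 - bp**2, a**2 - b**2))   # conservation in the J-inner product
print(np.allclose(np.linalg.inv(Theta), J @ Theta.T @ J))   # Theta^{-1} = J Theta^H J
```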

30 Replacing GEV by SSE: the Schur subspace estimator (SSE). We show how the SSE partitions the space into a positive and a negative subspace, without computing the SVD. For two given matrices A and B, compute Theta (not unique) such that (SSE) [A B] Theta = [A' B' 0], where Theta is a square J-unitary matrix: Theta^H J Theta = J'. Theta decomposes into a series of hyperbolic rotations, so this looks like a hyperbolic QR factorization, where the role of Q is played by Theta.

31 Replacing GEV by SSE. If we square the data, we obtain X_1 X_1^H - X_2 X_2^H = A' A'^H - B' B'^H, which captures the positive and negative part of the difference using factors of minimal dimensions. In our application we had the asymptotic data model: X_1 X_1^H - X_2 X_2^H is asymptotically proportional to A_t (D_{1,t} - D_{2,t}) A_t^H - A_i (D_{2,i} - D_{1,i}) A_i^H (note that the noise covariance is cancelled in the difference). We can show there exists a Theta such that (asymptotically) the factors pick up the target and interference steering vectors; in particular, ran(A') = ran(A_t) and ran(B') = ran(A_i). For finite sample sizes these become good approximations. Thus, the SSE gives directly the required subspaces. But how is it computed?

32 The Schur subspace estimator. Subspace estimation is related to the following problem. Problem: for a given matrix A and tolerance level gamma, find all rank-d approximants Ahat such that ||A - Ahat||_2 <= gamma, where d is equal to the number of singular values of A that are larger than gamma (||.||_2 denotes the matrix 2-norm). The usual solution goes via a truncated SVD (TSVD), but the SVD is expensive to compute, especially for on-line applications.
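The TSVD baseline in a few lines (a generic sketch, independent of the SSE machinery): the rank of the approximant equals the number of singular values above the threshold, and the 2-norm error stays below it.

```python
import numpy as np

def tsvd_approximant(A, gamma):
    """Truncated-SVD approximant: keep only the singular values larger than gamma."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    d = int(np.sum(s > gamma))
    return U[:, :d] @ np.diag(s[:d]) @ Vh[:d], d

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 12))
Ahat, d = tsvd_approximant(A, gamma=1.5)
print(d, np.linalg.norm(A - Ahat, 2) <= 1.5)    # rank d, 2-norm error <= gamma
```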

33 The Schur subspace estimator. [Figure: singular values of the TSVD approximant compared with another valid approximant.] There are many other approximants that do not set singular values to zero. They are still optimal in the 2-norm, though not in the Frobenius norm. A generalized Schur algorithm provides a parametrization of all solutions without computing SVDs, using instead a hyperbolic QR (actually a hyperbolic URV).

34 The Schur subspace estimator (SSE). For two given matrices A and B, compute Theta such that [A B] Theta = [A' B' 0], where [A' B'] has full column rank and Theta is a J-unitary matrix: Theta^H J Theta = J' (a signature matrix). If we square the data, we obtain A A^H - B B^H = A' A'^H - B' B'^H, and A', B' capture the positive and negative part of this difference using factors of minimal dimensions. The decomposition always exists, but Theta, A' and B' are not unique.
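The sizes of A' and B' can be read off from the inertia of the squared difference; a quick numerical check of that bookkeeping (the SSE itself avoids the squaring, as the following slides show):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 20))
B = rng.standard_normal((5, 8))
M = A @ A.T - B @ B.T
evals = np.linalg.eigvalsh(M)
n_pos, n_neg = int(np.sum(evals > 0)), int(np.sum(evals < 0))
# In an SSE with factors of minimal dimensions, A' has n_pos columns and B' has n_neg columns.
print(n_pos, n_neg)
```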

35 HURV: hyperbolic URV decomposition. An example is given by the signed Cholesky factorization A A^H - B B^H = L J' L^H, where L is lower or upper triangular; this corresponds to a hyperbolic QR factorization [A B] Theta = [L 0]. However, this decomposition doesn't always exist: the triangular shape is too restrictive. This motivates introducing a QR factorization of the resulting factor, [A' B'] = Q L, where Q is unitary and L is lower (or upper) triangular. The result is the two-sided decomposition [A B] Theta = Q [L 0] ("hyperbolic URV").

36 Hyperbolic URV decomposition: low-rank approximation. Consider [A gamma*I], where gamma is a threshold, and introduce the SVD of A; assume that A has d singular values larger than gamma and none equal to gamma. We compute the SSE [A gamma*I] Theta = [A' B' 0]; by the inertia, A' has d columns and B' has m - d columns. Theorem 1: this decomposition parametrizes all rank-d approximants Ahat such that ||A - Ahat||_2 <= gamma (matrix 2-norm); in particular, the column span of any such Ahat is of the form ran(A' + B' M) with M contractive.

37 Hyperbolic URV decomposition: example. A valid rank-d approximant is Ahat = A' T_1, where T_1 is the block of Theta^{-1} that maps back to A (so that A = A' T_1 + B' T_2). Indication of proof: the rank is at most d because A' has d columns; the norm property follows from A - Ahat = B' T_2 and the bound ||B' T_2||_2 <= gamma implied by the J-unitarity of Theta.

38 Hyperbolic URV decomposition: subspace estimation. All subspace estimates are given by ran(A' + B' M) with M contractive. We could choose M = 0 and simply use ran(A') as an estimate for the principal column span of A. In particular we will use the SSE-2 choice (defined later), but there are other choices. The TSVD is a special case of such an approximant, corresponding to a decomposition with a specific Theta and a specific M.

39 Hyperbolic URV decomposition: pre-whitened low-rank approximation. More generally, consider [A B] instead of [A gamma*I]. Then all low-rank approximants Ahat whose error is dominated by B (in the sense (A - Ahat)(A - Ahat)^H <= B B^H) have a column span parametrized by ran(A' + B' M), M contractive. In applications, A could be a data matrix (including noise), and B could be an imitation of the noise process, e.g., taken from a nearby frequency.

40 Hyperbolic URV decomposition: relation to GSVD. We can show that the GSVD is a special case of the SSE. The GSVD of two matrices A and B is A = F Sigma_A U_A^H, B = F Sigma_B U_B^H, where the sorting and partitioning F = [F_1 F_2] is such that Sigma_{A,1} > Sigma_{B,1} and Sigma_{A,2} < Sigma_{B,2} (for simplicity of notation, assume there is no common null space, so that the corresponding block is missing). Compare this to the SSE [A B] Theta = [A' B' 0].

41 Hyperbolic URV decomposition. Squaring the GSVD, we have the GEV: A A^H - B B^H = F (Sigma_A^2 - Sigma_B^2) F^H, partitioned such that Sigma_{A,1}^2 - Sigma_{B,1}^2 > 0 and Sigma_{A,2}^2 - Sigma_{B,2}^2 < 0. Squaring the SSE gives A A^H - B B^H = A' A'^H - B' B'^H. We can show there exists a Theta such that, in particular, ran(A') = ran(F_1) and ran(B') = ran(F_2).

42 SURV updating. The signed URV (SURV) is a stable algorithm to compute and update the HURV. The decomposition is not unique, and we will subsequently place an additional constraint that leads to favorable properties. Elementary rotations: let J_1 = diag(j_a, j_b) be an (unsorted) signature matrix, and similarly J_2. A matrix Theta is an elementary rotation if it satisfies Theta^H J_1 Theta = J_2. Given a row [a b] and an input signature J_1, we can determine Theta such that [a b] Theta = [r 0]. The output signature J_2 follows from the sign of j_a |a|^2 + j_b |b|^2 and the conservation of inertia.

43 SURV updating: elementary rotations Theta such that [a b] Theta = [r 0].
1. If j_a = -j_b and |a| > |b| (hyperbolic rotation): r = sqrt(|a|^2 - |b|^2); no sign change in the output signature.
2. If j_a = -j_b and |a| < |b| (hyperbolic rotation): r = sqrt(|b|^2 - |a|^2); the two signs in the output signature are swapped (sign reversal).
3. If j_a = j_b (Givens rotation): r = sqrt(|a|^2 + |b|^2); no sign change.
Case 1 or 2 (hyperbolic rotation): if |a| is close to |b|, then Theta is unbounded, but the result [r 0] is well-defined; the case |a| = |b| should be avoided.
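A real-valued sketch of this case analysis (the complex case additionally needs conjugates; the function name and interface are ours, not from the SURV papers):

```python
import numpy as np

def elementary_rotation(a, b, ja, jb):
    """Return (Theta, r, ja_out, jb_out) with [a b] @ Theta = [r 0] and
    Theta.T @ diag(ja, jb) @ Theta = diag(ja_out, jb_out).  Real case only."""
    if ja == jb:                                # case 3: Givens rotation, no sign change
        r = np.hypot(a, b)
        c, s = (a / r, b / r) if r > 0 else (1.0, 0.0)
        return np.array([[c, -s], [s, c]]), r, ja, jb
    if abs(a) > abs(b):                         # case 1: hyperbolic, no sign change
        r = np.sqrt(a**2 - b**2)
        c, s = a / r, b / r
        return np.array([[c, -s], [-s, c]]), r, ja, jb
    if abs(a) < abs(b):                         # case 2: hyperbolic, sign reversal
        r = np.sqrt(b**2 - a**2)
        c, s = b / r, a / r
        return np.array([[-s, c], [c, -s]]), r, jb, ja
    raise ValueError("|a| = |b| with opposite signatures: rotation is unbounded")

# quick check of case 2 (sign reversal):
Theta, r, jo1, jo2 = elementary_rotation(1.0, 2.0, +1, -1)
print(np.allclose(np.array([1.0, 2.0]) @ Theta, [r, 0.0]), (jo1, jo2))   # True (-1, 1)
```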

44 SURV updating. Suppose we have already computed the decomposition [A B] Theta = Q [L 0], where L is square, lower triangular and sorted according to its signature J_L. To update, we want to find a new factorization after a data vector x is appended, with negative signature (downdate) or with positive signature (update). It suffices to find rotations that return [L x~] to the form Q~ [L_new 0], where x~ = Q^H x carries signature -1 (downdate) or +1 (update). Denote the signature of L_new by J_L,new. The rank of the principal subspace before the update is the number of positive entries in J_L; after the update it is the number of positive entries in J_L,new.
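To make the updating concrete, here is a simplified, self-contained sketch of the core mechanism for the square, full-rank case: a rank-one update (Givens column rotations) or downdate (hyperbolic column rotations) of a lower-triangular factor. The full SURV update additionally tracks the signature of each column and handles rank changes via the GRCR/GRR/swap steps of the following slides; none of that bookkeeping is shown here.

```python
import numpy as np

def rank_one_update(L, x, sign):
    """Return Lnew lower triangular with Lnew Lnew^T = L L^T + sign * x x^T.
    sign=+1: Givens column rotations; sign=-1: hyperbolic column rotations
    (the downdate requires L L^T - x x^T to remain positive definite)."""
    L, x = np.array(L, dtype=float), np.array(x, dtype=float)
    for i in range(L.shape[0]):
        a, b = L[i, i], x[i]
        if sign > 0:
            r = np.hypot(a, b); c, s = a / r, b / r          # Givens
        else:
            r = np.sqrt(a**2 - b**2); c, s = a / r, b / r    # hyperbolic (|a| > |b| needed)
        col, xi = L[i:, i].copy(), x[i:].copy()
        L[i:, i] = c * col + sign * s * xi                   # rotate column i of L against x
        x[i:]    = -s * col + c * xi                         # x[i] becomes exactly 0
    return L

# quick check
R = np.array([[4.0, 1.0], [1.0, 3.0]])
L = np.linalg.cholesky(R)
x = np.array([0.5, 0.2])
Lu = rank_one_update(L, x, +1)
Ld = rank_one_update(Lu, x, -1)
print(np.allclose(Lu @ Lu.T, R + np.outer(x, x)), np.allclose(Ld @ Ld.T, R))
```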

45 SURV updating: zeroing schemes. GCR: Givens column rotation. Apply only if the signature of x equals that of the k-th column of L. 1. Compute a Givens rotation Theta_k such that [l_kk x_k] Theta_k = [r 0]. 2. Apply Theta_k to the k-th column of L and to x (no sign change).

46 SURV updating: zeroing schemes. HCR: hyperbolic column rotation. Apply if the signature of x differs from that of the k-th column of L. 1. Compute a hyperbolic rotation Theta_k such that [l_kk x_k] Theta_k = [r 0]. 2. Apply Theta_k to the k-th column of L and to x, and update the signatures following the elementary-rotation rules (possible sign change). Try to avoid this operation, as Theta_k can be very large (unbounded if |l_kk| = |x_k|).

47 SURV updating: zeroing schemes. GRCR: Givens row and column rotations. Apply only if the signatures differ (used to avoid a hyperbolic rotation). 1. Compute a Givens row rotation. 2. Apply it to the rows of L; apply the corresponding rotation to the columns of Q. 3. Compute a Givens column rotation such that the entry of x is zeroed against a column of matching signature. 4. Apply it to the columns of [L x] (no sign change).

48 SURV updating: zeroing schemes. GRR: Givens row rotations, used to zero an entry of L. Apply only when the corresponding entries of x are already zero. 1. Compute a Givens row rotation such that the target entry of L is zeroed. 2. Apply it to the rows of L; apply the corresponding rotation to the columns of Q. This is used as a clean-up operation after x has been zeroed.

49 SURV updating: updating sequence (first case). The entries of x are zeroed using GCR, GRCR or, if needed, HCR. Case (no sign change): no rank change; done. Case (sign reversal): rank decrease; continue with the signature sorting steps on the next slide.

50 SURV updating: signature sorting steps. [Figure: sequence of GRR operations and a swap that restore the sorted signature.]

51 SURV updating: updating sequence (second case), using GRCR and swap steps. Tentative rank increase; then continue as in the first case. [Figure: GRCR and swap steps.]

52 SURV updating. At most a single hyperbolic rotation is used per update (corresponding to a single rank-change decision); it involves one entry of x and the corresponding diagonal entry of L. If the two are nearly equal in magnitude, Theta is unbounded, but the result is well defined, and this unbounded Theta acts only on columns for which the other entries are already zero. Thus, L will remain bounded. This is one of the keys to showing numerical stability, despite the use of hyperbolic rotations. Computational complexity: O(m^2) per update. Only L and its signature are tracked/stored.

53 SURV updating: SSE-2 definition and properties. The HURV decomposition is not unique, and we can place additional constraints to reach desired properties. All valid subspace estimates have the form ran(A' + B' M), where M is a contractive matrix that parametrizes all solutions. Given a specific choice, it is always possible to transform, using additional rotations, to a new decomposition; i.e., the same subspace is obtained using new factors and a new parameter M.

54 SURV updating. The Schur subspace estimate SSE-2 [2] is obtained for a particular choice of the parameter M. This is interesting because of the following. Theorem 2: given an HURV decomposition, the corresponding SSE-2 approximant is bounded in norm by the input data. This shows that the estimator is unbiased and bounded by the input data. The SSE-2 is still not unique. The SVD subspace estimate is a special case of an SSE-2.

55 SURV updating: the SURV algorithm provides an SSE-2 decomposition. Idea: use the available freedom in Theta to add constraints that ensure the SSE-2 property. Theorem 3: for given matrices A and B of compatible sizes, there exist matrices Q, Theta and L such that [A B] Theta = Q [L 0], where Q is unitary, Theta is an invertible (in fact J-unitary) matrix, and L is lower triangular and sorted according to its signature. Let A' consist of the columns of Q L carrying positive signature; then ran(A') is an SSE-2 subspace estimate.

56 SURV updating. Corollary 1: for this decomposition, Theta is bounded if L is nonsingular. In any case, the factors of the decomposition are bounded by the inputs, even if Theta may be unbounded; also the corresponding subspaces are well-defined. The norm properties could be key to a formal proof of the numerical stability of this algorithm. Theorem 4: the SURV algorithm presented before provides the required decomposition (without explicitly computing or storing Q and Theta).

57 Conclusions. The GSVD is a nice tool for separating partially overlapping data packets. The SURV is a nice tool to replace the GSVD in subspace tracking applications. Similar algorithms are applicable for separating airplane signals (SSR system) and RFID signals, and for suppressing Bluetooth interference from WiFi signals.

58 Background material: references.
[1] J. Götze and A.-J. van der Veen, "On-line subspace estimation using a Schur-type method," IEEE Trans. Signal Processing, vol. 44, no. 6, June 1996.
[2] A.-J. van der Veen, "A Schur method for low-rank matrix approximation," SIAM J. Matrix Anal. Appl., vol. 17, no. 1, 1996.
[3] M. Zhou and A.-J. van der Veen, "Stable subspace tracking algorithm based on a signed URV decomposition," IEEE Trans. Signal Processing, vol. 60, no. 6, June 2012.
[4] M. Zhou and A.-J. van der Veen, "Blind beamforming techniques for Automatic Identification System using GSVD and tracking," in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing (ICASSP 2014), Florence, Italy, May 2014.
