Part-II Parametric Signal Modeling and Linear Prediction Theory 3. Linear Prediction
1 Part-II Parametric Signal Modeling and Linear Prediction Theory, 3. Linear Prediction
Electrical & Computer Engineering, University of Maryland, College Park
Acknowledgment: ENEE630 slides were based on class notes developed by Profs. K.J. Ray Liu and Min Wu. The LaTeX slides were made by Prof. Min Wu and Mr. Wei-Hong Chuang.
Updated: November 11
2 Review of Last Section: FIR Wiener Filtering
Two perspectives leading to the optimal filter's condition (the Normal Equation):
1. Write $J(\mathbf{a})$ as a perfect square in $\mathbf{a}$.
2. Set $\partial J / \partial a_k = 0$: the principle of orthogonality $E[e[n]\, x^*[n-k]] = 0$ for $k = 0, \dots, M-1$.
3 Recap: Principle of Orthogonality
$E[e[n]\, x^*[n-k]] = 0$ for $k = 0, \dots, M-1$, i.e.,
$E[d[n]\, x^*[n-k]] = \sum_{l=0}^{M-1} a_l\, E[x[n-l]\, x^*[n-k]]$, i.e., $r_{dx}(k) = \sum_{l=0}^{M-1} a_l\, r_x(k-l)$.
Normal Equation: $\mathbf{p} = \mathbf{R}^T \mathbf{a}$.
$J_{\min} = \mathrm{Var}(d[n]) - \mathrm{Var}(\hat d[n])$, where
$\mathrm{Var}(\hat d[n]) = E[\hat d[n]\, \hat d^*[n]] = E[\mathbf{a}^T \mathbf{x}[n] \mathbf{x}^H[n] \mathbf{a}^*] = \mathbf{a}^T \mathbf{R}_x \mathbf{a}^*$.
Bringing in the N.E. for $\mathbf{a}$: $\mathrm{Var}(\hat d[n]) = \mathbf{a}^T \mathbf{p}^* = \mathbf{p}^H \mathbf{R}^{-1} \mathbf{p}$.
One may also use the vector form to derive the N.E.: set the gradient $\nabla_{\mathbf{a}} J = 0$.
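To make the recap concrete, here is a minimal numerical sketch (the system `h_true`, sample count, and all variable names are illustrative assumptions, not from the lecture). It builds sample estimates of R and p and solves the Normal Equation; for a real-valued process the transpose in p = R^T a is immaterial since R is then symmetric:

```python
import numpy as np
from scipy.linalg import toeplitz

# Sketch: estimate d[n] from the M most recent samples of x[n] by solving
# the Normal Equation with sample correlation estimates (real-valued case).
rng = np.random.default_rng(0)
N, M = 100_000, 4
x = rng.standard_normal(N)
h_true = np.array([0.9, -0.5, 0.3, 0.1])          # assumed unknown system
d = np.convolve(x, h_true)[:N] + 0.05 * rng.standard_normal(N)

r_x = np.array([np.dot(x[: N - k], x[k:]) / N for k in range(M)])  # r_x(k)
p = np.array([np.dot(d[k:], x[: N - k]) / N for k in range(M)])    # r_dx(k)

a = np.linalg.solve(toeplitz(r_x), p)             # Normal Equation: R a = p
print(a)                                          # ~= h_true
```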
4 Forward Linear Prediction
Recall from last section the FIR Wiener filter $W(z) = \sum_{k=0}^{M-1} a_k z^{-k}$.
Let $c_k \triangleq a_k^*$ (i.e., $c_k$ represents the filter coefficients and helps us avoid many conjugates in the normal equation).
Given $u[n-1], u[n-2], \dots, u[n-M]$, we are interested in estimating $u[n]$ with a linear predictor.
This structure is called a tapped delay line: the individual outputs of each delay element are tapped out and fed into the multipliers of the filter/predictor.
5 Forward Linear Prediction
$\hat u[n \mid \mathcal{S}_{n-1}] = \sum_{k=1}^{M} c_k^*\, u[n-k] = \mathbf{c}^H \mathbf{u}[n-1]$
$\mathcal{S}_{n-1}$ denotes the $M$-dimensional space spanned by the samples $u[n-1], \dots, u[n-M]$, and
$\mathbf{c} = [c_1, c_2, \dots, c_M]^T$, $\quad \mathbf{u}[n-1] = [u[n-1], \dots, u[n-M]]^T$.
$\mathbf{u}[n-1]$ is the vector form of the tap inputs and plays the role of $\mathbf{x}[n]$ in the general Wiener filter.
6 Forward Prediction Error
The forward prediction error: $f_M[n] = u[n] - \hat u[n \mid \mathcal{S}_{n-1}]$ (corresponding to $e[n] = d[n] - \hat d[n]$ in the general Wiener filter notation).
The minimum mean-squared prediction error: $P_M = E[\,|f_M[n]|^2\,]$.
Readings for LP: Haykin 4th Ed.
7 Optimal Weight Vector
To obtain the optimal weight vector $\mathbf{c}$, apply Wiener filtering theory:
1. Obtain the correlation matrix $\mathbf{R} = E[\mathbf{u}[n-1]\mathbf{u}^H[n-1]] = E[\mathbf{u}[n]\mathbf{u}^H[n]]$ (by stationarity), where $\mathbf{u}[n] = [u[n], u[n-1], \dots, u[n-M+1]]^T$.
2. Obtain the cross-correlation vector between the tap inputs and the desired output $d[n] = u[n]$:
$E[\mathbf{u}[n-1]\, u^*[n]] = [r(-1), r(-2), \dots, r(-M)]^T \triangleq \mathbf{r}$.
8 Optimal Weight Vector
3. Thus the Normal Equation for FLP is $\mathbf{R}\mathbf{c} = \mathbf{r}$.
The prediction error is $P_M = r(0) - \mathbf{r}^H \mathbf{c}$.
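A small numerical sketch of steps 1-3 (the AR(2) test signal and variable names are assumptions): fitting an order-M forward predictor to a synthetic AR(2) process should recover its parameters, in line with the Yule-Walker connection discussed next:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import lfilter

# FLP Normal Equation R c = r on a synthetic AR(2) process
# u[n] = 0.75 u[n-1] - 0.5 u[n-2] + w[n], with w[n] unit-variance white noise.
rng = np.random.default_rng(1)
N, M = 200_000, 2
u = lfilter([1.0], [1.0, -0.75, 0.5], rng.standard_normal(N))

# Biased sample autocorrelation r(k), k = 0..M (real case: r(-k) = r(k))
r = np.array([np.dot(u[: N - k], u[k:]) / N for k in range(M + 1)])

c = np.linalg.solve(toeplitz(r[:M]), r[1:])   # Normal Equation: R c = r
P_M = r[0] - r[1:] @ c                        # prediction-error power
print(c)                                      # ~= [0.75, -0.5]
print(P_M)                                    # ~= 1, the driving-noise variance
```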
9 Relation: N.E. for FLP vs. Yule-Walker Equation for AR
The Normal Equation for FLP, $\mathbf{R}\mathbf{c} = \mathbf{r}$, has the same form as the Yule-Walker equation for an AR process.
10 Relation: N.E. for FLP vs. Yule-Walker Equation for AR
If forward linear prediction is applied to an AR process of known model order M and optimized in the MSE sense, its tap weights in theory take on the same values as the corresponding parameters of the AR process. This is not surprising: the equation defining the forward predictor and the difference equation defining the AR process have the same mathematical form.
When the u[n] process is not AR, the predictor provides only an approximation of the process.
This provides a way to test whether u[n] is an AR process (by examining the whiteness of the prediction error e[n]) and, if so, to determine its order and AR parameters.
Question: what is the optimal predictor for {u[n]} = AR(p) when p < M?
11 Forward-Prediction-Error Filter
$f_M[n] = u[n] - \mathbf{c}^H \mathbf{u}[n-1]$
Let $a_{M,k} = \begin{cases} 1 & k = 0 \\ -c_k & k = 1, 2, \dots, M \end{cases}$, i.e., $\mathbf{a}_M = [a_{M,0}, \dots, a_{M,M}]^T$. Then
$f_M[n] = \sum_{k=0}^{M} a_{M,k}^*\, u[n-k] = \mathbf{a}_M^H \begin{bmatrix} u[n] \\ \vdots \\ u[n-M] \end{bmatrix}$
12 Augmented Normal Equation for FLP
From the above results:
$\mathbf{R}\mathbf{c} = \mathbf{r}$ (Normal Equation, or Wiener-Hopf equation), and $P_M = r(0) - \mathbf{r}^H \mathbf{c}$ (prediction error).
Put together:
$\underbrace{\begin{bmatrix} r(0) & \mathbf{r}^H \\ \mathbf{r} & \mathbf{R}_M \end{bmatrix}}_{\mathbf{R}_{M+1}} \begin{bmatrix} 1 \\ -\mathbf{c} \end{bmatrix} = \begin{bmatrix} P_M \\ \mathbf{0} \end{bmatrix}$
Augmented N.E. for FLP: $\mathbf{R}_{M+1}\, \mathbf{a}_M = \begin{bmatrix} P_M \\ \mathbf{0} \end{bmatrix}$
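The augmented equation is easy to check numerically; a sketch, assuming the AR(1)-type autocorrelation r(k) = 0.6^k as the example sequence (a valid autocorrelation, so the Toeplitz matrix is positive definite):

```python
import numpy as np
from scipy.linalg import toeplitz

# Check R_{M+1} a_M = [P_M, 0, ..., 0]^T for r(k) = 0.6^k, the
# autocorrelation of a real AR(1) process with pole 0.6.
M = 3
r = 0.6 ** np.arange(M + 1)                   # r(0), ..., r(M)

c = np.linalg.solve(toeplitz(r[:M]), r[1:])   # N.E.:  R c = r
P_M = r[0] - r[1:] @ c
a_M = np.concatenate(([1.0], -c))             # prediction-error filter [1, -c]

print(toeplitz(r) @ a_M)                      # ~= [P_M, 0, 0, 0]
print(P_M)                                    # = 1 - 0.6**2 = 0.64
```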
13 Summary of Forward Linear Prediction
Quantity | General Wiener | Forward LP
Tap input | $\mathbf{x}[n]$ | $\mathbf{u}[n-1]$
Desired response | $d[n]$ | $u[n]$
(conj) Weight vector | $\mathbf{a}$ | $\mathbf{c}$
Estimated sig | $\hat d[n]$ | $\hat u[n \mid \mathcal{S}_{n-1}]$
Estimation error | $e[n]$ | $f_M[n]$
Correlation matrix | $\mathbf{R} = E[\mathbf{x}[n]\mathbf{x}^H[n]]$ | $\mathbf{R} = E[\mathbf{u}[n-1]\mathbf{u}^H[n-1]]$
Cross-corr vector | $\mathbf{p}$ | $\mathbf{r}$
MMSE | $J_{\min}$ | $P_M = r(0) - \mathbf{r}^H\mathbf{c}$
Normal Equation | $\mathbf{p} = \mathbf{R}^T\mathbf{a}$ | $\mathbf{R}\mathbf{c} = \mathbf{r}$
Augmented N.E. | - | $\mathbf{R}_{M+1}\mathbf{a}_M = [P_M, \mathbf{0}^T]^T$
14 Backward Linear Prediction
Given $u[n], u[n-1], \dots, u[n-M+1]$, we are interested in estimating $u[n-M]$.
Backward prediction error: $b_M[n] = u[n-M] - \hat u[n-M \mid \mathcal{S}_n]$, where $\mathcal{S}_n = \mathrm{span}\{u[n], u[n-1], \dots, u[n-M+1]\}$.
Minimize the mean-square prediction error $P_{M,\mathrm{BLP}} = E[\,|b_M[n]|^2\,]$.
15 Backward Linear Prediction
Let $\mathbf{g}$ denote the optimal weight vector (conjugate) of the BLP, i.e., $\hat u[n-M] = \sum_{k=1}^{M} g_k^*\, u[n+1-k]$.
To solve for $\mathbf{g}$, we need:
1. The correlation matrix $\mathbf{R} = E[\mathbf{u}[n]\mathbf{u}^H[n]]$.
2. The cross-correlation vector $E[\mathbf{u}[n]\, u^*[n-M]] = [r(M), r(M-1), \dots, r(1)]^T \triangleq \mathbf{r}^{B*}$.
Normal Equation for BLP: $\mathbf{R}\mathbf{g} = \mathbf{r}^{B*}$.
The BLP prediction error: $P_{M,\mathrm{BLP}} = r(0) - \mathbf{r}^{BT}\mathbf{g}$.
16 Relations between FLP and BLP
Recall the N.E. for FLP: $\mathbf{R}\mathbf{c} = \mathbf{r}$.
Rearrange the N.E. for BLP in backward order: $\mathbf{R}^T \mathbf{g}^B = \mathbf{r}^*$. Conjugating, $\mathbf{R}^H \mathbf{g}^{B*} = \mathbf{r}$, i.e., $\mathbf{R}\, \mathbf{g}^{B*} = \mathbf{r}$ (since $\mathbf{R}$ is Hermitian).
Comparing with the FLP: $\mathbf{c} = \mathbf{g}^{B*}$, or equivalently $\mathbf{g} = \mathbf{c}^{B*}$.
By reversing the order and complex-conjugating $\mathbf{c}$, we obtain $\mathbf{g}$.
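A sketch checking the reverse-and-conjugate relation on a complex-valued example (the AR(1)-type autocorrelation r(k) = rho^k r(0) for k >= 0, with complex pole rho, is an assumption chosen so the correlation matrix is guaranteed positive definite):

```python
import numpy as np
from scipy.linalg import toeplitz

# Verify g = c^{B*} for a complex AR(1)-type autocorrelation with pole rho.
rho, M = 0.6 * np.exp(1j * np.pi / 4), 3
r_pos = rho ** np.arange(M + 1) / (1 - abs(rho) ** 2)   # r(0), ..., r(M)

# R = E[u[n-1] u^H[n-1]]: entry (i, j) is r(j - i); first column is r(-i),
# so we pass the conjugated sequence (scipy fills the row with its conjugate).
R = toeplitz(r_pos[:M].conj())

c = np.linalg.solve(R, r_pos[1:].conj())      # FLP:  R c = r, with r_k = r(-k)
g = np.linalg.solve(R, r_pos[1:][::-1])       # BLP:  R g = r^{B*}

print(np.allclose(g, c[::-1].conj()))         # True: reverse and conjugate
```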
17 Relations between FLP and BLP
$P_{M,\mathrm{BLP}} = r(0) - \mathbf{r}^{BT}\mathbf{g} = r(0) - \mathbf{r}^{BT}\mathbf{c}^{B*} = r(0) - (\mathbf{r}^H\mathbf{c})^* = r(0) - \mathbf{r}^H\mathbf{c} = P_{M,\mathrm{FLP}}$,
where the last step uses the fact that $\mathbf{r}^H\mathbf{c}$ is a real scalar.
This relation is not surprising: the process is w.s.s. (so that $r(-k) = r^*(k)$), and the optimal prediction error depends only on the statistical properties of the process.
Recall from Wiener filtering: $J_{\min} = \sigma_d^2 - \mathbf{p}^H\mathbf{R}^{-1}\mathbf{p}$. For the FLP the subtracted term is $\mathbf{r}^H\mathbf{R}^{-1}\mathbf{r}$; for the BLP it is $(\mathbf{r}^{B*})^H\mathbf{R}^{-1}\mathbf{r}^{B*} = \mathbf{r}^T\mathbf{R}^{-T}\mathbf{r}^* = (\mathbf{r}^H\mathbf{R}^{-1}\mathbf{r})^* = \mathbf{r}^H\mathbf{R}^{-1}\mathbf{r}$.
18 Backward-Prediction-Error Filter
$b_M[n] = u[n-M] - \sum_{k=1}^{M} g_k^*\, u[n+1-k]$
Using the $a_{M,k}$ notation defined earlier and $g_k = -a_{M,M+1-k}^*$:
$b_M[n] = \sum_{k=0}^{M} a_{M,M-k}\, u[n-k] = \mathbf{a}_M^{BT} \begin{bmatrix} u[n] \\ \vdots \\ u[n-M] \end{bmatrix}$, where $\mathbf{a}_M = [a_{M,0}, \dots, a_{M,M}]^T$.
19 Augmented Normal Equation for BLP
Bring together $\mathbf{R}\mathbf{g} = \mathbf{r}^{B*}$ and $P_M = r(0) - \mathbf{r}^{BT}\mathbf{g}$:
$\underbrace{\begin{bmatrix} \mathbf{R}_M & \mathbf{r}^{B*} \\ \mathbf{r}^{BT} & r(0) \end{bmatrix}}_{\mathbf{R}_{M+1}} \begin{bmatrix} -\mathbf{g} \\ 1 \end{bmatrix} = \begin{bmatrix} \mathbf{0} \\ P_M \end{bmatrix}$
Augmented N.E. for BLP: $\mathbf{R}_{M+1}\, \mathbf{a}_M^{B*} = \begin{bmatrix} \mathbf{0} \\ P_M \end{bmatrix}$
20 Summary of Backward Linear Prediction
Quantity | General Wiener | Forward LP | Backward LP
Tap input | $\mathbf{x}[n]$ | $\mathbf{u}[n-1]$ | $\mathbf{u}[n]$
Desired response | $d[n]$ | $u[n]$ | $u[n-M]$
(conj) Weight vector | $\mathbf{a}$ | $\mathbf{c}$ | $\mathbf{g}$
Estimated sig | $\hat d[n]$ | $\hat u[n \mid \mathcal{S}_{n-1}]$ | $\hat u[n-M \mid \mathcal{S}_n]$
Estimation error | $e[n]$ | $f_M[n]$ | $b_M[n]$
Correlation matrix | $E[\mathbf{x}[n]\mathbf{x}^H[n]]$ | $E[\mathbf{u}[n-1]\mathbf{u}^H[n-1]]$ | $E[\mathbf{u}[n]\mathbf{u}^H[n]]$
Cross-corr vector | $\mathbf{p}$ | $\mathbf{r}$ | $\mathbf{r}^{B*}$
MMSE | $J_{\min}$ | $P_M = r(0) - \mathbf{r}^H\mathbf{c}$ | $P_M = r(0) - \mathbf{r}^{BT}\mathbf{g}$
Normal Equation | $\mathbf{p} = \mathbf{R}^T\mathbf{a}$ | $\mathbf{R}\mathbf{c} = \mathbf{r}$ | $\mathbf{R}\mathbf{g} = \mathbf{r}^{B*}$
Augmented N.E. | - | $\mathbf{R}_{M+1}\mathbf{a}_M = [P_M, \mathbf{0}^T]^T$ | $\mathbf{R}_{M+1}\mathbf{a}_M^{B*} = [\mathbf{0}^T, P_M]^T$
21 Whitening Property of Linear Prediction
(Ref: Haykin 4th Ed. 3.4, Property (5))
Conceptually, the best predictor extracts all the predictable structure that a set of (past) given values carries about the future value, leaving only the unforeseeable part as the prediction error.
Also recall the principle of orthogonality: the prediction error is statistically uncorrelated with the samples used in the prediction.
As we increase the order of the prediction-error filter, the correlation between its adjacent outputs is reduced. If the order is high enough, the output errors become approximately a white process (i.e., the input is "whitened").
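A sketch of this behavior (the AR(2) test signal, sizes, and the helper name `prediction_error_filter` are assumptions): the lag-1 correlation of the prediction error is clearly nonzero for an under-modeled predictor and nearly zero once the order reaches the true model order:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import lfilter

rng = np.random.default_rng(2)
u = lfilter([1.0], [1.0, -1.2, 0.8], rng.standard_normal(200_000))  # AR(2)

def prediction_error_filter(u, M):
    """Order-M forward prediction-error filter a_M = [1, -c] via the N.E."""
    N = len(u)
    r = np.array([np.dot(u[: N - k], u[k:]) / N for k in range(M + 1)])
    c = np.linalg.solve(toeplitz(r[:M]), r[1:])
    return np.concatenate(([1.0], -c))

for M in (1, 2, 4):
    f = lfilter(prediction_error_filter(u, M), [1.0], u)  # error f_M[n]
    rho1 = np.dot(f[:-1], f[1:]) / np.dot(f, f)           # lag-1 correlation
    print(M, round(rho1, 4))     # clearly nonzero for M=1, near 0 for M >= 2
```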
22 Analysis and Synthesis
From the forward prediction results on the $\{u[n]\}$ process:
Analysis: $u[n] + a_{M,1}^* u[n-1] + \dots + a_{M,M}^* u[n-M] = f_M[n]$
Synthesis: $u[n] = -a_{M,1}^* u[n-1] - \dots - a_{M,M}^* u[n-M] + v[n]$
Here $v[n]$ may be a quantized version of $f_M[n]$, or regenerated from white noise.
If the $\{u[n]\}$ sequence has high correlation between adjacent samples, then $f_M[n]$ will have a much smaller dynamic range than $u[n]$.
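The matched pair is directly visible in code; a minimal sketch, assuming a particular prediction-error filter `a_M` (here the true one for the test signal):

```python
import numpy as np
from scipy.signal import lfilter

# Analysis is the FIR prediction-error filter A(z); synthesis is the all-pole
# filter 1/A(z). Driving synthesis with v[n] = f_M[n] reconstructs u[n].
rng = np.random.default_rng(3)
u = lfilter([1.0], [1.0, -0.9, 0.4], rng.standard_normal(1000))  # test signal
a_M = np.array([1.0, -0.9, 0.4])      # assumed prediction-error filter [1, -c]

f = lfilter(a_M, [1.0], u)            # analysis:  f[n] = A(z) u[n]
u_rec = lfilter([1.0], a_M, f)        # synthesis: u[n] = (1/A(z)) f[n]

print(np.allclose(u, u_rec))          # True: the pair is matched
print(np.var(f) / np.var(u))          # error power is much smaller than u's
```

Since synthesis is the exact inverse of analysis (both with zero initial conditions here), reconstruction is exact; in a codec, v[n] differs from f_M[n] only by quantization error.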
23 Compression Tool #3: Predictive Coding
Recall two compression tools from Part-1: (1) lossless: decimate a bandlimited signal; (2) lossy: quantization.
Tool #3 is linear prediction: first find the best predictor for a chunk of approximately stationary samples, encode the first sample, then run the prediction and encode the prediction residues (as well as the prediction parameters).
The analysis and synthesis structures of linear prediction form a matched pair. This is the basic principle behind Linear Predictive Coding (LPC) for transmission and reconstruction of digital speech signals.
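A toy open-loop version of this idea (the predictor, quantizer step, and signal model are all assumed, not from the slides): the encoder transmits a quantized residue, and the decoder rebuilds the signal with the synthesis filter:

```python
import numpy as np
from scipy.signal import lfilter

# Toy open-loop predictive coder. (Practical DPCM/LPC coders quantize inside
# the prediction loop so encoder and decoder predictions stay in lockstep.)
rng = np.random.default_rng(4)
u = lfilter([1.0], [1.0, -0.95], rng.standard_normal(2000))  # correlated signal
a = np.array([1.0, -0.95])            # predictor known to encoder and decoder
step = 0.05                           # uniform quantizer step size (assumed)

f = lfilter(a, [1.0], u)              # encoder: prediction residue
v = step * np.round(f / step)         # encoder: quantized residue to transmit
u_rec = lfilter([1.0], a, v)          # decoder: synthesis from v[n]

print(np.var(f) / np.var(u))          # residue power << signal power
print(np.std(u - u_rec))              # error on the order of the quantizer step
```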
24 Linear Prediction: Analysis
$u[n] + a_{M,1}^* u[n-1] + \dots + a_{M,M}^* u[n-M] = f_M[n]$
If $\{f_M[n]\}$ is white (i.e., the correlation among the $\{u[n], u[n-1], \dots\}$ values has been completely extracted), then the process $\{u[n]\}$ can be statistically characterized by the vector $\mathbf{a}_M$, plus the mean and variance of $f_M[n]$.
25 Linear Prediction: Synthesis
$u[n] = -a_{M,1}^* u[n-1] - \dots - a_{M,M}^* u[n-M] + v[n]$
If $\{v[n]\}$ is a white noise process, the synthesis output $\{u[n]\}$ of the linear-prediction structure is an AR process with parameters $\{a_{M,k}\}$.
26 LPC Encoding of Speech Signals
Partition the speech signal into frames such that within a frame it is approximately stationary.
Analyze each frame to obtain a compact representation of the linear prediction parameters and of some parameters characterizing the prediction residue $f_M[n]$ (if more bandwidth is available and higher quality is desirable, we may also include a coarse representation of $f_M[n]$ by quantization).
This gives a much more compact representation than simple digitization (PCM coding): e.g., 64 kbps reduced to 2.4-4.8 kbps.
A decoder uses the synthesis structure to reconstruct the speech signal with a suitable driving sequence: a periodic impulse train for voiced sounds, white noise for fricative sounds, or the quantized $f_M[n]$ if bandwidth allows.
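A minimal frame-level sketch of the analysis/synthesis steps (the frame length, order M = 10, pitch period, and the helper name `lpc_frame` are illustrative assumptions, not the details of any actual LPC standard):

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import lfilter

def lpc_frame(frame, M=10):
    """Fit an order-M predictor to one frame: return ([1, -c], residual power)."""
    N = len(frame)
    r = np.array([np.dot(frame[: N - k], frame[k:]) / N for k in range(M + 1)])
    c = np.linalg.solve(toeplitz(r[:M]), r[1:])
    return np.concatenate(([1.0], -c)), r[0] - r[1:] @ c

# Stand-in for one "voiced" speech frame: an impulse train (pitch period 80
# samples) driving an all-pole vocal-tract model, plus a little noise.
rng = np.random.default_rng(5)
excitation = np.zeros(400)
excitation[::80] = 1.0
frame = lfilter([1.0], [1.0, -1.3, 0.9], excitation)
frame = frame + 0.01 * rng.standard_normal(400)

a_M, P_M = lpc_frame(frame, M=10)          # encoder: compact frame description
resynth = lfilter([1.0], a_M, excitation)  # decoder: drive synthesis filter
print(a_M[:3], P_M)                        # roughly recovers the model poles
```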
27 Review: Recursive Relation of Correlation Matrix
The augmented normal equations above use the two partitionings of the $(M+1) \times (M+1)$ correlation matrix:
$\mathbf{R}_{M+1} = \begin{bmatrix} r(0) & \mathbf{r}^H \\ \mathbf{r} & \mathbf{R}_M \end{bmatrix} = \begin{bmatrix} \mathbf{R}_M & \mathbf{r}^{B*} \\ \mathbf{r}^{BT} & r(0) \end{bmatrix}$
28 Matrix Inversion Lemma (for Homework)
If $\mathbf{A} = \mathbf{B}^{-1} + \mathbf{C}\mathbf{D}^{-1}\mathbf{C}^H$ (with $\mathbf{A}$, $\mathbf{B}$, $\mathbf{D}$ nonsingular), then $\mathbf{A}^{-1} = \mathbf{B} - \mathbf{B}\mathbf{C}\,(\mathbf{D} + \mathbf{C}^H\mathbf{B}\mathbf{C})^{-1}\mathbf{C}^H\mathbf{B}$.
More information