Computer exercise 2: Least Mean Square (LMS)



This computer exercise deals with the LMS algorithm, which is derived from the method of steepest descent by replacing R = E{u(n)u^H(n)} and p = E{u(n)d*(n)} with the instantaneous estimates R̂(n) = u(n)u^H(n) and p̂(n) = u(n)d*(n), respectively.

Computer exercise 2.1

The recursive equations for the error and the filter coefficients of the least mean square algorithm are given by

    e(n) = d(n) − ŵ^H(n)u(n)
    ŵ(n+1) = ŵ(n) + µ u(n) e*(n).

Write a function in Matlab which takes an input vector u and a reference signal d, both of length N, and calculates the error e for all time instants.

    function [e,w]=lms(mu,M,u,d)
    % Call:            [e,w]=lms(mu,M,u,d);
    % Input arguments:
    %   mu = step size, dim 1x1
    %   M  = filter length, dim 1x1
    %   u  = input signal, dim Nx1
    %   d  = desired signal, dim Nx1
    % Output arguments:
    %   e  = estimation error, dim Nx1
    %   w  = final filter coefficients, dim Mx1

Let the initial values of the filter coefficients be ŵ(0) = [0, 0, ..., 0]^T. Mind the order of the elements in u(n) = [u(n), u(n−1), ..., u(n−M+1)]^T! In Matlab you have to use uvec=u(n:-1:n-M+1), so that uvec corresponds to u(n). Furthermore, the input signal vector u is required to be a column vector.

Computer exercise 2.2

Now you shall verify that your LMS algorithm works properly. As a simple test, the adaptive filter should identify a short FIR filter, shown in the figure below.

[Figure: system identification setup. The input u(n) drives the unknown filter h, producing d(n). The adaptive filter ŵ (M = 5) produces the estimate d̂(n); the error e(n) = d(n) − d̂(n) is fed back to the LMS algorithm.]

The filter that should be identified is h(n) = {1, 1/2, 1/4, 1/8, 1/16}. Use white Gaussian noise as input signal, and filter it with h(n) in order to obtain the desired signal d(n). Write in Matlab

    % filter coefficients
    h=0.5.^[0:4];
    % input signal
    u=randn(1000,1);
    % filtered input signal == desired signal
    d=conv(h,u);
    % LMS
    [e,w]=lms(0.1,5,u,d);

Compare the final filter coefficients (w) obtained by the LMS algorithm with the filter that it should identify (h). If the coefficients are equal, your LMS algorithm is correct. Note that in the current example there is no noise source influencing the driving noise u(n). Furthermore, the length of the adaptive filter M corresponds to the length of the FIR filter to be identified. Therefore the error e(n) tends towards zero.

Computer exercise 2.3

Now you shall follow the example in Haykin, edition 4, chapter 5.7, pp. 285-291 (edition 3: chapter 9.7, pp. 412-421), "Computer Experiment on Adaptive Equalization", and reproduce the result. Below follow some hints that will simplify the implementation.

A Bernoulli sequence is a random sequence of +1 and −1, where both values occur with probability 1/2. In Matlab, such a sequence of length N is generated by

    % Bernoulli sequence of length N
    x=2*round(rand(N,1))-1;

A learning curve is generated by taking the mean of the squared error e^2(n) over several realizations of an ensemble, i.e.,

    J(n) = (1/K) Σ_{k=1}^{K} e_k^2(n),   n = 0, ..., N−1,

where e_k(n) is the estimation error at time instant n for the k-th realization, and K is the number of realizations to be considered. In order to plot J(n) with a logarithmic scale on the vertical axis, use the command semilogy.

Computer exercise 2.4

Calculation of the autocorrelation matrix R = E{u(n)u^H(n)} and the cross-correlation vector p = E{u(n)d*(n)} for the system in Haykin yields

        | r(0)   r(1)   r(2)  ...  r(10) |
        | r(1)   r(0)   r(1)  ...  r(9)  |
    R = |  :      :      :     .     :   |
        | r(10)  r(9)   r(8)  ...  r(0)  |

with

    r(0) = (h1^2 + h2^2 + h3^2) σx^2 + σv^2
    r(1) = (h1·h2 + h2·h3) σx^2
    r(2) = h1·h3 σx^2
    r(k) = 0,  k > 2,

and

    p = σx^2 [0, 0, 0, 0, h3, h2, h1, 0, 0, 0, 0]^T,

respectively. The autocorrelation matrix can be generated in Matlab by

    R=sigmax2*toeplitz([h1^2+h2^2+h3^2,h1*h2+h2*h3,h1*h3,zeros(1,8)])...
        +sigmav2*eye(11);

Calculate the Wiener filter for W = 3.1, and determine J_min. Give an estimate for J_ex(∞) in Haykin, edition 4, figure 5.23 (edition 3: figure 9.23), for µ = 0.075 and µ = 0.025.

Which value of µ results in quicker convergence? Which value of µ results in a smaller value for J_ex(∞)? If you have time, simulate the case µ = 0.0075 for N = 2500 samples, and give an estimate for J_ex(∞)! Compare your results with the results obtained by applying the rules of thumb for the misadjustment M.
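For readers without Matlab at hand, the LMS recursion and the identification test of computer exercise 2.2 can be sketched in Python/NumPy. This is a translation of the Matlab routine, not part of the original exercise; the function and variable names are our own, and the signals are assumed real-valued (so e*(n) = e(n)).

```python
import numpy as np

def lms(mu, M, u, d):
    """LMS adaptive filter: returns the error signal e and the
    final filter coefficients w (Python sketch of the Matlab lms)."""
    u = np.asarray(u, dtype=float).ravel()
    d = np.asarray(d, dtype=float).ravel()
    N = len(u)
    w = np.zeros(M)              # w_hat(0) = [0, ..., 0]^T
    e = np.zeros(N)
    for n in range(M - 1, N):
        uvec = u[n::-1][:M]      # [u(n), u(n-1), ..., u(n-M+1)]
        e[n] = d[n] - w @ uvec   # e(n) = d(n) - w^T u(n)
        w = w + mu * uvec * e[n] # real signals: conj(e(n)) = e(n)
    return e, w

# identification test from computer exercise 2.2
rng = np.random.default_rng(0)
h = 0.5 ** np.arange(5)          # {1, 1/2, 1/4, 1/8, 1/16}
u = rng.standard_normal(1000)    # white Gaussian input
d = np.convolve(h, u)            # noise-free desired signal
e, w = lms(0.1, 5, u, d)
print(np.round(w, 3))            # should be close to h
```

Since there is no measurement noise and M matches the length of h, the coefficients converge to h and the error decays towards zero, as the exercise text predicts.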

Program code

LMS

    function [e,w]=lms(mu,M,u,d)
    % Call:            [e,w]=lms(mu,M,u,d);
    % Input arguments:
    %   mu = step size, dim 1x1
    %   M  = filter length, dim 1x1
    %   u  = input signal, dim Nx1
    %   d  = desired signal, dim Nx1
    % Output arguments:
    %   e  = estimation error, dim Nx1
    %   w  = final filter coefficients, dim Mx1

    % initial values: 0
    w=zeros(M,1);
    % number of samples of the input signal
    N=length(u);
    % make sure that u and d are column vectors
    u=u(:);
    d=d(:);
    % LMS
    for n=M:N
        uvec=u(n:-1:n-M+1);
        e(n)=d(n)-w'*uvec;
        w=w+mu*uvec*conj(e(n));
    end
    e=e(:);

Optimal filter

    function [Jmin,R,p,wo]=wiener(W,sigmav2)
    % WIENER   Returns R and p, together with the Wiener filter
    %          solution and Jmin for computer exercise 2.4.
    % Call:            [Jmin,R,p,wo]=wiener(W,sigmav2);
    % Input arguments:
    %   W       = eigenvalue spread, dim 1x1
    %   sigmav2 = variance of the additive noise source, dim 1x1
    % Output arguments:
    %   Jmin = minimum MSE obtained by the Wiener filter, dim 1x1
    %   R    = autocorrelation matrix, dim 11x11
    %   p    = cross-correlation vector, dim 11x1
    %   wo   = optimal filter, dim 11x1
    % The remaining parameters are chosen according to Haykin chapter 9.7.

    % filter coefficients h1, h2, h3
    h1=1/2*(1+cos(2*pi/W*(1-2)));
    h2=1/2*(1+cos(2*pi/W*(2-2)));
    h3=1/2*(1+cos(2*pi/W*(3-2)));
    % variance of the driving noise
    sigmax2=1;
    % theoretical autocorrelation matrix R, 11x11
    R=sigmax2*toeplitz([h1^2+h2^2+h3^2,...
        h1*h2+h2*h3,h1*h3,zeros(1,8)])+sigmav2*eye(11);
    % theoretical cross-correlation vector p, 11x1
    p=sigmax2*[zeros(4,1);h3;h2;h1;zeros(4,1)];
    % Wiener filter
    wo=R\p;
    % Jmin
    Jmin=sigmax2-p'*wo;
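As a cross-check of the Wiener solution for computer exercise 2.4, the same computation can be sketched in Python/NumPy. This mirrors the Matlab wiener function above; the variable names are our own, and the call at the bottom assumes W = 3.1 with noise variance σv^2 = 0.001 as in Haykin's experiment.

```python
import numpy as np

def wiener(W, sigmav2):
    """Wiener solution for computer exercise 2.4:
    returns (Jmin, R, p, wo)."""
    # raised-cosine channel coefficients (Haykin, ch. 9.7)
    h1 = 0.5 * (1 + np.cos(2 * np.pi / W * (1 - 2)))
    h2 = 0.5 * (1 + np.cos(2 * np.pi / W * (2 - 2)))
    h3 = 0.5 * (1 + np.cos(2 * np.pi / W * (3 - 2)))
    sigmax2 = 1.0                       # variance of the driving noise
    # first column of the symmetric Toeplitz matrix: r(0), r(1), r(2), 0, ...
    col = np.zeros(11)
    col[0] = h1**2 + h2**2 + h3**2
    col[1] = h1 * h2 + h2 * h3
    col[2] = h1 * h3
    # build the 11x11 Toeplitz matrix from index differences
    idx = np.arange(11)
    R = sigmax2 * col[np.abs(idx[:, None] - idx[None, :])] \
        + sigmav2 * np.eye(11)
    # cross-correlation vector p, 11x1
    p = sigmax2 * np.concatenate([np.zeros(4), [h3, h2, h1], np.zeros(4)])
    wo = np.linalg.solve(R, p)          # Wiener filter: R wo = p
    Jmin = sigmax2 - p @ wo             # minimum MSE
    return Jmin, R, p, wo

Jmin, R, p, wo = wiener(3.1, 0.001)
print(Jmin)
```

Jmin should come out small and positive, since the channel for W = 3.1 is well invertible with an 11-tap equalizer and the additive noise is weak.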