LEAST-MEAN-SQUARE ADAPTIVE FILTERS




LEAST-MEAN-SQUARE ADAPTIVE FILTERS
Edited by S. Haykin and B. Widrow
A JOHN WILEY & SONS, INC., PUBLICATION

This book is printed on acid-free paper.

Copyright © 2003 by John Wiley & Sons, Inc. All rights reserved. Published simultaneously in Canada. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4744. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, New Jersey 07030, (201) 748-6011, fax (201) 748-6008, E-Mail: PERMREQ@WILEY.COM.

For ordering and customer service, call 1-800-CALL-WILEY.

Library of Congress Cataloging-in-Publication Data:

Least-mean-square adaptive filters / edited by S. Haykin and B. Widrow
p. cm.
Includes bibliographical references and index.
ISBN 0-471-21570-8 (cloth)
1. Adaptive filters--Design and construction--Mathematics. 2. Least squares. I. Widrow, Bernard, 1929- II. Haykin, Simon, 1931-
TK7872.F5L43 2003
621.3815'0324--dc21
2003041161

Printed in the United States of America.

10 9 8 7 6 5 4 3 2 1

This book is dedicated to Bernard Widrow, for inventing the LMS filter and investigating its theory and applications.
Simon Haykin

CONTENTS

Contributors ix
Introduction: The LMS Filter (Algorithm), Simon Haykin xi
1. On the Efficiency of Adaptive Algorithms, Bernard Widrow and Max Kamenetsky 1
2. Traveling-Wave Model of Long LMS Filters, Hans J. Butterweck 35
3. Energy Conservation and the Learning Ability of LMS Adaptive Filters, Ali H. Sayed and V. H. Nascimento 79
4. On the Robustness of LMS Filters, Babak Hassibi 105
5. Dimension Analysis for Least-Mean-Square Algorithms, Iven M. Y. Mareels, John Homer, and Robert R. Bitmead 145
6. Control of LMS-Type Adaptive Filters, Eberhard Hänsler and Gerhard Uwe Schmidt 175
7. Affine Projection Algorithms, Steven L. Gay 241
8. Proportionate Adaptation: New Paradigms in Adaptive Filters, Zhe Chen, Simon Haykin, and Steven L. Gay 293
9. Steady-State Dynamic Weight Behavior in (N)LMS Adaptive Filters, A. A. (Louis) Beex and James R. Zeidler 335
10. Error Whitening Wiener Filters: Theory and Algorithms, Jose C. Principe, Yadunandana N. Rao, and Deniz Erdogmus 445
Index 491

CONTRIBUTORS

A. A. (LOUIS) BEEX, Systems Group DSP Research Laboratory, The Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, VA 24061-0111
ROBERT R. BITMEAD, Department of Mechanical and Aerospace Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0411
HANS BUTTERWECK, Technische Universiteit Eindhoven, Faculteit Elektrotechniek, EH 5.29, Postbus 513, 5600 MB Eindhoven, Netherlands
ZHE CHEN, Department of Electrical and Computer Engineering, CRL 102, McMaster University, 1280 Main Street West, Hamilton, Ontario, Canada L8S 4K1
DENIZ ERDOGMUS, Computational NeuroEngineering Laboratory, EB 451, Building 33, University of Florida, Gainesville, FL 32611
STEVEN L. GAY, Acoustics and Speech Research Department, Bell Labs, Room 2D-531, 600 Mountain Ave., Murray Hill, NJ 07974
PROF. DR.-ING. EBERHARD HÄNSLER, Institute of Communication Technology, Darmstadt University of Technology, Merckstrasse 25, D-64283 Darmstadt, Germany
BABAK HASSIBI, Department of Electrical Engineering, 1200 East California Blvd., M/C 136-93, California Institute of Technology, Pasadena, CA 91101
SIMON HAYKIN, Department of Electrical and Computer Engineering, McMaster University, 1280 Main Street West, Hamilton, Ontario, Canada L8S 4K1
JOHN HOMER, School of Computer Science and Electrical Engineering, The University of Queensland, Brisbane 4072
MAX KAMENETSKY, Stanford University, David Packard Electrical Engineering, 350 Serra Mall, Room 263, Stanford, CA 94305-9510
IVEN M. Y. MAREELS, Department of Electrical and Electronic Engineering, The University of Melbourne, Melbourne Vic 3010
V. H. NASCIMENTO, Department of Electronic Systems Engineering, University of São Paulo, Brazil
JOSE C. PRINCIPE, Computational NeuroEngineering Laboratory, EB 451, Building 33, University of Florida, Gainesville, FL 32611
YADUNANDANA N. RAO, Computational NeuroEngineering Laboratory, EB 451, Building 33, University of Florida, Gainesville, FL 32611
ALI H. SAYED, Department of Electrical Engineering, Room 44-123A Engineering IV Bldg, University of California, Los Angeles, CA 90095-1594
GERHARD UWE SCHMIDT, Institute of Communication Technology, Darmstadt University of Technology, Merckstrasse 25, D-64283 Darmstadt, Germany
BERNARD WIDROW, Stanford University, David Packard Electrical Engineering, 350 Serra Mall, Room 273, Stanford, CA 94305-9510
JAMES R. ZEIDLER, Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA 92092

INTRODUCTION: THE LMS FILTER (ALGORITHM)

SIMON HAYKIN

The earliest work on adaptive filters may be traced back to the late 1950s, during which time a number of researchers were working independently on theories and applications of such filters. From this early work, the least-mean-square (LMS) algorithm emerged as a simple, yet effective, algorithm for the design of adaptive transversal (tapped-delay-line) filters. The LMS algorithm was devised by Widrow and Hoff in 1959 in their study of a pattern-recognition machine known as the adaptive linear element, commonly referred to as the Adaline [1, 2]. The LMS algorithm is a stochastic gradient algorithm in that it iterates each tap weight of the transversal filter in the direction of the instantaneous gradient of the squared error signal with respect to the tap weight in question.

Let ŵ(n) denote the tap-weight vector of the LMS filter, computed at iteration (time step) n. The adaptive operation of the filter is completely described by the recursive equation (assuming complex data)

    ŵ(n + 1) = ŵ(n) + μ u(n)[d(n) − ŵ^H(n) u(n)]*,   (1)

where u(n) is the tap-input vector, d(n) is the desired response, and μ is the step-size parameter. The quantity enclosed in square brackets is the error signal. The asterisk denotes complex conjugation, and the superscript H denotes Hermitian transposition (i.e., ordinary transposition combined with complex conjugation).

Equation (1) is testimony to the simplicity of the LMS filter. This simplicity, coupled with desirable properties of the LMS filter (discussed in the chapters of this book) and practical applications [3, 4], has made the LMS filter and its variants an important part of the adaptive signal processing tool kit, not just for the past 40 years but for many years to come. Simply put, the LMS filter has withstood the test of time.
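As a concrete illustration, the recursion of Eq. (1) can be sketched in a few lines of NumPy. This is only a sketch: the filter length, step size, noise level, and the system-identification setup used to exercise the update are illustrative choices, not taken from the text.

```python
import numpy as np

def lms_update(w, u, d, mu):
    """One LMS iteration per Eq. (1): w(n+1) = w(n) + mu * u(n) * conj(e(n)),
    where e(n) = d(n) - w^H(n) u(n). Valid for complex or real data."""
    e = d - np.vdot(w, u)            # np.vdot conjugates w, giving w^H u
    w_next = w + mu * u * np.conj(e)
    return w_next, e

# Illustrative use: identify a short FIR system from noisy observations.
rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.1])  # unknown system (assumed for the demo)
w = np.zeros(3)
mu = 0.05                            # step-size parameter
for n in range(2000):
    u = rng.standard_normal(3)                       # tap-input vector u(n)
    d = w_true @ u + 1e-3 * rng.standard_normal()    # desired response d(n)
    w, e = lms_update(w, u, d, mu)
```

After enough iterations the weight vector hovers around the true system response, with a residual jitter set by the step size, in keeping with the Brownian-motion picture developed below.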
Although the LMS filter is very simple in computational terms, its mathematical analysis is profoundly complicated because of its stochastic and nonlinear nature. Indeed, despite the extensive effort that has been expended in the literature to

analyze the LMS filter, we still do not have a direct mathematical theory for its stability and steady-state performance, and probably we never will. Nevertheless, we do have a good understanding of its behavior in a stationary as well as a nonstationary environment, as demonstrated in the chapters of this book.

The stochastic nature of the LMS filter manifests itself in the fact that in a stationary environment, and under the assumption of a small step-size parameter, the filter executes a form of Brownian motion. Specifically, the small step-size theory of the LMS filter is almost exactly described by the discrete-time version of the Langevin equation¹ [3]:

    Δν_k(n) = ν_k(n + 1) − ν_k(n) = −μ λ_k ν_k(n) + φ_k(n),   k = 1, 2, ..., M,   (2)

which is naturally split into two parts: a damping force −μ λ_k ν_k(n) and a stochastic force φ_k(n). The terms used herein are defined as follows:

M = order (i.e., number of taps) of the transversal filter around which the LMS filter is built
λ_k = kth eigenvalue of the correlation matrix of the input vector u(n), which is denoted by R
φ_k(n) = kth component of the vector μ Q^H u(n) e_o*(n)
Q = unitary matrix whose M columns constitute an orthogonal set of eigenvectors associated with the eigenvalues of the correlation matrix R
e_o(n) = optimum error signal produced by the corresponding Wiener filter driven by the input vector u(n) and the desired response d(n)

To illustrate the validity of Eq. (2) as the description of small step-size theory of the LMS filter, we present the results of a computer experiment on a classic example of adaptive equalization. The example involves an unknown linear channel whose impulse response is described by the raised cosine [3]

    h_n = (1/2)[1 + cos(2π(n − 2)/W)],   n = 1, 2, 3,
    h_n = 0,                             otherwise,   (3)

where the parameter W controls the amount of amplitude distortion produced by the channel, with the distortion increasing with W.
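The damping/stochastic split in Eq. (2) can be seen numerically by iterating one natural mode of the recursion directly. The eigenvalue, step size, initial deviation, and forcing variance below are illustrative choices to complete the sketch, not values from the text.

```python
import numpy as np

# Iterate one mode of Eq. (2): v_k(n+1) = (1 - mu*lam_k) v_k(n) + phi_k(n).
# mu, lam_k, the initial condition, and the forcing scale are assumptions.
rng = np.random.default_rng(1)
mu, lam_k = 0.01, 2.0
v = 1.0                      # initial weight deviation along the k-th mode
path = []
for n in range(1000):
    phi = 1e-3 * rng.standard_normal()   # small stochastic force phi_k(n)
    v = (1.0 - mu * lam_k) * v + phi     # damping force: -mu*lam_k*v_k(n)
    path.append(v)

# The damping force shrinks v geometrically at rate (1 - mu*lam_k); what
# remains in steady state is the small Brownian-like fluctuation driven by phi.
```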
Equivalently, the parameter W controls the eigenvalue spread (i.e., the ratio of the largest eigenvalue to the smallest eigenvalue) of the correlation matrix of the tap inputs of the equalizer, with the eigenvalue spread increasing with W. The equalizer has M = 11 taps. Figure 1 presents the learning curves of the equalizer trained using the LMS algorithm with the step-size parameter μ = 0.0075 and varying W. Each learning curve was obtained by averaging the squared value of the error signal e(n) versus the number of iterations n over an ensemble of 100 independent trials of the experiment.

¹ The Langevin equation is the engineer's version of stochastic differential (difference) equations.
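The experiment just described can be reproduced in outline as follows. The training-symbol model (random ±1 data), the channel-noise level, the decision delay, and the reduced ensemble size are assumptions made to complete the sketch; the text specifies only the channel of Eq. (3), M = 11 taps, μ = 0.0075, and averaging over independent trials.

```python
import numpy as np

def raised_cosine_channel(W):
    """Channel impulse response of Eq. (3): nonzero only for n = 1, 2, 3."""
    n = np.arange(1, 4)
    return 0.5 * (1.0 + np.cos(2.0 * np.pi * (n - 2) / W))

def lms_equalizer_learning_curve(W, M=11, mu=0.0075, n_iter=2000, n_trials=20,
                                 noise_std=1e-3, seed=42):
    """Ensemble-averaged squared error e^2(n), as plotted in Figure 1.
    The +/-1 training symbols, noise level, and decision delay are assumed;
    the text averages over 100 trials, fewer are used here for speed."""
    h = raised_cosine_channel(W)
    delay = (M + len(h)) // 2                 # assumed decision delay
    rng = np.random.default_rng(seed)
    mse = np.zeros(n_iter)
    for _ in range(n_trials):
        a = rng.choice([-1.0, 1.0], size=n_iter + M + len(h))
        x = np.convolve(a, h)[: len(a)] + noise_std * rng.standard_normal(len(a))
        w = np.zeros(M)
        for n in range(n_iter):
            u = x[n : n + M][::-1]            # tap-input vector u(n)
            d = a[n + M - 1 - delay]          # desired (delayed) symbol d(n)
            e = d - w @ u                     # error signal e(n)
            w = w + mu * e * u                # LMS update, Eq. (1), real data
            mse[n] += e * e
    return mse / n_trials

# Larger W -> larger eigenvalue spread -> slower decay of the learning curve.
curve = lms_equalizer_learning_curve(W=2.9)
```

Plotting `curve` for several values of W reproduces the qualitative behavior of Figure 1: all curves start near unit squared error and decay to a small steady-state value, more slowly as the eigenvalue spread grows.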

Figure 1 Learning curves of the LMS algorithm applied to the adaptive equalization of a communication channel whose impulse response is described by Eq. (3), for varying eigenvalue spreads. Theory is represented by continuous, well-defined curves; experimental results are represented by fluctuating curves.

The continuous curves shown in Figure 1 are theoretical, obtained by applying Eq. (2). The fluctuating curves are the results of experimental work. Figure 1 demonstrates close agreement between theory and experiment. It should, however, be reemphasized that application of Eq. (2) is limited to small values of the step-size parameter μ. Chapters in this book deal with cases in which μ is large.

REFERENCES

1. B. Widrow and M. E. Hoff, Jr. (1960). "Adaptive Switching Circuits," IRE WESCON Conv. Rec., Part 4, pp. 96-104.
2. B. Widrow (1966). "Adaptive Filters I: Fundamentals," Rep. SEL-66-126 (TR-6764-6), Stanford Electronic Laboratories, Stanford, CA.
3. S. Haykin (2002). Adaptive Filter Theory, 4th Edition, Prentice-Hall.
4. B. Widrow and S. D. Stearns (1985). Adaptive Signal Processing, Prentice-Hall.