This book is printed on acid-free paper.

Copyright © 2003 by John Wiley & Sons, Inc. All rights reserved. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, New Jersey 07030. For ordering and customer service, call CALL-WILEY.

Library of Congress Cataloging-in-Publication Data:

Least-mean-square adaptive filters / edited by S. Haykin and B. Widrow
p. cm.
Includes bibliographical references and index.
ISBN (cloth)
1. Adaptive filters -- Design and construction -- Mathematics. 2. Least squares.
I. Widrow, Bernard. II. Haykin, Simon.
TK7872.F5L dc21

Printed in the United States of America

This book is dedicated to Bernard Widrow, for inventing the LMS filter and investigating its theory and applications.

Simon Haykin

CONTENTS

Contributors ix
Introduction: The LMS Filter (Algorithm), Simon Haykin xi
1. On the Efficiency of Adaptive Algorithms, Bernard Widrow and Max Kamenetsky 1
2. Traveling-Wave Model of Long LMS Filters, Hans J. Butterweck 35
3. Energy Conservation and the Learning Ability of LMS Adaptive Filters, Ali H. Sayed and V. H. Nascimento 79
4. On the Robustness of LMS Filters, Babak Hassibi 105
5. Dimension Analysis for Least-Mean-Square Algorithms, Iven M. Y. Mareels, John Homer, and Robert R. Bitmead 145
6. Control of LMS-Type Adaptive Filters, Eberhard Hänsler and Gerhard Uwe Schmidt 175
7. Affine Projection Algorithms, Steven L. Gay 241
8. Proportionate Adaptation: New Paradigms in Adaptive Filters, Zhe Chen, Simon Haykin, and Steven L. Gay 293
9. Steady-State Dynamic Weight Behavior in (N)LMS Adaptive Filters, A. A. (Louis) Beex and James R. Zeidler 335

10. Error Whitening Wiener Filters: Theory and Algorithms, Jose C. Principe, Yadunandana N. Rao, and Deniz Erdogmus 445
Index 491

CONTRIBUTORS

A. A. (LOUIS) BEEX, Systems Group, DSP Research Laboratory, The Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, VA

ROBERT R. BITMEAD, Department of Mechanical and Aerospace Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA

HANS BUTTERWECK, Technische Universiteit Eindhoven, Faculteit Elektrotechniek, EH 5.29, Postbus 513, 5600 MB Eindhoven, Netherlands

ZHE CHEN, Department of Electrical and Computer Engineering, CRL 102, McMaster University, 1280 Main Street West, Hamilton, Ontario, Canada L8S 4K1

DENIZ ERDOGMUS, Computational NeuroEngineering Laboratory, EB 451, Building 33, University of Florida, Gainesville, FL

STEVEN L. GAY, Acoustics and Speech Research Department, Bell Labs, Room 2D-531, 600 Mountain Ave., Murray Hill, NJ

PROF. DR.-ING. EBERHARD HÄNSLER, Institute of Communication Technology, Darmstadt University of Technology, Merckstrasse 25, D Darmstadt, Germany

BABAK HASSIBI, Department of Electrical Engineering, 1200 East California Blvd., M/C , California Institute of Technology, Pasadena, CA

SIMON HAYKIN, Department of Electrical and Computer Engineering, McMaster University, 1280 Main Street West, Hamilton, Ontario, Canada L8S 4K1

JOHN HOMER, School of Computer Science and Electrical Engineering, The University of Queensland, Brisbane 4072

MAX KAMENETSKY, Stanford University, David Packard Electrical Engineering, 350 Serra Mall, Room 263, Stanford, CA

IVEN M. Y. MAREELS, Department of Electrical and Electronic Engineering, The University of Melbourne, Melbourne Vic 3010

V. H. NASCIMENTO, Department of Electronic Systems Engineering, University of São Paulo, Brazil

JOSE C. PRINCIPE, Computational NeuroEngineering Laboratory, EB 451, Building 33, University of Florida, Gainesville, FL

YADUNANDANA N. RAO, Computational NeuroEngineering Laboratory, EB 451, Building 33, University of Florida, Gainesville, FL

ALI H. SAYED, Department of Electrical Engineering, Room A Engineering IV Bldg, University of California, Los Angeles, CA

GERHARD UWE SCHMIDT, Institute of Communication Technology, Darmstadt University of Technology, Merckstrasse 25, D Darmstadt, Germany

BERNARD WIDROW, Stanford University, David Packard Electrical Engineering, 350 Serra Mall, Room 273, Stanford, CA

JAMES R. ZEIDLER, Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA 92092

INTRODUCTION: THE LMS FILTER (ALGORITHM)

SIMON HAYKIN

The earliest work on adaptive filters may be traced back to the late 1950s, during which time a number of researchers were working independently on theories and applications of such filters. From this early work, the least-mean-square (LMS) algorithm emerged as a simple, yet effective, algorithm for the design of adaptive transversal (tapped-delay-line) filters. The LMS algorithm was devised by Widrow and Hoff in 1959 in their study of a pattern-recognition machine known as the adaptive linear element, commonly referred to as the Adaline [1, 2].

The LMS algorithm is a stochastic gradient algorithm in that it iterates each tap weight of the transversal filter in the direction of the instantaneous gradient of the squared error signal with respect to the tap weight in question. Let $\hat{\mathbf{w}}(n)$ denote the tap-weight vector of the LMS filter, computed at iteration (time step) $n$. The adaptive operation of the filter is completely described by the recursive equation (assuming complex data)

$$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \mu \mathbf{u}(n)\left[d(n) - \hat{\mathbf{w}}^{H}(n)\mathbf{u}(n)\right]^{*}, \qquad (1)$$

where $\mathbf{u}(n)$ is the tap-input vector, $d(n)$ is the desired response, and $\mu$ is the step-size parameter. The quantity enclosed in square brackets is the error signal. The asterisk denotes complex conjugation, and the superscript $H$ denotes Hermitian transposition (i.e., ordinary transposition combined with complex conjugation).

Equation (1) is testimony to the simplicity of the LMS filter. This simplicity, coupled with the desirable properties of the LMS filter (discussed in the chapters of this book) and its practical applications [3, 4], has made the LMS filter and its variants an important part of the adaptive signal processing toolkit, not just for the past 40 years but for many years to come. Simply put, the LMS filter has withstood the test of time.
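For real-valued data, the conjugate and Hermitian transpose in Eq. (1) reduce to an ordinary transpose, and one iteration of the recursion can be sketched in a few lines of NumPy. The function name and calling convention below are illustrative, not from the book:

```python
import numpy as np

def lms_step(w, u, d, mu):
    """One LMS iteration for real-valued data (a sketch of Eq. (1)).

    w  : current tap-weight vector w(n), shape (M,)
    u  : tap-input vector u(n), shape (M,)
    d  : desired response d(n), a scalar
    mu : step-size parameter
    Returns the updated weight vector w(n+1) and the error signal e(n).
    """
    e = d - np.dot(w, u)       # error signal: d(n) - w^T(n) u(n)
    w_next = w + mu * u * e    # stochastic-gradient update
    return w_next, e
```

Note that the gradient step uses the instantaneous error, not an ensemble average, which is precisely what makes the algorithm "stochastic gradient" in character.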
Although the LMS filter is very simple in computational terms, its mathematical analysis is profoundly complicated because of its stochastic and nonlinear nature. Indeed, despite the extensive effort that has been expended in the literature to analyze the LMS filter, we still do not have a direct mathematical theory for its stability and steady-state performance, and probably we never will. Nevertheless, we do have a good understanding of its behavior in a stationary as well as a nonstationary environment, as demonstrated in the chapters of this book.

The stochastic nature of the LMS filter manifests itself in the fact that, in a stationary environment and under the assumption of a small step-size parameter, the filter executes a form of Brownian motion. Specifically, the small step-size theory of the LMS filter is almost exactly described by the discrete-time version of the Langevin equation¹ [3]:

$$\Delta\nu_k(n) = \nu_k(n+1) - \nu_k(n) = -\mu\lambda_k \nu_k(n) + \phi_k(n), \qquad k = 1, 2, \ldots, M, \qquad (2)$$

which is naturally split into two parts: a damping force $-\mu\lambda_k \nu_k(n)$ and a stochastic force $\phi_k(n)$. The terms used herein are defined as follows:

$M$ = order (i.e., number of taps) of the transversal filter around which the LMS filter is built
$\lambda_k$ = $k$th eigenvalue of the correlation matrix of the input vector $\mathbf{u}(n)$, which is denoted by $\mathbf{R}$
$\phi_k(n)$ = $k$th component of the vector $\mu \mathbf{Q}^{H} \mathbf{u}(n) e_o^{*}(n)$
$\mathbf{Q}$ = unitary matrix whose $M$ columns constitute an orthogonal set of eigenvectors associated with the eigenvalues of the correlation matrix $\mathbf{R}$
$e_o(n)$ = optimum error signal produced by the corresponding Wiener filter driven by the input vector $\mathbf{u}(n)$ and the desired response $d(n)$

To illustrate the validity of Eq. (2) as the description of the small step-size theory of the LMS filter, we present the results of a computer experiment on a classic example of adaptive equalization. The example involves an unknown linear channel whose impulse response is described by the raised cosine [3]

$$h_n = \begin{cases} \dfrac{1}{2}\left[1 + \cos\left(\dfrac{2\pi}{W}(n-2)\right)\right], & n = 1, 2, 3, \\[4pt] 0, & \text{otherwise}, \end{cases} \qquad (3)$$

where the parameter $W$ controls the amount of amplitude distortion produced by the channel, with the distortion increasing with $W$.
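The link between $W$ and the eigenvalue spread can be checked numerically. The sketch below forms the correlation matrix $\mathbf{R}$ of the equalizer's tap inputs for the channel of Eq. (3), assuming i.i.d. $\pm 1$ data symbols and a small additive-white-noise variance; the function names and the noise variance are illustrative assumptions, not taken from the text:

```python
import numpy as np

def raised_cosine_channel(W):
    """Channel impulse response of Eq. (3): h_n for n = 1, 2, 3."""
    n = np.arange(1, 4)
    return 0.5 * (1.0 + np.cos(2.0 * np.pi * (n - 2) / W))

def eigenvalue_spread(W, M=11, sigma_nu=0.001):
    """Eigenvalue spread (largest/smallest eigenvalue) of the M-tap
    equalizer input correlation matrix R, assuming i.i.d. +/-1 symbols
    and additive white noise of variance sigma_nu (an assumed value)."""
    h = raised_cosine_channel(W)
    # Autocorrelation of the received signal: r(k) = sum_i h_i h_{i+k},
    # with the noise contributing only at lag 0.
    r = np.correlate(h, h, mode="full")[len(h) - 1:]   # lags 0, 1, 2
    r = np.concatenate([r, np.zeros(M - len(r))])
    r[0] += sigma_nu
    # R is symmetric Toeplitz: R[i, j] = r(|i - j|).
    R = np.array([[r[abs(i - j)] for j in range(M)] for i in range(M)])
    lam = np.linalg.eigvalsh(R)
    return lam.max() / lam.min()
```

Evaluating `eigenvalue_spread` over a range of $W$ confirms the monotone growth of the spread with $W$ stated above.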
Equivalently, the parameter $W$ controls the eigenvalue spread (i.e., the ratio of the largest eigenvalue to the smallest eigenvalue) of the correlation matrix of the tap inputs of the equalizer, with the eigenvalue spread increasing with $W$. The equalizer has $M = 11$ taps. Figure 1 presents the learning curves of the equalizer trained using the LMS algorithm with the step-size parameter $\mu = 0.0075$ and varying $W$. Each learning curve was obtained by averaging the squared value of the error signal $e(n)$ versus the number of iterations $n$ over an ensemble of 100 independent trials of the experiment.

¹ The Langevin equation is the engineer's version of stochastic differential (difference) equations.
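An ensemble-averaged learning curve of the kind plotted in Figure 1 can be reproduced with a short simulation. The sketch below follows the experiment's stated parameters ($M = 11$, $\mu = 0.0075$, 100 trials); the noise variance, decision delay, and default $W$ are illustrative assumptions, not taken from the text:

```python
import numpy as np

def lms_learning_curve(W=2.9, M=11, mu=0.0075, n_iter=500, n_trials=100,
                       sigma_nu=0.001, delay=7, seed=0):
    """Ensemble-averaged squared error e^2(n) for the adaptive-equalization
    experiment. sigma_nu (channel noise variance), delay (decision delay),
    and the default W are assumed values."""
    rng = np.random.default_rng(seed)
    n = np.arange(1, 4)
    h = 0.5 * (1.0 + np.cos(2.0 * np.pi * (n - 2) / W))   # channel of Eq. (3)
    mse = np.zeros(n_iter)
    for _ in range(n_trials):
        a = rng.choice([-1.0, 1.0], size=n_iter + M)       # i.i.d. data symbols
        x = np.convolve(a, h)[: n_iter + M]                # channel output
        x += np.sqrt(sigma_nu) * rng.standard_normal(x.shape)
        w = np.zeros(M)                                    # equalizer taps
        for i in range(M - 1, n_iter + M - 1):
            u = x[i - M + 1 : i + 1][::-1]                 # tap-input vector u(n)
            e = a[i - delay] - w @ u                       # error signal e(n)
            w += mu * u * e                                # LMS update, Eq. (1)
            mse[i - M + 1] += e * e
    return mse / n_trials
```

Averaging over the ensemble smooths the single-trial fluctuations, which is why the experimental curves in Figure 1 track the theoretical Langevin-equation predictions so closely.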

[Figure 1. Learning curves of the LMS algorithm applied to the adaptive equalization of a communication channel whose impulse response is described by Eq. (3), for varying eigenvalue spreads. Theory is represented by continuous well-defined curves; experimental results are represented by fluctuating curves.]

The continuous curves shown in Figure 1 are theoretical, obtained by applying Eq. (2); the curves with relatively small fluctuations are the results of experimental work. Figure 1 demonstrates close agreement between theory and experiment. It should, however, be reemphasized that the application of Eq. (2) is limited to small values of the step-size parameter $\mu$. Chapters in this book deal with cases in which $\mu$ is large.

REFERENCES

1. B. Widrow and M. E. Hoff, Jr. (1960). Adaptive Switching Circuits, IRE WESCON Conv. Rec., Part 4.
2. B. Widrow (1966). Adaptive Filters I: Fundamentals, Rep. SEL (TR ), Stanford Electronic Laboratories, Stanford, CA.
3. S. Haykin (2002). Adaptive Filter Theory, 4th Edition, Prentice-Hall.
4. B. Widrow and S. D. Stearns (1985). Adaptive Signal Processing, Prentice-Hall.