Lecture 2: The Least Mean Square (LMS) algorithm
- Augustus Lamb
During this lecture you will learn about:
- The Least Mean Squares (LMS) algorithm
- Convergence analysis of the LMS
- Channel equalizers

Background

The method of steepest descent, studied in the last lecture, is a recursive algorithm for computing the Wiener filter when the statistics of the signals are known (i.e., when $R$ and $p$ are available). The problem is that this information is often unknown. The LMS is a method based on the same principles as the method of steepest descent, but where the statistics are estimated continuously. Since the statistics are estimated continuously, the LMS algorithm can adapt to changes in the signal statistics; the LMS algorithm is thus an adaptive filter. Because the statistics are estimated, the gradient becomes noisy. The LMS algorithm therefore belongs to the group of methods referred to as stochastic gradient methods, while the method of steepest descent belongs to the group of deterministic gradient methods.

The Least Mean Square (LMS) algorithm

We want to create an algorithm that minimizes $E\{|e(n)|^2\}$, just like the SD, but based on unknown statistics. A strategy that can then be used is to employ estimates of the autocorrelation matrix $R$ and the cross-correlation vector $p$. If the instantaneous estimates
$$\hat{R}(n) = u(n)u^H(n), \qquad \hat{p}(n) = u(n)d^*(n)$$
are chosen, the resulting method is the Least Mean Squares algorithm.

For the SD, the update of the filter weights is given by
$$w(n+1) = w(n) + \tfrac{1}{2}\mu[-\nabla J(n)]$$
where $\nabla J(n) = -2p + 2Rw(n)$. In the LMS we use the estimates $\hat{R}(n)$ and $\hat{p}(n)$ to calculate $\hat{\nabla}J(n)$. Thus, the updated filter vector also becomes an estimate, and is therefore denoted $\hat{w}(n)$:
$$\hat{\nabla}J(n) = -2\hat{p}(n) + 2\hat{R}(n)\hat{w}(n)$$
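As a concrete sketch of these instantaneous estimates (in Python/NumPy rather than the course's Matlab; the data are arbitrary random samples, not from the lecture), note that substituting $\hat{R}(n)$ and $\hat{p}(n)$ into the gradient makes it collapse to $-2u(n)e^*(n)$:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 3                                  # filter length (illustrative)
w = rng.standard_normal(M)             # current weight estimate w_hat(n)
u = rng.standard_normal(M)             # one input vector u(n)
d = rng.standard_normal()              # one desired sample d(n)

# instantaneous estimates of R and p
R_hat = np.outer(u, u.conj())          # R_hat(n) = u(n) u^H(n)
p_hat = u * np.conj(d)                 # p_hat(n) = u(n) d*(n)

# estimated gradient: grad_hat = -2 p_hat(n) + 2 R_hat(n) w_hat(n)
grad_hat = -2 * p_hat + 2 * R_hat @ w

# the same quantity written as -2 u(n) e*(n), with e(n) = d(n) - w^H u(n)
e = d - np.vdot(w, u)
assert np.allclose(grad_hat, -2 * u * np.conj(e))
```

This single-sample form of the gradient is what makes the weight update in the next slide so cheap: no matrix needs to be formed explicitly.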
The Least Mean Square (LMS) algorithm, cont.

For the LMS algorithm, the update equation of the filter weights becomes
$$\hat{w}(n+1) = \hat{w}(n) + \mu u(n)e^*(n)$$
where $e^*(n) = d^*(n) - u^H(n)\hat{w}(n)$.

Compare with the corresponding equation for the SD:
$$w(n+1) = w(n) + \mu E\{u(n)e^*(n)\}$$
where $e^*(n) = d^*(n) - u^H(n)w(n)$. The difference is thus that $E\{u(n)e^*(n)\}$ is estimated by $u(n)e^*(n)$. This leads to gradient noise.

[Figure: illustration of gradient noise; SD and LMS weight trajectories in the $(w_1, w_2)$-plane, with $R$ and $p$ from the example in Lecture 1 and $\mu = 0.1$.] Because of the noise in the gradient, the LMS never reaches $J_{\min}$, which the SD does.

Convergence analysis of the LMS, overview

Consider the update equation $\hat{w}(n+1) = \hat{w}(n) + \mu u(n)e^*(n)$. We want to know how fast, and towards what, the algorithm converges, in terms of $J(n)$ and $\hat{w}(n)$. Strategy:
1. Introduce the filter-weight error vector $\epsilon(n) = \hat{w}(n) - w_o$.
2. Express the update equation in $\epsilon(n)$.
3. Express $J(n)$ in $\epsilon(n)$. This leads to $K(n) = E\{\epsilon(n)\epsilon^H(n)\}$.
4. Derive the difference equation for $K(n)$, which governs the convergence.
5. Make a change of variables from $K(n)$ to $X(n)$, which converges equally fast.

The LMS contains feedback, and just as for the SD there is a risk that the algorithm diverges if the step size $\mu$ is not chosen appropriately.

1. The filter-weight error vector

In order to investigate the stability, the filter-weight error vector $\epsilon(n)$ is introduced:
$$\epsilon(n) = \hat{w}(n) - w_o \qquad (w_o = R^{-1}p, \text{ the Wiener-Hopf solution})$$
Note that $\epsilon(n)$ corresponds to $c(n) = w(n) - w_o$ for the steepest descent, but that $\epsilon(n)$ is stochastic because $\hat{w}(n)$ is.
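The claim that $u(n)e^*(n)$ is a noisy but (under the model) unbiased estimate of $E\{u(n)e^*(n)\}$ can be illustrated numerically. A sketch with a toy white-input model chosen purely for illustration (Python/NumPy rather than the course's Matlab):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 2, 200_000
w = np.array([0.3, -0.1])              # an arbitrary fixed weight vector
w_o = np.array([0.8, 0.2])             # toy "true" system (assumption)
U = rng.standard_normal((N, M))        # white unit-variance input vectors
d = U @ w_o + 0.1 * rng.standard_normal(N)

e = d - U @ w                          # error sequence for the fixed w
g_inst = U * e[:, None]                # instantaneous u(n)e(n): the noisy estimates
g_mean = g_inst.mean(axis=0)           # sample average approximates E{u(n)e(n)}

# for white unit-variance input, R = I and p = w_o, so E{u(n)e(n)} = p - R w = w_o - w
assert np.allclose(g_mean, w_o - w, atol=0.02)
# each single-sample estimate scatters widely around that mean: gradient noise
assert g_inst.std(axis=0).min() > 0.5
```

The large per-sample scatter is exactly why the LMS trajectory looks jagged compared with the smooth SD trajectory, and why it hovers above $J_{\min}$ in steady state.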
2. Update equation in $\epsilon(n)$

The update equation for $\epsilon(n)$ at time $n+1$ can be expressed recursively as
$$\epsilon(n+1) = [I - \mu\hat{R}(n)]\epsilon(n) + \mu[\hat{p}(n) - \hat{R}(n)w_o] = [I - \mu u(n)u^H(n)]\epsilon(n) + \mu u(n)e_o^*(n)$$
where $e_o(n) = d(n) - w_o^H u(n)$. Compare to that of the SD: $c(n+1) = (I - \mu R)c(n)$.

3. Express $J(n)$ in $\epsilon(n)$

The convergence analysis of the LMS is considerably more complicated than that of the SD. Therefore, two tools (approximations) are needed: independence theory and the direct-averaging method.

Independence theory:
1. The input vectors at different time instants, $u(1), u(2), \ldots, u(n)$, are independent of each other (and thereby uncorrelated).
2. The input signal vector at time $n$, $u(n)$, is independent of the desired signal at all earlier time instants, $d(1), d(2), \ldots, d(n-1)$.
3. The desired signal at time $n$, $d(n)$, is independent of all earlier desired signals, $d(1), d(2), \ldots, d(n-1)$, but dependent on the input signal vector at time $n$, $u(n)$.
4. The signals $d(n)$ and $u(n)$ at the same time instant $n$ are mutually normally distributed.

Direct averaging: in the update equation for $\epsilon$,
$$\epsilon(n+1) = [I - \mu u(n)u^H(n)]\epsilon(n) + \mu u(n)e_o^*(n),$$
the instantaneous estimate $\hat{R}(n) = u(n)u^H(n)$ is substituted with the expectation over the ensemble, $R = E\{u(n)u^H(n)\}$, such that
$$\epsilon(n+1) = (I - \mu R)\epsilon(n) + \mu u(n)e_o^*(n)$$
results.
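The equivalence between the weight update and the weight-error recursion can be checked numerically for a single step. A sketch with arbitrary random data (Python/NumPy; the vector playing the role of $w_o$ is just a stand-in for the Wiener solution):

```python
import numpy as np

rng = np.random.default_rng(2)
M, mu = 3, 0.05
w_o = rng.standard_normal(M)           # stand-in for the Wiener solution w_o
w = rng.standard_normal(M)             # current estimate w_hat(n)
u = rng.standard_normal(M)
d = np.vdot(w_o, u) + 0.1 * rng.standard_normal()

# LMS weight update: w_hat(n+1) = w_hat(n) + mu u(n) e*(n)
e = d - np.vdot(w, u)
w_next = w + mu * u * np.conj(e)

# equivalent recursion in the weight-error vector eps(n) = w_hat(n) - w_o
eps = w - w_o
e_o = d - np.vdot(w_o, u)              # error of the optimal filter, e_o(n)
eps_next = (np.eye(M) - mu * np.outer(u, u.conj())) @ eps + mu * u * np.conj(e_o)

assert np.allclose(w_next - w_o, eps_next)
```

Subtracting $w_o$ from both sides of the LMS update and splitting $e(n)$ into $e_o(n)$ minus the excess error is all the algebra behind the recursion, which the assertion confirms.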
3. Express $J(n)$ in $\epsilon(n)$, cont.

For the SD we could show that $J(n) = J_{\min} + [w(n) - w_o]^H R [w(n) - w_o]$, and that the eigenvalues of $R$ control the convergence speed. For the LMS, with $\epsilon(n) = \hat{w}(n) - w_o$,
$$J(n) = E\{|e(n)|^2\} = E\{|d(n) - \hat{w}^H(n)u(n)|^2\} = E\{|e_o(n) - \epsilon^H(n)u(n)|^2\} \stackrel{(i)}{=} \underbrace{E\{|e_o(n)|^2\}}_{J_{\min}} + E\{\epsilon^H(n)\underbrace{u(n)u^H(n)}_{\hat{R}(n)}\epsilon(n)\}$$
Here, $(i)$ indicates that the independence assumption is used. Since $\epsilon(n)$ and $\hat{R}(n)$ are estimates, and in addition not independent, the expectation operator cannot simply be moved inside onto $\hat{R}(n)$. The behavior of
$$J_{ex}(n) = J(n) - J_{\min} = E\{\epsilon^H(n)\hat{R}(n)\epsilon(n)\}$$
determines the convergence. Since the following holds for the trace of a matrix,
1. $\mathrm{tr}(\text{scalar}) = \text{scalar}$
2. $\mathrm{tr}(AB) = \mathrm{tr}(BA)$
3. $E\{\mathrm{tr}(A)\} = \mathrm{tr}(E\{A\})$

$J_{ex}(n)$ can be written as
$$J_{ex}(n) \stackrel{1,2}{=} E\{\mathrm{tr}[\hat{R}(n)\epsilon(n)\epsilon^H(n)]\} \stackrel{3}{=} \mathrm{tr}[E\{\hat{R}(n)\epsilon(n)\epsilon^H(n)\}] \stackrel{(i)}{=} \mathrm{tr}[E\{\hat{R}(n)\}E\{\epsilon(n)\epsilon^H(n)\}] = \mathrm{tr}[R K(n)]$$
The convergence thus depends on $K(n) = E\{\epsilon(n)\epsilon^H(n)\}$.

4. Difference equation for $K(n)$

The update equation for the filter-weight error is, as shown previously,
$$\epsilon(n+1) = [I - \mu u(n)u^H(n)]\epsilon(n) + \mu u(n)e_o^*(n)$$
For small $\mu$, the solution to this stochastic difference equation is close to the solution of
$$\epsilon(n+1) = (I - \mu R)\epsilon(n) + \mu u(n)e_o^*(n),$$
which results from direct averaging. This equation is still difficult to solve, but the behavior of $K(n)$ can now be described by the following difference equation:
$$K(n+1) = (I - \mu R)K(n)(I - \mu R) + \mu^2 J_{\min} R$$

5. Change of variables from $K(n)$ to $X(n)$

The change of variables
$$Q^H R Q = \Lambda, \qquad Q^H K(n) Q = X(n)$$
where $\Lambda$ is the diagonal eigenvalue matrix of $R$, and where $X(n)$ in general is not diagonal, yields
$$J_{ex}(n) = \mathrm{tr}(R K(n)) = \mathrm{tr}(\Lambda X(n))$$
The convergence can then be described by
$$X(n+1) = (I - \mu\Lambda)X(n)(I - \mu\Lambda) + \mu^2 J_{\min}\Lambda$$
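The change of variables and the identity $\mathrm{tr}(R K(n)) = \mathrm{tr}(\Lambda X(n))$ can be verified by iterating both difference equations in parallel. A sketch with an arbitrary positive-definite $R$ and illustrative values of $\mu$ and $J_{\min}$ (Python/NumPy):

```python
import numpy as np

rng = np.random.default_rng(3)
M, mu, J_min = 3, 0.05, 0.1            # illustrative values, not from the lecture

# an arbitrary symmetric positive-definite "correlation" matrix R
A = rng.standard_normal((M, M))
R = A @ A.T / M + np.eye(M)
lam, Q = np.linalg.eigh(R)             # R = Q diag(lam) Q^H
Lam = np.diag(lam)

K = np.eye(M)                          # some initial K(0)
X = Q.T @ K @ Q                        # change of variables X(n) = Q^H K(n) Q

I = np.eye(M)
for _ in range(50):
    K = (I - mu * R) @ K @ (I - mu * R) + mu**2 * J_min * R
    X = (I - mu * Lam) @ X @ (I - mu * Lam) + mu**2 * J_min * Lam

# both recursions track the same excess error: J_ex(n) = tr(R K(n)) = tr(Lam X(n))
assert np.isclose(np.trace(R @ K), np.trace(Lam @ X))
```

Working in the $X(n)$ coordinates pays off on the next slide: there the coupled matrix recursion decouples into $M$ independent scalar recursions along the diagonal.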
5. Change of variables from $K(n)$ to $X(n)$, cont.

$J_{ex}(n)$ depends only on the diagonal elements of $X(n)$ (because of the trace $\mathrm{tr}[\Lambda X(n)]$). We can therefore write
$$J_{ex}(n) = \sum_{i=1}^{M} \lambda_i x_i(n)$$
The update of the diagonal elements is controlled by
$$x_i(n+1) = (1 - \mu\lambda_i)^2 x_i(n) + \mu^2 J_{\min}\lambda_i$$
This is an inhomogeneous difference equation. This means that the solution contains a transient part, and a stationary part that depends on $J_{\min}$ and $\mu$. The cost function can therefore be written
$$J(n) = J_{\min} + J_{ex}(\infty) + J_{trans}(n)$$
where $J_{\min} + J_{ex}(\infty)$ and $J_{trans}(n)$ are the stationary and transient solutions, respectively. At best, the LMS can thus reach $J_{\min} + J_{ex}(\infty)$.

Properties of the LMS

Convergence for the same interval as the SD: $0 < \mu < 2/\lambda_{\max}$.
$$J_{ex}(\infty) = J_{\min} \sum_{i=1}^{M} \frac{\mu\lambda_i}{2 - \mu\lambda_i}$$
The misadjustment $\mathcal{M} = J_{ex}(\infty)/J_{\min}$ is a measure of how close to the optimal solution the LMS comes (in the mean-square sense).

Rules of thumb for the LMS

The eigenvalues of $R$ are rarely known. Therefore, a set of rules of thumb is commonly used, based on the tap-input power, $\sum_{k=0}^{M-1} E\{|u(n-k)|^2\} = M r(0) = \mathrm{tr}(R)$.

Step size: $0 < \mu < \dfrac{2}{\sum_{k=0}^{M-1} E\{|u(n-k)|^2\}}$

Misadjustment: $\mathcal{M} \approx \dfrac{\mu}{2} \sum_{k=0}^{M-1} E\{|u(n-k)|^2\}$

Average time constant: $\tau_{mse,av} \approx \dfrac{1}{2\mu\lambda_{av}}$, where $\lambda_{av} = \frac{1}{M}\sum_{i=1}^{M}\lambda_i$. Here, $\tau_{mse,av}$ denotes the time it takes for $J(n)$ to decay by a factor of $e^{-1}$. When the $\lambda_i$ are unknown, the following approximate relationship between $\mathcal{M}$ and $\tau_{mse,av}$ is useful:
$$\mathcal{M} \approx \frac{M}{4\tau_{mse,av}}$$

Summary of the LMS

Summary of the iterations:
1. Set start values for the filter coefficients, $\hat{w}(0)$.
2. Calculate the error $e(n) = d(n) - \hat{w}^H(n)u(n)$.
3. Update the filter coefficients $\hat{w}(n+1) = \hat{w}(n) + \mu u(n)[d^*(n) - u^H(n)\hat{w}(n)]$.
4. Repeat steps 2 and 3.
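The four summary steps map directly to code. A minimal sketch (Python/NumPy rather than the course's Matlab; the system-identification scenario and all parameter values are illustrative assumptions, not from the lecture):

```python
import numpy as np

def lms(u, d, M, mu):
    """LMS per the summary: step 1 sets w_hat(0) = 0, then steps 2-3 repeat."""
    w = np.zeros(M)                        # step 1: start values for the coefficients
    for n in range(M - 1, len(u)):
        u_n = u[n - M + 1:n + 1][::-1]     # u(n) = [u(n), u(n-1), ..., u(n-M+1)]
        e = d[n] - np.vdot(w, u_n)         # step 2: e(n) = d(n) - w_hat^H(n) u(n)
        w = w + mu * u_n * np.conj(e)      # step 3: w_hat(n+1) = w_hat(n) + mu u(n) e*(n)
    return w

rng = np.random.default_rng(4)
M, N = 4, 50_000
w_o = np.array([0.5, -0.3, 0.2, 0.1])      # hypothetical "unknown" system to identify
u = rng.standard_normal(N)
d = np.convolve(u, w_o)[:N] + 0.01 * rng.standard_normal(N)

# rule-of-thumb bound: 0 < mu < 2 / (M E{|u|^2}); unit-power input gives mu < 0.5
mu = 0.01
w_hat = lms(u, d, M, mu)
assert np.allclose(w_hat, w_o, atol=0.05)
```

Choosing $\mu$ well below the rule-of-thumb bound trades some convergence speed for a small misadjustment, which is why the recovered coefficients land close to $w_o$.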
Example: Channel equalizer in the training phase

[Figure: block diagram. A pn-sequence $p(n) = \ldots, 1, -1, 1, 1, -1, \ldots$ is filtered by the channel $h(n) = \{0, , 1.000, , 0\}$; white Gaussian noise $v(n)$ (WGN) with variance $\sigma_v^2$ is added, giving $u(n)$; an adaptive FIR filter ($M = 11$, adapted by the LMS) produces $\hat{d}(n)$; the desired signal $d(n)$ is $p(n)$ delayed by $z^{-7}$; $e(n) = d(n) - \hat{d}(n)$.]

Transmitted information in the form of a pn-sequence is affected by a channel $h(n)$. An adaptive inverse filter is used to compensate for the channel, such that the total effect of channel plus equalizer can be approximated by a pure delay. White Gaussian noise (WGN) is added during the transmission.

Example, cont.: Learning curves

[Figure: learning curves, the average of the MSE $J(n)$ over 200 realizations, plotted against the iteration $n$ for different step sizes $\mu$.] The decay is related to $\tau_{mse,av}$. A large $\mu$ gives fast convergence, but also a large $J_{ex}(\infty)$.

What to read?

Least Mean Squares: Haykin chap. , 5.4 (see lecture). Exercises: 4.1, 4.2, 1.2, 4.3, 4.5, (4.4). Computer exercise, theme: implement the LMS algorithm in Matlab, and generate the results in Haykin chap. 5.7.
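The training-phase equalizer can be sketched in code. Since the channel coefficients, noise variance, and step sizes are not preserved in the transcription, the values below are hypothetical stand-ins; only $M = 11$ and the $z^{-7}$ delay come from the example (Python/NumPy rather than Matlab):

```python
import numpy as np

rng = np.random.default_rng(5)
N, M, delay = 20_000, 11, 7                 # M = 11 taps, z^-7 delay as in the example
h = np.array([0.05, 0.2, 1.0, 0.2, 0.05])   # hypothetical channel (true taps not preserved)
mu = 0.01                                   # hypothetical step size

p = rng.choice([-1.0, 1.0], size=N)         # pn-like +/-1 training sequence
u = np.convolve(p, h)[:N] + 0.01 * rng.standard_normal(N)   # channel output plus WGN
d = np.concatenate([np.zeros(delay), p[:N - delay]])        # desired: delayed sequence

w = np.zeros(M)                             # adaptive equalizer weights
for n in range(M - 1, N):
    u_n = u[n - M + 1:n + 1][::-1]
    e = d[n] - w @ u_n                      # e(n) = d(n) - d_hat(n)
    w = w + mu * u_n * e                    # LMS update (real-valued signals)

# after training, channel + equalizer should act as a pure delay on the symbols
y = np.convolve(u, w)[:N]
err = np.mean(np.sign(y[delay + 1000:]) != np.sign(d[delay + 1000:]))
assert err < 0.02
```

During training the receiver knows the transmitted sequence, so the delayed pn-sequence can serve as $d(n)$; after convergence the equalizer would switch to decision-directed operation.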
MISSING DATA TECHNIQUES WITH SAS IDRE Statistical Consulting Group ROAD MAP FOR TODAY To discuss: 1. Commonly used techniques for handling missing data, focusing on multiple imputation 2. Issues that could
More informationCentre for Central Banking Studies
Centre for Central Banking Studies Technical Handbook No. 4 Applied Bayesian econometrics for central bankers Andrew Blake and Haroon Mumtaz CCBS Technical Handbook No. 4 Applied Bayesian econometrics
More informationParallel & Distributed Optimization. Based on Mark Schmidt s slides
Parallel & Distributed Optimization Based on Mark Schmidt s slides Motivation behind using parallel & Distributed optimization Performance Computational throughput have increased exponentially in linear
More informationCapacity Limits of MIMO Channels
Tutorial and 4G Systems Capacity Limits of MIMO Channels Markku Juntti Contents 1. Introduction. Review of information theory 3. Fixed MIMO channels 4. Fading MIMO channels 5. Summary and Conclusions References
More information1 Review of Least Squares Solutions to Overdetermined Systems
cs4: introduction to numerical analysis /9/0 Lecture 7: Rectangular Systems and Numerical Integration Instructor: Professor Amos Ron Scribes: Mark Cowlishaw, Nathanael Fillmore Review of Least Squares
More informationSTUDY OF MUTUAL INFORMATION IN PERCEPTUAL CODING WITH APPLICATION FOR LOW BIT-RATE COMPRESSION
STUDY OF MUTUAL INFORMATION IN PERCEPTUAL CODING WITH APPLICATION FOR LOW BIT-RATE COMPRESSION Adiel Ben-Shalom, Michael Werman School of Computer Science Hebrew University Jerusalem, Israel. {chopin,werman}@cs.huji.ac.il
More informationGLM, insurance pricing & big data: paying attention to convergence issues.
GLM, insurance pricing & big data: paying attention to convergence issues. Michaël NOACK - michael.noack@addactis.com Senior consultant & Manager of ADDACTIS Pricing Copyright 2014 ADDACTIS Worldwide.
More informationLecture 3: Linear methods for classification
Lecture 3: Linear methods for classification Rafael A. Irizarry and Hector Corrada Bravo February, 2010 Today we describe four specific algorithms useful for classification problems: linear regression,
More informationTentamen i GRUNDLÄGGANDE MATEMATISK FYSIK
Karlstads Universitet Fysik Tentamen i GRUNDLÄGGANDE MATEMATISK FYSIK [ VT 2008, FYGB05] Datum: 2008-03-26 Tid: 8.15 13.15 Lärare: Jürgen Fuchs c/o Carl Stigner Tel: 054-700 1815 Total poäng: 28 Godkänd:
More informationA Load Balancing Method in SiCo Hierarchical DHT-based P2P Network
1 Shuang Kai, 2 Qu Zheng *1, Shuang Kai Beijing University of Posts and Telecommunications, shuangk@bupt.edu.cn 2, Qu Zheng Beijing University of Posts and Telecommunications, buptquzheng@gmail.com Abstract
More informationDigital LMS Adaptation of Analog Filters Without Gradient Information
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: ANALOG AND DIGITAL SIGNAL PROCESSING, VOL. 50, NO. 9, SEPTEMBER 2003 539 Digital LMS Adaptation of Analog Filters Without Gradient Information Anthony Chan
More informationChapter 6: Multivariate Cointegration Analysis
Chapter 6: Multivariate Cointegration Analysis 1 Contents: Lehrstuhl für Department Empirische of Wirtschaftsforschung Empirical Research and und Econometrics Ökonometrie VI. Multivariate Cointegration
More informationSolving Linear Systems, Continued and The Inverse of a Matrix
, Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing
More informationNotes on Factoring. MA 206 Kurt Bryan
The General Approach Notes on Factoring MA 26 Kurt Bryan Suppose I hand you n, a 2 digit integer and tell you that n is composite, with smallest prime factor around 5 digits. Finding a nontrivial factor
More informationExperimental Identification an Interactive Online Course
Proceedings of the 17th World Congress The International Federation of Automatic Control Experimental Identification an Interactive Online Course L. Čirka, M. Fikar, M. Kvasnica, and M. Herceg Institute
More informationSIXTY STUDY QUESTIONS TO THE COURSE NUMERISK BEHANDLING AV DIFFERENTIALEKVATIONER I
Lennart Edsberg, Nada, KTH Autumn 2008 SIXTY STUDY QUESTIONS TO THE COURSE NUMERISK BEHANDLING AV DIFFERENTIALEKVATIONER I Parameter values and functions occurring in the questions belowwill be exchanged
More information
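The lecture derives the LMS update bw(n+1) = bw(n) + µ u(n) e*(n) with e(n) = d(n) − u^H(n) bw(n). As a minimal sketch of that recursion, here is a real-valued LMS filter in Python; the function name `lms`, the system-identification test setup, and the signal names are illustrative, not part of the lecture.

```python
import numpy as np

def lms(u, d, M, mu):
    """Real-valued LMS adaptive filter.

    u  : input signal, 1-D array of length N
    d  : desired signal, 1-D array of length N
    M  : number of filter taps
    mu : step size (must satisfy the stability condition from the lecture)

    Returns the weight history W (N x M) and the error sequence e (N,).
    """
    N = len(u)
    w = np.zeros(M)          # initial weight estimate bw(0) = 0
    W = np.zeros((N, M))
    e = np.zeros(N)
    for n in range(N):
        # tap-input vector u(n) = [u(n), u(n-1), ..., u(n-M+1)]^T (zeros before n=0)
        un = np.array([u[n - k] if n - k >= 0 else 0.0 for k in range(M)])
        y = w @ un           # filter output u^T(n) bw(n)
        e[n] = d[n] - y      # estimation error e(n)
        w = w + mu * un * e[n]   # LMS update with instantaneous gradient estimate
        W[n] = w
    return W, e

# Example: identify an unknown 2-tap FIR system h from noise-free data.
rng = np.random.default_rng(0)
u = rng.standard_normal(2000)
h = np.array([0.5, -0.3])
d = np.convolve(u, h)[:len(u)]
W, e = lms(u, d, M=2, mu=0.05)
```

Because the gradient is estimated from a single sample, the weight trajectory is noisy (gradient noise), but for white input with unit variance the stability condition on µ from the SD analysis carries over, and here the weights settle near h.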