System Identification for Acoustic Comms.: New Insights and Approaches for Tracking Sparse and Rapidly Fluctuating Channels




System Identification for Acoustic Comms.: New Insights and Approaches for Tracking Sparse and Rapidly Fluctuating Channels
Weichang Li and James Preisig, Woods Hole Oceanographic Institution

The demodulation of coherent acoustic communications signals requires estimating the state of the communications channel. A class of underwater acoustic channels is both sparse and rapidly time-varying. Adaptive equalizers for coherent communications require estimates of the time-varying channel impulse response. Rapid time variation -> need to estimate both the channel impulse response and the parameters describing its time variation. Sparseness results in some of the parameters describing the time variation being unobservable in a state-space model sense. Several classes of methods jointly address the challenges posed by sparse and rapidly time-varying channels. Results with experimental data are presented.

Surface Scattered Channels: Wavefronts II Experiment (40 meter range). [Figures: surface wave field; impulse response estimation error]

Acoustic Focusing by Surface Waves. [Figures: time-varying channel impulse response; dynamics of the first surface-scattered arrival versus time (seconds)]

Channel Dynamics (Scattering Function)

Surface Wave Focusing and the Signal Prediction Error (Channel Estimation Error). [Figures: signal prediction error using the RLS algorithm; channel estimate using the RLS algorithm]

Impulse Response Estimation Error (surface scattering dominates, 238 m range). The signal prediction error is a good surrogate for the channel estimation error. The time scale of the error oscillations is the same as that of the dominant surface waves. Single surface-bounce arrivals contribute most significantly to the channel estimation error.

Relevant Aspects of Surface Scattered Channels. The surface-scattered arrivals can have very high intensities and fluctuate rapidly; thus, they can be a major source of error in estimating the channel impulse response. The dynamics of an impulse response containing surface-scattered arrivals can change almost as rapidly as the impulse response itself. Algorithms must therefore accommodate rapid time variation of both the impulse response and the parameters describing the dynamics of its fluctuations. The channel can be sparse, and the subset of energetic taps of the impulse response can change rapidly with time: arrivals appear and disappear as surface conditions evolve, in addition to moving in delay.

System Equations
Time-varying channel impulse response vector: g[n]
Transmitted data vector: d[n], collected over time into the data matrix C[n]
Channel state-space model: g[n+1] = A(θ,n) g[n] + w[n]
Received signal (observation equation): y[n] = C[n] g[n] + v[n]

Model Simplifications. A(θ,n) and the covariance matrix of the process noise, w[n], are diagonal matrices: the temporal fluctuations of the elements of the channel impulse response vector, g[n], are uncorrelated from element to element. The temporal fluctuations of each element of the channel impulse response vector are a first-order AR process (single pole). The process noise, w[n], and the observation noise, v[n], are both white noise processes. The covariances of the process and observation noise are assumed known (in reality, they are used as algorithm tuning parameters).
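As a concrete illustration of these simplifications, the following is a minimal sketch of a sparse channel whose taps are independent first-order AR processes driven by white noise. The dimensions, pole values, noise levels, and tap locations are illustrative assumptions, not the experiment's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 32                      # number of channel taps (assumed)
N = 1000                    # number of time steps
active = [3, 7, 20]         # indices of energetic taps (sparse support)

a = np.zeros(L)             # per-tap AR(1) pole: A(theta, n) is diagonal
a[active] = 0.99            # energetic taps fluctuate but persist
sigma_w = np.zeros(L)       # per-tap process-noise standard deviation
sigma_w[active] = 0.05

g = np.zeros((N, L))        # channel impulse response over time
for n in range(1, N):
    # g[n] = A g[n-1] + w[n], with diagonal A and white w[n]
    g[n] = a * g[n - 1] + sigma_w * rng.standard_normal(L)

# quiescent taps stay exactly zero; only the sparse support fluctuates
quiet = [i for i in range(L) if i not in active]
assert np.allclose(g[:, quiet], 0.0)
```

Because both A(θ,n) and the process-noise covariance are diagonal, each tap evolves independently, which is exactly the element-to-element uncorrelatedness assumed above.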

Algorithm Approaches. The Extended Kalman Filter (EKF), developed in the context of joint parameter/state estimation, which must be modified to address issues related to sparse channels. Comments on the Estimate-Maximize (EM) and related approaches. Matching Pursuit and related approaches, developed in the context of representation with a sparse set of basis vectors, which must be modified to address issues related to the identification of time-varying systems.

The Extended Kalman Filter Approach. Random-walk parameter model -> augmented state (the channel taps g[n] stacked with the parameters a[n]) -> augmented state and observation equations -> linearized state equation.

Parameter Observability and Detectability. Intuition: estimating the state transition coefficient associated with a state variable that equals zero or is very weak is an ill-defined problem. Formally: a necessary and sufficient condition for the observability of the parameter vector a[n] is that the sequence of channel estimates is persistently exciting and the underlying channel model is observable. With a sparse channel and the assumed random-walk (unstable) parameter model, the elements of a[n] associated with the very weak channel taps are not detectable. Result: the EKF can become unstable, and the error covariance can grow without bound.
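This failure mode can be seen in a covariance-only sketch of a scalar augmented state x = [g, a] with a random-walk model for a and observation y[n] = d[n] g[n] + v[n]. When the tap estimate g is zero, both the linearized transition (the Jacobian entry for a is g itself) and the measurement row H = [d, 0] carry no information about a, so its error variance grows without bound. All noise levels here are illustrative assumptions.

```python
import numpy as np

q_g, q_a, r = 1e-4, 1e-4, 1e-2   # process / observation noise variances
g, a, d = 0.0, 0.9, 1.0          # quiescent tap, AR estimate, data symbol

P = np.eye(2) * 1e-2             # error covariance of [g, a]
var_a = []
for n in range(2000):
    F = np.array([[a, g],        # d/dx of [a*g, a]; the (0,1) entry is g
                  [0.0, 1.0]])
    P = F @ P @ F.T + np.diag([q_g, q_a])       # time update
    H = np.array([d, 0.0])                      # y depends only on g
    S = H @ P @ H + r
    K = P @ H / S
    P = (np.eye(2) - np.outer(K, H)) @ P        # measurement update
    var_a.append(P[1, 1])

# with g = 0 the measurement never reduces var(a): it grows by q_a
# every step, linearly and without bound
print(var_a[0], var_a[-1])
```

With an energetic tap (g well away from zero) the same recursion couples a into the measurement through F, and var(a) stays bounded; this is the detectability distinction the dual-model approach below exploits.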

A Dual-Model EKF Approach. Partition the channel taps (elements of the vector g[n]) into two sets: the energetic taps and the non-energetic, or quiescent, taps. Use different models for the time evolution of the elements of the parameter vector a[n] associated with the energetic and quiescent taps. The model for parameters associated with quiescent taps is stable and tends toward a fixed value.
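One way to realize the dual model is sketched below: energetic taps keep the random-walk parameter model, while quiescent taps get a stable first-order model that pulls their transition coefficient toward a resting value. The names beta and a0, the energy threshold, and the specific update rule are illustrative assumptions, not the talk's exact formulation.

```python
import numpy as np

def update_parameter_model(a, g, beta=0.95, a0=0.9, energy_thresh=1e-3):
    """One prediction step for the parameter vector a[n].

    a : current per-tap transition coefficients
    g : current channel tap estimates (used to classify taps)
    """
    a = np.asarray(a, dtype=float).copy()
    quiescent = np.abs(g) ** 2 < energy_thresh
    # quiescent taps: stable model tending toward a0 (|beta| < 1)
    a[quiescent] = beta * a[quiescent] + (1.0 - beta) * a0
    # energetic taps: random walk, i.e. a[n+1] = a[n] plus EKF noise
    return a

a = np.array([0.2, 0.99, 0.2])
g = np.array([0.0, 1.0, 0.0])        # only the middle tap is energetic
for _ in range(200):
    a = update_parameter_model(a, g)
print(a)   # quiescent entries near a0 = 0.9; energetic entry unchanged
```

Because the quiescent-tap model is stable, the corresponding error covariance stays bounded even when the measurements carry no information about those parameters, which addresses the detectability problem above.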

Channel and Doppler Estimates

Doppler Estimates

Effect of Changing β Parameter

Estimate Maximize (EM) Based Approaches. Intuition: for typical state estimation algorithms, the dynamics of the sequence of state estimates can be close to the dynamics of the sequence of states. This holds even if the algorithm averages over too long an interval to estimate the state sequence accurately. From the state model, this yields a notional estimate of the dynamics. The EM algorithm formalizes this in an iterative estimation procedure that accounts for the errors in the state estimates: alternately estimate the state sequence and estimate the transition matrix A.

Comments on EM-Based Approaches. The EM algorithm customarily operates on blocks of data, treating the parameter A as a non-random parameter that is constant over each block. We have developed recursive variants of the traditional EM algorithm and are working on methods of accommodating time variability in the parameter A. The EM algorithm suffers from the same unobservability problem as the EKF.
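The "notional estimate of dynamics" step can be sketched as a per-tap least-squares fit of the diagonal AR(1) transition coefficient from a block of state estimates. A full EM M-step would also fold in the state-estimate error covariances from a Kalman smoother; this simplified version ignores them, and all settings below are illustrative.

```python
import numpy as np

def estimate_transition(g_hat):
    """Per-tap least-squares fit of a in g[n] ~ a * g[n-1] over a block.

    g_hat : (N, L) array of channel state estimates.
    Returns an (L,) vector: the diagonal of the transition matrix A.
    """
    num = np.sum(g_hat[1:] * g_hat[:-1], axis=0)
    den = np.sum(g_hat[:-1] ** 2, axis=0)
    # taps with no energy get a = 0 rather than 0/0
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)

# quick check on a synthetic AR(1) tap with true coefficient 0.95
rng = np.random.default_rng(2)
g = np.zeros((5000, 1))
for n in range(1, 5000):
    g[n] = 0.95 * g[n - 1] + 0.1 * rng.standard_normal()
print(estimate_transition(g))   # close to 0.95
```

Note the zero-energy guard: for quiescent taps the denominator vanishes, which is the same unobservability problem the EKF faces, surfacing here as a degenerate least-squares fit.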

Matching Pursuit Approaches. For a sparse system, MP sequentially selects columns of C[n] (or, equivalently, elements of g[n]). When the columns of C[n] are not orthogonal, variants based on orthogonalization and least-squares metrics are used.
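The basic selection loop can be sketched as follows for the linear model y = C g + v: repeatedly pick the column of C most correlated with the residual, i.e. the channel tap that explains the most remaining signal energy. The orthogonalized and least-squares variants mentioned above refine this loop; the sketch below is plain MP only, with illustrative dimensions.

```python
import numpy as np

def matching_pursuit(C, y, n_iter):
    """Plain matching pursuit: greedy column selection on y = C g."""
    residual = y.astype(float).copy()
    g = np.zeros(C.shape[1])
    for _ in range(n_iter):
        corr = C.T @ residual
        i = np.argmax(np.abs(corr))          # best-matching column / tap
        coef = corr[i] / (C[:, i] @ C[:, i])
        g[i] += coef                         # a tap may be re-selected
        residual -= coef * C[:, i]
    return g

# toy check: recover a 2-sparse g from noiseless data
rng = np.random.default_rng(3)
C = rng.standard_normal((200, 32))
g_true = np.zeros(32)
g_true[[4, 17]] = [1.0, -0.5]
g_hat = matching_pursuit(C, C @ g_true, n_iter=8)
print(g_hat[4], g_hat[17])   # near 1.0 and -0.5
```

Allowing a tap to be re-selected lets MP correct the leakage between non-orthogonal columns over successive iterations; orthogonal MP instead re-solves a least-squares problem over the selected set at each step.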

Modification for Time Variability. The i-th column of C[n] contains the time series of transmitted data symbols that maps the i-th channel tap onto the received signal vector y[n]. Scattering function representation:

Signal Estimation Error (Wavefronts II data). (Note: the standard EKF becomes unstable on this data.)

Conclusions. Surface scattered channels can be both highly dynamic and sparse. Rapid fluctuations require explicitly estimating the parameters describing the channel dynamics as well as the channel state. Parameters can be unobservable if the channel is sparse (i.e., some taps of the channel impulse response have low energy). Channel estimation techniques that jointly account for both the channel sparseness and the rapid fluctuations show performance improvements over techniques that do not account for both factors.