Lecture 8: Signal Detection and Noise Assumption




ECE 830 Fall, Statistical Signal Processing
instructor: R. Nowak, scribe: Feng Ju

1 Signal Detection

Consider the binary hypothesis test

    H_0 : X = W
    H_1 : X = S + W

where W ~ N(0, σ² I_{n×n}) and S = [s_1, s_2, ..., s_n]^T is the known signal waveform. The densities under the two hypotheses are

    P_0(X) = (2πσ²)^{−n/2} exp( −(1/(2σ²)) X^T X )
    P_1(X) = (2πσ²)^{−n/2} exp( −(1/(2σ²)) (X − S)^T (X − S) )

The second equation holds because under hypothesis H_1, W = X − S. The log-likelihood ratio test is

    log Λ(X) = log [ P_W(X − S) / P_W(X) ] = −(1/(2σ²)) [ (X − S)^T (X − S) − X^T X ] = −(1/(2σ²)) [ −2 X^T S + S^T S ]

After simplifying, we get the test

    X^T S  ≷_{H_0}^{H_1}  σ² γ + (1/2) S^T S  =: γ'

In this case, X^T S is the sufficient statistic t(X) for the parameter θ ∈ {0, 1}. Note that S^T S = ‖S‖² is the signal energy. The LR detector filters the data by projecting them onto the signal subspace.

1.1 Example 1

Suppose we want to control the probability of false alarm; for example, choose γ' so that P_0(X^T S > γ') ≤ 0.05. The test statistic X^T S is usually called the matched filter. The projection onto the subspace spanned by S is

    P_S = S S^T / (S^T S) = S S^T / ‖S‖²
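The matched-filter test above can be sketched in a few lines of NumPy. This is a minimal illustration, not the lecture's code: the waveform, noise level σ, and target false-alarm rate α are made-up choices.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

# Hypothetical setup: signal waveform, noise level, target false-alarm rate.
n = 64
k = np.arange(n)
S = np.cos(2 * np.pi * 0.1 * k)    # known signal waveform s_1, ..., s_n
sigma = 1.0

# Under H0, X^T S ~ N(0, sigma^2 ||S||^2), so the threshold achieving
# P0(X^T S > gamma') = alpha is gamma' = sigma ||S|| Q^{-1}(alpha).
alpha = 0.05
gamma = sigma * np.linalg.norm(S) * NormalDist().inv_cdf(1 - alpha)

def matched_filter_detect(X, S, gamma):
    """Declare H1 when the matched-filter statistic t(X) = X^T S exceeds gamma."""
    return X @ S > gamma

# Monte Carlo check of the false-alarm rate under H0: X = W.
trials = 20_000
W = rng.normal(0.0, sigma, size=(trials, n))
pfa_hat = np.mean(matched_filter_detect(W, S, gamma))
```

The empirical false-alarm rate `pfa_hat` should land close to the design value α = 0.05.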

Figure 1: Projection of X onto the subspace spanned by S.

    P_S X = (S S^T / ‖S‖²) X = S (X^T S / ‖S‖²)

where X^T S / ‖S‖² is just a scalar. Geometrically, with the subspace spanned by S drawn as the horizontal line and X as some other vector, the projection of X onto the subspace is as shown in Figure 1.

1.2 Example 2

Suppose the signal S_k is a sinusoid,

    S_k = cos(2π f_0 k + θ),  k = 1, ..., n.

The matched filter in this case computes the component at that specific frequency, so P_S in this example acts as a bandpass filter.

1.3 Performance Analysis

Next, we want the probability density of X^T S, the sufficient statistic of this test. Under the two hypotheses,

    H_0 : X ~ N(0, σ² I)
    H_1 : X ~ N(S, σ² I)

and X^T S = Σ_{k=1}^n X_k S_k is also Gaussian distributed. Recall that if X ~ N(µ, Σ), then Y = AX ~ N(Aµ, A Σ A^T) for any matrix A. Since Y = X^T S = S^T X is a scalar, we get

    H_0 : X^T S ~ N(0^T S, S^T (σ² I) S) = N(0, σ² ‖S‖²)
    H_1 : X^T S ~ N(S^T S, S^T (σ² I) S) = N(‖S‖², σ² ‖S‖²)

The probability of false alarm is P_FA = Q(γ' / (σ‖S‖)), and the probability of detection is

    P_D = Q( (γ' − ‖S‖²) / (σ‖S‖) ) = Q( γ' / (σ‖S‖) − ‖S‖/σ ).

Since the Q function is invertible, γ' / (σ‖S‖) = Q^{−1}(P_FA). Therefore,

    P_D = Q( Q^{−1}(P_FA) − ‖S‖/σ ).

In this equation, ‖S‖/σ is the square root of the signal-to-noise ratio (SNR = ‖S‖²/σ²).
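The final formula P_D = Q(Q^{−1}(P_FA) − √SNR) is easy to evaluate with the standard normal CDF; a small sketch using only the Python standard library (the particular P_FA and SNR values are arbitrary):

```python
from statistics import NormalDist

def Q(x):
    """Gaussian tail probability Q(x) = P(Z > x) for Z ~ N(0, 1)."""
    return 1.0 - NormalDist().cdf(x)

def Q_inv(p):
    """Inverse of the Q function."""
    return NormalDist().inv_cdf(1.0 - p)

def detection_prob(p_fa, snr):
    """ROC of the matched filter: P_D = Q(Q^{-1}(P_FA) - sqrt(SNR)),
    with SNR = ||S||^2 / sigma^2."""
    return Q(Q_inv(p_fa) - snr ** 0.5)

# With SNR = 0 the detector is guessing, so P_D = P_FA (the chance line);
# increasing SNR pushes P_D toward 1 for any fixed P_FA.
pd_chance = detection_prob(0.1, 0.0)
pd_mid = detection_prob(0.1, 4.0)
pd_good = detection_prob(0.1, 16.0)
```

Sweeping `p_fa` over (0, 1) for a fixed SNR traces out one of the ROC curves discussed below.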

Figure 2: Distributions of the test statistic under P_0 and P_1.

Figure 3: Relation between probability of detection and false alarm (P_D versus SNR in dB).

2 AWGN Assumption

Is real-world noise really additive, white, and Gaussian? Here are a few observations. Noise in many applications (e.g., communication and radar) arises from several independent sources, all adding together at the sensor and combining additively with the measurement. AWGN is Gaussian distributed:

    W ~ N(0, σ² I)

CLT (Central Limit Theorem): if x_1, ..., x_n are independent random variables with means µ_i and variances σ_i² < ∞, then

    Z_n = (1/√n) Σ_{i=1}^n (x_i − µ_i)/σ_i → N(0, 1)

in distribution, and the convergence is quite quick. Thus it is quite reasonable to model noise as additive and Gaussian in many applications. However, whiteness is not always a good assumption.

2.1 Example 3

Suppose W = S_1 + S_2 + ... + S_k, where S_1, S_2, ..., S_k are interfering signals that are not of interest, but each of them is structured/correlated in time. We therefore need a more general noise model: colored Gaussian noise.
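The CLT argument for Gaussianity can be checked numerically: sum many independent, clearly non-Gaussian sources and standardize as in the theorem. The source distribution and counts below are made-up choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sum n_sources independent uniform sources on [-0.5, 0.5]
# (each has mean 0 and variance 1/12), then standardize.
n_sources = 30
trials = 100_000
x = rng.uniform(-0.5, 0.5, size=(trials, n_sources))

# Z_n = (1/sqrt(n)) * sum_i (x_i - mu_i)/sigma_i  reduces here to:
Z = x.sum(axis=1) / np.sqrt(n_sources / 12.0)

# Z should already look like N(0, 1); compare a tail probability with Q(1).
tail_hat = np.mean(Z > 1.0)   # for N(0, 1), P(Z > 1) is about 0.1587
```

Even with only 30 terms, the standardized sum matches the N(0, 1) mean, variance, and tail closely, which is why the Gaussian part of the AWGN assumption is usually benign.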

Figure 4: Relation between probability of detection and SNR (the ROC curves shift upward with increasing SNR).

3 Colored Gaussian Noise

W ~ N(0, Σ) is called correlated or colored noise, where Σ is a structured covariance matrix. Consider the binary hypothesis test in this case:

    H_0 : X = S_0 + W
    H_1 : X = S_1 + W

where W ~ N(0, Σ) and S_0 and S_1 are known signal waveforms. So we can rewrite the hypotheses as

    H_0 : X ~ N(S_0, Σ)
    H_1 : X ~ N(S_1, Σ)

The probability density under each hypothesis is

    P_i(X) = (2π)^{−n/2} |Σ|^{−1/2} exp[ −(1/2) (X − S_i)^T Σ^{−1} (X − S_i) ],  i = 0, 1.

The log-likelihood ratio test is

    log( P_1(X)/P_0(X) ) = (1/2) [ (X − S_0)^T Σ^{−1} (X − S_0) − (X − S_1)^T Σ^{−1} (X − S_1) ]
                         = X^T Σ^{−1} (S_1 − S_0) − (1/2) S_1^T Σ^{−1} S_1 + (1/2) S_0^T Σ^{−1} S_0  ≷_{H_0}^{H_1}  γ

Letting t(X) = (S_1 − S_0)^T Σ^{−1} X, we get

    (S_1 − S_0)^T Σ^{−1} X  ≷_{H_0}^{H_1}  γ + (1/2) S_1^T Σ^{−1} S_1 − (1/2) S_0^T Σ^{−1} S_0  =: γ'

The distribution of the test statistic under each hypothesis is

    H_0 : t ~ N( (S_1 − S_0)^T Σ^{−1} S_0, (S_1 − S_0)^T Σ^{−1} (S_1 − S_0) )
    H_1 : t ~ N( (S_1 − S_0)^T Σ^{−1} S_1, (S_1 − S_0)^T Σ^{−1} (S_1 − S_0) )
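The statistic t(X) = (S_1 − S_0)^T Σ^{−1} X and its moments can be verified by simulation. The 2-D signals, correlation, and sample sizes below are hypothetical values, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 2-D instance of the colored-noise detector.
S0 = np.array([0.0, 0.0])
S1 = np.array([2.0, 1.0])
rho = 0.4
Sigma = np.array([[1.0, rho], [rho, 1.0]])
d = np.linalg.solve(Sigma, S1 - S0)   # d = Sigma^{-1} (S1 - S0)

# Theoretical moments of t under H0; the variance is hypothesis-independent.
mean_h0 = d @ S0                      # (S1 - S0)^T Sigma^{-1} S0
var_t = d @ (S1 - S0)                 # (S1 - S0)^T Sigma^{-1} (S1 - S0)

# Monte Carlo under H0: color white noise with the Cholesky factor of Sigma,
# so each row of X0 is distributed as N(S0, Sigma).
L = np.linalg.cholesky(Sigma)
X0 = S0 + rng.standard_normal((50_000, 2)) @ L.T
t0 = X0 @ d                           # samples of t(X) under H0
```

The sample mean and variance of `t0` should match `mean_h0` and `var_t`, and `var_t` is exactly the SNR definition given on the next page.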

In this case it is natural to define

    P_FA = Q( (γ' − (S_1 − S_0)^T Σ^{−1} S_0) / [ (S_1 − S_0)^T Σ^{−1} (S_1 − S_0) ]^{1/2} )
    SNR = (S_1 − S_0)^T Σ^{−1} (S_1 − S_0)

3.1 Example 4

Let

    S_0 = [0, 0]^T,  S_1 = [1, 1]^T,  Σ = [1 ρ; ρ 1],  Σ^{−1} = (1/(1 − ρ²)) [1 −ρ; −ρ 1].

The test statistic is

    y = (S_1 − S_0)^T Σ^{−1} X = (1/(1 − ρ²)) [1, 1] [1 −ρ; −ρ 1] [x_1, x_2]^T = (1/(1 + ρ)) (x_1 + x_2)

Its distribution under each hypothesis is

    H_0 : y ~ N( 0, 2/(1 + ρ) )
    H_1 : y ~ N( 2/(1 + ρ), 2/(1 + ρ) )

The probabilities of false alarm and detection are

    P_FA = Q( γ' / √(2/(1 + ρ)) )
    P_D = Q( (γ' − 2/(1 + ρ)) / √(2/(1 + ρ)) )

Figure 5: ROC curve at different ρ.
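Example 4 can be checked end to end: compute P_FA and P_D from the formulas above, then estimate P_D by drawing X ~ N(S_1, Σ) and thresholding y = (x_1 + x_2)/(1 + ρ). The threshold γ' below is a made-up choice for illustration.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)

def Q(x):
    """Gaussian tail probability Q(x) = P(Z > x) for Z ~ N(0, 1)."""
    return 1.0 - NormalDist().cdf(x)

# Example 4 quantities: S0 = [0, 0]^T, S1 = [1, 1]^T, correlation rho.
rho = 0.5
var_y = 2.0 / (1.0 + rho)     # variance of y under both hypotheses
mean_h1 = 2.0 / (1.0 + rho)   # mean of y under H1
gamma = 1.0                   # hypothetical threshold gamma'

p_fa = Q(gamma / np.sqrt(var_y))
p_d = Q((gamma - mean_h1) / np.sqrt(var_y))

# Monte Carlo check of P_D: color white noise so X ~ N(S1, Sigma),
# then apply the scalar test y = (x1 + x2)/(1 + rho) > gamma.
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
X1 = np.array([1.0, 1.0]) + rng.standard_normal((50_000, 2)) @ L.T
y1 = X1.sum(axis=1) / (1.0 + rho)
p_d_hat = np.mean(y1 > gamma)
```

The empirical detection rate `p_d_hat` should agree with the closed-form `p_d`, and both exceed `p_fa`, as any useful detector's operating point must.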