Lecture 8: Signal Detection and Noise Assumption


ECE 830 Statistical Signal Processing
Instructor: R. Nowak; scribe: Feng Ju

1 Signal Detection

Consider the binary hypothesis test

    H0: X = W
    H1: X = S + W

where W ~ N(0, σ²I_{n×n}) and S = [s_1, s_2, ..., s_n]^T is the known signal waveform. The densities of the data under the two hypotheses are

    P_0(X) = (2πσ²)^{-n/2} exp(-X^T X / (2σ²))
    P_1(X) = (2πσ²)^{-n/2} exp(-(X - S)^T (X - S) / (2σ²))

The second equation holds because under H1, W = X - S. The log-likelihood ratio test is

    log Λ(X) = log [P_W(X - S) / P_W(X)]
             = -1/(2σ²) [(X - S)^T (X - S) - X^T X]
             = 1/(2σ²) [2 X^T S - S^T S]  ≷_{H0}^{H1}  γ

After simplifying, we get

    X^T S  ≷_{H0}^{H1}  σ²γ + S^T S / 2 =: γ'

In this case X^T S is the sufficient statistic t(X) for the parameter θ ∈ {0, 1}. Note that S^T S = ||S||² is the signal energy. The LR detector filters the data by projecting it onto the signal subspace.

1.1 Example 1

Suppose we want to control the probability of false alarm; for example, choose γ' so that P_0(X^T S > γ') = 0.05. The test statistic X^T S is usually called the matched filter. In particular, the projection onto the subspace spanned by S is

    P_S = S S^T / (S^T S) = S S^T / ||S||²
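The reduced test above, decide H1 when t(X) = X^T S exceeds γ', can be sketched numerically. This is a minimal sketch, not from the notes: the cosine waveform, the threshold value, and the function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64
k = np.arange(n)
S = np.cos(2 * np.pi * 0.1 * k)   # assumed signal waveform for illustration
sigma = 1.0
gamma = 0.5 * (S @ S)             # assumed threshold: half the signal energy

def matched_filter(X, S, gamma):
    """Decide H1 iff the sufficient statistic t(X) = X^T S exceeds gamma."""
    return X @ S > gamma

def project_onto_S(X, S):
    """Projection onto span(S): P_S X = S (X^T S) / (S^T S)."""
    return S * (X @ S) / (S @ S)

# Under H1 the data are the signal plus white Gaussian noise.
X = S + sigma * rng.standard_normal(n)
decision = matched_filter(X, S, gamma)
Xhat = project_onto_S(X, S)
```

Since P_S has rank one, projecting twice changes nothing; that idempotence is a quick sanity check on the implementation.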
Figure 1: Projection of X onto the subspace spanned by S.

The projection of the data is

    P_S X = S S^T X / ||S||² = S (X^T S) / ||S||²

where X^T S is just a number. Geometrically, if the horizontal line in Figure 1 is the subspace spanned by S and X is some other vector, the projection of X onto the subspace is as shown in the figure.

1.2 Example 2

Suppose the signal S_k is a sinusoid,

    S_k = cos(2πf k + θ),  k = 1, ..., n

The matched filter in this case computes the content of the data at that specific frequency, so P_S in this example acts as a bandpass filter.

1.3 Performance Analysis

The next thing we want to know is the probability density of X^T S, the sufficient statistic of this test. The hypotheses are

    H0: X ~ N(0, σ²I)
    H1: X ~ N(S, σ²I)

and X^T S = Σ_{k=1}^n X_k S_k is also Gaussian distributed. Recall that if X ~ N(µ, Σ), then Y = AX ~ N(Aµ, AΣA^T), where A is a matrix. Since Y = X^T S = S^T X, Y is a scalar, so we get

    H0: X^T S ~ N(0^T S, S^T(σ²I)S) = N(0, σ²||S||²)
    H1: X^T S ~ N(S^T S, S^T(σ²I)S) = N(||S||², σ²||S||²)

The probability of false alarm is P_FA = Q(γ' / (σ||S||)), and the probability of detection is

    P_D = Q((γ' - ||S||²) / (σ||S||)) = Q(γ'/(σ||S||) - ||S||/σ)

Since the Q function is invertible, γ'/(σ||S||) = Q^{-1}(P_FA). Therefore

    P_D = Q(Q^{-1}(P_FA) - ||S||/σ)

In this equation ||S||/σ is the square root of the signal-to-noise ratio (SNR).
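The closed form P_D = Q(Q^{-1}(P_FA) - ||S||/σ) can be checked with a short Monte Carlo experiment. In this sketch the constant toy signal, the erfc-based Q, and the bisection inverse Qinv are my own assumptions:

```python
import math
import numpy as np

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Qinv(p):
    """Inverse of Q via bisection (simple, not optimized)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Q(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sigma = 1.0
S = np.ones(16)                       # toy signal, ||S||^2 = 16
energy = float(S @ S)
snr_root = math.sqrt(energy) / sigma  # ||S|| / sigma

P_FA = 0.05
gamma = Qinv(P_FA) * sigma * math.sqrt(energy)   # threshold gamma' on X^T S
P_D = Q(gamma / (sigma * math.sqrt(energy)) - snr_root)

# Monte Carlo estimate of P_D under H1 for comparison
rng = np.random.default_rng(1)
X = S + sigma * rng.standard_normal((100_000, 16))
P_D_mc = float(np.mean(X @ S > gamma))
```

With ||S||/σ = 4 and P_FA = 0.05, the formula predicts P_D ≈ 0.99, and the empirical detection rate should agree to within Monte Carlo error.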
Figure 2: Distributions of the test statistic under P_0 and P_1.

Figure 3: Relation between probability of detection and probability of false alarm (ROC curves at several values of SNR, in dB).

2 AWGN Assumption

Is real-world noise really additive, white, and Gaussian? Here are a few observations. Noise in many applications (e.g., communications and radar) arises from several independent sources, all adding together at the sensor and combining additively with the measurement. The AWGN model is

    W ~ N(0, σ²I)

CLT (Central Limit Theorem): if x_1, ..., x_n are independent random variables with means µ_i and variances σ_i² < ∞, then

    Z_n = (1/√n) Σ_{i=1}^n (x_i - µ_i)/σ_i  →  N(0, 1)

in distribution, and the convergence is quite fast. Thus it is quite reasonable to model noise as additive and Gaussian in many applications. However, whiteness is not always a good assumption.

2.1 Example 3

Suppose W = S_1 + S_2 + ... + S_k, where S_1, S_2, ..., S_k are interfering signals that are not of interest, but each of them is structured/correlated in time. In this case we need a more general noise model: colored Gaussian noise.
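The CLT statement above can be illustrated by summing independent non-Gaussian sources; the standardized sum is already close to N(0, 1) for a modest number of sources. The uniform noise sources and sample sizes here are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n_sources, n_samples = 30, 200_000

# Each source is uniform on [-1, 1]: mean 0, variance 1/3 (clearly non-Gaussian).
x = rng.uniform(-1.0, 1.0, size=(n_sources, n_samples))

# Z_n = (1/sqrt(n)) * sum_i (x_i - mu_i) / sigma_i, as in the CLT statement.
Z = np.sum((x - 0.0) / np.sqrt(1.0 / 3.0), axis=0) / np.sqrt(n_sources)

# Z should look standard normal: mean ~ 0, variance ~ 1, P(Z > 1.96) ~ 0.025.
z_mean = float(Z.mean())
z_var = float(Z.var())
tail = float(np.mean(Z > 1.96))
```

Even with only 30 summed sources, the tail probability matches the Gaussian value closely, which is what makes the additive-Gaussian part of the AWGN model so robust.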
Figure 4: Relation between probability of detection and false alarm as SNR increases (ROC curves improve with increasing SNR).

3 Colored Gaussian Noise

W ~ N(0, Σ) is called correlated or colored noise, where Σ is a structured covariance matrix. Consider the binary hypothesis test in this case:

    H0: X = S_0 + W
    H1: X = S_1 + W

where W ~ N(0, Σ) and S_0, S_1 are known signal waveforms. We can rewrite the hypotheses as

    H0: X ~ N(S_0, Σ)
    H1: X ~ N(S_1, Σ)

The probability density under each hypothesis is

    P_i(X) = (2π)^{-n/2} |Σ|^{-1/2} exp[-(1/2)(X - S_i)^T Σ^{-1}(X - S_i)],  i = 0, 1

The log-likelihood ratio is

    log(P_1(X)/P_0(X)) = (1/2)[(X - S_0)^T Σ^{-1}(X - S_0) - (X - S_1)^T Σ^{-1}(X - S_1)]
                       = X^T Σ^{-1}(S_1 - S_0) - (1/2) S_1^T Σ^{-1} S_1 + (1/2) S_0^T Σ^{-1} S_0  ≷_{H0}^{H1}  γ

Letting t(X) = (S_1 - S_0)^T Σ^{-1} X, we get

    (S_1 - S_0)^T Σ^{-1} X  ≷_{H0}^{H1}  γ + (1/2) S_1^T Σ^{-1} S_1 - (1/2) S_0^T Σ^{-1} S_0 =: γ'

The test statistic is Gaussian under each hypothesis:

    H0: t ~ N((S_1 - S_0)^T Σ^{-1} S_0, (S_1 - S_0)^T Σ^{-1}(S_1 - S_0))
    H1: t ~ N((S_1 - S_0)^T Σ^{-1} S_1, (S_1 - S_0)^T Σ^{-1}(S_1 - S_0))
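A sketch of the colored-noise statistic t(X) = (S_1 - S_0)^T Σ^{-1} X with a Monte Carlo check of its mean and variance under H1. The AR(1)-style covariance and the signal choices are assumptions for illustration, not from the notes:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 4
rho = 0.5
# Assumed correlated covariance: Sigma[i, j] = rho^|i - j|
Sigma = np.fromfunction(lambda i, j: rho ** np.abs(i - j), (n, n))
S0 = np.zeros(n)
S1 = np.ones(n)

w = np.linalg.solve(Sigma, S1 - S0)   # Sigma^{-1} (S1 - S0)

def t_stat(X):
    """Colored-noise sufficient statistic t(X) = (S1 - S0)^T Sigma^{-1} X."""
    return X @ w

# Moments of t derived above; the common variance is also the SNR.
var_t = float((S1 - S0) @ w)          # (S1 - S0)^T Sigma^{-1} (S1 - S0)
mean_H0 = float(S0 @ w)               # (S1 - S0)^T Sigma^{-1} S0
mean_H1 = float(S1 @ w)               # (S1 - S0)^T Sigma^{-1} S1

# Monte Carlo check under H1: colored noise via a Cholesky factor of Sigma.
L = np.linalg.cholesky(Sigma)
W = rng.standard_normal((100_000, n)) @ L.T
t1 = t_stat(S1 + W)
```

The empirical mean and variance of t under H1 should match the closed-form values, confirming the Gaussian distributions stated above.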
The probability of false alarm is

    P_FA = Q( (γ' - (S_1 - S_0)^T Σ^{-1} S_0) / [(S_1 - S_0)^T Σ^{-1}(S_1 - S_0)]^{1/2} )

In this case it is natural to define

    SNR = (S_1 - S_0)^T Σ^{-1}(S_1 - S_0)

3.1 Example 4

Let S_0 = [0, 0]^T, S_1 = [1, 1]^T, and

    Σ = [ 1  ρ ;  ρ  1 ],   Σ^{-1} = 1/(1 - ρ²) [ 1  -ρ ;  -ρ  1 ]

The test statistic is

    y = (S_1 - S_0)^T Σ^{-1} X = 1/(1 - ρ²) [1 - ρ,  1 - ρ] [x_1; x_2] = (x_1 + x_2)/(1 + ρ)

and its distributions under the two hypotheses are

    H0: y ~ N(0, 2/(1 + ρ))
    H1: y ~ N(2/(1 + ρ), 2/(1 + ρ))

The probabilities of false alarm and detection are

    P_FA = Q( γ' / √(2/(1 + ρ)) ),   P_D = Q( (γ' - 2/(1 + ρ)) / √(2/(1 + ρ)) )

Figure 5: ROC curves at different values of ρ.
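Example 4 can be verified numerically. The values S_0 = [0, 0]^T and S_1 = [1, 1]^T below follow the reconstruction above; the check confirms the claimed closed form y = (x_1 + x_2)/(1 + ρ) and SNR = 2/(1 + ρ):

```python
import numpy as np

rho = 0.5
Sigma = np.array([[1.0, rho], [rho, 1.0]])
S0 = np.array([0.0, 0.0])
S1 = np.array([1.0, 1.0])

w = np.linalg.solve(Sigma, S1 - S0)   # Sigma^{-1} (S1 - S0)

# Closed form derived above versus the general expression, at an arbitrary point.
x = np.array([0.7, -0.2])
y_closed = (x[0] + x[1]) / (1 + rho)
y_general = float(x @ w)

# SNR = (S1 - S0)^T Sigma^{-1} (S1 - S0), which should equal 2 / (1 + rho).
snr = float((S1 - S0) @ w)
```

Note that SNR = 2/(1 + ρ) decreases as ρ grows toward 1: noise that is positively correlated along the signal direction hurts detection, which is why the ROC curves in Figure 5 vary with ρ.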