UNIVERSITY OF MINNESOTA. Alejandro Ribeiro


UNIVERSITY OF MINNESOTA

This is to certify that I have examined this copy of a Master's thesis by Alejandro Ribeiro and have found that it is complete and satisfactory in all respects, and that any and all revisions required by the final examining committee have been made.

Name of Faculty Advisor(s)

Signature of Faculty Advisor(s)

Date

GRADUATE SCHOOL

Distributed Quantization-Estimation for Wireless Sensor Networks

A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL OF THE UNIVERSITY OF MINNESOTA BY Alejandro Ribeiro IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

Professor Georgios B. Giannakis, Advisor

August 2005

© Alejandro Ribeiro 2006

Abstract: Distributed Quantization-Estimation for Wireless Sensor Networks

At the crossroads of sensing, control, and wireless communications, wireless sensor networks (WSNs), whereby large numbers of individual nodes collaborate to monitor and control environments, have emerged in recent years along with the field of distributed signal processing. This thesis studies the intertwining between quantization and estimation that arises due to the distributed nature of WSNs. Given that each sensor has available only part of the measurements, parameter estimation requires quantization of the original observations, transforming the problem into one of estimation based on quantized observations, certainly different from estimation based on the analog-amplitude observations. This intertwining is studied in a number of setups with an eye toward realistic scenarios. We start with a simple mean-location deterministic parameter estimation problem in the presence of additive white Gaussian noise, which we follow with generalizations to deterministic parameter estimation for pragmatic signal models. Among this class of signal models we consider: i) known univariate but generally non-Gaussian noise probability density functions (pdfs); ii) known noise pdfs with a finite number of unknown parameters; iii) completely unknown noise pdfs; and iv) practical generalizations to multivariate and possibly correlated pdfs. Within a different paradigm, we also derive and analyze distributed state estimators of dynamical stochastic processes. Following a Kalman filtering (KF) approach, we develop recursive algorithms for distributed state estimation based on the sign of innovations (SOI).

Surprisingly, in all scenarios considered we reveal two common properties: i) the performance of estimators based on quantization to a few bits per sensor can come very close to the performance of estimators based on the analog-amplitude observations; and ii) the complexity of optimal estimators based on quantized observations is low, even though quantization leads to a discontinuous signal model.

Contents

Abstract   i
List of Figures   v

1 Wireless Sensor Networks
  1.1 Distributed Estimation with WSNs
  1.2 WSN topologies
  1.3 Some motivating applications
      Estimating a vector wind flow
      Target tracking with SOI-EKF
  1.4 The thesis in context

2 Mean-location in additive white Gaussian noise
  Introduction
  Problem statement
  MLE based on binary observations: common thresholds
  MLE based on binary observations: non-identical thresholds
  Selecting the parameters (τ, ρ)
  An achievable upper bound on B_W(τ, ρ)
  Algorithmic Implementation
  Relaxing the Bandwidth Constraint
  Optimum threshold spacing
  Quantized sample mean estimator
  Numerical results
  Designing (τ, ρ)
  Estimation with 1 bit per sensor
  Comparison with deterministic control signals
  Appendices
  Proof of Proposition
  Proof of Theorems 2.1 and
  Proof of Proposition
  Proof of Proposition
  Proof of Proposition

3 Distributed batch estimation based on binary observations
  Introduction
  Problem Statement
  Scalar parameter estimation
  Parametric Approach
  Known noise pdf
  Known Noise pdf with Unknown Variance
  Dependent binary observations
  Scalar parameter estimation
  Unknown noise pdf
  Independent binary observations
  Dependent binary observations
  Practical Considerations
  Vector parameter Generalization
  Colored Gaussian Noise
  Simulations
  Scalar parameter estimation
  Vector Parameter Estimation
  A Motivating Application
  Appendices
  Proofs of Lemma 3.1 and Proposition
  Proofs of Lemma 3.2 and Proposition
  Proof of Proposition

4 Distributed state estimation using the sign of innovations
  Introduction
  Problem statement and preliminaries
  The Kalman filter benchmark
  State estimation using the sign of innovations
  Exact MMSE Estimator
  Approximate MMSE estimator
  Vector state - vector observation case
  Performance analysis
  Simulations
  Target tracking with SOI-EKF
  Appendix: Proof of (4.20)

5 Conclusions and Future Work
  Future research
  Maximum a posteriori estimation with binary observations
  Extensions of the SOI-KF

Bibliography   117

List of Figures

1.1 WSN with a Fusion Center: the sensors act as data gathering devices.
1.2 Ad hoc WSN: the network itself is in charge of estimation.
1.3 The wind v incises on a sensor capable of measuring the normal component of v.
1.4 Average variance for the components of v. The empirical variances as well as the bound (1.6) are compared with the MLE based on the analog observations (v = (1, 1), σ = 1).
1.5 Target tracking with the EKF and SOI-EKF yields almost identical estimates. The scheduling algorithm works in cycles of duration T: at the beginning of a cycle we schedule the sensor S_k closest to the estimate x̂(n|n−1), then the second closest, and so on until the cycle is complete (T = 4, T_s = 1 s, L = 2 km, K = 100, α = 3.4, σ_u = 0.2 m, σ_v = 1).
1.6 Standard deviations of the estimates in Fig. 1.5 are on the order of 5-10 m for both filters.
2.1 CRLB and Chernoff bound in (2.13) as a function of the distance between τ_c and θ measured in AWGN standard deviation (σ) units.
2.2 The MLE in (2.6) based on binary observations performs close to the clairvoyant sample mean estimator when θ is close to the threshold defining the binary observation (σ = 1, τ_c = 0, and θ = 1).
2.3 Variance of the estimator relying on the whole sequence of binary observations. The room for improved performance once τ < σ is small.
2.4 Variation with the SNR of the threshold spacing that minimizes the worst-case per-bit CRLB. C_b(τ) is very flat around the optimum, and τ changes little as the SNR moves over a 50 dB range.
2.5 Gaussian noise and Gaussian-shaped weight function. Although a threshold spacing τ = σ reduces the approximation error to almost zero, a spacing τ = 2σ is good enough in practice (σ = 1 and σ_θ = 2).
2.6 Gaussian noise and uniform weight function. A threshold spacing τ = σ has smaller MSE, but a spacing τ = 2σ is better over most of the non-zero-probability interval (σ = 1, prior U[-7, 7]).
2.7 Gaussian noise and Gaussian weight function. With a threshold spacing τ = 2σ we achieve a good approximation to the minimum asymptotic average variance (σ = 1, τ = 2, and σ_θ = 2).
2.8 The average variance of the optimum set (τ, ρ), found as the solution of (2.37), yields a noticeable advantage over equispaced, equal-frequency thresholds as defined by (2.55) (σ = 1, τ = 2, and σ_θ = 2).
3.1 Per-bit CRLB when the binary observations are independent (Section 3.3.2) and dependent (Section 3.3.3), respectively. In both cases, the variance increase with respect to the sample mean estimator is small when the σ-distances are close to 1, being slightly better for dependent binary observations (Gaussian noise).
3.2 When the noise pdf is unknown, numerically integrating the CCDF using the trapezoidal rule yields an approximation of the mean.
3.3 The vector of binary observations b takes on the value {β_1, β_2} if and only if x(n) belongs to the region B_{β_1, β_2}.
3.4 Selecting the regions B_k(n) perpendicular to the covariance matrix eigenvectors results in independent binary observations.
3.5 Estimator for noise of unknown power. The CRLB in (3.15) is an accurate prediction of the variance of the MLE (3.14); moreover, its variance is close to that of the clairvoyant sample mean estimator based on the analog observations (σ = 1, θ = 0, Gaussian noise).
3.6 Universal estimator introduced in Section 3.4. The bound in (3.39) overestimates the true variance by a factor that depends on the noise pdf (σ = 1, T = 5, θ chosen randomly in [-2, 2]).
3.7 The vector flow v incises on a sensor capable of measuring the normal component of v.
3.8 Average variance for the components of v. The empirical variances as well as the bound (3.68) are compared with the MLE based on the analog observations (v = (1, 1), σ = 1).
4.1 Ad hoc WSN: the network itself is in charge of tracking the state x(n).
4.2 WSN with a Fusion Center: the sensors act as data gathering devices.
4.3 The MSEs tr[M(T_s; n|n)] of the estimator and tr[M(T_s; n|n−1)] of the predictor converge to the continuous-time MSE tr[M_c(nT_s)] as T_s decreases (A_c(t) = I, h_c(t) = [1, 2]^T, C_uc(t) = I, and σ_vc^2(t) = 1).
4.4 The MSE tr[M(T_s; n|n)] of the SOI-KF and the MSE tr[M_{π/2}(T_s; n|n)] of the (π/2)-KF are indistinguishable for small T_s; as T_s increases there is a noticeable but still small difference. The penalty with respect to tr[M_K(T_s; n|n)] is small for moderate T_s (A_c(t) = I, h_c(t) = [1, 2]^T, C_uc(t) = I, and σ_vc(t) = 1).
4.5 SOI-KF compared with the (π/2)-KF. The filtered MSEs of the two filters are indistinguishable for small T_s, but as T_s becomes large the (π/2)-KF is not a good predictor of the SOI-KF's performance (β_1 = 0.1, β_2 = 0.2, σ_u^2 = 1, and σ_v^2 = 1).
4.6 SOI-KF compared with the KF: even for moderate values of T_s, the performance penalty is small (β_1 = 0.1, β_2 = 0.2, σ_u^2 = 1, and σ_v^2 = 1).
4.7 Target tracking with the EKF and SOI-EKF yields almost identical estimates. The scheduling algorithm works in cycles of duration T: at the beginning of a cycle we schedule the sensor S_k closest to the estimate x̂(n|n−1), then the second closest, and so on until the cycle is complete (T = 4, T_s = 1 s, L = 2 km, K = 100, α = 3.4, σ_u = 0.2 m, σ_v = 1).
4.8 Standard deviations of the estimates in Fig. 4.7 are on the order of 5-10 m for both filters.

Chapter 1

Wireless Sensor Networks

Recent years have witnessed the evolution of wireless sensor networks (WSNs), which in broad terms can be defined as groups of wireless sensors. A wireless sensor, in turn, is a signal processing device capable of sensing physical variables, acting on the physical environment, and communicating with other devices over a wireless channel. By touching upon the centuries-old fields of sensing and control and the decades-old field of wireless communications, this definition hardly contains any novel idea at all; however, it is the combination of these fields that has led to a whole new set of applications. Indeed, the key capability that WSNs add is that equipping sensors with wireless communication enables distributed sensing and control. While this may not look like a significant difference, a number of applications become possible, or simpler to perform, with a distributed WSN. Consider as a typical example habitat monitoring, where we want to sense variables of interest in a particular environment, e.g., air quality indicators in a certain neighborhood. The difficulty for a centralized sensing system is that there is no single indicator but a space-varying field, which can be estimated more easily by a distributed network of sensors. Yet another canonical example is target tracking. While a centralized tracker will do just fine, a distributed network can collect more accurate observations given the greater likelihood that some sensor is close to the target. Even if we could spend pages describing potential applications, the important point is that the physical world is inherently distributed and

if we want to sense and take actions in it, the ultimate goal is a distributed sensing/control network. And that is what a WSN is.

Trying to keep this introduction as general as possible, we have omitted a number of assumptions that are customary in WSN research and that will be considered integral to our WSN setup in the rest of the thesis. Besides the abilities already described, a sensor is supposed to be a relatively inexpensive device. Thus, the quality of the observations it makes is considered low, and its processing capabilities limited. Moreover, severe power and bandwidth constraints are usual requirements; it is not rare to assume that a sensor can only transmit a few bits and be active a few minutes per hour. The network, on the other hand, is considered to consist of a large number of sensors randomly distributed in the area of interest. These properties ensure that WSNs can be easily deployed, are robust to failures, and can operate on limited energy for long periods of time.

1.1 Distributed Estimation with WSNs

Foremost among the tasks performed by WSNs is the observation of physical phenomena, either a goal in itself (e.g., in environmental monitoring applications) or the first step in distributed control. While a number of tools in the fields of statistics and information theory, among others, have been developed over the years, the unique characteristics of WSNs require rethinking many of the algorithms traditionally used for estimation. Indeed, the distributed nature of the observations necessitates transmission of the individual sensors' data; moreover, the power/bandwidth available for transmission and signal processing is severely limited.
To complicate matters even more, the parametric data model used and the sensor noise distributions are not easy to characterize; observations taken by (small, cheap) sensors are very noisy; and the WSN size and topology may change dynamically. To appreciate the challenges implied by these properties, consider a customary mean-location parameter estimation problem in which we estimate a parameter in additive zero-mean noise. The distributed nature of the observations dictates quantization of the original observations prior to digital transmission, transforming the estimation problem into one

of estimation based on the quantized digital messages, certainly different from estimation based on the original analog-amplitude observations. Besides, the severe bandwidth/power constraint requires these messages to contain only a few bits, and the lack of an accurate data/noise model preempts application of optimum estimation algorithms. Thus, estimation with WSNs requires studying the intertwining between quantization and estimation based on severely quantized data under possibly unknown data/noise models.

The main focus of the present thesis is the problem of distributed estimation using a WSN, with particular emphasis on the intertwining between quantization and estimation. We begin by studying distributed mean-location parameter estimation in the presence of additive white Gaussian noise (AWGN) in Chapter 2. We seek Maximum Likelihood Estimators (MLEs) based on quantized observations and benchmark their variances with the Cramer-Rao Lower Bound (CRLB) that, at least asymptotically, is achieved by the MLE. We show that the deciding factor in the choice of the estimator is the relation between the dynamic range of the parameter and the observation noise variance. When the dynamic range of the parameter is small or comparable to the noise variance, we introduce a class of maximum likelihood estimators that require transmitting just one bit per sensor to achieve an estimation variance close to that of the (clairvoyant) sample mean estimator. When the dynamic range is comparable to or larger than the noise standard deviation, we show that an optimum quantization step exists that achieves the best possible variance for a given bandwidth constraint. We also establish that in this case the sample mean estimator formed from quantized observations is preferable for complexity reasons.
We finally touch upon algorithm implementation issues and guarantee that all the numerical maximizations required by the proposed estimators involve concave objectives, implying that low-complexity optimization algorithms, e.g., Newton's method, converge to the unique global maximum.

One of the most important conclusions of Chapter 2 is that when the parameter's dynamic range is comparable to the noise variance, the variance of an estimator based on the transmission of a single bit per observation is within a small factor of the variance of the clairvoyant sample mean estimator. The goal of Chapter 3 is to show that

this fundamental property extends to more pragmatic models. Indeed, we show in Chapter 3 that for a large class of distributed estimation problems, even a single bit per sensor can afford minimal increase in estimation variance. Among these pragmatic signal models, we consider: i) known univariate but generally non-Gaussian noise probability density functions (pdfs); ii) known noise pdfs with a finite number of unknown parameters; iii) completely unknown noise pdfs; and iv) practical generalizations to multivariate and possibly correlated pdfs. Quite surprisingly, besides the small performance penalty paid in all of these scenarios, it also turns out that the MLE can either be obtained in closed form or as the (unique) maximum of a concave function. Corroborating our theoretical findings, we consider a motivating application entailing distributed parameter estimation where a WSN is used for habitat monitoring.

A conclusion of Chapters 2 and 3 is the possibility of accurate parameter estimation based on severe quantization to a single bit per observation when we have reasonably accurate prior knowledge about the parameter. A problem in which this is indeed true is state estimation of dynamical stochastic processes, in which the state prediction based on past observations can be used to quantize the current observation. This is the subject of Chapter 4, where we derive and analyze distributed state estimators of dynamical stochastic processes, whereby low communication cost is effected by requiring the transmission of a single bit per observation. Following a Kalman filtering (KF) approach, we develop recursive algorithms for distributed state estimation based on the sign of innovations (SOI). Even though the SOI-KF affords minimal communication overhead, we prove that in terms of performance and complexity it comes very close to the clairvoyant KF based on the analog-amplitude observations.
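As a quick numerical illustration (this check is ours, not part of the thesis): for zero-mean Gaussian noise, a binary observation b(n) = 1{x(n) ≥ τ} with the threshold placed at the true parameter carries Fisher information p²(0)/[F(0)(1 − F(0))] = 2/(πσ²), so its asymptotic variance exceeds the clairvoyant sample-mean variance σ²/N by exactly the π/2 factor that Chapter 2 establishes for the single-bit case.

```python
import math

# Fisher information of one binary observation b = 1{x >= tau} under
# Gaussian noise N(0, sigma^2), with the threshold at the parameter
# (tau = theta): I_b = p(0)^2 / (F(0) * (1 - F(0))).
sigma = 1.0
p0 = 1.0 / (math.sqrt(2.0 * math.pi) * sigma)  # Gaussian pdf at 0
F0 = 0.5                                        # Gaussian CDF at the threshold
info_binary = p0**2 / (F0 * (1.0 - F0))

info_analog = 1.0 / sigma**2         # Fisher information of one analog sample
penalty = info_analog / info_binary  # variance inflation per observation
print(round(penalty, 6))             # → 1.570796, i.e., pi/2
```

Moving the threshold away from the parameter shrinks p(τ − θ) faster than F(1 − F), which is why this π/2 figure is a best case and careful threshold design matters.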
Reinforcing our conclusions, we show that the SOI-KF applied to distributed target tracking based on distance-only observations yields accurate estimates at low communication cost.

It is worth noting that the flow of the thesis is not only towards increasingly complex problems but also towards more realistic ones. As the results in Chapter 2 are insightful but of limited practical significance, we introduce the pragmatic signal models of Chapter 3. Alas, both chapters leave unaddressed the issue of prior knowledge. That issue is addressed in Chapter 4,

where a practical state estimation algorithm based on binary SOI observations is developed, analyzed, and tested.

1.2 WSN topologies

Figure 1.1: WSN with a Fusion Center: the sensors act as data gathering devices.

Two different WSN topologies, characterized by the presence or absence of a fusion center (FC), are considered in this thesis. When an FC is present, the WSN is termed hierarchical in the sense that sensors act as information-gathering devices for the FC, which is in charge of processing this information. A hierarchical WSN used to estimate parameters of a given plant is shown in Fig. 1.1. Sensor S_k collects information about the plant and encodes it in the message m(k) that it communicates to the FC. The FC collects the information from the different sensors and processes it to estimate the plant parameters of interest. This topology may also include a feedback channel from the FC to the sensors, in which at time slot n messages f(n) are broadcast to the sensors.

In ad hoc WSNs, the network itself is responsible for processing the collected information, and to this end sensors communicate with each other through the shared wireless medium; see Fig. 1.2. We assume that the message m(k) sent by sensor S_k is received by all other sensors, using a forwarding mechanism the details of which go beyond the scope

of the present thesis.

Figure 1.2: Ad hoc WSN: the network itself is in charge of estimation.

Though not explicitly addressed in this thesis, hybrid models in which some low-level processing is performed by the network and high-level processing by the FC are also common in practice.

An important distinction between ad hoc and hierarchical architectures pertains to the amount of information available to each sensor. In ad hoc WSNs, the messages m(k) percolate through all sensors. Consequently, in addition to the information collected locally, the sensors have available plant observations collected by other sensors. In hierarchical WSNs, on the other hand, the information is sent to the FC and each sensor has available only the information it collects locally. A third level of information availability arises in hierarchical WSNs with a feedback channel, in which the sensors receive plant information via feedback from the FC. In this work, we assume that the messages m(k) and f(n) are correctly received by the sensors or the FC, respectively, which requires deployment of sufficiently powerful error control codes.

1.3 Some motivating applications

This section presents two motivating applications that illustrate the type of problems to which the results in this thesis are applicable. It also serves as a prelude to the results that will be derived in ensuing chapters.

Figure 1.3: The wind v incises on a sensor capable of measuring the normal component of v.

1.3.1 Estimating a vector wind flow

Consider the problem of estimating a wind flow (velocity and direction) using incidence observations. With reference to Fig. 1.3, consider the flow vector v := (v_0, v_1)^T and a sensor positioned at an angle φ(n) with respect to a known reference direction. The so-called incidence observations {x(n)}_{n=0}^{N-1} measure the component of the flow normal to the corresponding sensor,

    x(n) := ⟨v, n⟩ + w(n) = v_0 sin[φ(n)] + v_1 cos[φ(n)] + w(n),    (1.1)

where ⟨·,·⟩ denotes inner product, w(n) is zero-mean AWGN with variance E[w²(n)] := σ², and the equation holds for n = 0, 1, ..., N−1.

It is not difficult to find the MLE v̂ of v using {x(n)}_{n=0}^{N-1}. More important, though, it is possible to find the Fisher Information Matrix (FIM), which can be used to approximate the performance of this MLE. The FIM for this problem is given by

    I = (1/σ²) Σ_{n=0}^{N-1} [ sin²[φ(n)]           sin[φ(n)]cos[φ(n)] ]
                             [ sin[φ(n)]cos[φ(n)]   cos²[φ(n)]         ]    (1.2)

Assuming that the sensors are randomly deployed, the angles φ(n) will be uniformly distributed, φ(n) ~ U[−π, π], and we can compute the average

    Ī = (1/σ²) [ N/2    0  ]
               [  0    N/2 ]    (1.3)

If the number of sensors is large, we can invoke the law of large numbers to claim that I ≈ Ī. Using the CRLB and the fact that the MLE variance approaches the CRLB as N grows large, the estimation variance is approximately given by

    var(v̂_0) = var(v̂_1) = 2σ²/N.    (1.4)

The problem is, of course, that computing v̂ requires transmitting the observations {x(n)}_{n=0}^{N-1}, incurring a significant cost in terms of power and bandwidth. In Chapter 3 we will develop MLEs v̌ based on the transmission of binary observations, defined as the indicator function of x(n) being greater than a certain threshold τ,

    b(n) = 1{x(n) ≥ τ}.    (1.5)

Interestingly, the variance for the estimation of v given the binary observations {b(n)}_{n=0}^{N-1} will be shown to be

    var(v̌_0) = var(v̌_1) = 2ρ²/N,    (1.6)

where the equivalent noise power can be as small as ρ² = (π/2)σ². This implies that quantizing to a single bit per observation entails an increase in estimation variance by a factor as small as π/2. Furthermore, we will show that v̌ can be obtained as the maximum of a concave function; thus, quantizing to a single bit per observation entails a significant increase neither in the estimation variance nor in the complexity of the estimator. Fig. 1.4 depicts the bound (1.6), as well as the simulated variances var(v̌_0) and var(v̌_1), in comparison with the clairvoyant MLE variances var(v̂_0) and var(v̂_1), corroborating that v̌ and v̂ are indistinguishable for practical purposes.

1.3.2 Target tracking with SOI-EKF

Target tracking based on distance-only measurements is a typical problem in bandwidth-constrained distributed estimation with WSNs (see e.g., [2, 11]) for which a variation of the

Figure 1.4: Average variance for the components of v. The empirical variances as well as the bound (1.6) are compared with the MLE based on the analog observations (v = (1, 1), σ = 1).

SOI-KF that will be developed in Chapter 4 appears to be particularly attractive. Consider K sensors randomly and uniformly deployed in a square region of 2L × 2L meters, and suppose that the sensor positions {x_k}_{k=1}^{K} are known. The WSN is deployed to track the position x(n) := [x_1(n), x_2(n)]^T of a target, whose state model accounts for x(n) and the velocity v(n) := [v_1(n), v_2(n)]^T, but not for the acceleration, which is modelled as a random quantity. Under these assumptions, we obtain the state equation [14]

    [ x(n) ]   [ I_2   T_s I_2 ] [ x(n−1) ]   [ (T_s²/2) I_2 ]
    [ v(n) ] = [  0      I_2   ] [ v(n−1) ] + [   T_s I_2    ] u(n),    (1.7)

where T_s is the sampling period, I_2 is the 2 × 2 identity matrix, and the random vector u(n) ∈ R² is zero-mean white Gaussian; i.e., p(u(n)) = N(u(n); 0, σ_u² I).

Figure 1.5: Target tracking with the EKF and SOI-EKF yields almost identical estimates. The scheduling algorithm works in cycles of duration T: at the beginning of a cycle we schedule the sensor S_k closest to the estimate x̂(n|n−1), then the second closest, and so on until the cycle is complete (T = 4, T_s = 1 s, L = 2 km, K = 100, α = 3.4, σ_u = 0.2 m, σ_v = 1).

The sensors gather information about their distance to the target by measuring the received power of a pilot signal following the path-loss model

    y_k(n) = α log ||x(n) − x_k|| + v(n),    (1.8)

with α ≥ 2 a constant, ||x(n) − x_k|| denoting the distance between the target and S_k, and v(n) the observation noise with distribution p(v(n)) = N(v(n); 0, σ_v²). Following an extended (E)KF approach, we linearize (1.8) in a neighborhood of x̂(n|n−1) to obtain

    y_k(n) ≈ y_k⁰(n) + h^T(n) x(n) + v(n),    (1.9)

where h(n) := α[x̂(n|n−1) − x_k]/||x̂(n|n−1) − x_k||² and y_k⁰(n) is a known function of α, x̂(n|n−1), and x_k. As is the case in state estimation problems, we are interested in finding the minimum mean squared error (MMSE) estimate, defined as

    x̂(n|n) := E[x(n) | y_{k,0:n}],    (1.10)

where y_{k,0:n} = [y_k(0), ..., y_k(n)]^T.

Figure 1.6: Standard deviations of the estimates in Fig. 1.5 are on the order of 5-10 m for both filters.

Alas, as with the deterministic parameter estimation problem in Section 1.3.1, this entails the high communication cost of transmitting the analog-amplitude observations y_k(n). Reducing this communication cost is addressed in Chapter 4 with the introduction of the SOI-(E)KF, which is based on the single-bit transmission of the sign of the difference between the actual observation and its predicted value,

    b_k(n) = sign[y_k(n) − ŷ_k(n|n−1)] := { +1, if y_k(n) ≥ ŷ_k(n|n−1)
                                            −1, if y_k(n) < ŷ_k(n|n−1) },    (1.11)

where ŷ_k(n|n−1) := E[y_k(n) | b_{k,0:n−1}] is the predicted observation, y_k(n) − ŷ_k(n|n−1) is the well-known innovation sequence, and b_{k,0:n} = [b_k(0), ..., b_k(n)]^T is the binary data. The counterpart of (1.10) for the estimation of x(n) based on the binary observations in (1.11) is

    x̌(n|n) := E[x(n) | b_{k,0:n}].    (1.12)

Just as an approximation to x̂(n|n) in (1.10) can be found using the EKF, an approximation to x̌(n|n) in (1.12) can be found using the SOI-EKF. Quite surprisingly, we will show in Chapter 4 that the EKF and SOI-EKF have similar complexity and performance.
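To make the ingredients above concrete, the following sketch builds the state model (1.7), generates one path-loss observation of the form (1.8), and forms the sign-of-innovations bit (1.11). All numerical values are illustrative (this is not the experiment of Figs. 1.5 and 1.6), the sign convention for the path-loss measurement is an assumption (received power falling with distance), and the prediction ŷ_k(n|n−1) is replaced by a noiseless stand-in rather than the filter's actual conditional expectation.

```python
import numpy as np

rng = np.random.default_rng(1)
Ts, alpha, sigma_u, sigma_v = 1.0, 3.4, 0.2, 1.0  # illustrative values

# State model (1.7): position/velocity with random acceleration input.
A = np.block([[np.eye(2), Ts * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])
G = np.vstack([Ts**2 / 2 * np.eye(2), Ts * np.eye(2)])

s = np.array([0.0, 0.0, 1.0, 0.5])                  # [x1, x2, v1, v2]
s = A @ s + G @ (sigma_u * rng.standard_normal(2))  # one state update

# Path-loss observation (1.8) at a sensor located at x_k
# (sign convention assumed: received power decreases with distance).
x_k = np.array([100.0, -50.0])
dist = np.linalg.norm(s[:2] - x_k)
y = -alpha * np.log(dist) + sigma_v * rng.standard_normal()

# Sign-of-innovations bit (1.11): the only quantity the sensor transmits.
y_pred = -alpha * np.log(dist)   # stand-in for E[y_k(n) | past bits]
b = 1 if y >= y_pred else -1
print(A.shape, b)
```

The full SOI-EKF of Chapter 4 would then update x̌(n|n) from the bit b alone; the point of the sketch is only the one-bit message formed at the sensor.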

To illustrate this result, we compare the EKF and the SOI-EKF for the tracking problem described in this section. The comparison is depicted in Figs. 1.5 and 1.6, where we see that the SOI-EKF succeeds in tracking the target with an accuracy of less than 10 meters (m). While this accuracy is just a result of the specific parameters of the experiment, the important point is that the clairvoyant EKF and the SOI-EKF yield almost identical performance, even though the former relies on analog-amplitude observations and the SOI-EKF on the transmission of a single bit per sensor. Moreover, as we will see in Chapter 4, the complexity of the two algorithms is almost identical.

1.4 The thesis in context

Statistical inference is usually divided between detection problems, in which we have to decide among a set of hypotheses, and estimation problems, in which we estimate the value of a certain parameter. Not surprisingly, these two approaches have also been considered in the context of WSNs. The development of distributed detection algorithms is by now a well-understood problem (see e.g., [43, 44] and references therein), but the field of distributed estimation addressed in this thesis has not yet received as much attention. Without explicit mention of WSNs, various design and implementation issues of distributed estimation were addressed in the early literature [3, 13, 20]. In the context of WSNs, a number of works address distributed detection from the perspective of exploiting spatial correlation to reduce transmission requirements [4, 5, 12, 27, 31, 33]. These works, however, do not address the intertwining between quantization and estimation.
More related to the present thesis, the design of quantizers in different scenarios was studied in [1,28,29], where the concept of information loss was defined as the relative increase in estimation variance when using quantized observations with respect to the equivalent estimation problem based on analog-amplitude observations. Interestingly, these works showed that for some simple problems quantization to a single bit per sensor leads to minimal loss in performance, a result from where we start building on in Chapter 2. A different perspective introduced in [21 23] is to take into account the challenge of building suitable noise models for WSNs. Since this may be difficult in practice, universal estimators

that work irrespective of the noise distribution were introduced in these works and shown to have an information loss independent of the network size.

The problem of distributed state estimation of stochastic processes with quantized observations has received attention in the non-linear filtering community. While the discontinuous non-linearity created by quantization precludes application of the extended (E)KF, the problem can be handled with more powerful techniques such as the unscented (U)KF [15] or the Particle Filter (PF) [10, 18]. These directions have been pursued in the context of filtering [8, 45] and target tracking with a WSN [2, 11].

Results in the present thesis have appeared in [34-40]. Fundamental properties of the problem, comprising the material covered in Chapter 2, are discussed in [37, 40]. The pragmatic signal models considered in Chapter 3 and the corresponding results appeared in [34, 36, 38]. A precursor to the SOI-KF is introduced in [35], whereas the SOI-KF discussed in Chapter 4 was introduced in [39].

The recent interest in WSNs has led to a number of special issues that are a good starting point for the uninitiated reader. Fundamental performance limits are analyzed in [41]; sensor collaboration is argued in [19] to be the reason why WSNs can perform complex tasks even though they consist of inexpensive devices; and the field of distributed signal processing for WSNs, the area to which this thesis belongs, is surveyed in [24].

Chapter 2

Mean-location in additive white Gaussian noise

2.1 Introduction

Our focus in the present chapter is on understanding the fundamental properties of bandwidth-constrained distributed estimation by looking at the problem of mean-location parameter estimation in additive white Gaussian noise (AWGN). We seek maximum likelihood estimators (MLEs) and benchmark their variances against the Cramer-Rao lower bound (CRLB) that, at least asymptotically, is achieved by the MLE. We will show that the deciding factor in the choice of the estimator is the signal-to-noise ratio (SNR), defined here as the square of the parameter's dynamic range over the observation noise variance. Our approach is motivated by the observation that an estimator based on the transmission of a single binary observation per sensor can have variance as small as π/2 times that of the clairvoyant sample mean estimator (Section 2.3). This result was first derived in [28] and is included here as a motivational starting point. Noting that this excellent performance can only be achieved under careful design choices, we introduce a class of estimators that minimize the average variance over a given weight function, establishing that in the low-to-medium SNR range this class of MLEs performs close to the clairvoyant estimator's variance (Section 2.3). We then turn our attention to the high SNR regime, and show that

a quantization step close to the noise standard deviation is nearly optimal in the sense of minimizing a properly defined per-bit CRLB (Section 2.5), establishing a second result on the optimal number of bits per sensor to be transmitted. The sample mean estimator based on quantized observations is subsequently analyzed to show that at high SNR even a simple-minded estimator requires transmission of only a small number of extra bits relative to the MLE. This allows us to establish analytically that bandwidth-constrained distributed estimation is not a relevant problem in high SNR scenarios. For such cases, we advocate the sample mean estimator based on the quantized observations for its low complexity (Section 2.6). The last conclusion of the present chapter is that the numerical maximization required by our MLE can be posed as a convex optimization problem, thus ensuring convergence of, e.g., Newton-type iterative algorithms. We finally present numerical results in the last section of the chapter.

2.2 Problem statement

This chapter considers the problem of estimating a deterministic scalar parameter θ in the presence of zero-mean AWGN,

x(n) = θ + w(n),  n = 0, 1, ..., N−1,  (2.1)

where w(n) ~ N(0, σ²) and n is the sensor index. Throughout, we will use p(w) := (1/(√(2π)σ)) exp[−w²/(2σ²)] to denote the noise probability density function (pdf). If all the observations {x(n)}_{n=0}^{N−1} were available, the MLE of θ would be the sample mean estimator, x̄ = (1/N) Σ_{n=0}^{N−1} x(n). Rightfully, this can be regarded as a clairvoyant estimator for the bandwidth-constrained problem, whose variance is known to be [16, p. 30]

var(x̄) = σ²/N.  (2.2)

Due to bandwidth limitations, however, the observations x(n) have to be quantized and estimation can only be based on these quantized values. To this end, we will henceforth think of quantization as the construction of a set of indicator variables (that will be referred

to as binary observations)

b_k(n) = 1{x(n) ∈ (τ_k, +∞)},  k ∈ Z,  (2.3)

where τ_k is the threshold defining b_k(n), Z denotes the set of integers, and k is used to index the set of binary observations constructed from the observation x(n). The bandwidth constraint manifests itself in dictating that estimation of θ be based on the binary observations {b_k(n), k ∈ Z}_{n=0}^{N−1}. The goal of this chapter is twofold: i) develop the MLE for estimating θ given a set of binary observations; and ii) study the associated CRLB, a bound that is achieved by the MLE as N → ∞. Instrumental to the ensuing derivations is the fact that each b_k(n) in (2.3) is a Bernoulli random variable with parameter

q_k(θ) := Pr{b_k(n) = 1} = F(τ_k − θ),  k ∈ Z,  (2.4)

where F(x) := (1/(√(2π)σ)) ∫_x^{+∞} exp(−u²/(2σ²)) du is the complementary cumulative distribution function (CDF) of w(n). The problem under consideration bears similarities to, and differences from, quantization. On the one hand, for a fixed n the set of binary observations {b_k(n), k ∈ Z} uniquely specifies the quantized value of x(n) to one of the pre-specified levels {τ_k, k ∈ Z}. On the other hand, different from quantization, in which the goal is to reconstruct x(n) (and the optimum solution is known to be given by Lloyd's quantizer [32, p. 108]), our goal here is to estimate θ.

2.3 MLE based on binary observations: common thresholds

Let us consider the most stringent bandwidth constraint, requiring sensors to transmit one bit per observation x(n). As a simple first approach, let every sensor use the same threshold τ_c to form

b(n) = 1{x(n) ∈ (τ_c, +∞)},  n = 0, 1, ..., N−1.  (2.5)

Dropping the subscript k, we let b := [b(0), ..., b(N−1)]^T, and denote as q(θ) the parameter of these Bernoulli variables. We are now ready to derive the MLE and the pertinent CRLB.
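As an illustrative sanity check on (2.4), not part of the original development, the following sketch compares the model probability q(θ) = F(τ_c − θ) against the empirical frequency of b(n) = 1 in simulated AWGN; the function names and parameter values are our own arbitrary choices.

```python
import math
import random

def F(x, sigma):
    """Complementary Gaussian CDF: Pr{w > x} for w ~ N(0, sigma^2)."""
    return 0.5 * math.erfc(x / (sigma * math.sqrt(2.0)))

def empirical_q(theta, tau_c, sigma, N, seed=0):
    """Fraction of sensors whose observation x(n) = theta + w(n) exceeds tau_c."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(N) if theta + rng.gauss(0.0, sigma) > tau_c)
    return hits / N

theta, tau_c, sigma = 1.0, 0.5, 1.0
q_model = F(tau_c - theta, sigma)              # q(theta) = F(tau_c - theta)
q_mc = empirical_q(theta, tau_c, sigma, 200_000)
print(f"model q = {q_model:.4f}, Monte Carlo q = {q_mc:.4f}")
```

The empirical frequency matches F(τ_c − θ) to within Monte Carlo accuracy, confirming that each b(n) is indeed Bernoulli with the stated parameter.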

Proposition 2.1 [28] The MLE ˆθ based on the vector of binary observations b is given by

ˆθ = τ_c − F^{−1}( (1/N) Σ_{n=0}^{N−1} b(n) ).  (2.6)

Furthermore, the CRLB for any unbiased estimator ˆθ based on b is given by

var(ˆθ) ≥ (1/N) [ p²(τ_c − θ) / ( F(τ_c − θ)[1 − F(τ_c − θ)] ) ]^{−1} := B(θ).  (2.7)

Proof: Due to the noise independence, the pdf of b is p(b; θ) = Π_{n=0}^{N−1} [q(θ)]^{b(n)} [1 − q(θ)]^{1−b(n)}. Taking logarithms yields the log-likelihood

L(θ) = Σ_{n=0}^{N−1} b(n) ln q(θ) + (1 − b(n)) ln(1 − q(θ)),  (2.8)

whose second derivative with respect to θ is

L̈(θ) = −Σ_{n=0}^{N−1} b(n) [ p²(τ_c − θ)/q²(θ) + ṗ(τ_c − θ)/q(θ) ] − Σ_{n=0}^{N−1} [1 − b(n)] [ p²(τ_c − θ)/[1 − q(θ)]² − ṗ(τ_c − θ)/(1 − q(θ)) ],  (2.9)

for which we used that ∂q(θ)/∂θ = p(τ_c − θ), and introduced the definition ṗ(θ) := ∂p(θ)/∂θ. Since for a Bernoulli variable E[b(n)] = q(θ), the CRLB in (2.7) follows after taking the negative inverse of E[L̈(θ)]. The MLE can be found either by maximizing (2.8), or simply after recalling that the MLE of q(θ) is

q̂ = (1/N) Σ_{n=0}^{N−1} b(n),  (2.10)

and using the invariance of the MLE [c.f. (2.4) and (2.10)].

Proposition 2.1 asserts that θ can be consistently estimated from a single binary observation per sensor, with variance as small as B(θ). Minimizing the latter over θ reveals that the minimum B_min is achieved when τ_c = θ and is given by

B_min = 2πσ²/(4N) ≈ 1.57 σ²/N.  (2.11)
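The π/2 penalty of (2.11) is easy to observe numerically. The following Monte Carlo sketch (ours, not from the thesis) implements (2.6) with the threshold placed optimally at τ_c = θ and compares N·var(ˆθ) with π/2 ≈ 1.57; the bisection inverse of F and all parameter values are illustrative choices.

```python
import math
import random

def inv_F(q):
    """Invert the complementary standard-normal CDF by bisection."""
    lo, hi = -8.0, 8.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(mid / math.sqrt(2.0)) > q:
            lo = mid        # F is decreasing: F(mid) > q means the root lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

def one_bit_mle(theta, tau_c, sigma, N, rng):
    """Implement (2.6): theta_hat = tau_c - sigma * F^{-1}(sample mean of b(n))."""
    b_mean = sum(1 for _ in range(N) if theta + rng.gauss(0.0, sigma) > tau_c) / N
    b_mean = min(max(b_mean, 1.0 / N), 1.0 - 1.0 / N)   # keep F^{-1} finite
    return tau_c - sigma * inv_F(b_mean)

rng = random.Random(1)
theta = sigma = 1.0
N, trials = 400, 2000
est = [one_bit_mle(theta, theta, sigma, N, rng) for _ in range(trials)]  # tau_c = theta
var = sum((e - theta) ** 2 for e in est) / trials
print(f"empirical N*var = {N * var:.2f}  vs  pi/2 = {math.pi / 2:.2f}")
```

With the threshold at the true parameter, the empirical N·var(ˆθ) settles near π/2, i.e., only about 57% above the clairvoyant σ²/N of (2.2).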

Figure 2.1: CRLB and Chernoff bound in (2.13) as a function of the distance between τ_c and θ measured in AWGN standard deviation (σ) units.

In words, if we place τ_c optimally, the variance increases only by a factor of π/2 with respect to the clairvoyant estimator x̄ that relies on unquantized observations. Using the (tight) Chernoff bound for the complementary CDF,

F(τ_c − θ)[1 − F(τ_c − θ)] ≤ (1/4) e^{−(τ_c − θ)²/(2σ²)},  (2.12)

a simple bound on B(θ) can be obtained:

B(θ) ≤ (πσ²/(2N)) e^{+(1/2)[(τ_c − θ)/σ]²}.  (2.13)

Fig. 2.1 depicts B(θ) and its Chernoff bound, from where it becomes apparent that for |τ_c − θ|/σ ≤ 1 the increase in variance relative to (2.2) will be around 2 [c.f. (2.7) and (2.13)]. Roughly speaking, to achieve a variance close to var(x̄) in (2.2), it suffices to place τ_c within σ of θ. Fig. 2.2 shows a simulation where we have chosen τ_c = θ + σ, to verify that the penalty is, indeed, small. Accounting for the dependence of var(ˆθ) on τ_c, σ and the unknown θ, one can envision an iterative algorithm in which the threshold is adjusted over time. Call τ_c^{(j)} the threshold used at time j, and ˆθ^{(j)} the corresponding estimate obtained as in (2.6).

Figure 2.2: The MLE in (2.6) based on binary observations performs close to the clairvoyant sample mean estimator when θ is close to the threshold defining the binary observation (σ = 1, τ_c = 0, and θ = 1).

Given this estimate, we can now set τ_c^{(j+1)} = ˆθ^{(j)}, so that subsequent estimates benefit not only from the increased number of observations but also from improved binary observations. Such an iterative algorithm fits rather nicely with, e.g., a target tracking application.

2.4 MLE based on binary observations: non-identical thresholds

The variance of the estimator introduced in Section 2.3 will be close to var(x̄) whenever the actual parameter θ is close to the threshold τ_c in standard deviation (σ) units. This can be guaranteed when the possible values of θ are restricted to an interval of size comparable to σ; in other words, when the dynamic range of θ is on the order of σ. When the dynamic range of θ is large relative to σ, we pursue a different approach using binary observations b_k(n) generated from different regions (τ_k, +∞), in order to ensure that there will always be a threshold τ_k close to the true parameter. Consider, for each n, the set of binary

measurements defined by (2.3), and to maintain the bandwidth constraint, let each sensor transmit only one out of this set of binary observations. Let N_k be the total number of sensors transmitting binary observations based on the threshold τ_k, and define ρ_k := N_k/N as the corresponding fraction of sensors. We further suppose that the index k_n chosen by sensor n is known at the destination (the fusion center or peer sensors in an ad-hoc WSN). Algorithmically, we can summarize our approach in three steps:

[S1] Define a set of thresholds τ = {τ_k, k ∈ Z} and associated frequencies ρ = {ρ_k, k ∈ Z}.

[S2] Assign the index k_n to sensor n; i.e., sensor n generates the binary observation b_{k_n}(n) using the threshold τ_{k_n}.

[S3] Define b := [b_{k_0}(0), ..., b_{k_{N−1}}(N−1)]^T. Transmit the corresponding binary observations to find the MLE as we describe next.

Similar to (2.8), the log-likelihood function is given by

L(θ) = Σ_{n=0}^{N−1} b_{k_n}(n) ln q_{k_n}(θ) + (1 − b_{k_n}(n)) ln(1 − q_{k_n}(θ)),  (2.14)

from where we can define the MLE of θ given {b_{k_n}(n)}_{n=0}^{N−1},

ˆθ = arg max_θ L(θ).  (2.15)

As ˆθ in (2.15) cannot be found in closed form, we resort to a numerical search, such as Newton's algorithm, which is based on the iteration

ˆθ^{(i+1)} = ˆθ^{(i)} − L̇(ˆθ^{(i)}) / L̈(ˆθ^{(i)}),  (2.16)

where L̇(θ) := ∂L(θ)/∂θ and L̈(θ) := ∂²L(θ)/∂θ² are the first and second derivatives of the log-likelihood function, which we compute explicitly in (2.58) and (2.59) of Appendix A. Albeit numerically found, the MLE in (2.15) is guaranteed to converge to the global optimum of L(θ) thanks to the following property:

Proposition 2.2 The MLE problem (2.14)-(2.15) is convex in θ.
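A minimal sketch (our illustration, not code from the thesis) of the Newton iteration (2.16) applied to the log-likelihood (2.14): the gradient is computed in closed form using ∂q_k(θ)/∂θ = p(τ_k − θ), while the second derivative is approximated by a central difference of the gradient rather than the exact expressions of Appendix A; the thresholds, sensor count, and tolerances are arbitrary choices.

```python
import math
import random

def p(x, sigma):
    """Gaussian noise pdf."""
    return math.exp(-x * x / (2 * sigma * sigma)) / (math.sqrt(2 * math.pi) * sigma)

def F(x, sigma):
    """Complementary noise CDF, Pr{w > x}."""
    return 0.5 * math.erfc(x / (sigma * math.sqrt(2.0)))

def newton_mle(bits, taus, sigma, theta0=0.0, iters=50, tol=1e-9):
    """Maximize the concave log-likelihood (2.14) via the iteration (2.16)."""
    def grad(theta):  # dL/dtheta, using dq_k/dtheta = p(tau_k - theta)
        g = 0.0
        for b, tau in zip(bits, taus):
            q = min(max(F(tau - theta, sigma), 1e-12), 1.0 - 1e-12)
            g += (b / q - (1 - b) / (1 - q)) * p(tau - theta, sigma)
        return g

    theta, eps = theta0, 1e-5
    for _ in range(iters):
        g = grad(theta)
        h = (grad(theta + eps) - grad(theta - eps)) / (2 * eps)  # ~ d2L/dtheta2
        if h >= 0.0:          # cannot happen for concave L, except numerically
            break
        step = g / h
        theta -= step
        if abs(step) < tol:
            break
    return theta

# illustrative run: 500 sensors with thresholds spread over the dynamic range
rng = random.Random(2)
theta_true, sigma = 0.7, 1.0
taus = [-2.0 + 4.0 * n / 499 for n in range(500)]
bits = [1 if theta_true + rng.gauss(0.0, sigma) > t else 0 for t in taus]
theta_hat = newton_mle(bits, taus, sigma)
print(f"theta_hat = {theta_hat:.3f} (true value 0.7)")
```

Because L(θ) is concave (Proposition 2.2), the iteration converges to the global maximizer from any initialization; no multi-start heuristics are needed.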

Proof: The Gaussian pdf p(x) is log-concave [6, p. 104]; furthermore, the regions R_k := (τ_k, +∞) and their complements R_k^{(c)} are half-lines, and accordingly convex sets. To complete the proof, just note that q_k(θ) and 1 − q_k(θ) are integrals of a log-concave function (p(x)) over convex sets (R_k and R_k^{(c)}, respectively); thus, they are log-concave and their logarithms are concave. Given that summation preserves concavity, we infer that L(θ) is a concave function of θ.

Although numerical MLE problems are typically difficult to solve, with local minima requiring complicated search algorithms, this is not the case here. The concavity of L(θ) guarantees convergence of the Newton iteration (2.16) to the global optimum, regardless of initialization. The CRLB for this problem follows from the expected value of L̈(θ) and is stated in the following proposition.

Proposition 2.3 The CRLB for any unbiased estimator ˆθ based on b is

B(θ, τ, ρ) = (1/N) [ Σ_k ρ_k p²(τ_k − θ) / ( F(τ_k − θ)[1 − F(τ_k − θ)] ) ]^{−1} := (1/N) S^{−1}(θ, τ, ρ).  (2.17)

Proof: See Appendix A.

Since the CRLB in (2.17) depends on the design parameters (τ, ρ), Proposition 2.3 reveals that using non-identical thresholds across sensors provides an additional degree of freedom. This is precisely what we were looking for in order to overcome the limitations of the estimator introduced in Section 2.3. In the ensuing subsection, we will delve into the selection of (τ, ρ).

Selecting the parameters (τ, ρ)

Since the CRLB depends also on θ, the selection of (τ, ρ) depends not only on the estimator variance for a specific value of θ, but also on how confident we are that the actual parameter will take on this value. To incorporate this confidence we introduce a weighting function, W(θ), which accounts for the relative importance of different values of θ. For instance, if

we know a priori that θ ∈ (Θ₁, Θ₂), we can choose W(θ) = u(θ − Θ₁) − u(θ − Θ₂), where u(·) is the unit step function. Given this weighting function, a reasonable performance indicator is the weighted variance,

C_W := ∫_{−∞}^{+∞} W(θ) var(ˆθ) dθ.  (2.18)

Although we do not have an expression for the variance of the MLE in (2.15), but only the CRLB (2.17), we know that the MLE will approach this bound as N → ∞. Consequently, selecting the best possible (τ, ρ) for a prescribed W(θ) amounts to finding the set (τ, ρ) that minimizes the weighted asymptotic variance given by the weighted CRLB [c.f. (2.17) and (2.18)],

lim_{N→∞} N C_W = N B_W(τ, ρ) := N ∫_{−∞}^{+∞} W(θ) B(θ, τ, ρ) dθ = ∫_{−∞}^{+∞} W(θ) / S(θ, τ, ρ) dθ.  (2.19)

Thus, the optimum set (τ*, ρ*) should be selected as the solution to the problem

(τ*, ρ*) = arg min_{(τ,ρ)} ∫_{−∞}^{+∞} W(θ) / S(θ, τ, ρ) dθ,  s.t. Σ_k ρ_k = 1, ρ_k ≥ 0 ∀k.  (2.20)

Solving (2.20) is complex, but through a proper relaxation we have been able to obtain the following insightful theorem.

Theorem 2.1 Assume that ∫_{−∞}^{+∞} W^{1/2}(θ) dθ < ∞. Then, the weighted CRLB of any estimator ˆθ based on binary observations must satisfy

B_W(τ, ρ) ≥ B_min := (1/N) [ ∫_{−∞}^{+∞} W^{1/2}(θ) dθ ]² / ∫_{−∞}^{+∞} p²(u) / ( F(u)[1 − F(u)] ) du.  (2.21)

Furthermore, the bound is attained if and only if there exists a set (τ, ρ) such that

S(θ, τ, ρ) = K W^{1/2}(θ),  K := [ ∫_{−∞}^{+∞} p²(u) / ( F(u)[1 − F(u)] ) du ] / [ ∫_{−∞}^{+∞} W^{1/2}(θ) dθ ].  (2.22)
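As an illustrative numerical check of Theorem 2.1 (ours, with arbitrary design choices: σ = 1, a uniform weight on (−2, 2) normalized to integrate to one, and equally weighted thresholds on a uniform grid), the sketch below evaluates the weighted CRLB N·B_W from (2.17) and (2.19) by quadrature and confirms that it stays above the bound N·B_min of (2.21).

```python
import math

def p(x):   # standard Gaussian pdf (sigma = 1)
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def F(x):   # complementary standard Gaussian CDF
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def h(u):   # Fisher-information kernel p^2 / (F (1 - F)) appearing in (2.17)
    return p(u) ** 2 / (F(u) * (1.0 - F(u)))

T1, T2 = -2.0, 2.0            # support of the normalized uniform weight
W = 1.0 / (T2 - T1)
K = 41                        # equally likely thresholds on a uniform grid
taus = [T1 + (T2 - T1) * k / (K - 1) for k in range(K)]

def S(theta):                 # S(theta, tau, rho) of (2.17) with rho_k = 1/K
    return sum(h(t - theta) for t in taus) / K

# N * B_W = integral over (T1, T2) of W / S, cf. (2.19) (trapezoidal rule)
M = 400
grid = [T1 + (T2 - T1) * j / M for j in range(M + 1)]
vals = [W / S(th) for th in grid]
NB_W = (T2 - T1) / M * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# N * B_min of (2.21): [int W^(1/2)]^2 / int h  (Riemann sum for the denominator)
num = ((T2 - T1) * math.sqrt(W)) ** 2
den = 0.01 * sum(h(u / 100.0) for u in range(-600, 601))
NB_min = num / den
print(f"N*B_W = {NB_W:.2f} >= N*B_min = {NB_min:.2f}")
```

For this particular uniform-grid (τ, ρ) the weighted CRLB exceeds the bound by a moderate factor, illustrating that the grid design, while not attaining (2.22) exactly, is not far from the limit.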

Proof: See Appendix B.

Note that the claims of Theorem 2.1 are reminiscent of the Cramer-Rao theorem, in the sense that (2.21) establishes a bound and (2.22) offers a condition for this bound to be attained. To gain intuition on the performance limit dictated by Theorem 2.1, let us specialize (2.21) to a Gaussian-shaped W(θ) with variance σ_θ². In this case, the numerator in (2.21) becomes

[ ∫_{−∞}^{+∞} W^{1/2}(θ) dθ ]² = 2√(2π) σ_θ.  (2.23)

The denominator in (2.21), which depends on the noise distribution, cannot be integrated in closed form, but we can resort to the following numerical approximation:

∫_{−∞}^{+∞} p²(u) / ( F(u)[1 − F(u)] ) du ≈ 1.81/σ.  (2.24)

Substituting (2.24) and (2.23) into (2.21), we finally obtain

B^{GG}_min ≈ 2.77 σ_θ σ / N = 2.77 (σ_θ/σ) (σ²/N).  (2.25)

Perhaps as we should have expected, the best possible weighted variance for any estimator based on a single binary observation per sensor can only be close to the clairvoyant variance in (2.2) when σ_θ ≲ σ, a condition valid in low to medium SNR scenarios. When the SNR is high (σ_θ ≫ σ), the performance gap between (2.2) and (2.25) is significant and a different approach should be pursued. A similar derivation leads to an analogous expression for a uniform weight function, W(θ) = u(θ − Θ₁) − u(θ − Θ₂):

B^{GU}_min ≈ 0.55 ((Θ₂ − Θ₁)/σ) (σ²/N).  (2.26)

Eq. (2.26) similarly allows us to infer that the variance of any estimator based on a single binary observation per sensor can only be close to the clairvoyant variance in (2.2) when Θ₂ − Θ₁ ≲ σ, which corresponds to a low to medium SNR. Regarding the achievability of the bound in (2.21), note that although we cannot ensure that there always exists a set (τ, ρ) such that S(θ, τ, ρ) = K W^{1/2}(θ), we can adopt as a
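The constants in (2.23)-(2.25) are straightforward to reproduce numerically. The following sketch (ours, for verification only, with σ_θ = 1) checks [∫ W^{1/2}(θ) dθ]² = 2√(2π) σ_θ ≈ 5.01 σ_θ for the Gaussian weight and recovers the coefficient 2.77 ≈ 2√(2π)/1.81 of (2.25).

```python
import math

sigma_theta = 1.0

def w_half(th):
    """Square root of a zero-mean Gaussian weight W with variance sigma_theta^2."""
    return (2.0 * math.pi * sigma_theta ** 2) ** -0.25 * math.exp(-th * th / (4.0 * sigma_theta ** 2))

# numerator of (2.21): [int W^(1/2)]^2, Riemann sum over [-10, 10]
I = 0.01 * sum(w_half(t / 100.0) for t in range(-1000, 1001))
closed_form = 2.0 * math.sqrt(2.0 * math.pi) * sigma_theta   # eq. (2.23)
coeff = closed_form / 1.81                                   # with (2.24), yields (2.25)
print(f"numeric = {I ** 2:.3f}, closed form = {closed_form:.3f}, coefficient = {coeff:.2f}")
```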


More information

MATH4427 Notebook 2 Spring 2016. 2 MATH4427 Notebook 2 3. 2.1 Definitions and Examples... 3. 2.2 Performance Measures for Estimators...

MATH4427 Notebook 2 Spring 2016. 2 MATH4427 Notebook 2 3. 2.1 Definitions and Examples... 3. 2.2 Performance Measures for Estimators... MATH4427 Notebook 2 Spring 2016 prepared by Professor Jenny Baglivo c Copyright 2009-2016 by Jenny A. Baglivo. All Rights Reserved. Contents 2 MATH4427 Notebook 2 3 2.1 Definitions and Examples...................................

More information

Single-Period Balancing of Pay Per-Click and Pay-Per-View Online Display Advertisements

Single-Period Balancing of Pay Per-Click and Pay-Per-View Online Display Advertisements Single-Period Balancing of Pay Per-Click and Pay-Per-View Online Display Advertisements Changhyun Kwon Department of Industrial and Systems Engineering University at Buffalo, the State University of New

More information

Maximum Likelihood Estimation of ADC Parameters from Sine Wave Test Data. László Balogh, Balázs Fodor, Attila Sárhegyi, and István Kollár

Maximum Likelihood Estimation of ADC Parameters from Sine Wave Test Data. László Balogh, Balázs Fodor, Attila Sárhegyi, and István Kollár Maximum Lielihood Estimation of ADC Parameters from Sine Wave Test Data László Balogh, Balázs Fodor, Attila Sárhegyi, and István Kollár Dept. of Measurement and Information Systems Budapest University

More information

Example: Credit card default, we may be more interested in predicting the probabilty of a default than classifying individuals as default or not.

Example: Credit card default, we may be more interested in predicting the probabilty of a default than classifying individuals as default or not. Statistical Learning: Chapter 4 Classification 4.1 Introduction Supervised learning with a categorical (Qualitative) response Notation: - Feature vector X, - qualitative response Y, taking values in C

More information

i=1 In practice, the natural logarithm of the likelihood function, called the log-likelihood function and denoted by

i=1 In practice, the natural logarithm of the likelihood function, called the log-likelihood function and denoted by Statistics 580 Maximum Likelihood Estimation Introduction Let y (y 1, y 2,..., y n be a vector of iid, random variables from one of a family of distributions on R n and indexed by a p-dimensional parameter

More information

Load Balancing and Switch Scheduling

Load Balancing and Switch Scheduling EE384Y Project Final Report Load Balancing and Switch Scheduling Xiangheng Liu Department of Electrical Engineering Stanford University, Stanford CA 94305 Email: liuxh@systems.stanford.edu Abstract Load

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 5 9/17/2008 RANDOM VARIABLES

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 5 9/17/2008 RANDOM VARIABLES MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 5 9/17/2008 RANDOM VARIABLES Contents 1. Random variables and measurable functions 2. Cumulative distribution functions 3. Discrete

More information

From the help desk: Bootstrapped standard errors

From the help desk: Bootstrapped standard errors The Stata Journal (2003) 3, Number 1, pp. 71 80 From the help desk: Bootstrapped standard errors Weihua Guan Stata Corporation Abstract. Bootstrapping is a nonparametric approach for evaluating the distribution

More information

Least Squares Estimation

Least Squares Estimation Least Squares Estimation SARA A VAN DE GEER Volume 2, pp 1041 1045 in Encyclopedia of Statistics in Behavioral Science ISBN-13: 978-0-470-86080-9 ISBN-10: 0-470-86080-4 Editors Brian S Everitt & David

More information

The primary goal of this thesis was to understand how the spatial dependence of

The primary goal of this thesis was to understand how the spatial dependence of 5 General discussion 5.1 Introduction The primary goal of this thesis was to understand how the spatial dependence of consumer attitudes can be modeled, what additional benefits the recovering of spatial

More information

TCOM 370 NOTES 99-4 BANDWIDTH, FREQUENCY RESPONSE, AND CAPACITY OF COMMUNICATION LINKS

TCOM 370 NOTES 99-4 BANDWIDTH, FREQUENCY RESPONSE, AND CAPACITY OF COMMUNICATION LINKS TCOM 370 NOTES 99-4 BANDWIDTH, FREQUENCY RESPONSE, AND CAPACITY OF COMMUNICATION LINKS 1. Bandwidth: The bandwidth of a communication link, or in general any system, was loosely defined as the width of

More information

Principle of Data Reduction

Principle of Data Reduction Chapter 6 Principle of Data Reduction 6.1 Introduction An experimenter uses the information in a sample X 1,..., X n to make inferences about an unknown parameter θ. If the sample size n is large, then

More information

THE CENTRAL LIMIT THEOREM TORONTO

THE CENTRAL LIMIT THEOREM TORONTO THE CENTRAL LIMIT THEOREM DANIEL RÜDT UNIVERSITY OF TORONTO MARCH, 2010 Contents 1 Introduction 1 2 Mathematical Background 3 3 The Central Limit Theorem 4 4 Examples 4 4.1 Roulette......................................

More information

Basics of Floating-Point Quantization

Basics of Floating-Point Quantization Chapter 2 Basics of Floating-Point Quantization Representation of physical quantities in terms of floating-point numbers allows one to cover a very wide dynamic range with a relatively small number of

More information

2.3 Convex Constrained Optimization Problems

2.3 Convex Constrained Optimization Problems 42 CHAPTER 2. FUNDAMENTAL CONCEPTS IN CONVEX OPTIMIZATION Theorem 15 Let f : R n R and h : R R. Consider g(x) = h(f(x)) for all x R n. The function g is convex if either of the following two conditions

More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 51, NO. 7, JULY 2005 2475. G. George Yin, Fellow, IEEE, and Vikram Krishnamurthy, Fellow, IEEE

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 51, NO. 7, JULY 2005 2475. G. George Yin, Fellow, IEEE, and Vikram Krishnamurthy, Fellow, IEEE IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 51, NO. 7, JULY 2005 2475 LMS Algorithms for Tracking Slow Markov Chains With Applications to Hidden Markov Estimation and Adaptive Multiuser Detection G.

More information

Background 2. Lecture 2 1. The Least Mean Square (LMS) algorithm 4. The Least Mean Square (LMS) algorithm 3. br(n) = u(n)u H (n) bp(n) = u(n)d (n)

Background 2. Lecture 2 1. The Least Mean Square (LMS) algorithm 4. The Least Mean Square (LMS) algorithm 3. br(n) = u(n)u H (n) bp(n) = u(n)d (n) Lecture 2 1 During this lecture you will learn about The Least Mean Squares algorithm (LMS) Convergence analysis of the LMS Equalizer (Kanalutjämnare) Background 2 The method of the Steepest descent that

More information

Corrections to the First Printing

Corrections to the First Printing Corrections to the First Printing Chapter 2 (i) Page 48, Paragraph 1: cells/µ l should be cells/µl without the space. (ii) Page 48, Paragraph 2: Uninfected cells T i should not have the asterisk. Chapter

More information

2014-2015 The Master s Degree with Thesis Course Descriptions in Industrial Engineering

2014-2015 The Master s Degree with Thesis Course Descriptions in Industrial Engineering 2014-2015 The Master s Degree with Thesis Course Descriptions in Industrial Engineering Compulsory Courses IENG540 Optimization Models and Algorithms In the course important deterministic optimization

More information

Lecture 5: Variants of the LMS algorithm

Lecture 5: Variants of the LMS algorithm 1 Standard LMS Algorithm FIR filters: Lecture 5: Variants of the LMS algorithm y(n) = w 0 (n)u(n)+w 1 (n)u(n 1) +...+ w M 1 (n)u(n M +1) = M 1 k=0 w k (n)u(n k) =w(n) T u(n), Error between filter output

More information

8 MIMO II: capacity and multiplexing

8 MIMO II: capacity and multiplexing CHAPTER 8 MIMO II: capacity and multiplexing architectures In this chapter, we will look at the capacity of MIMO fading channels and discuss transceiver architectures that extract the promised multiplexing

More information

Adaptive Search with Stochastic Acceptance Probabilities for Global Optimization

Adaptive Search with Stochastic Acceptance Probabilities for Global Optimization Adaptive Search with Stochastic Acceptance Probabilities for Global Optimization Archis Ghate a and Robert L. Smith b a Industrial Engineering, University of Washington, Box 352650, Seattle, Washington,

More information

Alok Gupta. Dmitry Zhdanov

Alok Gupta. Dmitry Zhdanov RESEARCH ARTICLE GROWTH AND SUSTAINABILITY OF MANAGED SECURITY SERVICES NETWORKS: AN ECONOMIC PERSPECTIVE Alok Gupta Department of Information and Decision Sciences, Carlson School of Management, University

More information

Master s Theory Exam Spring 2006

Master s Theory Exam Spring 2006 Spring 2006 This exam contains 7 questions. You should attempt them all. Each question is divided into parts to help lead you through the material. You should attempt to complete as much of each problem

More information

LOGISTIC REGRESSION. Nitin R Patel. where the dependent variable, y, is binary (for convenience we often code these values as

LOGISTIC REGRESSION. Nitin R Patel. where the dependent variable, y, is binary (for convenience we often code these values as LOGISTIC REGRESSION Nitin R Patel Logistic regression extends the ideas of multiple linear regression to the situation where the dependent variable, y, is binary (for convenience we often code these values

More information

Communication Power Optimization in a Sensor Network with a Path-Constrained Mobile Observer

Communication Power Optimization in a Sensor Network with a Path-Constrained Mobile Observer Communication Power Optimization in a Sensor Network with a Path-Constrained Mobile Observer ARNAB CHAKRABARTI, ASHUTOSH SABHARWAL, and BEHNAAM AAZHANG Rice University We present a procedure for communication

More information

Financial TIme Series Analysis: Part II

Financial TIme Series Analysis: Part II Department of Mathematics and Statistics, University of Vaasa, Finland January 29 February 13, 2015 Feb 14, 2015 1 Univariate linear stochastic models: further topics Unobserved component model Signal

More information

STATISTICA Formula Guide: Logistic Regression. Table of Contents

STATISTICA Formula Guide: Logistic Regression. Table of Contents : Table of Contents... 1 Overview of Model... 1 Dispersion... 2 Parameterization... 3 Sigma-Restricted Model... 3 Overparameterized Model... 4 Reference Coding... 4 Model Summary (Summary Tab)... 5 Summary

More information

Java Modules for Time Series Analysis

Java Modules for Time Series Analysis Java Modules for Time Series Analysis Agenda Clustering Non-normal distributions Multifactor modeling Implied ratings Time series prediction 1. Clustering + Cluster 1 Synthetic Clustering + Time series

More information

The Effects ofVariation Between Jain Mirman and JMC

The Effects ofVariation Between Jain Mirman and JMC MARKET STRUCTURE AND INSIDER TRADING WASSIM DAHER AND LEONARD J. MIRMAN Abstract. In this paper we examine the real and financial effects of two insiders trading in a static Jain Mirman model (Henceforth

More information

What s New in Econometrics? Lecture 8 Cluster and Stratified Sampling

What s New in Econometrics? Lecture 8 Cluster and Stratified Sampling What s New in Econometrics? Lecture 8 Cluster and Stratified Sampling Jeff Wooldridge NBER Summer Institute, 2007 1. The Linear Model with Cluster Effects 2. Estimation with a Small Number of Groups and

More information

Fundamental to determining

Fundamental to determining GNSS Solutions: Carrierto-Noise Algorithms GNSS Solutions is a regular column featuring questions and answers about technical aspects of GNSS. Readers are invited to send their questions to the columnist,

More information

Multiple Linear Regression in Data Mining

Multiple Linear Regression in Data Mining Multiple Linear Regression in Data Mining Contents 2.1. A Review of Multiple Linear Regression 2.2. Illustration of the Regression Process 2.3. Subset Selection in Linear Regression 1 2 Chap. 2 Multiple

More information

Euler: A System for Numerical Optimization of Programs

Euler: A System for Numerical Optimization of Programs Euler: A System for Numerical Optimization of Programs Swarat Chaudhuri 1 and Armando Solar-Lezama 2 1 Rice University 2 MIT Abstract. We give a tutorial introduction to Euler, a system for solving difficult

More information

Introduction to General and Generalized Linear Models

Introduction to General and Generalized Linear Models Introduction to General and Generalized Linear Models General Linear Models - part I Henrik Madsen Poul Thyregod Informatics and Mathematical Modelling Technical University of Denmark DK-2800 Kgs. Lyngby

More information

MIMO CHANNEL CAPACITY

MIMO CHANNEL CAPACITY MIMO CHANNEL CAPACITY Ochi Laboratory Nguyen Dang Khoa (D1) 1 Contents Introduction Review of information theory Fixed MIMO channel Fading MIMO channel Summary and Conclusions 2 1. Introduction The use

More information

INDIRECT INFERENCE (prepared for: The New Palgrave Dictionary of Economics, Second Edition)

INDIRECT INFERENCE (prepared for: The New Palgrave Dictionary of Economics, Second Edition) INDIRECT INFERENCE (prepared for: The New Palgrave Dictionary of Economics, Second Edition) Abstract Indirect inference is a simulation-based method for estimating the parameters of economic models. Its

More information

Lecture 3: Finding integer solutions to systems of linear equations

Lecture 3: Finding integer solutions to systems of linear equations Lecture 3: Finding integer solutions to systems of linear equations Algorithmic Number Theory (Fall 2014) Rutgers University Swastik Kopparty Scribe: Abhishek Bhrushundi 1 Overview The goal of this lecture

More information

Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay

Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay Lecture - 17 Shannon-Fano-Elias Coding and Introduction to Arithmetic Coding

More information

TRAFFIC control and bandwidth management in ATM

TRAFFIC control and bandwidth management in ATM 134 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 5, NO. 1, FEBRUARY 1997 A Framework for Bandwidth Management in ATM Networks Aggregate Equivalent Bandwidth Estimation Approach Zbigniew Dziong, Marek Juda,

More information

Linear Threshold Units

Linear Threshold Units Linear Threshold Units w x hx (... w n x n w We assume that each feature x j and each weight w j is a real number (we will relax this later) We will study three different algorithms for learning linear

More information

Lecture 4: BK inequality 27th August and 6th September, 2007

Lecture 4: BK inequality 27th August and 6th September, 2007 CSL866: Percolation and Random Graphs IIT Delhi Amitabha Bagchi Scribe: Arindam Pal Lecture 4: BK inequality 27th August and 6th September, 2007 4. Preliminaries The FKG inequality allows us to lower bound

More information

Monte Carlo Methods in Finance

Monte Carlo Methods in Finance Author: Yiyang Yang Advisor: Pr. Xiaolin Li, Pr. Zari Rachev Department of Applied Mathematics and Statistics State University of New York at Stony Brook October 2, 2012 Outline Introduction 1 Introduction

More information

Applied Algorithm Design Lecture 5

Applied Algorithm Design Lecture 5 Applied Algorithm Design Lecture 5 Pietro Michiardi Eurecom Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 1 / 86 Approximation Algorithms Pietro Michiardi (Eurecom) Applied Algorithm Design

More information

Christian Bettstetter. Mobility Modeling, Connectivity, and Adaptive Clustering in Ad Hoc Networks

Christian Bettstetter. Mobility Modeling, Connectivity, and Adaptive Clustering in Ad Hoc Networks Christian Bettstetter Mobility Modeling, Connectivity, and Adaptive Clustering in Ad Hoc Networks Contents 1 Introduction 1 2 Ad Hoc Networking: Principles, Applications, and Research Issues 5 2.1 Fundamental

More information

The Goldberg Rao Algorithm for the Maximum Flow Problem

The Goldberg Rao Algorithm for the Maximum Flow Problem The Goldberg Rao Algorithm for the Maximum Flow Problem COS 528 class notes October 18, 2006 Scribe: Dávid Papp Main idea: use of the blocking flow paradigm to achieve essentially O(min{m 2/3, n 1/2 }

More information

WIRELESS communication channels have the characteristic

WIRELESS communication channels have the characteristic 512 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 54, NO. 3, MARCH 2009 Energy-Efficient Decentralized Cooperative Routing in Wireless Networks Ritesh Madan, Member, IEEE, Neelesh B. Mehta, Senior Member,

More information

Generating Random Numbers Variance Reduction Quasi-Monte Carlo. Simulation Methods. Leonid Kogan. MIT, Sloan. 15.450, Fall 2010

Generating Random Numbers Variance Reduction Quasi-Monte Carlo. Simulation Methods. Leonid Kogan. MIT, Sloan. 15.450, Fall 2010 Simulation Methods Leonid Kogan MIT, Sloan 15.450, Fall 2010 c Leonid Kogan ( MIT, Sloan ) Simulation Methods 15.450, Fall 2010 1 / 35 Outline 1 Generating Random Numbers 2 Variance Reduction 3 Quasi-Monte

More information

Moral Hazard. Itay Goldstein. Wharton School, University of Pennsylvania

Moral Hazard. Itay Goldstein. Wharton School, University of Pennsylvania Moral Hazard Itay Goldstein Wharton School, University of Pennsylvania 1 Principal-Agent Problem Basic problem in corporate finance: separation of ownership and control: o The owners of the firm are typically

More information

Solutions to Exam in Speech Signal Processing EN2300

Solutions to Exam in Speech Signal Processing EN2300 Solutions to Exam in Speech Signal Processing EN23 Date: Thursday, Dec 2, 8: 3: Place: Allowed: Grades: Language: Solutions: Q34, Q36 Beta Math Handbook (or corresponding), calculator with empty memory.

More information