Gaussian Probability Density Functions: Properties and Error Characterization


Gaussian Probability Density Functions: Properties and Error Characterization

Maria Isabel Ribeiro
Institute for Systems and Robotics
Instituto Superior Técnico
Av. Rovisco Pais, Lisboa, PORTUGAL
{mir@isr.ist.utl.pt}

© M. Isabel Ribeiro, February 2004

Contents

1 Normal random variables
2 Normal random vectors
  2.1 Particularization for second order
  2.2 Locus of constant probability
3 Properties
4 Covariance matrices and error ellipsoid

Chapter 1

Normal random variables

A random variable X is said to be normally distributed with mean µ and variance σ² if its probability density function (pdf) is

f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right], \quad -\infty < x < \infty.   (1.1)

Whenever there is no possible confusion between the random variable X and the real argument, x, of the pdf, this is simply represented by f(x), omitting the explicit reference to the random variable X in the subscript. The Normal or Gaussian distribution of X is usually represented by X ∼ N(µ, σ²).

The Normal or Gaussian pdf (1.1) is a bell-shaped curve that is symmetric about the mean µ and that attains its maximum value 1/(√(2π)σ) ≃ 0.399/σ at x = µ, as represented in Figure 1.1.

The Gaussian pdf N(µ, σ²) is completely characterized by the two parameters µ and σ², the first and second order moments, respectively, obtainable from the pdf as

\mu = E[X] = \int_{-\infty}^{\infty} x\,f(x)\,dx,   (1.2)

\sigma^2 = E[(X-\mu)^2] = \int_{-\infty}^{\infty} (x-\mu)^2\,f(x)\,dx.   (1.3)
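As a numerical illustration (added here, not part of the original text), the following Python sketch, assuming NumPy is available and using arbitrarily chosen values of µ and σ, evaluates the pdf (1.1) on a grid, checks the maximum value 0.399/σ, and recovers the moments (1.2)–(1.3) by numerical integration.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Scalar Gaussian pdf (1.1)."""
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

mu, sigma = 2.0, 1.5                      # illustrative values
x = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 200001)
f = gaussian_pdf(x, mu, sigma)
dx = x[1] - x[0]

# Maximum value 1/(sqrt(2*pi)*sigma) ~ 0.399/sigma, attained at x = mu
print(f.max(), 0.399 / sigma)

# Moments (1.2)-(1.3) recovered by numerical integration
print(np.sum(x * f) * dx)                 # ~ mu
print(np.sum((x - mu)**2 * f) * dx)       # ~ sigma**2
```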

Figure 1.1: Gaussian or Normal pdf, N(µ, σ²)

The mean, or the expected value of the variable, is the centroid of the pdf. In this particular case of the Gaussian pdf, the mean is also the point at which the pdf attains its maximum value. The variance σ² is a measure of the dispersion of the random variable around the mean.

The fact that (1.1) is completely characterized by two parameters, the first and second order moments of the pdf, renders its use very common in characterizing the uncertainty in various domains of application. For example, in robotics it is common to use Gaussian pdfs to statistically characterize sensor measurements, robot locations, and map representations.

The pdfs represented in Figure 1.2 have the same mean µ and standard deviations σ₁ > σ₂ > σ₃, showing that the larger the variance, the greater the dispersion around the mean.

Figure 1.2: Gaussian pdfs with different variances (σ₁ = 3 > σ₂ > σ₃ = 1)

Definition 1.1 The square root of the variance, σ, is usually known as the standard deviation.

Given a real number a ∈ R, the probability that the random variable X ∼ N(µ, σ²) takes values less or equal to a is given by

Pr\{X \le a\} = \int_{-\infty}^{a} f(x)\,dx = \frac{1}{\sqrt{2\pi}\,\sigma}\int_{-\infty}^{a} \exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]dx,   (1.4)

represented by the shaded area in Figure 1.3.

Figure 1.3: Probability evaluation using the pdf

To evaluate the probability in (1.4), the error function erf(x), which is related with N(0, 1),

\mathrm{erf}(x) = \frac{1}{\sqrt{2\pi}} \int_0^x \exp(-y^2/2)\,dy,   (1.5)

plays a key role. In fact, with a change of variables, (1.4) may be rewritten as

Pr\{X \le a\} = \begin{cases} 0.5 - \mathrm{erf}\left(\frac{\mu-a}{\sigma}\right) & \text{for } a \le \mu \\[4pt] 0.5 + \mathrm{erf}\left(\frac{a-\mu}{\sigma}\right) & \text{for } a \ge \mu \end{cases}

stating the importance of the error function, whose values for various x are displayed in Table 1.1.

Table 1.1: erf — Error function

In various aspects of robotics, in particular when dealing with uncertainty in mobile robot localization, it is common to evaluate the probability that a random variable Y (more generally, a random vector representing the robot location) lies in an interval around the mean value µ. This interval is usually defined in terms of the standard deviation σ or its multiples.

Using the error function (1.5), the probability that the random variable X lies in an interval whose width is related with the standard deviation is

Pr\{|X - \mu| \le \sigma\} = 2\,\mathrm{erf}(1) = 0.68268   (1.6)

Pr\{|X - \mu| \le 2\sigma\} = 2\,\mathrm{erf}(2) = 0.9545   (1.7)

Pr\{|X - \mu| \le 3\sigma\} = 2\,\mathrm{erf}(3) = 0.9973   (1.8)

In other words, the probability that a Gaussian random variable lies in the interval [µ − 3σ, µ + 3σ] is equal to 0.9973. Figure 1.4 represents the situation (1.6), corresponding to the probability of X lying in the interval [µ − σ, µ + σ].
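The probabilities (1.6)–(1.8) are easy to reproduce numerically. The sketch below (an illustration added here, assuming Python) uses the fact that the error function of (1.5) equals half the standard library's math.erf evaluated at x/√2.

```python
import math

def erf_doc(x):
    """Error function as defined in (1.5): (1/sqrt(2*pi)) * int_0^x exp(-y**2/2) dy.
    It equals half the standard math.erf evaluated at x/sqrt(2)."""
    return 0.5 * math.erf(x / math.sqrt(2))

for k in (1, 2, 3):
    # Pr{|X - mu| <= k*sigma} = 2*erf_doc(k), cf. (1.6)-(1.8)
    print(k, 2 * erf_doc(k))   # 0.68269, 0.95450, 0.99730
```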

Figure 1.4: Probability of X taking values in the interval [µ − σ, µ + σ]

Another useful evaluation is the locus of values of the random variable X where the pdf is greater or equal to a given pre-specified value K₁, i.e.,

\frac{1}{\sqrt{2\pi}\,\sigma} \exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right] \ge K_1 \iff \frac{(x-\mu)^2}{\sigma^2} \le K   (1.9)

with K = −2 ln(√(2π) σ K₁). This locus is the line segment [µ − σ√K, µ + σ√K], as represented in Figure 1.5.

Figure 1.5: Locus of x where the pdf is greater or equal to K₁
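A small verification of this level-set computation (an illustration added here, assuming Python): for any K₁ below the peak value of the pdf, K = −2 ln(√(2π)σK₁) is non-negative and the pdf equals K₁ exactly at the segment endpoints µ ± σ√K.

```python
import math

mu, sigma = 2.0, 1.5          # illustrative values
K1 = 0.1                      # any value below the peak 1/(sqrt(2*pi)*sigma) ~ 0.266

K = -2 * math.log(math.sqrt(2 * math.pi) * sigma * K1)

def f(x):
    """Gaussian pdf (1.1)."""
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / (math.sqrt(2 * math.pi) * sigma)

# The pdf equals K1 at both endpoints of the segment [mu - sigma*sqrt(K), mu + sigma*sqrt(K)]
half_width = sigma * math.sqrt(K)
print(f(mu - half_width), f(mu + half_width), K1)
```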

Chapter 2

Normal random vectors

A random vector X = [X₁, X₂, ..., Xₙ]ᵀ ∈ Rⁿ is Gaussian if its pdf is

f_X(x) = \frac{1}{(2\pi)^{n/2}\,|\Sigma|^{1/2}} \exp\left\{-\frac{1}{2}(x - m_X)^T \Sigma^{-1} (x - m_X)\right\}   (2.1)

where m_X = E(X) is the mean vector of the random vector X, Σ_X = E[(X − m_X)(X − m_X)ᵀ] is the covariance matrix, and n = dim X is the dimension of the random vector; this is also represented as X ∼ N(m_X, Σ_X). In (2.1) it is assumed that x is a vector of dimension n and that Σ⁻¹ exists. If Σ is simply non-negative definite, then one defines a Gaussian vector through the characteristic function, [2].

The mean vector m_X is the collection of the mean values of each of the random variables X_i,

m_X = E\begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{bmatrix} = \begin{bmatrix} m_{X_1} \\ m_{X_2} \\ \vdots \\ m_{X_n} \end{bmatrix}.

The covariance matrix is symmetric, with elements

\Sigma_X = \Sigma_X^T = \begin{bmatrix} E(X_1 - m_{X_1})^2 & E(X_1 - m_{X_1})(X_2 - m_{X_2}) & \dots & E(X_1 - m_{X_1})(X_n - m_{X_n}) \\ E(X_2 - m_{X_2})(X_1 - m_{X_1}) & E(X_2 - m_{X_2})^2 & \dots & E(X_2 - m_{X_2})(X_n - m_{X_n}) \\ \vdots & & \ddots & \vdots \\ E(X_n - m_{X_n})(X_1 - m_{X_1}) & \dots & & E(X_n - m_{X_n})^2 \end{bmatrix}.

The diagonal elements of Σ are the variances of the random variables X_i, and the generic element Σ_ij = E(X_i − m_{X_i})(X_j − m_{X_j}) represents the covariance of the two random variables X_i and X_j.

Similarly to the scalar case, the pdf of a Gaussian random vector is completely characterized by its first and second moments, the mean vector and the covariance matrix, respectively. This yields interesting properties, some of which are listed in Chapter 3.

When studying the localization of autonomous robots, the random vector X plays the role of the robot's location. Depending on the robot characteristics and on the operating environment, the location may be expressed as:
- a two-dimensional vector with the position in a 2D environment,
- a three-dimensional vector (2D position and orientation) representing a mobile robot's location in a horizontal environment,
- a six-dimensional vector (3 positions and 3 orientations) for an underwater vehicle.

When characterizing a 2D laser scanner in a statistical framework, each range measurement is associated with a given pan angle corresponding to the scanning mechanism. Therefore the pair (distance, angle) may be considered as a random vector whose statistical characterization depends on the physical principle of the sensor device.

The above examples refer to quantities (e.g., robot position, sensor measurements) that are not deterministic. To account for the associated uncertainties, we consider them as random vectors. Moreover, we know how to deal with Gaussian random vectors, which exhibit a number of nice properties; this (but not only this) pushes us to consider these random variables as being governed by a Gaussian distribution.

In many cases we have to deal with low-dimensional Gaussian random vectors (dimension two or three), and therefore it is useful to particularize

the n-dimensional general case to second order and to present and illustrate some properties. The following section particularizes some results for a second-order Gaussian pdf.

2.1 Particularization for second order

In the first two above-referred cases, the Gaussian random vector is of order two or three. In this section we illustrate the case n = 2. Let

Z = \begin{bmatrix} X \\ Y \end{bmatrix}

be a second-order Gaussian random vector, with mean

E[Z] = E\begin{bmatrix} X \\ Y \end{bmatrix} = \begin{bmatrix} m_X \\ m_Y \end{bmatrix}   (2.2)

and covariance matrix

\Sigma = \begin{bmatrix} \sigma_X^2 & \sigma_{XY} \\ \sigma_{XY} & \sigma_Y^2 \end{bmatrix}   (2.3)

where σ_X² and σ_Y² are the variances of the random variables X and Y, and σ_XY is the covariance of X and Y, defined below.

Definition 2.1 The covariance σ_XY of the two random variables X and Y is the number

\sigma_{XY} = E[(X - m_X)(Y - m_Y)]   (2.4)

where m_X = E(X) and m_Y = E(Y). Expanding the product (2.4) yields

\sigma_{XY} = E(XY) - m_X E(Y) - m_Y E(X) + m_X m_Y   (2.5)
            = E(XY) - E(X)E(Y)   (2.6)
            = E(XY) - m_X m_Y.   (2.7)

Definition 2.2 The correlation coefficient of the variables X and Y is defined as

\rho = \frac{\sigma_{XY}}{\sigma_X \sigma_Y}.   (2.8)

11 Result.1 The correlation coefficient and the covariance of the variables X and Y satisf the following inequalities, Proof: [] Consider the mean value of ρ 1, σ XY σ X σ Y. (.9) E[a(X m X ) + (Y m Y )] = a σ X + aσ XY + σ Y which is a positive quadratic for an a, and hence, the discriminant is negative, i.e., σ XY σ Xσ Y from where (.9) results. According to the previous definitions, the covariance matri (.3) is rewritten as [ ] σx Σ = ρσ X σ Y. (.1) ρσ X σ Y For this second-order case, the Gaussian pdf particularizes as, with z = [ ] T R, 1 f(z) = [ π detσ ep 1 ] [ m X m Y ]Σ 1 [ m X m Y ] T (.11) [ ( 1 = πσ X σ ep 1 ( mx ) Y 1 ρ (1 ρ ) σx ρ( m X)( m Y ) + ( m Y ) )] σ X σ Y σy where the role plaed b the correlation coefficient ρ is evident. At this stage we present a set of definitions and properties that, even though being valid for an two random variables, X and Y, also appl to the case when the random variables (rv) are Gaussian. Definition.3 Independence Two random variables X and Y are called independent if the joint pdf, f(, ) equals the product of the pdf of each random variable, f(), f(), i.e., σ Y f(, ) = f()f() In the case of Gaussian random variables, clearl X and Y are independent when ρ =. This issue will be further eplored later. 1

12 Definition.4 Uncorrelatedness Two random varibles X and Y are called uncorrelated if their covariance is zero, i.e., σ XY = E[(X m X )(Y m Y )] =, which can be written in the following equivalent forms: Note that ρ =, E(XY ) = E(X)E(Y ). E(X + Y ) = E(X) + E(Y ) but, in general, E(XY ) E(X)E(Y ). However, when X and Y are uncorrelated, E(XY ) = E(X)E(Y ) according to Definition.4. Definition. Orhthogonalit Two random variables X and Y are called orthognal if E(XY ) =, which is represented as X Y Propert.1 If X and Y are uncorrelated, then X m X Y m Y. Propert. If X and Y are uncorrelated and m X = and m Y =, then X Y. Propert.3 If two random variables X and Y are independent, then the are uncorrelated, i.e., f(, ) = f()f() E(XY ) = E(X)E(Y ) but the converse is not, in general, true. Proof: From the definition of mean value, E(XY ) = = f()d f()dd f()d = E(X)E(Y ). 11

13 Propert.4 If two Gaussian random variables X and Y are uncorrelated, the are also independent, i.e., for Normal or Gaussian random variables,independenc is equivalent to uncorrelatedness. If X N (µ X, Σ X ) and Y N (µ Y, Σ Y ) f() = f()f() E(XY ) = E(X)E(Y ) ρ =. Result. Variance of the sum of two random variables Let X and Y be two random variables, jointl distributed, with mean m X and m Y and correlation coefficient ρ and let Z = X + Y. Then, E(Z) = m Z = E(X) + E(Y ) = m X + m Y (.1) σ Z = E[(Z m Z ) )] = σ X + ρσ X σ Y + σ Y. (.13) Proof: Evaluating the second term in (.13) ields: σ Z = E[(X m X ) + (Y m Y )] = E[(X m X ) ] + E[(X m X )(Y m Y )] + E[(Y m Y ) ] from where the result immediatel holds. Result.3 Variance of the sum of two uncorrelated random variables Let X and Y be two uncorrelated random variables, jointl distributed, with mean m X and m Y and let Z = X + Y. Then, σ Z = σ X + σ Y (.14) i.e., if two random variables are uncorrelated, then the variance of their sum equals the sum of their variances. We regain the case of two jointl Gaussian random varaibles, X and Y, with pdf represented b (.11) to analze, in the plots of Gaussian pdfs, the influence of the correlation coefficient ρ in the bell-shaped pdf. Figure.1 represents four distinct situations with zero mean and null correlation between X and Y, i.e., ρ =, but with different values of the standard deviations σ X and σ Y. It is clear that, in all cases, the maimum of the pdf is obtained for the mean value. As ρ =, i.e., the random variables are uncorrelated, 1

Figure 2.1: Second-order Gaussian pdfs with m_X = m_Y = 0 and ρ = 0, for four different pairs of standard deviations (σ_X, σ_Y) (panels a–d)

Figure 2.1 represents four distinct situations with zero mean and null correlation between X and Y, i.e., ρ = 0, but with different values of the standard deviations σ_X and σ_Y. It is clear that, in all cases, the maximum of the pdf is obtained at the mean value. As ρ = 0, i.e., the random variables are uncorrelated, the change in the standard deviations σ_X and σ_Y has independent effects on each of the components. For example, in Figure 2.1-d) the spread around the mean is greater along the x coordinate. Moreover, the locus of constant value of the pdf is an ellipse with its axes parallel to the x and y axes. This ellipse has axes of equal length, i.e., it is a circumference, when both random variables, X and Y, have the same standard deviation, σ_X = σ_Y.

The examples in Figure 2.2 show the influence of the correlation coefficient ρ on the shape of the pdf. The axes of the ellipse referred to before are no longer parallel to the axes x and y: the greater the correlation coefficient, the larger the misalignment of these axes. When ρ = 1 or ρ = −1, the axes of the ellipse make an angle of π/4 with the x-axis of the pdf.

2.2 Locus of constant probability

Similarly to what was considered for a Gaussian random variable, it is also useful, for a variety of applications and for a second-order Gaussian random vector, to evaluate the locus of points (x, y) for which the pdf is greater or equal to a specified constant K₁, i.e.,

\left\{(x,y) : \frac{1}{2\pi\sqrt{\det\Sigma}} \exp\left[-\frac{1}{2}\,[x - m_X \;\; y - m_Y]\,\Sigma^{-1}\,[x - m_X \;\; y - m_Y]^T\right] \ge K_1\right\}   (2.15)

which is equivalent to

\left\{(x,y) : [x - m_X \;\; y - m_Y]\,\Sigma^{-1} \begin{bmatrix} x - m_X \\ y - m_Y \end{bmatrix} \le K \right\}   (2.16)

with K = −2 ln(2πK₁√det Σ).

Figures 2.3 and 2.4 represent the pairs (x, y) for which the pdf is greater or equal to a given specified constant K₁. The locus of constant value is an ellipse with its axes parallel to the x and y coordinate axes when ρ = 0, i.e., when the random variables X and Y are uncorrelated. When ρ ≠ 0 the ellipse axes are not parallel to the x and y axes. The center of the ellipse coincides in all cases with (m_X, m_Y).

The locus in (2.16) is the border and the inner points of an ellipse centered at (m_X, m_Y). The lengths of the ellipse axes and the angle they make with the x and y axes are a function of the constant K, of the eigenvalues of the covariance matrix Σ, and of the correlation coefficient.

Figure 2.2: Second-order Gaussian pdfs with m_X = 1 and fixed standard deviations σ_X = σ_Y, for six different values of the correlation coefficient ρ (panels a–f)

Figure 2.3: Locus of (x, y) such that the Gaussian pdf is greater or equal to a constant K₁, for uncorrelated Gaussian random variables (ρ = 0) with m_X = 1: a) σ_X = σ_Y = 1; b) and c) unequal standard deviations σ_X ≠ σ_Y

Figure 2.4: Locus of (x, y) such that the Gaussian pdf is greater or equal to a constant K₁, for correlated variables with m_X = 1 and four different values of the correlation coefficient ρ (panels a–d)

We will demonstrate this statement in two different steps, showing that:

1. Case 1 — if Σ in (2.16) is a diagonal matrix, which happens when ρ = 0, i.e., X and Y are uncorrelated, the ellipse axes are parallel to the frame axes.
2. Case 2 — if Σ in (2.16) is non-diagonal, i.e., ρ ≠ 0, the ellipse axes are not parallel to the frame axes.

In both cases, the lengths of the ellipse axes are related with the eigenvalues of the covariance matrix Σ in (2.3), given by

\lambda_1 = \frac{1}{2}\left[\sigma_X^2 + \sigma_Y^2 + \sqrt{(\sigma_X^2 - \sigma_Y^2)^2 + 4\sigma_X^2\sigma_Y^2\rho^2}\right],   (2.17)

\lambda_2 = \frac{1}{2}\left[\sigma_X^2 + \sigma_Y^2 - \sqrt{(\sigma_X^2 - \sigma_Y^2)^2 + 4\sigma_X^2\sigma_Y^2\rho^2}\right].   (2.18)

Case 1 — Diagonal covariance matrix

When ρ = 0, i.e., the variables X and Y are uncorrelated, the covariance matrix is diagonal,

\Sigma = \begin{bmatrix} \sigma_X^2 & 0 \\ 0 & \sigma_Y^2 \end{bmatrix},   (2.19)

and the eigenvalues particularize to λ₁ = σ_X² and λ₂ = σ_Y². In this particular case, illustrated in Figure 2.5, the locus (2.16) may be written as

\left\{(x,y) : \frac{(x - m_X)^2}{\sigma_X^2} + \frac{(y - m_Y)^2}{\sigma_Y^2} \le K\right\}   (2.20)

or also

\left\{(x,y) : \frac{(x - m_X)^2}{K\sigma_X^2} + \frac{(y - m_Y)^2}{K\sigma_Y^2} \le 1\right\}.   (2.21)

Figure 2.5 represents the ellipse that is the border of the locus in (2.21), having:
- x-axis with length 2σ_X√K,
- y-axis with length 2σ_Y√K.

Case 2 — Non-diagonal covariance matrix

When the covariance matrix Σ in (2.3) is non-diagonal, the ellipse that borders the locus (2.16) has its center at (m_X, m_Y), but its axes are not aligned with the coordinate frame.

Figure 2.5: Locus of constant pdf: ellipse with axes parallel to the frame axes, centered at (m_X, m_Y), with semi-axes σ_X√K and σ_Y√K

In the sequel we evaluate the angle between the ellipse axes and those of the coordinate frame. With no loss of generality we consider m_X = m_Y = 0, i.e., the ellipse is centered at the origin of the coordinate frame. Therefore, the locus under analysis is given by

\left\{(x,y) : [x \;\; y]\,\Sigma^{-1} \begin{bmatrix} x \\ y \end{bmatrix} \le K\right\}   (2.22)

where Σ is the matrix in (2.3). As it is a symmetric matrix, the eigenvectors corresponding to distinct eigenvalues are orthogonal. When σ_X ≠ σ_Y, the eigenvalues (2.17) and (2.18) are distinct, the corresponding eigenvectors are orthogonal, and therefore Σ has simple structure, which means that there exists a non-singular and unitary coordinate transformation T such that

\Sigma = T D T^{-1}   (2.23)

where

T = [v_1 \;\; v_2], \qquad D = \mathrm{diag}(\lambda_1, \lambda_2),

and v₁, v₂ are the unit-norm eigenvectors of Σ associated with λ₁ and λ₂. Replacing (2.23) in (2.22) yields

\left\{(x,y) : [x \;\; y]\,T D^{-1} T^{-1} \begin{bmatrix} x \\ y \end{bmatrix} \le K\right\}.   (2.24)

Denoting

\begin{bmatrix} w_1 \\ w_2 \end{bmatrix} = T^{-1} \begin{bmatrix} x \\ y \end{bmatrix}   (2.25)

and given that Tᵀ = T⁻¹, it is immediate that (2.24) can be expressed as

\left\{(w_1, w_2) : [w_1 \;\; w_2] \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}^{-1} \begin{bmatrix} w_1 \\ w_2 \end{bmatrix} \le K \right\}   (2.26)

which corresponds, in the new coordinate system defined by the axes w₁ and w₂, to the locus bordered by an ellipse aligned with those axes. Given that v₁ and v₂ are unit-norm orthogonal vectors, the coordinate transformation defined by (2.25) corresponds to a rotation of the coordinate system (x, y) around its origin by an angle

\alpha = \frac{1}{2}\tan^{-1}\left(\frac{2\rho\sigma_X\sigma_Y}{\sigma_X^2 - \sigma_Y^2}\right), \qquad -\frac{\pi}{4} \le \alpha \le \frac{\pi}{4}, \quad \sigma_X \ne \sigma_Y.   (2.27)

Evaluating (2.26) yields

\left\{(w_1, w_2) : \frac{w_1^2}{K\lambda_1} + \frac{w_2^2}{K\lambda_2} \le 1\right\}   (2.28)

which corresponds to an ellipse having
- w₁-axis with length 2√(Kλ₁),
- w₂-axis with length 2√(Kλ₂),

as represented in Figure 2.6.

22 w w Figure.6: Ellipses non-aligned with the coordinate ais and. 1

Chapter 3

Properties

Let X and Y be two jointly distributed Gaussian random vectors, of dimension n and m, respectively, i.e.,

X \sim N(m_X, \Sigma_X), \qquad Y \sim N(m_Y, \Sigma_Y),

with Σ_X a square matrix of dimension n and Σ_Y a square matrix of dimension m.

Result 3.1 The conditional pdf of X given Y is

f(x|y) = \frac{1}{\sqrt{(2\pi)^n \det\Sigma}} \exp\left[-\frac{1}{2}(x - m)^T \Sigma^{-1} (x - m)\right] \sim N(m, \Sigma)   (3.1)

with

m = E[X|Y] = m_X + \Sigma_{XY}\,\Sigma_Y^{-1}\,(Y - m_Y)   (3.2)

\Sigma = \Sigma_X - \Sigma_{XY}\,\Sigma_Y^{-1}\,\Sigma_{YX}.   (3.3)

The previous result states that, when X and Y are jointly Gaussian, f(x|y) is also Gaussian, with mean and covariance matrix given by (3.2) and (3.3), respectively.

Result 3.2 Let X ∈ Rⁿ, Y ∈ Rᵐ and Z ∈ Rʳ be jointly distributed Gaussian random vectors. If Y and Z are independent, then

E[X|Y, Z] = E[X|Y] + E[X|Z] - m_X   (3.4)

where E[X] = m_X.

Result 3.3 Let X ∈ Rⁿ, Y ∈ Rᵐ and Z ∈ Rʳ be jointly distributed Gaussian random vectors. If Y and Z are not necessarily independent, then

E[X|Y, Z] = E[X|Y, \tilde{Z}]   (3.5)

where

\tilde{Z} = Z - E[Z|Y]

and

E[X|Y, \tilde{Z}] = E[X|Y] + E[X|\tilde{Z}] - m_X.
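A numerical sketch of Result 3.1 (added here for illustration, assuming NumPy; the joint mean and covariance below are arbitrary but positive definite): given an observed value of Y, the conditional mean (3.2) and covariance (3.3) follow by direct matrix computation.

```python
import numpy as np

# Joint Gaussian [X; Y] with X in R^2 and Y in R^1 (illustrative values)
m = np.array([1.0, 2.0, 0.5])              # stacked mean [m_X ; m_Y]
S = np.array([[2.0, 0.3, 0.5],
              [0.3, 1.5, 0.4],
              [0.5, 0.4, 1.0]])            # stacked covariance [[S_X, S_XY], [S_YX, S_Y]]

SX, SXY = S[:2, :2], S[:2, 2:]
SYX, SY = S[2:, :2], S[2:, 2:]
mX, mY = m[:2], m[2:]

y = np.array([1.2])                        # observed value of Y

# Conditional mean (3.2) and conditional covariance (3.3)
m_cond = mX + SXY @ np.linalg.inv(SY) @ (y - mY)
S_cond = SX - SXY @ np.linalg.inv(SY) @ SYX
print(m_cond)
print(S_cond)
```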

Chapter 4

Covariance matrices and error ellipsoid

Let X be an n-dimensional Gaussian random vector, with X ∼ N(m_X, Σ_X), and consider a constant K₁ ∈ R. The locus for which the pdf f(x) is greater or equal to the specified constant K₁, i.e.,

\left\{x : \frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}} \exp\left[-\frac{1}{2}(x - m_X)^T \Sigma_X^{-1} (x - m_X)\right] \ge K_1\right\}   (4.1)

which is equivalent to

\left\{x : (x - m_X)^T \Sigma_X^{-1} (x - m_X) \le K\right\}   (4.2)

with K = −2 ln((2π)^{n/2} K₁ |Σ|^{1/2}), is an n-dimensional ellipsoid centered at the mean m_X, whose axes are aligned with the cartesian frame only if the covariance matrix Σ is diagonal. The ellipsoid defined by (4.2) is the region of minimum volume that contains a given probability mass under the Gaussian assumption.

When in (4.2), rather than an inequality, there is an equality, i.e.,

\left\{x : (x - m_X)^T \Sigma_X^{-1} (x - m_X) = K\right\},

this locus may be interpreted as a contour of equal probability.

Definition 4.1 Mahalanobis distance. The scalar quantity

(x - m_X)^T \Sigma_X^{-1} (x - m_X) = K   (4.3)

is known as the Mahalanobis distance of the vector x to the mean m_X.

The Mahalanobis distance is a normalized distance, where the normalization is achieved through the covariance matrix. The surfaces on which K is constant are ellipsoids centered at the mean m_X, whose semi-axes are √K times the square roots of the eigenvalues of Σ_X, as seen before. In the special case where the random variables that are the components of X are uncorrelated and have the same variance, i.e., the covariance matrix Σ is a diagonal matrix with all its diagonal elements equal, these surfaces are spheres, and the Mahalanobis distance becomes equivalent to the Euclidean distance.

Figure 4.1: Contours of equal Mahalanobis and equal Euclidean distance around (m_X, m_Y) for a second-order Gaussian random vector

Figure 4.1 represents the contours of equal Mahalanobis and equal Euclidean distance around (m_X, m_Y) for a second-order Gaussian random vector. In other words, any point (x, y) on the ellipse is at the same Mahalanobis distance to the center of the ellipse. Also, any point (x, y) on the circumference is at the same Euclidean distance to the center. This plot highlights the fact that the Mahalanobis distance is weighted by the covariance matrix.

For decision making purposes (e.g., the field-of-view, a validation gate), and given m_X and Σ_X, it is necessary to determine the probability that a given vector will lie within, say, the 90% confidence ellipse or ellipsoid given by (4.3).
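The following sketch (an illustration added here, assuming NumPy; the mean and covariance are arbitrary) contrasts the two distances: two points at the same Euclidean distance from the mean can be at very different Mahalanobis distances, because the covariance weights each direction.

```python
import numpy as np

m = np.array([1.0, 2.0])                       # (m_X, m_Y)
Sigma = np.array([[2.25, 0.75],
                  [0.75, 1.00]])               # illustrative covariance (positive definite)
Sinv = np.linalg.inv(Sigma)

def mahalanobis(p):
    """Mahalanobis distance (4.3) of p to the mean."""
    d = p - m
    return d @ Sinv @ d

p1, p2 = m + np.array([1.5, 0.0]), m + np.array([0.0, 1.5])
print(np.linalg.norm(p1 - m), np.linalg.norm(p2 - m))   # equal Euclidean distances
print(mahalanobis(p1), mahalanobis(p2))                 # different Mahalanobis distances
```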

For a given K, the relationship between K and the probability of lying within the ellipsoid specified by K is, [3],

n = 1: \quad Pr\{x \text{ inside the ellipsoid}\} = 2\,\mathrm{erf}(\sqrt{K})

n = 2: \quad Pr\{x \text{ inside the ellipsoid}\} = 1 - e^{-K/2}

n = 3: \quad Pr\{x \text{ inside the ellipsoid}\} = 2\,\mathrm{erf}(\sqrt{K}) - \sqrt{\frac{2}{\pi}}\,\sqrt{K}\,e^{-K/2}   (4.4)

where n is the dimension of the random vector and erf is the error function defined in (1.5). Numeric values of the above expression for n = 2 are presented in the following table:

Probability   K
50%           1.386
60%           1.833
70%           2.408
80%           3.219
90%           4.605

For a given K the ellipsoid axes are fixed. The probability that a given value of the random vector X lies within the ellipsoid centered at the mean value increases with K. This problem can be stated the other way around: when we specify a fixed probability value, the question is which value of K yields an ellipsoid satisfying that probability. To answer this question, the statistics of K have to be analyzed. The scalar random variable (4.3) has a known distribution, as stated in the following result.

Result 4.1 Given the n-dimensional Gaussian random vector X, with mean m_X and covariance matrix Σ_X, the scalar random variable K defined by the quadratic form

(x - m_X)^T \Sigma_X^{-1} (x - m_X) = K   (4.5)

has a chi-square distribution with n degrees of freedom.

Proof: see, e.g., [1].

The pdf of K in (4.5), i.e., the chi-square density with n degrees of freedom, is (see, e.g., [1])

f(k) = \frac{1}{2^{n/2}\,\Gamma(n/2)}\,e^{-k/2}\,k^{n/2 - 1}

where the gamma function satisfies

\Gamma(1/2) = \sqrt{\pi}, \qquad \Gamma(1) = 1, \qquad \Gamma(n+1) = n\,\Gamma(n).

The probability that the scalar random variable K in (4.5) is less or equal to a given constant χ²_p,

Pr\{K \le \chi_p^2\} = Pr\{(x - m_X)^T \Sigma^{-1} (x - m_X) \le \chi_p^2\} = p,

is given in the following table, where n is the number of degrees of freedom and the sub-index p in χ²_p represents the corresponding probability under evaluation.

n   χ²_{0.995}  χ²_{0.99}  χ²_{0.975}  χ²_{0.95}  χ²_{0.90}  χ²_{0.75}  χ²_{0.50}  χ²_{0.25}  χ²_{0.10}  χ²_{0.05}
1   7.88        6.63       5.02        3.84       2.71       1.32       0.45       0.10       0.02       0.004
2   10.60       9.21       7.38        5.99       4.61       2.77       1.39       0.58       0.21       0.10
3   12.84       11.34      9.35        7.81       6.25       4.11       2.37       1.21       0.58       0.35

From this table we can conclude, for example, that for a third-order Gaussian random vector, n = 3,

Pr\{K \le 6.25\} = Pr\{(x - m_X)^T \Sigma^{-1} (x - m_X) \le 6.25\} = 0.9.
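Both of the tables above follow from chi-square quantiles. A short sketch (an illustration added here, assuming SciPy is available):

```python
from scipy.stats import chi2

# K such that Pr{K <= chi2_p} = p for n = 2 degrees of freedom
# (reproduces the n = 2 row of the table above)
for p in (0.5, 0.75, 0.9, 0.95, 0.99):
    print(f"p = {p:.2f}  K = {chi2.ppf(p, df=2):.3f}")

# The n = 3 example: Pr{K <= 6.25} = 0.90
print(chi2.cdf(6.25, df=3))      # ~ 0.90
print(chi2.ppf(0.90, df=3))      # ~ 6.25
```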

Example 4.1 Mobile robot localization and the error ellipsoid. This example illustrates the use of the error ellipses and ellipsoids in a particular application, the localization of a mobile robot operating in a given environment.

Consider a mobile platform moving in an environment, and let P ∈ R² be the position of the platform relative to a world frame, with two components,

P = \begin{bmatrix} X \\ Y \end{bmatrix}.

The exact value of P is not known, and we have to use a particular localization algorithm to evaluate P. The most common algorithms combine internal and external sensor measurements to yield an estimated value of P. The uncertainty associated with all the quantities involved in this procedure, namely the vehicle model, the sensor measurements, and the environment map representation, leads us to consider P as a random vector. Gaussianity is assumed for simplicity. Therefore, the localization algorithm provides an estimated value of P, denoted as P̂, which is the mean value of the Gaussian pdf, and the associated covariance matrix, i.e.,

P \sim N(\hat{P}, \Sigma_P).

At each time step of the algorithm we do not know the exact value of P, but we have an estimated value P̂ and a measure of the uncertainty of this estimate, given by Σ_P. The evident question is the following: Where is the robot?, i.e., What is the exact value of P? It is not possible to give a direct answer to this question, but rather a probabilistic one. We may answer, for example: Given P̂ and Σ_P, with 90% probability the robot is located in an ellipse centered at P̂ whose border is defined according to the Mahalanobis distance. In this case the value of K in (4.5) will be K = 4.605.

Someone may say that, for the involved application, a probability of 90% is small, and ask for an answer with a higher probability, for example 99%. The answer will be similar but, in this case, the error ellipse will be defined by K = 9.21, i.e., the ellipse will be larger than the previous one.
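Putting the chapter together for this example, the sketch below (an illustration added here, assuming NumPy and SciPy; the numeric values of P̂ and Σ_P are arbitrary) computes the semi-axes and orientation of the 90% and 99% error ellipses, combining the chi-square quantile with the eigendecomposition of Σ_P as in Section 2.2.

```python
import numpy as np
from scipy.stats import chi2

P_hat = np.array([4.0, 3.0])                 # estimated position (illustrative values)
Sigma_P = np.array([[0.09, 0.02],
                    [0.02, 0.04]])           # associated covariance (illustrative values)

def error_ellipse(prob):
    """Semi-axes and orientation of the ellipse containing `prob` probability mass."""
    K = chi2.ppf(prob, df=2)                 # 4.605 for 90%, 9.210 for 99%
    lam, vec = np.linalg.eigh(Sigma_P)       # ascending eigenvalues, orthonormal eigenvectors
    semi_axes = np.sqrt(K * lam)             # cf. the axis lengths 2*sqrt(K*lambda_i) in (2.28)
    angle = np.degrees(np.arctan2(vec[1, 1], vec[0, 1]))  # orientation of the major axis
    return semi_axes, angle

print(error_ellipse(0.90))    # 90% ellipse, centered at P_hat
print(error_ellipse(0.99))    # 99% ellipse: same center and orientation, larger axes
```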

Bibliography

[1] Yaakov Bar-Shalom, X. Rong Li, Thiagalingam Kirubarajan, Estimation with Applications to Tracking and Navigation, John Wiley & Sons, 2001.

[2] A. Papoulis, Probability, Random Variables and Stochastic Processes, McGraw-Hill, 1965.

[3] Randall C. Smith, Peter Cheeseman, On the Representation and Estimation of Spatial Uncertainty, The International Journal of Robotics Research, Vol. 5, No. 4, Winter 1986.
