INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS
Int. J. Numer. Anal. Meth. Geomech. 2002; 26:1515-1529 (DOI: 10.1002/nag.256)

The point-estimate method with large numbers of variables

John T. Christian(1) and Gregory B. Baecher(2)
(1) 23 Fredana Road, Waban, MA 02468, USA
(2) Department of Civil and Environmental Engineering, University of Maryland, College Park, MD 20742, USA

SUMMARY

Rosenblueth's point-estimate method has become widely used in geotechnical practice for reliability calculations. Although the point-estimate method is a powerful and simple technique for evaluating the moments of functions of random variables, it is limited by the need to make 2^n evaluations when there are n random variables. Modifications of the method reduce this to 2n evaluations by using points on the diameters of a hypersphere instead of at the corners of the inscribed hypercube. However, these techniques force the co-ordinates of the evaluation points farther from the means of the variables; for a bounded variable, the points may easily fall outside its domain of definition. The problem can be avoided by using other techniques in some special cases or by reducing the number of random variables that must be considered. Copyright © 2002 John Wiley & Sons, Ltd.

THE BASIC METHOD

The point-estimate method, originally proposed by Rosenblueth [1, 2], is a simple but powerful technique for evaluating the moments of functions of random variables, and it has been adopted in many geotechnical reliability analyses. Miller and Rice [4] and Christian and Baecher [5] have shown that the point-estimate method is a form of Gaussian quadrature. Despite its simplicity, it can be accurate in many practical situations [3-5]. In this paper the authors describe current methods for dealing with the computational burden that arises when the number of variables becomes large and show that these methods are themselves problematical.
Examples from geotechnical reliability practice are used to illustrate both the difficulties and some alternative approaches.

As in any Gaussian quadrature scheme, including more points in the calculation increases the order of the polynomial functions that are integrated exactly, but the most widely used form of the method employs two points per variable. When there is one independent variable X, the two values of X at which the calculations are made are chosen such that

x_1 = \mu_X + \sigma_X [ \nu_X/2 - \sqrt{1 + (\nu_X/2)^2} ]    (1)

Correspondence to: J. T. Christian, Consulting Engineer, 23 Fredana Road, Waban, MA 02468, USA. (J. T. Christian is a Consulting Engineer; G. B. Baecher is Professor and Chairman.)
Received 18 June 2002. Revised 17 August 2002.
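The two evaluation points and weights of Equations (1)-(5) (the second point and the weight formulas follow below) can be collected into a small helper. The following sketch (Python; the function name and the numerical values are ours, chosen to approximate the lognormal N_gamma example discussed later) verifies that the two-point scheme reproduces the mean, variance, and skewness exactly:

```python
import math

def rosenblueth_two_point(mu, sigma, nu):
    """Rosenblueth's two-point estimate: points x1 (low), x2 (high), weights P1, P2."""
    half = nu / 2.0
    s = math.sqrt(1.0 + half ** 2)
    x1 = mu + sigma * (half - s)   # Equation (1)
    x2 = mu + sigma * (half + s)   # Equation (2)
    p1 = 0.5 * (1.0 + half / s)    # Equation (4): larger weight on the low point
    p2 = 0.5 * (1.0 - half / s)    # Equation (5):   when the skewness is positive
    return x1, x2, p1, p2

# Illustrative moments (assumed, roughly those of N_gamma at phi = 35 degrees)
mu, sigma, nu = 54.1, 16.25, 0.928
x1, x2, p1, p2 = rosenblueth_two_point(mu, sigma, nu)

# The scheme matches the first three moments of the variable:
mean = p1 * x1 + p2 * x2
var = p1 * (x1 - mean) ** 2 + p2 * (x2 - mean) ** 2
skew = (p1 * (x1 - mean) ** 3 + p2 * (x2 - mean) ** 3) / var ** 1.5
```

This moment-matching property is exactly the Gaussian-quadrature character of the method noted above.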
x_2 = \mu_X + \sigma_X [ \nu_X/2 + \sqrt{1 + (\nu_X/2)^2} ]    (2)

where \mu_X is the mean of X, \sigma_X is the standard deviation, and \nu_X is the skewness coefficient. The function Y(x) is then evaluated at each of the points, and the moments of Y are estimated from the weighted sum

E[Y^m] \approx P_1 y_1^m + P_2 y_2^m    (3)

where y_1, y_2 are the values of Y at x_1, x_2, and

P_1 = (1/2) [ 1 + (\nu_X/2) / \sqrt{1 + (\nu_X/2)^2} ]    (4)

P_2 = (1/2) [ 1 - (\nu_X/2) / \sqrt{1 + (\nu_X/2)^2} ]    (5)

When \nu_X = 0, Equations (1)-(5) reduce to

x_1 = \mu_X - \sigma_X    (6)

x_2 = \mu_X + \sigma_X    (7)

P_1 = P_2 = 1/2    (8)

As an example, consider a bearing capacity prediction based on Terzaghi's superposition method [6], which is partly theoretical and partly empirical. The ultimate bearing capacity of a shallow, concentrically loaded strip footing on a homogeneous soil is commonly determined from this method as

q_v = c N_c + q N_q + (1/2) \gamma B N_\gamma    (9)

in which q_v is the ultimate bearing capacity for a vertical concentric load; N_c, N_q, and N_\gamma are the bearing capacity factors; B is the foundation width; q the uniform surcharge around the foundation; c the effective soil cohesion; and \gamma the unit weight. For a foundation initially on the surface of a cohesionless soil (c = q = 0), this reduces to

q_v = (1/2) \gamma B N_\gamma    (10)

Ingra and Baecher [7] performed logarithmic regression analyses on 145 model and prototype footing test results (Figure 1) to estimate the expected value and variance of N_\gamma for footings with a length-to-width ratio of 1.0 as

E[N_\gamma] = \exp(-2.064 + 0.173 \phi)    (11)

Var[N_\gamma] = (0.0902) \exp(-4.128 + 0.346 \phi)

The uncertainty in N_\gamma is lognormally distributed, with skewness coefficient 0.928 [8]. Using the point-estimate method to obtain the mean and variance of the bearing capacity, from Equations (1) and (2) the calculations would be made at the two values of the bearing capacity coefficient
Figure 1. Theoretical and empirical results for N_\gamma for length/width = 1.

x_1 = \exp(-2.064 + 0.173 \phi) - (0.638) \sqrt{(0.0902) \exp(-4.128 + 0.346 \phi)}

x_2 = \exp(-2.064 + 0.173 \phi) + (1.566) \sqrt{(0.0902) \exp(-4.128 + 0.346 \phi)}    (12)

which for \phi = 35° become x_1 = 43.7 and x_2 = 79.6, with respective weights P_1 = 0.710 and P_2 = 0.290. Figure 2 shows the probability density function of N_\gamma and the locations of points x_1 and x_2.

When there are n independent variables, the procedure is to select two values for each X_i and to establish the integration points as the 2^n points defined by the intersections of these values. These are the corners of a hypercube (or hyperparallelepiped) in n-dimensional space. They also lie on a circumscribing hyperellipsoid. Figures 3(a) and 3(b) illustrate the case for three variables. Subtracting the mean from each variable and dividing the result by the standard deviation normalizes the co-ordinates; thus, the mean of each normalized variable is at the origin, and the ordinates are expressed in units of standard deviation. The weight assigned to the value of Y at each point is the product of the n values of P computed from Equations (4) and (5) or (8). The analogy to the so-called Gauss points used to integrate finite element terms will be obvious. If the variables are correlated, the weight is multiplied by a factor [1 + \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} (\pm \rho_{ij})], where \rho_{ij} is the correlation coefficient between variables X_i and X_j. The positive sign is used if the values of X_i and X_j are both above or both below their means, and the negative sign is
used if one value is above its mean and the other below. This is all part of Rosenblueth's original method.

Figure 2. Points at which the bearing capacity is calculated using the point-estimate method with the one uncertain variable, the bearing capacity factor N_\gamma (B/L = 1), using the empirical results of Ingra and Baecher [7]. The lower and upper calculation points are shown against the lognormal pdf describing the uncertainty in N_\gamma.

MODIFICATIONS FOR LARGE NUMBERS OF VARIABLES

A disadvantage of the method is that the number of points used in the calculation is equal to 2^n, where n is the number of random variables. Most analysts find the threshold of practical computation to be at n = 5 or 6, depending on the complexity of the function Y. For example, the literature on bearing capacity contains many theoretical derivations as well as experimental results from model tests and footings in the field. Modification of Equation (10) for load eccentricity, load inclination, foundation shape, and foundation size introduces several correction factors, so that Equation (10) becomes

q_v = (1/2) \gamma B N_\gamma R_\gamma S_\gamma E_\gamma I_\gamma    (13)

in which R_\gamma, S_\gamma, E_\gamma, and I_\gamma are, respectively, correction factors for size, shape, load eccentricity, and load inclination. This generates n = 5 uncertain quantities, leading to 32 calculations, simply for the case that \gamma and \phi are known. If \gamma and \phi are not known, then n = 7 and the number of calculations becomes 128. The number of variables in a reliability analysis can easily exceed this; structural reliability problems often involve thousands of random variables.

In a case closer to geotechnical practice, when using any method of slices for a limiting equilibrium slope stability calculation and presuming the slices are taken wide enough to
minimize the complicating effects of spatial correlation, the number of uncertain quantities could be two, three, or even more times the number of slices (i.e. at least \gamma, c, and \phi for each slice).

Figure 3. Distribution of evaluation points for three uncertain variables. The variables have been normalized by subtracting the means and dividing by the standard deviations. (a) Co-ordinate system. (b) Black squares are the eight points in the original Rosenblueth procedure. (c) Ellipses are circular arcs defining a sphere circumscribed around the Rosenblueth points. (d) Black dots are the intersections of the circumscribed sphere with the co-ordinate axes, which are the six points in the Harr, Hong, and Lind procedures.

Several authors have proposed methods to deal with this problem, in particular Lind [9], Harr [3, 10], and Hong [11, 12]. Lind [9] developed methods for locating the evaluation points when the variables are unskewed but correlated. Lind's first method is similar to Harr's, which is described in the next paragraph. Lind's second method develops a set of equally weighted points whose co-ordinates are found from a Cholesky decomposition of the correlation matrix. The co-ordinates of the evaluation points have a term added to or subtracted from the mean co-ordinates that is proportional to \sqrt{n}, modified by the contributions of the correlation terms. In
the limiting case of uncorrelated variables, the co-ordinates and the weights of the evaluation points are identical to those of Harr's and Hong's methods.

Harr [10] deals primarily with the case in which all the skewness coefficients are zero but the variables may be correlated. The first step is to normalize each variable with respect to its standard deviation and to rotate the co-ordinate system into one defined by the eigenvectors of the correlation matrix. If each of the n new co-ordinates is denoted Z_i, the co-ordinates of the new evaluation points are defined by Equations (6) and (7) with the X's replaced by Z's. This defines a hypercube in the Z co-ordinate system. Harr then circumscribes a hypersphere that touches the hypercube at the evaluation points. Considering each variable Z_i in turn, he holds the other n-1 variables constant at their mean values and extends the Z_i variable until it intersects the surface of the hypersphere. The effect is the same as passing each of the orthogonal eigenvectors through the centroid of the circumscribing hypersphere and finding its intersections with the surface of the hypersphere. The co-ordinates of the two points for variable Z_i are \mu_{Z_i} \pm \sqrt{n} \sigma_{Z_i}. The procedure gives 2n points, each weighted by 1/(2n). Harr also recommends further weighting the contribution at each point in proportion to the eigenvalue of the vector that defined it.

Hong [11, 12] deals with the case in which the variables are uncorrelated but may be skewed. He proceeds with each variable in turn, holding the other n-1 variables at their means, and selects the co-ordinates of the two points at

x_{1,2} = \mu_X + \sigma_X [ \nu_X/2 \mp \sqrt{n + (\nu_X/2)^2} ]    (14)

The corresponding weights are

P_{1,2} = (1/(2n)) [ 1 \pm (\nu_X/2) / \sqrt{n + (\nu_X/2)^2} ]    (15)

This gives 2n points.
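Hong's construction can be sketched as follows (Python; a generic sketch under our own naming, not code from the paper). Each variable is perturbed in turn while the others sit at their means, producing 2n full coordinate vectors whose weights sum to one:

```python
import math

def hong_points(means, sigmas, skews):
    """2n evaluation points and weights per Equations (14)-(15).

    Each point is a full coordinate vector: variable i is perturbed,
    all other variables are held at their means."""
    n = len(means)
    points, weights = [], []
    for i in range(n):
        half = skews[i] / 2.0
        s = math.sqrt(n + half ** 2)
        for sign in (-1.0, +1.0):
            x = list(means)
            x[i] = means[i] + sigmas[i] * (half + sign * s)   # Equation (14)
            points.append(x)
            # Equation (15): the lower point carries the larger weight
            # when the skewness is positive.
            weights.append((1.0 - sign * half / s) / (2.0 * n))
    return points, weights

# Three hypothetical uncorrelated variables, one of them skewed:
pts, wts = hong_points([10.0, 5.0, 2.0], [2.0, 1.0, 0.5], [0.0, 0.0, 0.9])
```

Note that for an unskewed variable the perturbed coordinate sits at \mu \pm \sqrt{n} \sigma, exactly Harr's points.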
It is equivalent to circumscribing a hyperellipsoid through the 2^n points of the Rosenblueth hyperparallelepiped and then choosing points at the opposite ends of each axis of the hyperellipsoid. When the variables are all independent and unskewed, Harr's and Hong's methods are identical. Figures 3(c) and 3(d) illustrate the above procedures for three variables that are uncorrelated and unskewed. When the number of variables n exceeds three, the procedures are the same but involve n-dimensional space, which is difficult to visualize or to draw.

Others have also proposed methods of reducing the number of evaluation points. In his original English-language paper Rosenblueth [1] proposed that (in his notation)

\bar{Y}/\bar{y} \approx (\bar{Y}_1/\bar{y})(\bar{Y}_2/\bar{y}) \cdots (\bar{Y}_n/\bar{y})    (16)

and

1 + V_Y^2 \approx (1 + V_{Y_1}^2)(1 + V_{Y_2}^2) \cdots (1 + V_{Y_n}^2)    (17)

where \bar{Y} = E[Y], \bar{y} is the value of Y calculated with all variables at their mean values, \bar{Y}_i is the mean value of Y calculated with all but the ith variable held constant at their mean values, V_Y is the coefficient of variation that is to be found (i.e. \sigma_Y / \mu_Y), and V_{Y_i} is the coefficient of variation
calculated as if the ith variable were the only random variable and all the others were held constant at their mean values. The method requires 2n + 1 evaluations of Y, but it applies only to the case of independent and unskewed variables.

Li [13] developed a special procedure that applies in the special case that Y is of the form

Y(x) = f(\mu) + \sum_i (a_i x_i + b_i x_i^2 + c_i x_i^3 + d_i x_i^4) + \sum_{i<j} e_{ij} x_i x_j    (18)

This requires (n^3 + 3n + 2)/2 evaluations for n correlated variables and reduces to 2n + 1 evaluations when the variables are uncorrelated.

Figure 4 shows how the number of calculations increases with increasing n for the various point-allocation schemes. The vertical scale is logarithmic, so it is clear that there is much to be gained by reducing the number of evaluation points when the number of uncertain variables is large. It should be noted that similar schemes to reduce the number of computational points are found in the literature on numerical analysis, composite experiment design, and importance sampling for Monte Carlo simulation.

Figure 4. Numbers of calculations by the various algorithms: 2^n, (n^3 + 3n + 2)/2, and 2n + 1, plotted on a logarithmic scale against the number of uncertain quantities.

HOW A LARGE NUMBER OF VARIABLES AFFECTS THE LOCATION OF THE POINTS

The distance between the mean of each variable and the value that is to be entered in the calculations is proportional to \sqrt{n} in Harr's method and approaches proportionality to \sqrt{n} in Hong's and Lind's methods as n becomes large. This happens because (a) the procedures
identify the calculation point as the intersection of a radius with the surface of a hypersphere and (b) the radius of a hypersphere circumscribing a unit hypercube is \sqrt{n}. Regardless of the distributions of the other variables, the computational procedure will move the evaluation point farther away from the mean of each variable as the number of variables increases.

Figure 5. Evaluation points. Black points are the original Rosenblueth points for two uncorrelated, unskewed variables. Open points are those defined by a point-reduction scheme on a circle circumscribed around the four points. The large circle is a two-dimensional cut through the nine-dimensional hypersphere for nine variables, and the open points on this circle are the locations of the points used in point-reduction schemes.

Figure 5 illustrates this point. Once again, the co-ordinates have been normalized. The four black points are the four points in the conventional Rosenblueth procedure for two unskewed, uncorrelated variables. The four open points on the circle drawn through the black points are the evaluation points that would be used in the Harr, Lind, or Hong procedures. There is obviously no advantage in doing this for two variables, as four points are needed in either case. However, if there were nine variables, a reduction from 512 to 18 evaluations would be well worth the effort. The outer circle in Figure 5 represents a two-dimensional section through the nine-dimensional hypersphere, and the evaluation points now lie three standard deviations from the mean. If there were sixteen variables, the evaluation points would be four standard deviations away. Thus, critical values of the variables will be located far from the domain where the distributions are best known.

Figure 6 shows the effect for the lognormally distributed N_\gamma factor. This is the same as Figure 2, except that the evaluation points for n = 5 and 10 have been added.
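The drift of the evaluation points in Figure 6 can be reproduced numerically. In this sketch (Python; the moments are our approximate values for the \phi = 35° case, used only for illustration) Hong's lower point marches away from the data-rich central region as n grows:

```python
import math

def hong_pair(mu, sigma, nu, n):
    """Hong's two points for one variable when n variables are present (Equation (14))."""
    half = nu / 2.0
    s = math.sqrt(n + half ** 2)
    return mu + sigma * (half - s), mu + sigma * (half + s)

# Approximate N_gamma moments at phi = 35 degrees (assumed values)
mu, sigma, nu = 54.1, 16.25, 0.928
lows = {n: hong_pair(mu, sigma, nu, n)[0] for n in (1, 5, 10)}
# With n = 1 the lower point sits near 43.7, inside the well-populated
# range of the footing-test data; with n = 5 and n = 10 it falls far
# below the region where the data are dense.
```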
Especially when n = 10, the evaluation points have moved quite far from the central region, where the original data were densest.

Because the distributions of many variables encountered in civil engineering are not well known, and because there may be physical reasons for limiting their ranges, it is common practice to use uniform, triangular, or Beta distributions. When a variable has such a bounded distribution, the values of the variable to be used in the point-estimate calculations may fall outside the bounds. Figure 7 shows what happens in the case of a triangular distribution
Figure 6. Points at which the bearing capacity is calculated using the point-estimate method with 5 and 10 variables for the bearing capacity factor N_\gamma (B/L = 1), using the empirical results of Ingra and Baecher [7].

Figure 7. The number of uncertain variables necessary to move the evaluation points in Hong's method below 0 (solid lines) or above 1 (dashed lines) for a triangular distribution between 0 and 1. The parameter c is the co-ordinate of the third, or peak, point defining the triangular distribution.
f(x) between 0 and 1 with its peak at x = c, 0 \le c \le 1. The solid lines describe, as a function of c, the number of uncertain variables necessary to move the co-ordinate of the lower evaluation point in Hong's method below zero, and the dashed lines depict the number of uncertain variables that move the upper point above unity. Thus, when c = 0.2, five variables will move the low point below zero, and seven variables will move the upper point above unity. Both evaluation points fall outside the range of definition when n > 7. In the extreme case that the peak of the triangle occurs at one end of the distribution (i.e. c = 0 or 1), one evaluation point falls outside that end when the number of variables is as low as 3.

The Beta distribution allows a large range of distribution shapes between finite bounds:

f(x | r, q) \propto x^{r-1} (1 - x)^{q-1}  for 0 \le x \le 1, and f = 0 otherwise    (19)

Table I shows that, for combinations of parameters typical of those encountered in practice, the evaluation points fall outside the boundaries of definition even for relatively small numbers of variables. Only in the strongly skewed case of q = 2 and r = 8 does the number of variables needed to push the evaluation point beyond the far limit of the distribution become large (39), and in that case the number of variables required to locate the second evaluation point beyond the near limit is only 5.

A value of a variable falling outside its limits of definition can be physically unreasonable, for example, negative values of cohesive strength, friction angle, or construction time. Values outside the limits or far from the means may also be unacceptable because they are used in functions that become unrealistic for extreme values of the arguments. Some functions, such as ln X for X \le 0, may not even have a real definition.
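The thresholds plotted in Figure 7 can be checked directly. The sketch below (Python; the moment formulas for the triangular distribution are standard, and the search loop is our own) finds the smallest n at which Hong's lower point drops below 0 or the upper point rises above 1 for a triangular distribution on [0, 1] with peak c:

```python
import math

def tri_moments(c):
    """Mean, standard deviation, and skewness of a triangular distribution on [0, 1] with peak at c."""
    mu = (1.0 + c) / 3.0
    sd = math.sqrt((1.0 - c + c * c) / 18.0)
    nu = math.sqrt(2.0) * (1.0 - 2.0 * c) * (1.0 + c) * (2.0 - c) \
         / (5.0 * (1.0 - c + c * c) ** 1.5)
    return mu, sd, nu

def first_n_outside(c):
    """Smallest numbers of variables at which Hong's lower/upper points leave [0, 1]."""
    mu, sd, nu = tri_moments(c)
    half = nu / 2.0
    n_low = n_high = None
    for n in range(1, 200):
        s = math.sqrt(n + half * half)
        if n_low is None and mu + sd * (half - s) < 0.0:   # lower point below 0
            n_low = n
        if n_high is None and mu + sd * (half + s) > 1.0:  # upper point above 1
            n_high = n
        if n_low and n_high:
            break
    return n_low, n_high

n_low, n_high = first_n_outside(0.2)   # the c = 0.2 case discussed in the text
```

For c = 0.2 this reproduces the thresholds quoted above (5 variables for the lower point, 7 for the upper), and with the peak at an end (c = 1) one point leaves the interval at n = 3.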
A PRACTICAL CASE WITH BOUNDED DISTRIBUTIONS

A recent study [14] of the reliability of the slopes of the Chuquicamata copper mine (Figure 8) dealt with two typical cases: one a failure of a particular bench, and the other a failure of a 190 m high wedge. Both analyses used the sliding-wedge analytical procedures well known in rock mechanics. For the bench failure seven variables were identified as uncertain; for the large

Table I. Number of variables causing evaluation points to fall outside the limits of a beta distribution. (Columns: the Beta parameters q and r, and the numbers of variables at which the evaluation points pass the lower and upper limits.)
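The strongly skewed Beta entry of Table I can also be reproduced. The sketch below (Python; we write the density as f \propto x^{a-1}(1-x)^{b-1} and use the standard Beta moment formulas) shows that the near bound is crossed with only 5 variables while the far bound requires 39:

```python
import math

def beta_moments(a, b):
    """Mean, standard deviation, and skewness of a Beta(a, b) distribution on [0, 1]."""
    mu = a / (a + b)
    sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1.0)))
    nu = 2.0 * (b - a) * math.sqrt(a + b + 1.0) / ((a + b + 2.0) * math.sqrt(a * b))
    return mu, sd, nu

def first_n_outside(a, b):
    """Smallest n at which Hong's lower/upper evaluation points leave [0, 1]."""
    mu, sd, nu = beta_moments(a, b)
    half = nu / 2.0
    n_low = n_high = None
    for n in range(1, 500):
        s = math.sqrt(n + half * half)
        if n_low is None and mu + sd * (half - s) < 0.0:
            n_low = n
        if n_high is None and mu + sd * (half + s) > 1.0:
            n_high = n
        if n_low and n_high:
            break
    return n_low, n_high

# Strongly skewed case: mean 0.8, so the upper bound is near and the lower far.
n_low, n_high = first_n_outside(8.0, 2.0)   # near bound at n = 5, far bound at n = 39
```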
Figure 8. Photograph of the Chuquicamata copper mine in northern Chile; at 750 m it is one of the deepest excavations in the world.

Table II. Parameters for uncertain variables in the reliability analysis of the mine slopes.

Bench wedge
  Variable              Distribution   Low          High         Mode
  Cohesion              Uniform        0 MPa        0.12 MPa     0.06 MPa
  Friction angle        Triangular
  Joint continuity      Triangular     90%          100%         95%
  Depth to water        Uniform        0 m          25 m         12.5 m
  Dip: joint set 1      Uniform        77°SW        80°SW        78.5°SW
  Dip: joint set 2      Uniform        75°NE        80°NE        77.5°NE
  Strike: joint set 2   Uniform        N50W         N55W         N52.5W

190 m high wedge
  Variable              Distribution   Low          High         Mode
  Cohesion              Normal         30 kPa (\mu - \sigma)   70 kPa (\mu + \sigma)   50 kPa
  Friction angle        Triangular
  Joint continuity      Uniform        0%           100%         50%
  Depth to water        Uniform        0 m          190 m        95 m

wedge, four uncertain variables were used. Table II gives the parameters for the uncertain variables in both cases. The reliability analyses used the first-order second-moment (FOSM) and the original point-estimate methods. The FOSM method involves expanding the functions in Taylor series about the means, linearizing by discarding terms of order two or higher, and computing the first and second moments from the results. To reduce computational effort, the
number of uncertain variables in the bench case for the point-estimate method was reduced to four (joint continuity, cohesion, and the dips of the two joint sets) by ignoring the contributions of the other three. The original Rosenblueth formulations were used, with no attempt to employ any of the point-reduction techniques described above. The factor of safety was taken to be Normally distributed.

Riela and her colleagues [14] reported the results listed in Table III. The last two rows show that the results of the FOSM and PE methods are in good agreement for the large wedge (190 m). FOSM requires nine evaluations, and PE requires 16, so the computational efforts are comparable. These results indicate that the probability of failure of this wedge is in the region of 13-17%. If more accurate estimates are needed, the analysis should resort to more refined methods such as the Hasofer-Lind first-order reliability method (FORM) or Monte Carlo simulation. For most purposes, the results are adequate.

The results for the bench, in the second and third rows of Table III, show quite a different situation. While the values of the reliability index and probability of failure are reasonably close, the expected values of the factor of safety and the standard deviations are radically different. Part of the explanation lies in the fact that the failure wedges in this case are very narrow, with nearly parallel failure planes. Small differences in many of the parameters throw the sliding-wedge analysis into a different failure mode. In other words, this is a very non-linear problem, and different results from different analytical approaches alert the analyst to the need for more accurate techniques.
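The FOSM calculation described above is easy to sketch numerically. A minimal version (Python; finite-difference derivatives, our own generic implementation rather than the study's code):

```python
def fosm(f, means, sigmas, h=1e-6):
    """First-order second-moment estimate for independent variables:
    E[Y] ~ f(means), Var[Y] ~ sum_i (dY/dx_i)^2 sigma_i^2.

    Central differences use 2n + 1 evaluations of f, which matches the
    count of nine evaluations quoted in the text for n = 4."""
    mean = f(means)
    var = 0.0
    for i, s in enumerate(sigmas):
        up = list(means); up[i] += h
        dn = list(means); dn[i] -= h
        dfdx = (f(up) - f(dn)) / (2.0 * h)   # numerical partial derivative
        var += (dfdx * s) ** 2
    return mean, var

# For a linear function the first-order approximation is exact:
m, v = fosm(lambda x: 3.0 * x[0] - 2.0 * x[1], [1.0, 4.0], [0.5, 0.25])
```

For strongly non-linear limit-state functions such as the narrow sliding wedge, the discarded higher-order terms are exactly what makes FOSM and PEM disagree.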
Table III. Reliability analysis results for two cases at the Chuquicamata copper mine. (Columns: Case, E[FS], \sigma_{FS}, \beta, p_f (%). Rows: small bench failure by FOSM, small bench failure by PEM, large wedge failure by FOSM, large wedge failure by PEM.)

EFFECT OF POINT-REDUCTION SCHEMES ON EVALUATION POINTS

The point-estimate analyses for the Chuquicamata mine slopes were carried out with the original Rosenblueth distribution of points; no effort was made to reduce the number of computation points by any of the above schemes. To illustrate the effect such schemes would have had on the distribution of evaluation points, it is instructive to look at the friction angle \phi, which has a symmetric triangular distribution. Figure 9 shows the probability density function (pdf) for \phi as well as the values of \phi at several evaluation points. Those labelled n = 1 are the points from the original Rosenblueth formulation actually used in the analysis. Those labelled n = 4 are the values that would be used in the evaluation-reduction schemes when there are four variables. Those labelled n = 7 are the values for the evaluation-reduction schemes for seven variables. In the latter case the points fall outside the range of definition of the variable. Figure 10 shows the results if \phi were assumed to vary uniformly between 11° and 45°. Then the evaluation points would fall outside the range of definition even when there were only four uncertain variables.
Figure 9. Probability density function (pdf) for the triangularly distributed friction angle in the Chuquicamata analysis and the points of evaluation for one, four, and seven variables. Lines indicate the locations of the points for different numbers of variables, n.

Figure 10. Probability density function (pdf) for the uniformly distributed friction angle in the Chuquicamata analysis and the points of evaluation for one, four, and seven variables.
CONCLUSIONS AND RECOMMENDATIONS

All three of the simple ways to modify the point-estimate method to reduce the number of evaluations cause the evaluation points to move far from the central region of the distributions. In the case of bounded distributions the evaluation points may move beyond the domain of the distribution. This can happen even when n is quite small, sometimes with as few as three variables. The phenomenon is controlled by the number of variables and does not result from a poor choice of probability distributions. Since the main reason for adopting the modified point-estimate methods is to make it feasible to deal with large numbers of variables, this problem is a serious impediment.

Lind's, Harr's, and Hong's methods all involve replacing the points at the 2^n corners of the hypercube with 2n points at or near the intersections of the circumscribing hypersurface with its principal axes. Since the radius of a hypersphere circumscribing a unit hypercube of n dimensions is \sqrt{n}, any of these procedures must move the evaluation points farther away from the means as n increases. Locating the evaluation points on the surface of the hypercube could prevent this, but at the price of destroying the basis of the point-estimate method, which is Gaussian quadrature.

There seem to be no simple, elegant ways out of this dilemma, but several alternatives suggest themselves:

1. One could use the unmodified point-estimate method. The number of evaluations will be large (2^n), but the cost may be bearable.
2. When the variables are uncorrelated and unskewed, Rosenblueth's [1] own modification, as presented in Equations (16) and (17), could be used.
3. Under the conditions of Equation (18), Li's [13] method is appropriate.
4. Monte Carlo simulation, especially incorporating variance-reduction schemes, is an increasingly attractive alternative in view of the greatly reduced cost of simulation in recent years.
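Alternative 2 above can be sketched compactly. The following (Python; our own generic implementation of Equations (16) and (17), applied to a product function for which the multiplicative approximation happens to be exact) uses 2n + 1 evaluations:

```python
def rosenblueth_2n1(f, means, sigmas):
    """Approximate E[Y] and Var[Y] via Equations (16)-(17): vary one unskewed,
    independent variable at a time (points mu_i +/- sigma_i, weights 1/2),
    for 2n + 1 evaluations in all."""
    y0 = f(means)                                # all variables at their means
    mean_ratio, var_factor = 1.0, 1.0
    for i, s in enumerate(sigmas):
        up = list(means); up[i] += s
        dn = list(means); dn[i] -= s
        yi_mean = 0.5 * (f(up) + f(dn))          # Y-bar_i
        yi_var = 0.5 * ((f(up) - yi_mean) ** 2 + (f(dn) - yi_mean) ** 2)
        mean_ratio *= yi_mean / y0               # Equation (16)
        var_factor *= 1.0 + yi_var / yi_mean**2  # Equation (17): 1 + V_i^2
    mean = y0 * mean_ratio
    var = (var_factor - 1.0) * mean ** 2         # V_Y = sigma_Y / mu_Y
    return mean, var

# Y = X1 * X2 with independent factors: the multiplicative forms are exact here.
m, v = rosenblueth_2n1(lambda x: x[0] * x[1], [10.0, 5.0], [2.0, 1.0])
```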
Finally, when faced with a large number of uncertain variables, the analyst would do well to ask whether all of them need to be carried along in the analysis. Relatively simple sensitivity studies will often reveal that the uncertainties in many of the variables have little effect on the results and that other variables can be combined. The reduction in the number of active variables not only makes the computations more tractable but also increases the chances that the results can be interpreted. However, Lind [9] warns that removing some of the smaller contributors will reduce the overall variance, and one should be mindful of this effect.

REFERENCES

1. Rosenblueth E. Point estimates for probability moments. Proceedings of the National Academy of Sciences 1975; 72(10).
2. Rosenblueth E. Two-point estimates in probabilities. Applied Mathematical Modelling 1981; 5(2).
3. Harr ME. Reliability-Based Design in Civil Engineering. McGraw-Hill: New York, 1987.
4. Miller III AC, Rice TR. Discrete approximations of probability distributions. Management Science 1983; 29(3).
5. Christian JT, Baecher GB. The point-estimate method as numerical quadrature. Journal of Geotechnical and Geoenvironmental Engineering, ASCE 1999; 125(9).
6. Terzaghi K. Theoretical Soil Mechanics. Wiley: New York, 1943.
7. Ingra TS, Baecher GB. Uncertainty in bearing capacity of sands. Journal of Geotechnical Engineering, ASCE 1983; 109(7).
8. Aitchison J, Brown JAC. The Lognormal Distribution: with Special Reference to its Uses in Economics. University of Cambridge Department of Applied Economics Monographs, vol. 5. Cambridge University Press: Cambridge, 1957.
9. Lind NC. Modelling uncertainty in discrete dynamical systems. Applied Mathematical Modelling 1983; 7(3).
10. Harr ME. Probabilistic estimates for multivariate analyses. Applied Mathematical Modelling 1989; 13(5).
11. Hong HP. Point-estimate moment-based reliability analysis. Civil Engineering Systems 1996; 13(4).
12. Hong HP. An efficient point estimate method for probabilistic analysis. Reliability Engineering and System Safety 1998; 59(3).
13. Li KS. Point-estimate method for calculating statistical moments. Journal of Engineering Mechanics, ASCE 1992; 118(7).
14. Riela J, et al. Sliding rock wedge reliability analysis of Chuquicamata mine slopes. In Panamerican Conference on Soil Mechanics and Geotechnical Engineering, Foz do Iguaçu, Brazil, 1999.