Linear Models and Conjoint Analysis with Nonlinear Spline Transformations
Warren F. Kuhfeld
Mark Garratt

Abstract

Many common data analysis models are based on the general linear univariate model, including linear regression, analysis of variance, and conjoint analysis. This chapter discusses the general linear model in a framework that allows nonlinear transformations of the variables. We show how to evaluate the effect of each transformation. Applications to marketing research are presented.

Why Use Nonlinear Transformations?

In marketing research, as in other areas of data analysis, relationships among variables are not always linear. Consider the problem of modeling product purchasing as a function of product price. Purchasing may decrease as price increases. For consumers who consider price to be an indication of quality, purchasing may increase as price increases but then start decreasing as the price gets too high. The number of purchases may be a discontinuous function of price with jumps at round numbers such as even dollar amounts. In any case, it is likely that purchasing behavior is not a linear function of price. Marketing researchers who model purchasing as a linear function of price may miss valuable nonlinear information in their data. A transformation regression model can be used to investigate the nonlinearities. The data analyst is not required to specify the form of the nonlinear function; the data suggest the function.

The primary purpose of this chapter is to suggest the use of linear regression models with nonlinear transformations of the variables, that is, transformation regression models. It is common in marketing research to model nonlinearities by fitting a quadratic polynomial model. Polynomial models often have collinearity problems, but those can be overcome with orthogonal polynomials. The problem that polynomials cannot overcome is the fact that polynomial curves are rigid; they do not do a good job of locally fitting the data. Piecewise polynomials, or splines, are generalizations of polynomials that provide more flexibility than ordinary polynomials.

This chapter is a revision of a paper that was presented to the American Marketing Association, Advanced Research Techniques Forum, June 14–17, 1992, Lake Tahoe, Nevada. The authors are: Warren F. Kuhfeld, Manager, Multivariate Models R&D, SAS Institute Inc., Cary, NC. Mark Garratt was with Conway Milliken & Associates when this paper was presented and is now with In4mation Insights. Copies of this chapter (MR-2010J), sample code, and all of the macros are available on the Web. All plots in this chapter are produced using ODS Graphics. For help, please contact SAS Technical Support. See page 25 for more information.
Background and History

The foundation for our work can be found mostly in the psychometric literature. Some relevant references include: Kruskal & Shepard (1974); Young, de Leeuw, & Takane (1976); de Leeuw, Young, & Takane (1976); Perreault & Young (1980); Winsberg & Ramsay (1980); Young (1981); Gifi (1981, 1990); Coolen, van Rijckevorsel, & de Leeuw (1982); van Rijckevorsel (1982); van der Burg & de Leeuw (1983); de Leeuw (1986); and many others. The transformation regression problem has also received attention in the statistical literature (Breiman & Friedman 1985, Hastie & Tibshirani 1986) under the names ACE and generalized additive models.

Our work is characterized by the following statements: Transformation regression is an inferential statistical technique, not a purely descriptive technique. We prefer smooth nonlinear spline transformations over step-function transformations. Transformations are found that minimize a squared-error loss function. Many of the models discussed in this chapter can be fit directly with some data manipulations and any multiple regression or canonical correlation software; some models require specialized software. Algorithms are given by Kuhfeld (1990), de Boor (1978), and in SAS/STAT documentation.

Next, we present notation and review some fundamentals of the general linear univariate model.

The General Linear Univariate Model

A general linear univariate model has the scalar form

y = β₀ + β₁x₁ + β₂x₂ + ⋯ + βₘxₘ + ε

and the matrix form

y = Xβ + ε

The dependent variable y is an (n × 1) vector of observations; y has expected value E(y) = Xβ and variance V(y) = σ²Iₙ. The vector ε = y − Xβ contains the unobservable deviations from the expected values. The assumptions on y imply E(ε) = 0 and V(ε) = σ²Iₙ. The columns of X are the independent variables; X is an (n × m) matrix of constants that are assumed to be known without appreciable error. The elements of the column vector β are the parameters. The objectives of a linear models analysis are to estimate the parameter vector β, estimate interesting linear combinations of the elements of β, and test hypotheses about the parameters β or linear combinations of β.

We discuss fitting linear models with nonlinear spline transformations of the variables. Note that we do not discuss models that are nonlinear in the parameters, such as

y = e^(xβ) + ε
y = β₀x^(β₁) + ε
y = (β₁x₁ + β₂x₁²) / (β₃x₂ + β₄x₂²) + ε
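As a small illustration of the general linear model above, here is a minimal sketch of generating data from y = Xβ + ε and estimating β by least squares. This is hypothetical Python/numpy code, not from the chapter; the chapter's own computations use SAS, and all names and values here are illustrative.

```python
# Minimal sketch: fit y = X beta + epsilon by ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.uniform(-5, 5, n)
x2 = rng.uniform(-5, 5, n)
X = np.column_stack([np.ones(n), x1, x2])          # columns: intercept, x1, x2
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=1.0, size=n)  # epsilon ~ N(0, sigma^2 I)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # minimizes ||y - X beta||^2
print(beta_hat)                                    # close to beta_true
```

np.linalg.lstsq minimizes the squared-error loss, the same criterion used throughout this chapter.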
Table 1. Cubic Polynomial Spline Basis
Table 2. Cubic Polynomial With Knots at −2, 0, 2
Table 3. Basis for a Discontinuous (at 0) Spline

Our nonlinear transformations are found within the framework of the general linear model. There are numerous special cases of the general linear model that are of interest. When all of the columns of y and X are interval variables, the model is a multiple regression model. When all of the columns of X are indicator variables created from nominal variables, the model is a main-effects analysis of variance model, or a metric conjoint analysis model. The model

y = β₀ + β₁x + β₂x² + β₃x³ + ε

is of special interest. It is a linear model because it is linear in the parameters, and it models y as a nonlinear function of x. It is a cubic polynomial regression model, which is a special case of a spline.

Polynomial Splines

Splines are curves that are typically required to be continuous and smooth. Splines are usually defined as piecewise polynomials of degree d whose function values and first d − 1 derivatives agree at the points where they join. The abscissa values of the join points are called knots. The term spline is also used for polynomials (splines with no knots) and for piecewise polynomials with more than one discontinuous derivative. Splines with more knots or more discontinuities fit the data better and use more degrees of freedom (df). Fewer knots and fewer discontinuities provide smoother splines that use fewer df. A spline of degree three is a cubic spline, degree two splines are quadratic splines, degree one splines are piecewise linear, and degree zero splines are step functions. Higher degrees are rarely used.

A simple special case of a spline is the line, β₀ + β₁x, from the simple regression model

y = β₀ + β₁x + ε

A line is continuous and completely smooth. However, there is little to be gained by thinking of a line as a spline. A special case of greater interest was mentioned previously.
Figure 1. Linear, Quadratic, and Cubic Curves
Figure 2. Curves for Knots at −2, 0, 2

The polynomial β₀ + β₁x + β₂x² + β₃x³ from the linear model

y = β₀ + β₁x + β₂x² + β₃x³ + ε

is a cubic spline with no knots. This equation models y as a nonlinear function of x, but does so with a linear regression model; y is a linear function of x, x², and x³. Table 1 shows the X matrix, (1 x x² x³), for a cubic polynomial, where x = −5, −4, ..., 5. Figure 1 plots the polynomial terms (except the intercept). See Smith (1979) for an excellent introduction to splines.

Splines with Knots

Here is an example of a polynomial spline model with three knots, at t₁, t₂, and t₃:

y = β₀ + β₁x + β₂x² + β₃x³ + β₄(x > t₁)(x − t₁)³ + β₅(x > t₂)(x − t₂)³ + β₆(x > t₃)(x − t₃)³ + ε

The Boolean expression (x > t₁) is 1 if x > t₁, and 0 otherwise. The term β₄(x > t₁)(x − t₁)³ is zero when x ≤ t₁ and becomes nonzero, letting the curve change, as x becomes greater than knot t₁. This spline uses more df and is less smooth than the polynomial model

y = β₀ + β₁x + β₂x² + β₃x³ + ε

Assume knots at −2, 0, and 2; the spline model is:
y = β₀ + β₁x + β₂x² + β₃x³ + β₄(x > −2)(x + 2)³ + β₅(x > 0)(x − 0)³ + β₆(x > 2)(x − 2)³ + ε

Figure 3. A Spline Curve With Knots at −2, 0, 2
Figure 4. The Components of the Spline

Table 2 shows an X matrix for this model, Figure 1 plots the polynomial terms, and Figure 2 plots the knot terms. The β₀, β₁x, β₂x², and β₃x³ terms contribute to the overall shape of the curve. The β₄(x > −2)(x + 2)³ term has no effect on the curve before x = −2, and allows the curve to change at x = −2. The β₄(x > −2)(x + 2)³ term is exactly zero at x = −2 and increases as x becomes greater than −2. The β₄(x > −2)(x + 2)³ term contributes to the shape of the curve even beyond the next knot at x = 0, but at x = 0, β₅(x > 0)(x − 0)³ allows the curve to change again. Finally, the last term, β₆(x > 2)(x − 2)³, allows one more change.
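A minimal sketch (hypothetical Python/numpy code, not the chapter's SAS code) of constructing this piecewise-polynomial design matrix for knots at −2, 0, and 2; the first four columns are the Table 1 polynomial basis, and the knot columns complete the Table 2 basis:

```python
# Truncated power basis: 1, x, x^2, x^3, and (x > t)(x - t)^3 for each knot t.
import numpy as np

def truncated_power_basis(x, knots, degree=3):
    """Columns: x^0 .. x^degree, then (x > t)(x - t)^degree for each knot t."""
    cols = [x**d for d in range(degree + 1)]
    cols += [np.where(x > t, (x - t)**degree, 0.0) for t in knots]
    return np.column_stack(cols)

x = np.arange(-5, 6, dtype=float)                   # x = -5, -4, ..., 5
X = truncated_power_basis(x, knots=[-2.0, 0.0, 2.0])
print(X)                                            # 11 rows, 7 columns
```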
For example, consider the curve in Figure 3. It is a spline of the form

y = β₀ + β₁x + β₂x² + β₃x³ + β₄(x > −2)(x + 2)³ − 0.5(x > 0)(x − 0)³ + 1.5(x > 2)(x − 2)³

It is constructed from the curves in Figure 4. At x = −2.0 there is a branch; the polynomial β₀ + β₁x + β₂x² + β₃x³ continues over and down, while the curve of interest, β₀ + β₁x + β₂x² + β₃x³ + β₄(x > −2)(x + 2)³, starts heading upwards. At x = 0, the addition of −0.5(x > 0)(x − 0)³ slows the ascent until the curve starts decreasing again. Finally, the addition of 1.5(x > 2)(x − 2)³ produces the final change. Notice that the curves do not immediately diverge at the knots. The function and its first two derivatives are continuous, so the function is smooth everywhere.

Derivatives of a Polynomial Spline

The next equations show a cubic spline model with a knot at t₁ and its first three derivatives with respect to x:

y = β₀ + β₁x + β₂x² + β₃x³ + β₄(x > t₁)(x − t₁)³ + ε
dy/dx = β₁ + 2β₂x + 3β₃x² + 3β₄(x > t₁)(x − t₁)²
d²y/dx² = 2β₂ + 6β₃x + 6β₄(x > t₁)(x − t₁)
d³y/dx³ = 6β₃ + 6β₄(x > t₁)

The first two derivatives are continuous functions of x at the knot. This is what gives the spline function its smoothness at the knots. In the vicinity of the knots, the curve is continuous, the slope of the curve is a continuous function, and the rate of change of the slope is a continuous function. The third derivative is discontinuous at the knot. It is the horizontal line 6β₃ when x ≤ t₁ and jumps to the horizontal line 6β₃ + 6β₄ when x > t₁. In other words, the cubic part of the curve changes at the knots, but the linear and quadratic parts do not change.
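A numeric check of these derivative formulas, as a sketch with arbitrary illustrative coefficients (not values from the chapter): the function and its first two derivatives agree on both sides of the knot, while the third derivative jumps by 6β₄.

```python
# Evaluate a cubic spline and its derivatives just below and above the knot t1.
import numpy as np

b0, b1, b2, b3, b4 = 1.0, 0.5, -0.2, 0.1, 0.8   # arbitrary coefficients
t1 = 1.0

def f(x):  return b0 + b1*x + b2*x**2 + b3*x**3 + b4*(x > t1)*(x - t1)**3
def f1(x): return b1 + 2*b2*x + 3*b3*x**2 + 3*b4*(x > t1)*(x - t1)**2
def f2(x): return 2*b2 + 6*b3*x + 6*b4*(x > t1)*(x - t1)
def f3(x): return 6*b3 + 6*b4*(x > t1)

eps = 1e-9
for g in (f, f1, f2, f3):
    print(g(t1 - eps), g(t1 + eps))   # only the last pair differs, by 6*b4
```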
Figure 5. A Discontinuous Spline Function
Figure 6. A Spline With a Discontinuous Slope

Discontinuous Spline Functions

Here is an example of a spline model that is discontinuous at x = t₁:

y = β₀ + β₁x + β₂x² + β₃x³ + β₄(x > t₁) + β₅(x > t₁)(x − t₁) + β₆(x > t₁)(x − t₁)² + β₇(x > t₁)(x − t₁)³ + ε

Figure 5 shows an example, and Table 3 shows a design matrix for this model with t₁ = 0. The fifth column is a binary (zero/one) vector that allows the discontinuity; it is a change in the intercept parameter. Note that the sixth through eighth columns are necessary if the spline is to consist of two independent polynomials. Without them, there is a jump at t₁ = 0, but both curves are based on the same polynomial. For x ≤ t₁, the spline is

y = β₀ + β₁x + β₂x² + β₃x³ + ε

and for x > t₁, the spline is

y = (β₀ + β₄) + β₁x + β₅(x − t₁) + β₂x² + β₆(x − t₁)² + β₃x³ + β₇(x − t₁)³ + ε

The discontinuities are as follows: β₇(x > t₁)(x − t₁)³ specifies a discontinuity in the third derivative of the spline function at t₁, β₆(x > t₁)(x − t₁)²
specifies a discontinuity in the second derivative at t₁, β₅(x > t₁)(x − t₁) specifies a discontinuity in the first derivative at t₁, and β₄(x > t₁) specifies a discontinuity in the function itself at t₁. The function consists of two separate polynomial curves, one for −∞ < x ≤ t₁ and the other for t₁ < x < ∞. This kind of spline can be used to model a discontinuity in price.

Here is an example of a spline model that is continuous at x = t₁ but does not have d − 1 continuous derivatives:

y = β₀ + β₁x + β₂x² + β₃x³ + β₄(x > t₁)(x − t₁) + β₅(x > t₁)(x − t₁)² + β₆(x > t₁)(x − t₁)³ + ε
dy/dx = β₁ + 2β₂x + 3β₃x² + β₄(x > t₁) + 2β₅(x > t₁)(x − t₁) + 3β₆(x > t₁)(x − t₁)²

Since the first derivative is not continuous at x = t₁, the slope of the spline is not continuous at x = t₁. Figure 6 contains an example with t₁ = 0. Notice that the slope of the curve is indeterminate at zero.
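A sketch (hypothetical Python/numpy code, illustrating the Table 3 layout rather than reproducing it) of constructing the basis for a spline discontinuous at t₁ = 0. Dropping the binary jump column, or the lower-degree knot terms, yields the continuous-but-nonsmooth variants just described.

```python
# Basis for a spline discontinuous at t1: an intercept-jump column (x > t1)
# plus (x > t1)(x - t1)^d for d = 1, 2, 3, so each side of t1 gets its own
# independent cubic polynomial.
import numpy as np

def discontinuous_basis(x, t1=0.0):
    jump = (x > t1).astype(float)                 # fifth column: the jump
    cols = [x**0, x, x**2, x**3, jump]
    cols += [jump * (x - t1)**d for d in (1, 2, 3)]
    return np.column_stack(cols)

x = np.arange(-5, 6, dtype=float)
print(discontinuous_basis(x))                     # 11 rows, 8 columns
```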
Monotone Splines and B-Splines

An increasing monotone spline never decreases; its slope is always positive or zero. Decreasing monotone splines, with slopes that are always negative or zero, are also possible. Monotone splines cannot be conveniently created from polynomial splines. A different basis, the B-spline basis, is used instead. Polynomial splines provide a convenient way to describe splines, but B-splines provide a better way to fit spline models. The columns of the B-spline basis are easily constructed with a recursive algorithm (de Boor 1978). A basis for a vector space is a linearly independent set of vectors; every vector in the space has a unique representation as a linear combination of a given basis.

Table 4. Cubic B-Spline With Knots at −2, 0, 2
Figure 7. B-Splines With Knots at −2, 0, 2

Table 4 shows the B-spline X matrix for a model with knots at −2, 0, and 2, and Figure 7 shows the B-spline curves. The columns of the matrix in Table 4 can all be constructed by taking linear combinations of the columns of the polynomial spline basis in Table 2; both matrices form a basis for the same vector space. The numbers in the B-spline basis are all between zero and one, and the values within each row sum to one. The matrix has a diagonally banded structure, such that the band moves one position to the right at each knot. The matrix has many more zeros than the matrix of piecewise polynomials, and much smaller numbers. The columns of the matrix are not orthogonal like a matrix of orthogonal polynomials, but collinearity is not a problem with the B-spline basis the way it is with a polynomial spline basis.

The B-spline basis is numerically very stable. To illustrate, 1000 evenly spaced observations were generated over the range −5 to 5. Then a B-spline basis and a polynomial spline basis were constructed with knots at −2, 0, and 2. The eigenvalues of the centered X′X matrices, excluding the last structural zero eigenvalue, are given in Table 5.

Table 5. Polynomial and B-Spline Eigenvalues (columns: Eigenvalue, Proportion, Cumulative, for the B-spline basis and the polynomial spline basis)

In the polynomial spline basis, the first two components already account for more than 99% of the variation of the points. In the B-spline basis, the cumulative proportion does not pass 99% until the last term. The eigenvalues show that the B-spline basis is better conditioned than the piecewise polynomial basis. B-splines are used instead of orthogonal polynomials or Box-Cox transformations because B-splines allow knots and more general curves. B-splines also allow monotonicity constraints.
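Here is a sketch of the recursive construction and of the Table 5 experiment, written as hypothetical Python/numpy code (the chapter's computations use SAS, and the exact eigenvalues of Table 5 are not reproduced here): a clamped cubic B-spline basis is built by the Cox-de Boor recursion, and its conditioning is compared with the truncated power basis for 1000 evenly spaced points and knots at −2, 0, 2.

```python
import numpy as np

def bspline_basis(x, interior, degree=3, lo=-5.0, hi=5.0):
    t = np.r_[[lo] * (degree + 1), interior, [hi] * (degree + 1)]  # clamped knots
    B = np.empty((len(x), len(t) - 1))
    for i in range(len(t) - 1):                   # degree-0 indicator functions
        ind = (x >= t[i]) & (x < t[i + 1])
        if t[i] < t[i + 1] == hi:                 # close the last span on the right
            ind |= (x == hi)
        B[:, i] = ind
    for k in range(1, degree + 1):                # Cox-de Boor recursion
        Bk = np.zeros((len(x), len(t) - k - 1))
        for i in range(len(t) - k - 1):
            if t[i + k] > t[i]:
                Bk[:, i] += (x - t[i]) / (t[i + k] - t[i]) * B[:, i]
            if t[i + k + 1] > t[i + 1]:
                Bk[:, i] += (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * B[:, i + 1]
        B = Bk
    return B                                      # rows sum to one on [lo, hi]

x = np.linspace(-5, 5, 1000)
knots = [-2.0, 0.0, 2.0]
B = bspline_basis(x, knots)                       # 7 columns
P = np.column_stack([x**d for d in range(4)] +    # truncated power basis
                    [np.where(x > t, (x - t)**3, 0.0) for t in knots])

def eigen_proportions(X):
    Xc = X - X.mean(axis=0)                       # centering creates one zero eigenvalue
    ev = np.sort(np.linalg.eigvalsh(Xc.T @ Xc))[::-1][:-1]
    return ev / ev.sum()

print(np.round(eigen_proportions(P), 6))          # first two dominate (> 99%)
print(np.round(eigen_proportions(B), 6))          # much more balanced
```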
A transformation of x is monotonically increasing if the coefficients that are used to combine the columns of the B-spline basis are monotonically increasing. Models with splines can be fit directly using ordinary least squares (OLS). OLS does not work for monotone splines because OLS has no method of ensuring monotonicity in the coefficients. When there are monotonicity constraints, an alternating least squares (ALS) algorithm is used. Both OLS and ALS attempt to minimize a squared-error loss function. See Kuhfeld (1990) for a description of the iterative algorithm that fits monotone splines. See Ramsay (1988) for some applications and a different approach to ensuring monotonicity.

Transformation Regression

If the dependent variable is not transformed and if there are no monotonicity constraints on the independent variable transformations, the transformation regression model is the same as the OLS regression model. When only the independent variables are transformed, the transformation regression model is nothing more than a different rendition of an OLS regression. All of the spline models presented up to this point can be reformulated as

y = β₀ + Φ(x) + ε

The nonlinear transformation of x is Φ(x); it is solved for by fitting a spline model such as

y = β₀ + β₁x + β₂x² + β₃x³ + β₄(x > t₁)(x − t₁)³ + β₅(x > t₂)(x − t₂)³ + β₆(x > t₃)(x − t₃)³ + ε

where

Φ(x) = β₁x + β₂x² + β₃x³ + β₄(x > t₁)(x − t₁)³ + β₅(x > t₂)(x − t₂)³ + β₆(x > t₃)(x − t₃)³

Consider a model with two polynomials:

y = β₀ + β₁x₁ + β₂x₁² + β₃x₁³ + β₄x₂ + β₅x₂² + β₆x₂³ + ε

It is the same as the transformation regression model

y = β₀ + Φ₁(x₁) + Φ₂(x₂) + ε

where Φ₁( ) and Φ₂( ) designate cubic spline transformations with no knots. The actual transformations in this case are

Φ₁(x₁) = β₁x₁ + β₂x₁² + β₃x₁³ and Φ₂(x₂) = β₄x₂ + β₅x₂² + β₆x₂³

There are six model df. The test for the effect of the transformation Φ₁(x₁) is the test of the linear hypothesis β₁ = β₂ = β₃ = 0, and the Φ₂(x₂) transformation test is a test that β₄ = β₅ = β₆ = 0. Both tests are F-tests with three numerator df. When there are monotone transformations, constrained least-squares estimates of the parameters are obtained.
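A sketch of this two-polynomial model and the three-numerator-df F-test for Φ₁(x₁), done as a full-versus-reduced model comparison. The data, names, and code are illustrative Python/numpy assumptions, not the chapter's SAS analysis.

```python
# F-test that Phi_1(x1) has no effect, i.e. beta1 = beta2 = beta3 = 0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200
x1, x2 = rng.uniform(1, 3, n), rng.uniform(1, 3, n)
y = 1 + np.log(x1) - 0.5 * x2 + rng.normal(scale=0.1, size=n)

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

ones = np.ones(n)
X_full = np.column_stack([ones, x1, x1**2, x1**3, x2, x2**2, x2**3])
X_red  = np.column_stack([ones, x2, x2**2, x2**3])    # drop Phi_1(x1)

q, p = 3, X_full.shape[1]
F = ((rss(X_red, y) - rss(X_full, y)) / q) / (rss(X_full, y) / (n - p))
print(F, stats.f.sf(F, q, n - p))                     # F statistic and p-value
```

Here the F statistic is huge and the p-value is essentially zero, since y really does depend on x₁.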
Degrees of Freedom

In an ordinary general linear model, there is one parameter for each independent variable. In the transformation regression model, many of the variables are used internally in the bases for the transformations. Each linearly independent basis column has one parameter and one model df. If a variable is not transformed, it has one parameter. Nominal classification variables with c categories have c − 1 parameters. For degree d splines with k knots and d − 1 continuous derivatives, there are d + k parameters.

When there are monotonicity constraints, counting the number of scoring parameters is less precise. One way of handling a monotone spline transformation is to treat it as if it were simply a spline transformation with d + k parameters. However, there are typically fewer than d + k unique parameter estimates, since some of those d + k scoring parameter estimates may be tied to impose the order constraints. Imposing ties is equivalent to fitting a model with fewer parameters. So there are two available scoring parameter counts: d + k, and a potentially smaller number that is determined during the analysis. Using d + k as the model df is conservative, since the scoring parameter estimates are restricted. Using the smaller count is too liberal, since the data and the model together are being used to determine the number of parameters. Our solution is to report tests using both liberal and conservative df to provide lower and upper bounds on the true p-values.

Dependent Variable Transformations

When a dependent variable is transformed, the problem becomes multivariate:

Φ₀(y) = β₀ + Φ₁(x₁) + Φ₂(x₂) + ε

Hypothesis tests are performed in the context of a multivariate linear model, with the number of dependent variables equal to the number of scoring parameters for the dependent variable transformation. Multivariate normality is assumed, even though it is known that the assumption is always violated. This is one reason that all hypothesis tests in the presence of a dependent variable transformation should be considered approximate at best.

For the transformation regression model, we redefine three of the usual multivariate test statistics: Pillai's Trace, Wilks' Lambda, and the Hotelling-Lawley Trace. These statistics are normally computed using all of the squared canonical correlations, which are the eigenvalues of the matrix H(H + E)⁻¹. Here, there is only one linear combination (the transformation) and hence only one squared canonical correlation of interest, which is equal to the R². We use R² for the first eigenvalue; all other eigenvalues are set to zero, since only one linear combination is used. Degrees of freedom are computed assuming that all linear combinations contribute to the Lambda and Trace statistics, so the F-tests for those statistics are conservative. In practice, the adjusted Pillai's Trace is very conservative, perhaps too conservative to be useful. Wilks' Lambda is less conservative, and the Hotelling-Lawley Trace seems to be the least conservative. It may seem that the Roy's Greatest Root statistic, which always uses only the largest squared canonical correlation, is the only statistic of interest. Unfortunately, Roy's Greatest Root is very liberal and only provides a lower bound on the p-value. The p-values for the liberal and conservative statistics are used together to provide approximate lower and upper bounds on p.
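To make the redefinition concrete, here is a small sketch (illustrative Python, with an assumed R² value; the reduction formulas are the standard identities for a single nonzero eigenvalue, not code from the chapter): with the one squared canonical correlation set to R² and all other eigenvalues zero, the three statistics collapse to simple functions of R².

```python
# With one nonzero eigenvalue lam = R^2 / (1 - R^2) of E^{-1}H:
r2 = 0.6                        # illustrative R^2
lam = r2 / (1 - r2)             # the single nonzero eigenvalue
pillai    = lam / (1 + lam)     # Pillai's Trace          = R^2
wilks     = 1 / (1 + lam)       # Wilks' Lambda           = 1 - R^2
hotelling = lam                 # Hotelling-Lawley Trace  = R^2 / (1 - R^2)
                                # Roy's Greatest Root is this same eigenvalue here
print(pillai, wilks, hotelling)
```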
Scales of Measurement

Early work in scaling, such as Young, de Leeuw, & Takane (1976) and Perreault & Young (1980), was concerned with fitting models with mixed nominal, ordinal, and interval scale-of-measurement variables. Nominal variables were optimally scored using Fisher's (1938) optimal scoring algorithm. Ordinal variables were optimally scored using the Kruskal and Shepard (1974) monotone regression algorithm. Interval and ratio scale-of-measurement variables were left alone or nonlinearly transformed with a polynomial transformation.

In the transformation regression setting, the Fisher optimal scoring approach is equivalent to using an indicator variable representation, as long as the correct df are used. The optimal scores are category means. Introducing optimal scaling for nominal variables does not lead to any increased capability in the regression model. For ordinal variables, we believe the Kruskal and Shepard monotone regression algorithm should be reserved for the situation when a variable has only a few categories, say five or fewer. When there are more levels, a monotone spline is preferred because it uses fewer model df and because it is less likely to capitalize on chance.

Interval and ratio scale-of-measurement variables can be left alone or nonlinearly transformed with splines or monotone splines. When the true model has a nonlinear function, say

y = β₀ + β₁log(x) + ε

or

y = β₀ + β₁/x + ε

the transformation regression model

y = β₀ + Φ(x) + ε

can be used to hunt for parametric transformations. Plots of Φ(x) may suggest log or reciprocal transformations.
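As a sketch of this hunting strategy (hypothetical Python/numpy code with simulated data; the chapter would do this with TRANSREG and a plot of Φ(x)): fit a no-knot cubic transformation to data generated from a log model, then compare the fitted Φ(x) against log(x).

```python
# Hunt for a parametric form: a nearly straight plot of Phi(x) against
# log(x) suggests a log transformation.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0.5, 5.0, 300)
y = 2.0 + 1.5 * np.log(x) + rng.normal(scale=0.05, size=300)

X = np.column_stack([np.ones_like(x), x, x**2, x**3])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
phi = X[:, 1:] @ b[1:]                     # Phi(x) = b1 x + b2 x^2 + b3 x^3

corr = np.corrcoef(phi, np.log(x))[0, 1]   # stands in for inspecting the plot
print(round(corr, 4))                      # very close to 1 here
```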
Conjoint Analysis

Green & Srinivasan (1990) discuss some of the problems that can be handled with a transformation regression model, particularly the problem of degrees of freedom. Consider a conjoint analysis design where a factor with c > 3 levels has an inherent ordering. By finding a quadratic monotone spline transformation with no knots, that variable will use only two df instead of the larger c − 1. The model df in a spline transformation model are determined by the data analyst, not by the number of categories in the variables. Furthermore, a quasi-metric conjoint analysis can be performed by finding a monotone spline transformation of the dependent variable. This model has fewer restrictions than a metric analysis, but will still typically have error df, unlike the nonmetric analysis.

Curve Fitting Applications

With a simple regression model, you can fit a line through a y-by-x scatter plot. With a transformation regression model, you can fit a curve through the scatter plot. The y-axis coordinates of the curve are ŷ = β₀ + Φ(x) from the model

y = β₀ + Φ(x) + ε

With more than one group of observations and a multiple regression model, you can fit multiple lines, lines with the same slope but different intercepts, and lines with common intercepts but different slopes. With the transformation regression model, you can fit multiple curves through a scatter plot. The curves can be monotone or not, constrained to be parallel, or constrained to have the same intercept. Consider the problem of modeling the number of product purchases as a function of price. Separate curves can be simultaneously fit for two groups who may behave differently, for example those who are making a planned purchase and those who are buying impulsively. Later in this chapter, there is an example of plotting brand by price interactions.

Figure 8 contains an artificial example of two separate spline functions; the shapes of the two curves are independent of each other. In Figure 9, the splines are constrained to be parallel. The parallel curve model is more restrictive and fits the data less well than the unconstrained model. In Figure 8, each curve follows its swarm of data. In Figure 9, the curves find paths through the data that are best on the average, considering both swarms together. In the vicinity of x = 2, the top curve is high and the bottom curve is low. In the vicinity of x = 1, the top curve is low and the bottom curve is high. Figure 10 contains the same data and two monotone spline functions; again the shapes of the two curves are independent of each other. The top curve is monotonically decreasing, whereas the bottom curve is monotonically increasing. The curves in Figure 10 flatten where there is nonmonotonicity in Figure 8.

Parallel curves are very easy to model. If there are two groups and the variable g is a binary variable indicating group membership, fit the model

y = β₀ + β₁g + Φ₁(x) + ε

where Φ₁(x) is a linear, spline, or monotone spline transformation. Plot ŷ as a function of x to see the two curves. Separate curves are almost as easy; the model is

y = β₀ + β₁g + Φ₁(x × (1 − g)) + Φ₂(x × g) + ε

When x × (1 − g) is zero, x × g is x, and vice versa.
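A sketch of the parallel-curve and separate-curve designs just described, as hypothetical Python/numpy code with simulated data (the chapter fits these with TRANSREG; here the Φ's are no-knot cubics fit by OLS):

```python
# Parallel curves: intercept shift only. Separate curves: Phi_1(x(1-g)) and
# Phi_2(xg), which switch on and off with the group indicator g.
import numpy as np

rng = np.random.default_rng(3)
n = 300
x = rng.uniform(-3, 3, n)
g = (rng.uniform(size=n) > 0.5).astype(float)      # 0/1 group membership
y = 1 + 2*g + 0.5*x - 0.1*x**3 + 0.2*g*x**2 + rng.normal(scale=0.2, size=n)

def cubic(v):                                      # no-knot cubic basis, no intercept
    return np.column_stack([v, v**2, v**3])

ones = np.ones(n)
X_parallel = np.column_stack([ones, g, cubic(x)])
X_separate = np.column_stack([ones, g, cubic(x * (1 - g)), cubic(x * g)])

for X in (X_parallel, X_separate):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ b
    r2 = 1 - (r @ r) / ((y - y.mean()) @ (y - y.mean()))
    print(X.shape[1], round(r2, 4))                # params used, R-squared
```

The separate-curve model uses more parameters and fits at least as well; the parallel-curve model is the more restrictive of the two, as in Figures 8 and 9.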
Figure 8. Separate Spline Functions, Two Groups
Figure 9. Parallel Spline Functions, Two Groups
Figure 10. Monotone Spline Functions, Two Groups
Spline Functions of Price

This section illustrates splines with an artificial data set. Imagine that subjects were asked to rate their interest in purchasing various types of spaghetti sauces on a one to nine scale, where nine indicated "definitely will buy" and one indicated "definitely will not buy." Prices were chosen from typical retail trade prices, such as $1.49, $1.99, $2.49, and $2.99, and from prices one penny more than a typical price: $1.00, $1.50, $2.00, and $2.50. Between each round-number price, such as $1.00, and each typical price, such as $1.49, three additional prices were chosen, such as $1.15, $1.25, and $1.35. The goal is to allow a model with a separate spline for each of the four ranges: $1.00–$1.49, $1.50–$1.99, $2.00–$2.49, and $2.50–$2.99. For each range, a spline with zero or one knot can be fit. One rating for each price was constructed, and various models were fit to the data. Figures 11 through 18 contain the results. For each figure, the number of model df is displayed; one additional df is used for the intercept. The SAS/STAT procedure TRANSREG was used to fit all of the models in this chapter.

Figure 11 shows the linear fit, Figure 12 uses a quadratic polynomial, and Figure 13 uses a cubic polynomial. The curve in Figure 13 has a slight nonmonotonicity in the tail, and since it is a polynomial, it is rigid and cannot locally fit the data values. Figure 14 shows a monotone spline. It closely follows the data and never increases. A range for the model df is specified; the larger value is a conservative count and the smaller value is a liberal count. The curves in Figures 12 through 14 are all continuous and smooth. These curves do a good job of following the data, but inspection of the data suggests that a different model may be more appropriate. There is a large drop in purchase interest when price increases from $1.49 to $1.50, a smaller drop between $1.99 and $2.00, and a still smaller drop between $2.49 and $2.50.

In Figure 15, a separate quadratic polynomial is fit for each of the four price ranges: $1.00–$1.49, $1.50–$1.99, $2.00–$2.49, and $2.50–$2.99. The functions are not connected. The function over the range $1.00–$1.49 is nearly flat; there is high purchase interest for all of these prices. Over the range $1.50–$1.99, purchase interest drops more rapidly, with a slight leveling at the low end; the slope decreases as the function increases. Over the range $2.00–$2.49, purchase interest drops less rapidly; the slope increases as the function increases. Over the range $2.50–$2.99, the function is nearly flat. At $1.99, $2.49, and $2.99 there is a slight increase in purchase interest.

In Figure 16, there is a knot in the middle of each range. This gives the spline more freedom to follow the data. Figure 17 uses the same model as Figure 16, but monotonicity is imposed. When monotonicity is imposed, the curves touch fewer of the data values, passing in between the nonmonotonic points. In Figure 18, the means for each price are plotted and connected. This analysis uses the most model df and is the least smooth of the plots.
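A sketch of the design behind the Figure 15 model, as hypothetical Python/numpy code (TRANSREG builds such columns internally; the 20 prices follow the scheme described above, and the rating data themselves are not reproduced here): a base quadratic plus, at each break, a jump column and its own linear and quadratic terms, giving an independent quadratic on each price range.

```python
# Separate quadratic polynomial per price range, discontinuous at
# $1.50, $2.00, and $2.50.
import numpy as np

def discontinuous_quadratic_basis(price, breaks=(1.50, 2.00, 2.50)):
    cols = [np.ones_like(price), price, price**2]   # base quadratic
    for t in breaks:
        jump = (price >= t).astype(float)           # new range starts at t
        cols += [jump, jump * (price - t), jump * (price - t)**2]
    return np.column_stack(cols)

price = np.array([1.00, 1.15, 1.25, 1.35, 1.49,
                  1.50, 1.65, 1.75, 1.85, 1.99,
                  2.00, 2.15, 2.25, 2.35, 2.49,
                  2.50, 2.65, 2.75, 2.85, 2.99])
X = discontinuous_quadratic_basis(price)
print(X.shape)   # 20 observations, 12 columns: 11 model df plus the intercept
```

The column count matches the 11 model df reported for Figure 15.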
Benefits of Splines

In marketing research and conjoint analysis, the use of spline models can have several benefits. Whenever a factor has three or more levels and an inherent ordering of the levels, that factor can be modeled as a quadratic monotone spline. The df used by the variable is then controlled at data analysis time; it is not simply the number of categories minus one. When the alternative is a model in which a factor is designated as nominal, splines can be used to fit a more restrictive model with fewer model df. Since the spline model has fewer df, it should yield more reproducible results.
Figure 11. Linear Function, 1 df
Figure 12. Quadratic Function, 2 df
Figure 13. Cubic Function, 3 df
Figure 14. Monotone Function, 5–7 df
Figure 15. Discontinuous Polynomial, 11 df
Figure 16. Discontinuous Spline Function, 15 df
Figure 17. Discontinuous Monotone Spline
Figure 18. Cell Means, 19 df
The opposite alternative is also important. Consider a variable with many values, such as price in the preceding examples. Instead of using a restrictive single-df linear model, splines can be used to fit a more general model with more df. The more general model may show information in the data that is not apparent from the ordinary linear model. This can be a benefit in conjoint analyses that focus on price, in the analysis of scanner data, and in survey research. Splines give you the ability to examine nonlinearities that may be very important in predicting consumer behavior.

Fitting quadratic and cubic polynomial models is common in marketing research. Splines extend that capability by adding the possibility of knots, and hence more general curves. Spline curves can also be restricted to be monotone. Monotone splines are less restrictive than a line and more restrictive than splines that can have both positive and negative slopes. You are no longer restricted to fitting just a line, a polynomial, or a step function; splines give you possibilities in between.

Conclusions

Splines allow you to fit curves to your data. Splines may not revolutionize the way you analyze data, but they will provide you with some new tools for your data analysis toolbox. These new tools allow you to try new methods for solving old problems and to tackle new problems that could not be adequately solved with your old tools. We hope you will find these tools useful, and we hope that they will help you to better understand your marketing data.
