Blockwise Sparse Regression
Statistica Sinica 16 (2006), 375-390

Yuwon Kim, Jinseog Kim, and Yongdai Kim

Seoul National University, Korea

Abstract: Yuan and Lin (2004) proposed the grouped LASSO, which achieves shrinkage and selection simultaneously, as the LASSO does, but works on blocks of covariates. That is, the grouped LASSO provides a model in which some blocks of regression coefficients are exactly zero. The grouped LASSO is useful when there are meaningful blocks of covariates, such as polynomial terms or dummy variables derived from categorical variables. In this paper, we propose an extension of the grouped LASSO, called Blockwise Sparse Regression (BSR). The BSR achieves shrinkage and selection simultaneously on blocks of covariates, similarly to the grouped LASSO, but it works for general loss functions, including generalized linear models. An efficient computational algorithm is developed and a blockwise standardization method is proposed. Simulation results show that the BSR compromises between the ridge and the LASSO for logistic regression. The proposed method is illustrated with two datasets.

Key words and phrases: gradient projection method, LASSO, ridge, variable selection.

1. Introduction

Let $\{(y_i, x_i) : y_i \in \mathcal{Y} \subset \mathbb{R},\ x_i \in \mathcal{X} \subset \mathbb{R}^k,\ i = 1, \dots, n\}$ be $n$ pairs of observations, assumed to be a random sample from an unknown distribution over $\mathcal{Y} \times \mathcal{X}$. For (generalized) linear models, the objective is to find the regression coefficient vector $\beta \in \mathbb{R}^k$ which minimizes the prediction error evaluated by the expected loss $E[L(y, x^\top\beta)]$, where $L : \mathcal{Y} \times \mathbb{R} \to \mathbb{R}$ is a given loss function. The choice $L(y, x^\top\beta) = (y - x^\top\beta)^2$ yields ordinary least squares regression. For logistic regression, $L(y, x^\top\beta)$ is given as the negative log-likelihood, $-y x^\top\beta + \log(1 + \exp(x^\top\beta))$. Unfortunately, the prediction error is not available since the underlying distribution is unknown. One technique for resolving this situation is to estimate $\beta$ by minimizing the empirical expected loss $C(\beta) = \sum_{i=1}^{n} L(y_i, x_i^\top\beta)$.

This research was supported (in part) by the SRC/ERC program of MOST/KOSEF (R ).
However, it is well known that this method suffers from so-called overfitting, especially when $k$ is large compared to $n$. A remedy for overfitting is to restrict the regression coefficients when minimizing $C(\beta)$. The LASSO proposed by Tibshirani (1996) has gained popularity since it produces a sparse model while keeping high prediction accuracy. The main idea of the LASSO is to estimate $\beta$ by minimizing $C(\beta)$ subject to $\sum_{j=1}^{k} |\beta_j| \le M$, where $\beta_j$ is the $j$-th component of $\beta$. That is, the $L_1$ norm of the regression coefficients is restricted. Since the LASSO was initially proposed by Tibshirani (1996) in the context of (generalized) linear models, the idea of restricting the $L_1$ norm has been applied to various problems such as wavelets (Chen, Donoho and Saunders (1999); Bakin (1999)), kernel machines (Gunn and Kandola (2002); Roth (2004)), smoothing splines (Zhang et al. (2003)), multiclass logistic regression (Krishnapuram et al. (2004)), etc. Another direction of extending the LASSO is to develop new restrictions on the regression coefficients other than the $L_1$ penalty. Fan and Li (2001) proposed the SCAD, Lin and Zhang (2003) proposed the COSSO, Zou and Hastie (2004) proposed the elastic net, and Tibshirani et al. (2005) proposed the fused LASSO.

Recently, Yuan and Lin (2004) proposed an interesting restriction called the grouped LASSO for linear regression. The main advantage of the grouped LASSO is that one can make a group of regression coefficients zero simultaneously, which is not possible with the LASSO. When we want to produce more flexible functions than linear models, or to include categorical variables, we add new variables generated from the original ones using appropriate transformations such as polynomials or dummy variables. In these cases, it is very helpful for interpretation to introduce the concept of a block, which means a group of covariates highly related to each other. For example, the dummy variables generated from the same categorical variable form a block. When the concept of a block is important, it is natural to select blocks rather than individual covariates. The ordinary LASSO is not feasible for blockwise selection since sparsity in individual coefficients does not ensure sparsity in blocks, while the grouped LASSO selects
or eliminates blocks. The objective of this paper is to extend the idea of the grouped LASSO to general loss functions, including generalized linear models. We name the proposed method Blockwise Sparse Regression (BSR). We develop an efficient computational algorithm, a GCV-type criterion, and a blockwise standardization method for the BSR.

The paper is organized as follows. In Section 2, the formulation of the BSR is given and an efficient computational algorithm is developed. In Section 3, a way of selecting the restriction parameter $M$ using a GCV-type criterion is proposed. Section 4 presents a method of standardizing covariates by blocks. In Section 5, simulation results comparing the BSR with the LASSO and ridge are given, and the proposed method is illustrated with two examples in Section 6. Concluding remarks follow in Section 7.

2. The Blockwise Sparse Regression

2.1. Definition

Suppose the regression coefficient vector is partitioned into $p$ blocks, denoted by $\beta = (\beta_{(1)}, \dots, \beta_{(p)})$, where $\beta_{(j)}$ is a $d_j$-dimensional coefficient vector for the $j$-th block. The standard ridge solution is obtained by minimizing $C(\beta)$ subject to $\|\beta\|^2 \le M$, where $\|\cdot\|$ is the $L_2$ norm on Euclidean space. Since $\|\beta\|^2 = \sum_{j=1}^{p} \|\beta_{(j)}\|^2$, we can interpret $\|\beta\|^2$ as the squared $L_2$ norm of the $p$-dimensional vector of the norms of the blocks, i.e. $(\|\beta_{(1)}\|, \dots, \|\beta_{(p)}\|)$. For sparsity in blocks, we introduce a LASSO-type restriction on $(\|\beta_{(1)}\|, \dots, \|\beta_{(p)}\|)$. That is, the BSR estimate $\hat\beta$ is obtained by minimizing $C(\beta)$ subject to
$$\sum_{j=1}^{p} \|\beta_{(j)}\| \le M.$$
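As a concrete illustration (not part of the original paper), the following minimal Python sketch evaluates the empirical logistic loss $C(\beta)$ and the blockwise constraint, with the block structure encoded as a list of column-index arrays; all names and the value of $M$ are illustrative.

    import numpy as np

    def block_norm_sum(beta, blocks):
        """Sum of the block L2 norms, sum_j ||beta_(j)||, for a given partition."""
        return sum(np.linalg.norm(beta[idx]) for idx in blocks)

    def logistic_loss(beta, X, y):
        """Empirical loss C(beta) = sum_i [ -y_i x_i'beta + log(1 + exp(x_i'beta)) ]."""
        eta = X @ beta
        return np.sum(-y * eta + np.logaddexp(0.0, eta))

    # Illustrative block structure: p = 8 blocks of 3 coefficients each.
    blocks = [np.arange(3 * j, 3 * (j + 1)) for j in range(8)]
    beta = np.zeros(24)
    M = 5.0
    feasible = block_norm_sum(beta, blocks) <= M   # BSR constraint check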
Since the regression coefficients are restricted by the sum of the norms of the blocks, some blocks with small contributions shrink to exactly zero, as in the LASSO, and hence all the coefficients in those blocks become exactly zero simultaneously. For blocks with positive norms, the regression coefficients in the blocks shrink similarly to ridge regression. Note that the usual ridge regression is obtained if a single block contains all covariates. On the other hand, when each covariate separately forms a block, the BSR reduces to the ordinary LASSO. That is, the BSR compromises between the ridge and the LASSO. Also, under squared error loss, the BSR is equivalent to the grouped LASSO.

Figure 2.1: Paths of the norms of each block for Example 1 in Section 5, under the ridge, the Blockwise Sparse Regression and the LASSO. The solid curves denote the signal blocks and the dashed curves denote the noise blocks. The vertical solid lines are positioned at the models selected by CV. The values on the horizontal axes are the ratios of the restriction $M$ to the norms of the ordinary logistic regression solutions.

Figure 2.1 compares the ridge, LASSO and BSR methods on the simulated model whose details are given in Section 5.2. The simulated model has eight blocks, each of which consists of three covariates; only the first two blocks are informative for the response and the other six blocks are pure noise. Figure 2.1 shows the paths of the norms of each block for various values of the restriction parameter $M$. The vertical line in the middle of each plot represents the optimal $M$ selected by 5-fold cross-validation. It is clear that the BSR method outperforms the other two methods in terms of block selectivity.
The correct two blocks are selected by the BSR, while the LASSO includes some noise blocks. The ridge is the worst since it includes all blocks.

2.2. Algorithm

We first introduce the gradient projection method, a well-known technique for optimization over convex sets, and explain how to modify it for the BSR. Consider a constrained optimization problem given by minimizing $C(\beta)$ subject to $\beta \in B$, where $B$ is a convex set. Let $\nabla C(\beta)$ be the gradient of $C(\beta)$. The algorithm of the gradient projection method, which finds the solution by iterating between moving in the direction opposite to the gradient and projecting onto the constraint set $B$, is given in Table 2.1.

Table 2.1: Algorithm for the Gradient Projection Method.
1. Initialize: $\beta^0 \in B$, $s$: a sufficiently small positive scalar.
2. For $t = 1$ to $T$:
   (a) [Gradient step] Calculate $\nabla C(\beta^{t-1})$.
   (b) [Projection step] Let $\beta^t = \operatorname{argmin}_{\beta \in B} \|\beta^{t-1} - s\nabla C(\beta^{t-1}) - \beta\|^2$.
   End For.
3. Return $\beta^T$.

If $C(\beta)$ is convex and its gradient is Lipschitz continuous with Lipschitz constant $L$, the sequence of solutions generated by the gradient projection method with step size $s < 2/L$ converges to the global optimum. The step size $s$ can be selected by calculating the Lipschitz constant $L$ or by trial and error. The speed of convergence does not depend strongly on the choice of $s$ unless $s$ is very small. For details of the algorithm, see Bertsekas (1999), and for its application to various constrained problems, see Kim (2004).

For the BSR, we can easily check that the constraint set $B = \{\beta : \sum_{j=1}^{p} \|\beta_{(j)}\| \le M\}$ is convex, and so we can apply the gradient projection method.
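The iteration of Table 2.1 is generic. As a rough sketch (not the authors' code), it can be written in Python as follows, assuming a gradient function and a projection routine are supplied; the step size and iteration count are illustrative, and a BSR-specific projection is sketched after its derivation below.

    import numpy as np

    def logistic_grad(beta, X, y):
        """Gradient of the logistic loss C(beta) with respect to beta."""
        prob = 1.0 / (1.0 + np.exp(-(X @ beta)))
        return X.T @ (prob - y)

    def gradient_projection(beta0, grad_C, project_onto_B, s=0.01, T=1000):
        """Gradient projection iteration of Table 2.1: alternate a step against
        the gradient with a projection onto the convex constraint set B."""
        beta = np.asarray(beta0, dtype=float).copy()
        for _ in range(T):
            b = beta - s * grad_C(beta)      # gradient step
            beta = project_onto_B(b)         # projection step
        return beta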
The hardest part of the algorithm is the projection step (2(b) of Table 2.1); in general, its computational cost is large. However, for the BSR, the projection can be done easily as follows. Let $b = \beta^t - s\nabla C(\beta^t)$ and let $b_{(j)}$ be the $j$-th block of $b$. Then $\beta^{t+1}$ is the minimizer of
$$\sum_{j=1}^{p} \|b_{(j)} - \beta_{(j)}\|^2 \quad \text{subject to} \quad \sum_{j=1}^{p} \|\beta_{(j)}\| \le M \qquad (2.1)$$
with respect to $\beta$. Suppose the $M_j = \|\beta^{t+1}_{(j)}\|$ were known for $j = 1, \dots, p$. Then we would have
$$\beta^{t+1}_{(j)} = \frac{M_j}{\|b_{(j)}\|}\, b_{(j)}. \qquad (2.2)$$
So, to find $\beta^{t+1}$, it suffices to find the $M_j$'s. For the $M_j$, plugging (2.2) into (2.1), we solve the reduced problem of
$$\text{minimizing} \quad \sum_{j=1}^{p} \left(\|b_{(j)}\| - M_j\right)^2 \quad \text{subject to} \quad \sum_{j=1}^{p} M_j \le M \qquad (2.3)$$
with respect to the $M_j$'s with $M_j \ge 0$. For solving (2.3), note that if the projection of $(\|b_{(1)}\|, \dots, \|b_{(p)}\|)$ onto the hyperplane $\sum_{j=1}^{p} M_j = M$ has non-positive values on some coordinates, then the solution of (2.3) has exact zeros on those coordinates, which we call inactive. Once inactive coordinates are found, we rule them out and re-calculate the projection onto the reduced hyperplane, until no more negative values occur in the projection. A small number of iterations, at most $p$, is required. Once we solve (2.3), we obtain $\beta^{t+1}$ by (2.2).
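As a hedged illustration of this projection (a sketch assuming $M > 0$, with the block structure given as index arrays; names are illustrative, not from the paper):

    import numpy as np

    def project_blockwise(b, blocks, M):
        """Projection onto B = {beta : sum_j ||beta_(j)|| <= M}, assuming M > 0.

        The vector of block norms is projected onto {M_j >= 0, sum_j M_j <= M}
        by repeatedly projecting onto the hyperplane sum_j M_j = M and dropping
        coordinates that become non-positive (the 'inactive' blocks); each block
        of b is then rescaled as in (2.2)."""
        norms = np.array([np.linalg.norm(b[idx]) for idx in blocks])
        if norms.sum() <= M:                  # b already satisfies the constraint
            return np.asarray(b, dtype=float).copy()
        active = norms > 0
        Mj = np.zeros(len(blocks))
        while True:
            k = active.sum()
            Mj[:] = 0.0
            Mj[active] = norms[active] + (M - norms[active].sum()) / k
            if np.all(Mj >= 0):
                break
            active &= (Mj > 0)                # rule out inactive blocks, retry
        beta = np.zeros_like(b, dtype=float)
        for j, idx in enumerate(blocks):
            if Mj[j] > 0:
                beta[idx] = (Mj[j] / norms[j]) * b[idx]   # equation (2.2)
        return beta

This routine can be passed as the projection step of the gradient projection sketch given earlier, yielding one possible implementation of the full BSR iteration.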
The procedure is summarized in Table 2.2.

Table 2.2: Algorithm for the Blockwise Sparse Regression.
1. Initialize: $\beta^0 = 0$, $s$: a sufficiently small positive scalar.
2. For $t = 1$ to $T$:
   Calculate the gradient $\nabla C(\beta^{t-1})$.
   Set $b = \beta^{t-1} - s\nabla C(\beta^{t-1})$ and $\tau = \{1, \dots, p\}$.
   Start loop.
      Calculate the projection
      $$M_j = I(j \in \tau)\left(\|b_{(j)}\| + \frac{M - \sum_{j \in \tau} \|b_{(j)}\|}{|\tau|}\right) \quad \text{for } j = 1, \dots, p,$$
      where $|\tau|$ is the cardinality of $\tau$.
      If $M_j \ge 0$ for all $j$, then exit the loop; else update the active set $\tau = \{j : M_j > 0\}$.
   End loop.
   Get a new solution, $\beta^t_{(j)} = \frac{M_j}{\|b_{(j)}\|}\, b_{(j)}$ for $j = 1, \dots, p$.
   End For.
3. Return $\beta^T$.

3. Selection of the restriction parameter M

In this section, we describe methods for selecting the restriction parameter $M$ of the BSR. One popular method for selecting $M$ is $K$-fold cross-validation (CV). However, $K$-fold CV suffers from its computational burden. Another method is to use the generalized cross-validation (GCV) proposed by Craven and Wahba (1979). A GCV-type criterion for the LASSO is found in Tibshirani (1996), where the LASSO solution is approximated by a ridge solution. An extended GCV-type criterion is proposed by Tibshirani (1997), where the LASSO technique is applied to the proportional hazards model. We propose a GCV-type criterion for the BSR as in Tibshirani (1996, 1997).

Suppose that all blocks have nonzero norms; when some blocks have zero norms, we reduce the design matrix by eliminating the blocks whose norms are zero. Using the relation $\sum_{j=1}^{p} \|\beta_{(j)}\| = \sum_{j=1}^{p} \|\beta_{(j)}\|^2 / \|\beta_{(j)}\|$, $\hat\beta$ can be obtained as the solution of minimizing
$$C(\beta) + \frac{\lambda}{2} \sum_{j=1}^{p} \frac{\|\beta_{(j)}\|^2}{\|\beta_{(j)}\|},$$
for some $\lambda$ uniquely determined by $M$. Now, assuming that the $\|\hat\beta_{(j)}\|$ are known, we expect that the solution $\tilde\beta$ of minimizing
$$C(\beta) + \frac{\lambda}{2} \sum_{j=1}^{p} \frac{\|\beta_{(j)}\|^2}{\|\hat\beta_{(j)}\|} \qquad (3.1)$$
gives an approximation to $\hat\beta$. Let $X$ denote the design matrix with $x_i$ as the $i$-th row, and let $\eta = X\beta$.
The iteratively reweighted least squares method for solving (3.1) gives the linear approximation
$$\tilde\beta \approx (X^\top A X + \lambda W)^{-1} X^\top A z, \qquad (3.2)$$
where $A = \partial^2 C(\beta)/\partial\eta\,\partial\eta^\top$, $u = \partial C(\beta)/\partial\eta$, $z = X\beta - A^{-1}u$, and $W$ is the $k \times k$ diagonal matrix whose $\left(\sum_{h=1}^{j-1} d_h + l\right)$-th diagonal element is $1/\|\beta_{(j)}\|$ for $j = 1, \dots, p$ and $l = 1, \dots, d_j$, all evaluated at $\beta = \hat\beta$. From (3.2), we construct a GCV-type criterion
$$GCV(M) = \frac{1}{n}\,\frac{C(\hat\beta)}{(1 - p(M)/n)^2},$$
where $p(M) = \operatorname{tr}[X(X^\top A X + \lambda W)^{-1} X^\top A]$, as in Tibshirani (1997). We agree that the GCV is a rough approximation, but we show that the GCV gives performance comparable to 5-fold CV in the simulation study.
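A rough sketch (not from the paper) of how this criterion could be computed for the logistic loss, assuming that the fitted $\hat\beta$, the value of $\lambda$ corresponding to $M$, and the surviving blocks are available; all function and variable names are illustrative.

    import numpy as np

    def gcv_logistic(beta_hat, X, y, lam, blocks):
        """GCV-type criterion of Section 3 for the logistic loss, evaluated at the
        fitted beta_hat; lam is the penalty level corresponding to M, and blocks
        with zero norm are assumed to have been removed beforehand."""
        n = len(y)
        eta = X @ beta_hat
        prob = 1.0 / (1.0 + np.exp(-eta))
        C = np.sum(-y * eta + np.logaddexp(0.0, eta))      # C(beta_hat)
        A = np.diag(prob * (1.0 - prob))                   # d^2 C / (d eta d eta')
        w = np.concatenate([np.full(len(idx), 1.0 / np.linalg.norm(beta_hat[idx]))
                            for idx in blocks])
        W = np.diag(w)                                     # blockwise ridge weights
        H = X @ np.linalg.solve(X.T @ A @ X + lam * W, X.T @ A)
        p_M = np.trace(H)                                  # effective dimension p(M)
        return (C / n) / (1.0 - p_M / n) ** 2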
4. Standardization within Blocks

A common way of generating blocks is to use transformations of the original covariates, such as polynomials or dummy variables. In many cases, there is more than one transformation that results in the same model. Unfortunately, the block norm is not invariant under such transformations. For example, consider a covariate $x$ taking values in the three categories denoted by $\{1, 2, 3\}$. For dummy variables $(z_1, z_2)$ representing $x$, we can let $(z_1, z_2) = (1, 0)$ for $x = 1$, $(z_1, z_2) = (0, 1)$ for $x = 2$, and $(z_1, z_2) = (0, 0)$ for $x = 3$. Suppose that the regression coefficients of $z_1$ and $z_2$ are 1 and $-1$, respectively. In this case, the block norm is $\sqrt{2}$. Alternatively, we can use as dummy variables $(z_1, z_2) = (1, 0)$ for $x = 1$, $(z_1, z_2) = (0, 0)$ for $x = 2$, and $(z_1, z_2) = (0, 1)$ for $x = 3$. The corresponding regression coefficients become 2 and 1, and so the block norm is $\sqrt{5}$. Hence, different transformations may result in different block selections in the BSR. In this section, we propose a method of blockwise standardization of covariates to resolve this problem.

Let $X_1$ and $X_2$ be two design matrices of a given block such that there are two regression coefficient vectors $\beta_1$ and $\beta_2$ with $X_1\beta_1 = X_2\beta_2$. The blockwise standardization is based on the fact that $\|\beta_1\| = \|\beta_2\|$ when $X_1^\top X_1 = X_2^\top X_2 = tI$ for some $t > 0$. Hence, when a design matrix for a block is given, we propose to orthogonalize the design matrix in advance, and then to estimate the corresponding regression coefficients using the BSR. Specifically, let $X$ be a design matrix partitioned into $p$ sub-matrices $(X_{(1)}, \dots, X_{(p)})$, where $X_{(j)}$ corresponds to the $j$-th block. For simplicity, we assume all columns of $X$ are centered at zero. Using the spectral decomposition $X_{(j)}^\top X_{(j)} = P_{(j)} \Lambda_{(j)} P_{(j)}^\top$, where $\Lambda_{(j)}$ is the diagonal matrix of positive eigenvalues and $P_{(j)}$ is the corresponding eigenvector matrix, we can construct blockwise orthogonal design matrices as
$$Z_{(j)} = \frac{1}{\sqrt{t_j}}\, X_{(j)} T_{(j)}, \qquad (4.1)$$
for some $t_j > 0$, where $T_{(j)} = P_{(j)} \Lambda_{(j)}^{-1/2}$. We can easily check that $Z_{(j)}^\top Z_{(j)} = t_j^{-1} I$. For $t_j$, we recommend using the number of positive eigenvalues of $X_{(j)}^\top X_{(j)}$. This recommendation is motivated by the observation that the block norm tends to be larger when the size of the block is larger; hence, blocks with more coefficients have more chance of being selected. With our recommendation, the traces of $Z_{(j)}^\top Z_{(j)}$, $j = 1, \dots, p$, are all the same, and so the contribution of each block to the response can be measured solely by the norm of its regression coefficients.

To demonstrate the necessity of the recommended choice of the $t_j$'s, we compare the selectivity of the recommended choice with the choice $t_j = 1$ for all $j$. We generated 100 samples of 250 observations, each consisting of 14 independent uniform covariates $(x_1, \dots, x_{14})$ with range $[-1, 1]$ and a response $y$ generated by a logistic regression model with $f(x) = x_1$. We assigned the covariates to three blocks: Block 1 has the signal covariate $x_1$, Block 2 has three noise covariates $(x_2, x_3, x_4)$, and the other ten noise covariates are assigned to Block 3. We investigated which of Blocks 2 and 3 went to zero first as the restriction parameter $M$ decreased. When we used $t_j = 1$ for all $j$, Block 2 went to zero first in 96 samples out of 100, which means that Block 3 was selected more frequently. When we used the recommended choice of the $t_j$'s, Block 2 went to zero first in 58 samples out of 100. That is, the recommended choice of the $t_j$'s helped make the selection fair.

Once the solution $\tilde\beta$ with the blockwise standardized design matrix given in (4.1) is obtained by the BSR, the coefficient $\beta$ on the original design matrix is reconstructed as $\beta_{(j)} = \frac{1}{\sqrt{t_j}} T_{(j)} \tilde\beta_{(j)}$.
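A minimal sketch of the standardization (4.1) and its back-transformation, assuming centered design columns and the recommended choice of $t_j$; the function names are illustrative, not from the paper.

    import numpy as np

    def blockwise_standardize(X, blocks, eps=1e-10):
        """Blockwise standardization (4.1): Z_(j) = X_(j) T_(j) / sqrt(t_j), with
        T_(j) = P_(j) Lambda_(j)^(-1/2) so that Z_(j)'Z_(j) = I / t_j, and t_j set
        to the number of positive eigenvalues of X_(j)'X_(j) (the recommended choice).
        Columns of X are assumed to be centered."""
        Z_blocks, transforms = [], []
        for idx in blocks:
            Xj = X[:, idx]
            evals, P = np.linalg.eigh(Xj.T @ Xj)
            keep = evals > eps                        # positive eigenvalues only
            T = P[:, keep] / np.sqrt(evals[keep])     # T_(j) = P_(j) Lambda_(j)^(-1/2)
            t = int(keep.sum())                       # recommended t_j
            Z_blocks.append(Xj @ T / np.sqrt(t))
            transforms.append((T, t))
        return Z_blocks, transforms

    def back_transform(beta_tilde_blocks, transforms):
        """Map coefficients back to the original design: beta_(j) = T_(j) beta~_(j) / sqrt(t_j)."""
        return [T @ bj / np.sqrt(t) for bj, (T, t) in zip(beta_tilde_blocks, transforms)]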
5. Simulation

5.1. Outline

We compare the prediction error and the variable selectivity of the BSR with those of the ridge and the LASSO on three examples with logistic regression models. The restriction parameters of the three methods are chosen using GCV as well as 5-fold CV. Prediction errors are measured by the averages of test errors using two loss functions, the logistic loss (LOL) and the misclassification rate (MIR), on 10,000 randomly selected design points. The blockwise standardization is not used for the BSR, to make the comparison fair.

5.2. Example 1

We generated 100 samples, each consisting of 250 observations. The covariates were generated from the 8-dimensional independent uniform distribution on $[-1, 1]$. The binary response $y$ was generated by a logistic model with $\Pr(y = 1) = \exp(f(x))/(1 + \exp(f(x)))$ and $\Pr(y = 0) = 1 - \Pr(y = 1)$, where the true regression function is
$$f(x) = 2p_1(x_1) + 2p_2(x_1) + 2p_3(x_1) + p_1(x_2) + p_2(x_2) + p_3(x_2),$$
with $p_1(x) = x$, $p_2(x) = (3x^2 - 1)/2$, and $p_3(x) = (5x^3 - 3x)/2$, the first three Legendre polynomials. Thus all covariates except the first and second are pure noise. The design matrix has 24 columns, consisting of $p_1(x_j)$, $p_2(x_j)$ and $p_3(x_j)$ for $j = 1, \dots, 8$. The three columns $p_1(x_j)$, $p_2(x_j)$ and $p_3(x_j)$ derived from an original covariate $x_j$ form a natural block. Consequently, we fit the logistic regression model with the BSR penalty with the eight blocks, each of which has three regression coefficients. The ridge and LASSO penalties put restrictions on the 24 individual regression coefficients.

The results are shown in Table 5.3. The BSR had the lowest prediction error, followed by the LASSO. For variable selectivity, LASSO solutions included fewer variables but more blocks than did BSR solutions. The box plots of the norm of each block are presented in Figure 5.2. It is clear that the norms of the noise blocks (blocks generated from $x_3, \dots, x_8$) from the BSR were smaller than those from the other two methods.
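For concreteness, the simulated data described above could be generated as in the following sketch (an illustration under the stated model, not the authors' original code):

    import numpy as np

    rng = np.random.default_rng(0)

    def legendre_block(x):
        """First three Legendre polynomials p1, p2, p3 of a covariate, as columns."""
        return np.column_stack([x, (3 * x**2 - 1) / 2, (5 * x**3 - 3 * x) / 2])

    n = 250
    x = rng.uniform(-1.0, 1.0, size=(n, 8))                            # 8 uniform covariates
    X = np.column_stack([legendre_block(x[:, j]) for j in range(8)])   # 24 columns
    f = 2 * legendre_block(x[:, 0]).sum(axis=1) + legendre_block(x[:, 1]).sum(axis=1)
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-f)))                      # logistic responses
    blocks = [np.arange(3 * j, 3 * (j + 1)) for j in range(8)]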
Figure 5.2: Boxplots of the norms of the eight blocks, $\|\beta_{(1)}\|, \dots, \|\beta_{(8)}\|$, for Example 1, under Ridge(CV), BSR(CV), LASSO(CV), Ridge(GCV), BSR(GCV) and LASSO(GCV).
Table 5.3: The averaged prediction errors (standard errors) and averaged numbers of blocks and individual regression coefficients being zero, for Example 1.
Method       LOL         MIR         Block   Covariate
Ridge(CV)    (0.0013)    (0.0011)
BSR(CV)      (0.0011)    (0.0009)
LASSO(CV)    (0.0013)    (0.0011)
Ridge(GCV)   (0.0018)    (0.0012)
BSR(GCV)     (0.0011)    (0.0009)
LASSO(GCV)   (0.0013)    (0.0011)

5.3. Example 2

In this example, we only change the regression function of Example 1 to be more suitable to the LASSO. The true regression function is
$$f(x) = 2p_1(x_1) + 2p_2(x_2) + 2p_3(x_3) + p_1(x_4) + p_2(x_5) + p_3(x_6),$$
where only one column among $p_1(x_j)$, $p_2(x_j)$ and $p_3(x_j)$ is related to the response for $j = 1, \dots, 6$, and $x_7$ and $x_8$ are pure noise. The results are shown in Table 5.4, where the LASSO outperforms the others in prediction accuracy and the BSR is slightly better than the ridge. For variable selectivity, the BSR does not work well since it includes too many noise covariates. This is partly because the BSR includes all covariates in a block when the block has a nonzero norm.

Table 5.4: The averaged prediction errors (standard errors) and averaged numbers of blocks and individual regression coefficients being zero, for Example 2.
Method       LOL         MIR         Block   Covariate
Ridge(CV)    (0.0012)    (0.0009)
BSR(CV)      (0.0013)    (0.0010)
LASSO(CV)    (0.0012)    (0.0010)
Ridge(GCV)   (0.0017)    (0.0010)
BSR(GCV)     (0.0015)    (0.0010)
LASSO(GCV)   (0.0013)    (0.0010)
5.4. Example 3

We try a model different from that in Example 1, more suitable to the ridge. Here
$$f(x) = \sum_{j=1}^{8} \frac{1}{2}\left(p_1(x_j) + p_2(x_j) + p_3(x_j)\right).$$
The results are shown in Table 5.5. For prediction accuracy, the ridge worked best, the BSR next, and the LASSO worst. Note that the BSR selects most of the variables, while the LASSO fails to detect a significant number of covariates. That is, the BSR is better in variable selectivity than the LASSO.

Table 5.5: The averaged prediction errors (standard errors) and averaged numbers of blocks and individual regression coefficients being zero, for Example 3.
Method       LOL         MIR         Block   Covariate
Ridge(CV)    (0.0009)    (0.0010)
BSR(CV)      (0.0011)    (0.0012)
LASSO(CV)    (0.0012)    (0.0015)
Ridge(GCV)   (0.0011)    (0.0010)
BSR(GCV)     (0.0011)    (0.0014)
LASSO(GCV)   (0.0011)    (0.0013)

6. Examples

6.1. German credit data

The German credit data consist of 1000 credit histories with a binary response and 20 covariates. The binary response represents good (700 cases) and bad (300 cases) credit, respectively. Seven covariates are numerical and the rest are categorical, with the number of categories ranging from 2 to 10. The data set is available from the UCI Machine Learning Repository ( mlearn/mlrepository.html). The 20 covariates are given below.

V1: Status of existing checking account (4 categories).
V2: Duration in months (numerical).
V3: Credit history (5 categories).
V4: Purpose (10 categories).
V5: Credit amount (numerical).
V6: Savings account/bonds (5 categories).
V7: Present employment since (5 categories).
V8: Installment rate as a percentage of disposable income (numerical).
V9: Personal status and sex (4 categories).
V10: Other debtors / guarantors (3 categories).
V11: Present residence since (numerical).
V12: Property (4 categories).
V13: Age in years (numerical).
V14: Other installment plans (3 categories).
V15: Housing (3 categories).
V16: Number of existing credits at this bank (numerical).
V17: Job (4 categories).
V18: Number of people being liable to provide maintenance for (numerical).
V19: Telephone (2 categories).
V20: Foreign worker (2 categories).

We expand each of the numerical covariates into a block using polynomial transformations up to the third order, and each of the categorical covariates into a block with the corresponding dummy variables. The blocks are standardized as in (4.1). Then the logistic regression model with the blockwise restriction is fitted, where the restriction parameter $M$ is chosen by GCV. For the GCV, we used the 0-1 loss in the numerator instead of the log-likelihood, since the former tends to yield better models.
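As a hedged sketch of this expansion step (the column names, helper function and data frame handling are illustrative, not taken from the paper or the dataset):

    import numpy as np
    import pandas as pd

    def expand_blocks(df, numerical, categorical):
        """Expand numerical covariates into cubic-polynomial blocks and categorical
        covariates into dummy-variable blocks; returns the design matrix and the
        list of column-index blocks (to be standardized as in (4.1) afterwards)."""
        cols, blocks, start = [], [], 0
        for v in numerical:
            xv = df[v].to_numpy(dtype=float)
            cols.append(np.column_stack([xv, xv**2, xv**3]))
            blocks.append(np.arange(start, start + 3))
            start += 3
        for v in categorical:
            dummies = pd.get_dummies(df[v], drop_first=True).to_numpy(dtype=float)
            cols.append(dummies)
            blocks.append(np.arange(start, start + dummies.shape[1]))
            start += dummies.shape[1]
        return np.column_stack(cols), blocks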
The BSR eliminates the covariates V16 and V17, whose block norms are zero. The partial fits for the remaining 18 covariates are shown in Figure 6.3, where the covariates with larger block norms appear first. This suggests that V1 and V4 are the most important covariates.

Figure 6.3: The partial fits for the 18 selected covariates in the German credit data.
We compare the prediction accuracy of the BSR with those of the ridge and the LASSO. The misclassification rates are estimated by 10 repetitions of 10-fold CV. For each repetition, the restriction parameters are chosen by GCV. The results summarized in Table 6.6 show that the LOL and MIS of the BSR and the LASSO are close, and that the ridge does less well.

Table 6.6: Estimates of LOL and MIS and their standard errors for the German credit data.
Method   LOL         MIS
Ridge    (0.0008)    (0.0014)
BSR      (0.0011)    (0.0020)
LASSO    (0.0010)    (0.0020)

6.2. Breast Cancer Data

The second example is the Breast cancer data, available from the UCI Machine Learning Repository ( mlearn/mlrepository.html). The data set includes nine covariates and a binary response. The response takes the value no-recurrence-events in 201 cases, coded as 0, and recurrence-events in 85 cases, coded as 1. Five covariates are categorical and four are numerical. We expand the covariates as was done for the German credit data. The BSR eliminates three covariates, and the partial fits for the remaining six covariates are shown in Figure 6.4.

Figure 6.4: The partial fits for the six selected covariates in the Breast Cancer data.

The prediction accuracies of the ridge, BSR and LASSO, obtained by 10 repetitions of 10-fold CV, are summarized in Table 6.7, which shows that the BSR is the best in prediction accuracy.
Table 6.7: Estimates of LOL and MIS and their standard errors for the Breast cancer data.
Method   LOL         MIS
Ridge    (0.0028)    (0.0052)
BSR      (0.0015)    (0.0028)
LASSO    (0.0016)    (0.0028)

7. Concluding Remarks

Even though we focused on logistic regression, the proposed method can be applied to many problems, such as Poisson regression and the proportional hazards model. The only modification required for such extensions is to calculate the corresponding gradient.

There are various possible extensions of the BSR. For example, we can combine the idea of blockwise sparsity with other sparse penalties such as the SCAD, the fused LASSO or the elastic net mentioned in the Introduction. We believe that the computational algorithm proposed in this paper is general enough to be easily modified for such extensions.

The asymptotic properties of the BSR can be derived following Knight and Fu (2000). This is important for variance estimation of the estimated regression coefficients. We will pursue this problem in the near future.

References

Bakin, S. (1999). Adaptive Regression and Model Selection in Data Mining Problems. PhD thesis, Australian National University, Australia.

Bertsekas, D. P. (1999). Nonlinear Programming. Second edition. Athena Scientific, Belmont, Massachusetts.

Breiman, L. and Friedman, J. (1985). Estimating optimal transformations for multiple regression and correlation. J. Amer. Statist. Assoc. 80.

Chen, S. S., Donoho, D. L. and Saunders, M. A. (1999). Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20.

Craven, P. and Wahba, G. (1979). Smoothing noisy data with spline functions. Numer. Math. 31.

Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. J. Amer. Statist. Assoc. 96.
Gunn, S. R. and Kandola, J. S. (2002). Structural modelling with sparse kernels. Machine Learning.

Hastie, T. J. and Tibshirani, R. (1990). Generalized Additive Models. Chapman & Hall, London.

Kim, Y. (2004). Gradient projection method for constrained estimation. Unpublished manuscript (available at gary).

Knight, K. and Fu, W. J. (2000). Asymptotics for lasso-type estimators. Ann. Statist. 28.

Krishnapuram, B., Carin, L., Figueiredo, M. A. T. and Hartemink, A. (2004). Learning sparse classifiers: multi-class formulation, fast algorithms and generalization bounds. Technical report, ISDS, Duke University.

Lee, Y., Kim, Y., Lee, S. and Koo, J.-Y. (2004). Structured multicategory SVM with ANOVA decomposition. Technical Report 743, Department of Statistics, The Ohio State University.

Lin, Y. and Zhang, H. (2003). Component selection and smoothing in smoothing spline analysis of variance models. Technical Report 1072, Department of Statistics, University of Wisconsin-Madison.

Roth, V. (2004). The generalized lasso. IEEE Trans. Neural Networks 15.

Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B 58.

Tibshirani, R. (1997). The lasso method for variable selection in the Cox model. Statistics in Medicine 16.

Tibshirani, R., Rosset, S., Zhu, J. and Knight, K. (2005). Sparsity and smoothness via the fused lasso. J. Roy. Statist. Soc. Ser. B 67.

Zhang, H., Wahba, G., Lin, Y., Voelker, M., Ferris, M., Klein, R. and Klein, B. (2003). Variable selection and model building via likelihood basis pursuit. Technical Report 1059r, Department of Statistics, University of Wisconsin, Madison, WI.

Yuan, M. and Lin, Y. (2004). Model selection and estimation in regression with grouped variables. Technical Report 1095, Department of Statistics, University of Wisconsin, Madison, WI.

Zou, H. and Hastie, T. J. (2004). Regularization and variable selection via the elastic net. Technical report, Department of Statistics, Stanford University.
Statistical Research Center for Complex Systems, Seoul National University, Korea

Statistical Research Center for Complex Systems, Seoul National University, Korea

Department of Statistics, Seoul National University, Korea
