Package 'trimtrees'                                                  February 20, 2015




Type          Package
Title         Trimmed Opinion Pools of Trees in a Random Forest
Version       1.2
Date          2014-08-01
Depends       R (>= 2.5.0), stats, randomForest, mlbench
Author        Yael Grushka-Cockayne, Victor Richmond R. Jose, Kenneth C. Lichtendahl Jr.
              and Huanghui Zeng, based on the source code from the randomForest package
              by Andy Liaw and Matthew Wiener and on the original Fortran code by Leo
              Breiman and Adele Cutler.
Maintainer    Yael Grushka-Cockayne <grushkay@darden.virginia.edu>
Description   Creates point and probability forecasts from the trees in a random forest
              using a trimmed opinion pool.
Suggests      MASS
License       GPL (>= 2)
NeedsCompilation  yes
Repository    CRAN
Date/Publication  2014-08-14 07:11:17

R topics documented:
  cinbag
  hitrate
  trimtrees
  Index
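The package is distributed on CRAN, so a minimal way to obtain and attach it (which also
attaches the randomForest and mlbench dependencies used in the examples below) is simply:

     install.packages("trimtrees")
     library(trimtrees)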

cinbag                 Modified Classification and Regression with Random Forest

Description

     cinbag implements a modified random forest algorithm (based on the source code from
     the randomForest package by Andy Liaw and Matthew Wiener and on the original Fortran
     code by Leo Breiman and Adele Cutler) to return the number of times a row appears in
     a tree's bag. cinbag returns a randomForest object, e.g., rfobj, with one additional
     output: a matrix of in-bag counts, with one row per training case and one column per
     tree. For instance, rfobj$inbagCount is similar to rfobj$inbag, but with in-bag
     counts instead of in-bag indicators.

Usage

     cinbag(x, y=NULL, xtest=NULL, ytest=NULL, ntree=500,
            mtry=if (!is.null(y) && !is.factor(y))
                 max(floor(ncol(x)/3), 1) else floor(sqrt(ncol(x))),
            replace=TRUE, classwt=NULL, cutoff, strata,
            sampsize = if (replace) nrow(x) else ceiling(.632*nrow(x)),
            nodesize = if (!is.null(y) && !is.factor(y)) 5 else 1,
            maxnodes = NULL, importance=FALSE, localImp=FALSE, nPerm=1,
            proximity, oob.prox=proximity, norm.votes=TRUE, do.trace=FALSE,
            keep.forest=!is.null(y) && is.null(xtest), corr.bias=FALSE,
            keep.inbag=FALSE, ...)

Arguments

     x            a data frame or a matrix of predictors, or a formula describing the
                  model to be fitted (for the print method, a randomForest object).
     y            a response vector. If a factor, classification is assumed; otherwise
                  regression is assumed. If omitted, randomForest will run in unsupervised
                  mode.
     xtest        a data frame or matrix (like x) containing predictors for the test set.
     ytest        response for the test set.
     ntree        number of trees to grow. This should not be set to too small a number,
                  to ensure that every input row gets predicted at least a few times.
     mtry         number of variables randomly sampled as candidates at each split. Note
                  that the default values are different for classification (sqrt(p), where
                  p is the number of variables in x) and regression (p/3).
     replace      should sampling of cases be done with or without replacement?
     classwt      priors of the classes. Need not add up to one. Ignored for regression.
     cutoff       (classification only) a vector of length equal to the number of classes.
                  The winning class for an observation is the one with the maximum ratio
                  of proportion of votes to cutoff. Default is 1/k, where k is the number
                  of classes (i.e., majority vote wins).

     strata       a (factor) variable that is used for stratified sampling.
     sampsize     size(s) of sample to draw. For classification, if sampsize is a vector
                  of length the number of strata, then sampling is stratified by strata,
                  and the elements of sampsize indicate the numbers to be drawn from the
                  strata.
     nodesize     minimum size of terminal nodes. Setting this number larger causes
                  smaller trees to be grown (and thus take less time). Note that the
                  default values are different for classification (1) and regression (5).
     maxnodes     maximum number of terminal nodes trees in the forest can have. If not
                  given, trees are grown to the maximum possible (subject to limits by
                  nodesize). If set larger than maximum possible, a warning is issued.
     importance   should importance of predictors be assessed?
     localImp     should casewise importance measure be computed? (Setting this to TRUE
                  will override importance.)
     nPerm        number of times the OOB data are permuted per tree for assessing
                  variable importance. A number larger than 1 gives a slightly more stable
                  estimate, but is not very effective. Currently only implemented for
                  regression.
     proximity    should proximity measure among the rows be calculated?
     oob.prox     should proximity be calculated only on out-of-bag data?
     norm.votes   if TRUE (default), the final results of votes are expressed as
                  fractions. If FALSE, raw vote counts are returned (useful for combining
                  results from different runs). Ignored for regression.
     do.trace     if set to TRUE, give a more verbose output as randomForest is run. If
                  set to some integer, then running output is printed for every do.trace
                  trees.
     keep.forest  if set to FALSE, the forest will not be retained in the output object.
                  If xtest is given, defaults to FALSE.
     corr.bias    perform bias correction for regression? Note: Experimental. Use at your
                  own risk.
     keep.inbag   should an n by ntree matrix be returned that keeps track of which
                  samples are in-bag in which trees (but not how many times, if sampling
                  with replacement)?
     ...          optional parameters to be passed to the low-level function
                  cinbag.default.

Value

     An object of class randomForest, which is a list with the following components:

     call         the original call to randomForest.
     type         one of regression, classification, or unsupervised.
     predicted    the predicted values of the input data based on out-of-bag samples.
     importance   a matrix with nclass + 2 (for classification) or two (for regression)
                  columns. For classification, the first nclass columns are the
                  class-specific measures computed as mean decrease in accuracy. The
                  nclass + 1st column is the mean decrease in accuracy over all classes.
                  The last column is the mean decrease in Gini index. For regression, the
                  first column is the mean decrease in accuracy and the second the mean
                  decrease in MSE. If importance=FALSE, the last measure is still returned
                  as a vector.

     importanceSD the standard errors of the permutation-based importance measure. For
                  classification, a p by nclass + 1 matrix corresponding to the first
                  nclass + 1 columns of the importance matrix. For regression, a length p
                  vector.
     localImp     a p by n matrix containing the casewise importance measures, the [i,j]
                  element of which is the importance of the i-th variable on the j-th
                  case. NULL if localImp=FALSE.
     ntree        number of trees grown.
     mtry         number of predictors sampled for splitting at each node.
     forest       a list that contains the entire forest; NULL if randomForest is run in
                  unsupervised mode or if keep.forest=FALSE.
     err.rate     (classification only) vector of error rates of the prediction on the
                  input data, the i-th element being the (OOB) error rate for all trees up
                  to the i-th.
     confusion    (classification only) the confusion matrix of the prediction (based on
                  OOB data).
     votes        (classification only) a matrix with one row for each input data point
                  and one column for each class, giving the fraction or number of (OOB)
                  votes from the random forest.
     oob.times    number of times cases are out-of-bag (and thus used in computing the OOB
                  error estimate).
     proximity    if proximity=TRUE when randomForest is called, a matrix of proximity
                  measures among the input (based on the frequency that pairs of data
                  points are in the same terminal nodes).
     mse          (regression only) vector of mean squared errors: sum of squared
                  residuals divided by n.
     rsq          (regression only) "pseudo R-squared": 1 - mse / Var(y).
     test         if a test set is given (through the xtest or additionally ytest
                  arguments), this component is a list which contains the corresponding
                  predicted, err.rate, confusion, votes (for classification) or predicted,
                  mse and rsq (for regression) for the test set. If proximity=TRUE, there
                  is also a component, proximity, which contains the proximity among the
                  test set as well as proximity between test and training data.
     inbag        an indicator (1 or 0) for each training set row and each tree. The
                  indicator is 1 if the training set row is in the tree's bag and is 0
                  otherwise. Note that this value is not listed in the original
                  randomForest function's output, although it is implemented.
     inbagCount   a count for each training set row and each tree. The count is the number
                  of times the training set row is in the tree's bag. This output is not
                  available in the original randomForest package. The purpose of the
                  cinbag function is to augment the randomForest function so that it
                  returns in-bag counts. These counts are necessary for computing and
                  ensembling the trees' empirical cumulative distribution functions.

Note

     cinbag's source files call the C functions classrfmod.c and regrfmod.c, which are
     slightly modified versions of the randomForest source files classrf.c and regrf.c,
     respectively.
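     To illustrate that purpose, the following sketch (not part of the package; it
     assumes the fitted object rf and the training responses ytrain created in the
     Examples section below) shows how one tree's in-bag counts can be turned into a
     weighted empirical CDF over the forest's support points. The package additionally
     restricts each tree's values to the test row's terminal node before trimming and
     averaging such per-tree CDFs.

     # a minimal sketch: one tree's weighted empirical CDF from its in-bag counts
     support  <- sort(unique(ytrain))               # support points of the forest
     w        <- rf$inbagCount[, 1]                 # tree 1's in-bag count for each training row
     counts   <- as.vector(tapply(w, ytrain, sum))  # groups follow sort(unique(ytrain)), i.e. support
     tree_cdf <- cumsum(counts) / sum(counts)       # tree 1's empirical CDF evaluated at support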

Author(s)

     Yael Grushka-Cockayne, Victor Richmond R. Jose, Kenneth C. Lichtendahl Jr. and
     Huanghui Zeng, based on the source code from the randomForest package by Andy Liaw
     and Matthew Wiener and on the original Fortran code by Leo Breiman and Adele Cutler.

References

     Breiman L (2001). Random forests. Machine Learning 45, 5-32.

     Breiman L (2002). Manual on setting up, using, and understanding random forests
     V3.1. http://oz.berkeley.edu/users/breiman/using_random_forests_v3.1.pdf.

See Also

     trimtrees, hitrate

Examples

     # Load the data
     data <- as.data.frame(mlbench.friedman1(500, sd=1))
     summary(data)

     # Prepare data for trimming
     train <- data[1:400, ]
     test <- data[401:500, ]
     xtrain <- train[,-11]
     ytrain <- train[,11]
     xtest <- test[,-11]
     ytest <- test[,11]

     # Run cinbag
     rf <- cinbag(xtrain, ytrain, ntree=500, nodesize=5, mtry=3, keep.inbag=TRUE)
     rf$inbag[,1]        # First tree's in-bag indicators
     rf$inbagCount[,1]   # First tree's in-bag counts


hitrate                Empirical Hit Rates for a Crowd of Forecasters

Description

     This function calculates the empirical hit rates for a crowd of forecasters over a
     testing set. The function takes as its arguments the forecasters' probability
     integral transform (PIT) values (one for each testing set row) and the prediction
     interval of interest.

Usage

     hitrate(matrixPIT, interval = c(0.25, 0.75))

Arguments

     matrixPIT    an ntest-by-nforecaster matrix of PIT values, where ntest is the number
                  of rows in the testing set and nforecaster is the number of forecasters.
                  Each column represents a different forecaster's PITs for the testing
                  set. A PIT value is the forecaster's cdf evaluated at the realization of
                  the response in the testing set.
     interval     prediction interval of interest. The default interval=c(0.25, 0.75) is
                  the central 50% prediction interval.

Value

     HR           an nforecaster vector of empirical hit rates, one for each forecaster. A
                  forecaster's empirical hit rate is the percentage of PIT values that
                  fall within [interval[1], interval[2]], e.g., [0.25, 0.75] according to
                  the default.

Author(s)

     Yael Grushka-Cockayne, Victor Richmond R. Jose, Kenneth C. Lichtendahl Jr., and
     Huanghui Zeng.

References

     Grushka-Cockayne Y, Jose VRR, Lichtendahl KC Jr. (2014). Ensembles of overfit and
     overconfident forecasts, working paper.

See Also

     trimtrees, cinbag

Examples

     # Load the data
     data <- as.data.frame(mlbench.friedman1(500, sd=1))
     summary(data)

     # Prepare data for trimming
     train <- data[1:400, ]
     test <- data[401:500, ]
     xtrain <- train[,-11]
     ytrain <- train[,11]
     xtest <- test[,-11]
     ytest <- test[,11]

     # Run trimtrees
     tt <- trimtrees(xtrain, ytrain, xtest, ytest, trim=0.15)

     # Outputs from trimtrees
     mean(hitrate(tt$treePITs))
     hitrate(tt$trimmedEnsemblePITs)
     hitrate(tt$untrimmedEnsemblePITs)
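     The hit rate is a simple proportion, so it can be checked by hand. The sketch below
     is an illustration rather than the package's implementation; matrixPIT stands for
     any ntest-by-nforecaster PIT matrix, such as tt$treePITs above.

     # column-wise share of PIT values falling inside the prediction interval
     manual_hitrate <- function(matrixPIT, interval = c(0.25, 0.75)) {
       apply(as.matrix(matrixPIT), 2, function(p) mean(p >= interval[1] & p <= interval[2]))
     }

     For a well-calibrated forecaster the PIT values are roughly uniform on [0, 1], so the
     hit rate should be close to the interval's width (0.5 for the default central 50%
     interval).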

trimtrees              Trimmed Opinion Pools of Trees in a Random Forest

Description

     This function creates point and probability forecasts from the trees in a random
     forest using Jose et al.'s trimmed opinion pool, a trimmed average of the trees'
     empirical cumulative distribution functions (cdfs). For tuning purposes, the user
     can input the trimming level used in this trimmed average and then compare the
     scores of the trimmed and untrimmed opinion pools, or ensembles.

Usage

     trimtrees(xtrain, ytrain, xtest, ytest=NULL, ntree = 500,
               mtry = if (!is.null(ytrain) && !is.factor(ytrain))
                      max(floor(ncol(xtrain)/3), 1) else floor(sqrt(ncol(xtrain))),
               nodesize = if (!is.null(ytrain) && !is.factor(ytrain)) 5 else 1,
               trim = 0, trimIsExterior = TRUE, uQuantiles = seq(0.05, 0.95, 0.05),
               methodIsCDF = TRUE)

Arguments

     xtrain       a data frame or a matrix of predictors for the training set.
     ytrain       a response vector for the training set. If a factor, classification is
                  assumed; otherwise regression is assumed.
     xtest        a data frame or a matrix of predictors for the testing set.
     ytest        a response vector for the testing set. If no testing set response is
                  passed, probability integral transform (PIT) values and scores will be
                  returned as NAs.
     ntree        number of trees to grow.
     mtry         number of variables randomly sampled as candidates at each split.
     nodesize     minimum size of terminal nodes.
     trim         the trimming level used in the trimmed average of the trees' empirical
                  cdfs. For the cdf approach, the trimming level is the fraction of cdf
                  values to be trimmed from each end of the ordered vector of cdf values
                  (for each support point) before the average is computed. For the moment
                  approach, the trees' means are computed, ordered, and trimmed; the
                  trimmed opinion pool using the moment approach is an average of the
                  remaining trees.
     trimIsExterior
                  if TRUE, the trimming is done exteriorly, i.e., from the ends of the
                  ordered vector. If FALSE, the trimming is done interiorly, i.e., from
                  the middle of the ordered vector.
     uQuantiles   a vector of probabilities in strictly increasing order and between 0 and
                  1. For instance, if uQuantiles=c(0.25, 0.75), then the 0.25-quantile and
                  the 0.75-quantile of the trimmed and untrimmed ensembles are scored.
     methodIsCDF  if TRUE, the trimmed opinion pool is formed according to the cdf
                  approach in Jose et al. (2014). If FALSE, the moment approach is used.
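     To make the cdf approach concrete, the following sketch (an illustration under the
     assumptions stated in its comments, not the package's internal code) shows exterior
     trimming of the trees' cdf values at a single support point; applying it to every
     support point yields the trimmed ensemble's cdf.

     # cdf_values: the ntree cdf values at one support point; trim: fraction cut from each end
     trim_cdf_values <- function(cdf_values, trim = 0.15) {
       k <- floor(trim * length(cdf_values))       # number of trees dropped from each end
       v <- sort(cdf_values)
       if (k > 0) v <- v[(k + 1):(length(v) - k)]  # exterior trimming: drop smallest and largest
       mean(v)                                     # trimmed ensemble's cdf value at this point
     }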

Value

     An object of class trimtrees, which is a list with the following components:

     forestSupport
                  possible points of support for the trees and ensembles.
     treeValues   for the last testing set row, each tree's ytrain values (not necessarily
                  unique) that are both in-bag and in xtest's terminal node. This
                  component is an ntrain-by-ntree matrix, where ntrain is the number of
                  rows in the training set.
     treeCounts   for the last testing set row, each tree's counts of treeValues, listed
                  by their unique values. This component is an nsupport-by-ntree matrix,
                  where nsupport is the number of unique ytrain values, or support points
                  of the forest.
     treeCumCounts
                  cumulative tally of treeCounts, of dimension nsupport+1-by-ntree.
     treeCDFs     each tree's empirical cdf based on treeCumCounts, for the last testing
                  set row only. This component is an nsupport+1-by-ntree matrix. Note that
                  the first row of this matrix is all zeros.
     treePMFs     each tree's empirical probability mass function (pmf) for the last
                  testing set row. This component is an nsupport-by-ntree matrix.
     treeMeans    for each testing set row, each tree's mean according to its empirical
                  pmf. This component is an ntest-by-ntree matrix, where ntest is the
                  number of rows in the testing set.
     treeVars     for each testing set row, each tree's variance according to its
                  empirical pmf. This component is an ntest-by-ntree matrix.
     treePITs     for each testing set row, each tree's probability integral transform
                  (PIT), the empirical cdf evaluated at the realized ytest value. This
                  component is an ntest-by-ntree matrix. If ytest is NULL, NAs are
                  returned.
     treeQuantiles
                  for the last testing set row, each tree's quantiles, one for each
                  element in uQuantiles. This component is an ntree-by-nquantile matrix,
                  where nquantile is the number of elements in uQuantiles.
     treeFirstPMFValues
                  for each testing set row, the pmf value on the minimum (or first)
                  support point in the forest. For binary classification, this corresponds
                  to the probability that the minimum (or first) support point will occur.
                  This component's dimension is ntest-by-ntree. It is useful for
                  generating calibration curves (stated probabilities in bins vs. their
                  observed frequencies) for binary classification.
     bracketingRate
                  for each testing set row, the bracketing rate from Larrick et al.
                  (2012), computed as 2*p*(1-p), where p is the fraction of trees' means
                  above the ytest value (a hand computation is sketched after the Examples
                  section below). If ytest is NULL, NAs are returned.
     bracketingRateAllPairs
                  the average bracketing rate across all testing set rows, for each pair
                  of trees. This component is a symmetric ntree-by-ntree matrix. If ytest
                  is NULL, NAs are returned.

     trimmedEnsembleCDFs
                  for each testing set row, the trimmed ensemble's forecast of ytest in
                  the form of a cdf. This component is an ntest-by-nsupport+1 matrix,
                  where nsupport is the number of unique ytrain values, or support points
                  of the forest.
     trimmedEnsemblePMFs
                  for each testing set row, the trimmed ensemble's pmf. This component is
                  an ntest-by-nsupport matrix.
     trimmedEnsembleMeans
                  for each testing set row, the trimmed ensemble's mean. This component is
                  an ntest vector.
     trimmedEnsembleVars
                  for each testing set row, the trimmed ensemble's variance.
     trimmedEnsemblePITs
                  for each testing set row, the trimmed ensemble's probability integral
                  transform (PIT), the empirical cdf evaluated at the realized ytest
                  value. If ytest is NULL, NAs are returned.
     trimmedEnsembleQuantiles
                  for the last testing set row, the trimmed ensemble's quantiles, one for
                  each element in uQuantiles.
     trimmedEnsembleComponentScores
                  for the last testing set row, the components of the trimmed ensemble's
                  linear and log quantile scores. If ytest is NULL, NAs are returned.
     trimmedEnsembleScores
                  for each testing set row, the trimmed ensemble's linear and log quantile
                  scores, ranked probability score, and two-moment score. See Jose and
                  Winkler (2009) for a description of the linear and log quantile scores.
                  See Gneiting and Raftery (2007) for a description of the ranked
                  probability score. The two-moment score is the score in Equation 27 of
                  Gneiting and Raftery (2007). If ytest is NULL, NAs are returned.
     untrimmedEnsembleCDFs
                  for each testing set row, the linear opinion pool's, or untrimmed
                  ensemble's, forecast of ytest in the form of a cdf.
     untrimmedEnsemblePMFs
                  for each testing set row, the untrimmed ensemble's pmf.
     untrimmedEnsembleMeans
                  for each testing set row, the untrimmed ensemble's mean.
     untrimmedEnsembleVars
                  for each testing set row, the untrimmed ensemble's variance.
     untrimmedEnsemblePITs
                  for each testing set row, the untrimmed ensemble's probability integral
                  transform (PIT), the empirical cdf evaluated at the realized ytest
                  value. If ytest is NULL, NAs are returned.
     untrimmedEnsembleQuantiles
                  for the last testing set row, the untrimmed ensemble's quantiles, one
                  for each element in uQuantiles.
     untrimmedEnsembleComponentScores
                  for the last testing set row, the components of the untrimmed ensemble's
                  linear and log quantile scores. If ytest is NULL, NAs are returned.

     untrimmedEnsembleScores
                  for each testing set row, the untrimmed ensemble's linear and log
                  quantile scores, ranked probability score, and two-moment score. If
                  ytest is NULL, NAs are returned.

Author(s)

     Yael Grushka-Cockayne, Victor Richmond R. Jose, Kenneth C. Lichtendahl Jr., and
     Huanghui Zeng.

References

     Gneiting T, Raftery AE (2007). Strictly proper scoring rules, prediction, and
     estimation. Journal of the American Statistical Association 102, 359-378.

     Jose VRR, Grushka-Cockayne Y, Lichtendahl KC Jr. (2014). Trimmed opinion pools and
     the crowd's calibration problem. Management Science 60, 463-475.

     Jose VRR, Winkler RL (2009). Evaluating quantile assessments. Operations Research
     57, 1287-1297.

     Grushka-Cockayne Y, Jose VRR, Lichtendahl KC Jr. (2014). Ensembles of overfit and
     overconfident forecasts, working paper.

     Larrick RP, Mannes AE, Soll JB (2011). The social psychology of the wisdom of
     crowds. In J.I. Krueger, ed., Frontiers in Social Psychology: Social Judgment and
     Decision Making. New York: Psychology Press, 227-242.

See Also

     hitrate, cinbag

Examples

     # Load the data
     data <- as.data.frame(mlbench.friedman1(500, sd=1))
     summary(data)

     # Prepare data for trimming
     train <- data[1:400, ]
     test <- data[401:500, ]
     xtrain <- train[,-11]
     ytrain <- train[,11]
     xtest <- test[,-11]
     ytest <- test[,11]

     # Option 1. Run trimtrees with responses in the testing set.
     tt1 <- trimtrees(xtrain, ytrain, xtest, ytest, trim=0.15)

     # Some outputs from trimtrees: scores, hit rates, PIT densities.
     colMeans(tt1$trimmedEnsembleScores)
     colMeans(tt1$untrimmedEnsembleScores)

     mean(hitrate(tt1$treePITs))
     hitrate(tt1$trimmedEnsemblePITs)
     hitrate(tt1$untrimmedEnsemblePITs)
     hist(tt1$trimmedEnsemblePITs, prob=TRUE)
     hist(tt1$untrimmedEnsemblePITs, prob=TRUE)

     # Option 2. Run trimtrees without responses in the testing set.
     # In this case, scores, PITs, and hit rates will not be available.
     tt2 <- trimtrees(xtrain, ytrain, xtest, trim=0.15)

     # Some outputs from trimtrees: cdfs for the last test value.
     plot(tt2$trimmedEnsembleCDFs[100,], type="l", col="red", ylab="CDF", xlab="y")
     lines(tt2$untrimmedEnsembleCDFs[100,])
     legend(275, 0.2, c("Trimmed", "Untrimmed"), col=c("red", "black"), lty=c(1, 1))
     title("CDFs of Trimmed and Untrimmed Ensembles")

     # Compare the CDF and moment approaches to trimming the trees.
     ttCDF <- trimtrees(xtrain, ytrain, xtest, trim=0.15, methodIsCDF=TRUE)
     ttMA  <- trimtrees(xtrain, ytrain, xtest, trim=0.15, methodIsCDF=FALSE)
     plot(ttCDF$trimmedEnsembleCDFs[100,], type="l", col="red", ylab="CDF", xlab="y")
     lines(ttMA$trimmedEnsembleCDFs[100,])
     legend(275, 0.2, c("CDF Approach", "Moment Approach"), col=c("red", "black"), lty=c(1, 1))
     title("CDFs of Trimmed Ensembles")
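     As referenced in the Value section above, the bracketingRate component follows the
     simple formula 2*p*(1-p). The sketch below computes it by hand from the treeMeans
     component; it is an illustration, not the package's code, and assumes the tt1 object
     and ytest from Option 1 above.

     # bracketing rate per testing set row: 2*p*(1-p), with p the share of tree means above ytest
     p  <- rowMeans(tt1$treeMeans > ytest)   # fraction of trees whose mean exceeds the realized value
     br <- 2 * p * (1 - p)                   # should track tt1$bracketingRate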

Index

     Topic classif:        cinbag, hitrate, trimtrees
     Topic randomForest:   cinbag, hitrate, trimtrees
     Topic regression:     cinbag, hitrate, trimtrees
     Topic tree:           cinbag, hitrate, trimtrees

     cinbag
     hitrate
     trimtrees